
2010 | Book

Handbook of Geomathematics

Edited by: Willi Freeden, M. Zuhair Nashed, Thomas Sonar

Publisher: Springer Berlin Heidelberg


About this book

During the last three decades, the geosciences and geo-engineering have been shaped by two essential developments: First, technological progress has completely changed the observational and measurement techniques; modern high-speed computers and satellite-based techniques are entering all geodisciplines more and more. Second, there is growing public concern about the future of our planet, its climate and environment, and about an expected shortage of natural resources. Both aspects, namely efficient strategies of protection against the threats of a changing Earth and the exceptional situation of obtaining terrestrial, airborne, as well as spaceborne data of ever better quality, explain the strong need for new mathematical structures, tools, and methods. Mathematics concerned with geoscientific problems, i.e., geomathematics, is becoming increasingly important.

The ‘Handbook of Geomathematics’, as a central reference work in this area, comprises the following scientific fields: (I) observational and measurement key technologies; (II) modeling of the system Earth (geosphere, cryosphere, hydrosphere, atmosphere, biosphere); (III) analytic, algebraic, and operator-theoretic methods; (IV) statistical and stochastic methods; (V) computational and numerical analysis methods; (VI) historical background and future perspectives.

Table of Contents

Frontmatter

General Issues, Historical Background, and Future Perspectives

Frontmatter
1. Geomathematics: Its Role, Its Aim, and Its Potential

During the last decades, geosciences and geoengineering have been shaped by two essential developments: First, technological progress has completely changed the observational and measurement techniques; modern high-speed computers and satellite-based techniques are entering all geodisciplines more and more. Second, there is growing public concern about the future of our planet, its climate and environment, and about an expected shortage of natural resources. Both aspects, namely efficient strategies of protection against the threats of a changing Earth and the exceptional situation of obtaining terrestrial, airborne, as well as spaceborne data of ever better quality, explain the strong need for new mathematical structures, tools, and methods, i.e., geomathematics. This chapter deals with geomathematics, its role, its aim, and its potential. Moreover, the “circuit” of geomathematics is exemplified by two classical problems involving the Earth’s gravity field, namely gravity field determination from terrestrial deflections of the vertical and ocean flow modeling from satellite (altimeter-measured) ocean topography.

Willi Freeden
2. Navigation on Sea: Topics in the History of Geomathematics

In this essay we review the development of the magnet as a means for navigational purposes. Around 1600, knowledge of the properties and behavior of magnetic needles began to grow in England, mainly through the publication of William Gilbert’s influential book De Magnete. Inspired by the rapid advancement of knowledge on one side and of the English fleet on the other, scientists associated with Gresham College began thinking of using magnetic instruments to measure the degree of latitude without being dependent on a clear sky, a quiet sea, or complicated navigational tables. The construction and actual use of these magnetic instruments, called dip rings, is a tragic episode in the history of seafaring, since the latitude does not in fact depend on the magnetic field of the Earth; but the construction of a table enabling seafarers to read off the degree of latitude is certainly a highlight in the history of geomathematics.

Thomas Sonar

Observational and Measurement Key Technologies

Frontmatter
3. Earth Observation Satellite Missions and Data Access

This chapter provides an overview of Earth Observation (EO) satellites, describing the end-to-end elements of an EO mission and then focusing on the European EO programmes. Some significant results obtained using data from European missions (ERS, Envisat) are provided. Finally, access to EO data through the European Space Agency (ESA), mostly free of charge, is described.

Henri Laur, Volker Liebig
4. GOCE: Gravitational Gradiometry in a Satellite

In spring 2009 the satellite Gravity field and steady-state Ocean Circulation Explorer (GOCE), equipped with a gravitational gradiometer, was launched by the European Space Agency (ESA). Its purpose is the detailed determination of the spatial variations of the Earth’s gravitational field, with applications in oceanography, geophysics, geodesy, glaciology, and climatology. Gravitational gradients are derived from the differences between the measurements of an ensemble of three orthogonal pairs of accelerometers located around the center of mass of the spacecraft. Gravitational gradiometry is complemented by gravity analysis from orbit perturbations, the orbits being derived from uninterrupted three-dimensional GPS tracking of GOCE. The gravitational tensor consists of the nine second derivatives of the Earth’s gravitational potential. These nine components can also be interpreted in terms of the local curvature of the field or in terms of components of the tidal field generated by the Earth inside the spacecraft. Four of the nine components are measured with high precision ($$10^{-11}\,\mathrm{s}^{-2}$$ per square root of Hz), the others are less precise. Several strategies exist for the determination of the gravity field at the Earth’s surface from the measured tensor components at altitude. The analysis can be based on one, several, or all components; the measurements may be regarded as time series or as a spatial data set; recovery may take place in terms of spherical harmonics or other types of global or local base functions. After calibration, GOCE entered its first measurement phase in fall 2009. First results are expected to become available in summer 2010.
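As an illustrative aside (generic notation, not taken from the chapter): the measured quantity is the Hesse tensor of the gravitational potential V, which outside the attracting masses is symmetric and trace free, so that only five of its nine components are independent,

$$ \mathbf{V} = \nabla\otimes\nabla V = \Big(\frac{\partial^{2} V}{\partial x_{i}\,\partial x_{j}}\Big)_{i,j=1,2,3}, \qquad V_{ij}=V_{ji}, \qquad V_{xx}+V_{yy}+V_{zz}=\Delta V=0. $$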

Reiner Rummel
5. Sources of the Geomagnetic Field and the Modern Data That Enable Their Investigation

The geomagnetic field one can measure at the Earth’s surface or on board satellites is the sum of contributions from many different sources. These sources have different physical origins and can be found both below (in the form of electrical currents and magnetized material) and above (only in the form of electrical currents) the Earth’s surface. Each source happens to produce a contribution with rather specific spatiotemporal properties. This fortunate situation is what makes the identification and investigation of the contribution of each source possible, provided appropriate observational data sets are available and analyzed in an adequate way, to produce the so-called geomagnetic field models. Here a general overview of the various sources that contribute to the observed geomagnetic field, and of the modern data that enable their investigation via such procedures is provided.

Nils Olsen, Gauthier Hulot, Terence J. Sabaka

Modeling of the System Earth (Geosphere, Cryosphere, Hydrosphere, Atmosphere, Biosphere, Anthroposphere)

Frontmatter
6. Classical Physical Geodesy

Geodesy can be defined as the science of the figure of the Earth and its gravitational field, as well as their determination. Even though today the figure of the Earth, understood as the visible Earth’s surface, can be determined purely geometrically by satellites, using the global positioning system (GPS) for the continents and satellite altimetry for the oceans, it would be pretty useless without gravity. One could not even stand upright or walk without being “told” by gravity where the upright direction is. So as soon as one likes to work with the Earth’s surface, one does need the gravitational field. (Not to speak of the fact that, without this gravitational field, no satellites could orbit around the Earth.) To be different from the existing textbooks, a working knowledge of professional mathematics can be taken for granted. In some areas where professors of geodesy are hesitant to enter too deeply, afraid of losing their students, some fundamental problems can be studied. Of course, there is a brief introduction to terrestrial gravitation as treated in the first few chapters of every textbook of geodesy, such as gravitation and gravity (gravitation plus the centrifugal force of the Earth’s rotation), the geoid, and heights above the ellipsoid (now determined directly by GPS) and above sea level (a surprisingly difficult problem!). But then, as accuracies rise from $$10^{-6}$$ in 1960 (about 10 m globally) to $$10^{-8}$$ to $$10^{-9}$$ (a few centimeters globally), one has to rethink the fundamentals and make use of the new powerful measuring devices, not to forget the computers that are able to handle all this stuff. At the new accuracies, Newtonian mechanics is no longer sufficient; Einstein’s general relativity is needed. Fortunately these “relativistic corrections” are small, and Newtonian mechanics and Euclidean geometry still provide a classical basis to which these corrections can be applied. Einstein’s relativity has put into focus an old ingenious technique of measuring the gravity field, gradiometry, which was invented around 1890 by Roland Eötvös. His torsion balance measured second-order gradients of the gravitational potential, rather than the three first-order gradients, which form the gravity vector (with the centrifugal forces included). If one wanted to measure gravity in a satellite, one would get zero, because the centrifugal force exactly balances gravitation (this is the essence of weightlessness, already recognized by Jules Verne). So one has to go one step further and measure the second-order gradients, which leads to satellite gradiometry. The newest dedicated satellite mission, launched in 2009, is GOCE, and it is a long way of some 120 years from Eötvös to GOCE. On the way one has Einstein and then, in 1960, Synge, who showed that Eötvös’ gradients are nothing else but components of the mysterious Riemann curvature tensor so prominent in general relativity. Since 1957, of course, this has been done in artificial satellites, after not-so-spectacular results in terrestrial and aerial gradiometry. Since the Earth’s rotation was, and still is, a fundamental measure of time, and the Earth is not rotating uniformly due to tidal effects, it is not surprising that geodesists became involved in precise time measurements. Time, however, is also affected by gravity according to Einstein. Immediately after the Sputnik of 1957, satellites were used to measure the global features of the external geopotential and to bring it down to the Earth by “downward continuation,” i.e., analytical continuation of the harmonic potential, at least to the Earth’s surface, but better still, down to the geoid, to sea level. The old problem of geodesists, to “reduce” their data to sea level, is not exactly solvable because the density of the masses above the geoid is not known to sufficient accuracy. If it were, then one could apply the classical boundary-value problem formulas of Stokes (1849) and Neumann (1887) (the latter is particularly appropriate in the GPS era). In 1945, the Russian geodesist and geophysicist M.S. Molodensky devised a highly ingenious and absolutely novel approach to overcome this problem. His idea was to forget about the geoid and to determine the Earth’s surface directly. Only, the boundary-value problem becomes much more difficult! In the language of modern mathematics it is a “hard” problem of nonlinear functional analysis. Existence and uniqueness were first shown, on the basis of Krarup’s exact linearization, by the well-known mathematician Lars Hörmander in 1976, but presupposing a considerable amount of smoothing of the topography. However, Molodensky and several others found approximate solutions, which seem to be practically sufficient and do not require the rock density. One of the best solutions, found and rejected by Molodensky and rediscovered by several others, again uses analytical continuation! Still, one cannot get rid of the rock density altogether in a very practical engineering problem: tunnel surveying. Here one is inside the rock masses and GPS cannot be used. If these masses are disregarded, GPS and ISS (inertial survey systems) may have an unpleasant encounter at the ends of the tunnel. A well-known practical and theoretical tool is the use of series of spherical harmonics, both for satellite determination of the gravitational field and for the study of analytical continuation. Harmonic functions are a three-dimensional analogue of complex functions in the plane, for which a well-known approximation theorem by Runge guarantees, loosely speaking, analytical continuability to any desired accuracy, as pointed out by Krarup. This chapter contains a comprehensive review of this problem. Since relativistic effects and analytical continuation are not easily found in books on geodesy, they are treated relatively broadly here. A method of combining arbitrary data to determine the geopotential in three-dimensional space is least-squares collocation, developed by Krarup and others as an extension of least-squares gravity interpolation together with least-squares adjustment. As it is extensively used and well documented, only a brief account is given here. Finally, open current problems such as an adequate treatment of the ellipticity of the reference ellipsoid (already studied by Molodensky!), the nonrigidity of the Earth, and relevant inverse problems are pointed out.

Helmut Moritz
7. Spacetime Modeling of the Earth’s Gravity Field by Ellipsoidal Harmonics

All planetary bodies like the Earth rotate, causing a centrifugal effect; the result is an equilibrium figure of ellipsoidal type. A natural representation of the planetary bodies and their gravity fields therefore has to be in terms of ellipsoidal harmonics and ellipsoidal wavelets, an approximation of the gravity field which converges about three times faster than the “ruling the world” spherical harmonics and spherical wavelets (Freeden et al. 1998, 2004). Here, various effects are treated when considering the Earth to be “ellipsoidal”: Sections 2 and 3 start the chapter with the celebrated ellipsoidal Dirichlet and ellipsoidal Stokes (to first order) boundary-value problems. Section 4 is devoted to the definition and representation of the ellipsoidal vertical deflections in gravity space, extended in Sect. 5 to the representation in geometry space. The potential theory of horizontal and vertical components of the gravity field, namely in terms of ellipsoidal vector fields, is the target of Sect. 6. Section 7 concentrates on the reference potential of Somigliana–Pizzetti type and its tensor-valued derivatives. Section 8 illustrates an ellipsoidal harmonic gravity field for the Earth called SEGEN (Gravity Earth Model), a set-up in ellipsoidal harmonics up to degree/order 360/360. Five plates are shown for the West–East/North–South components of vertical-deflection type as well as gravity disturbances referring to the International Reference Ellipsoid 2000. The final topic starts with a review of the curvilinear datum problem referring to ellipsoidal harmonics. Such a datum transformation from one ellipsoidal representation to another, treated in Sect. 9, is a seven-parameter transformation of type (i) translation (three parameters), (ii) rotation (three parameters) by Cardan angles, and (iii) dilatation (one parameter), as an action of the seven-parameter conformal group in a three-dimensional Weitzenböck space W(3). The topic begins with an example, namely a datum transformation in terms of spherical harmonics in Sect. 10. The hard work begins with Sect. 11, which formulates the datum transformation in ellipsoidal coordinates/ellipsoidal harmonics. The highlight is Sect. 12 with a characteristic example in terms of ellipsoidal harmonics for an ellipsoid of revolution transformed to another one, for instance, polar motion or gravitation from one reference ellipsoid to another. Section 13 reviews various approximations given in the previous three sections.
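For orientation, the seven-parameter datum transformation mentioned above has, in Cartesian coordinates, the familiar conformal (Helmert-type) form; the notation below is generic and not the chapter’s:

$$ \mathbf{x}' = (1+s)\,\mathbf{R}(\alpha,\beta,\gamma)\,\mathbf{x} + \mathbf{t}, $$

with a translation vector t (three parameters), a rotation matrix R parameterized by three Cardan angles, and a dilatation factor 1+s (one parameter); the chapter’s contribution is to carry this transformation over to ellipsoidal coordinates and ellipsoidal harmonic coefficients.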

Erik W. Grafarend, Matthias Klapp, Zdeněk Martinec
8. Time-Variable Gravity Field and Global Deformation of the Earth
Jürgen Kusche
9. Satellite Gravity Gradiometry (SGG): From Scalar to Tensorial Solution

Satellite gravity gradiometry (SGG) is an ultra-sensitive technique for detecting the space gravitational gradient (i.e., the Hesse tensor of the Earth’s gravitational potential). In this note, SGG, understood as a spacewise inverse problem of satellite technology, is discussed under three mathematical aspects: First, SGG is considered from a potential-theoretic point of view as a continuous problem of “harmonic downward continuation.” The spaceborne gravity gradients are assumed to be known continuously over the “satellite (orbit) surface”; the purpose is to specify sufficient conditions under which uniqueness and existence can be guaranteed. In a spherical context, mathematical results are outlined by decomposing the Hesse matrix in terms of tensor spherical harmonics. Second, the potential-theoretic information leads to a reformulation of the SGG problem as an ill-posed pseudodifferential equation. Its solution is dealt with by classical regularization methods based on filtering techniques. Third, a very promising method is worked out for developing an immediate interrelation between the Earth’s gravitational potential at the Earth’s surface and the known gravitational tensor.
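A minimal illustration of why the problem is exponentially ill posed (generic notation, not the chapter’s): expanding the potential outside a sphere of radius R in spherical harmonics, the second radial derivative observed at satellite radius r > R is related to the surface coefficients by

$$ V(r,\xi)=\sum_{n,m}\Big(\frac{R}{r}\Big)^{n+1} v_{n,m}\,Y_{n,m}(\xi), \qquad \frac{\partial^{2} V}{\partial r^{2}}(r,\xi)=\sum_{n,m}\frac{(n+1)(n+2)}{r^{2}}\Big(\frac{R}{r}\Big)^{n+1} v_{n,m}\,Y_{n,m}(\xi), $$

so recovering the coefficients v_{n,m} from data at altitude amounts to multiplication by (r/R)^{n+1}, a factor growing exponentially with the degree n, which is why regularization is indispensable.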

Willi Freeden, Michael Schreiner
10. Gravitational Viscoelastodynamics

We consider a compositionally and entropically stratified, compressible, rotating fluid earth and study gravitational–viscoelastic perturbations of its hydrostatic initial state. Using the Lagrangian representation and assuming infinitesimal perturbations, we deduce the incremental field equations and interface conditions of gravitational viscoelastodynamics (GVED) governing the perturbations. In particular, we distinguish the material, material-local, and local forms of the incremental equations. We also demonstrate that their short-time asymptotes correspond to generalizations of the incremental field equations and interface conditions of gravitational elastodynamics (GED), whereas the long-time asymptotes agree with the incremental field equations and interface conditions of gravitational viscodynamics (GVD). The incremental thermodynamic pressure appearing in the long-time asymptote to the incremental constitutive equation is shown to satisfy the appropriate incremental state equation. Finally, we derive approximate field theories applying to gravitational–viscoelastic perturbations of isocompositional, isentropic and compressible or incompressible fluid domains.

Detlef Wolf
11. Multiresolution Analysis of Hydrology and Satellite Gravitational Data

We present a multiresolution analysis of temporal and spatial variations of the Earth’s gravitational potential by the use of tensor product wavelets which are built up by Legendre and spherical wavelets for the time and space domain, respectively. The multiresolution is performed for satellite and hydrological data, and based on these results we compute correlation coefficients between both data sets, which help us to develop a filter for the extraction of an improved hydrology model from the satellite data.

Helga Nutz, Kerstin Wolf
12. Time Varying Mean Sea Level

After a general theoretical consideration of basic mathematical aspects, numerical and physical details are addressed which relate to a specific epoch and to areas where sufficient and reliable data are available. The concept of a “mean sea level” is in itself rather artificial, because it is not possible to determine a single figure for mean sea level for the entire planet; it varies on much smaller scales. This is because the sea is in constant motion, affected by the high- and low-pressure zones above it, the tides, local gravitational differences, and so forth. What one can do is calculate the mean sea level at a given location and use it as a reference datum. Coastal and global sea level variability are analyzed between January 1993 and June 2008. The coastal variability is estimated from tide gauges, from altimeter data colocated to the tide gauges, and from altimeter data along the world’s coasts, while the global variability is estimated from altimeter data. Regionally there is good agreement between coastal and open-ocean sea level variability from altimeter data. The sea level trends are regionally dependent, with a positive average of 3.1 ± 0.4 mm/year. The variability of the trend of coastal sea level appears to be related to the interannual variability of the North Atlantic Oscillation and of the El Niño–Southern Oscillation climatic indices.
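A small self-contained Python sketch (synthetic data, not the tide-gauge/altimeter processing chain of the chapter) of how a linear sea-level trend with a formal error can be estimated by ordinary least squares:

    import numpy as np

    # Minimal sketch: estimate a linear sea-level trend in mm/year from a
    # monthly anomaly series by ordinary least squares (synthetic data).
    rng = np.random.default_rng(0)
    t_years = 1993 + np.arange(186) / 12.0                 # Jan 1993 .. Jun 2008
    true_trend = 3.1                                       # mm/year (synthetic)
    sla_mm = true_trend * (t_years - t_years[0]) + 5.0 * rng.standard_normal(t_years.size)

    A = np.column_stack([t_years - t_years.mean(), np.ones_like(t_years)])
    (trend, offset), *_ = np.linalg.lstsq(A, sla_mm, rcond=None)
    resid = sla_mm - A @ np.array([trend, offset])
    sigma_trend = np.sqrt(resid.var(ddof=2) / np.sum(A[:, 0] ** 2))
    print(f"estimated trend: {trend:.2f} +/- {sigma_trend:.2f} mm/year")

In practice the design matrix would also carry annual and semi-annual harmonics, and the formal error would account for serial correlation of the residuals.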

Luciana Fenoglio-Marc, Erwin Groten
13. Unstructured Meshes in Large-Scale Ocean Modeling

The current status of large-scale ocean modeling on unstructured meshes is discussed in the context of climate applications. Our review is based on FEOM, which is at present the only general circulation model on a triangular mesh with a proven record of global applications. Different setups are considered including some promising alternative finite-element and finite-volume configurations. The focus is on consistency and performance issues which are much easier to achieve with finite-volume methods. On the other hand, they sometimes suffer from numerical modes and require more research before they can be generally recommended for modeling of the general circulation.

Sergey Danilov, Jens Schröter
14. Numerical Methods in Support of Advanced Tsunami Early Warning

After the 2004 Great Sumatra–Andaman tsunami, which devastated vast areas bordering the Indian Ocean and claimed over 230,000 lives, many research activities began to improve tsunami early warning capacities. Among these efforts was a large scientific and development project, the German-Indonesian Tsunami Early Warning System (GITEWS). Advanced numerical methods for simulating tsunami propagation and inundation, as well as for evaluating the measurement data and providing a forecast for precise warning bulletins, have been developed in that context. We take the developments of the GITEWS tsunami modeling as a guideline for introducing concepts and existing approaches in tsunami modeling and early warning. For tsunami propagation and inundation modeling, numerical methods for solving hyperbolic or parabolic partial differential equations play the predominant role. The behavior of tsunami waves is usually modeled by simplifications of the Navier–Stokes equations. In tsunami early warning, an inverse problem needs to be solved, which can be formulated as follows: given a number of measurements of the tsunami event, what was the source; and, knowing the source, what do future states look like? This problem has to be solved within a few minutes in order to be of any use, and the number of available measurements is very small within the first few minutes after the rupture causing a tsunami.
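For reference, the simplification of the Navier–Stokes equations most commonly used in this context is the shallow water system, written here in one-dimensional, frictionless form with generic notation:

$$ \partial_{t} h + \partial_{x}(hu) = 0, \qquad \partial_{t}(hu) + \partial_{x}\Big(hu^{2} + \tfrac{1}{2} g h^{2}\Big) = -\,g\,h\,\partial_{x} b, $$

where h is the water depth, u the depth-averaged velocity, g the gravitational acceleration, and b the bathymetry; operational tsunami codes solve two-dimensional versions of such hyperbolic systems, usually with additional friction and Coriolis terms.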

Jörn Behrens
15. Efficient Modeling of Flow and Transport in Porous Media Using Multiphysics and Multiscale Approaches

Flow and transport processes in porous media including multiple fluid phases are the governing processes in a large variety of geological and technical systems. In general, these systems include processes of different complexity occurring in different parts of the domain of interest. The different processes mostly also take place on different spatial and temporal scales. It is extremely challenging to model such systems in an adequate way accounting for the spatially varying and scale-dependent character of these processes. In this chapter, we give a brief overview of existing upscaling, multiscale, and multiphysics methods, and we present mathematical models and model formulations for multiphase flow in porous media including compositional and non-isothermal flow. Finally, we show simulation results for two-phase flow using a multiphysics method and a multiscale multiphysics algorithm.

Rainer Helmig, Jennifer Niessner, Bernd Flemisch, Markus Wolff, Jochen Fritz
16. Numerical Dynamo Simulations: From Basic Concepts to Realistic Models

Recent years have witnessed an impressive growth in the number and quality of numerical dynamo simulations. The numerical models successfully describe many aspects of the geomagnetic field and also set out to explain the various fields of other planets. The success is somewhat surprising, since numerical limitations force dynamo modelers to run their models at unrealistic parameters. In particular the Ekman number, a measure for the relative importance of viscous to Coriolis forces, is many orders of magnitude too large: Earth’s Ekman number is E = $$10^{-15}$$, while even today’s most advanced numerical simulations have to content themselves with E = $$10^{-6}$$. After a brief introduction into the basics of modern dynamo simulations, the fundamental force balances are discussed and the question of how well the modern models reproduce the geomagnetic field is addressed. First-level properties like dipole dominance, realistic Elsasser and magnetic Reynolds numbers, and Earth-like reversal behavior are already captured by larger Ekman number simulations around E = $$10^{-3}$$. However, low Ekman numbers are required for modeling torsional oscillations, which are thought to be an important part of the decadal geomagnetic field variations. Moreover, only low Ekman number models seem to retain the huge dipole dominance of the geomagnetic field once the Rayleigh number has been increased to values where field reversals happen. These cases also seem to resemble the low-latitude field found at Earth’s core–mantle boundary more closely than larger Ekman number cases.
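For readers unfamiliar with the nondimensional parameter quoted above: in the convention common in geodynamo modeling (some authors include a factor 2), the Ekman number is

$$ E = \frac{\nu}{\Omega D^{2}}, $$

with kinematic viscosity ν, rotation rate Ω, and shell thickness D, so that a small E means that Coriolis forces dominate viscous forces.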

Johannes Wicht, Stephan Stellmach, Helmut Harder
17. Mathematical Properties Relevant to Geomagnetic Field Modeling

Geomagnetic field modeling consists in converting large numbers of magnetic observations into a linear combination of elementary mathematical functions that best describes those observations. The set of numerical coefficients defining this linear combination is then what one refers to as a geomagnetic field model. Such models can be used to produce maps. More importantly, they form the basis for the geophysical interpretation of the geomagnetic field, by providing the possibility of separating fields produced by various sources and extrapolating those fields to places where they cannot be directly measured. In this chapter, the mathematical foundation of global (as opposed to regional) geomagnetic field modeling is reviewed, with a focus on the spatial modeling of the field in spherical coordinates. Time can be dealt with as an independent variable and is not explicitly considered. The relevant elementary mathematical functions are introduced, their properties are reviewed, and it is explained how they can be used to describe the magnetic field in a source-free (such as the Earth’s neutral atmosphere) or source-dense (such as the ionosphere) environment. Completeness and uniqueness properties of those spatial mathematical representations are also discussed, especially in view of providing a formal justification for the fact that geomagnetic field models can indeed be constructed from ground-based and satellite-borne observations, provided those reasonably approximate the ideal situation where relevant components of the field can be assumed perfectly known on spherical surfaces or shells at the time for which the model is to be recovered.
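As a concrete instance of the elementary functions referred to above (standard notation, not specific to this chapter): in a source-free region the internal part of the field is the gradient of a potential expanded in spherical harmonics with Gauss coefficients,

$$ \mathbf{B} = -\nabla V, \qquad V(r,\theta,\phi) = a \sum_{n=1}^{N}\sum_{m=0}^{n}\Big(\frac{a}{r}\Big)^{n+1}\big(g_{n}^{m}\cos m\phi + h_{n}^{m}\sin m\phi\big) P_{n}^{m}(\cos\theta), $$

where a is the Earth’s reference radius and P_n^m are the Schmidt semi-normalized associated Legendre functions; a “field model” is then essentially the set of estimated coefficients g_n^m, h_n^m.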

Terence J. Sabaka, Gauthier Hulot, Nils Olsen
18. Multiscale Modeling of the Geomagnetic Field and Ionospheric Currents

This chapter reports on the recent application of multiscale techniques to the modeling of geomagnetic problems. Two approaches are presented: a spherical harmonics-oriented one, using frequency packages, and a spatially oriented one, using regularizations of the single layer kernel and Green’s function with respect to the Beltrami operator. As an example both approaches are applied to the separation of the magnetic field with respect to interior and exterior sources and the reconstruction of radial ionospheric currents.

Christian Gerhards
19. The Forward and Adjoint Methods of Global Electromagnetic Induction for CHAMP Magnetic Data

Detailed mathematical derivations of the forward and adjoint sensitivity methods are presented for computing the electromagnetic induction response of a 2-D heterogeneous conducting sphere to a transient external electric current excitation. The forward method is appropriate for determining the induced spatiotemporal electromagnetic signature at satellite altitudes associated with the upper and mid-mantle conductivity heterogeneities, while the adjoint method provides an efficient tool for computing the sensitivity of satellite magnetic data to the conductivity structure of the Earth’s interior. The forward and adjoint initial boundary-value problems, both solved in the time domain, are identical, except for the specification of the prescribed boundary conditions. The respective boundary-value data at the satellite’s altitude are the X magnetic component measured by the CHAMP vector magnetometer along the satellite track for the forward method and the difference between the measured and predicted Z magnetic component for the adjoint method. Both methods are alternatively formulated for the case when the time-dependent, spherical harmonic Gauss coefficients of the magnetic field generated by external equatorial ring currents in the magnetosphere and the magnetic field generated by the induced eddy currents in the Earth, respectively, are specified. Before applying these methods, the CHAMP vector magnetic data are modeled by a two-step, track-by-track spherical harmonic analysis. As a result, the X and Z components of CHAMP magnetic data are represented in terms of series of Legendre polynomial derivatives. Four examples of the two-step analysis of the signals recorded by the CHAMP vector magnetometer are presented. The track-by-track analysis is applied to the CHAMP data recorded in the year 2001, yielding a 1-year time series of spherical harmonic coefficients. The output of the forward modeling of electromagnetic induction, that is, the predicted Z component at satellite altitude, can then be compared with the satellite observations. The squares of the differences between the measured and predicted Z component summed up over all CHAMP tracks determine the misfit. The sensitivity of the CHAMP data, that is, the partial derivatives of the misfit with respect to mantle conductivity parameters, are then obtained by the scalar product of the forward and adjoint solutions, multiplied by the gradient of the conductivity and integrated over all CHAMP tracks. Such exactly determined sensitivities are checked against the numerical differentiation of the misfit, and a good agreement is obtained. The attractiveness of the adjoint method lies in the fact that the adjoint sensitivities are calculated for the price of only an additional forward calculation, regardless of the number of conductivity parameters. However, since the adjoint solution proceeds backwards in time, the forward solution must be stored at each time step, leading to memory requirements that are linear with respect to the number of steps undertaken. Having determined the sensitivities, the conjugate gradient inversion is run to infer 1-D and 2-D conductivity structures of the Earth based on the CHAMP residual time series (after the subtraction of the static field and secular variations as described by the CHAOS model) for the year 2001. 
It is shown that this time series is capable of resolving both 1-D and 2-D structures in the upper mantle and the upper part of the lower mantle, while it is not sufficiently long to reliably resolve the conductivity structure in the lower part of the lower mantle.

Zdeněk Martinec
20. Asymptotic Models for Atmospheric Flows

Atmospheric flows feature length and time scales from $$10^{-5}$$ to $$10^{5}$$ m and from microseconds to weeks and more. For scales above several kilometers and minutes, there is a natural scale separation induced by the atmosphere’s thermal stratification together with the influences of gravity and the Earth’s rotation, and the fact that atmospheric flow Mach numbers are typically small. A central aim of theoretical meteorology is to understand the associated scale-specific flow phenomena, such as internal gravity waves, baroclinic instabilities, Rossby waves, cloud formation and moist convection, (anti-)cyclonic weather patterns, hurricanes, and a variety of interacting waves in the tropics. Such understanding is greatly supported by analyses of reduced sets of model equations which capture just those fluid mechanical processes that are essential for the phenomenon in question while discarding higher-order effects. Such reduced models are typically proposed on the basis of combinations of physical arguments and mathematical derivations, and are not easily understood by the meteorologically untrained. This chapter demonstrates how many well-known reduced sets of model equations for specific, scale-dependent atmospheric flow phenomena may be derived in a unified and transparent fashion from the full compressible atmospheric flow equations using standard techniques of formal asymptotics. It also discusses an example of the limitations of this approach. Sections 3–5 of this contribution are a recompilation of the author’s more comprehensive article “Scale-dependent models for atmospheric flows”, Annual Review of Fluid Mechanics, 42 (2010).

Rupert Klein
21. Modern Techniques for Numerical Weather Prediction: A Picture Drawn from Kyrill

This chapter gives a short overview of modern numerical weather prediction (NWP): it sketches the mathematical formulation of the underlying physical problem and its numerical treatment, and gives an outlook on statistical weather forecasting (MOS). Special emphasis is given to the Kyrill event in order to demonstrate the application of the different methods.

Nils Dorband, Martin Fengler, Andreas Gumann, Stefan Laps
22. Modeling Deep Geothermal Reservoirs: Recent Advances and Future Problems

Due to the increasing demand for renewable energy production facilities, modeling geothermal reservoirs is a central issue in today’s engineering practice. After over 40 years of study, many models have been proposed and applied to hundreds of sites worldwide. Nevertheless, with increasing computational capabilities new efficient methods are becoming available. The aim of this chapter is to present recent progress in seismic processing as well as fluid and thermal flow simulations for porous and fractured subsurface systems. The methods commonly used in industrial energy exploration and production, such as forward modeling, seismic migration, and inversion methods, together with continuum and discrete flow models for reservoir monitoring and management, are reviewed. Furthermore, numerical examples are presented for two specific features. Finally, future fields of study are described.

Maxim Ilyasov, Isabel Ostermann, Alessandro Punzi
23. Phosphorus Cycles in Lakes and Rivers: Modeling, Analysis, and Simulation

From spring into the summer period, a large number of lakes are laced with thick layers of algae, which represents a serious problem for the fish stock as well as for other important organisms, and ultimately for the complete biological diversity of species. Consequently, the investigation of the cause-and-effect chain represents an important task concerning the protection of the natural environment. Often such situations are reinforced by an oversupply of nutrients. As phosphorus is the limiting nutrient element for most algal growth processes, an advanced knowledge of the phosphorus cycle is essential. In this context the chapter gives a survey of our recent progress in modeling and numerical simulation of plankton spring bloom situations caused by eutrophication via phosphorus accumulation. Due to the underlying processes, we employ the shallow water equations as the fluid dynamic part, coupled with additional equations describing the biogeochemical processes of interest within both the water layer and the sediment. Depending on the model under consideration, one is faced with significant requirements like positivity as well as conservativity in the context of stiff source terms. The numerical method used to simulate the dynamic part and the evolution of the phosphorus and different biomass concentrations is based on a second-order finite volume scheme, extended by a specific formulation of the modified Patankar approach in order to be unconditionally positivity preserving as well as conservative in the presence of stiff transition terms. Beside a mathematical analysis, several test cases are shown which confirm both the theoretical results and the applicability of the complete numerical scheme. In particular, the flow field and phosphorus dynamics for the West Lake in Hangzhou, China, are computed using the previously stated mass- and positivity-preserving finite volume scheme.
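To make the positivity argument concrete (generic notation, simplified relative to the schemes used in the chapter): for a production–destruction system dc_i/dt = Σ_j (p_{ij}(c) − d_{ij}(c)), the modified Patankar–Euler step weights the production and destruction terms with ratios of new to old concentrations,

$$ c_{i}^{n+1} = c_{i}^{n} + \Delta t \sum_{j}\Big( p_{ij}(c^{n})\,\frac{c_{j}^{n+1}}{c_{j}^{n}} - d_{ij}(c^{n})\,\frac{c_{i}^{n+1}}{c_{i}^{n}} \Big), $$

which is a linear system for c^{n+1} whose solution remains positive for any step size Δt and, if p_{ij} = d_{ji}, also conserves the total mass.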

Andreas Meister, Joachim Benz

Analytic, Algebraic, and Operator Theoretical Methods

Frontmatter
24. Noise Models for Ill-Posed Problems

The standard view of noise in ill-posed problems is that it is either deterministic and small (strongly bounded noise) or random and large (not necessarily small). Following Eggermont, LaRiccia, and Nashed (2009), a new noise model is investigated, wherein the noise is weakly bounded. Roughly speaking, this means that local averages of the noise are small. A precise definition is given in a Hilbert space setting, and Tikhonov regularization of ill-posed problems with weakly bounded noise is analyzed. The analysis unifies the treatment of “classical” ill-posed problems with strongly bounded noise with that of ill-posed problems with weakly bounded noise. Regularization parameter selection is discussed, and an example on numerical differentiation is presented.
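A small self-contained Python sketch of the classical strongly bounded setting (not the weakly bounded analysis of the chapter), using the numerical-differentiation example mentioned above: differentiation is recast as inverting an integration operator and stabilized by Tikhonov regularization.

    import numpy as np

    # Tikhonov regularization for numerical differentiation of noisy data
    # (classical deterministic-noise setting; illustrative only).
    rng = np.random.default_rng(1)
    n = 200
    t = np.linspace(0.0, 1.0, n)
    h = t[1] - t[0]
    y = np.sin(2 * np.pi * t) + 1e-2 * rng.standard_normal(n)   # noisy samples of f

    # Forward operator A: cumulative integration, so that A @ f' ~ f - f(0).
    A = h * np.tril(np.ones((n, n)))
    alpha = 1e-3                                                # regularization parameter
    x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ (y - y[0]))

    rms = np.linalg.norm(x_reg - 2 * np.pi * np.cos(2 * np.pi * t)) / np.sqrt(n)
    print(f"rms error of regularized derivative: {rms:.3f}")

The choice of alpha is exactly the parameter-selection question discussed in the chapter.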

Paul N. Eggermont, Vincent LaRiccia, M. Zuhair Nashed
25. Sparsity in Inverse Geophysical Problems

Many geophysical imaging problems are ill-posed in the sense that the solution does not depend continuously on the measured data. Therefore their solutions cannot be computed directly, but instead require the application of regularization. Standard regularization methods find approximate solutions with small L2 norm. In contrast, sparsity regularization yields approximate solutions that have only a small number of nonvanishing coefficients with respect to a prescribed set of basis elements. Recent results demonstrate that these sparse solutions often represent real objects much better than solutions with small L2 norm. In this survey, recent mathematical results for sparsity regularization are reviewed. As an application of the theoretical results, synthetic focusing in Ground Penetrating Radar is considered, which is a paradigm of an inverse geophysical problem.

Markus Grasmair, Markus Haltmeier, Otmar Scherzer
26. Quantitative Remote Sensing Inversion in Earth Science: Theory and Numerical Treatment

Quantitative remote sensing is an appropriate way to estimate structural parameters and spectral component signatures of Earth surface cover types. Since the real physical system coupling the atmosphere, the water, and the land surface is very complicated and continuous, it may require a comprehensive set of parameters for its description; any practical physical model can therefore only be approximated by a mathematical model that includes a limited number of the most important parameters capturing the major variation of the real system. The pivotal problem for quantitative remote sensing is the inversion. Inverse problems are typically ill-posed. The ill-posed nature is characterized by: (C1) the solution may not exist; (C2) the dimension of the solution space may be infinite; (C3) the solution is not continuous with variations of the observed signals. These issues exist for nearly all inverse problems in geoscience and quantitative remote sensing. For example, when the observation system is band-limited or the sampling is poor, i.e., there are too few observations or the directions are poorly located, the inversion process becomes underdetermined, which leads to a large condition number of the normalized system and to significant noise propagation. Hence (C2) and (C3) are the main difficulties for quantitative remote sensing inversion. This chapter addresses the theory and methods from the viewpoint that quantitative remote sensing inverse problems can be represented by kernel-based operator equations and solved by coupling regularization and optimization methods.

Yanfei Wang
27. Multiparameter Regularization in Downward Continuation of Satellite Data

This chapter discusses the downward continuation of spaceborne gravity data. We analyze the ill-posed nature of this problem and describe some approaches to its treatment. The chapter focuses on the multiparameter regularization approach and shows how it can naturally appear in the geodetic context, for example in the form of the regularized total least squares or the dual regularized total least squares. Numerical illustrations with synthetic data demonstrate that multiparameter regularization can indeed produce an approximation of good accuracy.

Shuai Lu, Sergei V. Pereverzev
28. Correlation Modeling of the Gravity Field in Classical Geodesy

The spatial correlation of the Earth’s gravity field is well known and widely used in applications of geophysics and physical geodesy. This chapter develops the mathematical theory of correlation functions, as well as covariance functions under a statistical interpretation of the field, for functions and processes on the sphere and plane, with formulation of the corresponding power spectral densities in the respective frequency domains, and with extensions into the third dimension for harmonic functions. The theory is applied, in particular, to the disturbing gravity potential with consistent relationships of the covariance and power spectral density to any of its spatial derivatives. An analytic model for the covariance function of the disturbing potential is developed for both spherical and planar application, which has analytic forms also for all derivatives in both the spatial and the frequency domains (including the along-track frequency domain). Finally, a method is demonstrated to determine the parameters of this model from empirical regional power spectral densities of the gravity anomaly.
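A standard building block behind such models (generic notation): an isotropic covariance function of the disturbing potential on and above a sphere of radius R can be written as a Legendre series in the spherical distance ψ with degree variances c_n and an upward-continuation factor for evaluation radii r_1, r_2 ≥ R,

$$ C(\psi; r_{1}, r_{2}) = \sum_{n=2}^{\infty} c_{n}\Big(\frac{R^{2}}{r_{1} r_{2}}\Big)^{n+1} P_{n}(\cos\psi), $$

and covariances of any derivatives of the potential follow by applying the corresponding differential operators to this series (covariance propagation).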

Christopher Jekeli
29. Modeling Uncertainty of Complex Earth Systems in Metric Space

Modeling the subsurface of the Earth has many characteristic challenges. Earth models reflect the complexity of the Earth subsurface, and contain many complex elements of modeling, such as the subsurface structures, the geological processes of growth and/or deposition, and the placement, movement, or injection/extraction of fluid and gaseous phases contained in rocks or soils. Moreover, due to the limited information provided by measurement data, whether from boreholes or geophysics, and the requirement to make interpretations at each stage of the modeling effort, uncertainty is inherent to any modeling effort. As a result, many alternative (input) models need to be built to reflect the ensemble of sources of uncertainty. On the other hand, the (engineering) purpose (in terms of target response) of these models is often very clear, simple, and straightforward: do we clean up or not, do we drill, where do we drill, what are the oil and gas reserves, how far have contaminants traveled, etc. The observation that models are complex but their purpose is simple suggests that input model complexity and dimensionality can be dramatically reduced, not by itself, but by means of the purpose or target response. Reducing dimension by only considering the variability between all possible models may be an impossible task, since the intrinsic variation between all input models is far too complex to be reduced to a few dimensions by simple statistical techniques such as principal component analysis (PCA). In this chapter, we will define a distance between two models created with different (and possibly randomized) input parameters. This distance can be tailored to the application or target output response at hand, but should be chosen such that it correlates with the difference in target response between any two models. A distance then defines a metric space, which comes with a broad gamut of theory. Starting from this point of view, we redefine many of the current Cartesian-based Earth modeling problems and methodologies, such as inverse modeling, stochastic simulation and estimation, model selection and screening, model updating, and response uncertainty evaluation, in metric space. We demonstrate how such a redefinition greatly simplifies, as well as increases the effectiveness and efficiency of, any modeling effort, particularly those that require addressing model and response uncertainty.

Jef Caers, Kwangwon Park, Céline Scheidt
30. Slepian Functions and Their Use in Signal Estimation and Spectral Analysis

It is a well-known fact that mathematical functions that are timelimited (or spacelimited) cannot be simultaneously bandlimited (in frequency). Yet the finite precision of measurement and computation unavoidably bandlimits our observation and modeling of scientific data, and we often only have access to, or are only interested in, a study area that is temporally or spatially bounded. In the geosciences, we may be interested in spectrally modeling a time series defined only on a certain interval, or we may want to characterize a specific geographical area observed using an effectively bandlimited measurement device. It is clear that analyzing and representing data of this kind will be facilitated if a basis of functions can be found that are “spatiospectrally” concentrated, i.e., “localized” in both domains at the same time. Here, we give a theoretical overview of one particular approach to this “concentration” problem, as originally proposed for time series by Slepian and coworkers in the 1960s. We show how this framework leads to practical algorithms and statistically performant methods for the analysis of signals and their power spectra in one and two dimensions, and on the surface of a sphere.
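The “concentration” problem can be stated compactly (generic notation): among bandlimited functions g on the domain Ω, maximize the fraction of energy inside the target region R,

$$ \lambda = \frac{\int_{R} g^{2}\,d\Omega}{\int_{\Omega} g^{2}\,d\Omega} = \mathrm{maximum}, $$

which leads to an algebraic eigenvalue problem for the expansion coefficients of g; the eigenfunctions are the Slepian functions and the eigenvalues 0 < λ < 1 measure their spatial concentration.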

Frederik J. Simons
31. Special Functions in Mathematical Geosciences: An Attempt at a Categorization

This chapter reports on the current activities and recent progress in the field of special functions of mathematical geosciences. The chapter focuses on two major topics of interest, namely, trial systems of polynomial (i.e., spherical harmonic) and polynomially based (i.e., zonal kernel) type. A fundamental tool is an uncertainty principle, which gives appropriate bounds for both the qualification and quantification of space and frequency (momentum) localization of the special (kernel) function under consideration. The essential outcome is a better understanding of the constructive approximation in terms of zonal kernel functions such as splines and wavelets.

Willi Freeden, Michael Schreiner
32. Tomography: Problems and Multiscale Solutions

In this chapter, a brief survey of three different approaches for the approximation of functions on the 3d-ball is presented: the expansion in an orthonormal (polynomial) basis, a reproducing kernel based spline interpolation/approximation, and a wavelet-based multiscale analysis. In addition, some geomathematical tomography problems are discussed as applications.

Volker Michel
33. Material Behavior: Texture and Anisotropy

This contribution is an attempt to present a self-contained and comprehensive survey of the mathematics and physics of the material behavior of rocks in terms of texture and anisotropy. Rocks are generally multi-phase and poly-crystalline, and each single crystallite is anisotropic with respect to its physical properties; texture, i.e., the statistical and spatial distribution of crystallographic orientations, therefore becomes a constitutive characteristic and determines the material behavior except for grain boundary effects, i.e., in a first-order approximation. This chapter is in particular an account of modern mathematical texture analysis, explicitly clarifying terms, providing definitions and justifying their application, and emphasizing its major insights. Thus, mathematical texture analysis is brought back to the realm of spherical Radon and Fourier transforms, spherical approximation, and spherical probability, i.e., to the mathematics of spherical tomography.

Ralf Hielscher, David Mainprice, Helmut Schaeben
34. Dimensionality Reduction of Hyperspectral Imagery Data for Feature Classification

The objective of this chapter is to highlight the current research activities and recent progress in the area of dimensionality reduction of hyperspectral geological/geographical imagery data, which are widely used in image segmentation and feature classification. We will only focus on four topics of interest, namely hyperspectral image (HSI) data preprocessing, similarity/dissimilarity definition of HSI data, construction of dimensionality reduction (DR) kernels for HSI data, and HSI data dimensionality reduction algorithms based on DR kernels.
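A generic Python sketch of one kernel-based dimensionality reduction step (a Gaussian affinity kernel followed by an eigendecomposition, in the spirit of diffusion maps/Laplacian eigenmaps); the specific DR kernels constructed in the chapter differ in their details:

    import numpy as np

    # Generic kernel-based dimensionality reduction sketch (illustrative only).
    def dr_embedding(X, eps, dim):
        # X: (n_pixels, n_bands) array of spectral vectors
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        W = np.exp(-d2 / eps)                                  # Gaussian affinity kernel
        P = W / W.sum(axis=1, keepdims=True)                   # row-normalized (Markov) kernel
        vals, vecs = np.linalg.eig(P)
        order = np.argsort(-vals.real)
        # drop the trivial constant eigenvector, keep the next `dim` coordinates
        return vecs[:, order[1:dim + 1]].real * vals.real[order[1:dim + 1]]

    X = np.random.default_rng(3).random((100, 30))             # toy "spectra"
    Y = dr_embedding(X, eps=1.0, dim=3)
    print(Y.shape)                                             # (100, 3)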

Charles K. Chui, Jianzhong Wang

Statistical and Stochastic Methods

Frontmatter
35. Oblique Stochastic Boundary-Value Problem

The aim of this chapter is to report the current state of the analysis of weak solutions to oblique boundary problems for the Poisson equation. In this chapter, deterministic as well as stochastic inhomogeneities are treated, and existence and uniqueness results for the corresponding weak solutions are presented. We consider the problem for bounded inner and unbounded outer domains in $${\mathbb{R}}^{n}$$. The main tools for the deterministic inner problem are a Poincaré inequality and some analysis for Sobolev spaces on submanifolds, in order to use the Lax–Milgram lemma. The Kelvin transformation enables us to translate the outer problem to a corresponding inner problem. Thus, we can define a solution operator by using the solution operator of the inner problem. The extension to stochastic inhomogeneities is done with the help of tensor product spaces of a probability space with the Sobolev spaces from the deterministic problems. We can prove a regularization result, which shows that the weak solution fulfills the classical formulation for smooth data. A Ritz–Galerkin approximation method for numerical computations is available. Finally, we show that the results are applicable to geomathematical problems.
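For readers who want the functional-analytic anchor (stated generically, not in the chapter’s notation): the Lax–Milgram lemma guarantees, for a Hilbert space H, a bounded and coercive bilinear form a, and a bounded linear functional F, a unique u ∈ H with

$$ a(u,v) = F(v) \quad \text{for all } v \in H, \qquad \|u\|_{H} \le \frac{1}{c}\,\|F\|_{H'}, $$

where c is the coercivity constant; the weak solutions discussed above are obtained by casting the oblique boundary problem for the Poisson equation into exactly this form.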

Martin Grothaus, Thomas Raskop
36. Geodetic Deformation Analysis with Respect to an Extended Uncertainty Budget

This chapter reports current activities and recent progress in the field of geodetic deformation analysis if a refined uncertainty budget is considered. This is meaningful in the context of a thorough system-theoretical assessment of geodetic monitoring and it leads to a more complete formulation of the modeling and analysis chain. The work focuses on three major topics: the mathematical modeling of an extended uncertainty budget, the adequate adaptation of estimation and analysis methods, and the consequences for one outstanding step of geodetic deformation analysis: the test of a linear hypothesis. The essential outcome is a consistent assessment of the quality of the final decisions such as the significance of a possible deformation.

Hansjörg Kutterer
37. Mixed Integer Estimation and Validation for Next Generation GNSS

The coming decade will bring a proliferation of Global Navigation Satellite Systems (GNSS) that are likely to revolutionize society in the same way as the mobile phone has done. The promise of a broader multi-frequency, multi-signal GNSS “system of systems” has the potential of enabling a much wider range of demanding applications compared to the current GPS-only situation. In order to achieve the highest accuracies one must exploit the unique properties of the received carrier signals. These properties include the multi-satellite system tracking, the mm-level measurement precision, the frequency diversity, and the integer ambiguities of the carrier phases. Successful exploitation of these properties results in an accuracy improvement of the estimated GNSS parameters of two orders of magnitude. The theory that underpins this ultraprecise GNSS parameter estimation and validation is the theory of integer inference. This theory is the topic of the present chapter.

Peter J.G. Teunissen
38. Mixed Integer Linear Models

Space geodesy has brought profound convenience to our daily life with positioning-based products on the one hand, and has provided a challenging opportunity to contribute fundamentally to mathematics and statistics on the other. The use of carrier phase observables has given rise to observation models that are not encountered in standard courses on statistics and/or adjustment theory. This chapter provides a tutorial on mixed integer linear models. First, we classify real-valued and mixed integer linear models and then accordingly define the corresponding conventional and mixed integer least squares problems. Integer unknown parameters are solved for under a general framework of integer programming (aided with decorrelation and/or reduction methods) and represented/estimated from the statistical point of view. As an indispensable and fundamental element of the integer least squares estimator, the Voronoi cell is shown to be extremely complex, both computationally and in shape, and has to be bounded with figures of simple shape. As a direct application, we obtain lower and upper probabilistic bounds for the probability with which the integers are correctly estimated. Finally, we briefly discuss an integer hypothesis testing problem.
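In compact form (standard GNSS-style notation, not necessarily the chapter’s), the mixed integer linear model and the associated mixed integer least squares problem read

$$ y = A a + B b + e, \quad a \in \mathbb{Z}^{n},\; b \in \mathbb{R}^{p}, \qquad \min_{a\in\mathbb{Z}^{n},\, b\in\mathbb{R}^{p}} \;\| y - A a - B b \|^{2}_{Q_{y}^{-1}}, $$

which is typically attacked by first computing the real-valued (“float”) solution and then searching the integer part a over a decorrelated (reduced) search space.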

Peiliang Xu
39. Statistical Analysis of Climate Series

The topic of this contribution is the statistical analysis of climate time series. The data sets consist of monthly temperature means and monthly precipitation amounts gained at three German weather stations. Emphasis lies on the methods of time series analysis, comprising plotting, modeling, and predicting climate values in the near future.

Helmut Pruscha

Computational and Numerical Methods

Frontmatter
40. Numerical Integration on the Sphere

This chapter is concerned with numerical integration over the unit sphere $${\mathbb{S}}^{2} \subset {\mathbb{R}}^{3}$$. We first discuss basic facts about numerical integration rules with positive weights. Then some important types of rules are discussed in detail: rules with a specified polynomial degree of precision, including the important case of longitude–latitude rules; rules using scattered data points; rules based on equal-area partitions; and rules for numerical integration over subsets of the sphere. Finally we show that for numerical integration over the whole sphere and for functions with an appropriate degree of smoothness, an optimal rate of convergence can be achieved by positive-weight rules with polynomial precision, and also by rules obtained by integrating a suitable radial basis function interpolant.
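A small self-contained Python sketch of one of the rule families mentioned above, a longitude–latitude rule with Gauss–Legendre nodes in the polar direction (illustrative only; many other constructions are discussed in the chapter):

    import numpy as np

    # Longitude-latitude rule on the unit sphere: Gauss-Legendre nodes in
    # z = cos(theta), equally spaced points (trapezoidal rule) in longitude.
    def sphere_rule(n_lat, n_lon):
        z, w_z = np.polynomial.legendre.leggauss(n_lat)    # nodes/weights on [-1, 1]
        lon = 2 * np.pi * np.arange(n_lon) / n_lon
        w_lon = 2 * np.pi / n_lon
        zz, ll = np.meshgrid(z, lon, indexing="ij")
        s = np.sqrt(1.0 - zz**2)
        points = np.stack([s * np.cos(ll), s * np.sin(ll), zz], axis=-1)
        weights = np.repeat(w_z[:, None], n_lon, axis=1) * w_lon
        return points.reshape(-1, 3), weights.ravel()

    pts, wts = sphere_rule(8, 16)
    f = 1.0 + pts[:, 2] ** 2                     # integrand 1 + z^2
    exact = 4 * np.pi + 4 * np.pi / 3            # area of sphere + integral of z^2
    print(abs(np.sum(wts * f) - exact))          # ~1e-14: rule is exact for this polynomial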

Kerstin Hesse, Ian H. Sloan, Robert S. Womersley
41. Multiscale Approximation

In this chapter, we briefly recall the concept of multiscale approximations of functions by means of wavelet expansions. We present a short overview on the basic construction principles and discuss the most important properties of wavelets such as characterizations of function spaces. Moreover, we explain how wavelets can be used in signal/image analysis, in particular for compression and denoising.

Stephan Dahlke
42. Sparse Solutions of Underdetermined Linear Systems

Our main goal here is to discuss perspectives of applications based on solving underdetermined systems of linear equations (SLE). This discussion includes the interconnection of underdetermined SLE with global problems of information theory and with data measurement and representation. Preference is given to describing the hypothetical destination point of the undertaken efforts, the current status of the problem, and possible methods to overcome difficulties on the way to that destination point. We do not pretend to give a full survey of the current state of the theoretical research, which is by now very extensive. We only discuss some fundamental theoretical results justifying the main applied ideas. At the end of the chapter we give numerical results related to popular algorithms like ℓ1 minimization, the Stagewise Orthogonal Greedy Algorithm, the Reweighted ℓ1 algorithm, and the new ℓ1 Greedy Algorithm.
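As a pointer to what the simplest of these algorithms does, here is a small self-contained Python sketch of ℓ1 minimization by iterative soft thresholding (ISTA) for a synthetic underdetermined system; it is illustrative only and not the authors’ implementation:

    import numpy as np

    # Recover a sparse x from an underdetermined system y = A x by minimizing
    # 0.5*||A x - y||^2 + lam*||x||_1 with iterative soft thresholding (ISTA).
    rng = np.random.default_rng(2)
    m, n, k = 50, 200, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true

    lam = 1e-3
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # step <= 1/||A||^2 ensures convergence
    x = np.zeros(n)
    for _ in range(2000):
        g = x - step * (A.T @ (A @ x - y))       # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold

    print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))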

Inna Kozlov, Alexander Petukhov
43. Multidimensional Seismic Compression by Hybrid Transform with Multiscale-Based Coding

We present algorithms that compress two- and three-dimensional seismic data arrays. These arrays are piecewise smooth in the horizontal directions and have oscillating events in the vertical direction. The transform part of the compression process is an algorithm that combines the wavelet transform with the local cosine transform (LCT). The quantization and entropy coding parts of the compression are taken from wavelet-based coders such as the Set Partitioning in Hierarchical Trees (SPIHT) and Embedded Zerotree Wavelet (EZW) encoding/decoding schemes, which efficiently use the multiscale (multiresolution) structure of the wavelet transform coefficients. To efficiently apply these codecs (SPIHT or EZW) to a mixed coefficient array, the LCT coefficients are reordered. This algorithm outperforms other algorithms that are based only on 2D wavelet transforms. The wavelet part in the mixed transform of the hybrid algorithm utilizes the library of Butterworth wavelet transforms. In the 3D case, the 2D wavelet transform is applied to horizontal planes while the LCT is applied in the vertical direction. After reordering the LCT coefficients, the 3D coding (SPIHT or EZW) is applied. In addition, a 3D compression method for visualization is presented. For this, the data cube is decomposed into relatively small cubes, which are processed separately.

Amir Z. Averbuch, Valery A. Zheludev, Dan D. Kosloff
44. Cartography

This chapter gives an overview of theoretical and technological developments that have shaped and reshaped maps and cartography. It analyzes the nature of cartography and introduces three fundamental transformations involved in the cartographic process from the real world to a mental reality in the user’s brain. The author attempts to categorize cartographic products based on relationships between mapmakers and map users. The state of the art of cartography is summarized by a number of dichotomous developments concerned with the dimensionality of maps, design styles, the cognitive workload of map use, usage context, and the interactivity of mapping systems. Finally, the changed responsibilities of cartographers are outlined in view of the currently prevailing phenomenon of “ubiquitous cartography and invisible cartographers.”

Liqiu Meng
45. Geoinformatics

In this chapter, the scientific background of geoinformatics is reflected and research issues are described, together with examples and an extensive list of references.

Monika Sester
Backmatter
Metadata
Title
Handbook of Geomathematics
Edited by
Willi Freeden
M. Zuhair Nashed
Thomas Sonar
Copyright Year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-01546-5
Print ISBN
978-3-642-01545-8
DOI
https://doi.org/10.1007/978-3-642-01546-5