
2015 | Book

The 1st International Workshop on the Quality of Geodetic Observation and Monitoring Systems (QuGOMS'11)

Proceedings of the 2011 IAG International Workshop, Munich, Germany April 13–15, 2011


About this book

These proceedings contain 25 papers, which are the peer-reviewed versions of presentations made at the 1st International Workshop on the Quality of Geodetic Observation and Monitoring Systems (QuGOMS’11), held 13 April to 15 April 2011 in Garching, Germany. The papers were drawn from five sessions reflecting the following topic areas: (1) Uncertainty Modeling of Geodetic Data, (2) Theoretical Studies on Combination Strategies and Parameter Estimation, (3) Recursive State-Space Filtering, (4) Sensor Networks and Multi Sensor Systems in Engineering Geodesy, (5) Multi-Mission Approaches With View to Physical Processes in the Earth System.

Table of Contents

Frontmatter

Uncertainty Modeling of Geodetic Data

Frontmatter
Modeling Data Quality Using Artificial Neural Networks
Abstract
Managing data quality is an important issue in all technical fields of applications. Demands on quality-assured data in combination with a more diversified quality description are rising with increasing complexity and automation of processes, for instance within advanced driver assistance systems (ADAS). Therefore it is important to use a comprehensive quality model and furthermore to manage and describe data quality throughout processes or sub-processes.
This paper focuses on the modeling of data quality in processes which are in general not known in detail or which are too complex to describe all influences on data quality. As emerged during our research, artificial neural networks (ANN) are capable of modeling data quality parameters within processes with respect to their interconnections.
Since multi-layer feed-forward ANN are required for this task, a large number of examples, depending on the number of quality parameters to be taken into account, is necessary for the supervised learning of the ANN, i.e. for determining all parameters defining the net. Therefore the general usability of ANN was first evaluated for a simple geodetic application, the polar survey, for which an unlimited number of learning examples can be generated easily. As will be shown, the quality parameters describing accuracy, availability, completeness and consistency of the data can be modeled using ANN. A combined evaluation of availability, completeness or consistency together with accuracy was tested as well. Standard deviations of new points can be determined using ANN with sub-mm accuracy in all cases.
To benchmark the usability of ANN for a real practical problem, the complex process of mobile radio location and the determination of driver trajectories on the digital road network from these data was used. The quality of the calculated trajectories could be predicted sufficiently well from a number of relevant input parameters such as antenna density and road density. The cross-deviation, an important quality parameter for the trajectories, could be predicted with an accuracy of better than 40 m.
Ralf Laufer, Volker Schwieger
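The supervised learning described above can be sketched with a minimal one-hidden-layer feed-forward network. This is not the authors' network or data: the architecture, the synthetic "quality parameter" target and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a quality-modeling task (hypothetical data): predict a
# scalar quality parameter from two input features.
X = rng.uniform(0.0, 1.0, size=(500, 2))
y = (0.3 * X[:, 0] + 0.7 * X[:, 1] ** 2)[:, None]  # synthetic target

# One hidden tanh layer, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

losses = []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error loss
    g2 = err / len(X)                   # dL/dpred
    dW2 = h.T @ g2; db2 = g2.sum(0)
    g1 = (g2 @ W2.T) * (1 - h ** 2)     # through tanh
    dW1 = X.T @ g1; db1 = g1.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(losses[0], losses[-1])  # training loss should drop substantially
```

The paper's applications (polar survey, trajectory quality) would replace the synthetic inputs and target with real quality parameters, but the training loop is the same in principle.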
Magic Square of Real Spectral and Time Series Analysis with an Application to Moving Average Processes
Abstract
This paper is concerned with the spectral analysis of stochastic processes that are real-valued, one-dimensional, discrete-time, covariance-stationary, and which have a representation as a moving average (MA) process. In particular, we will review the meaning and interrelations of four fundamental quantities in the time and frequency domain, (1) the stochastic process itself (which includes filtered stochastic processes), (2) its autocovariance function, (3) the spectral representation of the stochastic process, and (4) the corresponding spectral distribution function, or if it exists, the spectral density function. These quantities will be viewed as forming the corners of a square (the “magic square of spectral and time series analysis”) with various connecting lines, which represent certain mathematical operations between them. To demonstrate the evaluation of these operations, we will discuss the example of a q-th order MA process.
I. Krasbutter, B. Kargoll, W.-D. Schuh
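Two corners of the "magic square" can be made concrete for the MA(q) example: the autocovariance gamma(h) = sigma^2 * sum_j theta_j * theta_{j+h} vanishes for |h| > q, and the spectral density is f(omega) = sigma^2/(2*pi) * |sum_j theta_j exp(-i*j*omega)|^2, whose integral over one period returns gamma(0). A sketch (coefficients are illustrative, not from the paper):

```python
import numpy as np

def ma_autocovariance(theta, sigma2, max_lag):
    """Autocovariance gamma(h) of the MA(q) process
    x_t = sum_j theta[j] * eps_{t-j}, with Var(eps) = sigma2.
    theta[0] is the coefficient of eps_t (often 1)."""
    q = len(theta) - 1
    gamma = np.zeros(max_lag + 1)
    for h in range(max_lag + 1):
        if h <= q:  # gamma(h) = 0 for lags beyond the MA order
            gamma[h] = sigma2 * sum(theta[j] * theta[j + h] for j in range(q - h + 1))
    return gamma

def ma_spectral_density(theta, sigma2, omega):
    """Spectral density f(omega) = sigma2/(2*pi) * |sum_j theta[j] e^{-i j omega}|^2."""
    j = np.arange(len(theta))
    transfer = np.sum(np.asarray(theta) * np.exp(-1j * j * omega[:, None]), axis=1)
    return sigma2 / (2 * np.pi) * np.abs(transfer) ** 2

# MA(1): x_t = eps_t + 0.5 eps_{t-1}
gamma = ma_autocovariance([1.0, 0.5], 1.0, 3)
print(gamma)  # gamma(0)=1.25, gamma(1)=0.5, gamma(2)=gamma(3)=0
```

The connecting line of the square from spectral density back to autocovariance is the integral of f over [-pi, pi], which reproduces gamma(0).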
Describing the Quality of Inequality Constrained Estimates
Abstract
A key feature of geodetic adjustment theory is the description of stochastic properties of the estimated quantities. A variety of tools and measures have been developed to describe the quality of ordinary least-squares estimates, for example, variance-covariance information, redundancy numbers, etc. Many of these features can easily be extended to a constrained least-squares estimate with equality constraints. However, this is not true for inequality constrained estimates. In many applications in geodesy the introduction of inequality constraints could improve the results (e.g. filter and network design or the regularization of ill-posed problems). This calls for an adequate stochastic modeling accompanying the already highly developed estimation theory in the field of inequality constrained estimation. Therefore, in this contribution, an attempt is made to develop measures for the quality of inequality constrained least-squares estimates combining Monte Carlo methods and the theory of quadratic programming. Special emphasis is placed on the derivation of confidence regions.
L. Roese-Koerner, B. Devaraju, W.-D. Schuh, N. Sneeuw
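The Monte Carlo idea behind such quality measures can be illustrated in the simplest possible setting, which is an assumption of ours and not the paper's quadratic-programming machinery: with a single parameter and the constraint x >= 0, the inequality-constrained least-squares (ICLS) estimate is the ordinary least-squares estimate projected onto the feasible set, and sampling it reveals the mixed distribution (point mass on the boundary) that makes standard Gaussian confidence regions inadequate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Model: y = a * x_true + noise, with the prior knowledge x >= 0.
a = np.ones(10)          # trivial design "matrix" (10 observations of x)
x_true = 0.05            # true value close to the constraint boundary
sigma = 0.5

def icls_estimate(y):
    x_ols = a @ y / (a @ a)          # ordinary least squares
    return max(x_ols, 0.0)           # projection onto {x >= 0}

# Monte Carlo: sample the estimator's distribution
estimates = np.array([
    icls_estimate(a * x_true + rng.normal(0, sigma, 10))
    for _ in range(20000)
])

# The distribution mixes a point mass at 0 with a truncated Gaussian,
# so the confidence region must be taken from empirical quantiles.
lo, hi = np.quantile(estimates, [0.025, 0.975])
frac_at_boundary = np.mean(estimates == 0.0)
print(lo, hi, frac_at_boundary)
```

With several parameters and general inequality constraints, the projection step becomes the quadratic program the paper discusses; the sampling logic is unchanged.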
GNSS Integer Ambiguity Validation Procedures: Sensitivity Analysis
Abstract
Global Navigation Satellite Systems (GNSS) have been widely used for many precise positioning and navigation applications. In satellite-based precise positioning, as carrier phase measurements are used, the determination of the correct integer carrier phase ambiguities is a key issue in obtaining accurate and reliable positioning results. Therefore much effort has been invested in developing robust quality control procedures which can effectively validate ambiguity resolution results. Such quality control has traditionally been based on the so-called F-ratio and R-ratio tests. A major shortcoming of these two ratio tests is that their probability distributions, commonly assumed to follow the Fisher distribution, are in fact unknown, which precludes the possibility of evaluating the confidence level of the ambiguity validation test. To overcome this shortcoming, an alternative ambiguity validation test based on the so-called W-ratio has been proposed, which allows for a more rigorous quality control procedure under the assumption that the fixed integer ambiguities are constant. The W-ratio test has been widely used by many researchers. Like any other quality control procedure, it rests on assumptions; for example, both the functional and the stochastic model are assumed to be correct. This paper presents a sensitivity analysis for the new ambiguity validation test based on the W-ratio as well as for the other two ratio tests. The analysis covers the sensitivity of the ratio tests to undetected gross errors (outliers), stochastic models, and geometry strengths relating to a variety of satellite constellations, such as GPS and integrated GPS/GLONASS, with real data sets. The performances of the different ratio tests are analyzed in terms of the so-called ambiguity “correct” rates based on the ground-truth integer ambiguity values.
J. Wang, T. Li
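The acceptance logic common to these ratio tests can be sketched with the R-ratio variant (a simplified stand-in; the paper's W-ratio statistic is not reproduced here), using a naive rounding-neighborhood search in place of a proper LAMBDA-style integer search:

```python
import numpy as np

def ratio_test(float_amb, Q_inv, threshold=3.0):
    """Simplified R-ratio ambiguity validation.

    float_amb : float ambiguity estimates
    Q_inv     : inverse covariance matrix of the float ambiguities
    Accept the fixed (best integer) solution if q2 / q1 >= threshold,
    where q1, q2 are the squared norms of the best and second-best
    integer candidates in the metric of Q_inv.
    """
    base = np.round(float_amb).astype(int)
    # Candidate integers: the rounded vector plus +/-1 perturbations
    candidates = [base.copy()]
    for i in range(len(base)):
        for d in (-1, 1):
            c = base.copy(); c[i] += d
            candidates.append(c)

    def q(z):
        e = float_amb - z
        return float(e @ Q_inv @ e)

    qs = sorted(q(z) for z in candidates)
    q1, q2 = qs[0], qs[1]
    if q1 == 0.0:
        return True, np.inf
    return q2 / q1 >= threshold, q2 / q1

# Clearly decidable case: float solution very close to an integer vector
ok, ratio = ratio_test(np.array([3.02, -1.01]), np.eye(2))
print(ok, ratio)
```

The W-ratio replaces the quotient by a differently normalized statistic with a tractable distribution; the search-and-compare structure is the same.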
Optimal Design of Deformation Monitoring Networks Using the Global Optimization Methods
Abstract
Geodetic networks are very important tools that can be used to monitor crustal movements or the deformation of structures. However, a geodetic network must be designed so as to sufficiently meet network quality requirements such as accuracy, reliability, sensitivity and economy. This is the subject of geodetic network optimization. Traditionally, classical methods have been used for solving geodetic optimization problems. More recently, evolutionary algorithms such as particle swarm optimization have come into use. These methods are inspired by optimization and adaptation processes encountered in nature. They are iterative procedures for quickly and efficiently solving complex optimization problems. They may provide the global optimum or at least near-optimum solutions. In this paper, the use of the shuffled frog-leaping algorithm for the optimal design of a deformation monitoring network is studied. The aim is to design and optimize a geodetic network in terms of high reliability.
M. Yetkin, C. Inal

Theoretical Studies on Combination Strategies and Parameter Estimation

Frontmatter
Towards the Combination of Data Sets from Various Observation Techniques
Abstract
Nowadays, heterogeneous data sets are often combined within a parameter estimation process in order to benefit from their individual strengths and favorable features. Frequently, the different data sets are complementary with respect to their measurement principle, accuracy, spatial and temporal distribution and resolution, as well as their spectral characteristics. This paper first reviews various combination strategies based on the Gauss-Markov model; special attention is paid to the stochastic modeling of the input data, e.g. the influence of correlations between different sets of input data. Furthermore, the method of variance component estimation is presented to determine the relative weighting between the observation techniques. If the input data sets are sensitive to different parts of the frequency spectrum, a multi-scale representation might be applied, which basically means the decomposition of a target function into a number of detail signals, each related to a specific frequency band. A successive parameter estimation can be applied to determine the detail signals.
M. Schmidt, F. Göttl, R. Heinkelmann
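The variance component estimation mentioned above can be sketched for two observation groups observing the same parameters with unknown variance factors. The iteration below is a simplified Helmert/Förstner-type scheme with identity cofactor matrices; data, dimensions and the number of iterations are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two observation groups (e.g. two techniques) observing parameters x,
# each with an unknown variance factor sigma_i^2.
n1, n2, m = 200, 200, 3
A1 = rng.normal(size=(n1, m)); A2 = rng.normal(size=(n2, m))
x_true = np.array([1.0, -2.0, 0.5])
s1_true, s2_true = 1.0, 4.0          # true variance factors
y1 = A1 @ x_true + rng.normal(0, np.sqrt(s1_true), n1)
y2 = A2 @ x_true + rng.normal(0, np.sqrt(s2_true), n2)

# Iterative variance component estimation:
#   sigma_i^2 <- v_i' v_i / r_i  with group redundancy r_i.
s1, s2 = 1.0, 1.0                    # initial variance factors
for _ in range(50):
    N1 = A1.T @ A1 / s1; N2 = A2.T @ A2 / s2
    N = N1 + N2
    x = np.linalg.solve(N, A1.T @ y1 / s1 + A2.T @ y2 / s2)
    v1 = A1 @ x - y1; v2 = A2 @ x - y2
    Ninv = np.linalg.inv(N)
    r1 = n1 - np.trace(Ninv @ N1)    # group redundancies
    r2 = n2 - np.trace(Ninv @ N2)
    s1 = v1 @ v1 / r1                # update variance factors
    s2 = v2 @ v2 / r2

print(s1, s2)   # should approach the true factors 1.0 and 4.0
```

The converged factors give the relative weighting 1/s_i of the two techniques in the combined adjustment.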
On the Weighted Total Least Squares Solutions
Abstract
Nowadays the term Total Least Squares (TLS) is frequently used as a standard name for the estimation method of the errors-in-variables (EIV) model. Although a significant number of contributions have been published on adjusting the EIV model, the computational advantages of the TLS problem are still largely unknown. In this contribution various approaches are applied for solving the weighted TLS problem, where the covariance matrix of the observation vector can be fully populated: 1. Auxiliary Lagrange multipliers are applied to give some implementations for solving the problem. 2. In contrast to the nonlinear Gauss–Helmert model (GHM) proposed by other authors, the model matrices and the inconsistency vector are analytically formulated within the GHM. 3. The gradient of the objective function is given when the weighted TLS problem is expressed as an unconstrained optimization problem. If the gradient equals zero, the necessary conditions for optimality are identical with the normal equations derived via Lagrange multipliers. Furthermore, a numerical example demonstrates that the proposed algorithms yield identical solutions.
X. Fang, H. Kutterer
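For the special case of equal, uncorrelated variances in all variables, the TLS solution has a closed form via the singular value decomposition; the weighted, fully populated case treated in the paper requires the iterative schemes discussed there. A sketch of the unweighted special case for a straight-line EIV fit:

```python
import numpy as np

def tls_line_fit(x, y):
    """Total least-squares (orthogonal) straight-line fit for the
    errors-in-variables model: both x and y are observed with error.
    Returns slope and intercept of the line minimizing the orthogonal
    distances (equal, uncorrelated variances assumed)."""
    xm, ym = x.mean(), y.mean()
    D = np.column_stack([x - xm, y - ym])
    # The normal of the best-fit line is the right singular vector
    # belonging to the smallest singular value of the centered data.
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    nx, ny = Vt[-1]
    slope = -nx / ny
    intercept = ym - slope * xm
    return slope, intercept

# Exactly collinear data must be recovered exactly
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
s, b = tls_line_fit(x, y)
print(s, b)   # 2.0, 1.0
```

Note that the slope parameterization breaks down for near-vertical lines (ny close to zero); the normal-vector form itself remains valid there.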
Integration of Observations and Models in a Consistent Least Squares Adjustment Model
Abstract
Models are often treated as deterministic in geodetic practice. Hence, inaccurate models directly affect the results of geodetic measurements. This paper proposes a method for the mutual validation of models and observed data. To account for the inaccuracy of models, data resulting from models are treated as stochastic parameters in a linear least squares adjustment. The required stochastic information is obtained from empirical auto- and cross-correlation functions. This approach is applied to the problem of the mutual validation of Earth orientation parameters, second-degree gravity field coefficients and geophysical excitation functions. The results and the limitations of this approach are discussed.
A. Heiker, H. Kutterer
Comparison of Different Combination Strategies Applied for the Computation of Terrestrial Reference Frames and Geodetic Parameter Series
Abstract
The combination of space geodetic techniques is important today and will become even more so in the future, both for the computation of Earth system parameters and for the realization of reference systems. Precision, accuracy, long-term stability and reliability of the products can be improved by the combination of different observation techniques, each of which provides an individual sensitivity with respect to several parameters. The estimation of geodetic parameters from observations is mostly done by least squares adjustment within a Gauß-Markov model. The combination of different techniques can be done on three different levels: on the level of observations, on the level of normal equations and on the level of parameters. The paper discusses the differences between the approaches from a theoretical point of view. The combination on observation level is the most rigorous approach since all observations are processed together ab initio, including all pre-processing steps, such as outlier detection. The combination on normal equation level is an approximation of the combination on observation level; the only difference is that pre-processing steps, including an editing of the observations, are done technique-wise. The combination on parameter level is more different: technique-individual solutions are computed and the solved parameters are combined within a second least squares adjustment process. Reliable pseudo-observations (constraints) have to be applied to generate the input solutions. In order to realize the geodetic datum of the combined solution independently of the datum of the input solutions, parameters of a similarity transformation have to be set up for each input solution within the combination. Due to imperfect network geometries, the transformation parameters can also absorb non-datum effects. The multiple parameter solution of the combination process leads to a stronger dependency of the combined solution on operator decisions and on numerical aspects.
Manuela Seitz
W-Ratio Test as an Integer Aperture Estimator: Pull-in Regions and Ambiguity Validation Performance
Abstract
Global Navigation Satellite Systems (GNSS) carrier phase integer ambiguity resolution is an indispensable step in generating highly accurate positioning results. As a quality control step, ambiguity validation, an essential procedure in ambiguity resolution, allows the user to make sure the resolved ambiguities are reliable and correct. Many ambiguity validation methods have been proposed in the past decades, such as the R-ratio, F-ratio and W-ratio tests, and recently a new theory named the integer aperture estimator. The integer aperture estimator provides a framework to compare the other validation methods at the same predefined fail-rate, even though its application in practice can only be based on simulations.
As shown in literature, the pull-in regions of different validation methods may have a variety of shapes which may dictate the closeness of such validation methods to the optimal integer least-squares method. In this contribution, the W-ratio is shown to be connected with the integer aperture theory with an exact formula. The integer least-squares pull-in region for W-ratio is presented and analysed. The results show that the W-ratio’s pull-in region is close to the integer least-squares pull-in region. We have performed numerical experiments which show that the W-ratio is a robust way of validating the resolved ambiguities.
T. Li, J. Wang
Performing 3D Similarity Transformation Using the Weighted Total Least-Squares Method
Abstract
3D similarity transformation models, e.g. the Bursa model, are usually applied in geodesy and photogrammetry. In general, they are only suitable for 3D transformations with small rotation angles. However, many transformations with large rotation angles need to be performed. This contribution describes a 3D similarity transformation model suitable for arbitrary rotation angles, where the nine elements of the rotation matrix replace the three rotation angles as unknown parameters. In the coordinate transformation model, the Errors-In-Variables (EIV) model is adjusted according to the theory of the Least Squares (LS) method within the nonlinear Gauss–Helmert (GH) model. At the end of the contribution, case studies are investigated to demonstrate the coordinate transformation method proposed in this paper. The results show that the linearized iterative GH model yields the correct solution and that this mixed model can be applied no matter whether the variance-covariance matrices are fully populated or diagonal.
J. Lu, Y. Chen, X. Fang, B. Zheng
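For the equal-weight case there is a closed-form Procrustes/Umeyama solution of the 3D similarity transformation, which likewise avoids small-angle assumptions by estimating the full rotation matrix. It is an alternative route, not the paper's iterative weighted Gauss-Helmert adjustment; the test coordinates below are invented:

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form 3D similarity (Helmert) transformation dst = s*R@src + t,
    valid for arbitrarily large rotation angles (equal weights assumed)."""
    ms, md = src.mean(0), dst.mean(0)
    S, D = src - ms, dst - md
    U, sig, Vt = np.linalg.svd(D.T @ S)
    # Guard against an improper rotation (reflection)
    C = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ C @ Vt
    s = np.trace(np.diag(sig) @ C) / np.sum(S ** 2)
    t = md - s * R @ ms
    return s, R, t

# Round trip with a large rotation (90 deg about z), scale 2 and a shift
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
dst = 2.0 * src @ Rz.T + np.array([10.0, -5.0, 3.0])
s, R, t = similarity_transform(src, dst)
print(s)            # 2.0
print(np.allclose(R, Rz), np.allclose(t, [10.0, -5.0, 3.0]))
```

With noisy coordinates on both sides, the weighted GH iteration of the paper refines such a closed-form result under a full variance-covariance model.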
Comparison of SpatialAnalyzer and Different Adjustment Programs
Abstract
Network adjustment is one of the basic tools for various surveying tasks. Along with the transformation of coordinates and the analysis and comparison of geometries, the adjustment of geodetic networks is an important part of the surveyor's work. The market offers a number of software solutions, both commercial and freeware.
Seeing the range of software solutions, the question arises, whether the programs give equivalent results. Earlier evaluations of net adjustment programs, partly including New River Kinematics’ SpatialAnalyzer (SA), revealed on the one hand almost identical adjustment results for the classic programs. On the other hand, the evaluations showed that SA, using a different mathematical model (bundle adjustment), yields clearly distinguishable deviations. Hence, in this paper the authors focused on SA with the classic programs as reference. The first part of the comparison deals with the results of evaluating a terrestrial network. As programs do not account for the earth’s curvature in a standardized way, the chosen network is of small size to minimize the influence of the curvature to an insignificant level.
The second part of the paper compares the results of the evaluation of basic geometries (plane, circle, cylinder, sphere) using SA and other software packages with the least squares solution obtained in a rigorous Gauss–Helmert model (GHM).
C. Herrmann, M. Lösler, H. Bähr

Recursive State-Space Filtering

Frontmatter
State-Space Filtering with Respect to Data Imprecision and Fuzziness
Abstract
State-space filtering is an important task in geodetic science and in practical applications. The main goal is an optimal combination of prior knowledge about a (non-linear) system and additional information based on observations of the system state. The approach widely used in geodesy is the extended Kalman filter (KF), which minimizes the quadratic error (variance) between the prior knowledge and the observations. The quality of a predicted or filtered system state can only be determined in a reliable way if all significant components of the uncertainty budget are considered and propagated appropriately. In many present-day applications, however, the measurement configuration cannot be optimized to reveal or even eliminate non-stochastic error components.
Therefore, new methods and algorithms are shown to handle these non-stochastic error components (imprecision and fuzziness) in state-space filtering. The combined modeling of random variability and imprecision/fuzziness leads to fuzzy-random variables. In this approach, the random components are modeled in a stochastic framework and imprecision and fuzziness are treated with intervals and fuzzy membership functions. One example in KF is presented which focuses on the determination of a kinematic deformation process in structural monitoring. The results are compared to the pure stochastic case. As the influence of imprecision in comparison to random uncertainty can either be significant or less important during the monitoring process it has to be considered in modeling and analysis.
I. Neumann, H. Kutterer
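The purely stochastic baseline that the fuzzy extension builds on is the standard Kalman filter. A sketch for a 1D constant-velocity monitoring process follows; all noise levels and the drift rate are made-up values, and the interval/fuzzy propagation of the paper is not shown:

```python
import numpy as np

rng = np.random.default_rng(7)

# Plain Kalman filter for a 1D constant-velocity deformation process.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # only position is observed
Q = 1e-4 * np.eye(2)                    # process noise covariance
Rm = np.array([[0.25]])                 # measurement noise (sigma = 0.5)

x = np.array([0.0, 0.0]); P = np.eye(2)
true = np.array([0.0, 0.1])             # slow drift: 0.1 units per epoch

for _ in range(200):
    true = F @ true
    z = H @ true + rng.normal(0, 0.5, 1)
    # Prediction step
    x = F @ x; P = F @ P @ F.T + Q
    # Update step
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x, true)  # the filtered state should track the true drift
```

In the paper's approach, imprecision would additionally be carried through the prediction and update steps as intervals or fuzzy membership functions alongside P.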
Unscented Kalman Filter Algorithm with Colored Noise and Its Application in Spacecraft Attitude Estimation
Abstract
The accuracy and reliability of the estimation and prediction of satellite attitude are affected not only by random noise and systematic errors, but also by colored noise correlated in time. Any theory or technique based on the hypothesis of Gaussian white noise that ignores the colored noise cannot guarantee the actual reliability of the parameter estimates. Building on the Unscented Kalman Filter (UKF), the paper treats colored noise as pseudo white noise, models it as an ARMA process, and calculates its variance by polynomial division, which expresses the colored-noise model as a series expansion. The stochastic model can be corrected with this method. The new UKF is then formulated using time series analysis theory. To verify the validity and rationality of this method, a simulated experiment is presented, which shows that the method can effectively restrain the influence of colored noise on satellite attitude estimation.
Lifen Sui, Zhongkai Mou, Yu Gan, Xianyuan Huang
Principles and Comparisons of Various Adaptively Robust Filters with Applications in Geodetic Positioning
Abstract
The quality of kinematic positioning and navigation depends on the quality of the kinematic model describing the vehicle movements and the reliability of the measurements. A series of adaptive Kalman filters have been studied in recent years. The main principles of four kinds of adaptive filters are summarized, i.e. fading Kalman filter, adaptive Sage windowing filter, robust filter and adaptively robust filter. Some of the developed equivalent weight functions and the adaptive factors including the fading factors are also introduced. Some applications are mentioned.
Yuanxi Yang, Tianhe Xu, Junyi Xu
Alternative Nonlinear Filtering Techniques in Geodesy for Dual State and Adaptive Parameter Estimation
Abstract
In many geodetic applications, state and parameter estimation are of major importance within the modeling of on-line processes. The fundamental building block of such processes is a filter for recursive estimation. The Kalman filter, a simple and efficient algorithm, is the best-known example: an optimal recursive Bayesian estimator for a somewhat restricted class of linear Gaussian problems. However, when the state and/or measurement functions are highly non-linear and the density functions of the process and/or measurement noise are non-Gaussian, classical filters do not yield satisfying estimates, and it is necessary to adopt alternative filtering techniques in order to obtain nearly optimal results. A number of such filtering techniques are reviewed in this contribution, but the main focus lies on sequential Monte Carlo (SMC) estimation. The SMC filter (well known as the particle filter) reaches this goal numerically and works properly for nonlinear, non-Gaussian state estimation. The main idea behind the SMC filter is to approximate the posterior PDF by a set of random particles, which can be generated from a known PDF. These particles are propagated through the nonlinear dynamic model. They are then weighted according to the likelihood of the observations. By means of the particles the true mean and the covariance of the state vector are estimated. However, the computational cost of particle filters has often been considered their main disadvantage, due to the large number of particles that must be drawn. Therefore a more efficient approach is presented, based on the combination of the SMC filter and the Kalman filter. The efficiency of the developed filters is demonstrated through application to direct georeferencing tasks for a multi-sensor system (MSS).
H. Alkhatib
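The propagate-weight-resample cycle described above can be sketched as a bootstrap (SIR) particle filter for a scalar nonlinear toy system. The system, noise levels and particle count are illustrative assumptions, not the paper's MSS application:

```python
import numpy as np

rng = np.random.default_rng(3)

# Nonlinear scalar system: x_k = 0.9 x_{k-1} + 0.1 sin(x_{k-1}) + w,
# measurement z_k = x_k + v.
def propagate(x):
    return 0.9 * x + 0.1 * np.sin(x)

n_part, sig_w, sig_v = 2000, 0.1, 0.3
particles = rng.normal(0.0, 1.0, n_part)
x_true, estimates, truths = 1.5, [], []

for _ in range(100):
    x_true = propagate(x_true) + rng.normal(0, sig_w)
    z = x_true + rng.normal(0, sig_v)
    # 1. propagate particles through the dynamic model
    particles = propagate(particles) + rng.normal(0, sig_w, n_part)
    # 2. weight by the measurement likelihood
    w = np.exp(-0.5 * ((z - particles) / sig_v) ** 2)
    w /= w.sum()
    # 3. posterior-mean estimate, then systematic resampling
    estimates.append(float(w @ particles))
    truths.append(x_true)
    idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(n_part)) / n_part)
    particles = particles[np.minimum(idx, n_part - 1)]

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(truths)) ** 2))
print(rmse)   # should be well below the raw measurement noise of 0.3
```

The Rao-Blackwellized combination mentioned in the abstract would handle the (conditionally) linear part of the state with a Kalman filter, reducing the number of particles needed.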

Sensor Networks and Multi Sensor Systems in Engineering Geodesy

Frontmatter
Parametric Modeling of Static and Dynamic Processes in Engineering Geodesy
Abstract
In this paper, the main focus is set on the utilization of parametric methods for the quantification of causative relationships in static and dynamic deformation processes. Parametric methods are still ‘exotic’ in engineering geodesy but state of the art e.g. in civil and mechanical engineering. Within this context, an essential part is the physical (parametric) modeling of the functional relationships based on partial or ordinary differential equations using the corresponding numerical solutions represented by finite element (FE) or finite difference (FD) models.
The identification of a physical model is realised by combination with monitoring data. One important part of the identification includes establishing the deterministic model structure and estimating a priori unknown model parameters as well as initial and boundary conditions by filtering (e.g. adaptive Kalman filtering). Major challenges are establishing the parametric model structure, quantifying disturbances and ensuring the identifiability of the model parameters, which are possibly non-stationary. These challenges are discussed with the help of a practical example from engineering geology.
A. Eichhorn
Land Subsidence in Mahyar Plain, Central Iran, Investigated Using Envisat SAR Data
Abstract
In recent decades land subsidence and its associated fissures have been observed in many plain aquifers of Iran. Knowledge of the deformation field in groundwater basins is of basic interest for understanding the cause and mechanism of the deformation phenomenon, and for mitigating the hazard related to it. In this paper the result of Envisat InSAR time-series analysis for monitoring land subsidence in the Mahyar Plain, Central Iran, is presented. Long-term extraction of groundwater, which started in 1970 with the development of agriculture in this area, has caused substantial subsidence and the formation of many earth fissures in Mahyar. Our analysis indicates a significant subsidence bowl in the south of the Mahyar Plain with an elliptical pattern directed northwest–southeast along the axis of the plain. The velocity map obtained by the time-series analysis of InSAR data shows a maximum subsidence velocity of ∼9 cm/year in the line of sight from the ground to the satellite during 2003–2006.
M. Davoodijam, M. Motagh, M. Momeni
Recent Impacts of Sensor Network Technology on Engineering Geodesy
Abstract
Wireless Sensor Networks (WSN), an infrastructure comprised of sensing, computing and communication elements, are designed for the decentralized recording of environmental information. If the information of concern is georeferenced, such a network forms a Geo Sensor Network (GSN).
These new techniques are currently having an extensive impact on all geosciences; it depends on the application which demands are relevant with respect to data quality, size and cost of the sensor nodes, number of nodes in the network, requirements of the communication component (coverage, data throughput) and power management. In engineering geodesy, the new possibilities and ideas emerging from GSN especially concern geo(detic) monitoring of objects such as landslides or engineering construction sites. The theory and possibilities of GSN technology as well as some selected aspects of geo monitoring in particular are discussed. As the early detection of even small variations is essential for Early Warning Systems (EWS) and risk management, data quality and reliability are of utmost importance. Thus, the customary utilization of low-cost equipment in such a GSN generally calls for calibration procedures and more sophisticated evaluation concepts in order to provide meaningful results.
O. Heunecke
Design of Artificial Neural Networks for Change-Point Detection
Abstract
An important assumption in the global approach to system identification is the homogeneity of the observed time series from a statistical point of view. A violation of this assumption leads to biased estimated parameters and a low quality of the model.
This paper addresses the task of change-point detection by means of Artificial Neural Networks (ANN). The focus lies on the appropriate design of ANN by specifying the inputs, outputs and the necessary number of hidden nodes for an error-free classification of the data.
H. Neuner
Spatial and Temporal Kinematics of the Inylchek Glacier in Kyrgyzstan Derived from Landsat and ASTER Imagery
Abstract
Spatio-temporal variations of glacier flow are a key indicator of the impact of global warming, as glaciers react sensitively to changes in climate. Satellite remote sensing using optical imagery is an efficient tool for studying ice-velocity fields on mountain glaciers. This study evaluates the potential of Landsat and ASTER imagery to investigate the surface velocity field of the Inylchek Glacier in Kyrgyzstan. We present a detailed map of the kinematics of the Inylchek Glacier obtained by cross-correlation analysis of Landsat images acquired between 2000 and 2010, and of a pair of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images covering the period 2007–2008. Our result indicates a high-velocity region in the elevated part of the glacier moving at rates of up to about 0.5 m/day. Time series analysis reveals some annual variations in the mean surface velocity of the Inylchek during 2000–2010.
M. Nobakht, M. Motagh, H. U. Wetzel, M. A. Sharifi
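The basic building block of such velocity mapping, estimating the pixel offset between two acquisitions from the peak of their cross correlation, can be sketched on synthetic data. Real processing chains add orthorectification, patch-wise matching and subpixel peak fitting, none of which is shown here:

```python
import numpy as np

rng = np.random.default_rng(5)

def cc_displacement(a, b):
    """Integer pixel offset (dy, dx) such that a = b shifted by (dy, dx),
    estimated from the peak of the FFT-based circular cross correlation."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    cc = np.fft.ifft2(A * np.conj(B)).real
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    # unwrap cyclic offsets into signed shifts
    n, m = a.shape
    if dy > n // 2: dy -= n
    if dx > m // 2: dx -= m
    return dy, dx

# Synthetic surface texture, displaced by (3, -2) pixels
img = rng.normal(size=(64, 64))
shifted = np.roll(img, shift=(3, -2), axis=(0, 1))
print(cc_displacement(shifted, img))   # (3, -2)
```

Dividing the offset by the time separation of the two acquisitions and multiplying by the pixel size turns such offsets into the surface velocities mapped in the paper.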
Response Automation in Geodetic Sensor Networks by Means of Bayesian Networks
Abstract
Today’s geodetic sensors allow an almost fully automated data collection. This happens in a previously fixed chain with constant parameters. A reaction to events during the measurement process, for example by adjusting the measurement resolution or by specifically controlling an actuator, is usually not intended. This limitation can be overcome by adopting new communication techniques with networked sensors and a proper assessment of occurring events. As the basis of such an assessment, probabilistic state variables of the processes are introduced. As the analysis method, Bayesian networks are used in our study; they are powerful tools for making decisions based on uncertain information. The evidence on the sensor nodes is derived by Kalman filtering and a subsequent compatibility test. The advantages of this method are shown by means of a simulation.
S. Horst, H. Alkhatib, H. Kutterer
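Decision-making on uncertain evidence can be illustrated by inference in a minimal discrete Bayesian network, evaluated by enumeration. The network structure (deformation and sensor fault both influencing an alarm) and all probabilities are invented for illustration and are not the paper's monitoring model:

```python
import itertools

# Structure: Deformation -> Alarm <- SensorFault (toy example)
p_deform = 0.2
p_fault = 0.1
p_alarm = {  # P(alarm=True | deform, fault)
    (True, True): 0.5, (True, False): 0.95,
    (False, True): 0.6, (False, False): 0.01,
}

def joint(d, f, a):
    """Joint probability P(d, f, a) factorized along the network structure."""
    p = (p_deform if d else 1 - p_deform) * (p_fault if f else 1 - p_fault)
    pa = p_alarm[(d, f)]
    return p * (pa if a else 1 - pa)

def posterior_deform_given_alarm():
    """P(deformation | alarm=True) by full enumeration."""
    num = sum(joint(True, f, True) for f in (True, False))
    den = sum(joint(d, f, True)
              for d, f in itertools.product((True, False), repeat=2))
    return num / den

print(posterior_deform_given_alarm())
```

In a real geodetic sensor network the "alarm" evidence would come from the Kalman-filter compatibility test mentioned in the abstract, and the posterior would trigger the automated response.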
Efficiency Optimization of Surveying Processes
Abstract
In order to optimize the efficiency of surveying processes, typical measuring processes can be modeled using Petri nets. Petri nets are a mathematical and graphical modeling language for the description of concurrent and distributed systems. The modeling allows a simulation and an efficiency optimization of the processes. Simulations of surveying processes can be performed with different input values such as the number of staff or the order of activities. The main goals of the optimization are the reduction of cost and the decrease of the required time. Since the exact duration of the individual steps of a measurement task cannot be defined in advance, timed transitions in stochastic Petri nets are selected to introduce the durations of the activities. The presented method is applied to the optimization of a polar network measurement.
I. von Gösseln, H. Kutterer
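A stochastic Petri net with timed transitions can be sketched in a few lines: places hold tokens, a transition fires when all its input places are marked, and each firing consumes a random duration. The setup-measure-move station cycle and the mean durations below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(9)

# Places hold token counts; the single token represents the instrument.
places = {"at_station": 1, "ready": 0, "measured": 0}
transitions = [
    # (name, input places, output places, mean duration in minutes)
    ("setup",   ["at_station"], ["ready"],      5.0),
    ("measure", ["ready"],      ["measured"],   10.0),
    ("move",    ["measured"],   ["at_station"], 8.0),
]

def enabled(t):
    return all(places[p] >= 1 for p in t[1])

total_time, firings = 0.0, 0
while firings < 30:                       # simulate 10 full station cycles
    t = next(tr for tr in transitions if enabled(tr))
    for p in t[1]: places[p] -= 1         # consume input tokens
    for p in t[2]: places[p] += 1         # produce output tokens
    total_time += rng.exponential(t[3])   # timed (stochastic) transition
    firings += 1

print(places, total_time)   # token back at the station; ~230 min on average
```

Repeating such simulations with different staffing or activity orders gives the distributions of total time and cost that the optimization compares.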
Modeling and Propagation of Quality Parameters in Engineering Geodesy Processes in Civil Engineering
Abstract
Quality assurance in civil engineering is a complex and multifaceted field. Especially for successful automation it plays an important role. One aspect of quality assurance relates to the geometry of a building. In order to determine and control geometric elements, measurement and evaluation processes of engineering geodesy have to be integrated into the construction processes. The task of engineering geodesy is thus to create the basis for bringing the planned building geometry into quality-assured reality.
One way to describe quality is to define a quality model, which describes quality on the basis of characteristics that are substantiated by parameters. In general, the characteristics and the parameters are derived from the requirements.
For engineering geodesy processes in civil engineering, a process- and product-oriented quality model consisting of the characteristics “accuracy”, “correctness”, “completeness”, “reliability” and “timeliness” was built. These five characteristics are substantiated by altogether ten parameters. In addition to well-known parameters like the “standard deviation” in geodesy and the “tolerance” in civil engineering, other parameters like the “number of missing elements” and the “condition density” allow a complete and detailed description of the quality of the geometry of a building and the related processes. The parameters can be differentiated into process- and product-related parameters. Finally, the quality parameters can be analyzed to obtain a significant statement about the quality actually achieved within the process.
Jürgen Schweitzer, Volker Schwieger

Multi-Mission Approaches With View to Physical Processes in the Earth System

Frontmatter
Completion of Band-Limited Data Sets on the Sphere
Abstract
In this study we propose the complementation of satellite-only gravity field models by additional a priori information to obtain a complete model. While the accepted gravity field models are restricted to a sub-domain of the frequency space, the complete models form a complete basis in the entire space, which can be represented in the frequency domain (spherical harmonics) as well as in the space domain (data grids). The additional information is obtained by the smoothness of the potential field. Using this a priori knowledge, a stochastic process on the sphere is established as a background model. The measurements of satellite-only models are assimilated to this background model by a subdivision into the commission, transition and omission sub-domain. Complete models can be used for a rigorous fusion of complementary data sets in a multi-mission approach and guarantee also, as stand-alone gravity-field models, full-rank variance/covariance matrices for all vector-valued, linearly independent functionals.
W.-D. Schuh, S. Müller, J. M. Brockmann
Backmatter
Metadata
Title
The 1st International Workshop on the Quality of Geodetic Observation and Monitoring Systems (QuGOMS'11)
Editors
Hansjörg Kutterer
Florian Seitz
Hamza Alkhatib
Michael Schmidt
Copyright Year
2015
Electronic ISBN
978-3-319-10828-5
Print ISBN
978-3-319-10827-8
DOI
https://doi.org/10.1007/978-3-319-10828-5
