2006 | Book

Progress in Spatial Data Handling

12th International Symposium on Spatial Data Handling

Editors: Dr. Andreas Riedl, Prof. Wolfgang Kainz, Prof. Gregory A. Elmes

Publisher: Springer Berlin Heidelberg

About this book

Since the first symposium in 1984, the International Symposium on Spatial Data Handling (SDH) has become a major resource for recent advances in GIS research and is regarded as a premier international research forum for GIS. All papers are fully reviewed by an international program committee composed of experts in the field.

Table of Contents

Frontmatter

Plenary of Submitted Papers

The Devil is still in the Data: Persistent Spatial Data Handling Challenges in Grassroots GIS
Sarah Elwood
Physical vs. Web Space — Similarities and Differences

Virtual worlds, and especially the Internet, are becoming increasingly important for advertising, planning, and simulation. Concepts that have proved useful in the real world have been transferred to the Internet. A frequently seen example is the concept of navigation processes. In the real world such processes are based on the properties of space. A transfer to the virtual reality of the Internet is only useful if the properties of real space and virtual space are similar. In this paper we present different concepts of space and discuss their suitability for the Internet.

Elissavet Pontikakis, Gerhard Navratil

Spatial Cognition

Utilization of Qualitative Spatial Reasoning in Geographic Information Systems

Spatial reasoning is a fundamental part of human cognition, playing an important role in structuring our activities and relationships with the physical world. A substantial body of spatial data is now available. In order to make effective use of this large quantity of data, the focus of GIS tools must shift towards helping a user derive relevant, high-quality information from the available data. Standard GIS tools have lacked focus in this area, with querying capabilities being limited and requiring a user to have specialized knowledge in areas such as set theory or Structured Query Language (SQL). A fundamental issue in standard GIS is that, by relying entirely on numerical methods when working with spatial data, vagueness and imprecision cannot be handled. Alternatively, qualitative methods for working with spatial data have been developed to address some key limitations of standard numerical systems. TreeSap is a GIS application that applies qualitative reasoning, with a strong emphasis on providing the user with powerful and intuitive query support. TreeSap’s query interface is presented, along with visualization strategies that address the issue of conveying complex qualitative information to a user. The notion of a relative feature is introduced as an alternative approach to representing spatial information.

Carl P. L. Schultz, Timothy R. Clephane, Hans W. Guesgen, Robert Amor
Identification of the Initial Entity in Granular Route Directions

Current navigation services assume wayfinders who are new to an environment. In contrast, our focus is on route directions for wayfinders who are familiar with the environment, such as taxi drivers and couriers. We observe that people communicate route directions to these wayfinders in a hierarchical and granular manner, assuming some shared knowledge of the environment. These route directions do not focus on the route but rather on the destination. In this paper we solve the first problem of automatically generating route directions in a hierarchical and granular manner: finding the initial entity of such a communication. We propose a formal model to determine the initial entity, based on Grice’s conversational maxims and applied to a topological hierarchy of elements of the city. An implementation of the model is tested for districts in a political subdivision hierarchy. The tests show reasonable behavior for a local expert and demonstrate the efficiency of granular route directions.

Martin Tomko, Stephan Winter
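
As a rough editorial sketch of what "finding the initial entity" can look like (the containment-hierarchy encoding and the coarsest-distinguishing-district rule below are assumptions, not the authors' Grice-based model):

```python
# Illustrative sketch (not the authors' code): pick the initial entity for
# granular route directions as the coarsest district that contains the
# destination but not the current position -- one plausible formalization
# of Grice's maxim of quantity over a containment hierarchy.

def initial_entity(hierarchy, origin, destination):
    """hierarchy maps each district to its parent (None for the root)."""
    def ancestors(node):
        chain = []
        while node is not None:
            chain.append(node)
            node = hierarchy[node]
        return chain  # ordered from fine to coarse

    origin_anc = set(ancestors(origin))
    # Walk the destination's ancestors from coarse to fine; the first one
    # that does NOT contain the origin is the coarsest distinguishing entity.
    for district in reversed(ancestors(destination)):
        if district not in origin_anc:
            return district
    return destination  # origin and destination coincide at every level

# Toy political-subdivision hierarchy: suburb -> district -> city.
parents = {"city": None, "north": "city", "south": "city",
           "harbour": "north", "station": "south"}
print(initial_entity(parents, origin="harbour", destination="station"))
# -> 'south': a local expert needs no finer anchor to start with
```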

Data Models

Modeling and Engineering Algorithms for Mobile Data

In this paper, we present an object-oriented approach to modeling mobile data and algorithms operating on such data. Our model is general enough to capture any kind of continuous motion while at the same time allowing for encompassing algorithms optimized for specific types of motion. Such motion may be available in a specific form, e.g., described by polynomials or splines, or implicitly restricted using bounds for speed or acceleration given by the application context.

Henrik Blunck, Klaus H. Hinrichs, Joëlle Sondern, Jan Vahrenhold
Database Model and Algebra for Complex and Heterogeneous Spatial Entities

Current Geographic Information Systems (GISs) adopt spatial database models that do not allow easy interaction with users engaged in spatial analysis operations. In fact, users must be well aware of the representation of the spatial entities, and specifically of the way in which the spatial reference is structured, in order to query the database. The main reason for this inadequacy is that current spatial database models violate the independence principle of spatial data. The consequence is that potentially simple queries are difficult to specify and depend strongly on the actual data in the spatial database.

In this contribution we tackle the problem of defining a database model to manage in a unified way spatial entities (classes of spatial elements with common properties) with different levels of complexity. Complex spatial entities are defined by aggregation of primitive spatial entities; instances of spatial entities are called spatial grains.

The database model is provided with an algebra to perform spatial queries over complex spatial entities; the algebra is defined in such a way that it guarantees the independence principle and meets the closure property. By means of the operators provided by the algebra, it is possible to easily perform spatial queries working at the logical level only.

Gloria Bordogna, Marco Pagani, Giuseppe Psaila
QACHE: Query Caching in Location-Based Services

Many emerging applications of location-based services continuously monitor a set of moving objects and answer queries pertaining to their locations. Query processing in such services is critical to ensure high performance of the system. Observing that one predominant cost in query processing is the frequent access to the database, in this paper we describe how to reduce the number of round-trips between moving objects and the database server by caching query information on the application server tier. We propose a novel caching framework, named QACHE, which stores and organizes spatially relevant queries for selected moving objects. QACHE leverages the spatial indices and other algorithms in the database server for organizing and refreshing relevant cache entries within a configurable area of interest, referred to as the cache-footprint, around a moving object. QACHE contains appropriate refresh policies and prefetching algorithms for efficient cache-based evaluation of queries on moving objects. In experiments comparing QACHE to other proposed mechanisms, QACHE achieves a significant reduction (from 63% to 99%) in database round-trips, thereby improving the throughput of an LBS system.

Hui Ding, Aravind Yalamanchi, Ravi Kothuri, Siva Ravada, Peter Scheuermann
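
A minimal sketch of the cache-footprint idea (QACHE itself is not reproduced here; the class and the fetch callback are invented):

```python
# Sketch only: cache query answers per moving object, valid while the
# object stays inside a configurable footprint around its last anchor.

class FootprintCache:
    def __init__(self, fetch_from_db, radius=500.0):
        self.fetch = fetch_from_db      # one round-trip to the database tier
        self.radius = radius            # configurable cache-footprint size
        self.entries = {}               # object id -> (anchor, answers)
        self.roundtrips = 0

    def query(self, obj_id, x, y):
        entry = self.entries.get(obj_id)
        if entry is not None:
            (ax, ay), answers = entry
            # Still within the footprint: answer from the application tier.
            if (x - ax) ** 2 + (y - ay) ** 2 <= self.radius ** 2:
                return answers
        # Cache miss: one database round-trip refreshes the footprint.
        self.roundtrips += 1
        answers = self.fetch(obj_id, x, y, self.radius)
        self.entries[obj_id] = ((x, y), answers)
        return answers
```

As long as location updates stay inside the footprint, repeated queries cost no database round-trips, which is the effect the reported 63-99% reduction quantifies.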
A Voronoi-Based Map Algebra

Although the map algebra framework is very popular within the GIS community for modelling fields, the fact that it is solely based on raster structures has been severely criticised. Instead of representing fields with a regular tessellation, we propose in this paper using the Voronoi diagram (VD), and argue that it has many advantages over other tessellations. We also present a variant of map algebra where all the operations are performed directly on VDs. Our solution is valid in two and three dimensions, and permits us to circumvent the gridding and resampling processes that must be performed with map algebra.

Hugo Ledoux, Christopher Gold
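
A sketch of a map-algebra-style operation performed directly on a Voronoi-represented field, assuming the simplest interpolant (the value of the nearest site; the paper's operators use richer, natural-neighbour-style interpolation). scipy is assumed:

```python
import numpy as np
from scipy.spatial import cKDTree

def local_op(values, f):
    # LOCAL operation: applied site by site, no gridding or resampling.
    return f(values)

def overlay(sites_a, vals_a, sites_b, vals_b, f):
    # Evaluate field B at A's sites (nearest-site rule), then combine.
    _, idx = cKDTree(sites_b).query(sites_a)
    return f(vals_a, vals_b[idx])

rng = np.random.default_rng(0)
sites_a, vals_a = rng.random((80, 2)), rng.random(80)
sites_b, vals_b = rng.random((60, 2)), rng.random(60)
difference = overlay(sites_a, vals_a, sites_b, vals_b, lambda a, b: a - b)
```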
Modeling Geometric Rules in Object Based Models: An XML / GML Approach

Most object-based approaches to Geographical Information Systems (GIS) have concentrated on the representation of geometric properties of objects in terms of fixed geometry. In our road traffic marking application domain we have a requirement to represent the static locations of the road markings but also to enforce the associated regulations, which are typically geometric in nature. For example, a give-way line of a pedestrian crossing in the UK must be within 1100–3000 mm of the edge of the crossing pattern. In previous studies of the application of spatial rules (often called ‘business logic’) in GIS, emphasis has been placed on the representation of topological constraints and data integrity checks. There is very little GIS literature that describes models for geometric rules, although there are some examples in the Computer Aided Design (CAD) literature. This paper introduces some of the ideas from so-called variational CAD models to the GIS application domain, and extends these using a Geography Markup Language (GML) based representation. In our application we have an additional requirement: the geometric rules often change and vary from country to country, so they should be represented in a flexible manner. In this paper we describe an elegant solution to the representation of geometric rules, such as requiring lines to be offset from other objects. The method uses the feature-property model embraced in GML 3.1 and extends the possible relationships in feature collections to permit the application of parameterized geometric constraints to sub-features. We show the parametric rule model we have developed and discuss the advantage of using simple parametric expressions in the rule base. We discuss the possibilities and limitations of our approach and relate our data model to GML 3.1.

Trevor Reeves, Dan Cornford, Michal Konecny, Jeremy Ellis

Data Mining

Exploring Geographical Data with Spatio-Visual Data Mining

Efficiently exploring a large spatial dataset with the aim of forming a hypothesis is one of the main challenges for information science. This study presents a method for exploring spatial data with a combination of spatial and visual data mining. Spatial relationships are modeled during a data pre-processing step, consisting of density analysis and a vertical view approach, after which an exploration with visual data mining follows. The method has been tried on emergency response data about fire and rescue incidents in Helsinki.

Urška Demšar, Jukka M. Krisp, Olga Křemenová
Continuous Wavelet Transformations for Hyperspectral Feature Detection

A novel method for the analysis of spectra and the detection of absorption features in hyperspectral signatures is proposed, based on the ability of wavelet transformations to enhance absorption features. Field spectra of wheat grown on different levels of available nitrogen were collected and compared to the foliar nitrogen content. The spectra were assessed both as absolute reflectances and recalculated into derivative spectra, and as their respective wavelet-transformed signals. Wavelet-transformed signals, transformed using the Daubechies 5 mother wavelet at scaling level 32, performed consistently better than reflectance or derivative spectra when tested in a bootstrapped phased regression against nitrogen.

Jelle G. Ferwerda, Simon D. Jones
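
As a hedged sketch of the kind of transform involved (the paper's exact settings may differ; PyWavelets and the reading of "scaling level 32" as the dyadic scale 2^5 of a stationary wavelet transform are assumptions):

```python
# Sketch: enhance absorption features with a Daubechies-5 stationary
# wavelet transform, keeping the detail band at dyadic scale 2**5 = 32.
import numpy as np
import pywt

def db5_scale32(spectrum):
    level = 5                               # 2**5 = 32
    n = len(spectrum)
    pad = (-n) % (2 ** level)               # swt needs length % 2**level == 0
    padded = np.pad(spectrum, (0, pad), mode="edge")
    coeffs = pywt.swt(padded, "db5", level=level)
    cA5, cD5 = coeffs[0]                    # coeffs run from level 5 down to 1
    return cD5[:n]                          # detail band highlights absorptions

reflectance = np.cos(np.linspace(0, 6, 500)) + 0.02 * np.random.randn(500)
enhanced = db5_scale32(reflectance)
```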
Measuring Linear Complexity with Wavelets

This paper explores wavelets as a measure of linear complexity. An introduction to wavelets is given before applying them to spatial data and testing them as a complexity measure on a vector representation of the Australian coastline. Wavelets are shown to be successful at measuring linear complexity. The technique used breaks a single line into different classes of complexity, and has the advantages that it is objective, automated and fast.

Geoff J. Lawford
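
An illustrative recipe for a wavelet-based complexity score (not the paper's implementation; PyWavelets, the turning-angle encoding, and the detail-energy measure are editorial choices):

```python
# Sketch: score the complexity of a polyline by the energy of the detail
# coefficients of its turning-angle sequence; a coastline-like line with
# more fine wiggles accumulates more detail energy.
import numpy as np
import pywt

def linear_complexity(xy, wavelet="db4", level=4):
    v = np.diff(np.asarray(xy, dtype=float), axis=0)
    angles = np.unwrap(np.arctan2(v[:, 1], v[:, 0]))
    turning = np.diff(angles)                  # deviation from straightness
    coeffs = pywt.wavedec(turning, wavelet, level=level)
    details = coeffs[1:]                       # coeffs[0] is the approximation
    return sum(float(np.sum(d ** 2)) for d in details)

t = np.linspace(0, 4 * np.pi, 400)
smooth = np.c_[t, np.sin(t)]
wiggly = np.c_[t, np.sin(t) + 0.2 * np.sin(15 * t)]
print(linear_complexity(smooth), linear_complexity(wiggly))  # wiggly scores higher
```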
Expert Knowledge and Embedded Knowledge: Or Why Long Rambling Class Descriptions are Useful

In many natural resource inventories class descriptions have atrophied to little more than simple labels or ciphers; the data producer expects the data user to share a common understanding of the way the world works and how it should be characterized (that is, the producer implicitly assumes that their epistemology, ontology and semantics are universal). Activities like the UK e-science programme and the EU INSPIRE initiative mean that it is increasingly difficult for the producer to anticipate who the users of the data are going to be. It is increasingly less likely that producer and user share a common understanding, and the interaction between them necessary to clarify any inconsistencies has been reduced. There are still some cases where the data producer provides more than a class label, making it possible for a user unfamiliar with the semantics and ontology of the producer to process the text and assess the relationships between classes and between classifications. In this paper we apply computer characterization to the textual descriptions of two land cover maps, LCMGB (the land cover map of Great Britain produced in 1990) and LCM2000 (the land cover map 2000). Statistical analysis of the text is used to parameterize a look-up table and to evaluate the consistency of the two classification schemes. The results show that automatic processing of the text generates relations between classes similar to those produced by human experts. It also shows that the automatically generated relationships were as useful as the expert-derived relationships in identifying change.

R. A. Wadsworth, A. J. Comber, P. F. Fisher

Data Retrieval

Preference Based Retrieval of Information Elements

Spatial information systems like GIS assist a user in making a spatial decision by presenting information that supports the decision process. Planning a holiday via the Internet can be a daunting process. Decision-making is based on finding the relevant data sets in a short time among an overwhelming number of data sources. In many cases the user has to figure out on his own how the presented information fits the decision he has to make. To address this problem a tourist needs tools to retrieve spatial information according to his current preferences. The relevant data are determined as data elements describing facts corresponding to the tourist’s preferences. The present work suggests a way in which a user dealing with a tourist information system can indicate his preferences through the user interface. Based on the suggested alternatives the user can change his preferences and generate a new set of alternatives. A feedback loop provides a tool that allows the user to explore the available alternatives. The work was motivated by classical dialog-based booking processes between human operators, carried out on the telephone. We introduce the conceptual model of a user interface which considers the agent’s preferences, and describe the overall interaction process.

Claudia Achatschitz
Spatiotemporal Event Detection and Analysis over Multiple Granularities

Granularity in time and space has a fundamental role in our perception and understanding of various phenomena. Currently applied analysis methods are based on a single level of granularity that is user-driven, leaving the user with the difficult task of determining the level of spatiotemporal abstraction at which processing will take place. Without a priori knowledge about the nature of the phenomenon at hand this is often a difficult task that may have a substantial impact on the processing results. In light of this, this paper introduces a spatiotemporal data analysis and knowledge discovery framework, which is based on two primary components: the spatiotemporal helix and scale-space analysis. While the spatiotemporal helix offers the ability to model and summarize spatiotemporal data, the scale space analysis offers the ability to simultaneously process the data at multiple scales, thus allowing processing without a priori knowledge. In particular, this paper discusses how scale space representation and the derived deep structure can be used for the detection of events (and processes) in spatiotemporal data, and demonstrates the robustness of our framework in the presence of noise.

Arie Croitoru, Kristin Eickhorst, Anthony Stefanidis, Peggy Agouris
Reduced Data Model for Storing and Retrieving Geographic Data

The ‘industry-strength’ data models are complex to use and tend to obscure the fundamental issues. Going back to the original proposal of Chen for Entities and Relationships, I describe here a reduced data model with Objects and Relations. It is mathematically well founded in the category of relations and has been implemented to demonstrate that it is viable. An example of how this model is used to structure and load data is shown.

Andrew U. Frank
Filling the Gaps in Keyword-Based Query Expansion for Geodata Retrieval

Query expansion describes the automated process of supplementing a user’s search with additional terms or geographic locations to make it more appropriate for the user’s needs. This process relies on the system’s knowledge about the relation between geographic terms and places. Geodata repositories host spatial data, which can be queried over their metadata, such as keywords. One way to organize the system’s knowledge structure for keyword-based query expansion is to use a similarity network. In a complete similarity network, the total number of similarity values grows with the square of the number of included keywords. Thus, the task of determining all these values quickly becomes time consuming. One efficient method is to start with a sparse similarity network and automatically estimate the missing similarity values from other values with an algorithm. Hence, this paper introduces and evaluates four such algorithms.

Hartwig H. Hochmair
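
One plausible gap-filling algorithm of the kind evaluated (a sketch; the rule of estimating a missing similarity as the best max-product path through the sparse network is an assumption, not necessarily one of the paper's four):

```python
# Sketch: complete a sparse keyword similarity network by a Floyd-Warshall
# style max-product closure over intermediate terms.

def fill_similarities(keywords, sim):
    """sim: dict[(a, b)] -> value in [0, 1] for the known, symmetric pairs."""
    s = {(a, b): 0.0 for a in keywords for b in keywords}
    for a in keywords:
        s[(a, a)] = 1.0
    for (a, b), v in sim.items():
        s[(a, b)] = s[(b, a)] = v
    for k in keywords:
        for a in keywords:
            for b in keywords:
                via = s[(a, k)] * s[(k, b)]   # similarity routed through k
                if via > s[(a, b)]:
                    s[(a, b)] = via
    return s

terms = ["river", "stream", "waterway", "canal"]
known = {("river", "stream"): 0.9, ("stream", "waterway"): 0.8}
full = fill_similarities(terms, known)
print(full[("river", "waterway")])   # 0.9 * 0.8 = 0.72
```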

Data Quality

Using Metadata to Link Uncertainty and Data Quality Assessments
A. J. Comber, P. F. Fisher, F. Harvey, M. Gahegan, R. Wadsworth
An Evaluation Method for Determining Map-Quality

The quality of maps, geo-visualization, and multimedia presentation techniques for spatial communication is an important issue for map creation and distribution, and for the acceptance of these information systems (IS) by the public. The purpose of this paper is to present an evaluation method based on stochastic reasoning for supporting map designers. We investigate the applicability of Bayesian belief networks and present a prototypical implementation. We give an outlook on future research questions.

Markus Jobst, Florian A. Twaroch
Efficient Evaluation Techniques for Topological Predicates on Complex Regions

Topological predicates between spatial objects have long been a focus of intensive research in a number of diverse disciplines. In the context of spatial databases and geographical information systems, they support the construction of suitable query languages for spatial data retrieval and analysis. Whereas conceptual aspects of topological predicates have largely been emphasized, the development of efficient evaluation techniques for them has been rather neglected. Recently, the design of topological predicates for different combinations of complex spatial data types has led to a large increase in their numbers and accentuated the need for their efficient implementation. The goal of this paper is to develop efficient implementation techniques for them within the framework of the spatial algebra SPAL2D.

Reasey Praing, Markus Schneider
Implementation of a Prototype Toolbox for Communicating Spatial Data Quality and Uncertainty Using a Wildfire Risk Example

Current GIS are often described as rich in functionality but poor in knowledge content and transfer. This paper presents a prototype for communicating data quality in spatial databases using a hybrid design between data-driven and user-driven factors based upon traditional communication and cartographic concepts. The prototype aims to give data users a better understanding of the uncertainty that affects their information by utilizing a knowledge-based method where they can choose from multiple visualizations to represent the uncertainty in their data, as well as access information about why a particular visualization has been proposed. In doing so, decisions become more transparent to data users, which increases the capability of the prototype to act as a training aid. The example case study examines the data quality in a source dataset and illustrates how the concepts apply in an operational environment at different levels of communication.

K. J. Reinke, S. Jones, G. J. Hunter

Integration and Fusion

Changes in Topological Relations when Splitting and Merging Regions

This paper addresses changes in topological relations as they occur when splitting a region into two. It derives systematically what qualitative inferences can be made about binary topological relations when one region is cut into two pieces. The new insights about the possible topological relations obtained after splitting regions form a foundation for high-level spatio-temporal reasoning without explicit geometric information about each object’s shape, as well as for transactions in spatio-temporal databases that need to enforce consistency constraints.

Max J. Egenhofer, Dominik Wilmsen
Integrating 2D Topographic Vector Data with a Digital Terrain Model — a Consistent and Semantically Correct Approach

The most commonly used topographic vector data are currently two-dimensional. The topography is modeled by different objects; in contrast, a digital terrain model (DTM) is a continuous representation of the Earth’s surface. The integration of the two data sets leads to an augmentation of the dimension of the topographic objects, which is useful in many applications. However, the integration process may lead to inconsistent and semantically incorrect results.

In this paper we describe recent work on the consistent and semantically correct integration of 2D GIS vector data and a DTM. In contrast to our prior work in this area, the presented algorithm takes into account geometric inaccuracies of both planimetric and height data, and thus achieves more realistic results. Height information, implicitly contained in our understanding of certain topographic objects, is explicitly formulated and introduced into an optimization procedure together with the height data from the DTM. Results using real data demonstrate the applicability of the approach.

Andreas Koch, Christian Heipke
A Hierarchical Approach to the Line-Line Topological Relations

Topological relations have been recognized to be very useful for spatial query, analysis and reasoning. This paper concentrates on the topological relations between two lines in ℝ². The line of thought employed in this study is that the topological relation between two lines can be described by a combination of a finite number of basic (or elementary) relations. Based on this idea, a hierarchical approach is proposed for the description and determination of basic relations between two lines. Seventeen (17) basic relations are identified, and eleven (11) of them form the basis for the combinational description of a complex relation, which can be determined by a compound relation model. A practical example of bus routes is provided to illustrate the proposed approach, which is an application of line-line topological relations in traffic planning.

Zhilin Li, Min Deng
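
The raw topological evidence underlying such line-line relations can be inspected with the standard DE-9IM, for example via Shapely (an illustration of the ingredients, not the paper's hierarchy):

```python
from shapely.geometry import LineString

a = LineString([(0, 0), (2, 2)])
b = LineString([(0, 2), (2, 0)])      # crosses a at (1, 1)
print(a.relate(b))                    # DE-9IM matrix, here '0F1FF0102'
print(a.crosses(b), a.touches(b))     # True False
```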
Coastline Matching Process Based on the Discrete Fréchet Distance

Spatial distances are the main tools used for data matching and quality control. This paper describes new measures adapted to sinuous lines to compute the maximal and average discrepancy: the discrete Fréchet distance and the discrete average Fréchet distance. Afterwards, a global process is defined to automatically handle two sets of lines. The usefulness of these distances is tested with a comparison of coastlines. The validation is done with the computation of three sets of coastlines, obtained from SPOT 5 orthophotographs and GPS points. Finally, an extension to digital elevation models is presented.

Ariane Mascret, Thomas Devogele, Iwan Le Berre, Alain Hénaff
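
The discrete Fréchet distance itself is standard (Eiter and Mannila's dynamic program); a compact reference implementation of the kind such a matching process builds on:

```python
import numpy as np

def discrete_frechet(P, Q):
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise dists
    ca = np.full((n, m), -1.0)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):                     # first column / row of the DP
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[n - 1, m - 1]

print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0
```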

Semantics and Ontologies

Characterizing Land Cover Structure with Semantic Variograms

This paper introduces the semantic variogram, a measure of spatial variation based upon semantic similarity metrics calculated for nominal land cover class definitions. Traditional approaches for measuring spatial autocorrelation for nominal geographical data compare classes between pairs of observations to determine a simple binary measure of similarity (identical/different). These binary values are summarized over many sample pairs separated by various distances to characterize some spatial metric of correlation or variation. The use of binary similarity measures ignores potentially substantial ranges in similarity between different classes. Through the development of category representations capable of producing quantifiable measures of pairwise class similarity, descriptive spatial statistics that operate upon ratio data may be employed. These measures, including the semantic variogram proposed in this work, may characterize the spatial variability of categorical maps more sensitively than traditional measures. We apply the semantic variogram to National Land Cover Data (NLCD) for three different study sites, and compare results to those from a multiple-class indicator semivariogram. We demonstrate that substantial differences exist in observed short-range variability for the two metrics in all sites. The semantic variograms detect much lower short-range variability due to the tendency of semantically similar classes to be closer together.

Ola Ahlqvist, Ashton Shortridge
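
A sketch of the core computation, with made-up classes and semantic distances: an experimental semivariogram whose pair dissimilarity is a graded semantic distance rather than the binary identical/different indicator:

```python
import numpy as np

def semantic_variogram(coords, classes, sem_dist, lags):
    coords = np.asarray(coords, float)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        i, j = np.nonzero((h > lo) & (h <= hi))
        i, j = i[i < j], j[i < j]             # count each pair once
        d = np.array([sem_dist[classes[a]][classes[b]] for a, b in zip(i, j)])
        gamma.append(0.5 * np.mean(d ** 2) if len(d) else np.nan)
    return np.array(gamma)

# Toy semantic distances: forest is "closer" to shrub than to urban.
sem = {"forest": {"forest": 0.0, "shrub": 0.3, "urban": 1.0},
       "shrub":  {"forest": 0.3, "shrub": 0.0, "urban": 0.8},
       "urban":  {"forest": 1.0, "shrub": 0.8, "urban": 0.0}}
pts = np.random.default_rng(1).random((100, 2))
cls = np.random.default_rng(2).choice(["forest", "shrub", "urban"], 100)
print(semantic_variogram(pts, cls, sem, np.linspace(0, 0.5, 6)))
```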
Semantic Similarity Measures within the Semantic Framework of the Universal Ontology of Geographical Space

The objective of this paper is to discuss our methodology for comparing, searching and integrating geographic concepts. Searching for spatially oriented datasets can be illustrated by the complexity of the communication between producer and user. The common vocabulary consists of a set of concepts describing the geographic space, called the universal ontology of geographical space (UOGS). We have defined the semantic parameters for measuring semantic similarities within the UOGS semantic framework and described our applicative approach to the similarity analysis of spatial databases. In order to test our results we have implemented the entire vocabulary as a set of Prolog facts. Following this we also implemented functionality such as the querying mechanism and the simple semantic similarity model, again as a set of Prolog clauses. In addition, we applied Prolog rules for the purpose of extracting semantic information describing geographic concepts from natural language texts.

Marjan Čeh, Tomaž Podobnikar, Domen Smole
A Quantitative Similarity Measure for Maps

In on-demand map generation, a base map is modified to meet user requirements on scale, resolution, and other parameters. Since there are many ways of satisfying the requirements, we need a method of measuring the quality of the alternative maps. In this paper, we introduce a uniform framework for measuring the quality of generalized maps. The proposed Map Quality measure takes into account changes in all local objects (Shape Similarity), their neighborhoods (Location Similarity) and, lastly, across the entire map (Semantic Content Similarity). These three quality aspects measure the major generalization operators of simplification, relocation and selection, exaggeration and aggregation, and collapse and typification. The three different aspects are combined using user-specified weights. Thus, the proposed framework supports the automatic choice of the best alternative map according to the preferences of the user or application.

Richard Frank, Martin Ester
A Semantic-based Approach to the Representation of Network-Constrained Trajectory Data

Recent technological advances in urban traffic systems have made large trajectory data sets available. However, the potential of these large urban databases is often neglected. This is due to a twofold problem. First, the volumes generated represent gigabytes of information per day, making data processing and analysis computationally costly. Second, there is a lack of analysis of the semantics revealed by urban trajectories, at both the representation and data manipulation levels. The research presented in this paper addresses these two issues. We introduce an optimized representation approach that can efficiently reduce trajectory data volumes and facilitate data access and query languages. Our approach is a semantic-based representation model that characterizes significant trajectory points within a network. Key points are selected according to a combination of network, velocity, and direction criteria. This semantic approach facilitates trajectory data queries and the implicit modeling of trajectory processes. The proposed model is illustrated by a prototype implemented in a district of Hong Kong.

Xiang Li, Christophe Claramunt, Cyril Ray, Hui Lin
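
A minimal sketch of key-point selection by direction and speed criteria (the thresholds, the point layout, and the omission of the network criterion are simplifications):

```python
import math

def key_points(track, max_turn_deg=30.0, max_speed_ratio=1.5):
    """track: list of (x, y, t); keep points where heading or speed changes."""
    kept = [track[0]]
    for prev, cur, nxt in zip(track, track[1:], track[2:]):
        h_in = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        h_out = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        turn = abs(math.degrees((h_out - h_in + math.pi) % (2 * math.pi) - math.pi))
        v_in = math.hypot(cur[0] - prev[0], cur[1] - prev[1]) / (cur[2] - prev[2])
        v_out = math.hypot(nxt[0] - cur[0], nxt[1] - cur[1]) / (nxt[2] - cur[2])
        ratio = max(v_in, v_out) / max(min(v_in, v_out), 1e-9)
        if turn > max_turn_deg or ratio > max_speed_ratio:
            kept.append(cur)               # semantically significant point
    kept.append(track[-1])
    return kept

track = [(0, 0, 0), (1, 0, 1), (2, 0, 2), (2, 1, 3), (2, 2, 4), (2, 5, 5)]
print(key_points(track))   # keeps the turn at (2,0,2) and the speed-up at (2,2,4)
```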
Towards an Ontologically-driven GIS to Characterize Spatial Data Uncertainty

Current data models for representing geospatial data are decades old and well developed, but they suffer from two major flaws. First, they employ a one-size-fits-all approach, in which no connection is made between the characteristics of data and the specific applications that employ the data. Second, they fail to convey adequate information about the gap between the data and the phenomena they represent. All spatial data are approximations of reality, and the errors they contain may have serious implications for geoprocessing activities that employ them. As a consequence of this lack of information, users of spatial data generally have a limited understanding of how errors in data affect their particular applications. This paper reviews extensive work on spatial data uncertainty propagation. It then proposes the development of a data-producer-focused, ontologically-driven GIS to implement the Monte Carlo based uncertainty propagation paradigm. We contend that this model offers tremendous advantages to the developers and users of spatial information by encapsulating with the data appropriate uncertainty models for specific users and applications.

Ashton Shortridge, Joseph Messina, Sarah Hession, Yasuyo Makido

2D-Visualization

Structuring Kinetic Maps

We attempt to show that a tessellated spatial model has definite advantages for cartographic applications and facilitates a kinetic structure for map updating and simulation. We develop the moving-point Delaunay/Voronoi model, which manages collision detection, snapping, and intersection at the data input stage by maintaining a topology based on a complete tessellation. We show that the Constrained Delaunay triangulation allows the simulation of edges, and not just points, with only minor changes to the moving-point model. We then develop an improved kinetic Line-segment Voronoi diagram, which is a better-specified model of the spatial relationships of compound map objects than the Constrained Triangulation, although until now it has been more difficult to implement. We believe that this method is now viable for 2D cartography, and in many cases it should replace the Constrained approach. Whichever method is used, the concept of using the moving point as a pen, with the ability to delete and add line segments as desired in the construction and updating process, appears to be a valuable development.

Maciej Dakowicz, Chris Gold
Advanced Operations for Maps in Spatial Databases

Maps are a fundamental spatial concept capable of representing and storing large amounts of information in a visual form. Map operations have been studied and rigorously defined in the literature; however, we identify a new class of map join operations which cannot be completed using existing operations. We then consider existing operations involving connectivity concepts, and extend this class of operations by defining new, more complex operations that take advantage of the connectivity properties of maps.

Mark McKenney, Markus Schneider
A Tangible Augmented Reality Interface to Tiled Street Maps and its Usability Testing

The Tangible Augmented Street Map (TASM) is a novel interface to geographic objects, such as tiled maps of labeled city streets. TASM uses tangible Augmented Reality, the superimposition of digital graphics on top of real-world objects, to enhance the user’s experience. The tangible object (a cube) plays the role of an input device: the cube can be rotated to display maps that are adjacent to the current tile in geographic space. The cube is capable of theoretically infinite movement, embedded in a coordinate system with topology enabled. TASM has been tested for usability using heuristic evaluation, in which selected experts used the cube to identify departures from recognized usability principles. While general and vague, the heuristics helped prioritize the immediate geographic and system-based tasks needed to improve the usability of TASM, also pointing the way towards a group of geographically oriented heuristics. This addresses a key geovisualization challenge — the creation of domain-specific and technology-related theory.

Antoni Moore
A Linear Programming Approach to Rectangular Cartograms

In [26], the first two authors of this paper presented the first algorithms to construct rectangular cartograms. The first step is to determine a representation of all regions by rectangles and the second — most important — step is to get the areas of all rectangles correct. This paper presents a new approach to the second step. It is based on alternatingly solving linear programs on the x-coordinates and the y-coordinates of the sides of the rectangles. Our algorithm gives cartograms with considerably lower error and better visual qualities than previous approaches. It also handles countries that cannot be present in any purely rectangular cartogram, and it introduces a new way of controlling incorrect adjacencies of countries. Our implementation computes aesthetically pleasing rectangular and nearly rectangular cartograms, for instance depicting the 152 countries of the World that have a population over one million.

Bettina Speckmann, Marc van Kreveld, Sander Florisson
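
A toy one-axis step of the alternating scheme, reduced to rectangles lying side by side in a row with heights momentarily fixed (the paper's LPs additionally encode adjacency and aesthetic constraints; scipy is assumed):

```python
# Sketch: with heights fixed, choose widths by LP so that each area
# h_i * w_i deviates minimally (in L1) from its target area.
import numpy as np
from scipy.optimize import linprog

def solve_widths(heights, target_areas, w_min=0.1):
    n = len(heights)
    # Variables: w_0..w_{n-1}, e_0..e_{n-1}; minimize the sum of errors e_i.
    c = np.r_[np.zeros(n), np.ones(n)]
    A_ub, b_ub = [], []
    for i, (h, a) in enumerate(zip(heights, target_areas)):
        row = np.zeros(2 * n); row[i] = h; row[n + i] = -1.0
        A_ub.append(row); b_ub.append(a)        #  h*w - e <= A
        row = np.zeros(2 * n); row[i] = -h; row[n + i] = -1.0
        A_ub.append(row); b_ub.append(-a)       # -h*w - e <= -A
    bounds = [(w_min, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n]

print(solve_widths(heights=[2.0, 2.0, 1.0], target_areas=[4.0, 2.0, 3.0]))
# -> widths [2.0, 1.0, 3.0]: all areas exact in this unconstrained toy case
```

Alternating such x-steps with the analogous y-steps is what drives the reported error reduction.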

3D-Visualization

Automated Construction of Urban Terrain Models

Elements of urban terrain models such as streets, pavements, lawns, walls, and fences are fundamental for effective recognition and convincing appearance of virtual 3D cities and virtual 3D landscapes. These elements complement important other components such as 3D building models and 3D vegetation models. This paper introduces an object-oriented, rule-based and heuristic-based approach for modeling detailed virtual 3D terrains in an automated way. Terrain models are derived from 2D vector-based plans based on generation rules, which can be controlled by attributes assigned to 2D vector elements. The individual parts of the resulting urban terrain models are represented as “first-class” objects. These objects remain linked to the underlying 2D vector-based plan elements and, therefore, preserve data semantics and associated thematic information.

With urban terrain models, we can achieve high-quality photorealistic 3D geovirtual environments and support interactive creation and manipulation. The automated construction represents a systematic solution for the bi-directional linkage of 2D plans and 3D geovirtual environments and overcomes cost-intensive CAD-based construction processes. The approach both simplifies the geometric construction of detailed urban terrain models and provides a seamless integration into traditional GIS-based workflows.

The resulting 3D geovirtual environments are well suited for a variety of applications including urban and open-space planning, information systems for tourism and marketing, and navigation systems. As a case study, we demonstrate our approach applied to an urban development area of downtown Potsdam, Germany.

Henrik Buchholz, Jürgen Döllner, Lutz Ross, Birgit Kleinschmit
A Flexible, Extensible Object Oriented Real-time Near Photorealistic Visualization System: The System Framework Design

In this paper we describe a novel, extensible visualization system currently under development at Aston University. We introduce to 3D landscape visualization software modern programming methods, such as the use of data-driven programming, design patterns, and the careful definition of interfaces to allow easy extension using plug-ins. We combine this with modern developments in computer graphics, such as vertex and fragment shaders, to create an extremely flexible, extensible real-time near-photorealistic visualization system. In this paper we show the design of the system and the main sub-components. We stress the role of modern programming practices and illustrate the benefits these bring to 3D visualization.

Anthony Jones, Dan Cornford
A Tetrahedronized Irregular Network Based DBMS Approach for 3D Topographic Data Modeling

Topographic features are becoming more complex due to increasing multiple land use. Increasing awareness of the importance of sustainable (urban) development leads to the need for 3D planning and analysis. As a result, topographic products need to be extended into the third dimension. In this paper, we develop a new topological 3D data model that relies on Poincaré algebra. The internal structure is based on a network of simplexes, which are well defined and very suitable for keeping the 3D data set consistent. More complex 3D features are based on this simple structure and computed when needed. We describe an implementation of this 3D model on a commercial DBMS. We also show how a 2D visualizer can be extended to visualize these 3D objects.

Friso Penninga, Peter van Oosterom, Baris M. Kazar
3D Analysis with High-Level Primitives: A Crystallographic Approach

This paper introduces a new approach to the 3D handling of geographical information in the context of risk analysis. We propose to combine several geometrical and topological models for 3D data to take advantage of their respective capabilities. In addition, we adapt from crystallography a high-level description of geographical features that enables the computation of several metric and cardinal relations, such as the “lay on” relation, which plays a key role for geographical information.

Benoit Poupeau, Olivier Bonin

Generalization

The Hierarchical Watershed Partitioning and Data Simplification of River Network

For the generalization of river networks, the importance of river channels in a catchment has to be decided by considering three aspects at different levels: the spatial distribution pattern at the macro level, the distribution density at the meso level, and the individual geometric properties at the micro level. To extract such structured information, this study builds a model of hierarchical watershed partitioning based on the Delaunay triangulation. The watershed area is determined by a spatial competition process applying a partitioning similar to the Voronoi diagram to obtain the basin polygon of each river channel. The hierarchical relation is constructed to represent the inclusion between watersheds at different levels. This model supports the computation of parameters such as distribution density, distance between neighboring channels, and hierarchical watershed area. The study presents a method to select the river network by a watershed area threshold. An experiment on real river data shows that the method achieves good generalization results.

Tinghua Ai, Yaolin Liu, Jun Chen
Grid Typification

In this paper the detection and typification of grid structures in building groups is described. Typification is a generalization operation that replaces a large number of similar objects by a smaller number of objects, while preserving the global structure of the object distribution. The typification approach is based on three processes. First, the grid structures are detected based on the so-called relative neighborhood graph. Second, the detected grid structures are regularized by a least-squares adjustment of an affine or Helmert transformation. The third process is the reduction or simplification of the grid structure, which can be done using the same affine or Helmert transformation approach.

Karl-Heinrich Anders
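
A brute-force relative neighborhood graph, adequate for small building groups (illustrative only; the paper's detection pipeline involves more than this graph):

```python
# p and q are RNG neighbours iff no third point r is strictly closer to
# both of them than they are to each other (empty "lune" test).
import numpy as np

def relative_neighborhood_graph(pts):
    pts = np.asarray(pts, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    n = len(pts)
    edges = []
    for p in range(n):
        for q in range(p + 1, n):
            lune_empty = all(max(d[p, r], d[q, r]) >= d[p, q]
                             for r in range(n) if r not in (p, q))
            if lune_empty:
                edges.append((p, q))
    return edges

# A 3 x 3 building grid: the RNG recovers exactly the row/column neighbours.
grid = [(i, j) for i in range(3) for j in range(3)]
print(relative_neighborhood_graph(grid))
```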
Skeleton Based Contour Line Generalization

Contour lines are a widely utilized representation of terrain models in both cartography and Geographical Information Systems (GIS). Since they are often presented at different scales, there is a need for generalization techniques. In this paper an algorithm for the generalization of contour lines based on skeleton pruning is presented. The algorithm is based on the boundary residual function and retraction of the skeleton of contour lines. The novelty of this method lies in pruning not only the internal skeleton branches, but also those skeleton branches placed outside the closed contour polygon. This approach, in contrast to the original method, which was designed for closed shapes, is also capable of handling open polygonal chains.

A simplified version of the skeleton is extracted in the first step of the algorithm, and in the next step a simpler boundary is computed. The simpler boundary, as shown in this paper, can be found in three different ways: detection of stable vertices, computation of an average vertex, and approximation of the boundary by Bézier splines.

Krzysztof Matuk, Christopher Gold, Zhilin Li
Conflict Identification and Representation for Roads Based on a Skeleton

This paper presents a method to detect, represent and classify conflicts between roads. The dataset used in the study is OS Integrated Transport Network™ (ITN). A partial skeleton is created from a constrained Delaunay triangulation of the road network. The skeleton is used to interrogate the space between the roads, creating ‘conflict region’ features. Nine types of ‘conflict region’ are characterized and examples of each are given, created from ITN data. A discussion is presented of possible uses for these features, showing how they help to orchestrate the removal of conflicts from a road network as part of an automatic generalization process.

Stuart Thom
The ‘stroke’ Concept in Geographic Network Generalization and Analysis

Strokes are relatively simple linear elements readily perceived in a network. Apart from their role as graphical elements, strokes reflect lines of flow or movement within the network itself and so constitute natural functional units. Since the functional importance of a stroke is reflected in its perceived salience, strokes are a suitable basis for network generalization, through the preferential preservation of salient strokes during data reduction. In this paper an exploration of the dual functional-graphical nature of strokes is approached via a look at perceptual grouping in generalization. The identification and use of strokes are then described. The strengths and limitations of stroke-based generalization are discussed, and how the technique may be developed is also considered. Finally, the functional role of strokes in networks is highlighted by a look at recent developments in space syntax and related studies.

Robert C. Thomson

Uncertainty

An Integrated Cloud Model for Measurement Errors and Fuzziness

Two kinds of uncertainty — measurement errors and concept (or classification) fuzziness — can be differentiated in GIS data. There are many tools to handle them separately. However, an integrated model is needed to assess their combined effect in GIS analysis (such as classification and overlay) and the plausible effects on subsequent decision-making. The cloud model sheds light on the integrated modeling of fuzziness and randomness, but how to adapt the cloud model to GIS uncertainties needs to be investigated. This paper therefore proposes an integrated formal model for measurement errors and fuzziness based upon the cloud model. It addresses the physical meaning of the parameters of the cloud model and provides guidelines for setting their values. Using this new model, via multi-criteria reasoning, the combined effect of uncertainty in data and classification on subsequent decision-making can be assessed through statistical indicators, which can be used for quality assurance.

Tao Cheng, Zhilin Li, Deren Li, Deyi Li
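
The underlying normal cloud generator is a published, widely used construction (Ex: expectation of the concept, En: entropy/fuzziness, He: hyper-entropy, the randomness of the fuzziness itself); a minimal version:

```python
import numpy as np

def normal_cloud(Ex, En, He, n=1000, rng=None):
    rng = rng or np.random.default_rng()
    En_prime = np.abs(rng.normal(En, He, size=n)) + 1e-12  # randomized entropy
    x = rng.normal(Ex, En_prime)                           # cloud drops
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))      # membership degrees
    return x, mu

# E.g. the fuzzy concept "steep slope" centred at 30 degrees:
drops, membership = normal_cloud(Ex=30.0, En=4.0, He=0.5)
```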
The Influence of Uncertainty Visualization on Decision Making: An Empirical Evaluation

Uncertainty visualization is a research area that integrates visualization with the study of uncertainty. Many techniques have been developed for representing uncertainty, and there have been many participant-based empirical studies evaluating the effectiveness of specific techniques. However, there is little empirical evidence to suggest that uncertainty visualization influences, or results in, different decisions. Through a human-subjects experiment, this research evaluates whether specific uncertainty visualization methods, including texture and value, influence decisions and a user’s confidence in their decisions. The results of this study indicate that uncertainty visualization may affect decisions, but the degree of influence depends on how the uncertainty is expressed.

Stephanie Deitrick, Robert Edsall
Modeling Uncertainty in Knowledge Discovery for Classifying Geographic Entities with Fuzzy Boundaries

Boosting is a machine learning strategy originally designed to increase the classification accuracy of classifiers through inductive learning. This paper argues that this strategy of learning and inference actually corresponds to a cognitive model that explains the uncertainty associated with class assignments for classifying geographic entities with fuzzy boundaries. This paper presents a study that adopts the boosting strategy in knowledge discovery, which allows for the modeling and mapping of such uncertainty when the discovered knowledge is used for classification. A case study of knowledge discovery for soil classification demonstrates the effectiveness of this approach.

Feng Qi, A-Xing Zhu
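
A hedged sketch with an off-the-shelf booster: the spread of the boosted class probabilities serves as a per-location uncertainty estimate (scikit-learn stands in here for the paper's own knowledge-discovery machinery):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 2))                       # e.g. terrain attributes
y = (X[:, 0] + 0.1 * rng.standard_normal(300) > 0.5).astype(int)  # fuzzy edge

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
proba = clf.predict_proba(X)
uncertainty = 1.0 - proba.max(axis=1)          # high near the fuzzy boundary
```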
Capturing and Representing Conceptualization Uncertainty Interactively using Object-Fields

We present a method for representing, recording and managing conceptualization uncertainty. We review components of uncertainty associated with semantics and metadata. We present a way of recording and visualizing uncertainty using sketching, and suggest a framework for recording and managing uncertainty and associated semantics using Object-Fields. A case study is also used to demonstrate a software prototype that shows proof of concept. We conclude by identifying future research challenges in terms of supporting dynamic exploration of uncertainty, semantics and field objects.

Vlasios Voudouris, Peter F. Fisher, Jo Wood

Elevation Modeling

From Point Cloud to Grid DEM: A Scalable Approach

Given a set S of points in ℝ³ sampled from an elevation function H: ℝ² → ℝ, we present a scalable algorithm for constructing a grid digital elevation model (DEM). Our algorithm consists of three stages: first, we construct a quad tree on S to partition the point set into a set of non-overlapping segments; next, for each segment q, we compute the set of points in q and all segments neighboring q; finally, we interpolate each segment independently using points within the segment and its neighboring segments.

Data sets acquired by LIDAR and other modern mapping technologies consist of hundreds of millions of points and are too large to fit in main memory. When processing such massive data sets, the transfer of data between disk and main memory (also called I/O), rather than the CPU time, becomes the performance bottleneck. We therefore present an I/O-efficient algorithm for constructing a grid DEM. Our experiments show that the algorithm scales to data sets much larger than the size of main memory, while existing algorithms do not. For example, using a machine with 1 GB RAM, we were able to construct a grid DEM containing 1.3 billion cells (occupying 1.2 GB) from a LIDAR data set of over 390 million points (occupying 20 GB) in about 53 hours. Neither ArcGIS nor GRASS, two popular GIS products, was able to process this data set.

Pankaj K. Agarwal, Lars Arge, Andrew Danner
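
An in-memory sketch of the segment-plus-neighbours idea (tiles instead of a quad tree, plain IDW, scipy assumed; the paper's contribution is doing this I/O-efficiently far beyond RAM):

```python
# Tile the extent and interpolate each tile's grid cells from the points
# inside the tile plus a buffer, so each tile is processed independently.
import numpy as np
from scipy.spatial import cKDTree

def points_to_dem(pts, z, cell=1.0, tile=64, k=8):
    pts, z = np.asarray(pts, float), np.asarray(z, float)
    x0, y0 = pts.min(axis=0); x1, y1 = pts.max(axis=0)
    nx, ny = int((x1 - x0) / cell) + 1, int((y1 - y0) / cell) + 1
    dem = np.full((ny, nx), np.nan)
    buf = tile * cell * 0.25                    # neighbour overlap
    for ty in range(0, ny, tile):
        for tx in range(0, nx, tile):
            gx = x0 + (np.arange(tx, min(tx + tile, nx)) + 0.5) * cell
            gy = y0 + (np.arange(ty, min(ty + tile, ny)) + 0.5) * cell
            sel = ((pts[:, 0] >= gx[0] - buf) & (pts[:, 0] <= gx[-1] + buf) &
                   (pts[:, 1] >= gy[0] - buf) & (pts[:, 1] <= gy[-1] + buf))
            if not sel.any():
                continue
            tree = cKDTree(pts[sel])
            gxx, gyy = np.meshgrid(gx, gy)
            q = np.c_[gxx.ravel(), gyy.ravel()]
            d, idx = tree.query(q, k=min(k, int(sel.sum())))
            d = d.reshape(len(q), -1); idx = idx.reshape(len(q), -1)
            w = 1.0 / np.maximum(d, 1e-9) ** 2  # inverse-distance weights
            vals = (w * z[sel][idx]).sum(axis=1) / w.sum(axis=1)
            dem[ty:ty + len(gy), tx:tx + len(gx)] = vals.reshape(len(gy), len(gx))
    return dem
```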
Use of Plan Curvature Variations for the Identification of Ridges and Channels on DEM

This paper proposes novel improvements to the traditional algorithms for the identification of ridge and channel (also called ravine) topographic features on raster digital elevation models (DEMs). The overall methodology consists of two main steps: (1) smoothing the DEM by applying a mean filter, and (2) detecting ridge and channel features as cells with positive and negative plan curvature respectively, along with a decline or incline in plan curvature away from the cell in the direction orthogonal to the feature axis. The paper demonstrates a simple approach to visualize the multi-scale structure of terrains and utilize it for semi-automated topographic feature identification. Despite its simplicity, the revised algorithm produced outputs markedly superior to those of a comparatively sophisticated feature extraction algorithm based on conic-section analysis of terrain.

Sanjay Rana
An Evaluation of Spatial Interpolation Accuracy of Elevation Data

This paper makes a general evaluation of the spatial interpolation accuracy of elevation data. Six common interpolators were examined: Kriging, inverse distance to a power, minimum curvature, modified Shepard’s method, radial basis functions, and triangulation with linear interpolation. The main properties and mathematical procedures of the interpolation algorithms are reviewed. In order to obtain a full evaluation of the interpolations, both statistical measures (including root mean square error, standard deviation, and mean) and spatial accuracy measures (including accuracy surfaces and spatial autocorrelation) were employed. It is found that the accuracy of spatial interpolation of elevations is primarily subject to input data point density and distribution, grid size (resolution), terrain complexity, and the interpolation algorithm used. Variations in interpolation parameters may significantly improve or worsen the accuracy. Further research is needed to examine the impacts of terrain complexity and various data sampling strategies in detail. The combined use of variogram models, accuracy surfaces, and spatial autocorrelation represents a promising direction in mapping spatial data accuracy.

Qihao Weng
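
A sketch of the evaluation protocol: hold out sample points, interpolate from the rest (plain IDW standing in for the six methods compared), and report the statistical accuracy measures:

```python
import numpy as np
from scipy.spatial import cKDTree

def idw(train_xy, train_z, query_xy, k=8, power=2.0):
    d, idx = cKDTree(train_xy).query(query_xy, k=k)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * train_z[idx]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
xy = rng.random((2000, 2)) * 1000.0
z = 50 * np.sin(xy[:, 0] / 200) + 30 * np.cos(xy[:, 1] / 150)  # toy terrain

test = rng.random(2000) < 0.2                    # 20% hold-out sample
pred = idw(xy[~test], z[~test], xy[test])
err = pred - z[test]
print(f"RMSE={np.sqrt(np.mean(err**2)):.2f}  mean={err.mean():.2f}  "
      f"std={err.std():.2f}")
```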

Working with Elevation

I/O-Efficient Hierarchical Watershed Decomposition of Grid Terrain Models

Recent progress in remote sensing has made massive amounts of high resolution terrain data readily available. Often the data is distributed as regular grid terrain models where each grid cell is associated with a height. When terrain analysis applications process such massive terrain models, data movement between main memory and slow disk (I/O), rather than CPU time, often becomes the performance bottleneck. Thus it is important to consider I/O-efficient algorithms for fundamental terrain problems. One such problem is the hierarchical decomposition of a grid terrain model into watersheds — regions where all water flows towards a single common outlet. Several different hierarchical watershed decomposition schemes have been described in the hydrology literature. One important such scheme is the Pfafstetter label method, where each watershed is assigned a unique label and each grid cell is assigned a sequence of labels corresponding to the (nested) watersheds to which it belongs.

In this paper we present an I/O-efficient algorithm for computing the Pfafstetter label of each cell of a grid terrain model. The algorithm uses O(sort(T)) I/Os, where sort(T) is the number of I/Os needed to sort T elements and T is the total length of the cell labels. To our knowledge, our algorithm is the first efficient algorithm for the problem. We also present the results of an experimental study using massive real-life terrain data, which shows that our algorithm is practically as well as theoretically efficient.

Lars Arge, Andrew Danner, Herman Haverkort, Norbert Zeh
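
A compact in-memory sketch of Pfafstetter labelling on a drainage tree (illustrative and simplified: interbasins are not recursively subdivided as in the full scheme, and the node encoding is invented; the paper's point is computing such labels I/O-efficiently on massive grids):

```python
# Each node is a reach: {"name": str, "area": upstream area, "children": [...]}.

def pfafstetter(root, prefix="", labels=None):
    labels = {} if labels is None else labels
    # 1. Trace the main stem by always following the largest-area child.
    stem = [root]
    while stem[-1]["children"]:
        stem.append(max(stem[-1]["children"], key=lambda c: c["area"]))
    on_stem = set(map(id, stem))
    tribs = [(i, c) for i, s in enumerate(stem)
             for c in s["children"] if id(c) not in on_stem]
    if not tribs:                        # nothing left to subdivide
        for s in stem:
            labels[s["name"]] = prefix or "1"
        return labels
    # 2. The four largest tributaries get even digits 2,4,6,8 downstream-up.
    big = sorted(tribs, key=lambda t: t[1]["area"], reverse=True)[:4]
    big.sort(key=lambda t: t[0])         # order along the stem
    cut = [i for i, _ in big]
    # 3. Stem reaches between them form interbasins with odd digits 1,3,...
    digit = 1
    for i, s in enumerate(stem):
        while cut and i > cut[0]:
            digit += 2
            cut.pop(0)                   # passed a big tributary junction
        labels[s["name"]] = prefix + str(digit)
    for n, (_, t) in enumerate(big):     # recurse into the four basins
        pfafstetter(t, prefix + str(2 * n + 2), labels)
    for i, t in tribs:                   # small tributaries extend interbasin
        if all(t is not b for _, b in big):
            pfafstetter(t, labels[stem[i]["name"]], labels)
    return labels

river = {"name": "outlet", "area": 100, "children": [
    {"name": "main", "area": 70, "children": []},
    {"name": "trib", "area": 25, "children": []}]}
print(pfafstetter(river))   # {'outlet': '1', 'main': '3', 'trib': '2'}
```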
Tradeoffs when Multiple Observer Siting on Large Terrain Cells

This paper demonstrates a toolkit for siting multiple observers to maximize their joint viewshed on high-resolution gridded terrains of up to 2402 × 2402 cells, with viewshed radii of up to 1000 cells. It shows that approximate (rather than exact) visibility indexes of observers are sufficient for siting multiple observers. It also shows that, when selecting potential observers, geographic dispersion is more important than maximum estimated visibility, and it quantifies this. Applications of optimal multiple observer siting include radio towers, terrain observation, and the mitigation of environmental visual nuisances.

W. Randolph Franklin, Christian Vogt
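
The greedy core of joint-viewshed siting, over precomputed candidate viewshed masks (a sketch; the toolkit described above adds approximate visibility indexes and an explicit dispersion stage):

```python
import numpy as np

def site_observers(viewsheds, k):
    """viewsheds: array (n_candidates, rows, cols) of boolean visibility masks."""
    covered = np.zeros(viewsheds.shape[1:], dtype=bool)
    chosen = []
    for _ in range(k):
        gains = [(vs & ~covered).sum() for vs in viewsheds]
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break                      # no candidate adds new visible cells
        chosen.append(best)
        covered |= viewsheds[best]
    return chosen, covered.mean()      # selected sites, joint coverage share

rng = np.random.default_rng(7)
cands = rng.random((20, 50, 50)) < 0.15   # stand-in for real viewshed masks
sites, coverage = site_observers(cands, k=5)
```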
Scale-Dependent Definitions of Gradient and Aspect and their Computation

In order to compute lines of constant gradient and areas of constant aspect on a terrain, we introduce the notion of scale-dependent local gradient and aspect for a neighborhood around each point of a terrain. We present three definitions for local gradient and aspect, and give efficient algorithms to compute them. We have implemented our algorithms for grid data, and we compare the results for all methods.

Iris Reinbacher, Marc van Kreveld, Marc Benkert
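
One simple reading of scale dependence (an editorial sketch, not the paper's three definitions): smooth the DEM over a neighborhood of radius r before differentiating, so that gradient and aspect describe terrain at that scale:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_aspect(dem, radius, cell=1.0):
    smoothed = uniform_filter(dem, size=2 * radius + 1)   # neighbourhood mean
    gy, gx = np.gradient(smoothed, cell)                  # rows, then columns
    gradient = np.hypot(gx, gy)                           # slope magnitude
    aspect = np.degrees(np.arctan2(-gx, gy)) % 360.0      # one common convention
    return gradient, aspect

dem = np.add.outer(np.linspace(0, 99, 100), np.zeros(100))  # tilted plane
g, a = gradient_aspect(dem, radius=5)
```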

Spatial Modeling

Development Density-Based Optimization Modeling of Sustainable Land Use Patterns

Current land use patterns with low-density, single-use, and leapfrogging urban growth on city outskirts call for more efficient land use development strategies balancing economy, environmental protection, and social equity. In this paper, we present a new spatial multiobjective optimization model with a constraint based on the level of neighborhood development density. The constraint encourages infill development and land use compatibility by requiring compact and contiguous land use allocation. The multiobjective optimization model presented in this paper minimizes the conflicting objectives of open space development, infill and redevelopment, land use neighborhood compatibility, and cost distance to already urbanized areas.

Arika Ligmann-Zielinska, Richard Church, Piotr Jankowski
Building an Integrated Cadastral Fabric for Higher Resolution Socioeconomic Spatial Data Analysis
Nadine Schuurman, Agnieszka Leszczynski, Rob Fiedler, Darrin Grund, Nathaniel Bell
Analysis of Cross Country Trafficability

Many decisions — not only in the field of emergency management or military operations — nowadays require a large amount of spatial and geographical information. If these data are handled in Geographical Information Systems (GIS), new possibilities open up for handling and analyzing this type of information in ways that diverge substantially from the traditional handling of paper maps. A GIS is an information system capable not only of handling digital maps in raster and vector formats as they are produced, but also of analyzing them, for instance together with remote sensing techniques and GPS positioning, and of combining them with real-time intelligence reports. The development of societies in parallel with globalization and global dependencies, symptoms of climate change, ageing populations, and increasingly complex societies and systems also leads to a greater demand for more sophisticated information and information systems (Trnka 2003; Trnka et al. 2005a and 2005b; Quarantelli 1999; Rubin 1998; Rubin 2000; Kiranoudis et al. 2002; Mendonça et al. 2001; Beroggi 2001; Johnson 2002). The research teams at IDA/LiU have extensive experience with testing various forms of data capture, real-time analysis, and diffusion of geographically registered data through, for example, mobile GIS technology.

However, we have experienced a need to develop both entirely GIS-based models and the supporting data to be analyzed, in order to improve all crucial phases of emergency management scenario reasoning, during both the prevention and the information provision stages. Information can thus be regarded as a strategic infrastructure, which is now being investigated at the Swedish national level as well as by the European Union, for example through the European Network of Excellence GMOSS. One goal of GMOSS is to investigate good procedures for emergency management and crisis response, and as a consequence to build standardized and harmonized geographical databases that can be used in decision support systems. What is still to be added to the agenda is the implementation of several GIS-based models that make use of all those databases for the prediction of potential hazards, for preventive work, and for action plans. The objectives of this article are to contribute to the development of such models, to investigate the data necessary for use in rescue, relief and preventive work, and to stress the importance of keeping data up to date for structuring better preparedness plans.

Åke Sivertun, Aleksander Gumos
Metadata
Title
Progress in Spatial Data Handling
Editors
Dr. Andreas Riedl
Prof. Wolfgang Kainz
Prof. Gregory A. Elmes
Copyright Year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-35589-2
Print ISBN
978-3-540-35588-5
DOI
https://doi.org/10.1007/3-540-35589-8