
2014 | Book

Connecting a Digital Europe Through Location and Place


About this book

This book collects innovative research presented at the 17th Conference of the Association of Geographic Information Laboratories for Europe (AGILE) on Geographic Information Science, held in 2014 in Castellón, Spain. The scientific papers cover a variety of fundamental research topics as well as applied research in Geospatial Information Science, including measuring spatiotemporal phenomena, crowdsourcing and VGI, geosensor networks, indoor navigation, spatiotemporal analysis, modeling and visualization, spatiotemporal decision support, digital earth and spatial information infrastructures.

The book is intended for researchers, practitioners, and students working in various fields and disciplines related to Geospatial Information Science and technology.

Table of Contents

Frontmatter

User Generated and Social Network Data

Frontmatter
Estimating Completeness of VGI Datasets by Analyzing Community Activity Over Time Periods
Abstract
Due to the dynamic nature and heterogeneity of Volunteered Geographic Information (VGI) datasets, a crucial question concerns geographic data quality. Among others, one of the main quality categories addresses data completeness. Most previous work tackles this question by comparing VGI datasets to external reference datasets. Although such comparisons give valuable insights, questions about the quality of the external dataset as well as syntactic and semantic differences arise. This work proposes a novel approach for the internal estimation of regional data completeness of VGI datasets by analyzing changes in community activity over time periods. It builds on empirical evidence that completeness of selected feature classes in distinct geographical regions may only be achieved when community activity in the selected region runs through a well-defined sequence of activity stages, beginning at the start stage, continuing with some years of growth and finally reaching saturation. For the retrospective calculation of activity stages, the annual shares of new features are used in combination with empirically founded heuristic rules for stage transitions. As a proof of concept, the approach is applied to the OpenStreetMap History dataset by analyzing activity stages for 12 representative metropolitan areas. Results give empirical evidence that reaching the saturation stage is an adequate indication of a certain degree of data completeness in the selected regions. Results also show similarities and differences in community activity across the cities, revealing that community activity stages follow similar rules but with significant temporal variances.
Simon Gröchenig, Richard Brunauer, Karl Rehrl
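The staged-activity idea can be sketched as follows: the share of features newly added in each year, relative to the cumulative total, drives the stage transitions. The threshold values below are illustrative placeholders, not the empirically derived rules of the chapter.

```python
def activity_stages(new_per_year, growth_thresh=0.5, saturation_thresh=0.1):
    """Classify each year of community activity into 'start', 'growth' or
    'saturation', based on the share of features newly added that year.
    Thresholds are hypothetical; the chapter derives its rules empirically."""
    stages = []
    total = 0
    for n in new_per_year:
        total += n
        share = n / total if total else 0.0
        if share >= growth_thresh:
            stages.append("start")
        elif share >= saturation_thresh:
            stages.append("growth")
        else:
            stages.append("saturation")
    return stages
```

For a region whose annual feature additions taper off, the trajectory runs through all three stages, e.g. `activity_stages([100, 80, 40, 10, 5])` ends in saturation.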
Estimation of Building Types on OpenStreetMap Based on Urban Morphology Analysis
Abstract
Buildings are man-made structures and serve several needs of society. Hence, they have significant socio-economic relevance. From this point of view, building types should be strongly correlated with the shape and size of their footprints on the one hand. On the other hand, building types are strongly influenced by the contextual configuration among building footprints. Based on this hypothesis, a novel approach is introduced to estimate building types from building footprint data on OpenStreetMap. The proposed approach has been tested on the building footprint data on OSM in Heidelberg, Germany. An overall accuracy of 85.77 % can be achieved. Residential buildings can be labeled with an accuracy of more than 90 %. Besides, the proposed approach can distinguish industrial buildings and accessory buildings for storage with high accuracies. However, public buildings and commercial buildings are difficult to estimate, since their footprints reveal a large diversity in shape and size.
Hongchao Fan, Alexander Zipf, Qing Fu
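As a rough illustration of footprint-derived shape descriptors, area, perimeter and compactness can be computed from a closed ring of vertices. The feature set here is minimal and illustrative; the chapter's actual classifier uses a richer set including contextual configuration.

```python
import math

def footprint_features(polygon):
    """Shape descriptors for a building footprint given as a closed ring of
    (x, y) vertices: shoelace area, perimeter, and isoperimetric compactness
    (1.0 for a circle, smaller for elongated or ragged shapes)."""
    area = 0.0
    perimeter = 0.0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        area += x1 * y2 - x2 * y1          # shoelace term
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    compactness = 4 * math.pi * area / perimeter ** 2
    return {"area": area, "perimeter": perimeter, "compactness": compactness}
```

A unit square yields area 1, perimeter 4 and compactness π/4; such descriptors could then feed a type classifier.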
Qualitative Representations of Extended Spatial Objects in Sketch Maps
Abstract
With the advent of Volunteered Geographic Information (VGI), the amount and accessibility of spatial information produced by laypersons, such as sketched information, has increased drastically. In many geo-spatial applications, sketch maps are considered an intuitive user interaction modality. In sketch maps, the spatial objects and their relationships enable users to communicate and reason about their actions in the physical environment. The information people draw in sketch maps is distorted, schematized, and incomplete. Thus, processing spatial information from sketch maps and making it available in information systems requires suitable representation and alignment approaches. As typically only qualitative relations are preserved in sketch maps, performing alignment and matching with geo-referenced maps on a qualitative level has been suggested. In this study, we analyzed different qualitative representations and proposed a set of plausible representations to formalize the topology and orientation information of extended objects in sketch maps. Using the proposed representations, the qualitative relations among depicted objects are extracted in the form of Qualitative Constraint Networks (QCNs). Next, the QCNs obtained from the sketch maps are compared with the QCNs derived from the metric maps to determine the degree to which the information is identical. If the representations are suitable, the QCNs of both maps should be identical to a high degree. The consistency of the obtained QCNs allows the alignment and integration of spatial information from sketch maps into Geographic Information Systems (GISs).
Sahib Jan, Angela Schwering, Malumbo Chipofya, Talakisew Binor
Exploring the Geographical Relations Between Social Media and Flood Phenomena to Improve Situational Awareness
A Study About the River Elbe Flood in June 2013
Abstract
Recent research has shown that social media platforms like Twitter can provide relevant information to improve situational awareness during emergencies. Previous work has mostly concentrated on the classification and analysis of tweets utilizing crowdsourcing or machine learning techniques. However, managing the high volume and velocity of social media messages still remains challenging. In order to enhance information extraction from social media, this chapter presents a new approach that relies upon the geographical relations between Twitter data and flood phenomena. Our approach uses specific geographical features like hydrological data and digital elevation models to prioritize crisis-relevant Twitter messages. We apply this approach to examine the River Elbe flood in Germany in June 2013. The results show that our approach based on geographical relations can enhance information extraction from volunteered geographic information, thus being valuable for both crisis response and preventive flood monitoring.
Benjamin Herfort, João Porto de Albuquerque, Svend-Jonas Schelhorn, Alexander Zipf
Event Identification from Georeferenced Images
Abstract
Geotagged images (e.g., on Flickr) can indicate social events (e.g., festivals, parades, protests, sports events) through their spatial, temporal and semantic information. Previous research has relied heavily on tag frequency, so images without tags clearly indicating the occurrence of a social event would be missed. One potential way to address this problem and enhance event identification is to make more use of the spatial and temporal information. In this chapter, we take into consideration the underlying spatio-temporal pattern of social events. In particular, the influence of urban land use and roads on the occurrence of events is considered. Specifically, with a spatio-temporal cluster detection method, we first detected spatio-temporal (S-T) clusters composed of geotagged images. Among these detected S-T clusters, we then attempted to identify social events by means of a classification model. Land use and roads were used to generate new kinds of spatial characteristics serving as explanatory variables in the classification model. In addition, user characteristics (i.e., the number of images and the number of users), the spatial and temporal range of images, and the heterogeneity of the temporal distribution of images were considered as further explanatory variables. Consequently, with a binary logistic regression (BLR) method, we estimated the categories (i.e., ‘event’ or ‘non-event’) of the S-T clusters (cases). Experimental results demonstrated the good performance of the method, with a total accuracy of 71 %.
With the variable selection process of the BLR method, empirical results also indicate that (1) some characteristics (e.g., the distance to the road and the heterogeneity of the temporal distribution of images) do not have considerable influence on the occurrence of an ‘event’; and (2) compared to the other urban land categories (i.e., residential and recreational land), commercial land has a relatively high influence on the occurrence of an ‘event’.
Yeran Sun, Hongchao Fan
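The classification step can be sketched with a plain binary logistic regression over cluster characteristics. The feature ordering and weight values below are hypothetical; the chapter estimates the coefficients from its own data.

```python
import math

def blr_predict(features, weights, bias):
    """Binary logistic regression: probability that an S-T cluster is an
    'event', given cluster characteristics such as the number of images,
    the number of users, or a land-use indicator (order is hypothetical)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def classify_cluster(features, weights, bias, threshold=0.5):
    """Label an S-T cluster 'event' or 'non-event' at a probability cutoff."""
    return "event" if blr_predict(features, weights, bias) >= threshold else "non-event"
```

With weights `[0.8, 0.6, -0.3]` and bias `-1.0`, a cluster with many images and users crosses the 0.5 cutoff, while a sparse one does not.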

Trajectory Analysis

Frontmatter
A Recursive Bayesian Filter for Anomalous Behavior Detection in Trajectory Data
Abstract
This chapter presents an original approach to anomalous behavior analysis in trajectory data by means of a recursive Bayesian filter. Anomalous pattern detection is of great interest in the areas of navigation, driver assistance systems, surveillance and emergency management. In this work we focus on finding where in GPS trajectories the driver encounters navigation problems, i.e., taking a wrong turn, performing a detour or tending to lose the way. To extract the related features, i.e., turns and their density, degree of detour and route repetition, a long-term perspective is required that observes data sequences instead of individual data points. We therefore employ a high-order Markov chain to remodel the trajectory, integrating these long-term features. A recursive Bayesian filter processes the Markov model and delivers an optimal probability distribution of the potential anomalous driving behaviors dynamically over time. The proposed filter performs unsupervised detection on a single trajectory with solely the local features. No training process is required to characterize the anomalous behaviors. Based on the results of individual trajectories, collective behaviors can be analyzed as well to indicate traffic issues, e.g., turn restrictions, blind alleys, temporary road blocks, etc. Experiments are performed on trajectory data in urban areas, demonstrating the potential of this approach.
Hai Huang, Lijuan Zhang, Monika Sester
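The recursive update at the heart of such a filter can be sketched for a discrete state space. The two states, transition matrix and observation likelihoods below are toy values for illustration; the chapter's model operates on a high-order Markov chain over richer features.

```python
def bayes_filter_step(belief, transition, likelihood, obs):
    """One recursive Bayesian update over discrete behavior states:
    predict with the transition model, then weight each state by the
    likelihood of the current observation and renormalize."""
    states = list(belief)
    # predict: propagate the belief through the transition model
    predicted = {s: sum(belief[p] * transition[p][s] for p in states)
                 for s in states}
    # update: weight by the observation likelihood
    unnorm = {s: predicted[s] * likelihood[s][obs] for s in states}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in states}
```

Feeding a run of 'turn' observations into a two-state {normal, anomalous} model shifts the posterior mass toward the anomalous state step by step.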
Using GPS Logs to Identify Agronomical Activities
Abstract
The chapter presents an approach for collecting and identifying the daily rounds of agronomists working in the field for a farming products company. Besides recognizing their daily movements, the approach enables the collection of data about the shape and size of land parcels belonging to the company’s clients. The work involved the design of spatial movement patterns for data collection through GPS logs, with minimal disruption of the agronomists’ activities. Extracting these patterns involved place and activity extraction, with specific algorithms proposed for marking and unmarking exploration parcels. These algorithms were evaluated by field testing, with very positive results.
Armanda Rodrigues, Carlos Damásio, José Emanuel Cunha
Assessing the Influence of Preprocessing Methods on Raw GPS-Data for Automated Change Point Detection
Abstract
The automated recognition of transport modes from GPS data is a problem that has received a lot of attention from academia and industry. There is a comprehensive body of literature discussing algorithms and methods to find the right segments using mainly velocity, acceleration and accuracy values. Less work is dedicated to the derivation of those variables. The goal of this chapter is to identify the most efficient way to preprocess GPS trajectory data for automated change-point detection, change points being the points indicating a change in transportation mode. Therefore, the influence of different kernel-based smoothing methods, as well as an alternative velocity derivation method, on the overall segmentation process is analyzed and assessed.
Tomas Thalmann, Amin Abdalla
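A minimal sketch of kernel-based smoothing applied to a velocity series derived from raw GPS fixes. The Gaussian kernel over the sample index is one of several kernels such a comparison might cover; the bandwidth value is an arbitrary choice for the example.

```python
import math

def gaussian_smooth(values, bandwidth=1.0):
    """Kernel-based smoothing of a 1D series (e.g. velocities derived from
    raw GPS fixes) with a Gaussian kernel over the sample index; larger
    bandwidths suppress GPS noise spikes more aggressively."""
    smoothed = []
    n = len(values)
    for i in range(n):
        weights = [math.exp(-0.5 * ((i - j) / bandwidth) ** 2) for j in range(n)]
        wsum = sum(weights)
        smoothed.append(sum(w * v for w, v in zip(weights, values)) / wsum)
    return smoothed
```

A single outlier velocity (e.g. a GPS multipath spike) is damped toward its neighbours, which is exactly what stabilizes downstream change-point detection.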

Data Mining, Fusion and Integration

Frontmatter
Mining Frequent Spatio-Temporal Patterns in Wind Speed and Direction
Abstract
Wind is a dynamic geographic phenomenon that is often characterized by its speed and by the direction from which it blows. The cyclical heating and cooling of the Earth’s surface causes the wind speed and direction to change throughout the day. Understanding the changeability of wind speed and direction simultaneously in long-term time series of wind measurements is a challenging task. Discovering such patterns highlights recurring combinations of speed and direction that can be extracted in chronological order. In this chapter, we present a novel way to explore wind speed and direction simultaneously using a sequential pattern mining approach for detecting frequent patterns in spatio-temporal wind datasets. The Linear time Closed pattern Miner sequence (LCMseq) algorithm is constructed to search for significant sequential patterns of wind speed and direction simultaneously. The extracted patterns are then explored using visual representations, called TileVis, and a 3D wind rose in order to reveal valuable trends in the occurrence of patterns. The applied methods demonstrated an improved understanding of the temporal characteristics of wind resources.
Norhakim Yusof, Raul Zurita-Milla, Menno-Jan Kraak, Bas Retsios
STCode: The Text Encoding Algorithm for Latitude/Longitude/Time
Abstract
Encoding geographic coordinates into a compact string expression is an important problem in the geosciences, and several algorithms for it exist. Such a string is used for several purposes; one of the most frequent is probably its use as a part of a URL or as a hashtag of ad hoc data. An important part of spatially referenced data can also be a timestamp, but current encoding methods do not allow encoding the temporal dimension. In this chapter we propose a new encoding algorithm focused on point data expressed by latitude, longitude and timestamp coordinates. The algorithm details are described and its use is shown with examples.
Jan Ježek, Ivana Kolingerová
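The general idea can be sketched in a geohash-like manner: interleave range-halving bits of latitude, longitude and time, then base-32 encode them, so that nearby points in space and time share a common prefix. The bit layout, alphabet and the 2³² upper bound for the timestamp below are assumptions of this sketch, not the actual STCode definition.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash-style alphabet (assumed)

def encode_llt(lat, lon, t, length=12):
    """Encode latitude, longitude and a non-negative timestamp into one
    compact string by interleaving range-halving bits of the three
    dimensions (illustrative sketch of the latitude/longitude/time idea)."""
    ranges = [[-90.0, 90.0], [-180.0, 180.0], [0.0, 2 ** 32]]
    coords = [lat, lon, t]
    bits = []
    dim = 0
    while len(bits) < length * 5:
        lo, hi = ranges[dim]
        mid = (lo + hi) / 2
        if coords[dim] >= mid:
            bits.append(1)
            ranges[dim][0] = mid     # keep the upper half
        else:
            bits.append(0)
            ranges[dim][1] = mid     # keep the lower half
        dim = (dim + 1) % 3          # rotate lat -> lon -> time
    code = ""
    for i in range(0, len(bits), 5):
        idx = 0
        for b in bits[i:i + 5]:
            idx = idx * 2 + b
        code += BASE32[idx]
    return code
```

The prefix property is what makes such codes useful in URLs and hashtags: truncating the string yields a coarser spatio-temporal cell.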
Fast SNN-Based Clustering Approach for Large Geospatial Data Sets
Abstract
Current positioning and sensing technologies enable the collection of very large spatio-temporal data sets. When analysing movement data, researchers often resort to clustering techniques to extract useful patterns from these data. Density-based clustering algorithms, although very adequate for the analysis of this type of data, can be very inefficient when analysing huge amounts of data. The Shared Nearest Neighbour (SNN) algorithm scales poorly to large data sets due to its worst-case complexity of O(n²). This chapter presents a clustering method, based on the SNN algorithm, that significantly reduces the processing time by segmenting the spatial dimension of the data into a set of cells, and by minimizing the number of cells that have to be visited while searching for the k-nearest neighbours of each vector. The obtained results show a substantial reduction of the time needed to find the k-nearest neighbours and to compute the clusters, while producing results equal to those produced by the original SNN algorithm. Experimental results obtained with three different data sets (2D and 3D), one synthetic and two real, show that the proposed method enables the analysis of much larger data sets within a reasonable amount of time.
Arménio Antunes, Maribel Yasmina Santos, Adriano Moreira
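The cell-based acceleration can be sketched as follows: points are hashed into grid cells, and the k-nearest-neighbour search visits cells in expanding rings around the query, stopping once no unvisited cell can contain a closer point. This is a minimal 2D sketch of the grid idea, not the chapter's full SNN implementation.

```python
from collections import defaultdict
import math

def build_grid(points, cell_size):
    """Hash (x, y) points into square grid cells of the given size."""
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(int(x // cell_size), int(y // cell_size))].append(idx)
    return grid

def knn(points, grid, cell_size, q_idx, k):
    """k-nearest neighbours of points[q_idx], visiting grid cells in
    expanding Chebyshev rings instead of scanning all points."""
    qx, qy = points[q_idx]
    cx, cy = int(qx // cell_size), int(qy // cell_size)
    found = []
    ring = 0
    while True:
        # collect candidates from the cells forming the current ring
        for dx in range(-ring, ring + 1):
            for dy in range(-ring, ring + 1):
                if max(abs(dx), abs(dy)) != ring:
                    continue
                for idx in grid.get((cx + dx, cy + dy), []):
                    if idx != q_idx:
                        x, y = points[idx]
                        found.append((math.hypot(x - qx, y - qy), idx))
        found.sort()
        if len(found) >= len(points) - 1:      # every other point seen
            return [i for _, i in found[:k]]
        # any point in an unvisited ring is at distance >= ring * cell_size
        if len(found) >= k and found[k - 1][0] <= ring * cell_size:
            return [i for _, i in found[:k]]
        ring += 1
```

Because most queries terminate after a few rings, the per-point cost stays far below the O(n) scan of the naive SNN neighbour search.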
RSS and Sensor Fusion Algorithms for Indoor Location Systems on Smartphones
Abstract
Location-based applications require knowing the user position constantly in order to find out and provide information about the user’s context. They use GPS signals to locate users, but unfortunately GPS location systems do not work in indoor environments. Therefore, there is a need for new methods that calculate the location of users in indoor environments using smartphone sensors. There are studies that propose indoor positioning systems but, as far as we know, they neither run on Android devices nor work in real environments. The goal of this chapter is to address that problem by presenting two methods that estimate the user position through a smartphone. The first method is based on Euclidean distance and uses the Received Signal Strength (RSS) from WLAN Access Points present in buildings. The second method uses sensor fusion to combine raw data from the accelerometer and magnetometer inertial sensors. An Android prototype implementing both methods has been created and used to test them. The conclusions of the tests are that the RSS technique works efficiently on smartphones and manages to estimate the position of users well enough to be used in real applications. On the contrary, the test results show that the sensor fusion technique must be discarded due to bias errors and low-frequency readings from the accelerometer sensor.
Laia Descamps-Vila, A. Perez-Navarro, Jordi Conesa
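The RSS method can be sketched as nearest-fingerprint matching: the observed signal-strength vector is compared, via Euclidean distance in signal space, against a radio map of reference measurements. The access-point names and dBm values below are made up for illustration.

```python
import math

def locate(scan, fingerprints):
    """Nearest-fingerprint indoor positioning: compare an observed RSS scan
    (AP id -> dBm) against a radio map (position -> reference RSS vector)
    and return the position whose fingerprint is closest in signal space."""
    best_pos, best_dist = None, float("inf")
    for pos, ref in fingerprints.items():
        aps = set(scan) & set(ref)           # compare only shared APs
        if not aps:
            continue
        d = math.sqrt(sum((scan[ap] - ref[ap]) ** 2 for ap in aps))
        if d < best_dist:
            best_pos, best_dist = pos, d
    return best_pos
```

In practice the radio map is built during an offline survey phase; averaging the k closest fingerprints instead of taking only the nearest one is a common refinement.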
An Image Segmentation Process Enhancement for Land Cover Mapping from Very High Resolution Remote Sensing Data Application in a Rural Area
Abstract
In this chapter, we describe a procedure for enhancing automatic image segmentation for land cover mapping from Very High Resolution images. The increased need for large-scale mapping (1:10,000) for local territorial monitoring has prompted a rethinking of how such maps are produced. Nowadays, mapping production for land cover and land use (LUC) is mainly performed through human photo-interpretation. This approach can be extremely time consuming, expensive and tedious for data producers. This is especially evident in rural areas, where GIS databases for LUC are far less numerous than urban GIS databases. In the last decade, Geographic Object-Based Image Analysis (GEOBIA) has been developed by the image processing community. This new paradigm builds on theory, methods and tools for replicating the human photo-interpretation process from remote sensing data (Hay and Castilla 2008). However, the GEOBIA community is still fragile and suffers from a lack of protocols and standards for operational LUC mapping applications. Currently, human photo-interpretation seems the safer option. The objective of this research is to find an alternative to this time-consuming and expensive use of human expertise. We explored the limits of GEOBIA to propose an automatic image segmentation enhancement for an operational mapping application. The questions behind this study were: What is a good segmentation? How can we obtain it?
M. Vitter, P. Pluvinet, L. Vaudor, C. Jacqueminet, R. Martin, B. Etlicher
Line Matching for Integration of Photographic and Geographic Databases
Abstract
The aim of this chapter is to describe a new method for assigning a geographical position to an urban picture, based only on the content of the picture. The photograph is compared to a sample of geolocated 3D images generated automatically from a virtual model of the terrain and the buildings. The relation between the picture and the images is built through the matching of lines detected in both the photograph and the image. The line extraction is based on the Hough transform. This matching is followed by a statistical analysis to propose a probable location of the picture, with an estimation of its accuracy. The chapter presents and discusses the results of an experiment with data about Saint-Etienne, France, and ends with proposals for improving and extending the method.
Youssef Attia, Thierry Joliveau, Eric Favier

Representation, Visualization and Perception

Frontmatter
Encoding and Querying Historic Map Content
Abstract
Libraries have large collections of map documents with rich spatio-temporal information encoded in the visual representation of the map. Currently, historic map content is covered by the provided metadata only to a very limited degree, and thus is not available in a machine-readable form. A formal representation would support querying for and reasoning over detailed semantic contents of maps, instead of only map documents. From a historian’s perspective, this would support search for map resources which contain information that answers very specific questions, such as maps that show the cities of Prussia in 1830, without manually searching through maps. A particular challenge lies in the wealth and ambiguity of map content for queries. In this chapter, we propose an approach to describe map contents more explicitly. We suggest ways to formally encode historic map content in an approximate intensional manner which still allows useful queries. We discuss tools for georeferencing and enriching historic map descriptions by external sources, such as DBpedia. We demonstrate the use of this approach by content queries on map examples.
Simon Scheider, Jim Jones, Alber Sánchez, Carsten Keßler
An Area Merge Operation for Smooth Zooming
Abstract
When zooming a digital map, it is often necessary to merge two or more area features. If this is done abruptly, it leads to big changes in geometry, perceived by the user as a “jump” on the screen. To obtain a gradual merge of two input objects into one output object, this chapter presents three algorithms to construct a corresponding 3D geometry that may be used for smooth zooming operations. This is based on the assumption that every feature in the map is represented in 3D, where the 2D coordinates are the original representation and the third dimension represents scale as a Z value. Smooth zooming in or out is thus equivalent to the vertical movement of a horizontal slice plane (downwards or upwards).
Radan Šuba, Martijn Meijers, Lina Huang, Peter van Oosterom
Point Labeling with Sliding Labels in Interactive Maps
Abstract
We consider the problem of labeling point objects in interactive maps where the user can pan and zoom continuously. We allow labels to slide along the point they label. We assume that each point comes with a priority; the higher the priority the more important it is to label the point. Given a dynamic scenario with user interactions, our objective is to maintain an occlusion-free labeling such that, on average over time, the sum of the priorities of the labeled points is maximized. Even the static version of the problem is known to be NP-hard. We present an efficient and effective heuristic that labels points with sliding labels in real time. Our heuristic proceeds incrementally; it tries to insert one label at a time, possibly pushing away labels that have already been placed. To quickly predict which labels have to be pushed away, we use a geometric data structure that partitions screen space. With this data structure we were able to double the frame rate when rendering maps with many labels.
Nadine Schwartges, Jan-Henrik Haunert, Alexander Wolff, Dennis Zwiebler
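The priority-driven placement can be sketched with a much simpler greedy variant: each point offers a few candidate label rectangles (a discrete stand-in for a sliding label), and points are processed in order of decreasing priority. This sketch omits the chapter's push-away step and the screen-space partitioning structure.

```python
def overlaps(a, b):
    """Axis-aligned rectangle overlap test; rectangles are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def greedy_labeling(candidates):
    """Greedy point labeling: 'candidates' is a list of
    (priority, [candidate rectangles]) per point. Points are handled in
    decreasing priority; each takes its first occlusion-free candidate."""
    placed = []
    for priority, rects in sorted(candidates, key=lambda c: -c[0]):
        for rect in rects:
            if not any(overlaps(rect, p) for p in placed):
                placed.append(rect)
                break
    return placed
```

In an interactive setting this routine would run incrementally per frame, which is where a spatial partition of screen space pays off.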
Routes to Remember: Comparing Verbal Instructions and Sketch Maps
Abstract
Sketch maps of routes have been widely used to externalize human spatial knowledge and to study wayfinding behavior. However, there are few studies on what route information people recall from verbal instructions, and how, when drawing sketch maps. This chapter aims to determine how much information, especially about landmarks and streets, people recall after completing a wayfinding task. We conducted an experiment and asked participants to draw a sketch map of the route they travelled. Landmarks were classified based on their locations on the route. Sketch maps were compared with the verbal instructions to analyze what specific landmark and street information people recalled, as well as what other information was added. Our study showed that (1) landmarks along the route were sketched as often as landmarks located at decision points; and (2) participants added landmarks and streets which were not mentioned in the verbal instructions. This chapter provides a better understanding of wayfinding strategies and spatial learning.
Vanessa Joy A. Anacta, Jia Wang, Angela Schwering

Geospatial Decision Support Services

Frontmatter
Behaviour-Driven Development Applied to the Conformance Testing of INSPIRE Web Services
Abstract
The implementation of the INSPIRE directive requires checking the conformity of a large number of network services with the implementing rules of INSPIRE. Evaluating whether a service is fully conformant with INSPIRE is complex and requires the use of specialized testing tools that should report how verification has been performed and should identify non-conformances. The use of these tools requires a high degree of technical knowledge. This makes it very difficult for non-technical stakeholders (end users, managers, domain experts, etc.) to participate effectively in conformance testing, hinders stakeholders’ understanding of the causes and consequences of non-conformant results, and may cause some stakeholders to lose interest in conformance testing. This work explores the suitability of a behaviour-driven development (BDD) approach to the conformance testing of OGC Web services in the context of the INSPIRE directive. BDD emphasizes the participation of non-technical parties in the design of acceptance tests by means of automatable abstract tests expressed in a human-readable format. Using this idea as a basis, this work describes a BDD-based workflow to derive abstract test suites and executable test suites from INSPIRE implementation requirements that can be written in the language used by non-technical stakeholders. This work also analyses whether BDD and popular BDD tools, such as Gherkin and Cucumber, are compatible with the ISO 19105:2000 testing methodology. As a demonstration, we present an online conformance tool for INSPIRE View and Discovery services that executes BDD test suites.
Francisco J. Lopez-Pellicer, Miguel Ángel Latre, Javier Nogueras-Iso, F. Javier Zarazaga-Soria, Jesús Barrera
Making the Web of Data Available Via Web Feature Services
Abstract
Interoperability is the main challenge on the way to efficiently find and access spatial data on the web. Significant contributions regarding interoperability have been made by the Open Geospatial Consortium (OGC), where web service standards to publish and download spatial data have been established. The OGC’s GeoSPARQL specification targets spatial data on the Web as Linked Open Data (LOD) by providing a comprehensive vocabulary for annotation and querying. While OGC web service standards are widely implemented in Geographic Information Systems (GIS) and offer a seamless service infrastructure, the LOD approach offers structured techniques to interlink and semantically describe spatial information. It is currently not possible to use LOD as a data source for OGC web services. In this chapter we make a suggestion for technically linking OGC web services and LOD as a data source, and we explore and discuss its benefits. We describe and test an adapter that enables access to geographic LOD datasets from within an OGC Web Feature Service (WFS), enabling most current GIS to access the Web of Data. We discuss performance tests by comparing the proposed adapter to a reference WFS implementation.
Jim Jones, Werner Kuhn, Carsten Keßler, Simon Scheider
CityBench: A Geospatial Exploration of Comparable Cities
Abstract
In many city comparisons and benchmarking attempts, scores are purely one-dimensional, with results split accordingly for each dimension. We propose a methodology for comparing cities on multiple dimensions, implemented as a map-centric web tool based on a pairwise similarity measure. The CityBench web tool provides a quick-scan geographical exploration of multidimensional similarity across European cities. With this dynamic method, the user may easily discover city peers that could face similar risks and opportunities, and consequently develop knowledge networks and share best practices. The web tool is intended to provide decision-making support to economic and financing institutions, local governments, and policy makers in Europe and beyond.
Elizabeth Kalinaki, Robert Oortwijn, Ana Sanchis Huertas, Eduardo Dias, Laura Díaz, Steven Ottens, Anne Blankert, Michael Gould, Henk Scholten
A GIS-Based Process for Calculating Visibility Impact from Buildings During Transmission Line Routing
Abstract
Planning linear infrastructures can be a tedious task in regions characterized by complex topography, natural constraints, high-density population areas, and strong local opposition. These aspects make the planning of new transmission lines complex and time consuming. The method proposed in this work uses Multi-Criteria Analysis and Least-Cost Path approaches combined with a viewshed analysis in order to identify suitable routes. The visual impact is integrated, as a cost surface, into the process and combined with natural and anthropological constraints. The cumulated visibility of each raster cell is estimated as the sum of the weighted distances between buildings and the cell itself. In order to reduce the typical zig-zags resulting from Least-Cost Path methods, a weighted straightening approach is applied. A sensitivity analysis of the weights of the visibility and the straightening is carried out in order to assess different scenarios and to compare the existing transmission line path to the proposed ones. The method is applied to a case study where an old transmission line needs to be replaced by a new one and the local grid operator needs to identify feasible routes. A set of 30 routes is identified; most of them have a lower visibility than the existing path, but only some present a comparable construction complexity.
Stefano Grassi, Roman Friedli, Michel Grangier, Martin Raubal
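The least-cost routing step can be sketched as Dijkstra's algorithm over a cost raster that already encodes visibility impact and constraints. The per-cell visibility function below uses an inverse-distance weighting for illustration only; the chapter's cumulated-visibility weighting, straightening step and Multi-Criteria Analysis are not reproduced here.

```python
import heapq
import math

def cumulated_visibility(cell, buildings, weight=1.0):
    """Illustrative visibility cost of one raster cell: inverse-distance
    weighted sum over building locations (nearer buildings weigh more)."""
    return sum(weight / (1.0 + math.hypot(cell[0] - bx, cell[1] - by))
               for bx, by in buildings)

def least_cost_path(cost, start, goal):
    """Least-cost path over a cost raster with 8-neighbour Dijkstra; the
    cost of entering a cell is its raster value (times sqrt(2) diagonally)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, math.inf):
            continue                      # stale queue entry
        r, c = cell
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    step = math.sqrt(2) if dr and dc else 1.0
                    nd = d + step * cost[nr][nc]
                    if nd < dist.get((nr, nc), math.inf):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = cell
                        heapq.heappush(pq, (nd, (nr, nc)))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

On a raster with one high-visibility cell in the middle, the computed route detours around it, which is the behavior the cost-surface integration is meant to produce.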
Metadata
Title
Connecting a Digital Europe Through Location and Place
Editors
Joaquín Huerta
Sven Schade
Carlos Granell
Copyright Year
2014
Electronic ISBN
978-3-319-03611-3
Print ISBN
978-3-319-03610-6
DOI
https://doi.org/10.1007/978-3-319-03611-3
