
2017 | Book

Environmental Software Systems. Computer Science for Environmental Protection

12th IFIP WG 5.11 International Symposium, ISESS 2017, Zadar, Croatia, May 10-12, 2017, Proceedings


About this book

This book constitutes the refereed proceedings of the 12th IFIP WG 5.11 International Symposium on Environmental Software Systems, ISESS 2017, held in Zadar, Croatia, in May 2017.
The 35 revised full papers presented together with 4 keynote lectures were carefully reviewed and selected from 46 submissions. The papers address environmental challenges and aim to provide solutions using forward-looking and leading-edge information technology. They are organized in the following topical sections: air and climate; water and hydrosphere; health and biosphere; risk and disaster management; information systems; and modelling, visualization and decision support.

Table of Contents

Frontmatter

Keynote Lectures

Frontmatter
Real-Time Web-Based Decision Support for Stakeholder Implementation of Basin-Scale Salinity Management

Real-time salinity management increases annual average salt export from the agriculture-dominated and salt-impacted San Joaquin Basin. This strategy also reduces the likelihood of potential fines associated with exceedances of monthly and annual salt load allocations, which could exceed $1 million per year based on average-year hydrology and State-mandated, TMDL-based salt load export limits. The essential components of this program include the establishment of telemetered sensor networks, a web-based information system for sharing data, a basin-scale salt load assimilative capacity forecasting model, and institutional entities tasked with performing weekly forecasts of River salt assimilative capacity and coordinating west-side drainage return flows. San Joaquin River (SJRRTM) Online (SJRO) is a new web portal that combines WARMF-Online, a dedicated web portal for sharing model input data and salt assimilative capacity forecasts, with an informational website for increasing stakeholder awareness of the unique characteristics and opportunities for enhanced water and water quality resource management in the River Basin.

Nigel W. T. Quinn, Brian Hughes, Amye Osti, Joel Herr, Elwood Raley, Jun Wang
Trends in Policy Relevant European Environmental Information Systems

The paper presents the evolution of European environmental reporting and how it has transformed information systems. It connects systemic changes in policy assessments with the recognition that information systems themselves have evolved from both a knowledge and a technology perspective. It starts out by setting the policy context, where a review of the current legislation related to environmental monitoring and reporting goes hand in hand with initiatives to promote open and distributed data access. The knowledge management model of the EEA has been developed over almost two decades and is the background against which the evolution in the way environmental data are reported and generated and indicators are developed has to be seen. This evolution is triggered by a growing need to support systemic thinking and integrative projects involving a growing set of stakeholders. To support these new demands, our ways of managing environmental data need to change. We receive larger volumes of often less homogeneous data at more frequent intervals. We need to combine data from very different sources: environmental data based on legislation; data from research and big Earth observation programmes; data from citizens and industry. These data may be structured or unstructured. While we continue investing in streamlining the data management aspects of reporting, we have to engage step by step in new approaches such as big data analytics. With these newly emerging data flows we also need to revise our information technology infrastructure by introducing more modularity and new tools.

Stefan Jensen
Big Data Storage and Management: Challenges and Opportunities

The paper focuses on today’s very popular theme: Big Data. We describe and discuss its characteristics by eleven V’s (Volume, Velocity, Variety, Veracity, etc.) and Big Data quality. These characteristics represent both data and process challenges. We then continue with the problems of Big Data storage and management. Principles of NoSQL databases are explained, including their categorization. We also briefly describe the Hadoop and MapReduce technologies, as well as their inefficiency for some interactive queries and applications in the domains of large-scale graph processing and streaming data. NoSQL databases and Hadoop M/R are designed to take advantage of cloud computing architectures and allow massive computations to be run inexpensively and efficiently. The term Big Data 1.0 was introduced for these technologies. We continue with some new approaches currently called Big Data 2.0 processing systems. In particular, four of their categories are introduced and discussed: general-purpose Big Data processing systems, Big SQL processing systems, Big Graph processing systems, and Big Stream processing systems. Attention is then devoted to Big Analytics, the main application area for Big Data storage and processing. We argue that enterprises with complex, heterogeneous environments no longer want to adopt a BI access point for just one data source (Hadoop). More heterogeneous software platforms are needed. Even Hadoop has become a multi-purpose engine for ad hoc analysis. Finally, we mention some problems with Big Data. We also note that Big Data creates a new type of digital divide: having access to and knowledge of Big Data technologies gives companies and people a competitive edge in today’s data-driven world.

Jaroslav Pokorný
Environmental Software Systems in National Park Monitoring and Management

National Park (NP) monitoring and management deals with dozens of different datasets, whose provenance is as manifold as the fields covered in the monitoring process. International, national, and federal responsibilities are found, as well as NGO databases, crowdsourcing applications, and dedicated field surveys in the R&D activities of single research groups. Environmental software systems are intelligent pencils to manage, analyse and visualize the environmental, but also the administrative, data coming from the different sources mentioned. The paper emphasizes different fields of activity in NP monitoring and management and presents software systems in use in the NP Hunsrück-Hochwald (Germany). In general, the software systems used are mostly highly adapted to the individual needs of an NP. This depends on the specific landscape, its features, or the research focus in the park, to name but a few factors. The software solutions are realized as customizations of standard software products or as individual software packages, designed and developed according to the special requirements of the fields of activity in a dedicated park. Regarding future developments, no significant changes are expected: the heterogeneity of the data and software used will remain similar to what it was in the past and is at present. Because of the long-lasting perspective in NP research and management, there is one important action the NP administration should focus on: proper documentation of methods, datasets, publications and information systems targeting the NP, to make monitoring and management activities transparent, accessible and ready for future re-use.

Peter A. Fischer-Stabel

Air and Climate

Frontmatter
Hough-Transform-Based Interpolation Scheme for Generating Accurate Dense Spatial Maps of Air Pollutants from Sparse Sensing

Air pollution is a significant health risk factor and causes many negative effects on the environment. Thus arises the need for studying and assessing air quality. Today, air-pollution assessment is mostly based on data acquired from Air Quality Monitoring (AQM) stations. These AQM stations provide continuous measurements and are considered accurate; however, they are expensive to build and operate, and are therefore sparsely scattered. To cope with this limitation, the information obtained from those measurements is typically generalized with interpolation methods such as IDW or Kriging. Yet, the mathematical basis of those schemes dictates that pollution extremum values are obtained at the measuring points. In addition, they do not consider the location of the pollution source or any physicochemical characteristics of the pollutant, and hence do not reveal the real spatial air-pollution patterns. This research introduces a new interpolation scheme which breaks the interpolation process into two stages. In the first stage, the source of pollution and its estimated emission rate are inferred through a detection procedure based on the Hough Transform. In the second stage, based on the detected source location and emission, spatially dense pollution maps are created. The method requires a dispersion model to be assumed for its computation. To this end, any model, however sophisticated, can be used. Spatial maps created with simplified dispersion models in a computational simulation show that the suggested interpolation scheme manages to create more accurate and more physically reasonable maps than the state of the art.

Asaf Nebenzal, Barak Fishbain
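
To make the two-stage idea above concrete, here is a minimal, illustrative Python sketch (not the authors' implementation): a Hough-style voting pass scores candidate source cells by how well a simplified isotropic decay kernel explains the sparse AQM readings, and the winning source is then used to render a dense map. The kernel, grid and readings are invented for illustration.

    import numpy as np

    def decay(dist):
        # Simplified isotropic dispersion kernel; any real dispersion
        # model (e.g. a Gaussian plume) could be substituted here.
        return 1.0 / (1.0 + dist ** 2)

    def detect_source(xs, ys, readings, nx=10, ny=10):
        # Stage 1, Hough-style voting: every candidate source cell gets a
        # score for how well it explains all sparse AQM readings at once.
        best, best_err = None, np.inf
        for gx in range(nx):
            for gy in range(ny):
                basis = decay(np.hypot(xs - gx, ys - gy))
                q = (readings @ basis) / (basis @ basis)   # emission estimate
                err = np.sum((readings - q * basis) ** 2)
                if err < best_err:
                    best, best_err = (gx, gy, q), err
        return best

    xs, ys = np.array([0.0, 4.0, 9.0]), np.array([0.0, 5.0, 2.0])
    readings = np.array([8.0, 3.0, 1.0])                   # sparse measurements
    sx, sy, q = detect_source(xs, ys, readings)

    # Stage 2: dense map rendered from the detected source and emission.
    gx, gy = np.meshgrid(np.arange(10), np.arange(10))
    dense = q * decay(np.hypot(gx - sx, gy - sy))
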
A New Feature Selection Methodology for Environmental Modelling Support: The Case of Thessaloniki Air Quality

The status of environmental systems is described via a (usually large) set of parameters. Relevant models therefore employ a large feature space, making feature selection a necessity for better modelling results. Many methods have been used to reduce the number of features while safeguarding environmental model performance and keeping computational time low. In this study, a new feature selection methodology is presented, making use of the Self-Organizing Maps (SOM) method. SOM visualization values are used as a similarity measure between the parameter that is to be forecasted and the parameters of the feature space. The method leads to the smallest set of parameters that surpass a similarity threshold. Results obtained for the case of Thessaloniki air quality forecasting are comparable to what established feature selection methods offer.

Nikos Katsifarakis, Kostas Karatzas
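
As an illustration of the methodology described above, the sketch below trains a small Self-Organizing Map on a synthetic target-plus-features dataset and selects features whose SOM component planes are similar to the target's plane (approximated here by correlating codebook columns). The data, map size and threshold are hypothetical, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                          # hypothetical features
    target = 0.8 * X[:, 0] + rng.normal(scale=0.3, size=200)   # to be forecasted
    data = np.column_stack([target, X])
    data = (data - data.mean(0)) / data.std(0)             # standardize

    # Minimal 6x6 SOM trained on [target | features], one pass over the data.
    w = rng.normal(size=(36, data.shape[1]))
    for t, x in enumerate(data[rng.permutation(200)]):
        bmu = np.argmin(((w - x) ** 2).sum(1))             # best matching unit
        d = (np.abs(np.arange(36) // 6 - bmu // 6)
             + np.abs(np.arange(36) % 6 - bmu % 6))        # grid distance to BMU
        h = np.exp(-d / max(1.0, 3.0 * (1 - t / 200)))     # shrinking neighbourhood
        w += 0.5 * (1 - t / 200) * h[:, None] * (x - w)    # decaying learning rate

    # Component-plane similarity between the target (column 0) and each feature.
    sims = [abs(np.corrcoef(w[:, 0], w[:, j])[0, 1]) for j in range(1, w.shape[1])]
    selected = [j for j, s in enumerate(sims) if s > 0.6]  # threshold is illustrative
    print(selected)
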
Approaches to Fuse Fixed and Mobile Air Quality Sensors

Nowadays, air quality is identified as one of the key factors in assessing the quality of life in urban areas. Traditional measuring procedures rely on expensive equipment in fixed monitoring stations, which is not well suited for urban areas because of the low spatio-temporal density of measurements. On the other hand, the technological development of small wearable sensor devices has created new opportunities for air pollution monitoring. In this paper, we therefore discuss statistical approaches to fuse the data from fixed and mobile sensors for air quality monitoring.

Gerhard Dünnebeil, Martina Marjanović, Ivana Podnar Žarko
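
One standard statistical approach of the kind discussed above is inverse-variance weighting, shown in this hedged sketch: a reading from an accurate fixed station is fused with two noisier co-located mobile readings. All numbers are invented.

    import numpy as np

    def fuse(means, variances):
        # Inverse-variance weighting: a common way to fuse heterogeneous
        # sensors; more uncertain readings get proportionally less weight.
        w = 1.0 / np.asarray(variances)
        return float((w * means).sum() / w.sum()), float(1.0 / w.sum())

    # Fixed station: accurate; wearable sensors: noisy but locally dense.
    est, var = fuse(means=np.array([41.0, 48.0, 52.0]),
                    variances=np.array([1.0, 25.0, 25.0]))
    print(f"fused NO2 estimate: {est:.1f} ug/m3 (variance {var:.2f})")
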
EO Big Data Connectors and Analytics for Understanding the Effects of Climate Change on Migratory Trends of Marine Wildlife

This paper describes the current ongoing research activities concerning the intelligent management and processing of Earth Observation (EO) big data, together with the implementation of data connectors, advanced data analytics and Knowledge Base services on a Big Data platform in the EO4wildlife project (www.eo4wildlife.eu). These components support the discovery of marine wildlife migratory behaviours, some of which may be a direct consequence of changing Met-Ocean resources and global climate change. In EO4wildlife, we specifically focus on the implementation of web-enabled advanced analytics services which comply with OGC standards, and we make them accessible to a wide research community for investigating trends of animal behaviour around specific marine regions of interest. Big data connectors and a catalogue service are being installed to enable access to COPERNICUS Sentinel and ARGOS satellite big data, together with other in situ heterogeneous sources. Furthermore, data mining services are being developed for knowledge extraction on species habitats and temporal behaviour trends. High-level fusion and reasoning services which process big data observations are also deployed to forecast marine wildlife behaviour with estimated uncertainties. These will be tested and demonstrated under targeted thematic scenarios in EO4wildlife using a Big Data platform and cloud resources.

Z. A. Sabeur, G. Correndo, G. Veres, B. Arbab-Zavar, J. Lorenzo, T. Habib, A. Haugommard, F. Martin, J.-M. Zigna, G. Weller

Water and Hydrosphere

Frontmatter
Quick Scan Tool for Water Allocation in the Netherlands

In the Netherlands, suitable water allocation decisions are required to ensure fresh water availability under dry conditions, now and in the future. A high-resolution integrated surface- and groundwater model of the Netherlands, called the National Hydrological Model, exists to support water management decisions on a national scale. Given the run times of this model, it is less suited to screening water allocation alternatives that deviate from common practice. Therefore, policy makers and operational water managers within the Ministry of Infrastructure and Environment felt the need for a tool that can assist in the screening of alternative water allocation strategies. This Quick Scan Tool uses a coarse-scale network model of the Netherlands water system to compute the water allocation pattern given water demands and boundary conditions as provided by the National Hydrological Model. To accommodate the priority-based water allocation policies commonly used in the Netherlands, a lexicographic goal programming technique is used to solve the water allocation problem. The tool has been developed using RTC-Tools 2 as the computation engine and Delft-FEWS as the front-end, where Delft-FEWS is also responsible for workflow and data management. This paper presents the Quick Scan Tool, including the mathematical techniques used and the validation of the results against the allocations computed by the National Hydrological Model.

P. J. A. Gijsbers, J. H. Baayen, G. J. ter Maat
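
A minimal sketch of the lexicographic goal programming technique named above, using SciPy's linear programming solver: shortages are minimised priority by priority, fixing each achieved goal before moving to the next. The two-demand system and all numbers are invented; the real tool solves this over a network model with RTC-Tools 2.

    from scipy.optimize import linprog

    supply = 100.0
    demands = [60.0, 70.0]          # priority 1 (e.g. drinking water), priority 2
    # Variables: x1, x2 = allocations; s1, s2 = shortages (demand - allocation).
    A_eq = [[1, 0, 1, 0], [0, 1, 0, 1]]
    b_eq = demands
    A_ub = [[1, 1, 0, 0]]           # total allocation limited by supply
    b_ub = [supply]
    bounds = [(0, None)] * 4

    # Step 1: minimise the priority-1 shortage only.
    r1 = linprog(c=[0, 0, 1, 0], A_ub=A_ub, b_ub=b_ub,
                 A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    # Step 2: fix the achieved priority-1 goal, then minimise the
    # priority-2 shortage (lexicographic ordering of the goals).
    A_eq2 = A_eq + [[0, 0, 1, 0]]
    b_eq2 = b_eq + [r1.x[2]]
    r2 = linprog(c=[0, 0, 0, 1], A_ub=A_ub, b_ub=b_ub,
                 A_eq=A_eq2, b_eq=b_eq2, bounds=bounds, method="highs")
    print(r2.x)   # allocations x1, x2 and shortages s1, s2
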
Information System as a Tool for Marine Spatial Planning: The SmartSea Vision and Prototype

Planning the use of marine areas requires support in allocating space to particular activities, assessing impacts and cumulative effects of activities, and generic decision making. The SmartSea project studies the Gulf of Bothnia, the northernmost arm of the Baltic Sea, as a resource for sustainable growth. One objective of the project is to provide an open source and open access toolbox for marine spatial planning. Here we present a vision for the toolbox and an initial prototype. The vision is based on a model of the information system as a meeting point of users, information providers, and tasks. A central technical requirement for the system was found to be a data model and related database for spatial planning and a programmable map service. An initial prototype of the system exists, comprising a database, a data browser/editor, a dynamic tile map service, a web mapping application, and extensions for a desktop GIS.

Ari Jolma, Ville Karvinen, Markku Viitasalo, Riikka Venesjärvi, Jari Haapala
Mobile Crowd Sensing of Water Level to Improve Flood Forecasting in Small Drainage Areas

Flood forecasting is particularly difficult and uncertain for small drainage basins. One reason for this is inadequate temporal and spatial hydrological input variables for model-based flood predictions. Incorporating additional information collected by volunteers with the help of their smartphones can improve flood forecasting systems. Data collected in this way is often referred to as VGI data (Volunteered Geographic Information data). This paper discusses how this information can be incorporated into a flood forecasting system to support flood management in small drainage basins on the basis of mobile VGI data. It outlines the main functional components involved in such a VGI-based flood forecasting platform and presents the component for mobile data acquisition (mobile sensing) in more detail. In this context, relevant measurement variables are first introduced, and then suitable methods for recording these data with mobile devices are described. The focus of the paper lies on discussing various methods for measuring the water level using inbuilt smartphone sensors. For this purpose, three different image-based methods for measuring the water level at the banks of small rivers using a mobile device and the inbuilt orientation and camera sensors are explained in detail. It is shown that performing the measurements with the user’s help via appropriate user interaction, and utilising known structures at the measuring points, results in a rather robust image-based measurement of the water level. A preliminary evaluation of the methods under ideal conditions found that the developed measurement techniques can achieve both an accuracy and a precision of better than 1 cm.

Simon Burkard, Frank Fuchs-Kittowski, Anna O’Faolain de Bhroithe
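
A hedged sketch of one way such a measurement can work (the paper's three image-based methods are more elaborate): with the phone held at a known height above a reference point on the bank and the pitch angle toward the waterline read from the orientation sensor, the water level follows from simple trigonometry. All values here are illustrative.

    import math

    def water_level(h_cam_m, dist_m, pitch_deg):
        # Camera at a known height above a reference point (e.g. a gauge
        # base), looking down at the waterline at a known horizontal distance.
        drop = dist_m * math.tan(math.radians(pitch_deg))
        return h_cam_m - drop        # water level above the reference point

    # Pitch of 15.2 deg from the phone's orientation sensor, 5 m to the waterline.
    print(f"{water_level(h_cam_m=1.60, dist_m=5.0, pitch_deg=15.2):.2f} m")
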
Flood Modelling and Visualizations of Floods Through 3D Open Data

This paper is devoted to 3D modelling at the city level from data sources considered open. The open data presented in this paper enable free usage, modification, and sharing by anyone for any purpose. The main motivation was to verify the feasibility of a 3D visualization of floods based purely on open technologies and data. The presented state-of-the-art analysis comprises the evaluation of available 3D open data sources, including formats, Web-based technologies, and software used for visualization of 3D models. A pilot Web application visualizing floods was developed to verify the applicability of the discovered data sources. 3D visualizations of terrain models, 3D buildings, flood areas, flood walls and other related information are available in the pilot application for a selected part of the city of Prague. The management of different types of input data, the design of interactive functionality including navigation aids, and current limitations and opportunities for future development are discussed in detail at the end.

Lukáš Herman, Jan Russnák, Tomáš Řezník
Use of the Hydro-Salinity, Crop Production Optimization Model APSIDE to Validate Results from an Updated Regional Flow Model of the San Joaquin River Basin

APSIDE is an optimization model capable of simulating irrigation hydrology and agricultural production under saline conditions. The model has been used in the past to predict agricultural production under future climate change in the San Joaquin River Basin of California (Quinn et al. 2004). In this study the model was used to query the results from a highly regarded, published regional surface-groundwater flow model of the Central Valley of California, CVHM (Faunt et al. 2009), which includes the San Joaquin Basin. The APSIDE model was updated using recent aquifer and climate data and provided common initial conditions to allow a 53-year comparative simulation of the models. Model outputs for individual water districts for parameters such as deep percolation and upflux in APSIDE were compared to identical drained subareas within the CVHM model. The comparison showed that the APSIDE model produced lower values of deep percolation and upflux than CVHM. CVHM’s deep percolation values were 18% higher in Panoche WD, 40% higher in Broadview WD, 68% higher in San Luis WD, and 46% higher in Pacheco WD. Unlike the CVHM model, which assumes fixed levels of irrigation and drainage technology and a static average water district irrigation efficiency, APSIDE will substitute more cost-effective irrigation and drainage technologies based on the calculated future benefit stream relative to the cost of production and the impact of salinity on crop yields. An unpublished recent update to the current CVHM model (CVHM-2), which substitutes actual irrigation diversion records from delivery canals for the usually-reliable Agency records, produced water district irrigation diversions that were approximately 50% of the previously provided diversion data. The new model produces water district aquifer recharge estimates that correlate closely with APSIDE model output. This study demonstrates the successful use of a complementary agricultural production optimization and hydro-salinity simulation model to help validate a radical and important update to a widely distributed and well-accepted regional groundwater flow model.

Nigel W. T. Quinn, John Cronin
Business Intelligence and Geographic Information System for Hydrogeology

We have developed the Hydrogeological Information System (HgIS). Its purpose is to load data from available data sources of any kind, to visualize and analyze the data, and to implement simple models. HgIS is mostly built upon the Pentaho business intelligence (BI) platform; in comparison to enterprise BI solutions, it uses only some components of BI. The adequacy and limitations of data warehousing and BI applications for groundwater data are discussed. Data extraction, transformation and loading focuses on the integration of a wide variety of structured and semi-structured data. The data warehouse uses a hybrid snowflake/star schema. Inmon’s paradigm is used because the data semantics are known and the volume of data is limited. HgIS is data agnostic, database agnostic, scalable and interoperable. The architecture of the system corresponds to a spatial business intelligence solution (GeoBI): a combination of BI and geographic information systems (GIS). Groundwater practitioners have worked with GIS software for decades, but BI technologies and tools have not previously been applied to groundwater data.

Kamil Nešetřil, Jan Šembera

Health and Biosphere

Frontmatter
A Pilot Interactive Data Viewer for Cancer Screening

The paper introduces the processing, modelling, analysis and visualisation of data on cancer epidemiology and cancer care in compliance with a proven and validated methodology. We aim to provide online access to unique data on cancer care and cancer epidemiology, including interactive visualisation of various analytical reports, in order to provide relevant information to the general public as well as to experts such as health care managers, environmental experts and risk assessors. The data viewer has been developed and implemented as a web-based application, making the very time-consuming process of data analysis fully automatic. The presented data contain dozens of validated epidemiological trends in the form of tables, graphs and maps.

Ladislav Dušek, Jan Mužík, Matěj Karolyi, Michal Šalko, Denisa Malúšková, Martin Komenda
GMP Data Warehouse – a Supporting Tool of Effectiveness Evaluation of the Stockholm Convention on Persistent Organic Pollutants

The Stockholm Convention on Persistent Organic Pollutants is a multilateral environmental agreement focused on selected persistent organic pollutants (POPs) for which the contracting Parties must adopt measures to eliminate or reduce their production and use, or to minimise their unintentional releases. One of the tools for the effectiveness evaluation of the Stockholm Convention is the Global Monitoring Plan for Persistent Organic Pollutants (GMP), a project that aims to collect global data on POPs concentrations in selected environmental matrices. This paper introduces the GMP Data Warehouse, an information system developed to provide user-friendly tools for the collection, storage, analysis and visualisation of data from international POPs monitoring activities.

Jakub Gregor, Jana Borůvková, Richard Hůlek, Jiří Kalina, Kateřina Šebková, Jiří Jarkovský, Ladislav Dušek, Jana Klánová
A Variable Length Chromosome Genetic Algorithm Approach to Identify Species Distribution Models Useful for Freshwater Ecosystem Management

Increasing pressure on freshwater ecosystems requires river managers and policy makers to take actions to protect ecosystem health. Species distribution models (SDMs) are identified as appropriate tools to assess the effect of pressures on ecosystems. A number of methods are available to model species distributions; however, it remains a challenge to identify well-performing models from a large set of candidates. Metaheuristic search algorithms can help identify appropriate models by scanning possible combinations of explanatory model variables, model parameters and interaction functions. This large search space can be efficiently scanned with simple genetic algorithms (SGAs). In this paper, we test the potential of a variable length chromosome SGA to perform parameter estimation (PE) and input variable selection (IVS) for a macroinvertebrate SDM. We show that the SGA is an appropriate tool to identify fair- to satisfactorily-performing SDMs. In addition, we show that SGA performance and uncertainty vary as a function of the chosen hyperparameters. The results can help further optimise the algorithm so that models explaining species distributions can be identified and used for analysis in river management.

Sacha Gobeyn, Peter L. M. Goethals
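
The sketch below illustrates, with invented data, the core of a variable-length chromosome SGA for input variable selection: chromosomes are variable-length lists of feature indices, recombined with one-point crossover and add/drop mutation. The fitness function is a stand-in for an actual SDM performance measure.

    import random

    FEATURES = list(range(12))                  # candidate explanatory variables
    random.seed(1)
    SKILL = {j: random.random() for j in FEATURES}   # hypothetical per-variable skill

    def fitness(chrom):
        # Stand-in for an SDM skill score obtained by fitting the model with
        # these inputs, plus a small parsimony penalty on chromosome length.
        if not chrom:
            return 0.0
        return sum(SKILL[j] for j in chrom) / len(chrom) - 0.02 * len(chrom)

    def crossover(a, b):
        # One-point crossover that tolerates parents of different lengths.
        i, j = random.randint(0, len(a)), random.randint(0, len(b))
        return list(dict.fromkeys(a[:i] + b[j:]))

    def mutate(chrom):
        if chrom and random.random() < 0.3:
            chrom.remove(random.choice(chrom))                  # drop a variable
        if random.random() < 0.3:
            chrom = list(dict.fromkeys(chrom + [random.choice(FEATURES)]))
        return chrom or [random.choice(FEATURES)]

    pop = [random.sample(FEATURES, random.randint(1, 6)) for _ in range(20)]
    for _ in range(40):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                       # elitist selection
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(10)]
    print(sorted(pop[0]), round(fitness(pop[0]), 3))
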
Conceptual Design of a Software Tool for Management of Biological Invasion

Invasion by alien species is recognized as one of the most pressing global challenges, altering the composition, structure and functioning of invaded ecosystems as well as the services they generated before the invasion. We consider the case of Norway maple (Acer platanoides), which was intentionally introduced to North America as an ornamental street shade tree but is now viewed as a serious threat to native forest ecosystems in the United States and Canada. Decisions about the management of invasive cases are inherently difficult because of the multifactorial and multiattribute scope of the problem. To facilitate management efforts, decision-makers and environmental practitioners require a software tool integrating relevant knowledge and acting as a supporting expert. The underlying methodology, the conceptual design of the tool and its main modules are discussed in the paper. In particular, we argue for an approach that takes into account the entire ecosystem purview of the problem, the phases of the invasion process, tree development stages and the driving mechanisms underlying cases of biological invasion. The functional architecture of a software tool for environmental modelling and decision-making in managing invasive cases (EMDMIC) is presented. Largely, EMDMIC consists of three main modules: “Factors”, “Ecosystem Modelling” and “Management”. The functionality of each module is articulated in the paper. At the current stage of architectural design, the principles of multi-layered design and platform independence have been applied. These keep the options for future implementations of the tool open and also make it potentially suitable for various target environments.

Peter A. Khaiter, Marina G. Erechtchoukova
Open Farm Management Information System Supporting Ecological and Economical Tasks

A Farm Management Information System (FMIS) is a sophisticated tool managing geospatial data and functionalities, as it provides answers to two basic questions: what has happened and where. The presented FOODIE (Farm-Oriented Open Data in Europe) and DataBio (Data-Driven Bioeconomy) approach may be recognized as an OpenFMIS, where environmental and reference geospatial data for precision agriculture are provided free of charge. On the other hand, added-value services like yield potential, sensor monitoring, and/or machinery fleet monitoring are provided on a paid basis through standardised Web services, due to the costs of hardware and non-trivial computations. Results, i.e. reference, environmental and farm-oriented geospatial data, may be obtained from the FOODIE platform. All such results are used in the European DataBio project in order to minimise the environmental burden while maximising the economic benefits.

Tomáš Řezník, Karel Charvát, Vojtěch Lukas, Karel Charvát Junior, Michal Kepka, Šárka Horáková, Zbyněk Křivánek, Helena Řezníková

Risk and Disaster Management

Frontmatter
Large Scale Surveillance, Detection and Alerts Information Management System for Critical Infrastructure

A proof-of-concept system for large scale surveillance, detection and alerts information management (SDAIM) is presented in this paper. Various aspects of building the SDAIM software system for large scale critical infrastructure monitoring and decision support are described. The work is currently being developed in the large collaborative ZONeSEC project (www.zonesec.eu). ZONeSEC specializes in the monitoring of so-called wide-zones: large critical infrastructures which require 24/7 monitoring for safety and security. This involves integrated in situ and remote sensing together with large scale stationary sensor networks, supported by cross-border communication. In ZONeSEC, the sensors deployed around the critical infrastructure may include accelerometers mounted on perimeter fences; underground acoustic sensors; and optical, thermal and hyperspectral video cameras or radar systems mounted in strategic areas or on airborne UAVs for mission exploration. The SDAIM system design supports the ingestion of environmental observations from these various types of sensor platforms across wide-zones and provides large scale distributed data fusion and reasoning with near-real-time messaging and alerts for critical decision support. On a functional level, the system design is founded on the JDL/DFIG (Joint Directors of Laboratories/Data Fusion Information Group) data and information fusion model. Further, it is technologically underpinned by proven Big Data technologies for distributed data storage and processing as well as on-demand access to intelligent data analytics modules. The SDAIM system development will be piloted and validated at various selected ZONeSEC project wide-zones [1]. These include water, oil and transnational gas pipelines and motorways in six European countries.

Z. Sabeur, Z. Zlatev, P. Melas, G. Veres, B. Arbab-Zavar, L. Middleton, N. Museux
ENSURE - Integration of Volunteers in Disaster Management

Volunteers can be a valuable support for disaster management. Disaster management must be able to coordinate volunteers in order to benefit from their support. Interactive, collaborative, and mobile technologies have the potential to overcome the challenges involved in integrating volunteers into crisis and disaster management. This paper presents the ENSURE system, designed to effectively integrate volunteers for an improved approach to disaster management. The system supports the aid forces in recruiting, managing, activating, and coordinating volunteers in the event of a large-scale emergency. To achieve this, ENSURE provides the necessary functions, such as volunteer registration, volunteer profiles, alerting, and volunteer activation (via the mobile app). The system uses a subscription-based approach in which volunteers agree to take part in an emergency operation by responding to an alert. The system architecture provides technical insights into how to implement crowdtasking systems. It contains seven logical components providing the necessary features and was designed to ensure scalability, performance, and availability. The results of a first comprehensive evaluation of ENSURE, performed as a large-scale exercise directed by the disaster relief forces in Berlin, were positive without exception in all three evaluation areas: efficiency & security, clarity & usability, and reliability & availability.

Frank Fuchs-Kittowski, Michael Jendreck, Ulrich Meissen, Michel Rösler, Eridy Lukau, Stefan Pfennigschmidt, Markus Hardt
C2-SENSE – Pilot Scenario for Interoperability Testing in Command & Control Systems for Crises and Disaster Management: Apulia Example

Different organizations with their Command & Control (C2) and Sensing Systems have to cooperate and constantly exchange and share data and information in order to manage emergencies, crises and disasters. Although individual standards and specifications are usually adopted in C2 and Sensing Systems separately, there is no common, unified interoperability specification to be adopted in an emergency situation, which creates a crucial interoperability challenge for all the organizations involved. To address this challenge, we introduce a novel and practical profiling approach, which aims at achieving seamless interoperability of C2 and Sensing Systems in emergency management. At the end of this interoperability effort, a Pilot Application is set up and will be tested in the field to demonstrate the resulting advantages. This paper gives an overview of the entities involved in the pilot application scenario and of the testing of the system functionality using predefined micro-scenarios suitable for the pilot region in Apulia.

Marco Di Ciano, Agostino Palmitessa, Domenico Morgese, Denis Havlik, Gerald Schimak
Achieving Semantic Interoperability in Emergency Management Domain

This paper describes how semantic interoperability can be achieved in the emergency management domain, where different organizations in different domains must communicate through a number of distinct standards to manage crises and disasters effectively. To achieve this goal, a common ontology is defined as a lingua franca, and the standard content models are mapped one by one to the ontology. Information represented in one standard is then converted to another according to the mappings and exchanged between parties.

Mert Gençtürk, Enver Evci, Arda Guney, Yildiray Kabak, Gokce B. Laleci Erturkmen
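
A toy sketch of the pivot-ontology idea described above: each standard's content model is mapped to common ontology terms, and a message is converted between standards via those mappings. The field names and ontology terms here are invented; the real system maps full emergency standards (e.g. the EDXL family) to a shared ontology.

    # Hypothetical field-to-ontology mappings for two standards.
    STD_A_TO_ONTO = {"incidentDescription": "hasDescription",
                     "incidentLocation": "hasLocation"}
    STD_B_TO_ONTO = {"desc": "hasDescription", "loc": "hasLocation"}

    def convert(message, src_map, dst_map):
        # Lift the message into ontology terms, then project it into the
        # destination standard's field names.
        onto = {src_map[k]: v for k, v in message.items() if k in src_map}
        inv = {term: field for field, term in dst_map.items()}
        return {inv[t]: v for t, v in onto.items() if t in inv}

    msg = {"incidentDescription": "flooded underpass", "incidentLocation": "Zadar"}
    print(convert(msg, STD_A_TO_ONTO, STD_B_TO_ONTO))
    # -> {'desc': 'flooded underpass', 'loc': 'Zadar'}
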
Framework for Enabling Technical and Organizational Interoperability in the Management of Environmental Crises and Disasters

Interoperability is a core component in the management of crises and disasters. Crises require interoperability on several different levels: physical (communication, devices and tools), operational (crisis response procedures and protocols) and document level (information exchange). Here we present the Framework that facilitates interoperability on a level that relieves a crisis manager from most burdens related to crisis response (e.g., being able to access sensor data or communicate with other responders). The software components in the Framework are described, as well as the profiling approach that is required for functioning interoperability at such a demanding level. The Framework can be implemented for dealing with environmental challenges through real-time monitoring and response. The frequency of disasters is expected to increase in the near future, mostly due to environmental changes, emphasizing the need for interoperability approaches such as the one presented in this paper.

Refiz Duro, Mert Gençtürk, Gerald Schimak, Peter Kutschera, Denis Havlik, Katharina Kutschera
An Integrated Decision-Support Information System on the Impact of Extreme Natural Hazards on Critical Infrastructure

In this paper, we introduce an Integrated Decision-Support Tool (IDST v2.0) which was developed as part of the INFRARISK project (https://www.infrarisk-fp7.eu/). The IDST is an online tool which demonstrates the implementation of a risk-based stress testing methodology for analyzing the potential impact of natural hazards on transport infrastructure networks. The IDST is enabled with a set of software workflow processes that allow the definition of multiple cascading natural hazards, their geospatial coverage and their impact on important large infrastructures, including those which are critical to transport networks in Europe. Stress tests on these infrastructures are consequently performed, together with the automated generation of useful case study reports for practitioners. An exemplar stress test study using the IDST is provided in this paper. In this study, the risks and consequences of an earthquake-triggered landslide scenario in Northern Italy are described. Further, a step-by-step account is provided of the overarching stress testing methodology, applied to the impact on a road network in the region of interest.

Z. A. Sabeur, P. Melas, K. Meacham, R. Corbally, D. D’Ayala, B. Adey
UNISDR Global Assessment Report - Current and Emerging Data and Compute Challenges

This paper discusses the data and compute challenges of the global collaboration producing the UNISDR Global Assessment Report on Disaster Risk Reduction. The assessment produces estimates, such as the “Probable Maximum Loss”, of the annual disaster losses due to natural hazards. The data is produced by multi-disciplinary teams in different organisations and countries that need to manage their compute and data challenges in a coherent and consistent manner. The compute challenge can be broken down into two phases: hazard modelling and loss calculation. The modelling is based on the production of datasets describing flood, earthquake, storm and other scenarios, typically thousands or tens of thousands of scenarios per country. Transferring these datasets for the loss calculation presents a challenge, already at the resolution currently used in the simulations. The loss calculation analyses the likely impact of these scenarios based on the location of the population and assets, and on the risk reduction mechanisms (such as early warning systems or zoning regulations) in place. As the loss calculation is the final stage in the production of the assessment report, the algorithms were optimised to minimise risks of delays. This also paves the way for a more dynamic assessment approach, allowing national or regional analysis to be refined on demand. The most obvious driver of the future compute and data challenges will be the increased spatial resolution of the assessment that is needed to reflect the impact of natural disasters more accurately. However, the changes in the production model mentioned above and changing policy frameworks will also play a role. In parallel to these developments, aligning the current community engagement approaches (such as the open data portal) with the internal data management practices holds considerable promise for further improvements.

Nils gentschen Felde, Mabel Cristina Marulanda Fraume, Matti Heikkurinen, Dieter Kranzlmüller, Julio Serje
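
A hedged sketch of the loss-calculation arithmetic behind metrics such as the Probable Maximum Loss: given per-scenario losses and annual occurrence rates (all numbers invented), the Average Annual Loss is the rate-weighted sum, and the PML for a given return period is read off the loss exceedance curve.

    import numpy as np

    # Hypothetical per-scenario losses (million USD) and annual occurrence rates.
    losses = np.array([5.0, 12.0, 40.0, 150.0])
    rates  = np.array([0.20, 0.05, 0.01, 0.002])     # events per year

    aal = float((losses * rates).sum())              # Average Annual Loss

    # Loss exceedance curve: annual frequency of exceeding each loss level.
    order = np.argsort(losses)[::-1]                 # largest loss first
    exceed = np.cumsum(rates[order])                 # increasing frequencies

    # Probable Maximum Loss at a 100-year return period (frequency 1/100).
    pml_100 = float(np.interp(1 / 100, exceed, losses[order]))
    print(f"AAL = {aal:.2f} M, PML(100 yr) = {pml_100:.1f} M")
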

Information Systems

Frontmatter
netCDF-LD SKOS: Demonstrating Linked Data Vocabulary Use Within netCDF-Compliant Files

netCDF, the widely used array-oriented data container file format, has previously been extended, in an initiative called netCDF-LD, to include Linked Data metadata elements. In this paper, we build on that initiative with demonstrations of a Simple Knowledge Organization System (SKOS)-aware file format and associated tooling. First, we discuss a very simple way to reference SKOS vocabulary data stored online from netCDF files via Linked Data. Second, we describe our prototype ncskos tools, including ‘ncskosdump’, which wraps the well-known ‘ncdump’ tool used to print out netCDF headers and data. Our tools utilize features of Linked Data and SKOS vocabularies to enhance the metadata of netCDF files by allowing multilingual metadata label retrieval, alternate term name retrieval, and hierarchical vocabulary relationship navigation. In doing this, ncskosdump preserves the ncdump practice of writing output in standard CDL (network Common Data Language). To demonstrate these formats and tools, we relate how we have included URI links to SKOS concepts within a demonstration vocabulary in netCDF files, and how the ncskos tools can be used to manage these files in ways that are not possible using only regular netCDF metadata. We also discuss problems we perceived in scaling Linked Data functionality when applying it to large numbers of netCDF files or in multiple file management sessions, and how we have catered for these. Finally, we indicate some future work in the area of more comprehensive Linked Data representation in netCDF files.

Nicholas J. Car, Alex Ip, Kelsey Druken
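
A minimal sketch, using the netCDF4 Python library, of the general idea of referencing an online SKOS concept from a netCDF file via a URI-valued attribute. The attribute name and URI below are illustrative only; they do not reproduce the actual netCDF-LD convention or the authors' demonstration vocabulary.

    import netCDF4 as nc

    ds = nc.Dataset("demo.nc", "w", format="NETCDF4")
    ds.createDimension("time", 4)
    temp = ds.createVariable("sea_surface_temperature", "f4", ("time",))
    temp[:] = [14.1, 14.3, 14.2, 14.5]
    # Attach a Linked Data reference to a SKOS concept; tooling like the
    # ncskos prototypes can then resolve labels, alternate names and
    # hierarchy from the online vocabulary.
    temp.setncattr("skos__concept_uri",
                   "http://example.org/vocab/sea_surface_temperature")
    ds.close()
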
Evolution of Environmental Information Models
Reusable Properties Bound to a Persistent URI

Reusability of environmental data is essential for environmental research and control; standardized data models are being created by various organizations to facilitate this process. Due to the evolving nature of environmental science, these data models must be continuously extended to support new concepts, rapidly breaking the level of standardization achieved. The definition of reusable properties would allow for standardization of this extension process. In this paper, we first analyze the requirements on reusable properties and explain the rationale for the decision that reusable properties tightly bound to a URI would be the most apt solution; the following list of requirements was defined in order to compare the viability of the options proposed: URI coupling, DataType coupling, semantics coupling and persistence. We then go on to explore possible avenues for the implementation of reusable URI-Properties, whereby the following approaches were analysed for applicability: data types, interfaces, MOF-level adjustment of UML, and a solution utilizing stereotypes for the definition and use of reusable URI-Properties. Of these approaches, all were deemed feasible except for the MOF-level adjustment of UML, which is not possible due to cardinality constraints within the MOF definition. Examples were created for the other three possibilities, including serialization options towards XML Schema. These examples were then compared against the requirements defined for URI-Properties; based on this analysis, the UML stereotype based solution for the specification and use of reusable URI-Properties was deemed most viable and is described in further detail.

Katharina Schleidt
Semantic BMS: Ontology for Analysis of Building Operation Efficiency

Building construction has gone through a significant change with the spread of ICT during recent decades. Intelligent buildings are equipped with building automation systems (BAS) that can be remotely controlled and programmed. However, such systems lack convenient tools for data inspection, making building performance and efficiency analysis demanding on large sites. The paper presents an adaptation of the Semantic Sensor Network (SSN) ontology for use in the field of building operation analysis. The proposed Semantic BMS ontology enriches the SSN with a model of building automation data points and describes the relations between a BAS and the physical properties of a building. The proposed ontology allows facility managers to conveniently query BAS systems, providing decision support for tactical and strategic level planning.

Adam Kučera, Tomáš Pitner
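
The kind of convenient query such an ontology enables might look like the following rdflib sketch, where a facility manager asks for all data points measuring temperature in a given room. The namespace, classes and properties here are invented stand-ins, not the actual Semantic BMS vocabulary.

    from rdflib import Graph, Namespace

    BMS = Namespace("http://example.org/semantic-bms#")   # hypothetical namespace
    g = Graph()
    g.add((BMS.dp1, BMS.measures, BMS.Temperature))
    g.add((BMS.dp1, BMS.locatedIn, BMS.RoomA))
    g.add((BMS.dp2, BMS.measures, BMS.Power))
    g.add((BMS.dp2, BMS.locatedIn, BMS.RoomA))

    # All temperature data points located in RoomA.
    q = """
        PREFIX bms: <http://example.org/semantic-bms#>
        SELECT ?dp WHERE { ?dp bms:measures bms:Temperature ;
                               bms:locatedIn bms:RoomA . }
    """
    for row in g.query(q):
        print(row.dp)
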
A Generic Web Cache Infrastructure for the Provision of Multifarious Environmental Data

A microservice-based infrastructure is being used as the basis for the efficient data supply of web portals and of web-based and mobile applications of several German environmental authorities. It consists of a generic data model and a series of corresponding generic services, e.g. for the provision of master data, metrics, spatial data, digital assets, metadata, and links between them. The main objectives are the efficient provision of data as well as the use of the same data by a wide range of applications. In addition, the technologies and services used should enable data supply as open (government) data or as linked data in the sense of the Semantic Web. In a first version, these services are used exclusively for read access to the data. For this purpose, the data are usually extracted from their original systems, possibly processed, and then stored redundantly in powerful backend systems (the “Web Cache”). Generic microservices provide uniform REST interfaces to access the data. Each service can use different backend systems connected via adapters. In this way, consuming components such as frontend modules in a Web portal can transparently access various backend systems via stable interfaces, which can therefore be selected optimally for each application. A number of tools and workflows ensure the updating and consistency of the data in the Web Cache. Microservices and backend systems are operated on the basis of container virtualization using flexible cloud infrastructures.

Thorsten Schlachter, Eric Braun, Clemens Düpmeier, Christian Schmitt, Wolfgang Schillinger
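
A minimal sketch of such a read-only microservice with a swappable backend adapter, here using Flask and an in-memory dictionary as a stand-in backend. The endpoint path, class names and data are invented for illustration.

    from flask import Flask, jsonify

    class DictBackend:
        # Stand-in adapter; a real deployment would wrap e.g. a search
        # index or database behind the same small interface.
        def __init__(self, data):
            self.data = data

        def get(self, key):
            return self.data.get(key)

    app = Flask(__name__)
    backend = DictBackend({"station-7": {"name": "Rhine gauge 7", "level_cm": 312}})

    @app.route("/api/masterdata/<item_id>")
    def masterdata(item_id):
        # Uniform REST read access; the backend behind the adapter is
        # swappable without changing this stable interface.
        item = backend.get(item_id)
        return (jsonify(item), 200) if item else ("not found", 404)

    if __name__ == "__main__":
        app.run(port=8080)
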
The SensLog Platform – A Solution for Sensors and Citizen Observatories

SensLog is an integrated server-side Web-based solution for sensor data management. SensLog consists of a data model and a server-side application which is capable of storing, analyzing and publishing sensor data in various ways. This paper describes the technical advancements of the SensLog platform. SensLog receives measured data from nodes and/or gateways, stores the data in a database, pre-processes the data for easier querying if desired, and then publishes the data through a system of web services. SensLog is suitable for sensor networks with static sensors (e.g. meteorological stations) as well as for mobile sensors (e.g. tracking of vehicles, human-as-sensor). The database model is based on the standardized data model for observations from OGC Observations & Measurements. The model was extended to provide more functionality, especially in the areas of user hierarchies, alerts and the tracking of mobile sensors. The latest SensLog improvements include a new version of the database model and an API supporting citizen observatories. Examples of pilot applications using SensLog services are described in the paper.

Michal Kepka, Karel Charvát, Marek Šplíchal, Zbyněk Křivánek, Marek Musil, Šimon Leitgeb, Dmitrij Kožuch, Raitis Bērziņš
A Generic Microservice Architecture for Environmental Data Management

The growing popularity of Web applications and the Internet of Things creates an urgent need for modern scalable data management to cope with large amounts of data. In the environmental domain these problems also need a solution, because of big data coming from a large number of sensors or users (e.g. crowdsourcing applications). This paper presents an architecture that uses a microservice approach to create a data management backend for the applications mentioned. The main concept shows that microservices can be used to define separate services for different data types and management tasks. This separation leads to many advantages, such as better scalability and low coupling between different features. Two prototypes, which have already been implemented, are evaluated in this paper.

Eric Braun, Thorsten Schlachter, Clemens Düpmeier, Karl-Uwe Stucky, Wolfgang Suess
How to Start an Environmental Software Project

How to lay the groundwork for interdisciplinary teams to start communicating and collaborating effectively remains an obstacle for many environmental software efforts. In this work, a structured, participatory, interactive method is introduced: the Inception Workshop aims to assist interdisciplinary teams at the start-up stage of an environmental software project, with the goal of exploring the solution space and supporting early requirements analysis. It is an ice-breaker event to engage heterogeneous actors to open up, express their interests, and start working together to identify and solve common problems. Two installations of the workshop were conducted, and participants’ familiarity with the problem and technologies involved was captured with pre- and post-workshop questionnaires. Participant responses showed a statistically significant increase in participant confidence with concepts across disciplines.

Ioannis N. Athanasiadis

Modelling, Visualization and Decision Support

Frontmatter
Environmental Modelling with Reverse Combinatorial Auctions: CRAB Software Modification for Sensitivity Analysis

This paper builds on reverse combinatorial auction theory and selected environmental applications of it, which were presented at ISESS 2013 and ISESS 2015. It provides an approach for calculating sensitivity and proposes the necessary adjustments to the CombinatoRial Auction Body Software System (CRAB), making its use for the relevant decision-making tasks more user friendly. Two possibilities are suggested. The first approach is appropriate for cases with relatively small numbers of subjects, where it is possible to compute all feasible solutions ordered by total cost. In such cases it is possible to analyse changes of coalition structures with increasing cost. The second suggests a modification of the CRAB software which would make it possible to analyse cases with high numbers of feasible coalition structures located between the optimal coalition structure (i.e. the cost-effective one) and the structure consisting of individual projects. This approach is appropriate for complex real applications involving the setting of cost levels.

Petr Fiala, Petr Šauer
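
The first approach, computing all feasible solutions ordered by total cost, can be illustrated with this toy sketch: all coalition structures (set partitions) of three bidders are enumerated and sorted by an invented cost function with a simple synergy discount.

    def partitions(items):
        # Enumerate all coalition structures (set partitions) of the bidders.
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for part in partitions(rest):
            for i in range(len(part)):
                yield part[:i] + [[first] + part[i]] + part[i + 1:]
            yield [[first]] + part

    BASE = {"A": 10.0, "B": 12.0, "C": 9.0}        # stand-alone bid costs

    def cost(coalition):
        # Invented joint-bid cost: larger coalitions get a synergy discount.
        return sum(BASE[m] for m in coalition) * 0.9 ** (len(coalition) - 1)

    structures = sorted(partitions(list(BASE)),
                        key=lambda s: sum(cost(c) for c in s))
    for s in structures:
        print(s, round(sum(cost(c) for c in s), 2))
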
A Domain Specific Language to Simplify the Creation of Large Scale Federated Model Sets

This paper presents an attempt to address the challenge of modeling complex systems in which people, energy, and the environment meet. This challenge is met by developing a simple domain specific language for building systems models in a federated modeling environment. The language and its support infrastructure are designed for simplicity and ease of use. This language is demonstrated using a thermodynamic model of a biomass cookstove for the developing world as an example, and the use of the tools described in this paper to further extend that cookstove model into an end-to-end design tool for cookstoves and other energy systems for the developing world is discussed.

Zachary T. Reinhart, Sunil Suram, Kenneth M. Bryden
Modelling and Forecasting Waste Generation – DECWASTE Information System

The DECWASTE information system for forecasting waste generation is presented. It is based on Annexes I, II and III of Regulation (EC) No 849/2010 amending Regulation (EC) No 2150/2002 of the European Parliament and of the Council on waste statistics (WSR). DECWASTE forecasts the quantity of waste generated for each waste category listed in Section 2(1) of Annex I of the WSR at the Czech national level. Its multi-linear regression forecasting models are based on environmental as well as economic and social predictors. These models use historical data from the waste information system (ISOH) of the Czech Environmental Information Agency and sets of indicators (predictors) integrated into the forecasting models. The methodology consisted of arranging the predictors of the forecasting models into a Driving Force-Pressure-State-Impact-Response (DPSIR) framework; sensitivity analysis of the predictors enables their selection for the forecasting models and their verification using appropriate data. DECWASTE supports decisions made by the Ministry of the Environment to improve the implementation of the national Waste Management Plan.

Jiří Hřebíček, Jiří Kalina, Jana Soukopová, Eva Horáková, Jan Prášek, Jiří Valta
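
A hedged sketch of the multi-linear regression step with invented numbers: waste generation is regressed on two DPSIR-style predictors via ordinary least squares and extrapolated to assumed future predictor values. The real models use ISOH data and a larger, sensitivity-screened predictor set.

    import numpy as np

    # Hypothetical history: waste generated (kt/yr) with two predictors
    # (e.g. a GDP index and population in thousands).
    gdp   = np.array([96, 98, 100, 101, 103, 106, 110, 113], dtype=float)
    pop   = np.array([10.4, 10.4, 10.5, 10.5, 10.5, 10.5, 10.6, 10.6]) * 1000
    waste = np.array([530, 541, 548, 552, 561, 574, 590, 601], dtype=float)

    # Ordinary least squares with an intercept term.
    X = np.column_stack([np.ones_like(gdp), gdp, pop])
    beta, *_ = np.linalg.lstsq(X, waste, rcond=None)

    # Forecast for assumed future predictor values.
    x_new = np.array([1.0, 116.0, 10650.0])
    print(f"forecast: {x_new @ beta:.0f} kt")
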
Planning and Scheduling for Optimizing Communication in Smart Grids

Smart grid is a concept defining the future electricity distribution network, with the purpose of improving its reliability and efficiency and reducing its ecological impact. To achieve that, massive volumes of data need to be sensed and transmitted between the elements of the grid. A robust and efficient communication infrastructure is thus an essential part of a smart grid. Some of the data transmissions are not time-critical, providing an opportunity to improve the communication network performance. In this paper, we present a new approach to optimize smart grid communications through time-based scheduling. Additionally, we provide a review of communication technologies in smart grids and published approaches to avoiding congestion and transmission failures.

Miroslav Kadlec, Barbora Buhnova, Tomas Pitner
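
A toy sketch of time-based scheduling of non-time-critical transmissions, with invented loads: the largest deferrable jobs are placed greedily into the currently least-loaded communication slots. This is only one simple scheduling policy; the paper's approach is more general.

    # Current load per communication time slot (e.g. % of capacity).
    slots = {0: 80, 1: 95, 2: 40, 3: 30}
    # Deferrable (non-time-critical) transmissions and their loads.
    jobs = [("meter-batch-1", 15), ("firmware-log", 10), ("meter-batch-2", 20)]

    schedule = {}
    for name, load in sorted(jobs, key=lambda j: -j[1]):   # largest first
        slot = min(slots, key=slots.get)                   # least-loaded slot
        slots[slot] += load
        schedule[name] = slot
    print(schedule, slots)
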
3D Volume Visualization of Environmental Data in the Web

The environmental community has an increasing need for visualizations because of the rapidly growing amount of data gathered from various sources, e.g. sensors, users and apps. This paper presents a complex visualization for the Web that can be used to gain insight into 3D volume data. Volumes like air, lakes and seas are often visualized using 2D slices or one-dimensional diagrams that display measured values at a specific point of the volume. To gain new insight, the 3D volume has to be visualized in 3D. A technique called ray marching, well known in the computer graphics field, can be combined with modern web technologies to create such visualizations for the Web. In addition to the new visualization, this paper also presents the usage of this complex software in a visualization framework created by the same research team. This framework hides the complexity behind user-friendly web interfaces that allow one to configure the 3D volume visualization without any programming skills.

Eric Braun, Clemens Düpmeier, Stefan Mirbach, Ulrich Lang
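
For illustration, the core of ray marching can be sketched in a few lines of NumPy: orthographic rays step through a synthetic scalar volume with front-to-back alpha compositing and early ray termination. A Web implementation would run this per pixel in a shader (e.g. WebGL); the volume and transfer function here are invented.

    import numpy as np

    # Hypothetical 3D scalar field (e.g. pollutant concentration in a lake).
    vol = np.random.default_rng(0).random((32, 32, 32))

    def march(volume, step=1):
        # Orthographic rays along the z axis; front-to-back alpha
        # compositing is the core loop of ray marching.
        h, w, d = volume.shape
        color = np.zeros((h, w))
        alpha = np.zeros((h, w))
        for z in range(0, d, step):
            sample = volume[:, :, z]
            a = np.clip(0.1 * sample, 0, 1)          # opacity transfer function
            color += (1 - alpha) * a * sample        # accumulate weighted sample
            alpha += (1 - alpha) * a
            if alpha.min() > 0.99:                   # early ray termination
                break
        return color

    image = march(vol)
    print(image.shape, image.max())
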
Mobile Location-Based Augmented Reality Framework

GeoAR, or location-based augmented reality, can be used as an innovative representation of location-specific information in diverse applications. However, there are hardly any software development kits (SDKs) that can be used effectively by developers, as important functionality and customisation options are generally missing. This article presents the concept, implementation and example applications of a framework, or GeoAR SDK, that integrates the core functionality of location-based AR and enables developers to implement customised and highly adaptable mobile applications with GeoAR.

Simon Burkard, Frank Fuchs-Kittowski, Sebastian Himberger, Fabian Fischer, Stefan Pfennigschmidt
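
One core GeoAR computation such an SDK has to provide can be sketched as follows (a hedged sketch with an equirectangular small-distance approximation and invented values): the bearing from the user to a point of interest is compared with the device heading and mapped into the camera's field of view to obtain a horizontal screen coordinate.

    import math

    def poi_screen_x(lat_u, lon_u, lat_p, lon_p, heading_deg,
                     fov_deg=60.0, screen_w=1080):
        # Bearing from user to POI (equirectangular approximation, fine
        # for short AR distances), mapped into the camera's field of view.
        dx = math.radians(lon_p - lon_u) * math.cos(math.radians(lat_u))
        dy = math.radians(lat_p - lat_u)
        bearing = math.degrees(math.atan2(dx, dy)) % 360
        off = (bearing - heading_deg + 180) % 360 - 180    # signed offset
        if abs(off) > fov_deg / 2:
            return None                                    # outside the view
        return screen_w / 2 + off / (fov_deg / 2) * (screen_w / 2)

    # User near Zadar looking north-east; POI slightly further north-east.
    print(poi_screen_x(44.119, 15.231, 44.121, 15.236, heading_deg=55.0))
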
Backmatter
Metadata
Title
Environmental Software Systems. Computer Science for Environmental Protection
edited by
Prof. Dr. Jiří Hřebíček
Ralf Denzer
Gerald Schimak
Tomáš Pitner
Copyright Year
2017
Electronic ISBN
978-3-319-89935-0
Print ISBN
978-3-319-89934-3
DOI
https://doi.org/10.1007/978-3-319-89935-0
