2020 | Book

Environmental Software Systems. Data Science in Action

13th IFIP WG 5.11 International Symposium, ISESS 2020, Wageningen, The Netherlands, February 5–7, 2020, Proceedings

About this book

This book constitutes the refereed proceedings of the 13th IFIP WG 5.11 International Symposium on Environmental Software Systems, ISESS 2020, held in Wageningen, The Netherlands, in February 2020.

The 22 full papers and 3 short papers were carefully reviewed and selected from 29 submissions. The papers cover a wide range of topics in environmental informatics, including data mining, artificial intelligence, high-performance and cloud computing, visualization and smart sensing for environmental, earth, agricultural and food applications.

Table of Contents

Frontmatter
Unsupervised Learning of Robust Representations for Change Detection on Sentinel-2 Earth Observation Images
Abstract
The recent popularity of artificial intelligence techniques and the wealth of free and open-access Copernicus data have led to the development of new data analytics applications in the Earth Observation domain. Among them is the detection of changes in image time series and, in particular, the estimation of the level and extent of changes. In this paper, we propose an unsupervised framework to detect generic but relevant and reliable changes using pairs of Sentinel-2 images. To illustrate this method, we present a scenario focusing on the detection of changes in vineyards due to natural hazards such as frost and hail.
Michelle Aubrun, Andres Troya-Galvis, Mohanad Albughdadi, Romain Hugues, Marc Spigai
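As an editorial illustration of the change-detection idea described in this abstract, the snippet below flags changed pixels between two co-registered multi-band images by thresholding the change-vector magnitude with Otsu's method. It is a minimal unsupervised baseline, not the authors' framework; the band layout and the synthetic data are assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu

def change_map(img_t1, img_t2):
    """Flag changed pixels between two co-registered multi-band images.

    img_t1, img_t2: arrays of shape (bands, rows, cols), e.g. Sentinel-2
    reflectances scaled to [0, 1]. Returns a boolean (rows, cols) mask.
    """
    # Per-pixel change magnitude across all bands (change vector analysis).
    diff = img_t2.astype("float64") - img_t1.astype("float64")
    magnitude = np.sqrt((diff ** 2).sum(axis=0))

    # Unsupervised split into "change" vs "no change" with Otsu's threshold.
    threshold = threshold_otsu(magnitude)
    return magnitude > threshold

# Example on synthetic data standing in for two acquisition dates.
rng = np.random.default_rng(0)
t1 = rng.uniform(0.0, 0.3, size=(4, 100, 100))
t2 = t1.copy()
t2[:, 40:60, 40:60] += 0.4           # simulate a changed patch
mask = change_map(t1, t2)
print("changed pixels:", int(mask.sum()))
```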
Dietary Intake Assessment: From Traditional Paper-Pencil Questionnaires to Technology-Based Tools
Abstract
Self-reported methods of recall and real-time recording are the most commonly used approaches to assess dietary intake, both in research and in the health-care setting. The traditional versions of these methods are limited by various methodological factors and are burdensome for interviewees and researchers. Technology-based dietary assessment tools have the potential to improve the accuracy of the data and reduce interviewee and researcher burden. Consequently, various research groups around the globe have started to explore the use of technology-based tools. This paper provides an overview of: (1) the most commonly used and generally accepted methods to assess dietary intake; (2) the errors encountered using these methods; and (3) the web-based and app-based tools (i.e., Compl-eatTM, Traqq, Dutch FFQ-TOOLTM, and “Eetscore”) that have been developed by researchers of the Division of Human Nutrition and Health of Wageningen University during the past years.
Elske M. Brouwer-Brolsma, Desiree Lucassen, Marielle G. de Rijk, Anne Slotegraaf, Corine Perenboom, Karin Borgonjen, Els Siebelink, Edith J. M. Feskens, Jeanne H. M. de Vries
Computational Infrastructure of SoilGrids 2.0
Abstract
SoilGrids maps soil properties for the entire globe at medium spatial resolution (250 m cell side) using state-of-the-art machine learning methods. The expanding pool of input data and the increasing computational demands of predictive models required a prediction framework that could deal with large volumes of data. This article describes the mechanisms set in place for a geo-spatially parallelised prediction system for soil properties. The features provided by GRASS GIS – mapset and region – are used to limit predictions to a specific geographic area, enabling parallelisation. The Slurm job scheduler is used to deploy predictions in a high-performance computing cluster. The framework presented can be seamlessly applied to most other geo-spatial processes requiring parallelisation. It can also be employed with a different job scheduler, GRASS GIS being the main requirement and engine.
Luís M. de Sousa, Laura Poggio, Gwen Dawes, Bas Kempen, Rik van den Bosch
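The abstract above describes tiling predictions by geographic region and deploying them with Slurm. The sketch below only illustrates that general pattern — splitting a bounding box into tiles and submitting one job per tile via sbatch --wrap; the tile size, the predict_tile.py script and its arguments are hypothetical, and the GRASS GIS mapset/region handling used in SoilGrids is omitted.

```python
import subprocess

def submit_tiled_jobs(xmin, ymin, xmax, ymax, tile_size, dry_run=True):
    """Split a bounding box into square tiles and submit one Slurm job each."""
    x = xmin
    while x < xmax:
        y = ymin
        while y < ymax:
            x2, y2 = min(x + tile_size, xmax), min(y + tile_size, ymax)
            # Hypothetical prediction script taking a tile's bounding box.
            cmd = f"python predict_tile.py --bbox {x} {y} {x2} {y2}"
            if dry_run:
                print("would submit:", cmd)
            else:
                subprocess.run(["sbatch", "--wrap", cmd], check=True)
            y = y2
        x = x2

# Example: 10-degree tiles over a continental window (dry run only).
submit_tiled_jobs(-20.0, 30.0, 40.0, 70.0, tile_size=10.0)
```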
Defining and Classifying Infrastructural Contestation: Towards a Synergy Between Anthropology and Data Science
Abstract
Over the last decade, infrastructure systems have come under strain around the globe. The 2008 financial crisis, the so-called fourth industrial revolution, ongoing urbanisation and climate change have contributed to the emergence of an infrastructural crisis that has been labelled the infrastructural gap. During this period, infrastructure systems have increasingly become sites of public contestation, with significant effects on their operation and governance. At stake have been issues of access to infrastructure, its social and environmental consequences and the ‘modern ideal’ embodied in the design of these socio-technical systems. With this paper we apply a cross-disciplinary methodology in order to document and define the practices of this new wave of infrastructural contestation, taking Greece in the 2008–2017 period as the case study. The synthesis of quantitative and qualitative datasets with ethnographic knowledge furthermore helps us to record tendencies and patterns in the ongoing phenomenon of infrastructural contestation (This study is part of the infra-demos project (www.infrademos.net), which is funded by a VIDI grant awarded by the Dutch Organisation of Science, PI: Prof. Dimitris Dalakoglou, Dept. of Social and Cultural Anthropology, Vrije Universiteit Amsterdam).
Christos Giovanopoulos, Yannis Kallianos, Ioannis N. Athanasiadis, Dimitris Dalakoglou
Automated Processing of Sentinel-2 Products for Time-Series Analysis in Grassland Monitoring
Abstract
Effective grassland management practices require a good understanding of soil and vegetation properties, which can be quantified through farmers’ knowledge and remote sensing techniques. Many systems have been proposed in the past for grassland monitoring, but open-source alternatives are increasingly being preferred. In this paper, a system is proposed to process data in an open-source and automated way. This system made use of Sentinel-2 data, retrieved with the help of the Sentinelsat platform from ESA’s Copernicus data hub, to support grassland management at Haus Riswick in the region around Kleve, Germany. Consecutive processing steps consisted of atmospheric correction, cloud masking, clipping the raster data, and calculation of vegetation indices. First results from 2018 reflected the mowing regime of the area, with four growing cycles, although outliers were detected where cloud cover caused a lack of data. Moreover, that year’s extremely dry summer was visible in the time-series pattern as well. The proposed script is a first version of a processing chain that can be further expanded for more advanced data pre-processing and data analysis in the future.
Tom Hardy, Marston Domingues Franceschini, Lammert Kooistra, Marcello Novani, Sebastiaan Richter
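To illustrate the kind of retrieval and index calculation described above, here is a hedged sketch using the sentinelsat Python package and a plain NumPy NDVI. The hub URL, credentials and area of interest are placeholders, and the atmospheric correction and cloud-masking steps of the actual processing chain are not shown.

```python
import numpy as np
from sentinelsat import SentinelAPI

# Credentials and hub URL are placeholders; the data-access endpoint may differ.
api = SentinelAPI("user", "password", "https://apihub.copernicus.eu/apihub")

# Query Sentinel-2 L2A scenes over an area of interest (WKT polygon) in 2018.
aoi = "POLYGON((6.08 51.78, 6.12 51.78, 6.12 51.80, 6.08 51.80, 6.08 51.78))"
products = api.query(
    aoi,
    date=("20180401", "20181001"),
    platformname="Sentinel-2",
    processinglevel="Level-2A",
    cloudcoverpercentage=(0, 30),
)
print(f"{len(products)} scenes found")
# api.download_all(products)  # uncomment to actually download

def ndvi(red, nir):
    """NDVI from red (band 4) and near-infrared (band 8) reflectance arrays."""
    red, nir = red.astype("float64"), nir.astype("float64")
    return (nir - red) / np.clip(nir + red, 1e-6, None)
```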
CLARITY Screening Service for Climate Hazards, Impacts and Effects of the Adaptation Options
Abstract
The CLARITY project (www.clarity-h2020.eu) aims to implement a new generation of climate services that allow service users to perform an initial assessment of the expected climate change effects in the project area, as well as an initial assessment of the need for and usability of adaptation options in the early project planning phase. The target users of this service are consultants and urban planning experts who are not climate change experts but need to produce, as part of project planning, standardized reports indicating the climate hazard, exposure and impact data, as well as the expected impact of the adaptation options in the project area. The initial implementation of this service uses available open data to calculate the local heat hazard, population exposure and related impact indicators at the project location on the fly. In the initial implementation, the heat-related indicators can be automatically calculated for more than 400 European cities, at a spatial resolution of 500 × 500 m². Extension to flooding hazards and related impacts is being implemented. This article describes in more detail the workflow and the technical implementation of the CLARITY screening service and discusses the value, potential and limitations of the current service implementation.
Denis Havlik, Gerald Schimak, Patrick Kaleta, Pascal Dihé, Mattia Federico Leone
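As a rough illustration of how a screening service can combine hazard and exposure layers, the snippet below applies the generic hazard × exposure × vulnerability indicator on toy rasters. This is a textbook risk formula under assumed inputs, not the CLARITY project's actual algorithm or data.

```python
import numpy as np

def screening_impact(hazard, exposure, vulnerability):
    """Generic grid-based impact indicator: hazard x exposure x vulnerability.

    All inputs are 2-D arrays on the same grid (e.g. 500 m cells):
    hazard        - e.g. heat hazard normalised to [0, 1]
    exposure      - e.g. population count per cell
    vulnerability - dimensionless factor in [0, 1]
    """
    return hazard * exposure * vulnerability

# Toy 3x3 grid standing in for a 500 m raster over a project area.
hazard = np.array([[0.2, 0.5, 0.9], [0.1, 0.4, 0.8], [0.0, 0.3, 0.6]])
population = np.array([[120, 300, 50], [80, 900, 20], [10, 400, 5]])
vulnerability = np.full_like(hazard, 0.5)
impact = screening_impact(hazard, population, vulnerability)
print("people-weighted heat impact per cell:\n", impact)
```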
Diet Modelling: Combining Mathematical Programming Models with Data-Driven Methods
Abstract
Mathematical programming has been the principal workhorse behind most diet models since the 1940s. As a predominantly hypothesis-driven modelling paradigm, its structure is mostly defined by a priori information, i.e. expert knowledge. In this paper we consider two machine learning paradigms, and three instances thereof, that could help leverage readily available data and derive valuable insights for modelling healthier and acceptable human diets.
Ante Ivancic, Argyris Kanellopoulos, Johanna M. Geleijnse
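For readers unfamiliar with diet models as mathematical programs, the sketch below solves a toy least-cost diet with scipy.optimize.linprog. The foods, prices and nutrient values are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy diet model: choose daily amounts (per 100 g) of three foods that meet
# minimum energy and protein requirements at minimum cost.
foods = ["bread", "milk", "beans"]
cost = np.array([0.30, 0.25, 0.60])        # euro per 100 g
energy = np.array([265.0, 64.0, 347.0])    # kcal per 100 g
protein = np.array([9.0, 3.4, 21.0])       # g per 100 g

# linprog minimises c @ x subject to A_ub @ x <= b_ub, so minimum-intake
# constraints are written with negated coefficients.
A_ub = -np.vstack([energy, protein])
b_ub = -np.array([2000.0, 60.0])           # daily requirements
bounds = [(0, 10)] * len(foods)            # at most 1 kg of each food

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
for name, amount in zip(foods, res.x):
    print(f"{name}: {amount * 100:.0f} g")
print(f"total cost: {res.fun:.2f} euro")
```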
AGINFRA PLUS: Running Crop Simulations on the D4Science Distributed e-Infrastructure
Abstract
Virtual Research Environments (VREs) bridge the gap between the compute and storage infrastructure becoming available as the ‘cloud’ and the needs of researchers for tools supporting open science and analytics on ever larger datasets. In the AGINFRA PLUS project such a VRE, based on the D4Science platform, was examined to improve and test its capabilities for running large numbers of crop simulations at field level, based on the WOFOST-WISS model and Dutch input datasets from the AgroDataCube. Using the gCube DataMiner component of the VRE, and based on the Web Processing Service standard, a system has been implemented that runs such workloads successfully and with good performance on an available cluster, providing summarized results to agronomists for further analysis. The methods used and the resulting implementation are briefly described in this paper. Overall, the approach seems viable, opening the door to many follow-up implementation opportunities and further research, some of which are indicated in more detail in the conclusions.
M. J. Rob Knapen, Rob M. Lokers, Leonardo Candela, Sander Janssen
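The system described above is driven through the Web Processing Service (WPS) standard. The snippet below shows only a generic WPS 1.0.0 key-value Execute request via HTTP; the endpoint URL, process identifier and inputs are hypothetical, and the real D4Science DataMiner service additionally requires authentication.

```python
import requests

# Hypothetical WPS endpoint and process identifier.
WPS_URL = "https://example.org/wps"

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "crop_simulation",
    "datainputs": "field_id=12345;crop=potato;year=2018",
}
response = requests.get(WPS_URL, params=params, timeout=60)
print(response.status_code)
print(response.text[:500])   # WPS returns an XML ExecuteResponse document
```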
Redefining Agricultural Insurance Services Using Earth Observation Data. The Case of Beacon Project
Abstract
BEACON is a market-led project that couples cutting-edge Earth Observation (EO) technology with weather intelligence and blockchain to deliver a toolbox for the Agricultural Insurance (AgI) sector with timely, cost-efficient and actionable insights for the agri-insurance industry. BEACON enables insurance companies to exploit the untapped market potential of AgI, while contributing to the redefinition of existing AgI products and services. The Damage Assessment Calculator of BEACON employs remote sensing techniques in order to improve the quality and cost-effectiveness of agri-insurance by: (i) increasing the objectivity of experts’ field inspections; (ii) reducing the cost of field visits; and (iii) increasing farmers’ confidence in the estimation results, given the significant economic impact of erroneous estimation. This paper provides an analysis of the different types of EO data and remote sensing techniques implemented in the operational workflow of BEACON, which AgI companies can use to provide safe and reliable results on crop damage from storms, floods, wildfires and droughts.
Emmanuel Lekakis, Stylianos Kotsopoulos, Gregory Mygdakos, Agathoklis Dimitrakos, Ifigeneia-Maria Tsioutsia, Polimachi Simeonidou
Producing Mid-Season Nitrogen Application Maps for Arable Crops, by Combining Sentinel-2 Satellite Images and Agrometeorological Data in a Decision Support System for Farmers. The Case of NITREOS
Abstract
NITREOS (Nitrogen Fertilization, Irrigation and Crop Growth Monitoring using Earth Observation Systems) is a farm management information system (FMIS) for organic and conventional agriculture which aims to enable farmers to tackle abiotic crop stresses and control important growing parameters to ensure crop health and optimal yields. NITREOS employs a user-friendly, web-based platform that integrates satellite remote sensing data, numerical weather predictions and agronomic models, and offers a suite of farm management advisory services to address the needs of smallholder farmers, agricultural cooperatives and agricultural consultants. This paper provides an analysis of the different methodologies employed in the nitrogen fertilization service of NITREOS. The methods are based on the determination of the Nitrogen Fertilization Optimization Algorithm for cotton, maize and wheat crops. Available agro-meteorological data from two distinct agricultural regions were used for the calibration and validation of the recommended nitrogen rates.
Emmanuel Lekakis, Dimitra Perperidou, Stylianos Kotsopoulos, Polimachi Simeonidou
Using Virtual Research Environments in Agro-Environmental Research
Abstract
Tackling some of the grand global challenges, agro-environmental research has turned more and more into an international venture, in which distributed research teams work together to solve complex research questions. Moreover, the interdisciplinary character of these challenges requires that a large diversity of data sources and information be combined in new, innovative ways. There is a pressing need to support researchers with environments that allow them to work together efficiently and co-develop research. As research is often data-intensive and big data is becoming a common part of much research, such environments should also offer the resources, tools and workflows to process data at scale when needed. Virtual research environments (VREs), which combine working in the cloud with collaborative functions and state-of-the-art data science tools, are a potential solution. In the H2020 AGINFRA+ project, the usability of VREs has been explored for use cases around agro-climatic modelling. The implemented pilot application for crop growth modelling has successfully shown that VREs can support distributed research teams in co-development, help them to adopt open science, and that the VRE’s cloud computing facilities allow large-scale modelling applications.
Rob M. Lokers, M. J. Rob Knapen, Leonardo Candela, Steven Hoek, Wouter Meijninger
Can We Use the Relationship Between Within-Field Elevation and NDVI as an Indicator of Drought-Stress?
Abstract
Large farmers’ datasets can help shed light on agroecological processes if used in the context of hypothesis testing. Here we used an anonymized set of data from the geoplatform Akkerweb to better understand the correlation between within-field elevation and the normalized difference vegetation index (NDVI, a proxy for biomass). The dataset included 3249 Dutch potato fields, for each of which the cultivar, the field polygon, the year of cultivation and the soil type (clay or sandy) were known. We hypothesize that under dry conditions this correlation is negative, meaning that the lower portions of the field have more biomass because of water redistribution. From the data, we observed that in dry periods, such as the summer of 2018, the correlation was negative in sandy soils. Furthermore, we observed that early cultivars show a weaker correlation between NDVI and elevation than late cultivars, possibly because early cultivars escape part of the long dry summer spells. We conclude that the correlation between NDVI and elevation may be a useful indicator of drought stress, and that deviations from the norm may be useful for evaluating the drought resistance of individual cultivars.
Bernardo Maestrini, Matthijs Brouwer, Thomas Been, Lambertus A. P. Lotz
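The core quantity in this study, the within-field correlation between elevation and NDVI, can be computed per field as in the hedged sketch below; the synthetic elevation and NDVI values stand in for real field pixels.

```python
import numpy as np
from scipy.stats import pearsonr

def field_correlation(elevation, ndvi):
    """Pearson correlation between within-field elevation and NDVI.

    elevation, ndvi: 1-D arrays with one value per (cloud-free) pixel of a
    single field.  A negative r suggests more biomass in the lower parts of
    the field, consistent with drought stress.
    """
    valid = np.isfinite(elevation) & np.isfinite(ndvi)
    r, p = pearsonr(elevation[valid], ndvi[valid])
    return r, p

# Synthetic example: NDVI decreases with elevation plus noise.
rng = np.random.default_rng(42)
elev = rng.uniform(10.0, 12.0, 500)                 # metres above sea level
ndvi = 0.8 - 0.15 * (elev - 10.0) + rng.normal(0, 0.05, 500)
r, p = field_correlation(elev, ndvi)
print(f"r = {r:.2f}, p = {p:.3f}")
```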
Predicting Nitrogen Excretion of Dairy Cattle with Machine Learning
Abstract
Several tools, such as the farm-specific excretion tool, were developed during the past decades to support farmers in nutrient management and to meet legal requirements. This tool is used by dairy farmers to estimate the farm-specific nitrogen (N) excretion of their animals, which is calculated from farm-specific data and some normative values. Some variables, like intake of grazed grass or roughage, are hard to measure. A data-driven approach could help find structure in the data and identify key factors determining N excretion. The aim of this study was to benchmark machine learning methods such as Bayesian networks (BN) and boosted regression trees (BRT) in predicting N excretion, and to assess how sensitive both approaches are to the absence of hard-to-measure input variables. Data were collected from 25 Dutch dairy farms. In the period 2006–2018, detailed recordings of N intake and output were made during 6–10 weeks distributed over each year. Variables included milk production, feed intake and their composition. Calculated N excretion was categorized as low, medium, and high, with limits of 300 and 450 g/day/animal. Accuracy in predicting the farm-specific N excretion, and in distinguishing the low and high cases from the medium ones, was slightly better with BRT than with BN. Leaving out information on intake during grazing did not negatively influence the validation performance of either model, which opens opportunities to reduce data collection efforts on this aspect. Further analyses, such as cross-validation, are required to confirm these results.
Herman Mollenhorst, Yamine Bouzembrak, Michel de Haan, Hans J. P. Marvin, Roel F. Veerkamp, Claudia Kamphuis
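As an illustration of the boosted-trees part of this benchmark, the sketch below fits a gradient-boosting classifier to synthetic farm records categorised with the same 300 and 450 g/day limits as in the abstract. The predictor variables and their relationship to N excretion are invented for the example and do not reproduce the study's data or its Bayesian-network counterpart.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the farm recordings: milk production, feed intake
# and diet composition as predictors of N excretion (g/day/animal).
rng = np.random.default_rng(1)
n = 600
X = pd.DataFrame({
    "milk_kg_day": rng.normal(28, 5, n),
    "dry_matter_intake_kg": rng.normal(21, 3, n),
    "crude_protein_pct": rng.normal(16, 1.5, n),
})
n_excretion = (8 * X["milk_kg_day"] + 6 * X["dry_matter_intake_kg"]
               + 4 * X["crude_protein_pct"] + rng.normal(0, 60, n))

# Categorise as in the abstract: low < 300, medium 300-450, high > 450 g/day.
y = pd.cut(n_excretion, bins=[-np.inf, 300, 450, np.inf],
           labels=["low", "medium", "high"]).astype(str)

model = GradientBoostingClassifier(random_state=0)   # boosted-trees baseline
scores = cross_val_score(model, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```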
Investigation of Common Big Data Analytics and Decision-Making Requirements Across Diverse Precision Agriculture and Livestock Farming Use Cases
Abstract
The purpose of this paper is to present an investigation of the common requirements and needs of users across a diverse set of precision agriculture and livestock farming use cases, based on a series of interviews with experts and farmers. Nine interviews were conducted in order to identify common requirements and challenges in terms of data collection and management, Big Data technologies, High Performance Computing infrastructure and decision making. The common requirements derived from the interviews and the per-use-case user requirement analysis can serve as a basis for identifying functional and non-functional requirements of a technological solution with high re-usability, interoperability, adaptability and overall efficiency in addressing common needs of precision agriculture and livestock farming.
Spiros Mouzakitis, Giannis Tsapelas, Sotiris Pelekis, Simos Ntanopoulos, Dimitris Askounis, Sjoukje Osinga, Ioannis N. Athanasiadis
Quantifying Uncertainty for Estimates Derived from Error Matrices in Land Cover Mapping Applications: The Case for a Bayesian Approach
Abstract
The use of land cover mappings built using remotely sensed imagery data has become increasingly popular in recent years. However, these mappings are ultimately only models. Consequently, it is vital to be able to assess and verify the quality of a mapping and to quantify uncertainty for any estimates derived from it in a reliable manner.
For this, the use of validation sets and error matrices has long been standard practice in land cover mapping applications. In this paper, we review current state-of-the-art methods for quantifying uncertainty for estimates obtained from error matrices in a land cover mapping context. Specifically, we review methods based on their transparency, generalisability, suitability under stratified sampling and suitability in low-count situations. This is done using a third-party case study as a motivating and demonstrative example throughout the paper.
The main finding of this paper is that there is a major issue of transparency for methods that quantify uncertainty in terms of confidence intervals (frequentist methods). This is primarily because of the difficulty of analysing nominal coverages in common situations. Effectively, this leaves one without the necessary tools to know when a frequentist method is reliable in all but a few niche situations. The paper then discusses how a Bayesian approach may be better suited as a default method for uncertainty quantification when judged by our criteria.
Jordan Phillipson, Gordon Blair, Peter Henrys
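A simple instance of the Bayesian approach argued for here is to place a Beta posterior on each map class's user's accuracy, derived from its row of the error matrix. The sketch below uses a uniform prior and illustrative counts; it is one possible formulation, not necessarily the one developed in the paper.

```python
import numpy as np
from scipy.stats import beta

# Error (confusion) matrix from a validation sample: rows = map class,
# columns = reference class.  Counts are illustrative only.
error_matrix = np.array([
    [85,  5,  2],
    [ 7, 60,  3],
    [ 1,  4, 33],
])

# Bayesian user's accuracy per map class: with a uniform Beta(1, 1) prior,
# the posterior for "mapped pixel is correct" is Beta(correct+1, wrong+1).
for i, row in enumerate(error_matrix):
    correct = row[i]
    wrong = row.sum() - correct
    posterior = beta(correct + 1, wrong + 1)
    lo, hi = posterior.ppf([0.025, 0.975])
    print(f"class {i}: user's accuracy {posterior.mean():.3f} "
          f"(95% credible interval {lo:.3f}-{hi:.3f})")
```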
Machine Learning Algorithms for Food Intelligence: Towards a Method for More Accurate Predictions
Abstract
Machine learning algorithms are evidently having a wide impact on industrial applications and platforms. Beyond typical research experimentation scenarios, companies that wish to enhance their online data and analytics solutions need ways to select, experiment with, benchmark, parameterise and choose the version of a machine learning algorithm that is most appropriate for their specific application context. In this paper, we describe such a need for a big data platform that supports food data analytics and intelligence. More specifically, we introduce Agroknow’s big data platform and identify the need to extend it with a flexible and interactive experimentation environment where different machine learning algorithms can be tested using a mix of synthetic and real data. A typical usage scenario is described, based on our need to experiment with various machine learning algorithms to support price prediction for food products and ingredients. The initial requirements for an experimentation environment are also introduced.
Ioanna Polychronou, Panagis Katsivelis, Mihalis Papakonstantinou, Giannis Stoitsis, Nikos Manouselis
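The experimentation environment described above essentially automates model comparison. A minimal sketch of such a benchmark, on synthetic data standing in for food-price features, could look like this (the candidate models and metric are assumptions):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a food-price dataset: features could be historical
# prices, trade volumes and weather indicators.
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)

candidates = {
    "ridge": Ridge(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

# Benchmark each candidate with the same cross-validation split and metric,
# the kind of comparison an experimentation environment would automate.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
    print(f"{name:18s} MAE = {-scores.mean():7.2f} +/- {scores.std():.2f}")
```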
Interoperability of Solutions in a Crisis Management Environment Showcased in Trial-Austria
Abstract
Crisis Management (CM) is a challenging area when it comes to connecting solutions that aim to support the various tasks involved in handling CM situations. DRIVER+ [1], an EU-funded project launched in 2014, set up a technical infrastructure (the so-called Test-bed) that allows solutions to be interconnected, so they can interact and exchange all the crisis-relevant information that commanders need to make their decisions and plan their actions related to a specific crisis.
To verify the DRIVER+ Test-bed as well as the DRIVER+ Trial Guidance Methodology [3], and furthermore to overcome identified CM gaps [2], a series of Trials was set up. Trial-Austria was the fourth one to be executed.
This Trial was especially challenging as it was held as a field exercise in parallel to a huge European Civil Protection Exercise (called IRONORE2019). The scenario dealt with was an earthquake.
The developed methodology, the Test-bed and the various solutions taking part in DRIVER+ are a perfect base and platform to deal with whatever hazard (e.g. chemical, physical, etc.) is endangering our environment or wellbeing.
Gerald Schimak, Dražen Ignjatović, Erik Vullings, Maurice Sammels
ELFIE - The OGC Environmental Linked Features Interoperability Experiment
Abstract
The OGC Environmental Linked Features Interoperability Experiment (ELFIE) sought to assess a suite of pre-existing OGC and W3C standards with a view to identifying best practice for exposing cross-domain links between environmental features and observations. Environmental domain models concerning landscape interactions with the hydrologic cycle served as the basis for this study, whilst offering a meaningful constraint on its scope. JSON-LD was selected for serialization; this combines the power of linked data with intuitive encoding. Vocabularies were utilized for the provision of the JSON-LD contexts; these ranged from common vocabularies such as schema.org, to semantic representations of OGC/ISO observational standards, to feature models specific to the hydrological and geological domains. Example data for the selected use cases were provided by participants and shared in static form via a GitHub repository. User applications were created to assess the validity of the proposed approach as it pertains to real-world situations. This process resulted in the identification of issues whose resolution is a prerequisite for wide-scale deployment and best-practice definition. Addressing these issues will be the focus of future OGC Interoperability Experiments.
Kathi Schleidt, Michael O’Grady, Sylvain Grellet, Abdelfettah Feliachi, Hylke van der Schaaf
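To make the serialization choice concrete, here is a small, hypothetical JSON-LD document linking a feature to an observation through a schema.org-based context; it is illustrative only and not one of the ELFIE encodings.

```python
import json

# A minimal JSON-LD document linking a monitoring site (a hypothetical
# feature) to an observation resource, using schema.org terms for the context.
doc = {
    "@context": {
        "schema": "https://schema.org/",
        "name": "schema:name",
        "geo": "schema:geo",
        "latitude": "schema:latitude",
        "longitude": "schema:longitude",
        "subjectOf": {"@id": "schema:subjectOf", "@type": "@id"},
    },
    "@id": "https://example.org/features/monitoring-site-42",
    "@type": "schema:Place",
    "name": "Monitoring site 42",
    "geo": {"latitude": 51.97, "longitude": 5.66},
    "subjectOf": "https://example.org/observations/obs-001",
}
print(json.dumps(doc, indent=2))
```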
Real-Time Visualization of Methane Emission at Commercial Dairy Farms
Abstract
The Dutch government has set an objective to reduce greenhouse gas (GHG) emissions to 116 Mton CO2-equivalent in 2030. The agriculture sector aims for 11–23 Mton of GHG emissions by 2050 and thus contributes to this objective. Within this sector, the major contributor to GHG emissions in the Netherlands is the dairy sector. Before any mitigation strategies can be rolled out, some key facts about the GHG emissions on a farm need to be measured. One of these key facts is the baseline of GHG emissions on a farm (and per cow). For this, we have previously built an infrastructure to measure and collect methane and carbon dioxide in (near) real time on a farm. The next challenges, addressed in the current study, were to (1) combine the private methane data, collected in real time through this infrastructure, with open-source weather information, and (2) visualize both data streams for farmers by developing an application that can be viewed on a web or mobile phone platform.
Dirkjan Schokker, Herman Mollenhorst, Gerrit Seigers, Yvette de Haas, Roel F. Veerkamp, Claudia Kamphuis
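A core step in this study is aligning the private methane stream with open weather data before visualization. The hedged sketch below does this with pandas merge_asof on synthetic readings; the variables, frequencies and tolerance are assumptions.

```python
import pandas as pd

# Synthetic stand-ins for the two streams: barn methane readings every
# 5 minutes and open weather observations every 10 minutes.
methane = pd.DataFrame({
    "time": pd.date_range("2020-02-05 08:00", periods=12, freq="5min"),
    "ch4_ppm": [2.1, 2.3, 2.2, 2.6, 2.8, 2.7, 2.5, 2.4, 2.6, 2.9, 3.0, 2.8],
})
weather = pd.DataFrame({
    "time": pd.date_range("2020-02-05 08:00", periods=6, freq="10min"),
    "wind_speed_ms": [3.2, 3.5, 4.1, 4.0, 3.8, 3.6],
    "temperature_c": [4.5, 4.6, 4.8, 5.0, 5.1, 5.3],
})

# Align each methane reading with the nearest weather observation (within
# 10 minutes), the kind of join a real-time dashboard would perform before
# plotting both streams together.
combined = pd.merge_asof(
    methane.sort_values("time"), weather.sort_values("time"),
    on="time", direction="nearest", tolerance=pd.Timedelta("10min"),
)
print(combined.head())
```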
Design of a Web-Service for Formal Descriptions of Domain-Specific Data
Abstract
The growing relevance of Big Data and the Internet of Things (IoT) leads to a need for efficient handling of these data. One key concept to achieve efficient data handling is their semantic description. In the environmental and energy domains, these issues become ever more relevant, since measurement stations produce large amounts of data that software systems have to deal with. In the context of cloud-based infrastructure and virtualisation via containers, microservice architectures and scalability become important aspects of software engineering. This article presents the design of a web service that provides software systems with semantic descriptions of data and fosters a microservice architecture. It implements key concepts such as domain modelling, schema versioning and schema modularisation. It is evaluated and demonstrated in the context of a current environmental use case.
Jannik Sidler, Eric Braun, Thorsten Schlachter, Clemens Düpmeier, Veit Hagenmeyer
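As an illustration of such a service, the sketch below exposes versioned, domain-specific schema descriptions through a minimal FastAPI microservice. The endpoint layout, example domain and fields are assumptions, not the design presented in the paper.

```python
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Schema description service")

# In-memory store of versioned, domain-specific schema descriptions.  A real
# deployment would back this with a database and container-based scaling.
SCHEMAS = {
    ("air_quality", "1.0"): {
        "domain": "air_quality",
        "version": "1.0",
        "fields": [
            {"name": "pm10", "unit": "ug/m3", "type": "float"},
            {"name": "timestamp", "unit": None, "type": "datetime"},
        ],
    },
}

@app.get("/schemas/{domain}/{version}")
def get_schema(domain: str, version: str):
    """Return the semantic description of a domain schema in a given version."""
    schema = SCHEMAS.get((domain, version))
    if schema is None:
        raise HTTPException(status_code=404, detail="schema not found")
    return schema

# Run with: uvicorn schema_service:app --reload   (module name is an assumption)
```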

Open Access

Models in the Cloud: Exploring Next Generation Environmental Software Systems
Abstract
There is growing interest in applying the latest trends in computing and data science methods to improve environmental science. However, we found that best practice from computing domains such as software engineering and cloud computing has penetrated everyday environmental science poorly. We take from this work a real need to re-evaluate the complexity of software tools and bring these to the right level of abstraction for environmental scientists to be able to leverage the latest developments in computing. In the Models in the Cloud project, we look at the role of model-driven engineering, software frameworks and cloud computing in achieving this abstraction. As a case study we deployed a complex weather model to the cloud and developed a collaborative notebook interface for orchestrating the deployment and analysis of results. We navigate relatively poor support for complex high-performance computing in the cloud to develop abstractions over the complexity of cloud deployment and model configuration. We found great potential in cloud computing to transform science by enabling models to leverage elastic, flexible computing infrastructure and by supporting new ways to deliver collaborative and open science.
Will Simm, Gordon Blair, Richard Bassett, Faiza Samreen, Paul Young
An Environmental Sensor Data Suite Using the OGC SensorThings API
Abstract
In many application domains, sensor data contributes an important part of the situational awareness required for decision making. Examples range from environmental and climate change situations to industrial production processes. All these fields need to aggregate and fuse many data sources, the semantics of the data need to be understood, and the results must be presented to the decision makers in an accessible way. This process has already been defined as the “sensor to decision chain” [11], but which solutions and technologies can be proposed for implementing it?
Since the Internet of Things (IoT) is rapidly growing, with an estimated 30 billion sensors in 2020, it offers excellent potential to collect time-series data for improving situational awareness. The IoT also brings several challenges: owing to a splintered sensor-manufacturer landscape, data comes in various structures, with incompatible protocols and unclear semantics. To tackle these challenges, a well-defined interface from which uniform data can be queried is necessary. The Open Geospatial Consortium (OGC) has recognized this demand and developed the SensorThings API (STA) standard, an open, unified way to interconnect devices throughout the IoT. Since its introduction in 2016, it has proven to be a versatile and easy-to-use standard for exchanging and managing sensor data.
This paper proposes the STA as the central part for implementing the sensor to decision chain. Furthermore, it describes several projects that successfully implemented the architecture and identifies open issues with the SensorThings API that, if solved, would further improve the usability of the API.
Hylke van der Schaaf, Jürgen Moßgraber, Sylvain Grellet, Mickaël Beaufils, Kathi Schleidt, Thomas Usländer
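To give a flavour of how uniform the STA interface is in practice, the sketch below queries Things with their Datastreams and the latest Observations of one Datastream over plain HTTP. The base URL is a placeholder; the entity names and the $expand/$orderby/$top query options are part of the SensorThings API standard.

```python
import requests

# Base URL is a placeholder; any OGC SensorThings API v1.1 service will do.
STA_BASE = "https://example.org/FROST-Server/v1.1"

# List Things together with their Datastreams.
things = requests.get(f"{STA_BASE}/Things", params={"$expand": "Datastreams"},
                      timeout=30).json()
for thing in things.get("value", []):
    print(thing["name"], "-", len(thing.get("Datastreams", [])), "datastreams")

# Latest ten observations of a given Datastream, newest first.
obs = requests.get(
    f"{STA_BASE}/Datastreams(1)/Observations",
    params={"$orderby": "phenomenonTime desc", "$top": 10},
    timeout=30,
).json()
for o in obs.get("value", []):
    print(o["phenomenonTime"], o["result"])
```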
WISS a Java Continuous Simulation Framework for Agro-Ecological Modelling
Abstract
A simulation framework is presented (WISS, Wageningen Integrated Systems Simulator) which targets the agro-ecological modelling domain, especially simulation for a large number of locations, such as in detailed regional and global simulation studies. The framework's strengths are in modularization, control, speed, robustness and computational protection (multiple system checks during simulation). The WOFOST model is currently implemented in WISS, through which it is used in a number of Wageningen University and Research projects. WISS is written in Java and the framework code is freely available.
D. W. G. van Kraalingen, M. J. Rob Knapen, A. de Wit, H. L. Boogaard
Mathematical Estimation of Particulate Air Pollution Levels by Multi-angle Imaging
Abstract
Air pollution control and mitigation are important factors in wellbeing and sustainability. To this end, air pollution monitoring has a significant role. Today, air pollution monitoring is mainly done by standardized stations. These stations are sparsely distributed, and their cost hinders the option of adding more. Thus arises the need for cheaper and more widely available means to assess air pollution. In this article, a method for assessing air pollution levels by means of multi-angle imaging is presented. Specifically, the focus is on estimating image blur as an indication of ambient PM (Particulate Matter) levels. The suggested method applies a back-projection Radon transform. Through back-projection, the particle concentration at each voxel in a 3D space is reconstructed from photos taken from a few different angles.
Or Vernik, Barak Fishbain
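The reconstruction step can be illustrated with scikit-image's Radon transform utilities: the sketch below forward-projects a synthetic 2-D concentration field at a few angles and recovers it by filtered back-projection. It demonstrates the mathematical principle only, not the paper's image-blur estimation pipeline.

```python
import numpy as np
from skimage.transform import radon, iradon

# Synthetic 2-D particle-concentration field standing in for one horizontal
# slice of the 3-D volume discussed in the abstract.
field = np.zeros((128, 128))
field[40:60, 50:80] = 1.0          # a localised pollution "plume"

# Forward projection: line integrals at a few viewing angles, analogous to
# observations gathered from photos taken from different directions.
angles = np.linspace(0.0, 180.0, 12, endpoint=False)
sinogram = radon(field, theta=angles)

# Filtered back-projection recovers an estimate of the concentration field.
reconstruction = iradon(sinogram, theta=angles)
error = np.abs(reconstruction - field).mean()
print(f"mean absolute reconstruction error: {error:.3f}")
```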
Interpolation of Data Measured by Field Harvesters: Deployment, Comparison and Verification
Abstract
Yield is one of the key indicators in agriculture. The most common practices provide only one yield value for a whole field, according to the weight of the harvested crop. By contrast, precision agriculture techniques discover spatial patterns within a field to minimise the environmental burden caused by agricultural activities. Field harvesters equipped with sensors provide more detailed and spatially localised values. The measurements from such sensors need to be filtered and interpolated for the purposes of follow-up analyses and interpretations. This study verified the differences between three interpolation methods (Inverse Distance Weighted, Inverse Distance Squared and Ordinary Kriging) applied to field sensor measurements that were (1) obtained directly from the field harvester, (2) processed by global filters, and (3) processed by global and local filters. Statistical analyses evaluated the results of the interpolations from three fully operational Czech fields. The revealed spatial patterns, as well as recommendations regarding the suitability of the interpolation methods used, are presented at the end of this paper.
Tomáš Řezník, Lukáš Herman, Kateřina Trojanová, Tomáš Pavelka, Šimon Leitgeb
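Of the three methods compared, the two inverse-distance variants are simple enough to sketch directly; the snippet below implements power-weighted inverse-distance interpolation in NumPy (power 1 for plain inverse distance, power 2 for inverse distance squared). The measurement points and yields are invented for the example, and Ordinary Kriging is not shown.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation.

    xy_known : (n, 2) coordinates of yield measurements
    values   : (n,) measured values (e.g. t/ha from the harvester)
    xy_query : (m, 2) coordinates of grid cells to interpolate
    power    : 1.0 weights by inverse distance, 2.0 by inverse distance squared
    """
    # Pairwise distances between query points and measurements.
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Toy example: five harvester measurements interpolated onto two grid cells.
points = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
yields = np.array([7.2, 7.8, 6.9, 8.1, 7.5])
grid = np.array([[2.0, 2.0], [8.0, 8.0]])
print(idw(points, yields, grid, power=2.0))
```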
Backmatter
Metadata
Title
Environmental Software Systems. Data Science in Action
Editors
Ioannis N. Athanasiadis
Steven P. Frysinger
Gerald Schimak
Willem Jan Knibbe
Copyright Year
2020
Electronic ISBN
978-3-030-39815-6
Print ISBN
978-3-030-39814-9
DOI
https://doi.org/10.1007/978-3-030-39815-6
