
2020 | Book

Advanced Analytics and Learning on Temporal Data

4th ECML PKDD Workshop, AALTD 2019, Würzburg, Germany, September 20, 2019, Revised Selected Papers

Edited by: Vincent Lemaire, Simon Malinowski, Prof. Anthony Bagnall, Alexis Bondu, Dr. Thomas Guyet, Romain Tavenard

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of the 4th ECML PKDD Workshop on Advanced Analytics and Learning on Temporal Data, AALTD 2019, held in Würzburg, Germany, in September 2019.
The 7 full papers presented together with 9 poster papers were carefully reviewed and selected from 31 submissions. The papers cover topics such as temporal data clustering; classification of univariate and multivariate time series; early classification of temporal data; deep learning and learning representations for temporal data; modeling temporal dependencies; advanced forecasting and prediction models; spatio-temporal statistical analysis; functional data analysis methods; temporal data streams; interpretable time-series analysis methods; dimensionality reduction, sparsity, algorithmic complexity and big data challenges; and applications of temporal data analysis in bioinformatics, medicine, and energy consumption.



Oral Presentation

Robust Functional Regression for Outlier Detection
In this paper we propose an outlier detection algorithm for temperature sensor data from jet engine tests. Effective identification of outliers would enable engine problems to be examined and resolved efficiently. Outlier detection in this data is challenging because a human controller determines the speed of the engine during each manoeuvre. This introduces variability which can mask abnormal behaviour in the engine response. We therefore suggest modelling the dependency between speed and temperature in the process of identifying abnormalities. The engine temperature has a delayed response with respect to the engine speed, which we will model using robust functional regression. We then apply functional depth with respect to the residuals to rank the samples and identify the outliers. The effectiveness of the outlier detection algorithm is shown in a simulation study. The algorithm is also applied to real engine data, and identifies samples that warrant further investigation.
Harjit Hullait, David S. Leslie, Nicos G. Pavlidis, Steve King
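The abstract does not spell out the full functional-regression machinery; as a rough scalar stand-in for its residual-based ranking step, one can fit a robust regression by iteratively reweighted least squares with Huber weights and rank sample curves by their residual norms. The polynomial basis, tuning constants, and synthetic curves below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def robust_poly_fit(x, y, degree=5, n_iter=20, c=1.345):
    """Fit a polynomial by iteratively reweighted least squares with
    Huber weights, so outlying observations are down-weighted."""
    X = np.vander(x, degree + 1)
    w = np.ones_like(y)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)  # Huber weights
    return beta

# Ten sample curves, one of them abnormal; fit a shared robust model
# and rank each curve by its residual norm.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
curves = [np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50) for _ in range(9)]
curves.append(np.sin(2 * np.pi * x) + 1.5)  # shifted: the planted outlier

beta = robust_poly_fit(np.tile(x, 10), np.concatenate(curves))
X = np.vander(x, 6)
scores = [np.linalg.norm(y - X @ beta) for y in curves]
print(np.argmax(scores))  # the planted outlier ranks highest
```

The robust fit keeps the abnormal curve from distorting the reference model, which is the same motivation the paper gives for using robust rather than ordinary functional regression.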
Transform Learning Based Function Approximation for Regression and Forecasting
Regression and forecasting can be viewed as learning, from data, functions that map the appropriate input variables to output variables. To capture complex relationships among the variables, different techniques, such as kernelized dictionary learning, have been explored in the existing literature. In this paper, a transform learning based function approximation is presented which has computational and performance advantages over dictionary-based techniques. Apart from providing the formulation and the derivation of the necessary update steps, the paper presents performance results obtained with both synthetic and real data. The initial results obtained with both the basic and kernelized versions demonstrate the usefulness of the proposed technique for regression and forecasting tasks.
Kriti Kumar, Angshul Majumdar, M. Girish Chandra, A. Anil Kumar
Proactive Fiber Break Detection Based on Quaternion Time Series and Automatic Variable Selection from Relational Data
We address the problem of event classification for proactive fiber break detection in high-speed optical communication systems. The proposed approach is based on monitoring the State of Polarization (SOP) via digital signal processing in a coherent receiver. We describe in detail the design of a classifier providing interpretable decision rules and enabling low-complexity real-time detection embedded in network elements. The proposed method operates on SOP time series, which define trajectories on the 3D sphere; SOP time series are low-pass filtered (to reduce measurement noise), pre-rotated (to provide invariance to the starting point of trajectories) and converted to the quaternion domain. The quaternion sequences are then recoded as relational data for automatic variable construction and selection. We show that a naïve Bayes classifier using a limited subset of variables can achieve an event classification accuracy of more than 99% for the tested conditions.
Vincent Lemaire, Fabien Boitier, Jelena Pesic, Alexis Bondu, Stéphane Ragot, Fabrice Clérot
A Fully Automated Periodicity Detection in Time Series
This paper presents a method to autonomously find periodicities in a signal. It is based on the same idea of using the Fourier transform and the autocorrelation function presented in [12]. While that method shows interesting results, it does not perform well on noisy signals or signals with multiple periodicities. Our method therefore adds several extra steps (hints clustering, filtering and detrending) to fix these issues. Experimental results show that the proposed method outperforms state-of-the-art algorithms.
Tom Puech, Matthieu Boussard, Anthony D’Amato, Gaëtan Millerand
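The hint-and-verify idea (Fourier transform for a period hint, autocorrelation for confirmation) can be sketched minimally; the signal, noise level, and search band below are illustrative assumptions rather than the paper's exact pipeline, which adds hints clustering, filtering and detrending on top:

```python
import numpy as np

def detect_period(signal):
    """Hint the dominant period with the FFT, then confirm/refine it
    on the autocorrelation function (ACF)."""
    n = len(signal)
    x = signal - signal.mean()              # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n)
    k = 1 + int(np.argmax(spectrum[1:]))    # strongest non-DC bin
    hint = int(round(1.0 / freqs[k]))       # candidate period in samples
    acf = np.correlate(x, x, mode="full")[n - 1:]
    lo, hi = max(1, hint // 2), min(n - 1, hint * 2)
    return lo + int(np.argmax(acf[lo:hi]))  # ACF peak near the hinted lag

rng = np.random.default_rng(1)
t = np.arange(400)
sig = np.sin(2 * np.pi * t / 25) + 0.2 * rng.standard_normal(400)
print(detect_period(sig))  # close to the true period of 25 samples
```

Searching the ACF only near the FFT hint is what keeps the verification step cheap and robust to spectral leakage.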
Conditional Forecasting of Water Level Time Series with RNNs
We describe a practical situation in which the application of forecasting models could lead to energy efficiency and decreased risk in water level management. The practical challenge of forecasting water levels for the next 24 h and the available data are provided by a Dutch regional water authority. We formalized the problem as conditional forecasting of hydrological time series: the resulting models can be used for real-life scenario evaluation and decision support. We propose the novel Encoder/Decoder with Exogenous Variables RNN (ED-RNN) architecture for conditional forecasting with RNNs, and contrast its performance with various other time series forecasting models. We show that the performance of the ED-RNN architecture is comparable to that of the best performing alternative model (a feedforward ANN for direct forecasting), and that it more accurately captures short-term fluctuations in the water levels.
Bart J. van der Lugt, Ad J. Feelders
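The ED-RNN itself is a neural architecture; as a much simpler linear stand-in, the core idea of conditional forecasting with a known exogenous input can be illustrated with an ARX model fit by least squares. The coefficients, driver signal, and noise level below are invented for illustration:

```python
import numpy as np

# Water level h_t driven by its own past and an exogenous input u_t
# (all coefficients invented for illustration).
rng = np.random.default_rng(6)
T = 500
u = np.sin(np.arange(T) / 10)        # exogenous driver, e.g. pumping schedule
h = np.zeros(T)
for t in range(1, T):
    h[t] = 0.8 * h[t - 1] + 0.5 * u[t] + 0.05 * rng.standard_normal()

# Conditional forecast model: regress h_t on [h_{t-1}, u_t].
Xd = np.column_stack([h[:-1], u[1:]])
coef, *_ = np.linalg.lstsq(Xd, h[1:], rcond=None)
print(np.round(coef, 2))  # close to the true (0.8, 0.5)
```

Because u is known in advance for a scenario, such a model can be rolled forward conditionally on a planned input sequence, which is the setting the paper formalizes for its RNN.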
Challenges and Limitations in Clustering Blood Donor Hemoglobin Trajectories
In order to prevent iron deficiency, Sanquin—the national blood bank in the Netherlands—measures a blood donor’s hemoglobin (Hb) level before each donation and only allows a donor to donate blood if their Hb is above a certain threshold. In around 6.5% of blood bank visits by women, the donor’s Hb is too low and the donor is deferred from donation. For visits by men, this occurs in 3.0% of cases. To reduce the deferral rate and keep donors healthy and motivated, we would like to identify donors that are at risk of having a low Hb level. To this end we have historical Hb trajectories at our disposal, i.e., time series consisting of Hb measurements recorded for individual donors.
As a first step towards our long-term goal, in this paper we investigate the use of time series clustering. Unfortunately, existing methods have limitations that make them suboptimal for our data. In particular, Hb trajectories are of unequal length and have measurements at irregular intervals. We therefore experiment with two different data representations. That is, we apply a direct clustering method using dynamic time warping, and a trend clustering method using model-based feature extraction. In both cases the clustering algorithm used is k-means.
Both approaches result in distinct clusters that are well-balanced in size. The clusters obtained using direct clustering have a smaller mean within-cluster distance, but those obtained using the model-based features show more interesting trends. Neither approach results in ideal clusters though. We therefore conclude with an elaborate discussion on challenges and limitations that we hope to address in the near future.
Marieke Vinkenoog, Mart Janssen, Matthijs van Leeuwen
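A minimal dynamic time warping distance, which is what makes direct clustering of unequal-length trajectories possible, might look like the following; the test series are illustrative, and a real pipeline like the paper's would pair this with k-means (e.g. via barycenter averaging):

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D series,
    which may have different lengths and sampling."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Trajectories of unequal length: same shape vs. opposite shape.
t1 = np.sin(np.linspace(0, np.pi, 40))
t2 = np.sin(np.linspace(0, np.pi, 60))
t3 = -np.sin(np.linspace(0, np.pi, 50))
print(dtw(t1, t2) < dtw(t1, t3))  # same-shape pair is much closer
```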
Localized Random Shapelets
Shapelet models have attracted a lot of attention from researchers in the time series community, in particular due to their good classification performance. However, such models only inform about the presence or absence of local temporal patterns; structural information about the localization of these patterns is ignored. In addition, end-to-end learned shapelet models tend to generate meaningless shapelets, leading to poorly interpretable models. In this paper, we aim at designing an interpretable shapelet model that takes into account the localization of the shapelets in the time series. Time series are transformed into feature vectors composed of both distance and localization information. Then, we design a hierarchical feature selection process using regularization. This process can be tuned to select, for each shapelet, either only its distance information or both its distance and localization information. It is hence possible, for every selected shapelet, to analyze whether only the presence, or both the presence and the localization, contributed to the decision, improving the interpretability of the model. Experiments show that this feature selection process has competitive performance compared to state-of-the-art shapelet-based classifiers, while providing better interpretability.
Mael Guillemé, Simon Malinowski, Romain Tavenard, Xavier Renard
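The distance-plus-localization feature pair described above can be sketched as follows; the shapelet, series, and the normalization of the localization to [0, 1] are illustrative assumptions:

```python
import numpy as np

def shapelet_features(series, shapelet):
    """Return (distance, localization) for one shapelet: the minimal
    Euclidean distance to any subsequence of the series, and the
    relative position (0 = start, 1 = end) where that minimum occurs."""
    L = len(shapelet)
    dists = [np.linalg.norm(series[i:i + L] - shapelet)
             for i in range(len(series) - L + 1)]
    best = int(np.argmin(dists))
    return dists[best], best / (len(series) - L)

rng = np.random.default_rng(2)
x = 0.1 * rng.standard_normal(100)
x[70:80] += np.hanning(10)              # plant a bump near the end
d, loc = shapelet_features(x, np.hanning(10))
print(round(loc, 2))  # the bump was planted at start position 70 of 90
```

Classical shapelet transforms keep only `d`; the paper's point is that keeping `loc` as a second feature, subject to hierarchical selection, tells you *where* a pattern mattered, not just that it occurred.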

Poster Presentation

Feature-Based Gait Pattern Classification for a Robotic Walking Frame
This paper presents a system for fast detection of gait patterns of walking frame users, where the challenge is to recognize a change in activity before the signal becomes stationary. The system is used as a basis for inferring the user's intention in order to develop an improved shared-control strategy for an electrically driven walking frame. The data required for gait pattern identification is recorded by a set of low-budget infrared distance sensors. We compare different sliding-window-based feature extraction methods in combination with classical machine learning algorithms in order to realize fast real-time online gait classification. Moreover, a simple hierarchical feature extraction method is proposed and evaluated on our dataset.
Christopher M. A. Bonenberger, Benjamin Kathan, Wolfgang Ertel
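A basic sliding-window feature extractor of the kind compared in the paper might look like this; the window width, step, and the choice of mean/std/slope features are illustrative assumptions:

```python
import numpy as np

def window_features(signal, width, step):
    """Slide a window over the signal and extract simple statistical
    features per window: mean, standard deviation, and linear slope."""
    feats = []
    t = np.arange(width)
    for start in range(0, len(signal) - width + 1, step):
        w = signal[start:start + width]
        slope = np.polyfit(t, w, 1)[0]   # trend within the window
        feats.append([w.mean(), w.std(), slope])
    return np.array(feats)

# Standing still, then a walking-like ramp in a distance-sensor channel.
sig = np.concatenate([np.zeros(50), np.linspace(0, 5, 50)])
F = window_features(sig, width=20, step=10)
print(F.shape)  # one 3-feature row per window
```

Overlapping windows (step < width) are what allow a classifier downstream to react to an activity change before the signal settles.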
How to Detect Novelty in Textual Data Streams? A Comparative Study of Existing Methods
Since datasets with annotations for novelty at the document and/or word level are not easily available, we present a simulation framework that allows us to create different textual datasets in which we control how novelty occurs. We also present a benchmark of existing methods for novelty detection in textual data streams. We define several tasks to solve and compare state-of-the-art methods on them. The simulation framework allows us to evaluate their performance in a set of limited scenarios and to test their sensitivity to some parameters. Finally, we experiment with the same methods on different kinds of novelty in the New York Times Annotated Dataset.
Clément Christophe, Julien Velcin, Jairo Cugliari, Philippe Suignard, Manel Boumghar
Seq2VAR: Multivariate Time Series Representation with Relational Neural Networks and Linear Autoregressive Model
Finding understandable and meaningful feature representations of multivariate time series (MTS) is a difficult task, since information is entangled in both the temporal and spatial dimensions. In particular, MTS can be seen as the observation of simultaneous causal interactions between dynamical variables. The standard way to model these interactions is linear vector autoregression (VAR). The parameters of VAR models can be used as an MTS feature representation. Yet a VAR model cannot generalize to new samples, hence an independent VAR model must be trained to represent each MTS. In this paper, we propose to use the inference capacity of neural networks to overcome this limitation. We associate a relational neural network with a VAR generative model to form an encoder-decoder for MTS, denoted Seq2VAR for Sequence-to-VAR. We use recent advances in relational neural networks to build our MTS encoder by explicitly modeling interactions between the variables of MTS samples. We also leverage reparametrization tricks for binomial sampling in neural networks to build a sparse version of Seq2VAR and recover the notion of Granger causality defined in sparse VAR models. We illustrate the interest of our approach through experiments on synthetic datasets.
Edouard Pineau, Sébastien Razakarivony, Thomas Bonald
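The per-sample VAR estimation that Seq2VAR replaces reduces to a least-squares regression of each observation on its predecessor; a minimal VAR(1) sketch, with an invented coefficient matrix, looks like:

```python
import numpy as np

# Simulate a 3-variable VAR(1) process x_t = A x_{t-1} + noise
# (the coefficient matrix A is invented for illustration).
rng = np.random.default_rng(3)
A = np.array([[0.5, 0.2, 0.0],
              [0.0, 0.4, 0.3],
              [0.1, 0.0, 0.6]])
T, d = 2000, 3
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = A @ X[t - 1] + 0.1 * rng.standard_normal(d)

# Per-sample VAR estimation: regress x_t on x_{t-1} by least squares.
# The flattened estimate can then serve as a feature vector for the MTS.
A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
print(np.max(np.abs(A_hat - A)))  # small: the coefficients are recovered
```

The limitation the paper targets is visible here: this fit is specific to one sample, whereas Seq2VAR trains a network that maps any new MTS to its VAR coefficients.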
Modelling Patient Sequences for Rare Disease Detection with Semi-supervised Generative Adversarial Nets
Rare diseases affect 350 million patients worldwide, but they are commonly diagnosed late or misdiagnosed. Detecting rare diseases faces two main challenges: the extreme imbalance of the data, and finding the appropriate features. In this paper, we propose to address these problems by using semi-supervised generative adversarial networks (GANs) to deal with the data imbalance issue and recurrent neural networks (RNNs) to directly model patient sequences. We experimented with detecting patients with a particular rare disease (exocrine pancreatic insufficiency, EPI). The dataset includes 1.8 million patients, of whom 29,149 are positive, from a large longitudinal study using 7 years of medical claims. Our model achieved 0.56 PR-AUC and outperformed benchmark models in terms of precision and recall.
Kezi Yu, Yunlong Wang, Yong Cai
Extended Kalman Filter for Large Scale Vessels Trajectory Tracking in Distributed Stream Processing Systems
The growing volume of vessel data constantly reported by a variety of remote sensors, such as the Automatic Identification System (AIS), requires new data analytics that can operate at high data rates and are highly scalable. Based on a real-world dataset from maritime transport, we propose a large-scale vessel trajectory tracking application implemented in the distributed stream processing system Apache Flink. By implementing a state-space model (SSM), the Extended Kalman Filter (EKF), we first demonstrate that an implementation of SSMs is feasible in modern distributed data flow systems, and second we show that high performance can be reached by leveraging the inherent parallelization of the distributed system. In our experiments we show that the distributed tracking system is able to handle a throughput of several hundred vessels per ms. Moreover, we show that the latency to predict the position of a vessel is well below 500 ms on average, allowing for real-time applications.
Katarzyna Juraszek, Nidhi Saini, Marcela Charfuelan, Holmer Hemsen, Volker Markl
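A generic EKF predict/update step of the kind such an application embeds could be sketched as follows; the constant-velocity motion model and range/bearing measurement geometry are illustrative assumptions, not necessarily the paper's AIS setup:

```python
import numpy as np

def ekf_step(x, P, z, F, Q, R, h, H_jac):
    """One EKF iteration: linear predict, then an update that
    linearizes the nonlinear measurement h around the prediction."""
    x = F @ x                        # predict state
    P = F @ P @ F.T + Q              # predict covariance
    H = H_jac(x)                     # measurement Jacobian at the prediction
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - h(x))
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity vessel state [px, py, vx, vy]; range/bearing
# measurements from a sensor at the origin.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)
Q = 1e-4 * np.eye(4)
R = np.diag([0.1**2, 0.01**2])
h = lambda s: np.array([np.hypot(s[0], s[1]), np.arctan2(s[1], s[0])])
def H_jac(s):
    r = np.hypot(s[0], s[1])
    return np.array([[s[0] / r, s[1] / r, 0, 0],
                     [-s[1] / r**2, s[0] / r**2, 0, 0]])

rng = np.random.default_rng(4)
truth = np.array([10.0, 5.0, 0.5, 0.2])
x, P = np.array([9.0, 6.0, 0.0, 0.0]), np.eye(4)
for _ in range(100):
    truth[:2] += dt * truth[2:]
    z = h(truth) + np.array([0.1, 0.01]) * rng.standard_normal(2)
    x, P = ekf_step(x, P, z, F, Q, R, h, H_jac)
print(np.linalg.norm(x[:2] - truth[:2]))  # position error after 100 steps
```

Because each vessel's filter state is independent, steps like this can be keyed by vessel ID and parallelized across stream partitions, which is the property the Flink implementation exploits.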
Unsupervised Anomaly Detection in Multivariate Spatio-Temporal Datasets Using Deep Learning
Techniques for spatio-temporal anomaly detection in unsupervised settings have attracted great attention in recent years. They have extensive use in a wide variety of applications such as medical diagnosis, sensor event analysis, earth science, and fraud detection systems. Most real-world time series datasets have a spatial dimension as additional context, such as geographic location. Although many temporal data are spatio-temporal in nature, existing techniques are limited in handling both contextual attributes (spatial and temporal) during the anomaly detection process. Taking spatial context into account in addition to temporal context would help uncover complex anomaly types as well as unexpected and interesting knowledge about the problem domain. In this paper, a new approach to unsupervised anomaly detection in multivariate spatio-temporal datasets is proposed using a hybrid deep learning framework. The proposed approach combines a Long Short-Term Memory (LSTM) encoder and a Deep Neural Network (DNN) based classifier to extract spatial and temporal contexts. Although the approach has been employed on a crime dataset from the San Francisco Police Department to detect spatio-temporal anomalies, it can be applied to any spatio-temporal dataset.
Yildiz Karadayi
Learning Stochastic Dynamical Systems via Bridge Sampling
We develop algorithms to automate discovery of stochastic dynamical system models from noisy, vector-valued time series. By discovery, we mean learning both a nonlinear drift vector field and a diagonal diffusion matrix for an Itô stochastic differential equation in \(\mathbb {R}^d\). We parameterize the vector field using tensor products of Hermite polynomials, enabling the model to capture highly nonlinear and/or coupled dynamics. We solve the resulting estimation problem using expectation maximization (EM). This involves two steps. We augment the data via diffusion bridge sampling, with the goal of producing time series observed at a higher frequency than the original data. With this augmented data, the resulting expected log likelihood maximization problem reduces to a least squares problem. We provide an open-source implementation of this algorithm. Through experiments on systems with dimensions one through eight, we show that this EM approach enables accurate estimation for multiple time series with possibly irregular observation times. We study how the EM method performs as a function of the amount of data augmentation, as well as the volume and noisiness of the data.
Harish S. Bhat, Shagun Rawat
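The paper's EM-with-bridge-sampling machinery targets sparsely observed data; when observations are dense, the underlying least-squares step can be illustrated directly by regressing Euler increments on a polynomial basis (a simplification of the Hermite tensor-product basis) and reading the diffusion off the quadratic variation. The double-well drift and all constants below are illustrative:

```python
import numpy as np

# Simulate dX = (x - x^3) dt + sigma dW (double-well drift) by Euler-Maruyama.
rng = np.random.default_rng(5)
dt, n, sigma = 0.01, 20000, 0.7
x = np.empty(n)
x[0] = 0.1
for t in range(n - 1):
    x[t + 1] = x[t] + (x[t] - x[t]**3) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Densely observed data: fit the drift in a polynomial basis by least
# squares on the Euler increments, and estimate the diffusion from the
# quadratic variation. True coefficients are (0, 1, 0, -1) and sigma = 0.7.
basis = np.vstack([x[:-1]**k for k in range(4)]).T   # 1, x, x^2, x^3
coef, *_ = np.linalg.lstsq(basis, np.diff(x) / dt, rcond=None)
sigma_hat = np.sqrt(np.mean(np.diff(x)**2) / dt)
print(np.round(coef, 2), round(sigma_hat, 2))
```

When observations are sparse or irregular, this least-squares step degrades, and that is exactly where the paper's diffusion-bridge data augmentation inside EM comes in.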
Quantifying Quality of Actions Using Wearable Sensor
This paper introduces a novel approach to quantifying the quality of human actions. The presented approach uses expert action data to define a reference space against which the performance of any user is gauged to identify their expertise level. It uses a pose estimation model to identify the status (left, right, bend, curl, ...) of different body attributes (legs, shoulders, head, ...), which is then passed to an autoencoder to obtain a latent representation encoding all the relevant information. This encoded representation is passed to a one-class SVM to estimate boundaries based on the latent representation of the expert data. These learned boundaries are used to gauge the quality of any queried user's actions with respect to the selected expert. The proposed approach also enables identifying critical situations in real work environments, helping users avoid risky positions.
Mohammad Al-Naser, Takehiro Niikura, Sheraz Ahmed, Hiroki Ohashi, Takuto Sato, Mitsuhiro Okada, Katsuyuki Nakamura, Andreas Dengel
An Initial Study on Adapting DTW at Individual Query for Electrocardiogram Analysis
This paper describes an initial investigation into adapting windowed Dynamic Time Warping (DTW) to enhance the reliability of fast DTW for electrocardiogram (ECG) analysis in cardiology, a domain where risks are especially important to avoid. The key question it explores is whether it is worthwhile to adapt the window size of DTW for every query temporal sequence, a factor that critically determines the speed-accuracy tradeoff of DTW. In addition, it extends the adaptation to also cover the order of sequences for lower-bound calculations. Experiments on ECG temporal sequences show that the techniques significantly reduce the risks that windowed DTW algorithms are subject to, while at the same time keeping a high speed.
Daniel Shen, Min Chi
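The speed-accuracy tradeoff governed by the DTW window size can be seen in a minimal Sakoe-Chiba-banded DTW; the sine "heartbeats" below are illustrative stand-ins for ECG sequences:

```python
import numpy as np

def dtw_window(a, b, w):
    """DTW restricted to a Sakoe-Chiba band of half-width w: a smaller
    window is cheaper but may overestimate the unconstrained distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 100)
a, b = np.sin(t), np.sin(t - 0.5)   # b lags a by roughly 8 samples
tight = dtw_window(a, b, w=2)       # window too narrow for the lag
loose = dtw_window(a, b, w=20)      # wide enough to absorb the lag
print(tight > loose)  # widening the window can only lower the distance
```

Picking `w` per query, rather than globally, is precisely the adaptation the paper investigates: too small a window inflates distances (a risk in cardiology), too large a window wastes computation.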