
2020 | Book

Computational Science – ICCS 2020

20th International Conference, Amsterdam, The Netherlands, June 3–5, 2020, Proceedings, Part IV

Edited by: Dr. Valeria V. Krzhizhanovskaya, Dr. Gábor Závodszky, Michael H. Lees, Prof. Jack J. Dongarra, Prof. Dr. Peter M. A. Sloot, Sérgio Brissos, João Teixeira

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

The seven-volume set LNCS 12137, 12138, 12139, 12140, 12141, 12142, and 12143 constitutes the proceedings of the 20th International Conference on Computational Science, ICCS 2020, held in Amsterdam, The Netherlands, in June 2020.*

A total of 101 papers and 248 workshop papers presented in this book set were carefully reviewed and selected from 719 submissions (230 to the main track and 489 to the workshops). The papers are organized in the following topical sections:

Part I: ICCS Main Track

Part II: ICCS Main Track

Part III: Advances in High-Performance Computational Earth Sciences: Applications and Frameworks; Agent-Based Simulations, Adaptive Algorithms and Solvers; Applications of Computational Methods in Artificial Intelligence and Machine Learning; Biomedical and Bioinformatics Challenges for Computer Science

Part IV: Classifier Learning from Difficult Data; Complex Social Systems through the Lens of Computational Science; Computational Health; Computational Methods for Emerging Problems in (Dis-)Information Analysis

Part V: Computational Optimization, Modelling and Simulation; Computational Science in IoT and Smart Systems; Computer Graphics, Image Processing and Artificial Intelligence

Part VI: Data Driven Computational Sciences; Machine Learning and Data Assimilation for Dynamical Systems; Meshfree Methods in Computational Sciences; Multiscale Modelling and Simulation; Quantum Computing Workshop

Part VII: Simulations of Flow and Transport: Modeling, Algorithms and Computation; Smart Systems: Bringing Together Computer Vision, Sensor Networks and Machine Learning; Software Engineering for Computational Science; Solving Problems with Uncertainties; Teaching Computational Science; UNcErtainty QUantIficatiOn for ComputationAl modeLs

*The conference was canceled due to the COVID-19 pandemic.

Table of Contents

Frontmatter

Classifier Learning from Difficult Data

Frontmatter
Different Strategies of Fitting Logistic Regression for Positive and Unlabelled Data

In this paper we revisit the problem of fitting logistic regression to positive and unlabelled data. There are two key contributions. First, new light is shed on the properties of the frequently used naive method (in which unlabelled examples are treated as negative). In particular, we show that the naive method amounts to an incorrect specification of the logistic model and that, as a consequence, its parameters are shrunk towards zero. An interesting relationship between the shrinkage parameter and the label frequency is established. Second, we introduce a novel method of fitting the logistic model based on simultaneous estimation of the coefficient vector and the label frequency. Importantly, the proposed method does not require prior estimation of the label frequency, which is a major obstacle in positive-unlabelled learning. The method is superior to both the naive method and the weighted likelihood method in predicting the posterior probability on several benchmark data sets. Moreover, it yields a consistently better estimator of the label frequency than the other two known methods. We also introduce a simple but powerful representation of positive and unlabelled data under the Selected Completely at Random assumption, from which most properties of such models follow directly.

Paweł Teisseyre, Jan Mielniczuk, Małgorzata Łazęcka
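To make the joint-estimation idea concrete, here is a minimal sketch (not the authors' exact estimator): under the Selected Completely at Random assumption, P(s=1|x) = c · sigmoid(wᵀx), where s marks labelled examples and c is the label frequency, so both w and c can be fitted by maximising a single likelihood. Function and parameter names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

def fit_pu_logistic(X, s):
    """Jointly fit coefficients w and label frequency c for PU data.

    Under SCAR, P(s=1|x) = c * sigmoid(w^T x); s is 1 for labelled
    positives and 0 for unlabelled examples. A sketch, not the
    paper's exact estimator.
    """
    n, d = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])  # add intercept column

    def neg_log_lik(params):
        w, c = params[:-1], expit(params[-1])  # keep c in (0, 1)
        p = np.clip(c * expit(Xb @ w), 1e-12, 1 - 1e-12)
        return -np.sum(s * np.log(p) + (1 - s) * np.log(1 - p))

    res = minimize(neg_log_lik, np.zeros(d + 2), method="L-BFGS-B")
    return res.x[:-1], expit(res.x[-1])  # (coefficients, label frequency)
```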
Branch-and-Bound Search for Training Cascades of Classifiers

We propose a general algorithm that treats cascade training as a tree search process based on the branch-and-bound technique. The algorithm makes it possible to reduce the expected number of features used by an operating cascade, the key quantity we focus on in this paper. While searching, we maintain suitable lower bounds on partial expectations and prune tree branches that cannot improve the best-so-far result. Both exact and approximate variants of the approach are formulated. The experiments pertain to cascades trained as face or letter detectors, with Haar-like features or Zernike moments as the input information, respectively. The results confirm the shorter operating times of the obtained cascades, owing to the reduction in the number of extracted features.

Dariusz Sychel, Przemysław Klęsk, Aneta Bera
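The search scheme itself is generic branch-and-bound; the sketch below shows the pruning pattern the abstract describes (a best-first search that discards branches whose lower bound cannot beat the best-so-far result). The cascade-specific bounds and node encoding from the paper are abstracted away behind caller-supplied functions.

```python
import heapq

def branch_and_bound(root, children, lower_bound, cost, is_leaf):
    """Generic best-first branch-and-bound (a sketch of the search
    scheme, not the paper's cascade-specific bounds).

    children(node)    -> iterable of successor nodes
    lower_bound(node) -> optimistic bound on the best cost reachable
    cost(node)        -> exact cost, valid when is_leaf(node) is True
    """
    best_cost, best_node = float("inf"), None
    heap = [(lower_bound(root), 0, root)]
    tie = 1                                 # tie-breaker so nodes never compare
    while heap:
        bound, _, node = heapq.heappop(heap)
        if bound >= best_cost:              # prune: cannot improve best-so-far
            continue
        if is_leaf(node):
            c = cost(node)
            if c < best_cost:
                best_cost, best_node = c, node
            continue
        for child in children(node):
            b = lower_bound(child)
            if b < best_cost:               # only enqueue promising branches
                heapq.heappush(heap, (b, tie, child))
                tie += 1
    return best_node, best_cost
```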
Application of the Stochastic Gradient Method in the Construction of the Main Components of PCA in the Task Diagnosis of Multiple Sclerosis in Children

Many medical problems are characterized by feature spaces of very high dimension, which makes pattern recognition troublesome. This is the well-known phenomenon called the curse of dimensionality, and it has motivated various methods of dimensionality reduction based on feature selection and feature extraction. The extraction method most commonly used in the literature is principal component analysis (PCA). A natural limitation of this method is that it applies only to linear spaces, so extending the PCA concept to nonlinear feature spaces, optimizing the selection of features for the principal components, and incorporating class information for supervised learning are natural problems. From the perspective of machine learning, not only the reduction of features and attributes is important, but also the separation of classes. The developed method was tested in two computer experiments using real data on multiple sclerosis in children. The discussed problem is important, even by the very nature of the data itself, because it can contribute to practical implementations in medical diagnostics. The purpose of the research is to develop a feature extraction method applying the stochastic gradient method to the task of diagnosing multiple sclerosis in children. This solution could increase the quality of classification and thus may be the basis for building systems that support medical diagnostics in the recognition of multiple sclerosis in children.

Mariusz Topolski
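For reference, a stochastic-gradient estimate of a principal component can be as simple as Oja's rule; the sketch below shows only that generic idea, not the paper's class-aware extraction method.

```python
import numpy as np

def oja_first_component(X, lr=0.01, epochs=10, seed=0):
    """Estimate the first principal component with stochastic
    gradient updates (Oja's rule). A sketch of the general idea,
    not the paper's exact supervised variant."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)              # centre the data
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for i in rng.permutation(len(Xc)):
            x = Xc[i]
            y = x @ w
            w += lr * y * (x - y * w)    # Oja's update rule
            w /= np.linalg.norm(w)       # keep the direction normalised
    return w
```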
Grammatical Inference by Answer Set Programming

In this paper, the identification of context-free grammars from presented samples is investigated. The main idea for solving this problem proposed in the literature is reformulated in two different ways: in terms of general constraints and as an answer set program. In a series of experiments, we show that our answer set programming approach is much faster than our alternative method and the original SAT encoding method. As in the pioneering work, some well-known context-free grammars were induced correctly, and we also followed its test procedure with randomly generated grammars, making it clear that using our answer set programs increases computational efficiency. The research can be regarded as further evidence that solutions based on the stable model (answer set) semantics of logic programming may be the right choice for complex problems.

Wojciech Wieczorek, Łukasz Strąk, Arkadiusz Nowakowski, Olgierd Unold
Dynamic Classifier Selection for Data with Skewed Class Distribution Using Imbalance Ratio and Euclidean Distance

Imbalanced data analysis remains one of the critical challenges in machine learning. This work aims to adapt the concept of Dynamic Classifier Selection (DCS) to pattern classification tasks with skewed class distributions. Two methods, which use the similarity (distance) to the reference instances and the class imbalance ratio to select the most confident classifier for a given observation, are proposed. Both approaches come in two modes, one based on the k-Nearest Oracles (KNORA) and the other also considering the cases where the classifier makes a mistake. The proposed methods were evaluated in computer experiments carried out on datasets with a high imbalance ratio. The obtained results and the statistical analysis confirm the usefulness of the proposed solutions.

Paweł Zyblewski, Michał Woźniak
On Model Evaluation Under Non-constant Class Imbalance

Many real-world classification problems are significantly class-imbalanced to the detriment of the class of interest. The standard set of proper evaluation metrics is well known, but the usual assumption is that the test dataset imbalance equals the real-world imbalance. In practice, this assumption is often broken for various reasons. The reported results are then often too optimistic and may lead to wrong conclusions about the industrial impact and suitability of the proposed techniques. We introduce methods (supplementary code related to the techniques described in this paper is available at: https://github.com/CiscoCTA/nci_eval) focusing on evaluation under non-constant class imbalance. We show that not only the absolute values of commonly used metrics, but even the ordering of classifiers with respect to the evaluation metric used, is affected by a change of the imbalance rate. Finally, we demonstrate that subsampling the test dataset to match the class imbalance observed in the wild is not necessary and can eventually lead to significant errors in the estimate of a classifier's performance.

Jan Brabec, Tomáš Komárek, Vojtěch Franc, Lukáš Machlica
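The core effect is easy to reproduce: for a fixed classifier (fixed TPR and FPR), precision is a function of the positive-class prevalence π, namely π·TPR / (π·TPR + (1−π)·FPR). The snippet below uses illustrative numbers to show how strongly the reported value depends on the test-set imbalance.

```python
def precision_at_prevalence(tpr, fpr, pi):
    """Precision a classifier with the given TPR/FPR would achieve
    on a test set with positive-class prevalence pi."""
    tp = pi * tpr          # expected true-positive mass
    fp = (1 - pi) * fpr    # expected false-positive mass
    return tp / (tp + fp)

# The same detector looks very different at 1:10 vs. 1:1000 imbalance.
for pi in (0.1, 0.01, 0.001):
    print(pi, round(precision_at_prevalence(tpr=0.8, fpr=0.01, pi=pi), 3))
# prints roughly 0.899, 0.447, 0.074
```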
A Correction Method of a Base Classifier Applied to Imbalanced Data Classification

In this paper, the issue of tailoring the soft confusion matrix classifier to deal with imbalanced data is addressed. This is done by changing the definition of the soft neighbourhood of the classified object. The first approach makes the neighbourhood more local by replacing the Gaussian potential function approach with the nearest neighbour rule. The second one weights the instances that are included in the neighbourhood, with weights inversely proportional to the a priori class probability. The experimental results show that, for one of the investigated base classifiers, using the KNN neighbourhood significantly improves the classification results. What is more, applying the weighting scheme also offers a significant improvement.

Pawel Trajdos, Marek Kurzynski
Standard Decision Boundary in a Support-Domain of Fuzzy Classifier Prediction for the Task of Imbalanced Data Classification

Many real classification problems are characterized by a strong disturbance of the prior probability, which leads most classification algorithms to favor the majority classes. The action most often taken to deal with this problem is oversampling of the minority class by the SMOTE algorithm. The following work proposes instead to modify the support-domain decision boundary of an individual binary classifier, similarly to the fusion of classifier ensembles done by the Fuzzy Templates method, in order to deal with imbalanced data classification without introducing any repeated or artificial patterns into the training set. The proposed solution has been tested in computer experiments, whose results show its potential for imbalanced data classification.

Pawel Ksieniewicz
Employing One-Class SVM Classifier Ensemble for Imbalanced Data Stream Classification

The classification of imbalanced data streams is gaining more and more interest. However, apart from the problem that one of the classes is not well represented, there are problems typical of data stream classification, such as limited resources, lack of access to the true labels, and the possibility of concept drift occurring. The possibility of concept drift requires the method to include an adaptation mechanism. In this article, we propose the OCEIS classifier (One-Class support vector machine classifier Ensemble for Imbalanced data Streams). The main idea is to supply the committee with one-class classifiers trained on clustered data for each class separately. The results obtained from experiments carried out on synthetic and real data show that the proposed method achieves results at a level similar to the state-of-the-art methods it was compared with.

Jakub Klikowski, Michał Woźniak
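A rough sketch of the OCEIS idea on a static batch (the streaming and drift-handling machinery of the paper is omitted): cluster each class, train a one-class SVM per cluster, and predict the class whose model gives the highest support.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

class OneClassEnsemble:
    """Batch sketch of the OCEIS idea; cluster counts and SVM
    parameters are illustrative, not the paper's settings."""

    def fit(self, X, y, n_clusters=3):
        self.models = []                        # (class_label, svm) pairs
        for label in np.unique(y):
            Xc = X[y == label]
            k = min(n_clusters, len(Xc))
            clusters = KMeans(n_clusters=k, n_init=10).fit_predict(Xc)
            for c in range(k):
                svm = OneClassSVM(gamma="scale").fit(Xc[clusters == c])
                self.models.append((label, svm))
        return self

    def predict(self, X):
        # Decision scores of every per-cluster model, one row per model.
        scores = np.array([svm.decision_function(X) for _, svm in self.models])
        labels = np.array([lab for lab, _ in self.models])
        return labels[np.argmax(scores, axis=0)]  # class with highest support
```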
Clustering and Weighted Scoring in Geometric Space Support Vector Machine Ensemble for Highly Imbalanced Data Classification

Learning from imbalanced datasets is a challenging task for standard classification algorithms. In general, there are two main approaches to the problem of imbalanced data: algorithm-level and data-level solutions. This paper deals with the second approach. In particular, it proposes a new way of calculating the weighted score function used in the integration phase of a multiple classifier system. The presented research includes an experimental evaluation over multiple open-source, highly imbalanced datasets, comparing the proposed algorithm with three other approaches in the context of six performance measures. Comprehensive experimental results show that the proposed algorithm achieves better performance measures than the other ensemble methods for highly imbalanced datasets.

Paweł Ksieniewicz, Robert Burduk
Performance Analysis of Binarization Strategies for Multi-class Imbalanced Data Classification

Multi-class imbalanced classification tasks are characterized by a skewed distribution of examples among the classes and, usually, strong overlapping between class regions in the feature space. Furthermore, the goal of the final system is frequently to obtain very high precision for each of the concepts. All of these factors contribute to the complexity of the task and increase the difficulty of building a quality data model by learning algorithms. One way of addressing these challenges is so-called binarization strategies, which decompose the multi-class problem into several binary tasks of lower complexity. Because of the different decomposition schemes used by each of these methods, some of them are considered better suited for handling imbalanced data than others. In this study, we focus on the well-known binary approaches, namely One-Vs-All, One-Vs-One, and Error-Correcting Output Codes, and their effectiveness in multi-class imbalanced data classification, with respect to the base classifiers and various aggregation schemes for each of the strategies. We compare the performance of these approaches and try to boost the performance of the seemingly weaker methods with sampling algorithms. A detailed comparative experimental study of the considered methods, supported by statistical analysis, is presented. The results show the differences among the various binarization strategies, and we show how one can mitigate those differences using simple oversampling methods.

Michał Żak, Michał Woźniak
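For orientation, all three decompositions are available as scikit-learn meta-estimators, and per-task oversampling can be injected through an imbalanced-learn pipeline. The configuration below is an illustrative sketch, not the paper's experimental grid, and it assumes the imbalanced-learn package is installed.

```python
from imblearn.over_sampling import RandomOverSampler
from imblearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import (OneVsOneClassifier, OneVsRestClassifier,
                                OutputCodeClassifier)

# Base learner with per-task oversampling: each binary subproblem is
# rebalanced before fitting, one simple way to boost the seemingly
# weaker decomposition schemes.
base = make_pipeline(RandomOverSampler(random_state=0),
                     LogisticRegression(max_iter=1000))

strategies = {
    "OvA":  OneVsRestClassifier(base),
    "OvO":  OneVsOneClassifier(base),
    "ECOC": OutputCodeClassifier(base, code_size=2.0, random_state=0),
}
# Each strategy is a drop-in scikit-learn classifier:
# strategies["OvA"].fit(X_train, y_train).predict(X_test)
```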
Towards Network Anomaly Detection Using Graph Embedding

In the face of endless cyberattacks, many researchers have proposed machine learning-based network anomaly detection technologies. Traditional statistical features of network flows are manually extracted and rely heavily on expert knowledge, and classifiers based on statistical features have a high false-positive rate. The communications between different hosts form graphs, which contain a large number of latent features. By combining statistical features with these latent features, we can train better machine learning classifiers. Therefore, we propose a novel network anomaly detection method that can use the latent features in graphs and reduce the false-positive rate of anomaly detection. We convert network traffic into first-order and second-order graphs. The first-order graph learns the latent features from the perspective of a single host, and the second-order graph learns the latent features from a global perspective. This feature extraction process requires neither manual participation nor expert knowledge. We use these features to train machine learning classifiers for detecting network anomalies. We conducted experiments on two real-world datasets, and the results show that our approach allows for better learning of latent features and improved accuracy of anomaly detection. In addition, our method is able to detect unknown attacks.

Qingsai Xiao, Jian Liu, Quiyun Wang, Zhengwei Jiang, Xuren Wang, Yepeng Yao
Maintenance and Security System for PLC Railway LED Sign Communication Infrastructure

LED marking systems are currently becoming key elements of every Smart Transport System. Ensuring a proper level of security, protection and continuity of failure-free operation is not yet a completely solved issue. In this article, we present a system that detects different types of anomalies and failures/damage in critical railway transport infrastructure realized by means of Power Line Communication (PLC). We also describe the structure of the examined LED Sign Communications Network. Other discussed topics include significant security problems and the maintenance of the LED sign system, which have a direct impact on the correct operation of critical communication infrastructure. A two-stage method of anomaly/damage detection is proposed. In the first step, all outlying observations are detected and eliminated from the analysed network traffic parameters by means of Cook's distance. The data prepared in this way are used in stage two to create models, based on autoregressive neural networks, describing the variability of the analysed LED Sign Communications Network parameters. Next, the relations between the expected network traffic and its real variability are examined in order to detect abnormal behaviour which could indicate an attempted attack or a failure/damage. We also propose a procedure for recurrent learning of the exploited neural networks in case significant fluctuations emerge in the real PLC traffic. A number of experiments were carried out which fully confirmed the efficiency of the proposed solution and the suitability of the autoregressive type of neural network for predicting the analysed time series.

Tomasz Andrysiak, Łukasz Saganowski
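The two stages can be prototyped with standard tools; the sketch below uses an OLS fit against a time index for the Cook's-distance screening and a small MLP on lagged values as the autoregressive network. Both choices are assumptions for illustration; the paper's exact models may differ.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.neural_network import MLPRegressor

def remove_outliers_cooks(y):
    """Stage 1: drop observations of a 1-D series y whose Cook's
    distance (from an OLS fit on a time index) exceeds the common
    4/n rule of thumb."""
    t = sm.add_constant(np.arange(len(y), dtype=float))
    cooks = sm.OLS(y, t).fit().get_influence().cooks_distance[0]
    return y[cooks < 4.0 / len(y)]

def fit_ar_network(y, lags=12):
    """Stage 2: autoregressive neural network predicting y[t] from
    the previous `lags` values (architecture is illustrative)."""
    X = np.array([y[i:i + lags] for i in range(len(y) - lags)])
    target = y[lags:]
    return MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X, target)
```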
Fingerprinting of URL Logs: Continuous User Authentication from Behavioural Patterns

Security of computer systems is now a critical and evolving issue. Current trends try to use behavioural biometrics for continuous authorization. Our work is intended to strengthen network user authentication by software interaction analysis. In our research, we use the HTTP request (URL) logs that network administrators collect. We use a set of fully convolutional autoencoders and one authentication (one-class) convolutional neural network. The proposed method copes with extensive data from many users and allows new users to be added in the future. Moreover, the system works in real time, and the proposed deep learning framework can use other features related to user behaviour and software interaction.

Jakub Nowak, Taras Holotyak, Marcin Korytkowski, Rafał Scherer, Slava Voloshynovskiy
On the Impact of Network Data Balancing in Cybersecurity Applications

Machine learning methods are now widely used to detect a wide range of cyberattacks. Nevertheless, the commonly used algorithms come with challenges of their own; one of them lies in network dataset characteristics. The dataset should be well balanced in terms of the number of malicious data samples vs. benign traffic samples to achieve adequate results. When the data is not balanced, numerous machine learning approaches show a tendency to classify minority-class samples as majority-class samples. Since network traffic data usually contains significantly fewer malicious samples than benign samples, this work addresses the problem of learning from imbalanced network traffic data in the cybersecurity domain. A number of balancing approaches are evaluated, along with their impact on different machine learning algorithms.

Marek Pawlicki, Michał Choraś, Rafał Kozik, Witold Hołubowicz
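A minimal harness for such an evaluation might look as follows (the sampler and classifier choices are illustrative, and the imbalanced-learn package is assumed):

```python
from imblearn.over_sampling import SMOTE, RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

def compare_balancers(X, y):
    """Measure how different balancing approaches affect a single
    classifier; a sketch in the spirit of the study above."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                              random_state=0)
    samplers = {"none": None,
                "random-over": RandomOverSampler(random_state=0),
                "random-under": RandomUnderSampler(random_state=0),
                "SMOTE": SMOTE(random_state=0)}
    for name, sampler in samplers.items():
        Xb, yb = (X_tr, y_tr) if sampler is None \
            else sampler.fit_resample(X_tr, y_tr)  # rebalance train only
        clf = RandomForestClassifier(random_state=0).fit(Xb, yb)
        print(name, balanced_accuracy_score(y_te, clf.predict(X_te)))
```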
Pattern Recognition Model to Aid the Optimization of Dynamic Spectrally-Spatially Flexible Optical Networks

The following paper considers pattern recognition-aided optimization of a complex and relevant problem related to optical networks. For that problem, we propose a dedicated four-step optimization approach that makes use, among others, of a regression method. The main focus of this study is the construction of an efficient regression model and its application to the initial optimization problem. We therefore perform extensive experiments using realistic network assumptions and then draw conclusions regarding efficient configurations of the approach. According to the results, the approach performs best using a multi-layer perceptron regressor, whose prediction ability was the highest among all tested methods.

Paweł Ksieniewicz, Róża Goścień, Mirosław Klinkowski, Krzysztof Walkowiak
Missing Features Reconstruction Using a Wasserstein Generative Adversarial Imputation Network

Missing data is one of the most common preprocessing problems. In this paper, we experimentally research the use of generative and non-generative models for feature reconstruction. Variational Autoencoder with Arbitrary Conditioning (VAEAC) and Generative Adversarial Imputation Network (GAIN) were researched as representatives of generative models, while the denoising autoencoder (DAE) represented non-generative models. The performance of the models is compared to that of the traditional methods k-nearest neighbors (k-NN) and Multiple Imputation by Chained Equations (MICE). Moreover, we introduce WGAIN as the Wasserstein modification of GAIN, which turns out to be the best imputation model when the degree of missingness is less than or equal to 30%. Experiments were performed on real-world and artificial datasets with continuous features where different percentages of features, varying from 10% to 50%, were missing. The algorithms were evaluated by measuring the accuracy of a classification model previously trained on the uncorrupted dataset. The results show that GAIN and especially WGAIN are the best imputers regardless of the conditions. In general, they outperform or are comparable to MICE, k-NN, DAE, and VAEAC.

Magda Friedjungová, Daniel Vašata, Maksym Balatsko, Marcel Jiřina

Complex Social Systems Through the Lens of Computational Science

Frontmatter
Cooperation for Public Goods Under Uncertainty

Everyone wants clean air, peace and other public goods but is tempted to free-ride on others' efforts. The usual way out of this dilemma is to impose norms, maintain reputations and incentivize individuals to contribute. In situations of high uncertainty, however, such as confrontations of protesters with a dictatorial regime, the usual measures are not feasible, but cooperation can be achieved nevertheless. We use an Ising model with asymmetric spins that represent cooperation and defection to show numerically how public goods can be realized. Under uncertainty, people use the heuristic of conformity. The turmoil of a confrontation causes some individuals to cooperate accidentally, and at a critical level of turmoil, they trigger a cascade of cooperation. This critical level is much lower in small networks.

Jeroen Bruggeman, Rudolf Sprik
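A toy Metropolis-style simulation conveys the modelling idea: turmoil acts like temperature, so accidental flips to cooperation occur and can cascade through conformity. The local energy function and all parameters below are illustrative, not the authors' exact model.

```python
import numpy as np
import networkx as nx

def simulate(graph, turmoil=0.8, coupling=1.0, steps=20000, seed=0):
    """Metropolis dynamics on an Ising-like model with asymmetric
    states (1 = cooperate, 0 = defect); `turmoil` plays the role of
    temperature. A toy sketch, not the paper's exact Hamiltonian."""
    rng = np.random.default_rng(seed)
    nodes = list(graph.nodes)
    state = {v: 0 for v in nodes}            # everyone defects at first
    for _ in range(steps):
        v = nodes[rng.integers(len(nodes))]
        # conformity: flipping away from agreeing neighbours costs energy
        agree = sum(state[u] == state[v] for u in graph[v])
        flip_agree = sum(state[u] == 1 - state[v] for u in graph[v])
        delta = coupling * (agree - flip_agree)
        if delta <= 0 or rng.random() < np.exp(-delta / turmoil):
            state[v] = 1 - state[v]          # accidental/accepted flip
    return sum(state.values()) / len(nodes)  # fraction of cooperators

# Cooperation cascades appear at lower turmoil in small networks:
print(simulate(nx.watts_strogatz_graph(50, 6, 0.1)))
```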
An Information-Theoretic and Dissipative Systems Approach to the Study of Knowledge Diffusion and Emerging Complexity in Innovation Systems

The paper applies information theory and the theory of dissipative systems to discuss the emergence of complexity in an innovation system, as a result of its adaptation to an uneven distribution of the cognitive distance between its members. By modelling, on one hand, cognitive distance as noise, and, on the other hand, the inefficiencies linked to a bad flow of information as costs, we propose a model of the dynamics by which a horizontal network evolves into a hierarchical network, with some members emerging as intermediaries in the transfer of knowledge between seekers and problem-solvers. Our theoretical model contributes to the understanding of the evolution of an innovation system by explaining how the increased complexity of the system can be thermodynamically justified by purely internal factors. Complementing previous studies, we demonstrate mathematically that the complexity of an innovation system can increase not only to address the complexity of the problems that the system has to solve, but also to improve the performance of the system in transferring the knowledge needed to find a solution.

Guillem Achermann, Gabriele De Luca, Michele Simoni
Mapping the Port Influence Diffusion Patterns: A Case Study of Rotterdam, Antwerp and Singapore

Ports play a vital role in the global oil trade, and those with significant influence implicitly have better control over global oil transportation. To provide a better understanding of port influence, it is necessary to analyze the mechanisms underlying the development of port influence. In this study, we adopt a port influence diffusion model to model diffusion patterns using vessel trajectory data from 2009 to 2016. The case study of the Rotterdam, Antwerp and Singapore ports shows that: 1) ports with a strong direct influence control their neighboring ports, thereby building a direct influence area; 2) directly influenced ports show path-dependent characteristics, reflecting the importance of geographical distance; 3) the indirect influence of the initial diffusion port creates hierarchical diffusion, with directly influenced ports affected by previously influenced ports; and 4) a port's indirect influence and efficiency can be increased by increasing the number of significant ports it influences directly or by increasing its influence on significant ports in an earlier diffusion stage.

Peng Peng, Feng Lu
Entropy-Based Measure for Influence Maximization in Temporal Networks

The challenge of influence maximization in social networks is tackled in many settings and scenarios. The most explored variant, however, asks how to choose a seed set of a given size that maximizes the number of activated nodes for a selected model of social influence. This has been studied mostly for static networks, yet other kinds of networks, such as multilayer or temporal ones, are also within the scope of recent research. In this work we propose and evaluate an entropy-based measure that investigates how the neighbourhood of nodes varies over time and, based on that and on their activity, ranks the nodes as possible seed candidates. Applied to temporal networks, this measure favors nodes that vary their neighbourhood highly and, thanks to that, are good spreaders for certain influence models. The results demonstrate that, for the Independent Cascade Model of social influence, the introduced entropy-based metric outperforms typical seed selection heuristics for temporal networks. Moreover, compared to some other heuristics, it is fast to compute and thus can be used for fast-varying temporal networks.

Radosław Michalski, Jarosław Jankowski, Patryk Pazura
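One plausible reading of such a measure is the entropy of a node's contact distribution across temporal snapshots; the sketch below implements that reading (an assumption for illustration, not the paper's exact formula) on networkx-style graphs.

```python
import math
from collections import Counter

def neighbourhood_entropy(snapshots, node):
    """Entropy of a node's neighbourhood across temporal snapshots
    (a list of networkx graphs): nodes whose contacts vary a lot
    score high. A sketch, not the paper's exact measure."""
    contacts = Counter()
    for g in snapshots:
        if node in g:
            contacts.update(g.neighbors(node))
    total = sum(contacts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total)
                for c in contacts.values())

# Seed selection: rank all nodes by the measure, take the top k.
# seeds = sorted(nodes, key=lambda v: neighbourhood_entropy(snaps, v),
#                reverse=True)[:k]
```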
Evaluation of the Costs of Delayed Campaigns for Limiting the Spread of Negative Content, Panic and Rumours in Complex Networks

Increasing the performance of information spreading processes and influence maximisation is important from the perspective of marketing and other activities within social networks. Another direction is suppressing spreading processes to limit the coverage of misleading information, to spread information helping to avoid epidemics, or to decrease the role of competitors on the market. A suppressing action can take the form of spreading competing content, and its performance is related to timing and campaign intensity. The study presented in this paper shows how a delay in launching a suppressing process can be compensated by properly chosen parameters so that the action can still be successful.

Jaroslaw Jankowski, Piotr Bartkow, Patryk Pazura, Kamil Bortko
From Generality to Specificity: On Matter of Scale in Social Media Topic Communities

The research question stated in this paper concerns measuring the significance of an interest topic to a person on the basis of digital footprints observed in online social media. Interests are represented by online social groups in the VK social network, which were marked by topics. The significance of a topic to a person is assumed to be related to the fraction of representative groups in the user's subscription list. We posit that for each topic, depending on its popularity, relation to a geographical region, and social acceptability, there is a group size that is significant. In addition, we suppose that professional clusters of groups demonstrate relatively higher inner density and unify common groups. Therefore, following groups from more specific clusters indicates higher personal involvement in a topic; in this way, representative topical groups are marked. We build a social group similarity graph based on the number of common followers, extract subgraphs related to a single topic, and analyse bins of groups built with increasing group size. The results show that topics of general interest have higher density at larger group sizes, in contrast to specific interests, which corresponds with the initial hypothesis.

Danila Vaganov, Mariia Bardina, Valentina Guleva

Computational Health

Frontmatter
Hybrid Text Feature Modeling for Disease Group Prediction Using Unstructured Physician Notes

Existing Clinical Decision Support Systems (CDSSs) largely depend on the availability of structured patient data and Electronic Health Records (EHRs) to aid caregivers. However, in hospitals in developing countries, structured patient data formats are not widely adopted, and medical professionals still rely on clinical notes in the form of unstructured text. Such unstructured clinical notes recorded by medical personnel can be a potential source of rich patient-specific information which can be leveraged to build CDSSs, even for hospitals in developing countries. If such unstructured clinical text can be used, the manual and time-consuming process of EHR generation will no longer be required, with huge person-hour and cost savings. In this article, we propose a generic ICD9 disease group prediction CDSS built on unstructured physician notes modeled using hybrid word embeddings. These word embeddings are used to train a deep neural network for effectively predicting ICD9 disease groups. Experimental evaluation showed that the proposed approach outperformed the state-of-the-art disease group prediction model built on structured EHRs by 15% in terms of AUROC and 40% in terms of AUPRC, thus proving our hypothesis and eliminating the dependency on the availability of structured patient data.

Gokul S. Krishnan, S. Sowmya Kamath
Early Signs of Critical Slowing Down in Heart Surface Electrograms of Ventricular Fibrillation Victims

Ventricular fibrillation (VF) is a dangerous type of cardiac arrhythmia which, without intervention, almost always results in sudden death. Implantable automatic defibrillators are among the most successful devices to prevent sudden death by automatically applying a shock to the heart when fibrillation occurs. However, the electric shock is very painful and could lead to dangerous situations when a patient is, for example, driving or biking. An early warning signal for VF could reduce the risk in such situations or, in the future, reduce the need for defibrillation altogether. Here, we test for the presence of critical slowing down (CSD), which has proven to be an early warning indicator for critical transitions in a range of different systems. CSD is characterized by a buildup of autocorrelation; we therefore study the residuals of heart surface electrocardiograms (ECGs) of patients that suffered VF to investigate if we can measure positive trends in autocorrelation. We consider several methods to extract these residuals from the original signals. For three out of four VF victims, we find a significant amount of positive autocorrelation trends in the residuals, which might be explained by CSD. We show that these positive trends may not be measurable from the original body surface ECGs, but only from certain areas around the heart surface. We argue that additional experimental studies involving heart surface ECG data of subjects that did not suffer VF are required to quantify the prediction accuracy of the promising results we get from the data of VF victims.

Berend Nannes, Rick Quax, Hiroshi Ashikaga, Mélèze Hocini, Remi Dubois, Olivier Bernus, Michel Haïssaguerre
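The standard CSD test the abstract alludes to is a positive trend in lag-1 autocorrelation; a minimal version (sliding windows plus a Kendall-tau trend test) might look like this, with the paper's residual-extraction step assumed to have already produced the `residuals` series:

```python
import numpy as np
from scipy.stats import kendalltau

def csd_trend(residuals, window=500, step=100):
    """Lag-1 autocorrelation in sliding windows plus a Kendall-tau
    trend test: a common way to look for critical slowing down.
    A sketch; the paper compares several residual-extraction
    methods upstream of this step."""
    ac = []
    for start in range(0, len(residuals) - window, step):
        w = residuals[start:start + window]
        w = w - w.mean()
        ac.append(np.corrcoef(w[:-1], w[1:])[0, 1])  # lag-1 autocorr
    tau, p = kendalltau(range(len(ac)), ac)
    return np.array(ac), tau, p   # positive tau suggests CSD build-up
```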
A Comparison of Generalized Stochastic Milevsky-Promislov Mortality Models with Continuous Non-Gaussian Filters

The ability to precisely model mortality rates $\mu_{x,t}$ plays an important role in healthcare from the economic point of view. The aim of this article is to compare estimations of mortality rates based on a class of stochastic Milevsky-Promislov mortality models. We assume that the excitations are modeled by second-, fourth- and sixth-order polynomials of outputs from a linear non-Gaussian filter. To estimate the model parameters we use the first and second moments of $\mu_{x,t}$. The theoretical values obtained in both cases were compared with the theoretical $\widehat{\mu}_{x,t}$ based on a classical Lee-Carter model. The obtained results confirm the usefulness of the switched model based on continuous non-Gaussian processes for modeling $\mu_{x,t}$.

Piotr Śliwka, Leslaw Socha
Ontology-Based Inference for Supporting Clinical Decisions in Mental Health

According to the World Health Organization (WHO), mental and behavioral disorders are increasingly common and currently affect on average 1/4 of the world’s population at some point in their lives, economically impacting communities and generating a high social cost that involves human and technological resources. Among these problems, in Brazil, the lack of a transparent, formal and standardized mental health information model stands out, thus hindering the generation of knowledge, which directly influences the quality of the mental healthcare services provided to the population. Therefore, in this paper, we propose a computational ontology to serve as a common knowledge base among those involved in this domain, to make inferences about treatments, symptoms, diagnosis and prevention methods, helping health professionals in clinical decisions. To do this, we initially carried out a literature review involving scientific papers and the most current WHO guidelines on mental health, later we transferred this knowledge to a formal computational model, building the proposed ontology. Also, the Hermit Reasoner inference engine was used to deduce facts and legitimize the consistency of the logic rules assigned to the model. Hence, it was possible to develop a semantic computational artifact for storage and generate knowledge to assist mental health professionals in clinical decisions.

Diego Bettiol Yamada, Filipe Andrade Bernardi, Newton Shydeo Brandão Miyoshi, Inácia Bezerra de Lima, André Luiz Teixeira Vinci, Vinicius Tohoru Yoshiura, Domingos Alves
Towards Prediction of Heart Arrhythmia Onset Using Machine Learning

The current study aims at predicting the onset of malignant cardiac arrhythmia in patients with Implantable Cardioverter-Defibrillators (ICDs) using machine learning algorithms. The input data consisted of 184 signals of RR-intervals from 29 patients with an ICD, recorded both during normal heartbeat and arrhythmia. For every signal we generated 47 descriptors with different signal analysis methods. We then performed feature selection using several methods and used the selected features for building predictive models with the Random Forest algorithm. The entire modelling procedure was performed within a 5-fold cross-validation procedure that was repeated 10 times. The results were stable and repeatable. The results obtained (AUC = 0.82, MCC = 0.45) are statistically significant and show that RR intervals carry information about arrhythmia onset. The sample size used in this study was too small to build useful medical predictive models, hence larger data sets should be explored to construct models of sufficient quality to be of direct utility in medical practice.

Agnieszka Kitlas Golińska, Wojciech Lesiński, Andrzej Przybylski, Witold R. Rudnicki
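The evaluation protocol is straightforward to reproduce. The sketch below assumes the 47 descriptors have already been computed into a numpy feature matrix and omits the feature selection step:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef, roc_auc_score
from sklearn.model_selection import StratifiedKFold

def repeated_cv_auc_mcc(X, y, repeats=10, folds=5):
    """10x repeated 5-fold CV with a random forest, reporting mean
    AUC and MCC as in the study above (X, y are numpy arrays; the
    feature extraction/selection steps are omitted here)."""
    aucs, mccs = [], []
    for r in range(repeats):
        cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=r)
        for tr, te in cv.split(X, y):
            clf = RandomForestClassifier(random_state=r).fit(X[tr], y[tr])
            proba = clf.predict_proba(X[te])[:, 1]
            aucs.append(roc_auc_score(y[te], proba))
            mccs.append(matthews_corrcoef(y[te], clf.predict(X[te])))
    return np.mean(aucs), np.mean(mccs)
```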
Stroke ICU Patient Mortality Day Prediction

This article presents a study on the development of methods for analysing data reflecting the treatment process of stroke inpatients in order to predict clinical outcomes at the emergency care unit. The aim of this work is to develop models for the creation of validated risk scales for early intravenous stroke with a minimum number of parameters and maximum prognostic accuracy, and with the possibility to calculate the time of "expected intravenous stroke mortality". A study of the experience in the development and use of medical information systems reveals the insufficient ability of existing models to analyse the data adequately, weak formalization and a lack of a system approach in the collection of diagnostic data, and insufficient personalization of diagnostic data on the factors determining early intravenous stroke mortality. In our study we divided patients into 3 subgroups according to the time of death: up to 1 day, 1 to 3 days, and 4 to 10 days. Early mortality in each subgroup was associated with a number of demographic, clinical, and instrumental-laboratory characteristics based on the interpretation of the significance of predictors in binary classification models built with machine learning methods from the Scikit-Learn library. The target classes in training were "mortality within 1 day", "mortality within 1–3 days", and "mortality from 4 days". The AUC ROC of the trained models reached 91% for the random forest method. The results of interpreting the decision trees and the predictor significances computed by the built-in random forest methods coincide, which supports the correctness of the calculations.

Oleg Metsker, Vozniuk Igor, Georgy Kopanitsa, Elena Morozova, Prohorova Maria
Universal Measure for Medical Image Quality Evaluation Based on Gradient Approach

In this paper, a new universal measure of medical image quality is proposed. The measure is based on analysing the image using gradient methods. The number of isolated peaks in the examined image, as a function of the threshold value, is the basis of the assessment of image quality. It turns out that for higher-quality images the curvature of the graph of the said function is higher at lower threshold values. On the basis of the observed property, a new method of no-reference image quality assessment has been created. Experimental verification confirmed the method's efficiency. The correlation between the ranking by image quality done by an expert and that produced by the proposed method is equal to 0.74. This means that the proposed method gives a higher correlation than the best methods described in the literature. The proposed measure is useful for maximizing image quality while minimizing the time of a medical examination.

Marzena Bielecka, Andrzej Bielecki, Rafał Obuchowicz, Adam Piórkowski
Constructing Holistic Patient Flow Simulation Using System Approach

Patient flow is often described as a systemic issue requiring a systemic approach, because a hospital is a collection of highly dynamic, interconnected, complex, ad hoc and multi-disciplinary sub-processes. However, studies on holistic patient flow simulation following a system approach are limited and/or poorly understood. Several researchers have investigated single departments such as the ambulatory care unit, the Intensive Care Unit (ICU), the emergency department or the surgery department, or patients' interaction with limited resources such as doctors, endoscopy or beds, independently. Hence, this article demonstrates how to achieve a system approach in constructing a holistic patient flow simulation while maintaining the balance between the complexity and the simplicity of the model. To this end, a system approach, network analysis and discrete event simulation (DES) were employed. The most important departments in the diagnosis and treatment process are identified by analyzing the network of hospital departments. A holistic patient flow simulation is constructed using DES following the system approach. Case studies are conducted, and the results illustrate that healthcare systems must be modeled and investigated as complex and interconnected systems, so that the real impact of changes on the entire system or parts of the system can be observed at strategic as well as operational levels.

Tesfamariam M. Abuhay, Oleg G. Metsker, Aleksey N. Yakovlev, Sergey V. Kovalchuk
Investigating Coordination of Hospital Departments in Delivering Healthcare for Acute Coronary Syndrome Patients Using Data-Driven Network Analysis

Healthcare systems are challenged to deliver high-quality and efficient care. Studying patient flow in a hospital is particularly fundamental as it demonstrates the effectiveness and efficiency of the hospital. Since a hospital is a collection of physically nearby services under one administration, its performance and outcome are shaped by the interaction of its discrete components. The coordination of processes at different levels of the organizational structure of a hospital can be studied using network analysis. Hence, this article presents data-driven static and temporal networks of departments. Both networks are directed and weighted and constructed using seven years' (2010–2016) empirical data on 24902 Acute Coronary Syndrome (ACS) patients. The ties reflect episode-based transfers of ACS patients from department to department in a hospital. The weights represent the number of patients transferred between departments. As a result, the underlying structure of the network of departments that deliver healthcare for ACS patients is described, the main departments and their role in the diagnosis and treatment process of ACS patients are identified, the role of departments over the seven years is analyzed, and communities of departments are discovered. The results of this study may help hospital administrations to effectively organize and manage the coordination of departments based on their significance, strategic positioning and role in the diagnosis and treatment process, which, in turn, nurtures value-based and precision healthcare.

Tesfamariam M. Abuhay, Yemisrach G. Nigatie, Oleg G. Metsker, Aleksey N. Yakovlev, Sergey V. Kovalchuk
A Machine Learning Approach to Short-Term Body Weight Prediction in a Dietary Intervention Program

Weight and obesity management is one of the emerging challenges in current health management. The NUGENOB (NUtrient-GENe interactions in human OBesity) project seeks to find various solutions to the challenges posed by obesity and overweight. This research was based on utilising a dietary intervention method as a means of addressing the problem of managing obesity and overweight. The dietary intervention program ran for a period of ten weeks. Traditional statistical techniques have been utilised to analyse the potential weight changes in dietary intervention programs. This work investigates the applicability of machine learning to improve the prediction of body weight in a dietary intervention program. The models utilised include a dynamic model and machine learning models (linear regression, support vector machine (SVM), Random Forest (RF), and artificial neural networks (ANN)). The performance of these estimation models was compared based on evaluation metrics such as RMSE, MAE and R2. The results indicate that the machine learning models (ANN and RF) perform better than the other models in predicting body weight at the end of the dietary intervention program.

Oladapo Babajide, Tawfik Hissam, Palczewska Anna, Gorbenko Anatoliy, Arne Astrup, J. Alfredo Martinez, Jean-Michel Oppert, Thorkild I. A. Sørensen
An Analysis of Demographic Data in Irish Healthcare Domain to Support Semantic Uplift

Healthcare data in Ireland is often fragmented and siloed making it difficult to access and use, and of the data that is digitized, it is rarely standardised from the perspective of data interoperability. The Web of Data (WoD) is an initiative to make data open and interconnected, stored and shared across the World Wide Web. Once a data schema is described using an ontology and published, it resides on the web, and any data described using Linked Data can be associated with this ontology so that the semantics of the data are open and freely available to a global audience. In this article we explore the semantic uplift of demographic data in the Irish context through an analysis of Irish data catalogues, and explore how demographic data is represented in health standards internationally. Through this analysis we identify the Fast Healthcare Interoperability Resources (FHIR) ontology as a basis for managing demographic health care data in Ireland.

Kris McGlinn, Pamela Hussey
From Population to Subject-Specific Reference Intervals

In clinical practice, normal values or reference intervals are the main point of reference for interpreting a wide array of measurements, including biochemical laboratory tests, anthropometrical measurements, and physiological or physical ability tests. They are historically defined to separate a healthy population from an unhealthy one and therefore serve a diagnostic purpose. Numerous cross-sectional studies use various classical parametric and nonparametric approaches to calculate reference intervals. Based on a large cross-sectional study (N = 60,799), we compute reference intervals for subpopulations (e.g. males and females), which illustrates that subpopulations may have their own specific and narrower reference intervals. We further argue that each healthy subject may actually have their own reference interval (a subject-specific reference interval or SSRI). However, estimating such SSRIs requires longitudinal data, for which the traditional reference interval estimation methods cannot be used. In this study, a linear quantile mixed model (LQMM) is proposed for estimating SSRIs from longitudinal data. The SSRIs can help clinicians to give a more accurate diagnosis as they provide an interval for each individual patient. We conclude that it is worthwhile to develop a dedicated methodology to bring the idea of subject-specific reference intervals to the preventive healthcare landscape.

Murih Pusparum, Gökhan Ertaylan, Olivier Thas
Analyzing the Spatial Distribution of Acute Coronary Syndrome Cases Using Synthesized Data on Arterial Hypertension Prevalence

In the current study, the authors demonstrate the method aimed at analyzing the distribution of acute coronary syndrome (ACS) cases in Saint Petersburg. The employed approach utilizes a synthetic population of Saint Petersburg and a statistical model for arterial hypertension prevalence. The number of ACS–related emergency services calls in an area is matched with the population density and the prospected number of individuals with arterial hypertension, which makes it possible to find locations with excessive ACS incidence. Three categories of locations, depending on the joint distribution of the above-mentioned indicators, are proposed as a result of data analysis. The method is implemented in Python programming language, the visualization is made using QGIS open software. The proposed method can be used to assess the prevalence of certain health conditions in the population and to match them with the corresponding severe health outcomes.

Vasiliy N. Leonenko
The Atrial Fibrillation Risk Score for Hyperthyroidism Patients

Thyrotoxicosis (TT) is associated with an increase in both total and cardiovascular mortality. One of the main complications of thyrotoxicosis is Atrial Fibrillation (AF). Good predictors of AF help medical personnel select patients with a high risk of thyrotoxic AF (TAF) for closer follow-up or for early radical treatment of thyrotoxicosis. The main goal of this study is to create a method of practical use for the treatment and diagnostics of AF. This study proposes a new method for assessing the risk of occurrence of atrial fibrillation in patients with TT. The method considers both the features of the complication and the specifics of the chronic disease. A model is created based on the case histories of patients with thyrotoxicosis. We used machine learning methods to create several models. Each model has advantages and disadvantages depending on the diagnostic and medical purposes. The resulting models show strong results on the different metrics for the prediction of thyrotoxic AF. These models are interpretable and simple to use; therefore, they can be used as part of a decision support system (DSS) by medical specialists in the treatment of AF.

Ilya V. Derevitskii, Daria A. Savitskaya, Alina Y. Babenko, Sergey V. Kovalchuk
Applicability of Machine Learning Methods to Multi-label Medical Text Classification

Structuring medical text using international standards makes it possible to improve interoperability and the quality of predictive modelling. The medical text classification task facilitates information extraction. In this work we investigate the applicability of several machine learning models and classifier chains (CC) to the classification of unstructured medical text. The experimental study was performed on a corpus of 11671 manually labeled Russian medical notes. The results showed that using the CC strategy improves classification performance. An ensemble of classifier chains based on a linear SVC showed the best result: 0.924 micro F-measure, 0.872 micro precision and 0.927 micro recall.

Iuliia Lenivtceva, Evgenia Slasten, Mariya Kashina, Georgy Kopanitsa
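The best-performing strategy, an ensemble of classifier chains over a linear SVC, maps directly onto scikit-learn primitives; the hyperparameters below are illustrative, not the paper's settings:

```python
import numpy as np
from sklearn.multioutput import ClassifierChain
from sklearn.svm import LinearSVC

def ecc_predict(X_train, Y_train, X_test, n_chains=10):
    """Ensemble of classifier chains over a linear SVC, averaging
    the chains' binary predictions per label (Y_train is a binary
    indicator matrix, one column per label)."""
    chains = [ClassifierChain(LinearSVC(max_iter=5000), order="random",
                              random_state=i).fit(X_train, Y_train)
              for i in range(n_chains)]
    votes = np.mean([c.predict(X_test) for c in chains], axis=0)
    return (votes >= 0.5).astype(int)   # majority vote per label
```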
Machine Learning Approach for the Early Prediction of the Risk of Overweight and Obesity in Young People

Obesity is a major global concern, with more than 2.1 billion people overweight or obese worldwide, which amounts to almost 30% of the global population. If the current trend continues, the overweight and obese population is likely to increase to 41% by 2030. Individuals developing signs of weight gain or obesity are also at risk of developing serious illnesses such as type 2 diabetes, respiratory problems, heart disease and stroke. Intervention measures such as physical activity and healthy eating can be fundamental components of maintaining a healthy lifestyle. Therefore, it is absolutely essential to detect childhood obesity as early as possible. This paper utilises the vast amount of data available via the UK's Millennium Cohort Study in order to construct a machine learning-driven model to predict young people at risk of becoming overweight or obese. The childhood BMI values from ages 3, 5, 7 and 11 are used to predict which adolescents at age 14 are at risk of becoming overweight or obese. There is an inherent imbalance in the dataset between individuals with normal BMI and those at risk. The results obtained are encouraging, and a prediction accuracy of over 90% for the target class has been achieved. Various issues relating to data preprocessing and prediction accuracy are addressed and discussed.

Balbir Singh, Hissam Tawfik
Gait Abnormality Detection in People with Cerebral Palsy Using an Uncertainty-Based State-Space Model

Assessment and quantification of feature uncertainty in modeling gait patterns is crucial in clinical decision making. Automatic diagnostic systems for Cerebral Palsy gait have often ignored the uncertainty factor while recognizing the gait pattern, and they also suffer from limited clinical interpretability. This study establishes a low-cost data acquisition setup and proposes a state-space model where the temporal evolution of the gait pattern is recognized by analyzing the feature uncertainty using the Dempster-Shafer theory of evidence. An attempt was also made to quantify the degree of abnormality by proposing gait deviation indexes. Results indicate that our proposed model outperformed the state of the art with an overall detection accuracy of 87.5% (sensitivity 80.00%, specificity 100%). In the gait cycle of a Cerebral Palsy patient, the first double limb support and the left single limb support were observed to be mainly affected. Incorporating feature uncertainty in quantifying the degree of abnormality is demonstrated to be promising: larger values of feature uncertainty were observed for patients with a higher degree of abnormality. Sub-phase-wise assessment of the gait pattern improves the interpretability of the results, which is crucial in clinical decision making.

Saikat Chakraborty, Noble Thomas, Anup Nandy
Analyses of Public Health Databases via Clinical Pathway Modelling: TBWEB

One of the purposes of public health databases is to serve as repositories for storing information regarding the treatment of patients. TBWEB (TuBerculose WEB) is an epidemiological surveillance system for tuberculosis cases in the state of São Paulo, Brazil. This paper proposes an analysis of the TBWEB database with the use of clinical pathway modelling. Firstly, the database was analysed in order to find the interventions registered in it. The clinical pathways were obtained from the database by the use of process mining techniques. Similar pathways were grouped into clusters in order to find the most common treatment sequences. Each cluster was characterised, and the risk of bad outcomes associated with each cluster was discovered. Some clusters had an association with the risk of negative outcomes. This method can be applied to other databases, can serve as a base for decision-making systems, and can be used to monitor public health databases.

Anderson C. Apunike, Lívia Oliveira-Ciabati, Tiago L. M. Sanches, Lariza L. de Oliveira, Mauro N. Sanchez, Rafael M. Galliez, Domingos Alves
Preliminary Results on Pulmonary Tuberculosis Detection in Chest X-Ray Using Convolutional Neural Networks

Tuberculosis (TB) is an ancient disease that has probably affected humans since pre-hominid times. The disease is caused by bacteria belonging to the Mycobacterium tuberculosis complex and usually affects the lungs, in up to 67% of cases. In 2019, there were estimated to be over 10 million tuberculosis cases in the world; in the same year TB was among the ten leading causes of death, and the deadliest from a single infectious agent. Chest X-ray (CXR) has recently been promoted by the WHO as a tool that could be placed early in screening and triaging algorithms for TB detection. Numerous TB prevalence surveys have demonstrated that CXR is the most sensitive screening tool for pulmonary TB and that a significant proportion of people with TB are asymptomatic in the early stages of the disease. This study presents experimentation with classic convolutional neural network architectures on public CXR databases in order to create a tool to aid the diagnosis of TB in chest X-ray images. The study obtains an AUC ranging from 0.78 to 0.84, sensitivity from 0.76 to 0.86 and specificity from 0.58 to 0.74 depending on the network architecture. The observed performance by these metrics alone is within the range of metrics found in the literature, although there is much room for improving the metrics and avoiding bias. In the future, using the model in a triage use case could serve to validate its efficiency.

Márcio Eloi Colombo Filho, Rafael Mello Galliez, Filipe Andrade Bernardi, Lariza Laura de Oliveira, Afrânio Kritski, Marcel Koenigkam Santos, Domingos Alves
Risk-Based AED Placement - Singapore Case

This paper presents a novel risk-based method for Automated External Defibrillator (AED) placement. In sudden cardiac events, the availability of a nearby AED is crucial for the survival of cardiac arrest patients. The common method uses historical Out-of-Hospital Cardiac Arrest (OHCA) data for AED placement optimization, but historical data often do not cover the entire area of investigation. The goal of this work is to develop an approach that improves the historical-data-based method for AED placement. To this end, we have developed a risk-based method which generates artificial OHCAs based on a risk model. We compare our risk-based method with the one based on historical data using real Singapore OHCA occurrences from the Pan-Asian Resuscitation Outcomes Study (PAROS). The results show that, for deploying a large number of AEDs, the risk-based method outperforms the method purely using historical data on the testing dataset. This paper describes our risk-based AED placement method, discusses experimental results, and outlines future work.

Ivan Derevitskii, Nikita Kogtikov, Michael H. Lees, Wentong Cai, Marcus E. H. Ong
Time Expressions Identification Without Human-Labeled Corpus for Clinical Text Mining in Russian

To obtain accurate predictive models in medicine, it is necessary to use complete and relevant information about the patient. We propose an approach for extracting temporal expressions from unlabeled natural language texts. This approach can be used for a first analysis of the corpus, for data labeling as a first stage, or for obtaining linguistic constructions that can be used in a rule-based approach to information retrieval. Our method includes the sequential use of several machine learning and natural language processing methods: classification of sentences, transformation of bag-of-words frequencies, clustering of sentences with time expressions, classification of new data into clusters, and construction of sentence profiles using feature importances. With this method, we derive a list of the most frequent time expressions and extract events and/or event times for 9801 sentences of anamneses in Russian. The proposed approach is independent of the corpus language and can be used for other tasks, for example, extracting the experiencer of a disease.

Anastasia A. Funkner, Sergey V. Kovalchuk
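One stage of the described pipeline, clustering sentences so that clusters rich in time expressions can be inspected and labelled, can be sketched as follows (the vectoriser settings and cluster count are illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_sentences(sentences, n_clusters=20):
    """Vectorise sentences and cluster them; printing the top terms
    per cluster hints at the temporal constructions used, which can
    then be labelled. A sketch of one pipeline stage only."""
    vec = TfidfVectorizer(max_features=5000)
    X = vec.fit_transform(sentences)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    for c in range(n_clusters):
        top = km.cluster_centers_[c].argsort()[::-1][:8]
        print(c, [terms[i] for i in top])   # top terms of each cluster
    return km.labels_
```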
Experiencer Detection and Automated Extraction of a Family Disease Tree from Medical Texts in Russian Language

Text descriptions in natural language are an essential part of electronic health records (EHRs). Such descriptions usually contain facts about the patient's life, events, diseases and other relevant information. Sometimes they also include facts about family members. In order to find the facts about the right person (the experiencer) and convert the unstructured medical text into structured information, we developed an experiencer detection module. We compared different vector representations and machine learning models to reach the highest quality: an f-score of 0.96 for binary classification and 0.93 for multi-class classification. Additionally, we present results on plotting the family disease tree.

Ksenia Balabaeva, Sergey Kovalchuk

Computational Methods for Emerging Problems in (Dis-)Information Analysis

Frontmatter
Machine Learning – The Results Are Not the Only Thing that Matters! What About Security, Explainability and Fairness?

Recent advances in machine learning (ML) and the surge in computational power have opened the way to the proliferation of ML and Artificial Intelligence (AI) in many domains and applications. Still, apart from achieving good accuracy and results, there are many challenges that need to be addressed in order to effectively apply ML algorithms in critical applications for the good of societies. The aspects that can hinder practical and trustworthy ML and AI are: the lack of security of ML algorithms, as well as the lack of fairness and explainability. In this paper we discuss those aspects and provide a current state-of-the-art analysis of the relevant work in the mentioned domains.

Michał Choraś, Marek Pawlicki, Damian Puchalski, Rafał Kozik
Syntactic and Semantic Bias Detection and Countermeasures

Applied Artificial Intelligence (AAI) and especially Machine Learning (ML) both recently had a breakthrough with high-performance hardware for Deep Learning [1]. Additionally, big companies like Huawei and Google are adapting their product philosophy to AAI and ML [2–4]. Using ML-based systems always requires a training data set to achieve a usable, i.e. trained, AAI system. The quality of the training data determines the quality of the predictions. One important quality factor is that the training data are unbiased. In the worst case, bias may lead to incorrect and unusable predictions. This paper investigates the most important types of bias, namely syntactic and semantic bias. Countermeasures and methods to detect these biases are provided to diminish the deficiencies.

Roman Englert, Jörg Muschiol
Detecting Rumours in Disasters: An Imbalanced Learning Approach

The online spread of rumours in disasters can create panic and anxiety and disrupt crisis operations. Hence, it is crucial to take measures against such a distressing phenomenon, since it can turn into a crisis by itself. In this work, automatic rumour detection in natural disasters is addressed from an imbalanced learning perspective, due to the dearth of rumours versus the abundance of non-rumours in social networks. We first provide two datasets by collecting and annotating tweets regarding Hurricane Florence and the Kerala flood. We then capture the properties of rumours and non-rumours in those disasters using 83 theory-based and early-available features, 47 of which are proposed for the first time. The proposed features show a high discriminative power that helps us distinguish rumours from non-rumours more reliably. Next, we build the rumour identification models using imbalanced learning to address the scarcity of rumours compared to non-rumours. Additionally, to replicate rumour detection in a real-world situation, we practice cross-incident learning by training the classifier with the samples of one incident and testing it with the other one. In the end, we measure the impact of imbalanced learning using the Bayesian Wilcoxon signed-rank test and observe a significant improvement in the classifiers' performance.

Amir Ebrahimi Fard, Majid Mohammadi, Bartel van de Walle
Sentiment Analysis for Fake News Detection by Means of Neural Networks

The problem of fake news has become one of the most challenging issues affecting societies. Nowadays, false information may spread quickly through social media. In that regard, fake news needs to be detected as fast as possible to avoid a negative influence on people who may rely on such information while making important decisions (e.g., presidential elections). In this paper, we present an innovative solution for fake news detection that utilizes deep learning methods. Our experiments prove that the proposed approach allows us to achieve promising results.

Sebastian Kula, Michał Choraś, Rafał Kozik, Paweł Ksieniewicz, Michał Woźniak
Backmatter
Metadata
Title
Computational Science – ICCS 2020
Edited by
Dr. Valeria V. Krzhizhanovskaya
Dr. Gábor Závodszky
Michael H. Lees
Prof. Jack J. Dongarra
Prof. Dr. Peter M. A. Sloot
Sérgio Brissos
João Teixeira
Copyright Year
2020
Electronic ISBN
978-3-030-50423-6
Print ISBN
978-3-030-50422-9
DOI
https://doi.org/10.1007/978-3-030-50423-6