
2018 | Book

Artificial Neural Networks and Machine Learning – ICANN 2018

27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part II

Edited by: Věra Kůrková, Prof. Yannis Manolopoulos, Barbara Hammer, Lazaros Iliadis, Ilias Maglogiannis

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science

About this book

This three-volume set LNCS 11139-11141 constitutes the refereed proceedings of the 27th International Conference on Artificial Neural Networks, ICANN 2018, held in Rhodes, Greece, in October 2018.

The 139 full and 28 short papers as well as 41 full poster papers and 41 short poster papers presented in these volumes were carefully reviewed and selected from a total of 360 submissions. They are related to the following thematic topics: AI and Bioinformatics, Bayesian and Echo State Networks, Brain Inspired Computing, Chaotic Complex Models, Clustering, Mining, Exploratory Analysis, Coding Architectures, Complex Firing Patterns, Convolutional Neural Networks, Deep Learning (DL), DL in Real Time Systems, DL and Big Data Analytics, DL and Big Data, DL and Forensics, DL and Cybersecurity, DL and Social Networks, Evolving Systems – Optimization, Extreme Learning Machines, From Neurons to Neuromorphism, From Sensation to Perception, From Single Neurons to Networks, Fuzzy Modeling, Hierarchical ANN, Inference and Recognition, Information and Optimization, Interacting with The Brain, Machine Learning (ML), ML for Bio Medical systems, ML and Video-Image Processing, ML and Forensics, ML and Cybersecurity, ML and Social Media, ML in Engineering, Movement and Motion Detection, Multilayer Perceptrons and Kernel Networks, Natural Language, Object and Face Recognition, Recurrent Neural Networks and Reservoir Computing, Reinforcement Learning, Reservoir Computing, Self-Organizing Maps, Spiking Dynamics/Spiking ANN, Support Vector Machines, Swarm Intelligence and Decision-Making, Text Mining, Theoretical Neural Computation, Time Series and Forecasting, Training and Learning.

Table of Contents

Frontmatter

ELM/Echo State ANN

Frontmatter
Rank-Revealing Orthogonal Decomposition in Extreme Learning Machine Design

Extreme Learning Machine (ELM), a neural network technique used for regression problems, may be considered as a nonlinear transformation (from the training input domain into the output space of hidden neurons) which provides the basis for a linear mean square (LMS) regression problem. The conditioning of this problem is an important factor influencing ELM implementation and accuracy. It is demonstrated that rank-revealing orthogonal decomposition techniques can be used to identify neurons causing collinearity in the LMS regression basis. Such neurons may be eliminated or modified to increase the numerical rank of the matrix which is pseudo-inverted while solving the LMS regression.

Jacek Kabziński
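
A minimal numpy/scipy sketch of the rank-revealing idea described above: a pivoted QR decomposition of the hidden-layer output matrix flags nearly collinear hidden neurons before the least-squares readout is solved. The toy data, random-feature setup and tolerance are illustrative assumptions, not the author's implementation.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))            # training inputs (toy data)
y = np.sin(X).sum(axis=1, keepdims=True)     # toy regression target

# Random ELM hidden layer (sigmoid features), deliberately over-sized
n_hidden = 60
W, b = rng.standard_normal((5, n_hidden)), rng.standard_normal(n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # hidden-layer output matrix

# Rank-revealing pivoted QR: small trailing |R[k, k]| values indicate
# hidden neurons that are (nearly) collinear with the ones already kept.
Q, R, piv = qr(H, mode='economic', pivoting=True)
tol = 1e-8 * abs(R[0, 0])                    # illustrative tolerance
rank = int(np.sum(np.abs(np.diag(R)) > tol))
keep = piv[:rank]                            # well-conditioned neurons
print(f"kept {rank} of {n_hidden} hidden neurons")

# Solve the linear least-squares readout only on the kept neurons
beta, *_ = np.linalg.lstsq(H[:, keep], y, rcond=None)
pred = H[:, keep] @ beta
```
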
An Improved CAD Framework for Digital Mammogram Classification Using Compound Local Binary Pattern and Chaotic Whale Optimization-Based Kernel Extreme Learning Machine

The morbidity and mortality rate of breast cancer still continues to remain high among women across the world. This figure can be reduced if the cancer is identified at its early stage. A Computer-aided diagnosis (CAD) system is an efficient computerized tool used to analyze the mammograms for finding cancer in the breast and to reach a decision with maximum accuracy. The presented work aims at developing a CAD model which can classify the mammograms as normal or abnormal, and further, benign or malignant accurately. In the present model, CLAHE is used for image pre-processing, compound local binary pattern (CM-LBP) for feature extraction followed by principal component analysis (PCA) for feature reduction. Then, a chaotic whale optimization-based kernel extreme learning machine (CWO-KELM) is utilized to classify the mammograms as normal/abnormal and benign/malignant. The present model achieves the highest accuracy of 100% and 99.48% for MIAS and DDSM, respectively.

Figlu Mohanty, Suvendu Rup, Bodhisattva Dash
A Novel Echo State Network Model Using Bayesian Ridge Regression and Independent Component Analysis

We propose a novel Bayesian Ridge Echo State Network (BRESN) model for nonlinear time series prediction, based on Bayesian Ridge Regression and Independent Component Analysis. BRESN has a regularization effect to avoid over-fitting, at the same time being robust to noise owing to its probabilistic strategy. In BRESN we also use Independent Component Analysis (ICA) for dimensionality reduction, and show that ICA improves the model’s accuracy more than other reduction techniques. Furthermore, we evaluate the proposed model on both synthetic and real-world datasets to compare its accuracy with twelve combinations of four other regression models and three different choices of dimensionality reduction techniques, and measure its running time. Experimental results show that our model significantly outperforms other state-of-the-art ESN prediction models while maintaining a satisfactory running time.

Hoang Minh Nguyen, Gaurav Kalra, Tae Joon Jun, Daeyoung Kim
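
A rough sketch of the ingredients named in the abstract (reservoir state collection, ICA-based dimensionality reduction, a Bayesian ridge readout) using numpy and scikit-learn; the reservoir size, leak rate and other hyperparameters are illustrative assumptions rather than the authors' BRESN settings.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)
u = np.sin(np.linspace(0, 60, 1200)) + 0.05 * rng.standard_normal(1200)  # noisy series

# Echo state reservoir (leaky-integrator units, spectral radius < 1)
n_res, leak = 300, 0.3
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # rescale spectral radius to 0.9
x, states = np.zeros(n_res), []
for t in range(len(u) - 1):
    pre = np.tanh(W_in[:, 0] * u[t] + W @ x)
    x = (1 - leak) * x + leak * pre
    states.append(x.copy())
states = np.array(states)
targets = u[1:]                              # one-step-ahead prediction

# ICA reduces the reservoir states before the probabilistic readout
ica = FastICA(n_components=20, random_state=1)
Z = ica.fit_transform(states[100:])          # drop a washout period
reg = BayesianRidge().fit(Z[:-200], targets[100:-200])
print("test MSE:", np.mean((reg.predict(Z[-200:]) - targets[-200:]) ** 2))
```
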

Image Processing

Frontmatter
A Model for Detection of Angular Velocity of Image Motion Based on the Temporal Tuning of the Drosophila

We propose a new bio-plausible model based on the visual system of Drosophila for estimating the angular velocity of image motion in insects' eyes. The model implements both preferred-direction motion enhancement and non-preferred-direction motion suppression, which were recently discovered in Drosophila's visual neural circuits, to give stronger directional selectivity. In addition, the angular velocity detecting model (AVDM) produces a response largely independent of the spatial frequency in grating experiments, which enables insects to estimate flight speed in cluttered environments. This also coincides with behavioural experiments of honeybees flying through tunnels with stripes of different spatial frequencies.

Huatian Wang, Jigen Peng, Paul Baxter, Chun Zhang, Zhihua Wang, Shigang Yue
Local Decimal Pattern for Pollen Image Recognition

In this paper, we propose the local decimal pattern (LDP) for pollen image recognition. Considering that the gradient image of pollen grains has more prominent textural features, we quantify by comparing the gradient magnitudes of pixel blocks rather than single pixel values. Unlike the local binary pattern (LBP) and its variants, we encode by counting the pixel blocks in different quantization intervals, which makes our descriptor robust to the rotation of pollen images. In order to capture the subtle textural features of pollen images, we increase the number of quantization intervals. The average correct recognition rate of LDP on the Pollenmonitor dataset is 90.95%, which is much higher than that of other compared pollen recognition methods. The experimental results show that our method is more suitable for the practical classification and identification of pollen images than the compared methods.

Liping Han, Yonghua Xie
New Architecture of Correlated Weights Neural Network for Global Image Transformations

The paper describes a new extension of the convolutional neural network concept. The developed network, similarly to the CNN, uses related weights instead of independent weights for each neuron in the network. This results in a small number of parameters optimized in the learning process and high resistance to overtraining. However, unlike the CNN, instead of sharing weights, the network takes advantage of weights correlated with the coordinates of a neuron and its inputs, calculated by a dedicated subnet. This solution allows the neural layer of the network to perform global transformations of patterns, which was unachievable for convolutional layers. The new network concept has been confirmed by verification of its ability to perform typical affine image transformations such as translation, scaling and rotation.

Sławomir Golak, Anna Jama, Marcin Blachnik, Tadeusz Wieczorek
Compression-Based Clustering of Video Human Activity Using an ASCII Encoding

Human Activity Recognition (HAR) from videos is an important area of computer vision research with several applications. There is a wide range of methods for classifying human activities in video, though not without certain disadvantages such as computational cost, dataset specificity or low resistance to noise, among others. In this paper, we propose the use of the Normalized Compression Distance (NCD) as a complementary approach to identify video-based HAR. We have developed a novel ASCII video data format, as a suitable format for applying the NCD to video. For our experiments, we have used the Activities of Daily Living Dataset to discriminate several human activities performed by different subjects. The experimental results presented in this paper show that the NCD can be used as an alternative to classical analysis of video HAR.

Guillermo Sarasa, Aaron Montero, Ana Granados, Francisco B. Rodriguez
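
The Normalized Compression Distance has a compact closed form, (C(xy) - min(C(x), C(y))) / max(C(x), C(y)); a small sketch with zlib as the compressor follows, where the ASCII "frames" are invented stand-ins for the authors' ASCII video format.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance using zlib as the compressor."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy example: two ASCII-encoded "videos" of the same activity should be
# closer to each other than to a different activity.
walk_a = b"..X..\n..X..\n" * 50
walk_b = b"..X..\n.X...\n" * 50
jump   = b".....\nXXXXX\n" * 50
print(ncd(walk_a, walk_b), ncd(walk_a, jump))
```
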

Medical/Bioinformatics

Frontmatter
Deep Autoencoders for Additional Insight into Protein Dynamics

The study of protein dynamics through analysis of conformational transitions represents a significant stage in understanding protein function. Using molecular simulations, large samples of protein transitions can be recorded. However, extracting functional motions from these samples is still not automated and extremely time-consuming. In this paper we investigate the usefulness of unsupervised machine learning methods for uncovering relevant information about protein functional dynamics. Autoencoders are explored in order to highlight their ability to learn relevant biological patterns, such as structural characteristics. This study aims to provide a better comprehension of how protein conformational transitions evolve in time, within the larger framework of automatically detecting functional motions.

Mihai Teletin, Gabriela Czibula, Maria-Iuliana Bocicor, Silvana Albert, Alessandro Pandini
Pilot Design of a Rule-Based System and an Artificial Neural Network to Risk Evaluation of Atherosclerotic Plaques in Long-Range Clinical Research

Early diagnostics and knowledge of the progress of atherosclerotic plaques are key parameters which can help start the most efficient treatment. Reliable prediction of the growth of atherosclerotic plaques could be a very important part of early diagnostics, to judge the potential impact of the plaque and to decide the necessity of immediate artery recanalization. For this pilot study we have a large set of measured data from a total of 482 patients. Patients were examined every 6 months for at least 5 years; at each examination the width of the plaque on the left and right side was measured using ultrasound B-images and the data were stored in a database. The first part is focused on a rule-based expert system designed for evaluating the suggestion of immediate recanalization according to the progress of the plaque. These results will be verified by an experienced sonographer. This system could be a starting point for designing an artificial neural network with adaptive learning based on image processing of ultrasound B-images for classification of the plaques using feature analysis. The principle of the network is based on edge detection analysis of the plaques using a feed-forward network with the error back-propagation algorithm. Training and learning of the ANN will be time-consuming processes for long-term research. The goal is to create an ANN which can recognize the border of the plaques and measure their width. The expert system and the ANN are two different approaches; however, both of them can cooperate.

Jiri Blahuta, Tomas Soukup, Jakub Skacel
A Multi-channel Multi-classifier Method for Classifying Pancreatic Cystic Neoplasms Based on ResNet

Pancreatic cystic neoplasm (PCN) is one of the most common tumors in the digestive tract. It is still a challenging task for doctors to diagnose the types of pancreatic cystic neoplasms using Computed Tomography (CT) images. Especially for serous cystic neoplasms (SCNs) and mucinous cystic neoplasms (MCNs), doctors can hardly distinguish one from the other with the naked eye owing to the high similarity between them. In this work, a multi-channel multiple-classifier (MCMC) model is proposed to distinguish the two pancreatic cystic neoplasms in CT images. At first, multi-channel images are used to enhance the image edges of the tumor, then a residual network is adopted to extract features. Finally, multiple classifiers are applied to classify the results. Experiments show that the proposed method can effectively improve the classification performance, and the results can help doctors to utilize CT images to achieve reliable non-invasive disease diagnosis.

Haigen Hu, Kangjie Li, Qiu Guan, Feng Chen, Shengyong Chen, Yicheng Ni
Breast Cancer Histopathological Image Classification via Deep Active Learning and Confidence Boosting

Classifying images as benign or malignant is one of the basic image processing tasks in digital pathology for breast cancer diagnosis. Deep learning methods have received more attention recently by training with large-scale labeled data, but collecting and annotating clinical data requires expertise and is time-consuming. The proposed work develops a deep active learning framework to reduce the annotation burden, where the method actively selects the valuable unlabeled samples to be annotated instead of selecting at random. Besides, compared with the standard query strategy in previous active learning methods, the proposed query strategy takes advantage of manual labeling and auto-labeling to emphasize the confidence boosting effect. We validate the proposed work on a public histopathological image dataset. The experimental results demonstrate that the proposed method is able to reduce the amount of labeled data by up to 52% compared with random selection. It also outperforms deep active learning methods with the standard query strategy on the same tasks.

Baolin Du, Qi Qi, Han Zheng, Yue Huang, Xinghao Ding
Epileptic Seizure Prediction from EEG Signals Using Unsupervised Learning and a Polling-Based Decision Process

Epilepsy is a central nervous system disorder defined by spontaneous seizures and may present a risk to the physical integrity of patients due to the unpredictability of the seizures. It affects millions of people worldwide and about 30% of them do not respond to anti-epileptic drug (AED) treatment. Therefore, better seizure control through seizure prediction methods can improve their quality of life. This paper presents a patient-specific method for seizure prediction using a preprocessing wavelet transform associated with the Self-Organizing Map (SOM) unsupervised learning algorithm and a polling-based method. Only 20 min of 23-channel scalp electroencephalogram (EEG) from the CHB-MIT public database has been selected for the training phase for each of nine patients. The proposed method has achieved up to 98% sensitivity, 88% specificity and 91% accuracy. For each subsequence of EEG data received, the system takes less than one second to estimate the patient's state regarding the possibility of an impending seizure.

Lucas Aparecido Silva Kitano, Miguel Angelo Abreu Sousa, Sara Dereste Santos, Ricardo Pires, Sigride Thome-Souza, Alexandre Brincalepe Campo
Classification of Bone Tumor on CT Images Using Deep Convolutional Neural Network

Classification of bone tumors plays an important role in treatment. As manual diagnosis is inefficient, an automatic classification system can help doctors analyze medical images better. However, most existing methods cannot reach high classification accuracy on clinical images because of the high similarity between images. In this paper, we propose a super label guided convolutional neural network (SG-CNN) to classify CT images of bone tumors. Images with two hierarchical labels are fed into the network and learned by its two sub-networks, whose tasks are learning the whole image and focusing on the lesion area to learn more details, respectively. To further improve classification accuracy, we also propose a multi-channel enhancement (ME) strategy for image preprocessing. Owing to the lack of a suitable public dataset, we introduce a CT image dataset of bone tumors. Experimental results on this dataset show that our SG-CNN and ME strategy noticeably improve the classification accuracy.

Yang Li, Wenyu Zhou, Guiwen Lv, Guibo Luo, Yuesheng Zhu, Ji Liu
DSL: Automatic Liver Segmentation with Faster R-CNN and DeepLab

Liver segmentation is a crucial step in computer-assisted diagnosis and surgical planning of liver diseases. However, it is still a quite challenging task for four reasons. First, the grayscale of the liver and its adjacent organ tissues is similar. Second, the partial volume effect makes the liver contour blurred. Third, most clinical images contain serious pathology such as liver tumors. Fourth, liver shape differs from person to person. In this paper, we propose the DSL (detection and segmentation laboratory) method based on Faster R-CNN (faster regions with CNN features) and DeepLab. The DSL consists of two steps: to reduce the scope of subsequent liver segmentation, Faster R-CNN is employed to detect the liver area; next, the detection results are input to DeepLab for segmentation. This work is evaluated on two datasets: 3Dircadb and MICCAI-Sliver07. Compared with state-of-the-art automatic methods, our approach achieves better performance in terms of VOE, RVD, ASD and total score.

Wei Tang, Dongsheng Zou, Su Yang, Jing Shi
Temporal Convolution Networks for Real-Time Abdominal Fetal Aorta Analysis with Ultrasound

The automatic analysis of ultrasound sequences can substantially improve the efficiency of clinical diagnosis. In this work we present our attempt to automate the challenging task of measuring the vascular diameter of the fetal abdominal aorta from ultrasound images. We propose a neural network architecture consisting of three blocks: a convolutional layer for the extraction of imaging features, a Convolution Gated Recurrent Unit (C-GRU) for enforcing the temporal coherence across video frames and exploiting the temporal redundancy of a signal, and a regularized loss function, called CyclicLoss, to impose our prior knowledge about the periodicity of the observed signal. We present experimental evidence suggesting that the proposed architecture can reach an accuracy substantially superior to previously proposed methods, providing an average reduction of the mean squared error from $$0.31\,\mathrm{mm}^2$$ (state of the art) to $$0.09\,\mathrm{mm}^2$$, and a relative error reduction from $$8.1\%$$ to $$5.3\%$$. The mean execution speed of the proposed approach, 289 frames per second, makes it suitable for real-time clinical use.

Nicoló Savioli, Silvia Visentin, Erich Cosmi, Enrico Grisan, Pablo Lamata, Giovanni Montana
An Original Neural Network for Pulmonary Tuberculosis Diagnosis in Radiographs

Tuberculosis (TB) is a widespread and highly contagious disease that may lead to serious harm to patient health. With the development of neural networks, there is increasing attention to applying deep learning to TB diagnosis. Former works validated the feasibility of neural networks for this task, but still suffer from low accuracy due to the lack of samples and the complexity of radiograph information. In this work, we propose an end-to-end neural network system for TB diagnosis, combining preprocessing, lung segmentation, feature extraction and classification. We achieve an accuracy of 0.961 on our labeled dataset, and 0.923 and 0.890 on the Shenzhen and Montgomery public datasets respectively, demonstrating that our work outperforms the state-of-the-art methods in this area.

Junyu Liu, Yang Liu, Cheng Wang, Anwei Li, Bowen Meng, Xiangfei Chai, Panli Zuo
Computerized Counting-Based System for Acute Lymphoblastic Leukemia Detection in Microscopic Blood Images

Counting white blood cells (WBCs) and detecting the morphological abnormality of these cells allow for the diagnosis of some blood diseases such as leukemia. This can be accomplished by automatic quantification analysis of microscope images of blood smears. This paper presents a novel framework that consists of two sub-systems serving as indicators for the detection of Acute Lymphoblastic Leukemia (ALL). The first sub-system aims at counting WBCs by adapting a deep-learning-based approach to separate agglomerates of WBCs. After separation of the WBCs, we propose a second sub-system to detect and count abnormal WBCs (lymphoblasts), as required to diagnose ALL. The performance of the proposed framework is evaluated using the ALL-IDB dataset. The first presented sub-system is able to count WBCs with an accuracy of up to 97.38%. Furthermore, an approach using ensemble classifiers based on handcrafted features is able to detect and count the lymphoblasts with an average accuracy of 98.67%.

Karima Ben-Suliman, Adam Krzyżak
Right Ventricle Segmentation in Cardiac MR Images Using U-Net with Partly Dilated Convolution

Segmentation of anatomical structures in cardiac MR images is an important problem because it is necessary for evaluating the morphology of these structures for diagnostic purposes. An automatic segmentation algorithm with near-human accuracy would be extremely helpful for a medical specialist. In this paper we consider such structures as the endocardium and epicardium of the right ventricle. We compare the performance of the best existing neural networks, such as U-Net and GridNet, and propose our own modification of U-Net in which every second convolution layer is replaced with a dilated (atrous) convolution layer. Evaluation on the benchmark dataset RVSC demonstrates that the proposed algorithm improves segmentation accuracy by up to 6% both for the endocardium and the epicardium compared to the original U-Net. The algorithm also outperforms GridNet on both segmentation problems.

Gregory Borodin, Olga Senyukova
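
A sketch of the architectural change described above, a U-Net style double-convolution block in which the second convolution is dilated (atrous), written in PyTorch; channel sizes and the dilation rate are assumptions, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn

class PartlyDilatedBlock(nn.Module):
    """U-Net style double-conv block where the second convolution is dilated.

    A dilation of 2 with padding 2 keeps the spatial size unchanged, so the
    block is a drop-in replacement for the usual conv-conv pair.
    """
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

x = torch.randn(1, 1, 128, 128)              # one single-channel MR slice
print(PartlyDilatedBlock(1, 32)(x).shape)    # torch.Size([1, 32, 128, 128])
```
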
Model Based on Support Vector Machine for the Estimation of the Heart Rate Variability

This paper shows the design, implementation and analysis of a Machine Learning (ML) model for the estimation of Heart Rate Variability (HRV). Through the integration of Internet of Things devices and technologies, a support tool is proposed for people in health and sports areas who need to know an individual's HRV. The cardiac signals of the subjects were captured through pectoral bands and later classified by a Support Vector Machine algorithm that determined whether the HRV is depressed or increased. The proposed solution has an efficiency of 90.3% and is the initial component for the development of an application oriented to physical training that suggests exercise routines based on the HRV of the individual.

Catalina Maria Hernández-Ruiz, Sergio Andrés Villagrán Martínez, Johan Enrique Ortiz Guzmán, Paulo Alonso Gaona Garcia
High-Resolution Generative Adversarial Neural Networks Applied to Histological Images Generation

For many years, synthesizing photo-realistic images has been a highly relevant task due to its multiple applications, from aesthetic or artistic [19] to medical purposes [1, 6, 21]. In the medical area, this application has had greater impact because most classification or diagnostic algorithms require a significant amount of highly specialized images for their training, yet obtaining them is not easy at all. To solve this problem, many works analyze and interpret images of a specific topic in order to obtain a statistical correlation between the variables that define it. In this way, any set of variables close to the map generated in the previous analysis represents a similar image. Deep learning based methods have allowed the automatic extraction of feature maps, which has helped in the design of more robust models for photo-realistic image synthesis. This work focuses on obtaining the best feature maps for the automatic generation of synthetic histological images. To do so, we propose a Generative Adversarial Network (GAN) [8] to generate the new sample distribution using the feature maps obtained by an autoencoder [14, 20] as latent space instead of a completely random one. To corroborate our results, we present the generated images against the real ones and their respective results using different types of autoencoders to obtain the feature maps.

Antoni Mauricio, Jorge López, Roger Huauya, Jose Diaz

Kernel

Frontmatter
Tensor Learning in Multi-view Kernel PCA

In many real-life applications data can be described through multiple representations, or views. Multi-view learning aims at combining the information from all views, in order to obtain a better performance. Most well-known multi-view methods optimize some form of correlation between two views, while in many applications there are three or more views available. This is usually tackled by optimizing the correlations pairwise. However, this ignores the higher-order correlations that could only be discovered when exploring all views simultaneously. This paper proposes novel multi-view Kernel PCA models. By introducing a model tensor, the proposed models aim to include the higher-order correlations between all views. The paper further explores the use of these models as multi-view dimensionality reduction techniques and shows experimental results on several real-life datasets. These experiments demonstrate the merit of the proposed methods.

Lynn Houthuys, Johan A. K. Suykens

Reinforcement

Frontmatter
ACM: Learning Dynamic Multi-agent Cooperation via Attentional Communication Model

The collaboration of multiple agents is required in many real-world applications, and yet it is a challenging task due to partial observability. Communication is a common scheme to resolve this problem. However, most communication protocols are manually specified and cannot capture the dynamic interactions among agents. To address this problem, this paper presents a novel Attentional Communication Model (ACM) to achieve dynamic multi-agent cooperation. Firstly, we propose a new Cooperation-aware Network (CAN) to capture the dynamic interactions, including both the dynamic routing and messaging among agents. Secondly, the CAN is integrated into a Reinforcement Learning (RL) framework to learn the policy of multi-agent cooperation. The approach is evaluated in both discrete and continuous environments, and promisingly outperforms competing methods.

Xue Han, Hongping Yan, Junge Zhang, Lingfeng Wang
Improving Fuel Economy with LSTM Networks and Reinforcement Learning

This paper presents a system for calculating the optimum velocities and trajectories of an electric vehicle for a specific route. Our objective is to minimize the consumption over a trip without impacting the overall trip time. The system uses a particular segmentation of the route and involves a three-step procedure. In the first step, a neural network is trained on telemetry data to model the consumption of the vehicle based on its velocity and the surface gradient. In the second step, two Q-learning algorithms compute the optimum velocities and the racing line in order to minimize the consumption. In the final step, the computed data is presented to the driver through an interactive application. This system was installed on a light electric vehicle (LEV) and by adopting the suggested driving strategy we reduced its consumption by 24.03% with respect to the classic constant-speed control technique.

Andreas Bougiouklis, Antonis Korkofigkas, Giorgos Stamou
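
A toy tabular Q-learning loop over route segments and discrete velocities, illustrating the second step described above; the consumption model and reward shaping are placeholders, not the neural consumption model trained in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_segments, velocities = 20, np.array([20, 30, 40, 50])     # km/h choices
gradients = rng.uniform(-0.05, 0.05, n_segments)            # surface slope per segment

def consumption(v, grade):
    # Placeholder consumption model; the paper learns this with a neural network.
    return 0.01 * v ** 2 + 50.0 * max(grade, 0.0) * v / 30.0

alpha, gamma, eps = 0.1, 0.95, 0.1
Q = np.zeros((n_segments, len(velocities)))
for episode in range(3000):
    for s in range(n_segments):
        a = rng.integers(len(velocities)) if rng.random() < eps else int(Q[s].argmax())
        # Reward: negative energy use, plus a small bonus for keeping speed (trip time)
        r = -consumption(velocities[a], gradients[s]) + 0.2 * velocities[a] / 50.0
        nxt = Q[s + 1].max() if s + 1 < n_segments else 0.0
        Q[s, a] += alpha * (r + gamma * nxt - Q[s, a])

print("suggested speed profile:", velocities[Q.argmax(axis=1)])
```
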
Action Markets in Deep Multi-Agent Reinforcement Learning

Recent work on learning in multi-agent systems (MAS) is concerned with the ability of self-interested agents to learn cooperative behavior. In many settings, such as resource allocation tasks, the lack of cooperative behavior can be seen as a consequence of wrong incentives. That is, when agents cannot freely exchange their resources, greediness is not uncooperative but only a consequence of reward maximization. In this work, we show how the introduction of markets helps to reduce the negative effects of individual reward maximization. To study the emergence of trading behavior in MAS we use Deep Reinforcement Learning (RL), where agents are self-interested, independent learners represented through Deep Q-Networks (DQNs). Specifically, we propose Action Traders, referring to agents that can trade their atomic actions in exchange for environmental reward. For empirical evaluation we implemented action trading in the Coin Game – and find that trading significantly increases social efficiency in terms of overall reward compared to agents without action trading.

Kyrill Schmid, Lenz Belzner, Thomas Gabor, Thomy Phan
Continuous-Time Spike-Based Reinforcement Learning for Working Memory Tasks

As the brain purportedly employs on-policy reinforcement learning compatible with SARSA learning, and most interesting cognitive tasks require some form of memory while taking place in continuous time, recent work has developed plausible reinforcement learning schemes that are compatible with these requirements. What is lacking is a formulation of both computation and learning in terms of spiking neurons. Such a formulation creates a closer mapping to biology, and also expresses such learning in terms of asynchronous and sparse neural computation. We present a spiking neural network with memory that learns cognitive tasks in continuous time. Learning is implemented in a biologically plausible manner using the AuGMEnT framework, and we show how separate spiking forward and feedback networks suffice for learning the tasks just as fast as the analog CT-AuGMEnT counterpart, while computing efficiently using very few spikes: 1–20 Hz on average.

Marios Karamanis, Davide Zambrano, Sander Bohté
Reinforcement Learning for Joint Extraction of Entities and Relations

Entity and relation extraction is an important task in natural language processing (NLP). Most existing research handles this issue with pipelined approaches or with joint learning methods that rely on human-annotated corpora, which are vulnerable to cascading errors. On the other side, in order to obtain large training data for supervised learning methods, distant supervision has been used in previous work, but it largely suffers from the noisy labeling problem. To solve these problems, we propose a reinforcement learning framework for the joint extraction of entities and relations. First, we construct a relation extractor based on a tagging scheme to extract entities and relations jointly. Meanwhile, a data cleaner is designed to select high-quality sentences and feed them into the relation extractor, cleaning the noisy sentences generated by the distant supervision hypothesis. Afterwards, the two modules are trained jointly with reinforcement learning to optimize the models. In experiments, our model achieved better performance than comparative methods on the public dataset.

Wenpeng Liu, Yanan Cao, Yanbing Liu, Yue Hu, Jianlong Tan

Pattern Recognition/Text Mining/Clustering

Frontmatter
TextNet for Text-Related Image Quality Assessment

With the rapid increase of consumer photos, annotating and retrieving such images with text is becoming more significant, which requires optical character recognition (OCR) techniques. However, to predict OCR accuracy, text-related image quality assessment (TIQA) is necessary and of great value, especially in online business processes. With growing interest in text, TIQA aims to compute the quality score of an image by predicting the degree of degradation at textual regions. To assess text-related quality on detected textlines, this paper proposes a deep neural network, TextNet, which mainly includes three layers: encoder, decoder, and prediction. The decoder layer combines the encoded feature map with the decoded map through deconvolution and concatenation. The prediction layer is designed for textline detection and quality assessment with a new loss function. Under the TIQA framework, the overall text-related image quality is computed by pooling the quality of all detected textlines by way of weighted averaging. Experimental results show that the proposed framework can work well in jointly assessing text-related image quality and detecting textlines, even for unknown scene images.

Hongyu Li, Junhua Qiu, Fan Zhu
A Target Dominant Sets Clustering Algorithm

The dominant sets clustering algorithm has some interesting properties and has achieved impressive results in experiments. However, with the data represented as feature vectors, we need to estimate data similarity, and the regularization parameter influences the clustering results and the number of clusters significantly. To obtain a specified number of clusters efficiently with the dominant sets algorithm, we present a target dominant sets clustering algorithm. Our algorithm detects clusters in the first step, and then extracts dominant sets around the cluster centers based on a specially designed game dynamics. In addition, we show that this game dynamics can be utilized to reduce the computation and memory load significantly. Experiments show that our algorithm compares favorably with the original dominant sets algorithm in clustering quality, with a much smaller computation load.

Jian Hou, Chengcong Lv, Aihua Zhang, Xu E.
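
A minimal sketch of the game dynamics the dominant-sets framework relies on: discrete replicator dynamics run on a pairwise affinity matrix, returning the members of one dominant set. The affinity construction and membership threshold are illustrative choices, not the paper's specially designed dynamics.

```python
import numpy as np

def dominant_set(A: np.ndarray, iters: int = 2000, tol: float = 1e-8):
    """Extract one dominant set from a symmetric affinity matrix A
    (zero diagonal) with discrete replicator dynamics."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                  # start from the barycenter
    for _ in range(iters):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)
        if np.abs(x_new - x).sum() < tol:
            x = x_new
            break
        x = x_new
    return np.where(x > 1.0 / n)[0]          # members with above-average support

rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0, .3, (30, 2)), rng.normal(3, .3, (30, 2))])
D2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)
A = np.exp(-D2 / 0.5)
np.fill_diagonal(A, 0.0)
print("dominant set members:", dominant_set(A))
```
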
Input Pattern Complexity Determines Specialist and Generalist Populations in Drosophila Neural Network

Neural heterogeneity has been reported as beneficial for information processing in neural networks. An example of this heterogeneity can be observed in the neural responses to stimuli, which divide the neurons into two populations: specialists and generalists. It has been observed in the neural network of the locust olfactory system that a balance of these two neural populations is crucial for achieving correct pattern recognition. However, these results may not be generalizable to other biological neural networks. Therefore, we took advantage of a recent biological study of the Drosophila connectome to study the balance of these two neural populations in its neural network. We conclude that the balance between specialists and generalists also occurs in Drosophila. This balancing process does not affect the neural network connectivity, since specialist and generalist neurons are not differentiable by the number of incoming connections.

Aaron Montero, Jessica Lopez-Hazas, Francisco B. Rodriguez
A Hybrid Planning Strategy Through Learning from Vision for Target-Directed Navigation

In this paper, we propose a goal-directed navigation system consisting of two planning strategies that both rely on vision but work on different scales. The first one works on a global scale and is responsible for generating spatial trajectories leading to the neighboring area of the target. It is a biologically inspired neural planning and navigation model involving learned representations of place and head-direction (HD) cells, where a planning network is trained to predict the neural activities of these cell representations given selected action signals. Recursive prediction and optimization of the continuous action signals generates goal-directed activation sequences, in which states and action spaces are represented by the population of place-, HD- and motor neuron activities. To compensate the remaining error from this look-ahead model-based planning, a second planning strategy relies on visual recognition and performs target-driven reaching on a local scale so that the robot can reach the target with a finer accuracy. Experimental results show that through combining these two planning strategies the robot can precisely navigate to a distant target.

Xiaomao Zhou, Cornelius Weber, Chandrakant Bothe, Stefan Wermter

Optimization/Recommendation

Frontmatter
Check Regularization: Combining Modularity and Elasticity for Memory Consolidation

Catastrophic forgetting, which means that old tasks are mostly forgotten when new tasks are learned, is a crucial problem of neural networks for autonomous robots. This problem arises because backpropagation overwrites all network parameters, and it can therefore be solved by not overwriting the parameters important for the old tasks. Hence, regularization methods, represented by elastic weight consolidation, give globally stable equilibrium points at the optimal parameters for the old tasks. Unfortunately, they aim to hold all parameters, even if the regularization is weak. This paper therefore proposes a regularization method, named Check regularization, to consolidate only the parameters important for the tasks and to initialize the other parameters in preparation for future tasks. Simulations with two tasks to be learned sequentially show that the proposed method outperforms the previous method under a condition where the interference between the tasks is severe.

Taisuke Kobayashi
Con-CNAME: A Contextual Multi-armed Bandit Algorithm for Personalized Recommendations

Reinforcement learning algorithms play an important role in modern applications and have been applied in many domains. For example, the personalized recommendation problem can be modelled as a contextual multi-armed bandit problem in reinforcement learning. In this paper, we propose a contextual bandit algorithm based on Contexts and the Chosen Number of Arm with Minimal Estimation, Con-CNAME in short. The continuous exploration and the context used in our algorithm can address the cold start problem in recommender systems. Furthermore, the Con-CNAME algorithm can still make recommendations under emergency circumstances where contexts suddenly become unavailable. In the experimental evaluation, the reference range of key parameters and the stability of Con-CNAME are discussed in detail. In addition, the performance of Con-CNAME is compared with some classic algorithms. Experimental results show that our algorithm outperforms several bandit algorithms.

Xiaofang Zhang, Qian Zhou, Tieke He, Bin Liang
Real-Time Session-Based Recommendations Using LSTM with Neural Embeddings

Recurrent neural networks have successfully been used as core elements of intelligent recommendation engines in e-commerce platforms. We demonstrate how LSTM networks can be applied to recommend products of interest for a customer, based on the events of the current session only. Inspired by recent advances in natural language processing, our network computes vector space representations (VSR) of available products and uses these representations to derive predictions of user behaviour based on the clickstream of the current session. The experimental results suggest that the Embedding-LSTM is well suited for session-based recommendations, thus offering a promising method for attacking the user cold start problem. A live test gives proof that our LSTM model outperforms a recommendation model created with traditional methods. We also show that providing the learned VSR as features to neighbourhood-based methods leads to improved performance as compared to standard nearest neighbour methods.

David Lenz, Christian Schulze, Michael Guckert
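
A minimal Keras sketch of the Embedding-LSTM idea: products are encoded as learned vectors, the session clickstream is read by an LSTM, and a softmax layer predicts the next product. Vocabulary size, sequence length, layer sizes and the toy data are assumptions for illustration, not the authors' production setup.

```python
import numpy as np
from tensorflow.keras import layers, models

n_products, seq_len, emb_dim = 1000, 10, 64

model = models.Sequential([
    layers.Embedding(input_dim=n_products, output_dim=emb_dim,
                     mask_zero=True),                      # 0 = padding token
    layers.LSTM(128),
    layers.Dense(n_products, activation="softmax"),        # next-product scores
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy sessions: each row is a (padded) clickstream, the label is the next click.
rng = np.random.default_rng(4)
sessions = rng.integers(1, n_products, size=(512, seq_len))
next_item = rng.integers(1, n_products, size=(512,))
model.fit(sessions, next_item, epochs=1, batch_size=64, verbose=0)

top5 = np.argsort(model.predict(sessions[:1], verbose=0)[0])[-5:]
print("top-5 recommendations for the first session:", top5[::-1])
```
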
Imbalanced Data Classification Based on MBCDK-means Undersampling and GA-ANN

The imbalanced classification problem often arises in classification tasks where one class contains few samples while the other contains a great number of samples. When traditional machine learning classification methods are applied to an imbalanced data set, the classification performance is poor and the time cost is high. As a result, the mini batch with cluster distribution K-means (MBCDK-means) undersampling method and a GA-ANN model are proposed in this paper to solve these two problems. MBCDK-means chooses samples according to the cluster distribution and the distance from the majority class clusters to the minority class cluster center. This technique keeps the original distribution of the clusters and increases the sampling rate of boundary samples, which helps to improve the final classification performance. At the same time, compared with the classic K-means clustering undersampling method, the presented MBCDK-means undersampling method has lower time complexity. Artificial neural networks (ANNs) are widely used in data classification but are easily trapped in a local minimum. A genetic algorithm artificial neural network (GA-ANN), which uses a genetic algorithm to optimize the weights and biases of the neural network, is therefore proposed. GA-ANN achieves better performance than ANN. Experimental results on 8 data sets show the effectiveness of the proposed algorithm.

Anping Song, Quanhua Xu
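
A sketch of cluster-based undersampling of the majority class in the spirit of MBCDK-means, using scikit-learn's MiniBatchKMeans; the per-cluster quota rule below is a simplification and does not reproduce the paper's boundary-weighting scheme.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def cluster_undersample(X_maj, X_min, n_clusters=10, random_state=0):
    """Undersample the majority class down to the minority-class size,
    drawing from each majority cluster proportionally to its size."""
    km = MiniBatchKMeans(n_clusters=n_clusters, random_state=random_state,
                         n_init=3).fit(X_maj)
    target, rng = len(X_min), np.random.default_rng(random_state)
    kept = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        quota = max(1, int(round(target * len(idx) / len(X_maj))))
        kept.append(rng.choice(idx, size=min(quota, len(idx)), replace=False))
    return X_maj[np.concatenate(kept)]

rng = np.random.default_rng(5)
X_majority = rng.normal(0, 1, (5000, 8))
X_minority = rng.normal(2, 1, (250, 8))
X_bal = cluster_undersample(X_majority, X_minority)
print("majority reduced from", len(X_majority), "to", len(X_bal))
```
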
Evolutionary Tuning of a Pulse Mormyrid Electromotor Model to Generate Stereotyped Sequences of Electrical Pulse Intervals

Adjusting the parameters of a neural network model to reproduce complete sets of biologically plausible behaviors is a complex task, even in a well-described neural system. We show here a method for evolving a model of the mormyrid electromotor command chain to reproduce highly realistic temporal firing patterns as described by neuroethological studies in this system. Our method uses genetic algorithms to tune unknown parameters in the synapses of the network. The developed fitness function simulates each evolved model under different network inputs and compares its output with the target patterns from the living animal. The obtained synaptic configuration can reveal new information about the functioning of electromotor systems.

Angel Lareo, Pablo Varona, F. B. Rodriguez
An Overview of Frank-Wolfe Optimization for Stochasticity Constrained Interpretable Matrix and Tensor Factorization

In this paper we give an overview of utilizing Frank-Wolfe optimization to find interpretable constrained matrix and tensor factorizations. We particularly concentrate on imposing stochasticity constraints and show how the factors of Archetypal Analysis as well as Decomposition Into Directed Components can be found using Frank-Wolfe optimization to decompose bipartite matrices and asymmetric similarity tensors, respectively. We show how the derived algorithms perform by presenting case studies from behavioral profiling in digital games.

Rafet Sifa

Computational Neuroscience

Frontmatter
A Bio-Feasible Computational Circuit for Neural Activities Persisting and Decaying

The neurophysiological view considers working memory (WM) as a persistence of neural information in the cerebral cortex [1]: external stimulation activates some pyramidal cells, their continued activation after the stimulus is removed represents the memory of the stimulus, and as the activity fades the memory gradually decays. More and more studies [2] have shown that the mechanism by which neural activities persist and decay is related not only to the structure of neural circuits, but also closely to synaptic mechanisms. In this paper, we design a neural computational circuit for the persistence of neural activities by combining synaptic mechanisms and the structure of the neural circuit. Firstly, regarding the circuit structure, a recurrent circuit of pyramidal neurons is used as the main circuit to achieve persistence, and an auxiliary circuit is designed to regulate the firing rate of the main circuit to achieve the "decaying" of neural activities. Secondly, in the computational circuit, we consider the mechanisms of synaptic depression and slow synapses. From the structure of neural circuits and the synaptic mechanisms, we try to explore the neural computational mechanism by which neural information persists and decays over time, which is beneficial for exploring the true neural mechanism of WM.

Dai Dawei, Weihui, Su Zihao
Granger Causality to Reveal Functional Connectivity in the Mouse Basal Ganglia-Thalamocortical Circuit

In this study we analyze simultaneously recorded spike trains at several levels of the basal ganglia-thalamocortical circuit in freely moving parvalbumin (PV)-deficient and wildtype (WT) (i.e., expressing PV at normal levels) mice. Parvalbumin is a Calcium-binding protein, mainly expressed in GABAergic inhibitory neurons, that affects the dynamics of the Excitatory/Inhibitory balance at the network level. We apply Granger causality analysis in order to measure the functional connectivity of different selected brain areas and their possible alterations due to PV depletion. Our results show that connections between ventromedial prefrontal cortex and Nucleus Accumbens are not affected by PV depletion.

Alessandra Lintas, Takeshi Abe, Alessandro E. P. Villa, Yoshiyuki Asai
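
The same Granger-causality test used in the study can be run on binned firing-rate series with statsmodels; in this toy example the coupling, lags and data are invented, so only the mechanics of the test are illustrated.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(6)
n = 500
x = rng.standard_normal(n)                     # "source" area firing rate
y = np.zeros(n)
for t in range(2, n):                          # y is driven by past x
    y[t] = 0.6 * y[t - 1] + 0.5 * x[t - 2] + 0.3 * rng.standard_normal()

# Column order matters: the test asks whether the 2nd column helps predict the 1st.
data = np.column_stack([y, x])
res = grangercausalitytests(data, maxlag=3, verbose=False)
print("p-value (lag 2):", res[2][0]["ssr_ftest"][1])
```
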
A Temporal Estimate of Integrated Information for Intracranial Functional Connectivity

A major challenge in computational and systems neuroscience concerns the quantification of information processing at various scales of the brain’s anatomy. In particular, using human intracranial recordings, the question we ask in this paper is: How can we estimate the informational complexity of the brain given the complex temporal nature of its dynamics? To address this we work with a recent formulation of network integrated information that is based on the Kullback-Leibler divergence between the multivariate distribution on the set of network states versus the corresponding factorized distribution over its parts. In this work, we extend this formulation for temporal networks and then apply it to human brain data obtained from intracranial recordings in epilepsy patients. Our findings show that compared to random re-wirings of the data, functional connectivity networks, constructed from human brain data, score consistently higher in the above measure of integrated information. This work suggests that temporal integrated information may indeed be a good starting point as a future measure of cognitive complexity.

Xerxes D. Arsiwalla, Daniel Pacheco, Alessandro Principe, Rodrigo Rocamora, Paul Verschure
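
A toy version of the quantity described above: the Kullback-Leibler divergence between the empirical joint distribution over binary network states and the product of its per-node marginals. The binarized data are random stand-ins for intracranial recordings, and this is not the authors' exact temporal formulation.

```python
import numpy as np

def integration(states: np.ndarray, eps: float = 1e-12) -> float:
    """KL divergence between the joint distribution over binary network
    states and the factorized (independent-node) distribution, in bits."""
    n_samples, n_nodes = states.shape
    # Empirical joint: probability of each observed binary pattern
    patterns, counts = np.unique(states, axis=0, return_counts=True)
    p_joint = counts / n_samples
    # Factorized distribution evaluated on the same patterns
    p_on = states.mean(axis=0)                       # marginal P(node = 1)
    p_fact = np.prod(np.where(patterns == 1, p_on, 1 - p_on), axis=1)
    return float(np.sum(p_joint * np.log2((p_joint + eps) / (p_fact + eps))))

rng = np.random.default_rng(7)
src = rng.integers(0, 2, (2000, 1))
noise = (rng.random((2000, 4)) < 0.1).astype(int)
coupled = np.repeat(src, 4, axis=1) ^ noise          # 4 nodes tracking one source
independent = rng.integers(0, 2, (2000, 4))
print("coupled:", integration(coupled), "independent:", integration(independent))
```
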

SOM/SVM

Frontmatter
Randomization vs Optimization in SVM Ensembles

Ensembles of SVMs are notoriously difficult to build because of the stability of the model provided by a single SVM. The application of standard bagging or boosting algorithms generally leads to small accuracy improvements at a computational cost that increases with the size of the ensemble. In this work, we leverage subsampling and the diversification of hyperparameters through optimization and randomization to build SVM ensembles at a much lower computational cost than training a single SVM on the same data. Furthermore, the accuracy of these ensembles is comparable to that of a single SVM and of a fully optimized SVM ensemble.

Maryam Sabzevari, Gonzalo Martínez-Muñoz, Alberto Suárez
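
The subsample-and-randomize recipe from the abstract in a few lines of scikit-learn: each SVM is fit on a small random subsample with a randomly drawn (C, gamma) pair, and predictions are combined by majority vote; subsample size, ensemble size and hyperparameter ranges are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rng = np.random.default_rng(0)
members, n_members, sub = [], 25, 400          # small subsamples -> cheap fits
for _ in range(n_members):
    idx = rng.choice(len(X_tr), size=sub, replace=False)
    C, gamma = 10 ** rng.uniform(-1, 2), 10 ** rng.uniform(-3, 0)   # random hyperparameters
    members.append(SVC(C=C, gamma=gamma).fit(X_tr[idx], y_tr[idx]))

votes = np.array([m.predict(X_te) for m in members])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)    # majority vote
print("ensemble accuracy:", (ensemble_pred == y_te).mean())
```
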
An Energy-Based Convolutional SOM Model with Self-adaptation Capabilities

We present a new self-organized neural model that we term ReST (Resilient Self-organizing Tissue). ReST can be run as a convolutional neural network (CNN), possesses a $$C^\infty $$ energy function as well as a probabilistic interpretation of neural activities, which arises from the constraint of log-normal activity distribution over time that is enforced during learning. We discuss the advantages of a $$C^\infty $$ energy function and present experiments demonstrating the self-organization and self-adaptation capabilities of ReST. In addition, we provide a performance benchmark for the publicly available TensorFlow-implementation.

Alexander Gepperth, Ayanava Sarkar, Thomas Kopinski
A Hierarchy Based Influence Maximization Algorithm in Social Networks

Influence maximization refers to mining the top-K most influential nodes from a social network to maximize the final propagation of influence in the network, which is one of the key issues in social network analysis. It is a discrete optimization problem and is NP-hard under both the independent cascade and linear threshold models. Existing research shows that although the greedy algorithm can achieve an approximation ratio of $$ \left( {1 - 1/e} \right) $$ , its time cost is expensive. Heuristic algorithms can improve the efficiency, but they sacrifice a certain degree of accuracy. In order to improve efficiency without sacrificing much accuracy, in this paper we propose a new approach called the Hierarchy based Influence Maximization algorithm (HBIM in short) to mine top-K influential nodes. It is a two-phase method: (1) an algorithm for detecting information diffusion levels based on the first-order and second-order proximity between social nodes; (2) a dynamic programming algorithm for selecting levels to find influential nodes. Experiments show that our algorithm outperforms the benchmarks.

Lingling Li, Kan Li, Chao Xiang
Convolutional Neural Networks in Combination with Support Vector Machines for Complex Sequential Data Classification

Trying to extract features from complex sequential data for classification and prediction problems is an extremely difficult task. Deep Machine Learning techniques, such as Convolutional Neural Networks (CNNs), have been exclusively designed to face this class of problems. Support Vector Machines (SVMs) are a powerful technique for general classification problems, regression, and outlier detection. In this paper we present the development and implementation of an innovative by design combination of CNNs with SVMs as a solution to the Protein Secondary Structure Prediction problem, with a novel two dimensional (2D) input representation method, where Multiple Sequence Alignment profile vectors are placed one under another. This 2D input is used to train the CNNs achieving preliminary results of 80.40% per residue accuracy (Q3), which are expected to increase with the use of larger training datasets and more sophisticated ensemble methods.

Antreas Dionysiou, Michalis Agathocleous, Chris Christodoulou, Vasilis Promponas
Classification of SIP Attack Variants with a Hybrid Self-enforcing Network

The Self-Enforcing Network (SEN), a self-organized learning neural network, is used to analyze SIP attack traffic to obtain classifications for attack variants that use one of four widely used User Agents. These classifications can be used to categorize SIP messages regardless of User-Agent field. For this, we combined SEN with clustering methods to increase the amount of traffic that can be handled and analyzed; the attack traffic was observed at a honeynet system over a month. The results were multiple categories for each User Agent with a low rate of overlap between the User Agents.

Waldemar Hartwig, Christina Klüver, Adnan Aziz, Dirk Hoffstadt

Anomaly Detection/Feature Selection/Autonomous Learning

Frontmatter
Generalized Multi-view Unsupervised Feature Selection

Although many unsupervised feature selection (UFS) methods have been proposed, most of them still suffer from the following limitations: (1) these methods are usually applicable only to single-view data and thus cannot well exploit the ubiquitous complementarity among multiple views; (2) most existing UFS methods model the correlation between cluster structure and data distribution in linear ways, so more general correlations are difficult to explore. Therefore, we propose a novel unsupervised feature selection method, termed generalized Multi-View Unsupervised Feature Selection (gMUFS), to simultaneously explore the complementarity of multiple views and the complex correlation between cluster structure and the selected features. Specifically, a multi-view consensus pseudo label matrix is learned, and the most valuable features are selected by maximizing the dependence between the consensus cluster structure and the selected features in kernel spaces with the Hilbert-Schmidt independence criterion (HSIC).

Yue Liu, Changqing Zhang, Pengfei Zhu, Qinghua Hu
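
A compact numpy implementation of the (biased) Hilbert-Schmidt Independence Criterion between a candidate feature and cluster labels, the dependence measure named above; the RBF kernels, bandwidth and toy data are standard illustrative choices, not necessarily those used in gMUFS.

```python
import numpy as np

def rbf_gram(X: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """RBF (Gaussian) Gram matrix of the rows of X."""
    sq = ((X[:, None] - X[None]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def hsic(X: np.ndarray, Y: np.ndarray) -> float:
    """Biased HSIC estimate: trace(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    K, L = rbf_gram(X), rbf_gram(Y)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return float(np.trace(K @ H @ L @ H) / (n - 1) ** 2)

rng = np.random.default_rng(8)
labels = rng.integers(0, 3, (300, 1)).astype(float)
informative = labels + 0.1 * rng.standard_normal((300, 1))   # tracks the clusters
noise = rng.standard_normal((300, 1))
print("HSIC informative:", hsic(informative, labels), " noise:", hsic(noise, labels))
```
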
Performance Anomaly Detection Models of Virtual Machines for Network Function Virtualization Infrastructure with Machine Learning

Network Function Virtualization (NFV) technology has become a new solution for running network applications. It proposes a new paradigm for network function management and has created much room for innovation in network technology. However, the complexity of the NFV Infrastructure (NFVI) imposes hard-to-predict relationships between Virtualized Network Function (VNF) performance metrics (e.g., latency, throughput), the underlying allocated resources (e.g., load of vCPU), and the overall system workload. The evolving scenario of NFV therefore calls for adequate performance analysis methodologies, in which early detection of performance anomalies plays a significant role in providing high-quality network services. In this paper, we propose a novel method for detecting performance anomalies in NFV infrastructure with machine learning methods. We present a case study on the open source NFV-oriented project Clearwater, which is an IP Multimedia Subsystem (IMS) NFV application. Several classical classifiers are applied and compared empirically on an anomaly dataset which we built ourselves. Taking the risk of over-fitting into account, the experimental results show that neural networks are the best anomaly detection model, with an accuracy of over 94%.

Juan Qiu, Qingfeng Du, Yu He, YiQun Lin, Jiaye Zhu, Kanglin Yin
Emergence of Sensory Representations Using Prediction in Partially Observable Environments

In order to explore and act autonomously in an environment, an agent can learn from the sensorimotor information that is captured while acting. By extracting the regularities in this sensorimotor stream, it can build a model of the world, which in turn can be used as a basis for action and exploration. This requires the acquisition of compact representations from possibly high-dimensional raw observations. In this paper, we propose a model which integrates sensorimotor information over time and projects it into a sensory representation. It is trained by performing sensorimotor prediction. We use a simple example to emphasize the role of motor information and memory in learning sensory representations.

Thibaut Kulak, Michael Garcia Ortiz

Signal Detection

Frontmatter
Change Detection in Individual Users’ Behavior

The analysis of dynamic data is challenging. Indeed, the structure of such data changes over time, potentially very quickly. In addition, the objects in such data-sets are often complex. In this paper, our practical motivation is to perform user profiling, i.e. to follow users' geographic locations and navigation logs to detect changes in their habits and interests. We propose a new framework in which we first create, for each user, a signal of the evolution in the distribution of their interests and another signal based on the distribution of physical locations recorded during their navigation. Then, we automatically detect the changes in interest or location thanks to a new jump-detection algorithm. We compared the proposed approach with a set of existing signal-based algorithms on a set of artificial data-sets and showed that our approach is faster and produces fewer errors for this kind of task. We then applied the proposed framework to a real data-set and detected different categories of behavior among the users, from users with very stable interests and locations to users with clear changes in their behavior, either in interest, location or both.

Parisa Rastin, Guénaël Cabanes, Basarab Matei, Jean-Marc Marty
Extraction and Localization of Non-contaminated Alpha and Gamma Oscillations from EEG Signal Using Finite Impulse Response, Stationary Wavelet Transform, and Custom FIR

The alpha and gamma oscillations derived from EEG signals are useful tools in recognizing cognitive states and several cerebral disorders. However, undesirable artifacts exist among the electrophysiological signals which lead to unreliable results in the extraction and localization of these oscillations. We introduce three filtering techniques, based on Finite Impulse Response (FIR) filters, the Stationary Wavelet Transform (SWT) method and a custom FIR filter, to extract the non-contaminated (pure) oscillations and localize their responsible sources using the Independent Component Analysis (ICA) technique. In our results, we compare the effectiveness of these filtering techniques in extracting and localizing non-contaminated alpha and gamma oscillations. We propose the most accurate technique for the extraction of pure alpha and gamma oscillations, and we also present the cortical regions responsible for the generation of these oscillations.

Najmeddine Abdennour, Abir Hadriche, Tarek Frikha, Nawel Jmail
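
A sketch of extracting an alpha-band (8-12 Hz) oscillation with a linear-phase FIR band-pass designed by scipy's window method; the sampling rate, filter order and synthetic signal are illustrative, and the paper additionally compares SWT and a custom FIR and localizes sources with ICA.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 256.0
t = np.arange(0, 10, 1 / fs)                           # 10 s of "EEG" at 256 Hz
rng = np.random.default_rng(9)
eeg = (np.sin(2 * np.pi * 10 * t)                      # 10 Hz alpha component
       + 0.5 * np.sin(2 * np.pi * 40 * t)              # 40 Hz gamma component
       + 0.5 * rng.standard_normal(t.size))            # broadband noise

# Linear-phase FIR band-pass for the alpha band (8-12 Hz)
taps = firwin(numtaps=257, cutoff=[8.0, 12.0], pass_zero=False, fs=fs)
alpha = filtfilt(taps, [1.0], eeg)                     # zero-phase filtering

print("alpha-band power ratio:", np.var(alpha) / np.var(eeg))
```
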

Long-Short Term Memory/Chaotic Complex Models

Frontmatter
Chaotic Complex-Valued Associative Memory with Adaptive Scaling Factor

In this paper, we propose a Chaotic Complex-Valued Associative Memory with an Adaptive Scaling Factor which can realize dynamic association of multi-valued patterns. In the proposed model, the scaling factor of refractoriness is adjusted according to the maximum absolute value of the internal state up to that time, similarly to the conventional Chaotic Associative Memory with Adaptive Scaling Factor. Computer experiments confirm that the proposed model has the same dynamic association ability as the conventional model, and that it retains a similar recall capability even for numbers of neurons not used in the automatic adjustment of the parameters.

Daisuke Karakama, Norihito Katamura, Chigusa Nakano, Yuko Osana
Computation of Air Traffic Flow Management Performance with Long Short-Term Memories Considering Weather Impact

In this paper we compute the impact of weather events on airport performance, which is measured as the deviation between actual and scheduled timestamps (delay). Weather phenomena are categorized by the Air Traffic Management Airport Performance weather algorithm, which aims to quantify weather conditions at European airports. A comprehensive dataset of flights from 2013 for the example airport Hamburg, together with the accompanying weather data, yields both a quantification of the individual airport performance and an aggregated weather-performance metric. To model the complex correlations between weather and flight schedule data, we use advanced machine learning methods, namely Long Short-Term Memory networks. Variously structured models are applied to simulation scenarios that consider differences in weather-affected air traffic dynamics.

Stefan Reitmann, Michael Schultz
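A minimal sketch of a delay model in the spirit of the paper above (not the authors' model; the feature dimension, horizon and random data are illustrative assumptions): an LSTM maps a sequence of categorized weather features to a delay value for the following time slot.

```python
import torch
import torch.nn as nn

class DelayLSTM(nn.Module):
    def __init__(self, n_weather_features=6, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(n_weather_features, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)    # predicted delay (e.g. minutes)

    def forward(self, weather_seq):
        # weather_seq: (batch, T, n_weather_features), e.g. hourly weather categories
        out, _ = self.lstm(weather_seq)
        return self.head(out[:, -1])            # delay after the last observed hour

model = DelayLSTM()
weather = torch.randn(32, 24, 6)                # one day of hourly features per sample
print(model(weather).shape)                     # torch.Size([32, 1])
```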

Wavelet/Reservoir Computing

Frontmatter
A Study on the Influence of Wavelet Number Change in the Wavelet Neural Network Architecture for 3D Mesh Deformation Using Trust Region Spherical Parameterization

The 3D deformation and simulation process frequently includes many iterations of geometric design changes. We propose in this paper a study on the influence of the number of wavelets in the wavelet neural network architecture for a 3D mesh deformation method. Our approach focuses on creating a series of intermediate objects leading to the target object. It uses a trust-region spherical parameterization algorithm as a common domain for the source and target objects, minimizing angle and area distortions and ensuring a bijective 3D spherical parameterization, and a multi-library wavelet neural network (MLWNN) structure as an approximation tool for feature alignment between the source and target models, to guarantee a successful deformation process. Experimental results show that the spherical parameterization algorithm preserves angles and areas, that an MLWNN structure relying on various mother wavelet families aligns mesh features and minimizes distortion with fixed features, and that increasing the number of wavelets facilitates feature alignment, which reduces the error between the objects and thus yields a better deformation scheme.

Naziha Dhibi, Akram Elkefai, Chokri Ben Amar
Combining Memory and Non-linearity in Echo State Networks

Echo State Networks (ESNs) represent a successful methodology for the efficient modeling of Recurrent Neural Networks. The untrained recurrent dynamics in ESNs apparently need to comply with a trade-off between two desirable features: implementing a long memory over past inputs and the ability to model non-linear dynamics. In this paper, we analyze this memory/non-linearity trade-off from the perspective of recurrent model design. In particular, we propose two variants of the standard ESN model, aiming at combining linear and non-linear dynamics both in the architectural setup of the recurrent system and at the level of the recurrent units' activation functions. The proposed models are experimentally assessed on ad-hoc defined tasks as well as on standard benchmarks in the area of Reservoir Computing. Results show that the introduced ESN variants can strike the proper trade-off between memory and non-linearity requirements while improving the performance of standard ESNs. Moreover, the analysis of the degree of non-linearity employed in the reservoir system can provide useful insights into the characterization of the learning task at hand.

Eleonora Di Gregorio, Claudio Gallicchio, Alessio Micheli
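A minimal sketch of the memory/non-linearity idea (not the authors' exact variants; the reservoir size, spectral radius and the half-linear/half-tanh split are illustrative assumptions): a reservoir in which half of the units are linear, favoring memory, and half use tanh, providing non-linearity, with a standard ridge-regression readout.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

linear_mask = np.arange(n_res) < n_res // 2       # first half linear, second half tanh

def reservoir_states(inputs):
    x, states = np.zeros(n_res), []
    for u in inputs:
        pre = W_in @ np.atleast_1d(u) + W @ x
        x = np.where(linear_mask, pre, np.tanh(pre))
        states.append(x.copy())
    return np.array(states)

# Ridge-regression readout on a toy memory task: recall the input from 5 steps back.
u = rng.uniform(-1, 1, 500)
X, y = reservoir_states(u)[50:], u[45:-5]         # discard washout, delay-5 target
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print(((X @ W_out - y) ** 2).mean())
```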
A Neural Network of Multiresolution Wavelet Analysis

The wavelet transform is a powerful signal-processing method that decomposes the studied signal over a special basis with unique properties, the most important of which are compactness and multiresolution: wavelet functions are produced from the mother wavelet by translation and dilation. Wavelet neural networks (WNN) are a family of approximation algorithms that use wavelet functions to decompose the approximated function. If only approximation and no inverse transformation is needed, the values of the translation and dilation coefficients may be determined during network training, and the windows corresponding to different wavelet functions may overlap, making the whole system much more efficient. Here we present a new type of WNN, the Adaptive Window WNN (AWWNN), in which window positions and wavelet levels are determined with a special iterative procedure. Two modifications of AWWNN are tested against a linear model and a multi-layer perceptron on the Mackey-Glass benchmark prediction problem.

Alexander Efitorov, Vladimir Shiroky, Sergey Dolenko
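A minimal sketch of a wavelet network with trainable translation and dilation (not the AWWNN procedure itself; the Mexican-hat wavelet, the number of wavelons and the toy target are illustrative assumptions): the translation b, dilation a and output weights are all fitted by gradient descent.

```python
import torch
import torch.nn as nn

class WaveletNet(nn.Module):
    def __init__(self, n_wavelons=10):
        super().__init__()
        self.a = nn.Parameter(torch.ones(n_wavelons))              # dilation
        self.b = nn.Parameter(torch.linspace(-1, 1, n_wavelons))   # translation
        self.w = nn.Parameter(torch.zeros(n_wavelons))             # output weights

    def forward(self, x):
        z = (x.unsqueeze(-1) - self.b) / self.a
        psi = (1 - z ** 2) * torch.exp(-z ** 2 / 2)                # Mexican-hat wavelet
        return psi @ self.w

net = WaveletNet()
opt = torch.optim.Adam(net.parameters(), lr=0.01)
x = torch.linspace(-1, 1, 200)
y = torch.sin(4 * x)                                               # toy target function
for _ in range(2000):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
print(loss.item())
```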

Similarity Measures/PSO - RBF

Frontmatter
Fast Supervised Selection of Prototypes for Metric-Based Learning

A crucial factor for successful learning is finding more convenient representations for a problem, such that subsequent processing can be carried out by linear or non-linear modeling methods. Similarity functions are a flexible way to express knowledge about a problem and to capture meaningful relations in the input space. In this paper we use similarity functions to find an alternative data representation, which is then reduced by selecting a subset of relevant prototypes in a supervised way. The idea is tested on a set of modelling problems characterized by a mixture of data types and different amounts of missing values. The results demonstrate competitive or better performance than traditional methods in terms of prediction error and sparsity of the representation.

Lluís A. Belanche
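A minimal sketch of prototype-based similarity representations (not the paper's selection algorithm; the RBF similarity, the greedy criterion and the scikit-learn components are illustrative assumptions): each example is re-represented by its similarity to a small set of prototypes, which are chosen greedily so that a linear model on this representation cross-validates well.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def score_with_prototypes(proto_idx):
    S = rbf_kernel(X, X[proto_idx])          # similarity-space representation
    return cross_val_score(LogisticRegression(max_iter=1000), S, y, cv=5).mean()

rng = np.random.default_rng(0)
candidates = list(rng.choice(len(X), size=40, replace=False))  # candidate pool
selected = []
for _ in range(10):                          # pick 10 prototypes greedily
    best = max(candidates, key=lambda i: score_with_prototypes(selected + [i]))
    selected.append(best)
    candidates.remove(best)
print(selected, round(score_with_prototypes(selected), 3))
```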
Modeling Data Center Temperature Profile in Terms of a First Order Polynomial RBF Network Trained by Particle Swarm Optimization

In this paper a polynomial radial basis function neural network is trained to model and predict the temperature profile, an energy proxy, of a highly complex data center located at the University of the Aegean, Greece. A number of input variables that directly quantify the rack's air temperature are identified. The corresponding data set is generated through an experimental monitoring system used over a two-week period. The network's structure encompasses three distinct levels. The first level involves a number of hidden nodes with Gaussian activation functions, while the second level generates first-order polynomial functions of the input variables. Finally, the third level aggregates the outputs of the above two levels and generates the network's output. The network is trained with the particle swarm optimization algorithm. For comparison, a typical radial basis function network and a feed-forward network were developed. The results indicate that the proposed network is very effective in predicting the server rack's air temperature, outperforming the other two networks.

Ioannis A. Troumbis, George E. Tsekouras, Christos Kalloniatis, Panagiotis Papachiou, Dias Haralambopoulos
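A minimal sketch of a PSO-trained RBF regressor (not the paper's three-level network; the swarm settings, the five hidden nodes and the toy data are illustrative assumptions): the centers, widths and linear weights of a Gaussian RBF model are encoded in one parameter vector and tuned by a basic particle swarm optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (100, 2))                  # e.g. two normalized sensor readings
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]          # toy "rack temperature" target

n_hidden = 5
dim = n_hidden * 2 + n_hidden + n_hidden + 1     # centers, widths, weights, bias

def rbf_predict(params, X):
    c = params[:10].reshape(n_hidden, 2)
    s = np.abs(params[10:15]) + 1e-3
    w, b = params[15:20], params[20]
    phi = np.exp(-np.sum((X[:, None, :] - c) ** 2, axis=2) / (2 * s ** 2))
    return phi @ w + b

def fitness(params):
    return np.mean((rbf_predict(params, X) - y) ** 2)

# Basic particle swarm optimization loop.
n_particles, inertia, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    fit = np.array([fitness(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()
print(round(pbest_fit.min(), 4))
```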
Incorporating Worker Similarity for Label Aggregation in Crowdsourcing

For quality control in crowdsourcing tasks, requesters usually assign a task to multiple workers to obtain redundant answers and then aggregate them to obtain a more reliable answer. Because of the presence of non-experts in the crowd, one of the problems in label aggregation is how to distinguish experts with higher ability from non-experts with lower ability and to strengthen the influence of the experts. Most existing label aggregation approaches tend to strengthen the workers who provide majority answers and regard them as highly able. In addition, we find that the similarity among worker labels can be effective for this issue, because two experts are more likely to reach consensus than two non-experts. We thus propose a novel probabilistic model which can incorporate the similarity information of workers. Experimental results on a number of real datasets show that our approach can outperform existing models, including a probabilistic model that does not incorporate the similarity. We also conduct an empirical study on the influence of worker ability, label sparsity and redundancy on the performance of label aggregation approaches, and provide a suggestion on the strategy for collecting labels in crowdsourcing.

Jiyi Li, Yukino Baba, Hisashi Kashima
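A minimal sketch of similarity-aware label aggregation (a much simpler heuristic than the paper's probabilistic model; the toy label matrix is an illustrative assumption): each worker is weighted by how often they agree with other workers on shared items, and item labels are obtained by a weighted majority vote.

```python
import numpy as np

# labels[i, j] = label that worker j gave to item i; -1 means "not answered".
labels = np.array([
    [1, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, -1, 1],
])
n_items, n_workers = labels.shape

# Pairwise agreement between workers over the items both of them labeled.
agreement = np.zeros((n_workers, n_workers))
for a in range(n_workers):
    for b in range(n_workers):
        both = (labels[:, a] >= 0) & (labels[:, b] >= 0)
        if a != b and both.any():
            agreement[a, b] = (labels[both, a] == labels[both, b]).mean()

weights = agreement.sum(axis=1) / (n_workers - 1)   # similarity-based ability score

aggregated = []
for i in range(n_items):
    answered = labels[i] >= 0
    votes = np.bincount(labels[i, answered], weights=weights[answered], minlength=2)
    aggregated.append(int(votes.argmax()))
print(aggregated)
```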
NoSync: Particle Swarm Inspired Distributed DNN Training

Training deep neural networks on big datasets remains a computational challenge: it can take hundreds of hours and requires distributed computing systems to accelerate. Common distributed data-parallel approaches share a single model across multiple workers, train on different batches, aggregate gradients, and redistribute the new model. In this work, we propose NoSync, a particle swarm optimization inspired alternative in which each worker trains a separate model and pressure is applied to force the models to converge. NoSync explores a greater portion of the parameter space and provides resilience to overfitting. It consistently offers higher accuracy than single workers, offers a linear speedup for smaller clusters, and is orthogonal to existing data-parallel approaches.

Mihailo Isakov, Michel A. Kinsy
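A minimal sketch of the 'separate models plus convergence pressure' idea (not the NoSync implementation; the model, the data and the pull coefficient are illustrative assumptions): each worker trains its own copy of the model on its own batches, and every few steps all workers are nudged toward the currently best-performing worker instead of averaging gradients.

```python
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

torch.manual_seed(0)
X, y = torch.randn(512, 10), torch.randn(512, 1)
workers = [make_model() for _ in range(4)]
opts = [torch.optim.SGD(m.parameters(), lr=0.05) for m in workers]

def loss_of(model):
    return nn.functional.mse_loss(model(X), y)

for step in range(200):
    for model, opt in zip(workers, opts):
        batch = torch.randint(0, 512, (64,))          # each worker draws its own batch
        opt.zero_grad()
        nn.functional.mse_loss(model(X[batch]), y[batch]).backward()
        opt.step()
    if step % 10 == 0:                                # periodic "pressure" phase
        best = min(workers, key=lambda m: loss_of(m).item())
        with torch.no_grad():
            for model in workers:
                for p, p_best in zip(model.parameters(), best.parameters()):
                    p.add_(0.1 * (p_best - p))        # pull toward the best worker
print(min(loss_of(m).item() for m in workers))
```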
Backmatter
Metadata
Title
Artificial Neural Networks and Machine Learning – ICANN 2018
Edited by
Věra Kůrková
Prof. Yannis Manolopoulos
Barbara Hammer
Lazaros Iliadis
Ilias Maglogiannis
Copyright Year
2018
Electronic ISBN
978-3-030-01421-6
Print ISBN
978-3-030-01420-9
DOI
https://doi.org/10.1007/978-3-030-01421-6