
2020 | Book

Machine Learning for Networking

Second IFIP TC 6 International Conference, MLN 2019, Paris, France, December 3–5, 2019, Revised Selected Papers


About this book

This book constitutes the thoroughly refereed proceedings of the Second International Conference on Machine Learning for Networking, MLN 2019, held in Paris, France, in December 2019. The 26 revised full papers included in the volume were carefully reviewed and selected from 75 submissions. They present and discuss new trends in deep and reinforcement learning, pattern recognition and classification for networks, machine learning for network slicing optimization, 5G systems, user behavior prediction, multimedia, IoT, security and protection, optimization and new innovative machine learning methods, performance analysis of machine learning algorithms, experimental evaluations of machine learning, data mining in heterogeneous networks, distributed and decentralized machine learning algorithms, intelligent cloud-supported communications, resource allocation, energy-aware communications, software defined networks, cooperative networks, positioning and navigation systems, wireless communications, wireless sensor networks, and underwater sensor networks.

Table of contents

Frontmatter
Network Anomaly Detection Using Federated Deep Autoencoding Gaussian Mixture Model

The deep autoencoding Gaussian mixture model (DAGMM) jointly optimizes dimensionality reduction and density estimation for unsupervised anomaly detection tasks. However, the absence of a large amount of training data greatly compromises DAGMM's performance, and rising concerns for privacy make the situation worse. Federated learning addresses this by aggregating only the parameters of models trained locally on clients, thereby drawing knowledge from more private data while properly protecting privacy. Inspired by this, the paper presents a federated deep autoencoding Gaussian mixture model (FDAGMM) to improve the disappointing performance of DAGMM caused by limited data. The superiority of the proposed FDAGMM is demonstrated empirically with extensive experiments.
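
The aggregation step at the heart of federated learning can be sketched in a few lines. Below is a minimal federated-averaging (FedAvg-style) sketch in Python, not the authors' implementation; the client parameter arrays and dataset sizes are placeholders:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: weight each client's parameters by its local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Three hypothetical clients, each holding two parameter arrays (e.g. DAGMM layers).
clients = [[np.random.randn(4, 4), np.random.randn(4)] for _ in range(3)]
sizes = [1000, 250, 80]          # unbalanced local datasets
global_weights = federated_average(clients, sizes)
```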

Yang Chen, Junzhe Zhang, Chai Kiat Yeo
Towards a Hierarchical Deep Learning Approach for Intrusion Detection

Nowadays, it is almost impossible to imagine our daily life without the Internet. This strong dependence requires an effective and rigorous consideration of all the risks related to computer attacks. However, traditional methods of protection are not always effective and are usually very expensive in processing resources. This paper therefore presents a new hierarchical method based on deep learning algorithms for intrusion detection. The method has proven more effective than traditional implementations on four public datasets, and meets all the other requirements of an efficient intrusion detection system.

François Alin, Amine Chemchem, Florent Nolot, Olivier Flauzac, Michaël Krajecki
Network Traffic Classification Using Machine Learning for Software Defined Networks

Recent developments in industry automation and connected devices have created huge demand for network resources, and traditional networks are becoming less effective at handling the large volume of traffic these technologies generate. At the same time, software-defined networking (SDN) has introduced a programmable and scalable networking solution that enables machine learning (ML) applications to automate networks. Issues with traditional methods for classifying network traffic and allocating resources can be solved by this SDN solution: data gathered by the SDN controller allows analytics methods to apply machine learning models that customize network management. This paper focuses on analyzing network data, implementing a network traffic classification solution using machine learning, and integrating the model into a software-defined networking platform.

Menuka Perera Jayasuriya Kuranage, Kandaraj Piamrat, Salima Hamma
A Comprehensive Analysis of Accuracies of Machine Learning Algorithms for Network Intrusion Detection

Intrusion and anomaly detection are particularly important at a time of increased vulnerability in computer networks and communication. This research therefore aims to detect network intrusions with the highest accuracy and in the shortest time. To achieve this, nine supervised machine learning algorithms were first applied to the UNSW-NB15 dataset for network anomaly detection, and different attacks were investigated along with mitigation techniques that help determine the attack types. Once detection was done, the feature set was reduced according to existing research work to increase the speed of the model without compromising accuracy. Furthermore, seven supervised machine learning algorithms were applied to the newly released BoT-IoT dataset with around three million network flows. The results show that Random Forest is the best in terms of accuracy (97.9121%) and Naïve Bayes the fastest algorithm, at 0.69 s, for the UNSW-NB15 dataset. C4.5 is the most accurate (87.66%), with all features considered, at identifying the types of anomalies. For BoT-IoT, six of the seven algorithms have a close to 100% detection rate, the exception being Naïve Bayes.
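
A comparison of this kind reduces to fitting each candidate model and recording accuracy and training time. A minimal scikit-learn sketch follows; the synthetic data merely stands in for the UNSW-NB15 or BoT-IoT features, and the three classifiers shown are only a subset of those evaluated:

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic flows stand in for the real dataset; load the UNSW-NB15 CSVs in practice.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=10)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("RandomForest", RandomForestClassifier()),
                  ("NaiveBayes", GaussianNB()),
                  ("C4.5-like tree", DecisionTreeClassifier(criterion="entropy"))]:
    start = time.time()
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    print(f"{name}: accuracy={acc:.4f}, train time={time.time() - start:.2f}s")
```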

Anurag Das, Samuel A. Ajila, Chung-Horng Lung
Q-routing: From the Algorithm to the Routing Protocol

Routing is a complex task in computer networks, mainly assigned to layer 3 of the Open Systems Interconnection (OSI) model. Routing protocols assisted by reinforcement learning were first created in the 1990s. To illustrate their performance, most of the literature uses centralized algorithms and "home-made" simulators, which makes (i) transposition to real networks and (ii) reproducibility difficult. The goal of this work is to address these two points. In this paper, we propose a complete distributed protocol implementation: we deployed the Q-learning-based routing algorithm proposed by Boyan and Littman in 1994 on the network simulator QualNet. Twenty-five years later, we conclude that a more realistic implementation in a more realistic network environment does not always give better Quality of Service than the historical Bellman-Ford protocol. We provide all the materials needed to conduct reproducible research.
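
For reference, the Boyan-Littman Q-routing rule lets each node x maintain Q_x(y, d), its estimate of the time to deliver a packet to destination d via neighbour y, and refine it from observed delays. A minimal sketch of that update (variable names and the two-neighbour example are illustrative, not taken from the paper):

```python
# Two-neighbour example: node x can reach destination "d" via "y1" or "y2".
Q_x = {"y1": {"d": 10.0}, "y2": {"d": 12.0}}
Q_y1 = {"z": {"d": 6.0}}

def q_routing_update(Q_x, Q_y, y, d, q_delay, s_delay, eta=0.5):
    """Boyan-Littman Q-routing update at node x after forwarding a packet
    bound for destination d to neighbour y: move Q_x(y, d) toward the queue
    delay + transmission delay + y's own best remaining-time estimate."""
    t = min(Q_y[z][d] for z in Q_y)                  # y's best estimate
    Q_x[y][d] += eta * (q_delay + s_delay + t - Q_x[y][d])

q_routing_update(Q_x, Q_y1, "y1", "d", q_delay=2.0, s_delay=1.0)
```

Repeated over many packets, each node's table converges toward the actual delivery delays, which is what makes the scheme deployable as a distributed protocol.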

Alexis Bitaillou, Benoît Parrein, Guillaume Andrieux
Language Model Co-occurrence Linking for Interleaved Activity Discovery

As ubiquitous computer and sensor systems become abundant, the potential for automatic identification and tracking of human behaviours becomes all the more evident. Annotating complex human behaviour datasets to achieve ground truth for supervised training can, however, be extremely labour-intensive and error-prone. One possible solution to this problem is activity discovery: the identification of activities in an unlabelled dataset by means of an unsupervised algorithm. This paper presents a novel approach to activity discovery that utilises deep learning based language production models to construct a hierarchical, tree-like structure over a sequential vector of sensor events. Our approach differs from previous work in that it explicitly aims to deal with interleaving (switching back and forth between activities) in a principled manner, by utilising the long-term memory capabilities of a recurrent neural network cell. We present our approach and test it on a realistic dataset to evaluate its performance. Our results show the viability of the approach and that it shows promise for further investigation. We believe this is a useful direction to consider in accounting for the continually changing nature of behaviours.

Eoin Rogers, John D. Kelleher, Robert J. Ross
Achieving Proportional Fairness in WiFi Networks via Bandit Convex Optimization

In this paper we revisit proportional-fair channel allocation in IEEE 802.11 networks. Instead of following traditional approaches based on an explicit solution of the optimization problem or iterative solvers, we investigate the use of a bandit convex optimization algorithm. We propose an algorithm that is able to learn the optimal slot transmission probability only by monitoring the throughput of the network. We have evaluated this algorithm both using the true value of the function to optimize and with estimation errors coming from a network simulator. Through extensive experimental results, we illustrate the sensitivity of the algorithm to different learning parameters and noisy estimates. We believe this is a practical solution for improving the performance of wireless networks that does not require inferring network parameters.
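
In the bandit setting, only function values (here, measured throughput) are available, so the gradient must be estimated from perturbed probes. Below is a minimal one-point gradient-estimate sketch under assumed toy parameters; the proportional-fair objective at the end is a stand-in for real network feedback, and none of the constants come from the paper:

```python
import math
import random

def bandit_tune(throughput, p0=0.1, delta=0.02, step=2e-6, rounds=5000,
                lo=1e-3, hi=0.5):
    """One-point bandit gradient ascent on the slot transmission probability p.
    The only feedback is the scalar throughput measured at the probed point."""
    p = p0
    for _ in range(rounds):
        u = random.choice((-1.0, 1.0))              # random probe direction
        reward = throughput(min(max(p + delta * u, lo), hi))
        grad_est = (reward / delta) * u             # unbiased 1-D gradient estimate
        p = min(max(p + step * grad_est, lo), hi)   # projected ascent step
    return p

# Toy proportional-fair objective for n stations; its optimum is at p = 1/n.
n = 10
p_opt = bandit_tune(lambda p: n * math.log(max(p * (1 - p) ** (n - 1), 1e-12)))
```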

Golshan Famitafreshi, Cristina Cano
Denoising Adversarial Autoencoder for Obfuscated Traffic Detection and Recovery

Traffic classification is key to managing both QoS and security in the Internet of Things (IoT). However, new traffic obfuscation techniques have been developed to thwart classification. Traffic mutation is one such technique: it modifies a flow's statistical characteristics to mislead the traffic classifier. The same technique can also be used to hide normal traffic characteristics for the sake of privacy, but the concern is its use by attackers to bypass intrusion detection systems by modifying the attack traffic characteristics. In this paper, we propose an unsupervised Deep Learning (DL)-based model to detect mutated traffic, built on generative DL architectures, namely Autoencoders (AE) and Generative Adversarial Networks (GAN). The model consists of a denoising AE to de-anonymize the mutated traffic and a discriminator to detect it. The implementation results show that the traffic can be denoised under different mutation techniques with a reconstruction error of less than 0.1. In addition, the detection rate of fake traffic reaches 83.7%.
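
The two components can be sketched as a small denoiser trained to map mutated flow statistics back to the originals, plus a discriminator. The following PyTorch sketch is a minimal illustration under assumed feature dimensions and noise-based "mutation"; the adversarial training of the discriminator is omitted for brevity, and nothing here reproduces the authors' architecture:

```python
import torch
import torch.nn as nn

n_features = 20  # assumed number of per-flow statistical features

denoiser = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 8), nn.ReLU(),          # bottleneck
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, n_features),
)
discriminator = nn.Sequential(            # P(traffic is genuine); its GAN-style
    nn.Linear(n_features, 32), nn.ReLU(), # training loop is omitted here
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
clean = torch.randn(256, n_features)               # placeholder flow statistics
mutated = clean + 0.3 * torch.randn_like(clean)    # toy "mutation" as noise

for _ in range(100):                               # denoising training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(denoiser(mutated), clean)
    loss.backward()
    opt.step()
```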

Ola Salman, Imad H. Elhajj, Ayman Kayssi, Ali Chehab
Root Cause Analysis of Reduced Accessibility in 4G Networks

The increased programmability of communication networks makes them more autonomous and able to react quickly to user and network events. However, it is usually difficult to understand the root cause of network problems, which is needed for autonomous actuation to be provided in advance. This paper analyzes the probable root causes of reduced accessibility in 4G networks, taking into account the information of important Key Performance Indicators (KPIs) and considering their evolution in previous time-frames. The approach resorts to interpretable machine learning models to measure the importance of each KPI in the decrease of network accessibility in a posterior time-frame. The results show that the main root causes of reduced accessibility in the network are related to the number of failed handovers, the number of phone calls and text messages in the network, the overall download volume, and the availability of the cells. The main causes of reduced accessibility in each individual cell, however, are more related to the number of users in the cell and the download volume it produces. The results also show the number of PCA components required for a good prediction, as well as the best machine learning approach for this specific use case.
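
One common way to operationalize KPI importance is to regress the accessibility of a later time-frame on lagged KPIs and read off feature importances. The sketch below uses random-forest importances as a stand-in for the paper's interpretable models; the KPI names echo the abstract, but the data and coefficients are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
kpi_names = ["handover_failures", "calls_sms", "download_volume", "cell_availability"]
X = rng.normal(size=(1000, len(kpi_names)))     # KPIs in the previous time-frame
accessibility = 1.0 - 0.5 * X[:, 0] - 0.2 * X[:, 2] + 0.05 * rng.normal(size=1000)

model = RandomForestRegressor(random_state=0).fit(X, accessibility)
for name, imp in sorted(zip(kpi_names, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")                  # ranked KPI importance
```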

Diogo Ferreira, Carlos Senna, Paulo Salvador, Luís Cortesão, Cristina Pires, Rui Pedro, Susana Sargento
Space-Time Pattern Extraction in Alarm Logs for Network Diagnosis

The increasing size and complexity of telecommunication networks make troubleshooting and network management more and more critical. As analyzing a log is cumbersome and time-consuming, experts need tools that help them quickly pinpoint the root cause when a problem arises. A structure called the DIG-DAG, able to store chains of alarms in a compact manner from an input log, has recently been proposed. Unfortunately, for large logs this structure may be huge, and thus hardly readable for experts. To circumvent this problem, this paper proposes a framework for querying a DIG-DAG in order to extract patterns of interest, together with a full methodology for end-to-end analysis of a log.

Achille Salaün, Anne Bouillard, Marc-Olivier Buob
Machine Learning Methods for Connection RTT and Loss Rate Estimation Using MPI Measurements Under Random Losses

Scientific computations are expected to be increasingly distributed across wide-area networks, and the Message Passing Interface (MPI) has been shown to scale to support their communications over long distances. Application-level measurements of MPI operations reflect the connection Round-Trip Time (RTT) and loss rate, and machine learning methods have previously been developed to estimate them under deterministic periodic losses. In this paper, we consider more complex, random losses with uniform, Poisson and Gaussian distributions. We study five disparate machine learning methods, with linear and non-linear, smooth and non-smooth properties, to estimate RTT and loss rate over 10 Gbps connections with 0–366 ms RTT. The diversity and complexity of these estimators, combined with the randomness of losses and TCP's non-linear response, rule out the selection of a single best among them; instead, we fuse them to retain their design diversity. Overall, the results show that accurate estimates can be generated at low loss rates but become inaccurate at loss rates of 10% and higher, illustrating both their strengths and limitations.
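
Fusing diverse regressors can be as simple as taking a robust statistic of their individual predictions. A minimal sketch follows; the synthetic features and RTT targets are placeholders for the MPI measurements, the five estimator types are illustrative, and median fusion is just one reasonable combiner:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (500, 4))                 # placeholder MPI-derived features
y = 366 * X[:, 0] + 5 * rng.normal(size=500)    # toy RTT targets in ms

estimators = [LinearRegression(), SVR(), KNeighborsRegressor(),
              RandomForestRegressor(), GradientBoostingRegressor()]
preds = []
for est in estimators:                          # five disparate learners
    est.fit(X, y)
    preds.append(est.predict(X))

fused = np.median(np.vstack(preds), axis=0)     # simple median fusion
```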

Nageswara S. V. Rao, Neena Imam, Zhengchun Liu, Rajkumar Kettimuthu, Ian Foster
Algorithm Selection and Model Evaluation in Application Design Using Machine Learning

Machine learning has become a central part of our lives, as consumers, customers and, ideally, as researchers and practitioners. Whether we are applying predictive modeling techniques to research or business problems, we all have one thing in common: we want to make good predictions. Fitting a model to our training data is one thing, but how do we know that it generalizes well to unseen data? How do we know that it does not simply memorize the data we fed it and fail to make good predictions on future samples it has never seen before? And how do we select a suitable model in the first place; perhaps a different learning algorithm would be better suited to the problem at hand? The correct use of model evaluation, model selection, and algorithm selection techniques is vital in academic machine learning research as well as in many industrial settings. This article reviews different techniques that can be used for each of these three subtasks and discusses the main advantages and drawbacks of each method, with references to theoretical and empirical studies. Furthermore, recommendations are given to encourage best yet feasible practices in research and applications of machine learning. We use applications such as drowsiness detection, oil price prediction, and election result evaluation as examples to explain algorithm selection and model evaluation.
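
The distinction between model selection and model evaluation is often operationalized with nested cross-validation: an inner loop tunes each learner, an outer loop estimates its generalization. A minimal scikit-learn sketch on synthetic data (the candidate learners and grids are illustrative, not the article's):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
candidates = {
    "logreg": GridSearchCV(LogisticRegression(max_iter=1000),
                           {"C": [0.1, 1, 10]}, cv=3),   # inner: model selection
    "svm": GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3),
}
for name, search in candidates.items():       # algorithm selection across learners
    scores = cross_val_score(search, X, y, cv=5)          # outer: evaluation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```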

Srikanth Bethu, B. Sankara Babu, K. Madhavi, P. Gopala Krishna
GAMPAL: Anomaly Detection for Internet Backbone Traffic by Flow Prediction with LSTM-RNN

This paper proposes a general-purpose anomaly detection mechanism for Internet backbone traffic named GAMPAL (General-purpose Anomaly detection Mechanism using Path Aggregate without Labeled data). GAMPAL does not require labeled data to achieve general-purpose anomaly detection. For scalability to the number of entries in the BGP RIB (Routing Information Base), GAMPAL introduces path aggregates: the BGP RIB entries are classified into path aggregates, each identified by the first three AS numbers in the AS_PATH attribute. GAMPAL establishes a prediction model of traffic throughput based on past traffic throughput, adopting an LSTM-RNN (Long Short-Term Memory Recurrent Neural Network) model that focuses on the weekly periodicity of the Internet traffic pattern. The validity of GAMPAL is evaluated using real traffic information and the BGP RIB exported from the WIDE backbone network (AS2500), a nation-wide backbone network for research and educational organizations in Japan. GAMPAL successfully detects traffic increases due to events and DDoS attacks targeting a stub organization.
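
The prediction side of such a mechanism can be sketched as an LSTM trained on a sliding window of past throughput for one path aggregate, with anomalies flagged when observations deviate strongly from the forecast. The following PyTorch sketch uses a toy sine series in place of real throughput; the window length and network size are assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn

class ThroughputLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next sample

model = ThroughputLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
series = torch.sin(torch.linspace(0, 50, 1000)).unsqueeze(-1)  # toy periodic traffic
window = 48
X = torch.stack([series[i:i + window] for i in range(900)])
y = series[window:window + 900]
for _ in range(50):                       # fit the forecaster
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
# At run time, a large |observed - predicted| gap would flag an anomaly.
```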

Taku Wakui, Takao Kondo, Fumio Teraoka
Revealing User Behavior by Analyzing DNS Traffic

The Domain Name System (DNS) is today a fundamental part of how the Internet works. As the Internet has become part of human culture over recent decades, patterns of user behavior are present in network data, and consequently some of these patterns appear in DNS data as well. Using real data from the '.cl' ccTLD, this work seeks to detect those human patterns with machine learning techniques. As DNS traffic is described by a time series, particular and complex techniques have to be used to process the data and extract this information. The procedure we apply is divided into two stages: the first uses clustering to group DNS domains based on the similarity of their users' activity; the second compares the obtained groups using association rules. Finding human patterns in the data could be of high interest to researchers who analyze human behavior with regard to Internet usage. The procedure was able to detect some trends and patterns in the data, which are discussed along with proper evaluation measures for further comparison.

Martín Panza, Diego Madariaga, Javier Bustos-Jiménez
A New Approach to Determine the Optimal Number of Clusters Based on the Gap Statistic

Data clustering is one of the most important unsupervised classification methods. It aims at organizing objects into groups (or clusters) in such a way that members of the same cluster are similar in some way and members of different clusters are dissimilar. Among general clustering methods, k-means is arguably the most popular one, yet it still has some inherent weaknesses. One of the biggest challenges when using k-means is determining the optimal number of clusters, k. Although many approaches have been suggested in the literature, this is still considered an unsolved problem. In this study, we propose a new technique that improves the gap statistic approach for selecting k. It has been tested on different datasets, on which it yields superior results compared to the original gap statistic. We expect the new method to also work well with other clustering algorithms where the number k is required, because our approach, like the gap statistic, can work with any clustering method.
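
For context, the original gap statistic of Tibshirani et al. compares the within-cluster dispersion W_k on the data against its expectation under a uniform reference distribution: Gap(k) = E*[log W_k] - log W_k. A minimal sketch of that baseline follows (choosing k by the largest gap is a common simplification of the full standard-error rule; this illustrates the original method, not the paper's improvement):

```python
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k_max=8, B=10, seed=0):
    """Tibshirani's gap statistic: compare log(W_k) on the data with its
    expectation under a uniform reference over the same bounding box."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    gaps = []
    for k in range(1, k_max + 1):
        w = KMeans(n_clusters=k, n_init=10).fit(X).inertia_
        ref = [KMeans(n_clusters=k, n_init=10)
               .fit(rng.uniform(lo, hi, X.shape)).inertia_ for _ in range(B)]
        gaps.append(np.mean(np.log(ref)) - np.log(w))
    return np.argmax(gaps) + 1, gaps

# Three well-separated blobs: the estimate should come out as k = 3.
X = np.vstack([np.random.default_rng(1).normal(m, 0.3, (50, 2)) for m in (0, 3, 6)])
k, _ = gap_statistic(X)
```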

Jaekyung Yang, Jong-Yeong Lee, Myoungjin Choi, Yeongin Joo
MLP4NIDS: An Efficient MLP-Based Network Intrusion Detection for CICIDS2017 Dataset

More and more embedded devices are connected to the Internet and are therefore potential victims of intrusion. While machine learning algorithms have proven to be robust techniques, this has mainly been achieved with traditional approaches, neural networks giving worse results. In this paper, we propose the use of a multi-layer perceptron neural network for intrusion detection and provide a detailed description of our methodology. We detail all the steps needed to achieve better performance than traditional machine learning techniques, with an intrusion detection accuracy above 99% and a false positive rate kept below 0.7%. Results of previous works are analyzed and compared with the performance of the proposed solution.
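
The core of such a pipeline is feature scaling followed by an MLP. A minimal scikit-learn sketch is shown below; the synthetic data stands in for the preprocessed CICIDS2017 flows, and the layer sizes are illustrative rather than the paper's configuration:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced synthetic flows stand in for benign vs. attack traffic.
X, y = make_classification(n_samples=10000, n_features=70, n_informative=20,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Scaling is essential for MLP convergence; sizes here are illustrative only.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```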

Arnaud Rosay, Florent Carlier, Pascal Leroux
Random Forests with a Steepened Gini-Index Split Function and Feature Coherence Injection

Although Random Forests (RFs) are an effective and scalable ensemble machine learning approach, they are highly dependent on the discriminative ability of the available individual features. Since most data mining problems occur in the context of pre-existing data, there is little room to choose the original input features. Individual RF decision trees follow a greedy algorithm that iteratively selects the feature with the highest potential for achieving subsample purity; common heuristics for ranking this potential include the Gini-index and information gain metrics. This study seeks to improve the effectiveness of RFs through an adapted Gini-index splitting function and a feature engineering technique. Using a structured framework for comparative evaluation of RFs, the study demonstrates that the effectiveness of the proposed methods is comparable with that of conventional Gini-index based RFs. Improvements in the minimum accuracy recorded over some UCI data sets demonstrate the potential of a hybrid set of splitting functions.
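
For reference, the standard Gini impurity of a node and one hypothetical way to "steepen" it are sketched below; the power transform is an illustrative assumption, not necessarily the paper's split function:

```python
def gini(labels):
    """Standard Gini impurity of a list of class labels."""
    n = len(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    return 1.0 - sum((m / n) ** 2 for m in counts.values())

def steepened_gini(labels, power=2):
    """Hypothetical 'steepened' variant: raising the impurity to a power > 1
    penalizes impure splits more sharply (the paper's exact function may differ)."""
    return gini(labels) ** power

# Score a candidate split as the size-weighted impurity of its children.
left, right = [0, 0, 1], [1, 1, 1, 0]
n = len(left) + len(right)
split_score = (len(left) / n) * steepened_gini(left) \
            + (len(right) / n) * steepened_gini(right)
```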

Mandlenkosi Victor Gwetu, Jules-Raymond Tapamo, Serestina Viriri
Emotion-Based Adaptive Learning Systems

From primary school up to the professional academic level, the modus operandi of the classical education system forces us to follow a series of predefined steps to climb the academic ladder. Traditionally, these predefined steps force students to progress from beginner to advanced level and then specialize. The main problem is that teaching styles and content delivery are not tailored to individual learning styles and student personalities. The traditional education system is moving towards adaptive learning, where students are not bound to one predefined set of contents; the traditional "one size fits all" approach is no longer valid. Each student has a curriculum based on their unique needs and personality. Adaptive learning may be defined as the process of creating a unique learning experience for each learner based upon the learner's personality, interests and performance. This research presents a novel approach to adaptive learning: an emotion-based adaptive learning system where the emotion and psychological traits of the learner are considered in order to provide the learning materials that are most appropriate at that particular instant. It demonstrates an intelligent-agent-based expert system that uses artificial intelligence and emotion detection to measure the user's learning rate and find an optimum learning scheme.

Sai Prithvisingh Taurah, Jeshta Bhoyedhur, Roopesh Kevin Sungkur
Machine Learning Methods for Anomaly Detection in IoT Networks, with Illustrations

IoT devices were the target of 100 million attacks in the first half of 2019 [1], and according to [2] there will be more than 64 billion Internet of Things (IoT) devices by 2025. It is thus crucial to secure IoT networks and devices, which include critical devices such as medical kits or autonomous cars. The problem is complicated by the wide range of possible attacks and their evolution, and by the limited computing and storage resources available on devices. We begin by introducing the context and a survey of Intrusion Detection Systems (IDS) for IoT networks, with a state of the art. In order to test and compare solutions, we consider available public datasets and select the CIDDS-001 dataset. We implement and test several machine learning algorithms and show that it is relatively easy to obtain reproducible results [20] at the state-of-the-art level. Finally, we discuss embedding such algorithms in the IoT context and point out the possible interest of very simple rules.

Vassia Bonandrini, Jean-François Bercher, Nawel Zangar
DeepRoute: Herding Elephant and Mice Flows with Reinforcement Learning

Wide-area networks are built with enough resilience and flexibility to offer many paths between multiple pairs of end-hosts. To prevent congestion, current practice involves numerous tweaks to routing tables to optimize path computation, such as diverting flows to alternate paths or load balancing. However, this process is slow and costly, and it requires difficult online decision-making in the face of flow arrival rates, workload, and the current network environment. Inspired by recent advances in using AI to manage resources, we present DeepRoute, a model-less reinforcement learning approach that translates the path computation problem into a learning problem. Learning from the network environment, DeepRoute acquires strategies for managing arriving elephant and mice flows that improve average path utilization in the network. Compared to strategies such as prioritizing certain flows and making random decisions, DeepRoute is shown to improve average network path utilization to 30% and potentially reduce congestion across the whole network. The paper presents simulation results and shows how DeepRoute can be demonstrated in a Mininet implementation.
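
One way to cast flow placement as a learning problem is tabular Q-learning with path choice as the action and a balance-sensitive utilization measure as the reward. The toy sketch below is an interpretation under assumed state, action and reward definitions; the paper's actual formulation may differ:

```python
import random
from collections import defaultdict

# State = (flow class, currently least-loaded path), action = chosen path,
# reward = how evenly load is spread (1.0 when perfectly balanced).
PATHS, ALPHA, GAMMA, EPS = 3, 0.1, 0.9, 0.1
Q = defaultdict(lambda: [0.0] * PATHS)
load = [0.0] * PATHS

def place_flow(flow_class, size):
    state = (flow_class, load.index(min(load)))
    a = (random.randrange(PATHS) if random.random() < EPS
         else max(range(PATHS), key=lambda i: Q[state][i]))
    load[a] += size
    reward = sum(load) / (PATHS * max(load))
    next_state = (flow_class, load.index(min(load)))
    Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])

for _ in range(1000):                    # a mix of 20% elephants, 80% mice
    elephant = random.random() < 0.2
    place_flow("elephant" if elephant else "mouse", 10.0 if elephant else 0.1)
```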

Mariam Kiran, Bashir Mohammed, Nandini Krishnaswamy
Arguments Against Using the 1998 DARPA Dataset for Cloud IDS Design and Evaluation and Some Alternative

Due to the lack of adequate public datasets, the proponents of many existing cloud intrusion detection systems (IDS) have relied on the DARPA dataset to design and evaluate their models. In this paper, we show empirically that the DARPA dataset fails to exhibit important statistical characteristics of real-world cloud data center traffic and is therefore inadequate for evaluating cloud IDS. As an alternative, we present a new public dataset collected through a cooperation between our lab and a non-profit cloud service provider, which contains benign data and a wide variety of attack data. We also present a new hypervisor-based cloud IDS using an instance-oriented feature model and supervised machine learning techniques, investigating three different classifiers: Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM). Experimental evaluation on a diversified dataset yields a detection rate of 92.08% and a false positive rate of 1.49% for Random Forest, the best performing of the three classifiers.

Onyekachi Nwamuo, Paulo Magella de Faria Quinan, Issa Traore, Isaac Woungang, Abdulaziz Aldribi
Estimation of the Hidden Message Length in Steganography: A Deep Learning Approach

Steganography is a science that hides secret data inside multimedia supports such as image, audio and video files to ensure secure communication between the two ends of a channel. Steganalysis is the discipline that detects the presence of data hidden by a steganographic algorithm. There are two types of steganalysis: targeted and universal. In targeted steganalysis, the steganographic algorithm used to hide the data is known; in universal steganalysis, detection does not depend on any specific embedding algorithm. In this paper, we focus on universal steganalysis of images in a database with a possible cover-source mismatch problem. It is shown that combining unsupervised and supervised machine learning algorithms improves classifier performance in universal steganalysis by reducing the cover-source mismatch problem. In the unsupervised step, the k-means algorithm is generally used to group similar images, but when the number of features extracted from each image is very large it becomes difficult to run k-means properly. We propose, in that case, to use deep learning with a Convolutional Neural Network (CNN) to group similar images first, and then to implement a Multilayer Perceptron (MLP) neural network to estimate the hidden message length in each group of images. The first step of this approach mitigates the cover-source mismatch problem; reducing this issue boosts the performance of the classifiers in the second step, which estimates the hidden message length.
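
The two-stage idea (group images by cover source, then regress the payload per group) can be illustrated compactly. In the sketch below, k-means stands in for the CNN-based grouping, and the features and payload rates are random placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
features = rng.normal(size=(600, 50))      # placeholder steganalysis features
msg_len = rng.uniform(0, 1, 600)           # placeholder embedded payload rates

# Stage 1: group images by cover source (k-means here, a CNN in the paper).
groups = KMeans(n_clusters=3, n_init=10).fit_predict(features)

# Stage 2: one message-length regressor per group reduces source mismatch.
estimators = {}
for g in np.unique(groups):
    mask = groups == g
    estimators[g] = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)
    estimators[g].fit(features[mask], msg_len[mask])
```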

François Kasséné Gomis, Thierry Bouwmans, Mamadou Samba Camara, Idy Diop
An Adaptive Deep Learning Algorithm Based Autoencoder for Interference Channels

The deep learning (DL) based autoencoder (AE) has recently been proposed as a promising, potentially disruptive Physical Layer (PHY) design for beyond-5G communication systems. Compared to a traditional communication system with a multiple-block structure, the DL-based AE provides a new PHY paradigm with a purely data-driven, end-to-end learning based solution. However, significant challenges must be overcome before this approach becomes a serious contender for practical beyond-5G systems; one of them is the robustness of the AE under interference channels. In this paper, we first evaluate the performance and robustness of an AE in the presence of an interference channel. Our results show that the AE performs well under weak and moderate interference conditions, while its performance degrades substantially under strong and very strong interference. We further propose a novel online adaptive deep learning (ADL) algorithm to tackle this performance issue, where the level of interference can be predicted in real time for the decoding process. The performance of the proposed algorithm for different interference scenarios is studied and compared to an existing system using a conventional DL-assisted AE trained offline. Our results demonstrate the robustness of the proposed ADL-assisted AE over the entire range of interference levels, while the existing AE fails in the presence of strong and very strong interference. The work proposed in this paper is an important step towards enabling AEs for practical 5G-and-beyond communication systems with dynamic and heterogeneous interference.

Dehao Wu, Maziar Nekovee, Yue Wang
A Learning Approach for Road Traffic Optimization in Urban Environments

In many urban areas where drivers suffer from heavy road traffic, conventional traffic management methods have become inefficient. One alternative is to let road-side units or vehicles learn how to calculate the optimal path based on the traffic situation. This work aims to provide the optimal path, in terms of travel time, for vehicles seeking to reach their destination in the least possible time while avoiding road traffic congestion. We apply a reinforcement learning technique, in particular Q-learning, to learn the best action to take in different situations, where the transit delay from one state to another is used to determine the rewards. The simulation results confirm that the proposed Q-learning approach outperforms an existing greedy algorithm and presents better performance.
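
A compact illustration of this reward design: states are intersections, actions are outgoing road segments, and the reward is the negative transit delay, so maximizing return minimizes travel time. The four-node network and all constants below are invented for the example:

```python
import random

delay = {("A", "B"): 4, ("A", "C"): 2, ("B", "D"): 1, ("C", "D"): 5}
neighbors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
Q = {(s, a): 0.0 for s in neighbors for a in neighbors[s]}
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

for _ in range(2000):                      # episodes from origin A to destination D
    s = "A"
    while s != "D":
        acts = neighbors[s]
        a = (random.choice(acts) if random.random() < EPS
             else max(acts, key=lambda n: Q[(s, n)]))
        r = -delay[(s, a)]                 # transit delay determines the reward
        future = max((Q[(a, n)] for n in neighbors[a]), default=0.0)
        Q[(s, a)] += ALPHA * (r + GAMMA * future - Q[(s, a)])
        s = a
# Learned best first hop from A is "B" (total delay 5) rather than "C" (7).
```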

Ahmed Mejdoubi, Ouadoudi Zytoune, Hacène Fouchal, Mohamed Ouadou
CSI Based Indoor Localization Using Ensemble Neural Networks

Indoor localization has attracted much attention due to its many possible applications, e.g. autonomous driving, the Internet of Things (IoT), and routing. The Received Signal Strength Indicator (RSSI) has been used extensively for localization, but due to its temporal instability the focus has shifted towards Channel State Information (CSI), also known as the channel response. In this paper, we propose a deep learning solution to the indoor localization problem using the CSI of an 8×2 Multiple Input Multiple Output (MIMO) antenna. The variation of the magnitude component of the CSI is chosen as the input to a Multi-Layer Perceptron (MLP) neural network, and data augmentation is used to improve the learning process. Finally, various MLP neural networks are constructed using different portions of the training set and different hyperparameters, and an ensemble neural network technique processes the predictions of the MLPs to enhance the position estimate. Our method is compared with two other deep learning solutions, one using a Convolutional Neural Network (CNN) and the other an MLP. The proposed method yields higher accuracy than its counterparts, achieving a mean square error of 3.1 cm.
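
The ensemble step amounts to training several MLPs on different portions of the data and combining their position outputs. A minimal scikit-learn sketch with averaging as the combiner follows; the CSI magnitudes, position labels, and the number of ensemble members are placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
csi_mag = rng.normal(size=(800, 16))       # placeholder 8x2 MIMO CSI magnitudes
position = rng.uniform(0, 10, (800, 2))    # placeholder (x, y) labels in metres

ensemble = []
for seed in range(5):
    idx = rng.choice(800, size=600, replace=True)   # a different training portion
    net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500,
                       random_state=seed)           # and different initialization
    net.fit(csi_mag[idx], position[idx])
    ensemble.append(net)

query = csi_mag[:1]
estimate = np.mean([net.predict(query) for net in ensemble], axis=0)
```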

Abdallah Sobehy, Éric Renault, Paul Mühlethaler
Bayesian Classifiers in Intrusion Detection Systems

Unlike misuse-based intrusion detection systems, anomaly-based ones do not depend on database updates to identify computer attacks: they generate a knowledge pattern from which usual and unusual traffic can be distinguished. Within computer networks, different traffic classification techniques have been implemented in anomaly-based intrusion detection systems, aiming to improve the measures that assess classifier performance and to reduce computational cost. In this research work, a comparative analysis is carried out of the results obtained with different feature selection techniques, namely Info.Gain, Gain Ratio and Relief, combined with Bayesian classifiers (Naïve Bayes and Bayesian Networks). An accuracy of 97.6% was obtained with 13 features. Likewise, through load-balancing methods together with attribute normalization and selection, it was possible to reduce the number of features used in the intrusion detection classification process while also lowering the computational cost.
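
The pipeline described (rank features, keep a small subset, classify with a Bayesian model) maps directly onto scikit-learn primitives. In the sketch below, mutual information plays the role of an information-gain-style ranking and the synthetic data stands in for the IDS traffic; k=13 mirrors the feature count reported above:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=3000, n_features=40, n_informative=15,
                           random_state=0)
# Keep the 13 most informative features, then apply Naïve Bayes.
model = make_pipeline(SelectKBest(mutual_info_classif, k=13), GaussianNB())
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```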

Mardini-Bovea Johan, De-La-Hoz-Franco Emiro, Molina-Estren Diego, Paola Ariza-Colpas, Ortíz Andrés, Ortega Julio, César A. R. Cárdenas, Carlos Collazos-Morales
A Novel Approach Towards Analysis of Attacker Behavior in DDoS Attacks

Traditionally, research in network security has largely focused on intrusion detection, on the use of machine learning techniques to identify malicious agents, and on methods for protecting ourselves from such attacks. In this paper, we use the same techniques to analyze the profile of the attacker in the case of a DDoS attack on a distributed honeypot.

Himanshu Gupta, Tanmay Girish Kulkarni, Lov Kumar, Neti Lalita Bhanu Murthy
Jason-RS, A Collaboration Between Agents and an IoT Platform

In this article we start from the observation that REST services are the most widely used tools for interoperability and orchestration in the Internet of Things (IoT). But REST does not make it possible to inject artificial intelligence into connected objects, i.e. it cannot provide autonomy and decision-making by the objects themselves. To give a connected object intelligence, one can use a Belief-Desire-Intention (BDI) agent, an intelligent agent that adopts human-like reasoning, such as Jason AgentSpeak. But Jason AgentSpeak does not guarantee orchestration or choreography between connected objects; platforms exist for service orchestration and choreography in IoT, but the interconnection with artificial intelligence still needs to be built. In this article, we propose a new approach called Jason-RS. It results from pairing a Jason BDI agent with web service technologies to expose the agent's capacities as a service; Jason-RS runs on Java SE and does not need any middleware. The proposed architecture creates the link between artificial intelligence and service choreography in order to reduce human intervention in the choreography. To validate the approach, we interconnected the BeC3 IoT platform and the REST agent (Jason-RS). The decision-making faculty offered by Jason-RS is derived from the information sent by the objects through the different REST methods (GET, POST, PUT, and DELETE) that Jason-RS offers; the objects thus feed inter-agent collaborations and decision-making inside the agent. Finally, we show that Jason-RS allows the Web of Objects to power complex systems, such as an artificial intelligence responsible for processing data. This performance is promising.

Hantanirina Felixie Rafalimanana, Jean Luc Razafindramintsa, Sylvain Cherrier, Thomas Mahatody, Laurent George, Victor Manantsoa
Scream to Survive(S2S): Intelligent System to Life-Saving in Disasters Relief

Disasters are becoming more and more common around the world, making technology important for saving people's lives as much as possible. One of the most notable advances of recent years is the use of AI in disaster relief. Researchers have proposed works based on new technologies (IoT, Cloud Computing, Blockchain, etc.) and AI concepts (Machine Learning, Natural Language Processing, etc.), but these concepts are difficult to exploit in low and middle socio-demographic index (SDI) countries, where most disasters happen. In this paper we propose the S2S intelligent system, based on voice recognition, for saving lives in disaster relief. A disaster victim is generally unable to reach their smartphone and ask for help; with this system, saying "help" is enough to automatically send alerts to the nearest Emergency Operation Services (EOS). S2S is composed of two parts: an intelligent application embedded on citizens' and victims' smartphones, and the S2S system for the Emergency Operation Services.

Nardjes Bouchemal, Aissa Serrar, Yehya Bouzeraa, Naila Bouchmemal
Association Rules Algorithms for Data Mining Process Based on Multi Agent System

In this paper, we present a collaborative multi-agent based system for data mining. We use two data mining functions, clustering of variables to build homogeneous groups of attributes and association rules inside each of these groups, together with a multi-agent approach that integrates both techniques. For association rule extraction, we use both the Apriori algorithm and a genetic algorithm. The main goal of this paper is to evaluate the association rules obtained by running the Apriori and genetic algorithms on quantitative datasets in a multi-agent environment.
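
For readers unfamiliar with Apriori, its essence is counting itemset support and keeping rules whose support and confidence clear fixed thresholds. Below is a minimal pair-rule version in Python, restricted to two-item rules for brevity; the example baskets are invented:

```python
from itertools import combinations

def apriori_pairs(transactions, min_support=0.3, min_confidence=0.6):
    """Minimal Apriori-style miner producing pair rules (A -> B) only."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    support = {frozenset([i]): sum(i in t for t in transactions) / n
               for i in items}
    rules = []
    for a, b in combinations(items, 2):
        pair_sup = sum({a, b} <= t for t in transactions) / n
        if pair_sup < min_support:
            continue                      # prune infrequent itemsets
        for x, y in ((a, b), (b, a)):     # test the rule in both directions
            conf = pair_sup / support[frozenset([x])]
            if conf >= min_confidence:
                rules.append((x, y, pair_sup, conf))
    return rules

baskets = [{"tcp", "http", "port80"}, {"tcp", "port80"}, {"udp", "dns"},
           {"tcp", "http", "port80"}, {"tcp", "http"}]
print(apriori_pairs(baskets))
```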

Imane Belabed, Mohammed Talibi Alaoui, Jaara El Miloud, Abdelmajid Belabed
Internet of Things: Security Between Challenges and Attacks

In recent years, fast developments in hardware, software, networking and communication technologies have facilitated the emergence of many technologies, such as the Internet of Things (IoT). Measuring and collecting data from the physical world and sending it to the digital world is the basis of this technology; the transmitted data are stored, processed and then possibly used to act upon the physical world. IoT adds intelligence and autonomy to many domains (e.g. health care, smart transportation and industrial monitoring), making human life more comfortable and simple. However, like all emerging technologies, IoT suffers from several security challenges and issues, especially since most IoT devices and sensors are resource-constrained. As security issues and attacks can endanger systems and even threaten human life, this paper addresses these problems. We provide an overview of IoT technology and present various security issues that target the perception and network levels, discussing how each layer can be damaged by malicious activity. Most recent papers use the old three-layer architecture to present security problems; this paper instead uses one of the new reference architectures to study security threats and attacks.

Benali Cherif, Zaidi Sahnoun, Maamri Ramdane, Bouchemal Nardjes
Socially and Biologically Inspired Computing for Self-organizing Communications Networks

The design and development of future communications networks call for a careful examination of biological and social systems. New technological developments such as self-driving cars, wireless sensor networks, drone swarms, the Internet of Things, Big Data, and Blockchain are promoting an integration process that will bring all these technologies together in a large-scale heterogeneous network. Most of the challenges related to these developments cannot be faced using traditional approaches and require exploring novel paradigms for building computational mechanisms that deal with the emergent complexity of the new applications. In this article, we show that it is possible to use biologically and socially inspired computing for designing and implementing self-organizing communication systems. We argue that an abstract analysis of biological and social phenomena can be used to develop computational models that provide a suitable conceptual framework for building new networking technologies: biologically inspired computing for achieving efficient and scalable networking under uncertain environments, and socially inspired computing for increasing a system's capacity to solve problems through collective actions. We aim to enhance the state of the art of these approaches and encourage other researchers to use these models in their future work.

Juan P. Ospina, Joaquín F. Sánchez, Jorge E. Ortiz, Carlos Collazos-Morales, Paola Ariza-Colpas
Backmatter
Metadata
Title
Machine Learning for Networking
edited by
Selma Boumerdassi
Éric Renault
Paul Mühlethaler
Copyright year
2020
Electronic ISBN
978-3-030-45778-5
Print ISBN
978-3-030-45777-8
DOI
https://doi.org/10.1007/978-3-030-45778-5