
2021 | Book

Advances in Computational Collective Intelligence

13th International Conference, ICCCI 2021, Kallithea, Rhodes, Greece, September 29 – October 1, 2021, Proceedings

Editors: Krystian Wojtkiewicz, Prof. Dr. Jan Treur, Dr. Elias Pimenidis, Marcin Maleszka

Publisher: Springer International Publishing

Book series: Communications in Computer and Information Science


About this book

This book constitutes the refereed proceedings of the 13th International Conference on Computational Collective Intelligence, ICCCI 2021, held in Kallithea, Rhodes, Greece, from September 29 to October 1, 2021. Due to the COVID-19 pandemic, the conference was held online.
The 44 full papers and 14 short papers were thoroughly reviewed and selected from 231 submissions. The papers are organized according to the following topical sections: ​​social networks and recommender systems; collective decision-making; computer vision techniques; innovations in intelligent systems; cybersecurity intelligent methods; data mining and machine learning; machine learning in real-world data; Internet of Things and computational technologies for collective intelligence; smart industry and management systems; low resource languages processing; computational intelligence for multimedia understanding.

Table of Contents

Frontmatter

Social Networks and Recommender Systems

Frontmatter
Identifying Key Actors in Organizational Social Network Based on E-Mail Communication

Nowadays many diverse systems in different fields can be described as complex networks, and they are the focus of interest in many disciplines such as politics, marketing, and social systems. Using different network analysis tools may provide many interesting observations about the structure of the network, its dynamics over time, and the role of selected nodes. The paper focuses on an organizational social network based on email communication between employees within an organization. Such a network consists of a set of vertices, referring to the persons employed in the organization, and a set of edges, defining the information flow between these persons over the email communication channel. The main contribution of the paper is to discover the main properties of the email-based social network of a public organization located in Poland and to identify its key actors using social network analysis tools. An important part of the analysis is also a comparison of the obtained results with the real structure of the organization. The experiment confirmed that analysis of email traffic within an organization may yield information usable for organizational management purposes.

Dariusz Barbucha, Paweł Szyman
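The key-actor identification described in the abstract above can be illustrated with a minimal centrality computation. This is a hedged sketch, not the authors' actual method; the edge list and function name are invented for illustration:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality for an undirected e-mail graph
    given as (person_a, person_b) pairs: degree / (n - 1).
    A simple way to rank candidate 'key actors'."""
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    n = len(deg)
    return {person: d / (n - 1) for person, d in deg.items()}

# Hypothetical e-mail exchanges between four employees.
edges = [("ana", "bob"), ("ana", "eva"), ("ana", "jan"), ("bob", "eva")]
cent = degree_centrality(edges)  # "ana" talks to everyone -> centrality 1.0
```

In a real study one would use richer measures (betweenness, eigenvector centrality) and compare the ranking against the formal organizational chart, as the paper does.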
Social Recommendation for Social Networks Using Deep Learning Approach: A Systematic Review

The increasing popularity of social networks indicates that the vast amounts of data contained within them could be useful in various applications, including recommendation systems. Interest in and research publications on deep learning-based recommendation systems have grown considerably. This study aimed to identify, summarize, and assess studies on the application of deep learning-based recommendation systems on social media platforms, to provide a systematic review of recent studies and a path for further research to improve the development of deep learning-based recommendation systems in social environments. A total of 32 papers were selected from previous studies in six major digital libraries: Springer, IEEE, ScienceDirect, ACM, Scopus, and Web of Science, published between 2016 and 2020. Results revealed that even though RS has received high coverage in recent years, several obstacles and opportunities will shape the future of RS for researchers. In addition, social recommendation systems achieving high accuracy can be built by using a combination of techniques that incorporate a range of features in SRS. Overall, the adoption of deep learning techniques in developing social recommendation systems remains largely unexplored.

Muhammad Alrashidi, Ali Selamat, Roliana Ibrahim, Ondrej Krejcar
Session Based Recommendations Using Char-Level Recurrent Neural Networks

The use of long short-term memory (LSTM) networks for session-based recommendations is described in this research. This study uses a char-level LSTM as a real-time recommendation service and evaluates it to offer the optimal solution. Our strategy can be applied to any situation. Our model consists of two LSTM layers and a dense layer. To evaluate the prediction results, we use the mean squared error. We also test our predictions with recall and precision metrics. The best-performing network had roughly 2000 classes and was trained on the last year of likes on an image-based social platform. On twenty objects, our best model achieved a recall of 0.182 and a precision of 0.061.

Michal Dobrovolny, Jaroslav Langer, Ali Selamat, Ondrej Krejcar
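As a rough illustration of the recall and precision metrics this abstract reports (computed over the top twenty recommended objects), a minimal sketch follows; the item ids and function name are hypothetical, not taken from the paper:

```python
def recall_precision_at_k(recommended, relevant, k=20):
    """Compute recall@k and precision@k for one session.

    recommended: ranked list of item ids produced by the model
    relevant:    set of item ids the user actually interacted with
    """
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    recall = hits / len(relevant) if relevant else 0.0
    precision = hits / k
    return recall, precision

# Toy example: 2 of the 4 relevant items appear in the top-20 list.
recs = list(range(20))      # hypothetical ranked recommendations
rel = {3, 7, 100, 200}      # hypothetical ground-truth items
r, p = recall_precision_at_k(recs, rel, k=20)  # r = 0.5, p = 0.1
```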

Collective Decision-Making

Frontmatter
Hybridization of Metaheuristic and Population-Based Algorithms with Neural Network Learning for Function Approximation

This paper attempts to improve the learning representation of the radial basis function neural network (RBFNN) through a metaheuristic algorithm (MHA) and an evolutionary algorithm (EA). Ant colony optimization (ACO)-based and genetic algorithm (GA)-based approaches are employed to train the RBFNN. The proposed hybridization of ACO-based and GA-based approaches (HAG) incorporates their complementary exploration and exploitation abilities to reach resolution optimization. The population diversity gives a higher chance of finding the global optimum, instead of being trapped in a local optimum, in the two benchmark problems. The experimental results show that the ACO-based and GA-based approaches can be integrated intelligently into a hybrid algorithm that achieves the most precise learning expression among the algorithms compared in this paper. Additionally, assessment results for two benchmark continuous test functions show that the proposed HAG algorithm outperforms the related algorithms in terms of precision in learning function approximation.

Zhen-Yao Chen
Valentino Braitenberg’s Table: Downhill Innovation of Vehicles via Darwinian Evolution

Evolution has been a topic of interest that has been explored extensively throughout recent history. Ever since the proposal of the evolutionary theory by Darwin, there have been attempts made to validate, extend, explore and exploit the different aspects of this theory. This study sets forth methods to explore elements of what has been proposed by Darwinian evolution using Braitenberg vehicles, simple machines with sensory-motor couplings in an environmental context. In this study, a simulation environment is set up based upon the descriptions of neurophysiologist Valentino Braitenberg. This simulation environment is then utilized to carry out the thought experiments envisioned by Braitenberg for the simplest nontrivial kinds of vehicles. The methodology and results for the carried out experiments are detailed in this paper. Apart from an understanding of whether the environmental setup affects evolution, specifically the number of light sources (stimuli) and location of these light sources, the experiments show an interesting trend regarding dynamic equilibrium of the evolutionary process, the ramifications of which might not have been understood well enough previously. It is concluded that ecological setup, as well as the initial genetic makeup of the vehicles, play a crucial role in the evolution of vehicles in scenarios laid out in this study. Further, the placement of stimuli (location of lights) and the number of the stimuli have a visible effect on the survivability of vehicle types (species).

Sahand Shaghaghi, Owais Hamid, Chrystopher L. Nehaniv
Testing for Data Quality Assessment: A Case Study from the Industry 4.0 Perspective

Driven by the significant improvement of technologies and applications in smart manufacturing, this paper describes a way of analyzing and evaluating the quality of real-world industrial data. More precisely, it focuses on developing a method for determining the quality of production data and performing analysis of quality in terms of KPIs, such as the OEE index and its sub-indicators, i.e. availability, quality rate and efficiency. The main purpose of the work is to propose a method that allows determining the quality of the data used to calculate production efficiency scores. In addition to the requirements imposed upon properly selected measures, we discuss possibilities of verifying the validity and reliability of these sub-indicators in relation to major production losses. The method for data quality assessment, developed on real data gathered from factory shop-floor monitoring and management systems, was tested for its correctness. Our research has shown that an analysis of the quality of production data can reveal strengths and weaknesses in the production process. Finally, based on our single-unit intrinsic case study results, we discuss the lessons learned on data quality assessment from an industry perspective and provide recommendations in this area.

Dariusz Król, Tomasz Czarnecki
Hybrid Biogeography-Based Optimization Algorithm for Job Shop Scheduling Problem with Time Lags and Single Transport Robot

We are interested in the Job Shop Scheduling Problem with Time Lags and Single Transport Robot (JSPTL-STR). This problem is a new extension of the Job Shop Scheduling Problem in which we take into account two additional constraints: minimum and maximum time lags between the finish and start times of two operations, and transportation times of operations between machines using a single robot. After an operation completes on a machine, the job needs to be transported by the robot to the next machine, which takes some time. The objective is to determine a feasible schedule of machine operations and transport operations with minimal makespan (the completion time of the last executed operation). This problem belongs to the category of NP-hard problems. The Biogeography-Based Optimization (BBO) algorithm is an evolutionary algorithm inspired by the migration of species between habitats. It has successfully solved optimization problems in many different domains and has demonstrated excellent performance. To assess the performance of the proposed algorithm, a series of experiments on newly proposed benchmark instances for JSPTL-STR is performed.

Madiha Harrabi, Olfa Belkahla Driss, Khaled Ghedira

Computer Vision Techniques

Frontmatter
Deep Component Based Age Invariant Face Recognition in an Unconstrained Environment

Age-invariant face recognition is one of the challenging problems in pattern recognition. Most existing face recognition algorithms perform well under controlled conditions where the data set is collected in careful cooperation with the individuals. However, in most real-world applications, the user usually has little or no control over environmental conditions. This paper proposes an efficient deep component-based age-invariant face recognition algorithm for an unconstrained environment. The algorithm detects the face in an image, aligns it, and extracts the facial components (eyes, mouth and nose). Each facial component is then trained using a deep neural network, and deep features are extracted from each component. A support vector machine is then used in the classification stage. Experiments are conducted on two challenging benchmarks: AgeDb30 and Pins-Face. Results show significant improvement when compared with the state-of-the-art baseline approach.

Amad Asif, Muhammad Atif Tahir, Mohsin Ali
Hybrid Vision Transformer for Domain Adaptable Person Re-identification

Person re-identification refers to finding images of the same person taken from different cameras at different times. Supervised re-id methods rely on labeled datasets, which are usually not available in real-world situations. Therefore, a procedure must be devised to adapt to unseen domains in an unsupervised manner. In this work we propose a domain adaptation methodology using hybrid Vision Transformers and incorporating a cluster loss along with the widely used triplet loss. Our proposed methodology improves the results of existing unsupervised domain adaptation methods for person re-id.

Muhammad Danish Waseem, Muhammad Atif Tahir, Muhammad Nouman Durrani
Recognition of Changes in the Psychoemotional State of a Person by the Video Image of the Pupils

We are looking for a relationship between electrodermal activity and the amplitude of fluctuations in the size of the pupils, depending on the magnitude of the stress (stress state) experienced. Studies have been carried out on the psychophysiological reactions of a person (emotions) arising in response to external stress factors (stimuli). For this, a device was used to register changes in pupil size and galvanic skin response. It turned out that the change in the values of galvanic skin response and pupil size correlates (p = 0.9) in the presence of emotions (all other things being equal). The result of measuring galvanic skin reaction (GSR) shows that the level of attention during the test to some stimuli was higher than to others. This means that the first stimuli may be more significant for the subject than the second. The results obtained make it possible to link the galvanic skin response and the pupil response in response to the stimulus material. Our research also shows that the pupil diameter signal has a good discriminating ability to detect changes in the psychological state of a person. The results can be useful for the development of Computer Vision and Artificial Intelligence.

Marina Boronenko, Oksana Isaeva, Yuri Boronenko, Vladimir Zelensky, Pavel Gulyaev
Morphological Analysis of Histopathological Images Using Deep Learning

In this study, we introduce a morphological analysis of segmented tumour cells from histopathology images concerning the recognition of overlapping cells. The main research problem considered is to distinguish how many cells are located in a structure composed of overlapping cells. In our experiments, we used convolutional neural network models to recognize the number of cells. For the medical data used, Ki-67 histopathology images, we achieved a high F1-score. Therefore, our research supports the use of convolutional neural networks for morphological analysis of segmented objects derived from medical images.

Artur Zawisza, Martin Tabakov, Konrad Karanowski, Krzysztof Galus
Developing a Three Dimensional Registration Method for Optical Coherence Tomography Data

This work proposes a registration method for stitching different overlapping Optical Coherence Tomography (OCT) data. The algorithm is based on the basic procedure of image registration: key points and descriptors are located, feature matching is established, and a homographic transformation is applied to obtain the resultant registered images. Image similarity techniques, namely mean square error, structural similarity index, and peak signal-to-noise ratio, are the three basic approaches used for the analysis of the registered OCT images from the OCT datasets. An algorithm for locating the differences in the registered images against reference and target images is also demonstrated. The similarity measures and the image differentiation approach provide a general analysis of the appropriateness of the image registration algorithm. The same methods are also used for the analysis of the optical coherence tomography volume scans. The similarity analysis also shows that the images with the highest similarities are most likely to be close to each other and can be further used for registration. The analysis comprises the comparison of two optical coherence tomographic volumes.

Bansari Vadgama, Doina Logofatu, Peter Thoma
Which Gameplay Aspects Impact the Immersion in Virtual Reality Games?

In this paper a comparison of two implementations of the same game is presented: a VR (virtual reality) version and a traditional one, i.e. one using no VR hardware. The implementations' design, made specially for this research, is the result of an analysis of many state-of-the-art VR games. The comparison's goal is to find the aspects of gameplay that impact user immersion in a video game. Each extracted aspect is covered by a question in a post-experiment questionnaire and discussed before final conclusions are drawn.

Marek Kopel, Marta Rutkowska

Innovations in Intelligent Systems

Frontmatter
Is Wikipedia Easy to Understand?: A Study Beyond Conventional Readability Metrics

Wikipedia has emerged as one of the most prominent sources of information available on the Internet today. It provides a collaborative platform for editors to edit and share information, making Wikipedia a valuable source of information. Wikipedia articles have been duly studied from an editor's point of view, but the analysis of Wikipedia from the reader's perspective has yet to be undertaken. Since Wikipedia serves as an encyclopedia for its users, its role as an information-providing tool must be examined. The readability of a written text plays a major role in imparting the intended comprehension to its readers. Readability is the ease with which a reader can understand a piece of text. In this paper, we study the readability of various Wikipedia articles. Apart from judging the readability of Wikipedia articles against standard readability metrics, we introduce new parameters related specifically to the comprehension of the text in Wikipedia articles. These new parameters, combined with standard readability metrics, help classify Wikipedia articles into comprehensible and non-comprehensible classes using an SVM classifier.

Simran Setia, S. R. S. Iyengar, Amit Arjun Verma, Neeru Dubey
Compatibility Checking of Business Rules Expressed in Natural Language Against Domain Specification

Business rules play an important role in software development. They are usually expressed in natural language, sometimes with the use of sentence templates such as RuleSpeak. The rules should be consistent with other artifacts representing the same domain, e.g. a business glossary. Typically, business rules are written in a text editor. Such editors only offer basic support such as grammar and spelling checking. They do not check the compliance of business rules with a business glossary or more complex artifacts, e.g. domain diagrams. The goal of this paper is to propose a method for checking the compatibility of business rules expressed in natural language with the domain specification. Checking is done at the syntax level. It is assumed that the domain is specified by a class diagram and a glossary. The compatibility checking is a heuristic method, the usefulness of which has been demonstrated by several experiments. At this point, the method's application is limited to business rules relating to at most two classes and/or two attributes/roles.

Bogumila Hnatkowska
Agent-Based Modeling and Simulation of Citizens Sheltering During a Tsunami: Application to Da Nang City in Vietnam

Humans have witnessed several tsunamis throughout their history with tremendous human, environmental and material damage. Vietnam, because of its special geographic location in Southeast Asia, might be affected by tsunamis. Although the risk of tsunami in Vietnam is low, it does exist. Therefore, people must be well prepared if a tsunami strikes. Minimizing the damage caused by a tsunami, both for humans and infrastructure, is the most important duty of the authorities and the scientists. The objective of our research is to evaluate and measure the tsunami impacts on citizens and tourists according to their awareness of the situation and moving strategies towards shelters. We follow the agent-based modeling and simulation (ABMS) approach and illustrate it through a case study applied to Da Nang city, Vietnam, using the NetLogo platform and geo-spatial information.

Nguyen-Tuan-Thanh Le, Phuong-Anh-Hung-Cuong Nguyen, Chihab Hanachi
Effect of Dialogue Structure and Memory on Language Emergence in a Multi-task Game

In language emergence, neural agents engage in finite-length conversations using a finite set of symbols to reach a given goal. In such systems, two key factors can determine the dialogue structure: the size of the symbol set and the conversation length. During training, agents invent and assign meanings to the symbols without any external supervision. Existing studies do not investigate how these models behave when trained on multiple tasks requiring different levels of coordination and information exchange. Moreover, only a handful of works discuss the relationship between the dialogue structure and the performance. In this paper, we formulate a game environment where neural agents simultaneously learn on heterogeneous tasks. Using our setup, we investigate how the dialogue structure and the agent's capability of processing memory affect the agent's performance across multiple tasks. We observed that memory capacity affects the task performances non-linearly, where the nature of the task influences this non-linearity. In contrast, the performance gain obtained by varying the dialogue structure is mostly task-independent. We further observed that agents prefer smaller symbol sets with longer conversation lengths over the converse.

Kasun Vithanage, Rukshan Wijesinghe, Alex Xavier, Dumindu Tissera, Sanath Jayasena, Subha Fernando
Towards Smart Customer Knowledge Management Systems

Nowadays, customer focus is one of the most important challenges of enterprises in identifying customer needs and providing suitable products and services to customers. Customer focus gives prominence to knowledge about, for, and from customers. Customer knowledge management and transfer – at the right time, in the right place, and with the right quality – enable enterprises to survive in today’s business environment. This paper presents the concept of smart customer knowledge management and proposes a conceptual framework for studying and designing smart customer knowledge management systems based on the design science method.

Thang Le Dinh, Nguyen Anh Khoa Dam

Cybersecurity Intelligent Methods

Frontmatter
The Proposition of Balanced and Explainable Surrogate Method for Network Intrusion Detection in Streamed Real Difficult Data

Handling the data imbalance problem is one of the crucial steps in a machine learning pipeline. The research community is well aware of the effects of data imbalance on machine learning algorithms. At the same time, there is a rising need for explainability of AI, especially in difficult, high-stake domains like network intrusion detection. In this paper, the effects of data balancing procedures on two explainability procedures implemented to explain a neural network used for network intrusion detection are evaluated. The discrepancies between the two methods are highlighted and important conclusions are drawn.

Mateusz Szczepanski, Mikołaj Komisarek, Marek Pawlicki, Rafał Kozik, Michał Choraś
Behavioral Anomaly Model for Detecting Compromised Accounts on a Social Network

The previous two decades have given birth to a new popular phenomenon - online social networking. Its range, diversity and constantly growing impact on our lives provide space for the creation of a new and profitable, highly professionalized industry of cybercrime. It is based on phishing, social engineering, brute-force password guessing or malware collecting passwords. The aim is to penetrate another's account and monetize it by blackmailing, searching for cryptocurrency wallets, obtaining passwords, influencing public opinion in favor of an idea, or at least sending spam. Today's hackers are well organized into massive, internally specialized criminal networks recruited from the best IT specialists, and as the latest results of the British investigative initiative Bellingcat show, they are also often the cornerstone of the ongoing hybrid war. On top of this, in February 2021, the "mother of all breaches" appeared on the darknet - the COMB database - consisting of 3.2 billion unique pairs of emails and cleartext passwords. This is a clear and present danger for every user of any social network, and providers have to act immediately to protect the accounts of their users. This paper describes how a social network operator with 250,000 daily active users deals with the problem of account compromise by deploying anomaly detection responsive to sudden changes in the behavior of a user trying to log in to an account.

Antonin Fuchs, Miroslava Mikusova
Adversarial Attacks on Face Detection Algorithms Using Anti-facial Recognition T-Shirts

The purpose of this paper is to analyze adversarial attacks on face detection algorithms using anti-facial recognition T-shirts. In the research we also checked whether the methods used to attack object detectors are effective in attacking face detectors. The research verifies the safety of computer vision algorithms on the example of face detection. An attempt was made to attack the detector with 63 prepared examples. Each example contained a specially generated adversarial pattern, which was placed on the T-shirt either in a digital version or in the form of a physically printed sheet applied to the T-shirt.

Ewa Lyko, Michal Kedziora
Security and Scalability in Private Permissionless Blockchain: Problems and Solutions Leading to Creating Consent-as-a-Service (CaaS) Deployment

The purpose of this paper is to analyze the security and scalability problems occurring in private permissionless blockchain systems. A consent management system (CMS) based upon Hyperledger Fabric (HLF) was implemented in the selected blockchain-as-a-service (BaaS), which led to a consent-as-a-service (CaaS) deployment. The experimental results assessed to what extent the network transaction throughput is affected by changing the world state size, which indicates the scalability of the chosen blockchain system implementation. Additional experiments with the IBM Blockchain Platform and the FastFabric framework (an HLF modification) were performed to demonstrate the possibility of achieving transaction throughput comparable to the Ethereum blockchain network.

Hanna Grodzicka, Michal Kedziora, Lech Madeyski

Data Mining and Machine Learning

Frontmatter
EMaxPPE: Epoch’s Maximum Prediction Probability Ensemble Method for Deep Learning Classification Models

As deep learning (DL) is evolving rapidly, applying DL knowledge to various fields of human life and making effective use of existing data insights are becoming crucial tasks for a majority of DL models. We propose to ensemble the maximum prediction probabilities of different epochs together with the epoch that achieved the highest accuracy for classification problems. Our suggestion contributes to the improvement of DL models by using the pre-trained and otherwise skipped results from intermediate epochs. The maximum prediction probability ensemble of epochs increases the prediction space of the entire model if the intersection of the prediction scopes of the epochs is smaller than the largest single prediction scope. Using only the best epoch's prediction probabilities for classification discards the other epochs' knowledge. To avoid bias in this research, a simple CNN architecture with batch normalization and dropout was used as the base model. By ensembling only the maximum prediction probabilities of different epochs, we managed to recover 50% of the data insight lost across epochs, thereby increasing the total accuracy by 4-5%.

Javokhir Musaev, Ngoc Thanh Nguyen, Dosam Hwang
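The epoch-wise maximum-probability ensembling described above can be sketched as follows. This is an assumed reading of the idea (pick, per sample, the class with the single highest probability observed in any saved epoch), not the authors' implementation:

```python
def emax_ensemble(epoch_probs):
    """Ensemble per-epoch class-probability vectors by taking, for each
    sample, the class with the single highest probability seen in any epoch.

    epoch_probs: list (over epochs) of lists (over samples) of
                 per-class probability lists.
    """
    n_samples = len(epoch_probs[0])
    preds = []
    for i in range(n_samples):
        best_prob, best_class = -1.0, None
        for probs in epoch_probs:  # scan every saved epoch's output
            p = probs[i]
            c = max(range(len(p)), key=p.__getitem__)
            if p[c] > best_prob:
                best_prob, best_class = p[c], c
        preds.append(best_class)
    return preds

# Toy example: epoch 2 is more confident about sample 0, epoch 1 about sample 1.
preds = emax_ensemble([
    [[0.6, 0.4], [0.9, 0.1]],   # epoch 1 probabilities
    [[0.2, 0.8], [0.55, 0.45]], # epoch 2 probabilities
])
```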
Convolutional Neural Networks with Dynamic Convolution for Time Series Classification

Due to its prominent applications, time series classification is one of the most important fields of machine learning. Although there are various approaches for time series classification, dynamic time warping (DTW) is generally considered to be a well-suited distance measure for time series. Therefore, in the early 2000s, techniques based on DTW dominated this field. On the other hand, deep learning techniques, especially convolutional neural networks (CNN) were shown to be able to solve time series classification tasks accurately. Although CNNs are extraordinarily popular, the scalar product in convolution only allows for rigid pattern matching. In this paper, we aim at combining the advantages of DTW and CNN by proposing the dynamic convolution operation and dynamic convolutional neural networks (DCNNs). The main idea behind dynamic convolution is to replace the dot product in convolution by DTW. We perform experiments on 10 publicly available real-world time-series datasets and demonstrate that our proposal leads to statistically significant improvement in terms of classification accuracy in various applications. In order to promote the use of DCNN, we made our implementation publicly available at https://github.com/kr7/DCNN .

Krisztian Buza, Margit Antal
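The core idea of dynamic convolution, replacing the dot product in the convolution window with a DTW distance, can be sketched in plain Python. This is an illustrative, unoptimized version, not the authors' released implementation (which is linked in the abstract):

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two sequences,
    the measure that replaces the dot product in dynamic convolution."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def dynamic_convolution(signal, kernel):
    """Slide the kernel over the signal; each output is the negated DTW
    distance of the current window, so smaller distances mean stronger
    activation (analogous to a large dot product)."""
    k = len(kernel)
    return [-dtw_distance(signal[i:i + k], kernel)
            for i in range(len(signal) - k + 1)]

out = dynamic_convolution([0, 1, 2, 3, 0], [1, 2, 3])
# The window exactly matching the kernel yields the peak activation 0.0.
```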
Enhancing Speech Signal Features with Linear Envelope Subtraction

Common integral transforms map the speech signal into another space with a set of orthogonal basis vectors. Speech is, by nature, a periodic signal, so the basis vectors are periodic too, particularly the sinusoidal wave. In reality, after being affected by many outside factors, the speech signal is not always periodic. As a consequence, traditional transforms such as the Fourier or wavelet transforms do not always perform well. In this research, we propose a new method in which the speech signal is processed to be periodic before being transformed into the frequency domain. We first use linear regression to identify the linear envelope of the speech signal in the time domain, then subtract the identified linear function from the signal to horizontalize it. The feature vector we propose includes two parameters from the linear envelope together with the standard feature vectors in the frequency domain. Experimental results show that our new method works well in many cases, demonstrating the significant impact of the linear envelope and its improvement on the performance of the recognizers.

Hao D. Do, Duc T. Chau, Dung D. Nguyen, Son T. Tran
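The linear-envelope subtraction step described above (fit a line by least squares, then subtract it to horizontalize the signal) might look roughly like this; the function name and toy signal are illustrative, not from the paper:

```python
def detrend_linear(signal):
    """Fit a line a*t + b to the samples by least squares and subtract it,
    'horizontalizing' the signal; returns (a, b, detrended_signal)."""
    n = len(signal)
    t = list(range(n))
    mt = sum(t) / n
    ms = sum(signal) / n
    # Closed-form simple linear regression coefficients.
    a = sum((ti - mt) * (si - ms) for ti, si in zip(t, signal)) / \
        sum((ti - mt) ** 2 for ti in t)
    b = ms - a * mt
    detrended = [si - (a * ti + b) for ti, si in zip(t, signal)]
    return a, b, detrended

# A purely linear toy 'signal' detrends to (numerically) zero.
a, b, res = detrend_linear([2 * i + 1 for i in range(5)])
```

The paper's feature vector would then combine the two envelope parameters (a, b) with standard frequency-domain features computed on the detrended signal.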
Entropy Role on Patch-Based Binary Classification for Skin Melanoma

In this paper, we split the region of interest of dermoscopic images of skin lesions into patches of different sizes and analyze the impact of the entropy of the patches on patch-based binary classification using a convolutional neural network (CNN). Specifically, we analyze the distribution of entropy amongst the patches and compare the training time of a classifier on subsets of the data with varying entropy. We find that the classifier converges faster on patches with higher entropy. Our entropy-based analysis is performed on skin lesion images from the ISIC archive.

Guillaume Lachaud, Patricia Conde-Cespedes, Maria Trocan
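A minimal sketch of the patch-entropy measure the abstract relies on, assuming Shannon entropy of the intensity histogram (a common choice; the paper may define it differently):

```python
import math

def patch_entropy(patch):
    """Shannon entropy (in bits) of the value histogram of a flat list
    of pixel intensities; higher entropy means a more varied patch."""
    counts = {}
    for v in patch:
        counts[v] = counts.get(v, 0) + 1
    n = len(patch)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat = patch_entropy([5] * 16)            # uniform patch -> 0 bits
varied = patch_entropy([0, 1, 2, 3] * 4)  # 4 equally likely values -> 2 bits
```

Patches would then be binned by this value to compare classifier convergence across low- and high-entropy subsets.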
Reinforcement Learning for Optimizing Wi-Fi Access Channel Selection

Wi-Fi’s success is largely a testament to its cost-effectiveness, convenience, and ease of integration with other networks. Wi-Fi allows a rapidly increasing number of users to access network services from any convenient location within their networking environment. However, wireless networks frequently lose packets because of poor Wi-Fi signal, network interference, and long-distance connections. This study analyses these Wi-Fi issues and demonstrates a solution for maintaining a reliable network connection within an area containing multiple access points and devices by using Reinforcement Learning (RL). The RL algorithm is developed to recommend appropriate channels for the access points in a wireless network environment. A case study of Wi-Fi access data at a university is examined to evaluate the proposed method. Experimental results show that RL-based Wi-Fi access channel selection can achieve higher performance than manual channel selection.

Hung Nguyen, Duc Long Pham, Mau Hien Doan, Thi Thanh Sang Nguyen, Duc Anh Vu Dinh, Adrianna Kozierkiewicz
Emotiv Insight with Convolutional Neural Network: Visual Attention Test Classification

The purpose of this paper is to use a low-cost EEG device to collect brain signals and a neural network algorithm to classify attention level from the recorded EEG data. Fifteen volunteers participated in the experiment. The Emotiv Insight headset was used to record the brain signal while participants performed the Visual Attention Colour Pattern Recognition (VACPR) test. The test was divided into two tasks: task A, stimulating the participant to be attentive, and task B, stimulating the participant to be inattentive. The recorded raw EEG signal was then passed through a notch filter and Independent Component Analysis (ICA) to filter out noise. After that, Power Spectral Density (PSD) was used to compute the power of the pre-processed EEG signal, to verify that the recorded signal was consistent with the mental state stimulated during tasks A and B before performing classification. Since EEG signals exhibit significantly complex behaviour with dynamic and non-linear characteristics, a Convolutional Neural Network (CNN) shows great promise for classifying EEG signals thanks to its capacity to learn good feature representations from them. An accuracy of 76% was achieved, indicating the feasibility of using the Emotiv Insight with a CNN for attention level classification.
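The PSD sanity check described above amounts to comparing band power between conditions. A crude direct-DFT band power (a toy stand-in; real pipelines use Welch's method, and the 128 Hz rate below is just an assumed consumer-headset value):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Crude power estimate in [f_lo, f_hi] Hz via a direct DFT of the signal."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

fs = 128  # Hz (assumed sampling rate)
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs * 2)]  # 10 Hz alpha-band tone
```

For a relaxed-state-like 10 Hz tone, alpha-band (8-12 Hz) power dominates beta-band (14-30 Hz) power, which is the kind of consistency check the abstract performs before classification.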

Chean Khim Toa, Kok Swee Sim, Shing Chiang Tan

Machine Learning in Real-World Data

Frontmatter
Machine Learning in the Context of COVID-19 Pandemic Data Analysis

Recently, COVID-19 pandemic-related predictions and models have emerged as one of the crucial problems at the intersection of medicine and computer science. The acquired data carries the features of complex, difficult-to-analyze data. Moreover, the often exponential growth of infections leads to rapid growth of the available data. Thus any approach to cleaning and preprocessing the data, as well as algorithms capable of dealing with the prediction problem, is crucial. This is especially important for two main reasons: first, results acquired during analysis and prediction could be used to contain the pandemic; moreover, such unprocessed, difficult data could be very important for different machine learning methods as a source of real-world, still-changing data. It should be clearly stated that the impact of external factors, such as capabilities for dealing with a pandemic, different country-dependent actions, and different virus mutations, makes data prediction an enormous challenge. In this article, we introduce an overview of different methods and approaches used in the context of the COVID-19 pandemic. Additionally, we estimate the quality of pandemic prediction on the basis of polynomial regression. We investigate four different versions (depending on the size of the train set) and test the results on four different cases (the situation in three different countries as well as the situation around the world).
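The polynomial-regression forecasting idea can be sketched with NumPy. The data below is synthetic, not real case counts, and the train/test split sizes are illustrative of (not identical to) the four versions the paper studies:

```python
import numpy as np

# Toy cumulative case counts following quadratic growth (synthetic data).
days = np.arange(30)
cases = 50 + 3 * days + 0.8 * days**2

# Fit a degree-2 polynomial on the first 25 days, predict the last 5.
coeffs = np.polyfit(days[:25], cases[:25], deg=2)
pred = np.polyval(coeffs, days[25:])
```

Varying the length of the training window (`days[:25]` here) is exactly the knob that distinguishes the four versions evaluated in the paper.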

Anita Hrabia, Jan Kozak, Przemysław Juszczuk
Web Scraping Methods Used in Predicting Real Estate Prices

Currently, a significant increase is observed in the number of offers related to, among others, the real estate market. More and more clients look for real estate on dedicated portals. However, the sheer amount of data, the number of websites related to the real estate market, and the dependence of this market on many factors make it very difficult for the end user to have the data presented in a single place. Therefore, a good solution is to prepare a tool that serves as an initial expert system helping to make the decision. In this paper we use different web scraping methods to obtain data from the real estate market. For the Polish market, the data is either not free of charge or scattered across multiple sources. The end goal is to make price predictions; therefore, the quality of data acquisition is crucial. The solution is to combine different methods of web scraping and crawling into a complete system, presented here as an actual use case study with guidelines and good practices. The presented system is running successfully and serves as a data source for real estate market predictions.
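One elementary scraping step, extracting listing prices from HTML, can be sketched with the standard library alone. The `class="price"` markup below is a hypothetical page structure, not any specific portal's; real crawlers additionally need pagination, politeness delays, and anti-bot handling:

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Collects the text of elements whose class attribute is 'price'."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == "price":
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            # Prices like "450 000" use spaces as thousands separators.
            self.prices.append(int(data.replace(" ", "")))
            self.in_price = False

page = '<ul><li class="price">450 000</li><li class="price">612 500</li></ul>'
scraper = PriceScraper()
scraper.feed(page)
```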

Tomasz Jach
Combining Feature Extraction Methods and Principal Component Analysis for Recognition of Vietnamese Off-Line Handwritten Uppercase Accented Characters

This paper proposes a blended model suitable for recognizing off-line handwritten accented characters in general and Vietnamese characters in particular. The recognition model linearly combines four extraction methods in the feature extraction phase: zone density, projection histogram, contour profiles, and Haar wavelets. Principal Component Analysis (PCA) is then applied to the obtained feature set to retain the useful features and reduce recognition time. Additionally, a Support Vector Machine (SVM) is utilised for training and recognition. The proposed model is tested on a dataset of 21,174 samples covering 99 Vietnamese off-line handwritten accented characters.
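Of the four extractors, zone density is the simplest to illustrate: the binary character image is split into a grid of zones and the foreground-pixel fraction of each zone becomes one feature. A minimal sketch (the PCA/SVM stages are omitted):

```python
def zone_density_features(image, zones=4):
    """Split a binary image into zones x zones cells and return the
    fraction of foreground (1) pixels in each cell, row by row."""
    h, w = len(image), len(image[0])
    zh, zw = h // zones, w // zones
    feats = []
    for zi in range(zones):
        for zj in range(zones):
            cell = [image[i][j]
                    for i in range(zi * zh, (zi + 1) * zh)
                    for j in range(zj * zw, (zj + 1) * zw)]
            feats.append(sum(cell) / len(cell))
    return feats

# 8x8 toy "character": left half foreground, right half background.
img = [[1] * 4 + [0] * 4 for _ in range(8)]
feats = zone_density_features(img, zones=2)
```

Concatenating this vector with the projection-histogram, contour-profile, and Haar-wavelet features gives the combined vector that PCA then compresses.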

Ha Hoang Quoc Thi, Mau Hien Doan

Internet of Things and Computational Technologies for Collective Intelligence

Frontmatter
Detecting School Violence Using Artificial Intelligence to Interpret Surveillance Video Sequences

In this study, we present a skeleton-based approach for detecting aggressive activity. The approach does not require powerful hardware and is fast in practice. There are two stages in our method: feature extraction from video frames to estimate a person’s posture, followed by action classification with a neural network to determine whether the frames contain bullying scenes. We selected 13 classes for identifying aggressor and victim behavior and created a dataset of 400 minutes of video containing actions of one person and 20 hours of video containing actions of physical bullying and aggression. The approach was tested on the assembled dataset. Results show more than 97% accuracy in detecting aggressive behavior in video sequences.

Sergazy Narynov, Zhandos Zhumanov, Aidana Gumar, Mariyam Khassanova, Batyrkhan Omarov
Audio Surveillance: Detection of Audio-Based Emergency Situations

The subject of this study is the recognition of sounds of critical situations in an audio signal. The term “critical situation” is understood as an event whose characteristic sound signatures include acoustic artifacts such as a gunshot, a scream, breaking glass, an explosion, or a siren. The paper considers the scope of audio analytics and its advantages, reviews the history of spectral analysis, and analyzes and selects tools for further development of the system components. We propose a dataset consisting of 14 classes with 1,000 sounds each, and a model that detects emergency situations using audio processing and analytics.

Zhandos Dosbayev, Rustam Abdrakhmanov, Oxana Akhmetova, Marat Nurtas, Zhalgasbek Iztayev, Lyazzat Zhaidakbaeva, Lazzat Shaimerdenova
Understanding Bike Sharing Stations Usage with Chi-Square Statistics

Bike sharing systems offer both great potential and great challenges for the development of smart and green urban environments. Many problems arising from the design and operation of bike sharing systems have no easy solutions and call for complex mathematical models. Nowadays, there are many sophisticated methods for understanding and administering bike sharing systems, based on data mining techniques, graph computations, temporal network models, etc. At the same time, as digitalization accelerates, easy and affordable old-school methods are often overlooked. This paper presents a simple but efficient chi-square test for analyzing bike sharing station usage in mornings and evenings. The proposed method determines the stations that keep the same usage patterns over time. Experiments conducted on CitiBike trip data for New York City’s bike sharing service have shown promising performance of the proposed method.
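The chi-square machinery at the heart of the method is standard; as a toy illustration (counts invented, not CitiBike data), the Pearson statistic over a stations-by-time-of-day contingency table looks like this:

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a contingency table,
    e.g. rows = stations, columns = (morning, evening) trip counts."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total  # expected under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Two stations with identical morning/evening ratios -> statistic is 0.
balanced = [[100, 50], [200, 100]]
# A morning-heavy commuter station vs an evening-heavy leisure station.
skewed = [[180, 20], [60, 140]]
```

A small statistic (compared with the chi-square critical value) indicates a station whose usage pattern is stable over time, which is what the paper's test detects.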

Aliya Nugumanova, Almasbek Maulit, Madina Mansurova, Yerzhan Baiburin
Intelligent System for Assessing the Socio-economic Situation in the Region

This article analyzes the problems of monitoring and managing the socio-economic situation. The analysis involves determining the quantitative characteristics of the dynamic series and trends of growth, decline, or stabilization, and identifying causal factors in specific territories and for different groups. A fuzzy controllability criterion is obtained to solve the problem of forecasting and controlling the socio-economic situation. A mathematical model and an algorithm for monitoring and managing the socio-economic situation based on interval mathematics, together with their software implementation, are described. The social effect will be expressed in improving the safety of people’s lives. As a result, it will be possible to carry out preventive measures in the necessary territories.

Sholpan Jomartova, Talgat Mazakov, Daryn Mukhaev, Aigerim Mazakova, Gauhar Tolegen
Integration of PSO Algorithm and Fuzzy Logic to Reduce Energy Consumption in IoT-Based Sensor Networks

A wireless sensor network (WSN) is composed of a set of low-cost, low-energy sensor nodes that sense quantities such as humidity, temperature, and pressure and send them to a central node. Routing optimization in IoT networks must take into account traffic, congestion, and the failure of some or all network services; indeed, low traffic and low network congestion mean high-quality routing. Increasing the network lifetime and improving routing quality are among the main concerns of IoT-based sensor networks. In this paper, a new clustering algorithm is proposed to increase network efficiency in IoT-based sensor networks using fuzzy logic and the particle swarm optimization (PSO) algorithm. In this approach, clusters are created in the WSN using energy modelling for effective routing of data packages via fuzzy logic and PSO. The simulation results of the proposed method showed higher capabilities in comparison with reported algorithms (e.g. LEACH, FLCFP, HEED, FBCFP) used for energy consumption optimization in WSNs.

Behnam Seyedi, Octavian Postolache

Smart Industry and Management Systems

Frontmatter
An Analysis of Convolutional Neural Network Models for Classifying Machine Tools

This paper analyzes the use of different neural network architectures on two different sets of machine tool images. The sets are composed either of images taken with a low-quality camera or of catalog photos. The task was to classify the different types of cutting tools, which is the first step in initiating automatic support for computer-based sharpening. The performance of the different neural network models was evaluated using a confusion matrix and the F1-score; for better understanding, ROC and PR curves were also used. A final check using the trained Convolutional Neural Networks was done reciprocally on each of the respective test sets. The main contribution is dedicated to research in Industry 4.0, especially the application of machine learning methods. The main goal of this paper is to present an analysis of the different deep neural network models used to classify machine tools. Furthermore, the factor of domain relevance is briefly discussed.
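Deriving per-class F1-scores from a confusion matrix, as in the evaluation described above, is straightforward; a minimal sketch with an invented three-class tool matrix:

```python
def f1_scores(confusion):
    """Per-class F1 from a confusion matrix (rows: true class, cols: predicted)."""
    n = len(confusion)
    scores = []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp  # predicted c, wrongly
        fn = sum(confusion[c]) - tp                       # true c, missed
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return scores

# Three tool classes; class 2 is sometimes confused with class 0.
cm = [[50, 0, 0],
      [0, 45, 5],
      [10, 0, 40]]
scores = f1_scores(cm)
```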

Leonid Koval, Daniel Pfaller, Mühenad Bilal, Markus Bregulla, Rafał Cupek
Low-Level Wireless and Sensor Networks for Industry 4.0 Communication – Presentation

Wireless communication is becoming increasingly popular in factory automation and process control systems, especially when moving components are involved. One reason for this growth is the benefits that wireless communication offers, which include lower installation costs than wired networks, less mechanical wear and tear, and the ability to provide critical information even about moving components. Robust and reliable wireless communication solutions must accommodate the demanding and changing conditions of the existing industrial environment, such as a variable number of communication elements, possible interference, a large coverage area, and a limited amount of available battery power. For mobile industrial environments, sensor networks appear to be candidates that can meet many or even all of these requirements. Several types of such communication solutions are available and in use. This paper focuses on selected wireless networks that can be considered enabling technologies for the new generation of manufacturing systems.

Anna-Lena Kampen, Marcin Fojcik, Rafal Cupek, Jacek Stoj
Ontology-Based Approaches for Communication with Autonomous Guided Vehicles for Industry 4.0

Autonomous Guided Vehicles (AGVs) are an enabling technology that has changed the landscape for the new generation of manufacturing systems. Because an AGV must interact with a heterogeneous production environment, communication between an AGV and other devices must be established dynamically. This includes production stands, production systems, manufacturing infrastructure, and cooperation with other AGVs. The focus of this paper is an ontological approach that enables dynamic communication with an AGV that must adapt to changing operating conditions. The aim of this work is to review the existing approaches using ontologies for industrial communication, to evoke a discussion, and to elucidate current research opportunities by highlighting the relationships between different subareas of communication with an AGV.

Rafal Cupek, Marcin Fojcik, Piotr Gaj, Jacek Stój
Detecting of Minimal Changes in Physical Activity Using One Accelerometer Sensor

This paper presents experiments using IoT systems for monitoring chronic arthritis patients with an inexpensive and convenient wristband, avoiding expensive medical equipment. The main goal is to determine whether it is possible to distinguish between different types of patient behavior while they perform very similar exercises. The data from the wristband were collected through a communication system and statistically analyzed. The comparison of the obtained results allows for reliable, reproducible, and accurate decoding of individual cases. The publication describes the various steps of data collection and analysis and gives the results in the form of receiver operating characteristic curves for all measured features, together with a comparison of performed and detected exercises.

Pawel Mielnik, Marcin Fojcik, Krzysztof Tokarz, Zuzanna Rodak, Bjarte Pollen

Low Resource Languages Processing

Frontmatter
Integrated Technology for Creating Quality Parallel Corpora

What determines the quality of parallel corpora? First of all, the quality of the translation. In this paper, however, we consider not the substantive quality of the translation but the “technical” quality of parallel texts. Parallel texts are collected from different sources, and such texts often have the following shortcomings: language mixing, font mixing, text alignment problems, and the need for manual correction. All these problems must first be recognized and then resolved, and with large volumes of parallel texts, performing these operations manually is very time-consuming. Therefore, this work proposes an integrated technology for creating parallel corpora that minimizes the number of manual operations. The authors demonstrate the technology on a new linguistic resource: an open Kazakh-English parallel corpus.

Zhandos Zhumanov, Ualsher Tukeyev
Development and Study of a Post-editing Model for Russian-Kazakh and English-Kazakh Translation Based on Machine Learning

This work presents research in the field of machine translation for the Kazakh language. A comparative analysis of the translations produced by open online machine translation systems (Google Translate, Yandex Translate, sozdik.kz, webtran.ru) for English-Kazakh and Russian-Kazakh is presented. To improve translation quality for the Kazakh language, a post-editing model for Kazakh machine translation has been developed, based on a neural network training approach. For machine learning, parallel corpora for the English-Kazakh and Russian-Kazakh language pairs were collected and processed. Experimental testing has been carried out, and the results were evaluated using the BLEU metric.

Diana Rakhimova, Kamila Sagat, Kamila Zhakypbaeva, Aliya Zhunussova
Development and Study of an Approach for Determining Incorrect Words of the Kazakh Language in Semi-structured Data

Research in the field of computational linguistics is relevant due to the rapid growth of natural language information on the Internet and in social networks. Currently, the amount of information that humans and machines create in natural language is increasing. Information retrieval systems, dialog systems, machine translation, automatic summarization tools, and spelling check modules analyze and process texts in natural languages. Thus, the range of automatic word processing systems is wide and covers a variety of tasks. One of the most important tasks of natural language processing (NLP) is to find errors in texts, in particular to identify and correct incorrect words. The article provides an overview of semi-structured data and of methods and technologies for detecting incorrect words in natural languages. An approach for identifying incorrect words in the Kazakh language was developed, and its features and capabilities were analyzed. A comparative analysis of texts from the Internet and social networks, and of technologies that identify incorrect words in natural languages, has been carried out.

Yntymak Abdrazakh, Aliya Turganbayeva, Diana Rakhimova
Conversational Machine Reading Comprehension for Vietnamese Healthcare Texts

Machine reading comprehension (MRC) is a sub-field of natural language processing that aims to help computers understand unstructured texts and then answer questions about them. In practice, conversation is an essential way to communicate and transfer information. To help machines understand conversational texts, we present UIT-ViCoQA, a new corpus for conversational machine reading comprehension in the Vietnamese language. The corpus consists of 10,000 questions with answers over 2,000 conversations about health news articles. We then evaluate several baseline approaches for conversational machine comprehension on the UIT-ViCoQA corpus. The best model obtains an F1 score of 45.27%, which is 30.91 points behind human performance (76.18%), indicating that there is ample room for improvement. Our dataset is available at http://nlp.uit.edu.vn/datasets/ for research purposes.

Son T. Luu, Mao Nguyen Bui, Loi Duc Nguyen, Khiem Vinh Tran, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen
Bigram Based Deep Neural Network for Extremism Detection in Online User Generated Contents in the Kazakh Language

Countering the spread of aggressive information and extremism on the global network is an urgent problem for society and government agencies, which is addressed in particular by filtering unwanted Internet resources. A necessary condition for such filtering is the classification of the content of websites, texts, and documents in the information flow. An urgent problem of information technologies is therefore the classification of texts in natural languages in order to detect extremist content, such as calls for extremism and other messages that threaten the security of citizens. Our research examines the detection of extremist messages in online content in the Kazakh language. To do this, we collected a corpus of extremist texts from open sources and developed a bigram-based deep neural network for detecting extremist texts in the Kazakh language. The proposed model has shown high efficiency in comparison with classical machine learning and deep learning methods.

Shynar Mussiraliyeva, Batyrkhan Omarov, Milana Bolatbek, Kalamkas Bagitova, Zhanna Alimzhanova

Computational Collective Intelligence and Natural Language Processing

Frontmatter
Cbow Training Time and Accuracy Optimization Using SkipGram

Most word embedding techniques get their theoretical foundation from distributional semantics theory. They have been among the most popular trends in natural language processing for the last two decades and have a wide range of applications. This paper presents an overview of recent word embedding techniques and proposes an optimized continuous bag-of-words (CBOW) model. The experiments we conducted show that the proposed approach outperforms the classic CBOW technique in terms of accuracy and training time.
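The difference between the two architectures named in the title lies in how training examples are formed: CBOW predicts the center word from its context window, while skip-gram predicts each context word from the center word. A minimal sketch of the pair construction (the training of the embedding matrices themselves is omitted):

```python
def training_pairs(tokens, window=2, mode="cbow"):
    """Build training examples for CBOW (context -> center) or
    skip-gram (center -> each context word) from a token list."""
    pairs = []
    for i, center in enumerate(tokens):
        ctx = [tokens[j]
               for j in range(max(0, i - window), min(len(tokens), i + window + 1))
               if j != i]
        if mode == "cbow":
            pairs.append((ctx, center))          # one example per position
        else:
            pairs.extend((center, c) for c in ctx)  # one example per context word

    return pairs

sent = ["the", "cat", "sat", "on", "mat"]
cbow = training_pairs(sent, mode="cbow")
skip = training_pairs(sent, mode="skipgram")
```

Skip-gram produces more (and sparser) examples per sentence than CBOW, which is one root of the training-time difference the paper exploits.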

Toufik Mechouma, Ismail Biskri, Jean Guy Meunier, Alaidine Ben Ayed
Erroneous Coordinated Sentences Detection in French Students’ Writings

This paper presents the development stages of an NLP tool to be used to improve students’ skills in French academic writing. Among various relevant difficulties, we focus on coordinating constructions that may or may not include ellipsis. We develop a tool to detect errors automatically in coordinated sentences from a corpus composed of erroneous and correct sentences. We use a deep learning approach based on the French CamemBERT model. To find the best learning environment for the classification task, we show the results obtained from training and testing datasets with different proportions of erroneous and correct sentences.

Laura Noreskal, Iris Eshkol-Taravella, Marianne Desmets
Comprehensive Evaluation of Word Embeddings for Highly Inflectional Language

The purpose of this paper is to present experiments aimed at choosing the best word embeddings for highly inflectional languages. In particular, the authors evaluated the word embeddings for the Polish language available in the literature at the time of writing. Static embeddings such as Word2Vec, GloVe, and fastText, along with their training settings, were taken into account; in total, the evaluation covered 121 different embedding models provided by IPI PAN, OPI, Kyubyong, and Facebook. The experimental phase was divided into two tasks: the first examined word analogies, and the second verified the similarity and relatedness of pairs of words. The obtained results showed that, in terms of accuracy, the Facebook fastText model trained on the Common Crawl collection should be considered the best model under the assumptions of the experimental session.
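The word-analogy task used in the first evaluation is usually solved with vector arithmetic and cosine similarity; a self-contained sketch with tiny hand-made embeddings (not the evaluated models):

```python
def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def analogy(emb, a, b, c):
    """Answer 'a is to b as c is to ?' by ranking cosine(b - a + c, w)
    over all vocabulary words except the query words."""
    target = [x - y + z for x, y, z in zip(emb[b], emb[a], emb[c])]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(target, emb[w]))

# Toy embeddings: dim 0 encodes gender, dim 1 encodes royalty.
emb = {"man": [1.0, 0.0], "woman": [-1.0, 0.0],
       "king": [1.0, 1.0], "queen": [-1.0, 1.0],
       "apple": [0.0, -1.0]}
answer = analogy(emb, "man", "king", "woman")
```

Accuracy on a benchmark of such quadruples is the analogy score on which the embedding models in the paper are compared.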

Pawel Drozda, Krzysztof Sopyla, Juliusz Lewalski
Constructing VeSNet: Mapping LOD Thesauri onto Princeton WordNet and Polish WordNet

Lexical resources are crucial in many modern applications of Natural Language Processing and Artificial Intelligence. We present VeSNet, a network of lexical resources resulting from the merge of Polish-English WordNet (PEWN) with several existing large electronic thesauri from the Linked Open Data cloud (DBpedia, Wikipedia, GeoWordNet, Agrovoc, Eurovoc, Gemet and MeSH). We describe the procedure of making the resource, depict its elementary properties, and evaluate its quality. The created lexical network is characterised by both great coverage and high precision: nearly 1.3M new exactMatch links were created, including 85K to PEWN, with an estimated precision of 94%.

Arkadiusz Janz, Grzegorz Kostkowski, Marek Maziarz
Arabic Sentiment Analysis Using BERT Model

Sentiment analysis is the process of determining whether a text is positive, negative, or neutral. A lot of research has been done to improve the accuracy of sentiment analysis methods, ranging from simple linear models to more complex deep neural network models. Lately, transformer-based models have shown great success in sentiment analysis and are considered state of the art for various languages (English, German, French, Turkish, Arabic, etc.). However, the accuracy of Arabic sentiment analysis still needs improvement, especially at the tokenization level during data processing. In fact, the Arabic language poses many challenges due to its complex structure, various dialects, and resource scarcity. The improvement in the proposed approach consists of integrating an Arabic BERT tokenizer instead of a basic BERT tokenizer. Various tests were carried out with different instances (dialectal and standard), and hyperparameter optimization by random search was used to obtain the best result on different datasets. The experimental study proves the efficiency of the proposed approach in terms of classification quality and accuracy compared to Arabic BERT and AraBERT models.

Hasna Chouikhi, Hamza Chniter, Fethi Jarray

Computational Intelligence for Multimedia Understanding

Frontmatter
Infrared Thermography and Computational Intelligence in Analysis of Facial Video-Records

Infrared thermography has a wide range of applications in both engineering and biomedicine. The resulting video images provide immediate information about thermal conditions on the surface of the observed object, but more sophisticated analysis requires detailed evaluation of the separate images. The processing of thermal images is based upon data acquisition by special non-invasive sensors, efficient communication systems, and, in many cases, the application of selected machine learning methods. The present paper is devoted to the recognition of thermal regions in the facial area, the detection of body temperature, and the evaluation of breathing frequency and its possible disorders. The data include video sequences acquired on a home exercise bike and recorded under different load conditions. The proposed general methodology combines neural networks and machine learning methods for the detection of the changing temperature ranges of the thermal camera. Selected digital signal processing methods are then used to find the mean body temperature and breathing frequency over a specified time period. Results show the temperature changes and breathing frequencies between 0.48 and 0.56 Hz for the selected experiments and different body loads.

Aleš Procházka, Hana Charvátová, Oldřich Vyšata
Optimized Texture Spectral Similarity Criteria

This paper introduces an accelerated algorithm for evaluating criteria that compare the spectral similarity of color, Bidirectional Texture Function (BTF), and hyperspectral textures. The criteria credibly compare texture pixels by simultaneously considering pixels with similar values and their mutual ratios. Such a comparison can determine the optimal modeling or acquisition setup by comparing the original data with their synthetic simulations. Other applications of the criteria include spectral-based texture retrieval and classification. Together with existing alternatives, the suggested methods were extensively tested and compared on a wide variety of color, BTF, and hyperspectral textures. The methods’ quality was examined in a long series of specially designed experiments in which the proposed criteria outperformed all tested alternatives.

Michal Havlíček, Michal Haindl
Success and Hindrance Factors of AHA-Oriented Open Service Platforms

In the past years, there has been a flourishing of platforms dedicated to Active Assisted Living (AAL) and Active and Healthy Ageing (AHA). Most of them feature, as their core elements, intelligent systems for the analysis of multisource and multimodal data coming from sensors of various natures embedded in suitable IoT ecosystems. While progress in signal processing and artificial intelligence has shown that these platforms have great potential for improving the daily life of seniors or frail subjects, several technological and non-technological barriers must still be torn down before full uptake of the existing solutions. In this paper, we address this issue by describing the outcome and creation process of a methodology aimed at evaluating the successful uptake of existing platforms in the field of AHA. We propose a pathway (as part of an overarching methodology) to define and select Key Performance Indicators (KPIs), taking into account an extensive set of parameters related to the success, uptake, and evolution of platforms. For this, we contribute a detailed analysis structured along the four main actions of mapping, observing, understanding, and defining. Our analysis focuses on platforms, defined as operating environments under which various applications, agents, and intelligent services are designed, implemented, tested, released, and maintained. By following the proposed pathway, we were able to define a practical and effective methodology for monitoring and evaluating the uptake and other success indicators of AHA platforms. By the same token, we were also able to provide guidelines and best practices for the development of next-generation platforms in the AHA domain.

Andrea Carboni, Dario Russo, Davide Moroni, Paolo Barsocchi, Alexander Nikolov, Carina Dantas, Diana Guardado, Ana Filipa Leandro, Willeke van Staalduinen, Efstathios Karanastasis, Vassiliki Andronikou, Javier Ganzarain, Silvia Rus, Frederic Lievens, Joana Oliveira Vieira, Carlos Juiz, Belen Bermejo, Christina Samuelsson, Anna Ekström, Maria Fernanda Cabrera-Umpierrez, Silvia de los Rios Peres, Ad Van Berlo
Bi-RDNet: Performance Enhancement for Remote Sensing Scene Classification with Rotational Duplicate Layers

We propose a compact and effective network layer, the Rotational Duplicate Layer (RDLayer), that takes the place of a regular convolution layer, resulting in up to 128× memory savings. Along with network accuracy, memory and power constraints affect design choices for computer vision tasks performed on resource-limited devices such as FPGAs (Field Programmable Gate Arrays). To cope with this limited availability, RDLayers are trained in such a way that the whole layer’s parameters are obtained by duplication and rotation of a smaller learned kernel. Additionally, we speed up the forward pass via a partial decompression methodology for data compressed with JPEG (Joint Photographic Experts Group) 2000. Our experiments on remote sensing scene classification showed that our network achieves a ~4× reduction in model size in exchange for a ~4.5% drop in accuracy, a ~27× reduction at the cost of a ~10% drop in accuracy, and a ~2.6× faster evaluation time on test samples.

Erdem Safa Akkul, Berk Arıcan, Behçet Uğur Töreyin
MR Image Reconstruction Based on Densely Connected Residual Generative Adversarial Network–DCR-GAN

Magnetic Resonance Image (MRI) reconstruction from undersampled data is an important ill-posed problem in biomedical imaging, with a significant tradeoff between reconstructed image quality and the reduction of image acquisition time due to data sampling. Recently, a plethora of deep learning solutions have been proposed in the literature to reach better image reconstruction quality than traditional analytical reconstruction methods. In this paper, a novel densely connected residual generative adversarial network (DCR-GAN) is proposed for fast and high-quality reconstruction of MR images. DCR blocks enable the reconstruction network to go deeper by preventing feature loss in the sequential convolutional layers: a DCR block concatenates feature maps from multiple steps and feeds them to subsequent convolutional layers in a feed-forward manner. In this new model, the DCR block’s potential to train relatively deeper structures is utilized to improve quantitative and qualitative reconstruction results in comparison to other conventional GAN-based models. The reconstruction results show that the novel DCR-GAN leads to improved reconstructions without a significant increase in parameter complexity or run time.

Amir Aghabiglou, Ender M. Eksioglu
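The dense concatenation pattern of the DCR block described above can be sketched as follows. This is an illustrative toy version (1×1 "convolutions" as per-pixel channel mixing, hypothetical shapes), not the paper's architecture:

```python
import numpy as np

def conv1x1(x, w):
    """Toy 1x1 'convolution': a per-pixel linear map over channels.
    x: (C_in, H, W), w: (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

def dcr_block(x, weights):
    """Densely connected residual block sketch: each layer receives the
    concatenation of all previous feature maps, and the block output adds
    a residual connection back to the input."""
    features = [x]
    for w in weights:
        inp = np.concatenate(features, axis=0)  # dense concatenation
        out = np.maximum(conv1x1(inp, w), 0.0)  # linear map + ReLU
        features.append(out)
    return features[-1] + x  # residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
# layer i sees 4*(i+1) input channels and emits 4 channels
weights = [rng.standard_normal((4, 4 * (i + 1))) * 0.1 for i in range(3)]
y = dcr_block(x, weights)
print(y.shape)  # (4, 8, 8)
```

The concatenation gives later layers direct access to earlier feature maps, which is the mechanism the abstract credits with preventing feature loss in deep sequential stacks.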
Underground Archeological Structures Detection

This paper introduces and compares three approaches for the automatic detection of archaeological heritage sites hidden under soil cover from public aerial images. The methods use low-quality public aerial RGB spectral data, restricted by a land-use map to agricultural regions during the vegetation season, to detect underground structures that influence the plants growing in the surface soil layer.

Anna Moudrá, Michal Haindl
A Deep Learning Approach for Hepatic Steatosis Estimation from Ultrasound Imaging

This paper proposes a simple convolutional neural model as a novel method to predict the level of hepatic steatosis from ultrasound data. Hepatic steatosis is the major histologic feature of non-alcoholic fatty liver disease (NAFLD), which has become a major global health challenge. Recently, a new definition of FLD that takes into account the risk factors and clinical characteristics of subjects has been suggested; the proposed criteria for Metabolic Dysfunction-Associated Fatty Liver Disease (MAFLD) are based on histological (biopsy), imaging, or blood-biomarker evidence of fat accumulation in the liver (hepatic steatosis) in subjects with overweight/obesity or type 2 diabetes mellitus. In lean or normal-weight, non-diabetic individuals with steatosis, MAFLD is diagnosed when at least two metabolic abnormalities are present. Ultrasound examination is the most widely used technique to non-invasively identify liver steatosis in a screening setting. However, the diagnosis is operator dependent, as accurate image processing techniques have not yet entered the diagnostic routine. In this paper, we discuss the adoption of simple convolutional neural models to estimate the degree of steatosis from echographic images in accordance with state-of-the-art magnetic resonance spectroscopy measurements (expressed as the percentage of estimated liver fat). More than 22,000 ultrasound images were used to train three networks, and the results show promising performance in our study (150 subjects).

Sara Colantonio, Antonio Salvati, Claudia Caudai, Ferruccio Bonino, Laura De Rosa, Maria Antonietta Pascali, Danila Germanese, Maurizia Rossana Brunetto, Francesco Faita
Sparse Progressive Neural Networks for Continual Learning

The human brain effectively integrates prior knowledge into new skills by transferring experience across tasks without suffering from catastrophic forgetting. In this study, to continually learn a sequence of visual classification tasks, we employed a neural network model with lateral connections called Progressive Neural Networks (PNNs). We sparsified PNNs with the sparse group Least Absolute Shrinkage and Selection Operator (LASSO) and trained conventional PNNs with recursive connections. We then investigated the effect of the task prior on current performance under various task orders. The proposed approach is evaluated on permuted MNIST and selected subtasks from the CIFAR-100 dataset. Results show that sparse group LASSO regularization effectively sparsifies progressive neural networks and that the task sequence order affects performance.

Esra Ergün, Behçet Uğur Töreyin
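The sparse group LASSO regularizer used above combines a group-wise L2 term (which can zero out whole units) with an element-wise L1 term. A minimal sketch of the penalty, assuming rows of the weight matrix form the groups (the grouping and parameter names are illustrative assumptions):

```python
import numpy as np

def sparse_group_lasso(weight, alpha=0.5, lam=1.0):
    """Sparse group LASSO penalty on a weight matrix whose rows are groups
    (e.g. all outgoing weights of one unit):
    lam * ((1 - alpha) * sum of group L2 norms + alpha * L1 norm).
    Driving a whole row to zero effectively prunes that unit."""
    group_norms = np.linalg.norm(weight, axis=1).sum()  # group-wise L2 term
    l1 = np.abs(weight).sum()                           # element-wise L1 term
    return lam * ((1.0 - alpha) * group_norms + alpha * l1)

w = np.array([[3.0, 4.0],   # active unit
              [0.0, 0.0]])  # pruned unit contributes nothing
print(sparse_group_lasso(w, alpha=0.0))  # pure group term: 5.0
print(sparse_group_lasso(w, alpha=1.0))  # pure L1 term: 7.0
```

Adding this penalty to the training loss pushes entire groups toward zero, which is how the study obtains sparse progressive columns.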
Backmatter
Metadata
Title
Advances in Computational Collective Intelligence
edited by
Krystian Wojtkiewicz
Prof. Dr. Jan Treur
Dr. Elias Pimenidis
Marcin Maleszka
Copyright year
2021
Electronic ISBN
978-3-030-88113-9
Print ISBN
978-3-030-88112-2
DOI
https://doi.org/10.1007/978-3-030-88113-9