About This Book

This book constitutes the proceedings of the 20th International Conference on Computer Information Systems and Industrial Management Applications, CISIM 2021, held in Ełk, Poland, September 24–26, 2021. The 38 papers presented together with 1 invited speech and 3 abstracts of keynotes were carefully reviewed and selected from 69 submissions. The main topics covered by the chapters in this book are mobile and pervasive computing, machine learning, high performance computing, image processing, and industrial management. Additionally, the reader will find interesting papers on computer information systems, biometrics, security systems, and sensor network services. The contributions are organized in the following topical sections: biometrics and pattern recognition applications; computer information systems and security; industrial management and other applications; machine learning and artificial neural networks; modelling and optimization; and others. Chapter 24, "A first step towards automated species recognition from camera trap images of mammals using AI in a European temperate forest", is published open access under a CC BY license (Creative Commons Attribution 4.0 International License).

Table of Contents

Frontmatter

Invited Paper

Frontmatter

Importance of Variables in Gearbox Diagnostics Using Random Forests and Ensemble Credits

We consider a multivariate data matrix of size $$n \times d = 2183 \times 15$$, where $$n=2183$$ is the number of time segments recorded from vibration signals of two gearboxes, and $$d=15$$ is the number of variables (traits) characterizing these segments. To learn about the role played by each of the 15 variables in gearbox diagnostics, we use the Random Forest (RF) methodology with its ‘Variables Importance Plot’ (VIP) algorithm, which yields a ranking of the variables with regard to their importance in the performed diagnostics. This ranking differs between runs of the RF. We propose to use at this stage an additional module performing a specific kind of ensemble learning that yields credit scores for each variable. It clearly shows the most important variables.

Anna M. Bartkowiak, Radoslaw Zimroz
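The abstract does not include the authors' code; the following is a minimal sketch, assuming scikit-learn and a feature matrix X with labels y, of how per-run Random Forest importance rankings could be aggregated into credit scores. The Borda-style credit rule, the variable names, and the number of runs are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: aggregate Random Forest importance rankings over many runs
# into per-variable "credit" scores (a Borda-style count, chosen here for illustration).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def importance_credits(X, y, feature_names, n_runs=30, n_trees=500):
    d = X.shape[1]
    credits = np.zeros(d)
    for seed in range(n_runs):
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
        rf.fit(X, y)
        # rank 0 = most important variable in this run
        order = np.argsort(rf.feature_importances_)[::-1]
        for rank, idx in enumerate(order):
            credits[idx] += d - rank          # more credit for a higher rank
    credits /= credits.sum()                  # normalise to credit scores
    return sorted(zip(feature_names, credits), key=lambda t: -t[1])

# Usage (X: 2183 x 15 matrix of segment traits, y: gearbox condition labels):
# for name, score in importance_credits(X, y, trait_names):
#     print(f"{name}: {score:.3f}")
```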

Biometrics and Pattern Recognition Applications

Frontmatter

Typing Pattern Analysis for Fake Profile Detection in Social Media

Nowadays, interaction with fake profiles of a genuine user in social media is a common problem. General users may not easily identify profiles created by fake users. Although various research works around the world address the detection of fake profiles in social media, the focus of this paper is to remove additional effort from the detection procedure. Behavioral biometrics, such as users' typing patterns, can be used to classify genuine and fake profiles without disrupting the normal activities of the users. In this paper, the DEEP_ID model is designed to detect fake profiles in Facebook-like social media by considering typing patterns such as keystrokes, mouse clicks, and touch strokes. The proposed model can silently detect profiles created by fake users when they type or click in social media from desktop, laptop, or touch devices. The DEEP_ID model can also identify whether a genuine profile has been hacked by a fake user in the middle of a session. The objective of the proposed work is to demonstrate the hypothesis that user recognition algorithms applied to raw data can perform better if the requirement for feature extraction can be avoided, which in turn removes the problem of inappropriate attribute selection. The proposed DEEP_ID model is based on a multi-view deep neural network, whose layers learn data representations for user recognition from raw typing-pattern data without feature selection or extraction. The proposed DEEP_ID model achieved better results than traditional machine learning classifiers, providing strong evidence that the stated hypothesis is valid. Evaluation results indicate that the DEEP_ID model is highly accurate in profile detection and efficient enough to perform fast detection.

Tapalina Bhattasali, Khalid Saeed

Determination of the Most Relevant Features to Improve the Performance of RF Classifier in Human Activity Recognition

The impact that neurodegenerative diseases have on our society has made human activity recognition (HAR) a relevant field of study. The quality of life of people with such conditions can be significantly improved with the outcomes of the projects within this area. The application of machine learning techniques to data from low-level sensors such as accelerometers is the basis of HAR. To improve the performance of these classifiers, it is necessary to carry out an adequate training process. To improve the training process, an analysis of the different features used in the literature to tackle these problems was performed on datasets constructed with students performing 18 different activities of daily living. The outcome of the process shows that an adequate selection of features improves the performance of Random Forest from 94.6% to 97.2%. It was also found that 78 features explain 80% of the variability.

Geovanna Jiménez-Gómez, Daniela Navarro-Escorcia, Dionicio Neira-Rodado, Ian Cleland

Augmentation of Gait Cycles Using LSTM-MDN Networks in Person Identification System

This paper presents a novel data augmentation method to improve a walk-based person identification system. The proposed algorithm is based on trainable deep learning models that are able to model the gait cycle of individual participants and generate perturbed augmented signals. In this study, a generative model involving two Long Short-Term Memory (LSTM) layers and a Mixture Density Network (MDN) was implemented. The proposed approach was evaluated on a publicly available human gait database collected from 30 participants and captured with IMU sensors. The impact of using the proposed algorithm was compared with the case without data augmentation (baseline) and with augmentation using a classical state-of-the-art method. The use of an LSTM-MDN model in the augmentation process gave promising results, increasing the F-score from 0.94 to 0.96, whereas the classical state-of-the-art augmentation method did not affect the person identification metrics.

Aleksander Sawicki
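Since the abstract describes the generative model only at a high level, the following is a minimal PyTorch sketch of an LSTM network with a Mixture Density Network head of the general kind mentioned above; layer sizes, the number of mixture components, and the loss implementation are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch: two-layer LSTM with an MDN head predicting a Gaussian mixture
# over the next gait sample.  New, perturbed gait cycles can be sampled from the
# predicted mixture to augment the training set.
import torch
import torch.nn as nn

class LSTMMDN(nn.Module):
    def __init__(self, n_features, hidden=64, n_mixtures=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.pi = nn.Linear(hidden, n_mixtures)
        self.mu = nn.Linear(hidden, n_mixtures * n_features)
        self.sigma = nn.Linear(hidden, n_mixtures * n_features)
        self.k, self.d = n_mixtures, n_features

    def forward(self, x):                          # x: (batch, time, n_features)
        h, _ = self.lstm(x)                        # (batch, time, hidden)
        pi = torch.softmax(self.pi(h), dim=-1)     # mixture weights
        mu = self.mu(h).view(*h.shape[:2], self.k, self.d)
        sigma = torch.exp(self.sigma(h)).view(*h.shape[:2], self.k, self.d)
        return pi, mu, sigma

def mdn_nll(pi, mu, sigma, y):
    # negative log-likelihood of the target sequence y under the Gaussian mixture
    y = y.unsqueeze(2)                                                 # (B, T, 1, d)
    comp = torch.distributions.Normal(mu, sigma).log_prob(y).sum(-1)   # (B, T, K)
    return -torch.logsumexp(torch.log(pi + 1e-8) + comp, dim=-1).mean()
```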

Raspberry Pi-Based Device for Finger Veins Collection and the Image Processing-Based Method for Minutiae Extraction

Biometrics is one of the most important ways to secure users' data. It has gained popularity due to its effectiveness and ease of use. It has been proven that diversified solutions based on measurable traits can guarantee higher security levels than traditional authentication based on logins and passwords. The most popular traits are the fingerprint and the iris (especially in mobile devices). In this work we present our own algorithm for finger vein feature extraction. At the beginning, all details of the device for sample collection are given. The subsequent part describes in detail the procedure for finger vein extraction. Image processing methods were used to show that even with traditional, well-known algorithms it is possible to obtain precise information about human veins. The final step of our algorithm is feature vector generation. In this work we do not present a classification stage, as it is out of scope.

Maciej Szymkowski

Identification of Humans Using Hand Clapping Sounds

This paper demonstrates that hand clapping sounds can be employed as a useful biometric trait. The identity of 16 people was automatically recognized using their hand clapping sounds recorded with two mobile phones. To enhance the validity of the experiment, the audio recordings were made in six domestic environments (kitchen, living room, anteroom, and three bedrooms). The subjects were requested to clap their hands in three different hand configurations (A1, A3, and P1, using Repp's taxonomy [1]). Three identification methods were compared. They were all based on the same classification algorithm (support vector machines) but differed in the way the acoustic features (cepstral coefficients) were extracted. In the first method, for each individual clap recording, the cepstral coefficients were derived only from the time frame exhibiting the highest energy. In the second method, the cepstral coefficients were computed for all the time frames and subsequently aggregated by calculating their mean values and standard deviations. In the third method, all the coefficients were preserved (no aggregation performed). The last-mentioned method produced the best results, yielding 99% and 61% identification accuracy for room-dependent and room-independent test conditions, respectively. Out of the three hand configurations compared, the one in which the hands were aligned straight to each other (P1) was the most conducive to identification accuracy.

Cezary Wróbel, Sławomir K. Zieliński
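As a rough illustration of the first feature-extraction variant described above (cepstral coefficients from the highest-energy frame only, classified with an SVM), here is a hedged sketch using librosa and scikit-learn; the file handling, frame parameters, and pipeline details are assumptions rather than the authors' setup.

```python
# Hypothetical sketch: take MFCCs only from the loudest frame of each clap
# recording, then classify identities with an SVM (parameters are illustrative).
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def max_energy_frame_features(path, n_mfcc=13, hop=512):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, hop_length=hop)  # (n_mfcc, frames)
    rms = librosa.feature.rms(y=y, hop_length=hop)[0]                       # per-frame energy
    frames = min(mfcc.shape[1], rms.shape[0])
    return mfcc[:, np.argmax(rms[:frames])]        # coefficients of the loudest frame

# clap_files: list of WAV paths, person_ids: matching identity labels (assumed available)
# X = np.array([max_energy_frame_features(f) for f in clap_files])
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, person_ids)
```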

Computer Information Systems and Security

Frontmatter

Analyzing and Predicting Colombian Undergrads Performance on Saber-Pro Test: A Data Science Approach

In this paper, we present an analytic tool for universities and the Colombian government that allows them to identify the main factors that impact the performance of undergraduate students and to predict the performance of future cohorts of students. The solution consists of an interactive dashboard that visualizes two types of analysis: first, descriptive statistics of variables related to the different entities involved in the education process, such as students, universities, and the state governments; and second, the results of different predictive models.

Eugenia Arrieta Rodríguez, Paula María Almonacid, Santiago Cortés, Rafael Deaguas, Nohora Diaz, Maria Paula Aroca

Development of Digital Competences of Elementary School Teachers with a Life-Long Learning Approach

The dynamic development of digital technology has an increasing influence on all areas of our lives. New technologies are also changing the face of education. Young people are particularly attracted to technological innovations. It is important that students consciously and constructively use the newest information technologies, which they should learn at school. The responsibility to take on new educational challenges lies with the teacher, who must have high digital competences and learn to act effectively in a rapidly changing reality. Between June 2017 and December 2019, more than 140 early-education school teachers were trained in a programming teaching course. Teachers were assessed before the 36-hour training and after 30 hours of lessons using pre- and post-questionnaires about digital competences. The questionnaires were prepared according to the Digital Competence Framework 2.0 (DigComp 2.0). The obtained results give a picture of the effectiveness of developing the digital competences of primary school teachers.

Michał Czołombitko, Tomasz Grześ, Maciej Kopczyński, Urszula Kużelewska, Anna Lupińska-Dubicka, Dorota Mozyrska, Joanna Panasiuk

Design of Web Application with Dynamic Generation of Forms for Group Decision-Making

The primary role of software engineering is to apply engineering approaches to improve the processes and methods for software development. There are various custom applications that could be generalized into a common application tool. These applications are related to the problems of different group decision-making variants. In order to improve software quality, a modular architecture can be used to separate different functionality and to minimize the complexity of each individual module. To this end, the current article deals with the problem of the dynamic generation of matrices, which is at the core of group decision-making. An algorithm for designing a web application to support group decision-making based on multi-attribute utility theory is proposed. This algorithm is implemented in a web-based software tool to support group decision-making. The main features of the described tool are the ability to generate individual matrices for each expert and the subsequent generation of an aggregated group decision matrix. These two types of matrices, together with the completed data, are stored as two components of the problem to be solved and can be reused. The proposed algorithm and software tool are applied in a case study of a group decision-making problem for the selection of a videoconferencing software tool. The obtained results show the applicability of the dynamic generation of forms that support group decision-making.

Zornitsa Dimitrova, Daniela Borissova, Vasil Dimitrov
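To make the idea of an aggregated group decision matrix concrete, here is an illustrative numpy sketch in the spirit of multi-attribute utility aggregation: individual expert matrices of dynamic size are normalised and combined by a weighted average. The normalisation rule and expert weights are assumptions, not the paper's algorithm.

```python
# Illustrative sketch (not the authors' algorithm): individual expert matrices
# aggregated into a single group decision matrix by weighted averaging of
# normalised scores.
import numpy as np

def aggregate_group_matrix(expert_matrices, expert_weights=None):
    """expert_matrices: list of (alternatives x criteria) score arrays, one per expert."""
    stack = np.stack([np.asarray(m, dtype=float) for m in expert_matrices])  # (E, A, C)
    if expert_weights is None:
        expert_weights = np.full(stack.shape[0], 1.0 / stack.shape[0])
    # normalise each criterion column to [0, 1] per expert before aggregation
    col_min = stack.min(axis=1, keepdims=True)
    col_rng = np.clip(stack.max(axis=1, keepdims=True) - col_min, 1e-12, None)
    normalised = (stack - col_min) / col_rng
    return np.tensordot(expert_weights, normalised, axes=1)   # (A, C) group matrix

# Example: three experts rate four videoconferencing tools on three criteria
# group = aggregate_group_matrix([m1, m2, m3]); best = group.sum(axis=1).argmax()
```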

Neural Networks as Tool to Improve the Intrusion Detection System

Nowadays, computer programs affecting computers both locally and network-wide have led to the design and development of different preventive and corrective strategies to remedy computer security problems. This dynamic has been important for understanding the structure of attacks and how best to counteract them, making sure that their impact is less than expected by the attacker. For this research, a simulation was carried out using the NSL-KDD dataset at 100%, generating an experimental environment in which pre-processing, training, classification, and evaluation of model quality metrics were carried out. Likewise, a comparative analysis was performed of the results obtained after implementing different feature selection techniques (Info Gain, Gain Ratio, and OneR) and classification techniques based on neural networks that use an unsupervised learning algorithm based on self-organizing maps (SOM and GHSOM), with the purpose of classifying bi-class network traffic automatically. From the above, a 97.09% hit rate was obtained with 21 features by implementing the GHSOM classifier with 10-fold cross-validation and the OneR feature selection technique, which would improve the efficiency and performance of Intrusion Detection Systems (IDS).

Esmeral Ernesto, Mardini Johan, Salcedo Dixon, De-La-Hoz-Franco Emiro, Avendaño Inirida, Henriquez Carlos
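The evaluation pipeline described above (feature selection, then classification with 10-fold cross-validation) can be sketched with scikit-learn; since GHSOM is not available there, an ordinary classifier stands in purely to illustrate the pipeline, and the mutual-information ranking is used as an Info-Gain-style selector. All of this is an assumption-laden sketch, not the paper's implementation.

```python
# Hedged sketch: Info-Gain-style feature ranking (mutual information) on
# NSL-KDD-like traffic records, keeping 21 features as in the abstract, then
# 10-fold cross-validation with a stand-in classifier.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def evaluate_ids(X, y, k_features=21):
    pipeline = make_pipeline(
        SelectKBest(score_func=mutual_info_classif, k=k_features),
        RandomForestClassifier(n_estimators=200, random_state=0),  # stand-in for GHSOM
    )
    scores = cross_val_score(pipeline, X, y, cv=10)   # 10-fold cross-validation
    return scores.mean()

# X: numeric feature matrix of network connections, y: normal/attack labels
# print(f"mean hit rate: {evaluate_ids(X, y):.4f}")
```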

Anonymous Group Signature Scheme with Publicly Verifiable General Access Structure

The group signature scheme allows a group member or defined subsets of all group members to sign a message on behalf of the group. Any party can verify the validity of the group signature, as well as whether it was created by a member of the group or by members of some subgroup. However, without the participation of the group manager, it is not possible to determine exactly who is part of the group. This paper proposes a new generalised group signature scheme in which members of an authorised set can jointly create a group signature without revealing the identities of its members. Furthermore, the authorised sets can be dynamically modified. The scheme is existentially unforgeable against adaptively chosen message attacks assuming the hardness of the Computational Diffie-Hellman problem, and it meets other requirements imposed on group signature schemes. The scheme has been implemented, and performance tests have shown that it is suitable for practical application.

Tomasz Hyla, Jerzy Pejaś

A Novel Proposal of Using NLP to Analyze IoT Apps Towards Securing User Data

The evolution of the Internet of Things over the years has led to all-time connectivity among us. However, the heterogeneity of the constituent layers of IoT makes it vulnerable to multiple security threats. One of the typical vulnerabilities of IoT involves the endpoint, i.e., the apps that are used by end users for enabling IoT services. Generally, users have to authorize an app at installation time to perform certain tasks. Often the apps ask for permissions to access information that is not related to the IoT services they provide. These over-privileged apps have the chance to turn malicious at any moment and use such information for their benefit. Sometimes, users are naive enough to trust the apps and grant permissions without caution, thus leading to unintended exposure of personal information to malicious apps. It is important to analyze the app description in order to understand the exact meaning of each functionality it states. In this paper, we focus on the use of NLP in securing user data from malicious IoT apps by analysing their privacy policies and user reviews. This is followed by a novel proposal that supports cautious decision making by users based on a careful analysis of app behaviour.

Raghunath Maji, Atreyee Biswas, Rituparna Chaki

Addressing the Permutational Flow Shop Scheduling Problem Through Constructive Heuristics: A Statistical Comparison

The flow shop problem has been addressed by many researchers around the world. Different heuristic methods have been developed to deal with this kind of problem. Nevertheless, it is necessary to explore the impact that the bottleneck machine has on the performance of each heuristic. In this article, an F6 || Cmax (makespan) flow shop is tackled with different well-known heuristics from the open literature, such as Palmer, Johnson, Gupta, CDS, NEH, and PAS, and their impact on Cmax was measured. The methodology used seeks to find the possible relationship between the different bottleneck machines and the result obtained from each of the heuristics. For this experiment, there were 302 scenarios with six machines in series, in which each machine was the bottleneck in a comparable number of scenarios. The values of Cmax obtained for each heuristic were compared against the result of the corresponding MILP (Mixed Integer Linear Programming) problem. The results show that the performance of the NEH heuristic is superior in every scenario, regardless of the bottleneck, but they also show variable behavior for each heuristic depending on the bottleneck machine.

Javier Velásquez Rodriguez, Dionicio Neira Rodado, Alexander Parody, Fernando Crespo, Laurina Brugés-Ballesteros
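For readers unfamiliar with the heuristics compared above, the NEH heuristic (the best performer according to the abstract) can be sketched in a few lines; the processing-time matrix and instance size below are illustrative, and the MILP comparison is not reproduced.

```python
# Minimal NEH heuristic sketch for the permutational flow shop (Cmax criterion).
# p[j][m] is the processing time of job j on machine m; a sequence is a job order.
import numpy as np

def makespan(p, sequence):
    m = p.shape[1]
    completion = np.zeros(m)
    for j in sequence:
        for k in range(m):
            completion[k] = max(completion[k], completion[k - 1] if k else 0) + p[j, k]
    return completion[-1]

def neh(p):
    # 1. order jobs by decreasing total processing time
    order = np.argsort(-p.sum(axis=1))
    sequence = [order[0]]
    # 2. insert each remaining job at the position minimising the partial makespan
    for j in order[1:]:
        candidates = [sequence[:i] + [j] + sequence[i:] for i in range(len(sequence) + 1)]
        sequence = min(candidates, key=lambda s: makespan(p, s))
    return sequence, makespan(p, sequence)

# Example: 10 jobs on 6 machines in series (F6 || Cmax), random processing times
# p = np.random.randint(1, 20, size=(10, 6)); print(neh(p))
```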

ICBAKE 2021 Workshop

Frontmatter

Search for a Flavor Suited to Beverage by Interactive Genetic Algorithm

Interactive evolutionary computation (IEC) is a method to optimize media content suited to a user's subjective feelings and preferences. Previous IECs employed various evolutionary algorithms, and most of them employed the Genetic Algorithm (GA). The interactive type of GA is called IGA, and IGA has been applied to create media content such as computer graphics, music, and sound. As a special application of IGA, the creation of scents has already been proposed. In IGA related to scent, the intensities of the source aromas are treated as the variables of GA individuals. In this study, IGA is applied to create a flavor suited to a beverage. Unsweetened soda water is treated as the beverage; it is popular among health-minded people, and some soda waters carry a flavor. In other words, the purpose of this study is to create a good flavor suited to soda water by reflecting each user's feelings using IGA. When using the IGA, the user looks at the unsweetened soda water and smells the mixed flavor. The Aromageur device is used in the system to mix the source aromas. To investigate the fundamental efficiency of the IGA, a smelling experiment was conducted. The target of creation was a "delicious" flavor for soda water, and six aroma oils were used as the source oils. A significant increase in fitness value was observed.

Makoto Fukumoto, Seishiro Yoshimitsu

Investigation of Facial Preference Using Gaussian Process Preference Learning and Generative Image Model

This study introduces a novel approach to investigating the intrinsic psychophysical function of human facial attractiveness using a sequential experimental design that combines Bayesian optimization (BO) and StyleGAN2. To estimate a facial attractiveness function from pairwise comparison data, we used a BO that incorporates Gaussian process preference learning (GPPL). Fifty female Japanese university students provided facial photographs. We embedded each female facial image into a latent representation ($$18\times 512$$ dimensions) in the StyleGAN2 network trained on the Flickr-Faces-HQ (FFHQ) dataset. Using PCA, the dimension of the latent representations was reduced to an 8-dimensional subspace, which we refer to here as the Japanese female face space. Nine participants took part in the pairwise comparison task. They had to choose the more attractive of the facial images synthesized using StyleGAN2 in the face subspace and provided their evaluations in 100 trials. The stimuli for the first 80 trials were created from randomly generated parameters in the face subspace, while the remaining 20 trials were created from the parameters calculated using the acquisition function. We estimated the facial parameters corresponding to the most attractive, the least attractive, and the 25th, 50th, and 75th percentile ranks of attractiveness and reconstructed the faces based on the results. The results show that a combination of StyleGAN2 and GPPL methodologies is an effective way to elucidate human kansei evaluations of complex stimuli such as human faces.

Masashi Komori, Keito Shiroshita, Masataka Nakagami, Koyo Nakamura, Maiko Kobayashi, Katsumi Watanabe

Evaluation of Strong and Weak Signifiers in a Web Interface Using Eye-Tracking Heatmaps and Machine Learning

The eye-tracking heatmap is a quantitative research tool that shows the user's gaze points. Most eye-tracking heatmaps are 2D visualizations comprising different colors. The heatmap colors indicate gaze duration, and the position of each color cell indicates gaze position. The eye-tracking heatmap has often been used to evaluate the usability of web interfaces and to understand user behavior. For example, web designers have used heatmaps to obtain actual evidence of how users use their website. Further, collecting eye-tracking heatmap data during website viewing facilitates measurement of improvements in site usability. However, although the eye-tracking heatmap provides rich information about how users watch, focus on, and interact with a site, the high informational requirements substantially increase the computational burden. In many cases, the distribution of gaze points in an eye-tracking heatmap may not be easily understood and interpreted. Accordingly, manual evaluation of heatmaps is inefficient. This study aimed to evaluate web usability by focusing on signifiers as an interface element, using eye-tracking heatmaps and machine learning algorithms. We also used a dimensionality reduction technique to reduce the complexity of the heatmap data. The results showed that the proposed classification model, which combined a decision tree with the PCA technique, provided more than 90% accuracy when compared with nine other classical machine learning methods. This finding indicates that the machine learning process reached the correct decision about the interface's usability.

Kitti Koonsanit, Taisei Tsunajima, Nobuyuki Nishiuchi
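The PCA-plus-decision-tree idea described above can be expressed as a short scikit-learn pipeline; the flattened-heatmap input format, the strong/weak signifier labels, and the component count are assumptions for illustration, not the authors' exact setup.

```python
# Illustrative pipeline in the spirit of the abstract: flatten eye-tracking
# heatmap images, reduce dimensionality with PCA, classify the interface's
# signifier strength with a decision tree.
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_heatmap_classifier(heatmaps, labels, n_components=20):
    """heatmaps: array (n_samples, height*width); labels: 'strong'/'weak' signifier."""
    X_train, X_test, y_train, y_test = train_test_split(
        heatmaps, labels, test_size=0.3, random_state=42, stratify=labels)
    model = make_pipeline(PCA(n_components=n_components),
                          DecisionTreeClassifier(random_state=0))
    model.fit(X_train, y_train)
    return model, accuracy_score(y_test, model.predict(X_test))
```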

Applying Artificial Bee Colony Algorithm to Interactive Evolutionary Computation

In this study, we apply an artificial bee colony (ABC) algorithm to the interactive evolutionary computation (IEC) method for the multimodal retrieval of candidate solutions. Previous works have proposed IEC systems using a parallel interactive tabu search algorithm (PITS) that generates multiple tabu search (TS) retrievals and a hybrid genetic algorithm (GA) involving a global retrieval method and a TS involving a local retrieval method for multimodal retrieval. However, the PITS cannot efficiently retrieve candidate solutions and it has a complicated algorithm. The hybrid GA–TS also finds it hard to retrieve candidate solutions if the user has a more multimodal preference. We propose herein an IEC method with the ABC algorithm for the multimodal and simultaneous retrieval of candidate solutions. We perform a numerical simulation with a pseudo user that imitates multimodal preferences as target candidate solutions instead of a real user. The results show that the proposed method can retrieve multimodal candidate solutions in conditions with limited numbers of candidate solutions and bees.

Hiroshi Takenouchi, Masataka Tokumaru

Emotion Estimating Method by Using Voice and Facial Expression Parameters

This study aimed to propose a method of emotion estimation using facial expression and speech features for machine learning with fewer parameters than in previous studies. In the experiment, emotions were evoked in participants by displaying emotion-activation movies. Facial expressions and voice parameters were then extracted. These parameters were used to estimate emotions by machine learning. In machine learning, six types of learning were performed by combining objective variables and explanatory variables. It was shown that classification can be performed with an accuracy of 93.3% when only voice parameters are used in two-category classification, such as positive and negative.

Kimihiro Yamanaka

Industrial Management and Other Applications

Frontmatter

Multilayer Perceptron Applied to the IOT Systems for Identification of Saline Wedge in the Magdalena Estuary - Colombia

Maritime safety has become a relevant aspect of logistics processes that use rivers. In Colombia, specifically in the Caribbean Region, the Magdalena River is a body of water that runs broadly through Colombian territory and supports various economic and public health activities. At its mouth, the river interacts directly with the sea, generating a phenomenon called the saline wedge, which is directly related to the sediments that must be continuously extracted and which threatens the proper functioning of the port of the city of Barranquilla, Colombia. Through this research, a network of sensors located at strategic places at the mouth of the river was deployed, which allows the behavior of the saline wedge to be predicted. Using artificial neural networks, more specifically the Multilayer Perceptron algorithm, it was possible to analyze the results of the implementation in light of the quality indicators and metrics, generating a highly reliable scenario that can be replicated in other sections of the river and in other bodies of water.

Paola Patricia Ariza-Colpas, Cristian Eduardo Ayala-Mantilla, Marlon-Alberto Piñeres-Melo, Diego Villate-Daza, Roberto Cesar Morales-Ortega, Emiro De-la-Hoz-Franco, Hernando Sanchez-Moreno, Shariq Butt Aziz, Carlos Collazos-Morales

Assessment of Organizational Policies in a Retail Store Based on a Simulation Model

This paper evaluates three organizational policies in a retail store using a discrete-event simulation model in Simio®. The policies implemented were using one, two, or three express checkouts; employing cross-trained workers; and allocating one, two, or three weighing counters in the produce section (fruit and vegetables). These policies were evaluated on days with low, medium, and high demand against critical performance metrics such as queue length, waiting time, active and idle time rates, average time in the system, average service time, and sales. Our results demonstrated that all policies are beneficial for the studied system, but only on days with high demand. On days with low or medium demand, there were good improvements for some indicators, but these conflicted with others. As the simulation model was implemented to evaluate each policy independently, a future direction should include studying their performance when applied simultaneously.

Jairo R. Coronado-Hernández, Mayra A. Macías-Jiménez, Joned D. Chica-Llamas, José I. Zapata-Márquez

Locating Sea Ambulances to Respond to Emergencies of Vulnerable Populations. Case of Cartagena Bay in Colombia

Emergency services are an important element of healthcare assistance because rapid response and transportation are essential for saving lives. Cartagena de Indias presents some weaknesses in covering the demand for emergency transfers from insular areas due to the non-existence of terrestrial routes to hospitals and healthcare facilities. This study aims to determine the optimal location of sea ambulances using a mixed-integer linear programming model. A three-stage methodology made it possible to select optimal locations, reducing emergency response times for vulnerable populations while considering available healthcare facilities and maritime safety requirements.

Jairo R. Coronado-Hernández, Marly Rico-Carrillo, Katherine Rico-Carrillo, Orlando Zapateiro Altamiranda
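As background to the kind of mixed-integer model mentioned above, here is a hedged p-median-style location sketch in PuLP; the paper's actual formulation, data, and maritime constraints are not reproduced, and the travel-time matrix, demand weights, and number of bases are assumed inputs.

```python
# Hedged sketch of a p-median-style MILP for locating sea ambulance bases (PuLP).
import pulp

def locate_bases(t, w, p):
    """t[i][j]: travel time from candidate base j to demand zone i; w[i]: demand; p: bases to open."""
    zones, sites = range(len(t)), range(len(t[0]))
    y = pulp.LpVariable.dicts("open", sites, cat="Binary")
    x = pulp.LpVariable.dicts("assign", (zones, sites), cat="Binary")
    prob = pulp.LpProblem("sea_ambulance_location", pulp.LpMinimize)
    prob += pulp.lpSum(w[i] * t[i][j] * x[i][j] for i in zones for j in sites)
    prob += pulp.lpSum(y[j] for j in sites) == p          # open exactly p bases
    for i in zones:
        prob += pulp.lpSum(x[i][j] for j in sites) == 1   # each zone served by one base
        for j in sites:
            prob += x[i][j] <= y[j]                       # and only by an open base
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in sites if y[j].value() > 0.5]

# bases = locate_bases(travel_times, demand_weights, p=3)
```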

Toward a Unique IoT Network via Single Sign-On Protocol and Message Queue

The Internet of Things (IoT) is currently one of the most rapidly developing technology trends. However, at present, users, devices, and applications using IoT services mainly connect to IoT service providers in a client-server model. Each IoT service provider has its own management mechanism and internal message exchange method. This results in isolation between IoT service providers, and it is challenging to connect these organizations into a single IoT network. Besides, one of the most popular protocols in IoT deployments, Message Queuing Telemetry Transport (MQTT), also has significant security and privacy issues. Therefore, in this paper, we propose an IoT Platform Model capable of mitigating the MQTT protocol's security problem by using Single Sign-On. This model also allows organizations providing IoT services to connect into a single network without changing too much of each organization's current architecture. We describe the evaluation carried out to prove the effectiveness of our approach. Specifically, we check the number of concurrent users who can publish messages simultaneously for both internal and external communication; furthermore, a complete code solution is published on the authors' GitHub repository to encourage further reproducibility and improvement.

Tran Thanh Lam Nguyen, The Anh Nguyen, Hong Khanh Vo, Hoang Huong Luong, Huynh Tuan Khoi Nguyen, Anh Tuan Dao, Xuan Son Ha
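To illustrate the general idea of pairing MQTT with Single Sign-On credentials, here is a hedged client-side sketch using paho-mqtt; the broker address, topic, port, and the get_sso_token() helper are assumptions for illustration, not the paper's platform or API.

```python
# Hedged sketch: publishing to an MQTT broker with an SSO-issued access token
# passed as the password (paho-mqtt, 1.x constructor style).
import paho.mqtt.client as mqtt

def get_sso_token() -> str:
    # placeholder: in the proposed model the token would come from the
    # Single Sign-On provider (e.g. an OAuth2/OpenID Connect flow)
    return "eyJhbGciOi...example-token"

client = mqtt.Client(client_id="sensor-42")
client.username_pw_set(username="sensor-42", password=get_sso_token())
client.tls_set()                                   # encrypt the channel
client.connect("broker.example.org", 8883)
client.loop_start()
client.publish("org-a/building-1/temperature", payload="21.5", qos=1)
client.loop_stop()
client.disconnect()
```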

Machine Learning and Artificial Neural Networks

Frontmatter

Multi-objective Approach for Deep Learning in Classification Problems

We study the possibility of using a distributed multi-objective differential evolution algorithm with tabu mutation (DMD+) to optimize deep learning models for classification. We discuss conflicts between three metrics used for evaluating the quality of classifiers: accuracy, F1-score, and Area Under the ROC Curve. Because a smart city is a place where economic mechanisms are intensively supported by modern computational methods, including deep learning, we select some datasets to verify the quality of the designed classifiers. As a result, we develop an effective approach for determining the key parameters of Convolutional Neural Network and Long Short-Term Memory models using a computing cloud.

Jerzy Balicki, Witold Sosnowski

Open Access

A First Step Towards Automated Species Recognition from Camera Trap Images of Mammals Using AI in a European Temperate Forest

Camera traps are used worldwide to monitor wildlife. Despite the increasing availability of Deep Learning (DL) models, the effective usage of this technology to support wildlife monitoring is limited. This is mainly due to the complexity of DL technology and high computing requirements. This paper presents the implementation of the light-weight and state-of-the-art YOLOv5 architecture for automated labeling of camera trap images of mammals in the Białowieża Forest (BF), Poland. The camera trapping data were organized and harmonized using TRAPPER software, an open-source application for managing large-scale wildlife monitoring projects. The proposed image recognition pipeline achieved an average F1-score of 85% in the identification of the 12 most commonly occurring medium-sized and large mammal species in BF, using a limited set of training and testing data (a total of 2659 images with animals). Based on the preliminary results, we have concluded that the YOLOv5 object detection and classification model is a promising DL solution after the adoption of the transfer learning technique. It can be efficiently plugged in via an API into existing web-based camera trapping data processing platforms such as the TRAPPER system. Since TRAPPER is already used by many research groups in Europe to manage and manually classify camera trapping datasets, the implementation of AI-based automated species classification will significantly speed up the data processing workflow and thus better support data-driven wildlife monitoring and conservation. Moreover, YOLOv5 has been proven to perform well on edge devices, which may open a new chapter in real-time animal population monitoring directly from camera trap devices.

Mateusz Choiński, Mateusz Rogowski, Piotr Tynecki, Dries P. J. Kuijper, Marcin Churski, Jakub W. Bubnicki
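For readers who want to see what plugging YOLOv5 into such a pipeline looks like, here is a minimal inference sketch via torch.hub. The authors fine-tuned YOLOv5 on Białowieża Forest data; below, the generic pretrained checkpoint and the image path are illustrative assumptions, and the TRAPPER API integration is not shown.

```python
# Minimal YOLOv5 inference sketch (torch.hub), for illustration only.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4                              # confidence threshold for detections

results = model("camera_trap_frame.jpg")      # path, URL, PIL image or ndarray
detections = results.pandas().xyxy[0]         # one row per detected object
print(detections[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```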

Battle on Edge - Comparison of Convolutional Neural Networks Inference Speed on Two Various Hardware Platforms

Several reasons have influenced the tendency to move the first level of machine learning data processing to the edge of the information system. Edge-generated data is typically processed by so-called edge devices with low processing power and low power consumption. In addition to well-known SoC (System on Chip) manufacturers whose chips are usually used as edge devices, some manufacturers in this market base their processor designs on open source. This paper compares two different SoCs, one based on the ARM (Advanced RISC Machines) architecture and the other on the open-source RISC-V (Reduced Instruction Set Computer) architecture. A specific feature of the analysed RISC-V-based SoC is an additional processor for speeding up calculations common in neural networks. Since the architectures differ, we compare two SoCs of similar price. The comparison focuses on an analysis of inference performance with different numbers of filters in the first layer of a convolutional neural network used to detect handwritten digits. The convolutional neural network is trained in the cloud on the well-known MNIST (Modified National Institute of Standards and Technology) database of handwritten digits. In the SoC based on the RISC-V architecture, a reduced dependence of the inference speed on the number of filters in the first layer of the convolutional neural network was observed.

Kristian Dokic, Dubravka Mandusic, Lucija Blaskovic
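The experiment described above varies the number of filters in the first convolutional layer of an MNIST classifier. The paper's exact architecture is not given, so the following Keras model is an assumed stand-in that makes the varied parameter explicit.

```python
# Illustrative Keras MNIST classifier whose first-layer filter count is a parameter.
from tensorflow import keras
from tensorflow.keras import layers

def build_mnist_cnn(first_layer_filters: int) -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(first_layer_filters, kernel_size=3, activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Cloud-side training, after which the saved model would be deployed to the edge SoC:
# (x_train, y_train), _ = keras.datasets.mnist.load_data()
# x_train = x_train[..., None] / 255.0
# for filters in (8, 16, 32, 64):
#     build_mnist_cnn(filters).fit(x_train, y_train, epochs=3, batch_size=128)
```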

Efficient Hair Damage Detection Using SEM Images Based on Convolutional Neural Network

With increasing interest in hairstyles and hair color, bleaching, dyeing, straightening, and curling are widely used worldwide, and the chemical and physical treatment of hair is increasing accordingly. As a result, hair suffers a lot of damage, and the degree of damage has traditionally been assessed only by the naked eye or by touch. This has led to serious consequences, such as hair damage and scalp diseases. However, although these problems are serious, there is little research on hair damage. With the advancement of technology, people have begun to be interested in preventing and repairing hair damage. Manual observation methods cannot accurately and quickly identify damaged areas of hair. With the rise of artificial intelligence technology, a large number of applications in various scenarios have given researchers new methods. In this project, we created a new hair damage data set based on SEM (Scanning Electron Microscope) images. Through various physical and chemical analyses, we observe the changes in the hair surface according to the degree of hair damage, find the relationship between them, and use a convolutional neural network to recognize and confirm the degree of hair damage, dividing it into weak damage, damage, and extreme damage.

QiaoYue Man, LinTong Zhang, Young Im Cho

Why Do Law Enforcement Agencies Need AI for Analyzing Big Data?

The aim of the article is to give the rationale behind employing AI tools to help Law Enforcement Agencies analyze data, based on an existing solution, i.e., the MAGNETO (Multimedia Analysis and correlation enGine for orgaNised crime prevention and investigation) platform. In order to do this, the challenges Law Enforcement Agencies (LEAs) face with regard to data handling are presented. Then, the paper presents the key features of the MAGNETO platform, an innovative AI-based approach to empowering LEAs with the capability to process, manage, analyse, correlate, and reason from voluminous heterogeneous datasets; the underlying technologies are mentioned, too. It then discusses the innovative potential of the solution. The article proposes an array of technologies and methods that may be applied in order to facilitate LEAs in their handling of large amounts of heterogeneous data. The study shows that, in the long run, the application of the platform will contribute to a safer, more secure Europe. Additionally, it may even help save lives during the COVID-19 pandemic.

Aleksandra Pawlicka, Michał Choraś, Marcin Przybyszewski, Laurent Belmon, Rafał Kozik, Konstantinos Demestichas

Deep Learning Bio-Signal Analysis from a Wearable Device

The World Health Organization reports that cardiovascular diseases are the main cause of death worldwide. To reduce the rate of premature deaths, it is important to ensure appropriate treatment. This is one of the most important WHO targets related to cardiovascular diseases. In this work we present a framework that is able to detect cardiovascular abnormalities with the help of a specially prepared Android application that is capable of taking a patient's input data. The interface allows the user to enter bio-parameters (e.g., sex, weight). Furthermore, in this study we describe an artificial neural network model trained on the data collected by a wearable device that was also connected to the application through a specially prepared API. Moreover, this paper highlights the possibility of utilizing a smartwatch with blood-pressure capabilities to deliver real-time measurements, which are added to the input feature vector. The presented results show that the proposed scheme is capable of performing real-time data analysis. The achieved accuracy values are promising and allow for further examination of the application of off-the-shelf devices in the detection of cardiovascular diseases.

Mikołaj Skubisz, Łukasz Jeleń

Modelling and Optimization

Frontmatter

Big Data from Sensor Network via Internet of Things to Edge Deep Learning for Smart City

Data from the physical world is sampled by sensor networks, and streams of Big Data are then sent to cloud hosts to support decision making by deep learning software. In a smart city, some tasks may be assigned to smart devices of the Internet of Things for edge computing. Besides, a part of the computational workload can be transferred to the cloud hosts. This paper proposes benchmarks for dividing tasks between an edge layer and a cloud layer for deep learning. Results of some numerical experiments are presented, too.

Jerzy Balicki, Honorata Balicka, Piotr Dryja

A Tale of Four Cities: Improving Bus and Waste Collection Schedules in Practical Smart City Applications

Computer-based improvements of waste collection and public transport procedures are often part of smart city initiatives. When we envision an ideal bus network, it primarily connects the most crowded bus stops. Similarly, an ideal waste collection vehicle arrives at every container exactly at the time when it is fully loaded. Beyond doubt, this will reduce traffic and support environmentally friendly intentions like waste separation, as it will make more containers manageable. A difficulty of putting that vision into practice is that vehicles cannot always be where they are needed. Knowing the best time for arriving at a position is not sufficient for finding the optimal route. Therefore, we compare four different approaches to optimized routing, from Regensburg, Christchurch, Malaysia, and Bangalore. Our analysis shows that the best schedules result from frequently adapting field-tested routes based on sensor measurements and route-optimizing computations.

Jan Dünnweber, Amitrajit Sarkar, Vimal Kumar Puthiyadath, Omkar Barde

Fractional-Order Nonlinear System Identification Using MPC Technique

Many real-world processes exhibit fractional-order dynamics and are described by non-integer-order differential equations. In this paper, we quantify the fit of the Oustaloup approximation to the fractional-order state-space system within a specified narrow frequency range and order. A novel method of plant model state estimation using the model predictive control (MPC) technique has been verified on the approximated fractional-order water tank system. To improve the system tracking and reduce the experimental effort, a Kalman filter (KF) has been connected to the MPC structure. The main objective is to design a control system for the linearized fractional-order system and to tune its parameters in the presence of additive white noise affecting the output of the system. The presented scheme has been verified using numerical examples, and the results of the state prediction are discussed.

Wiktor Jakowluk, Sjoerd Boersma
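Since the abstract refers to the Oustaloup approximation without giving its form, the commonly cited recursive formulation is reproduced below for reference; the paper's exact parametrization, frequency band, and order are not specified, so this should be read as the standard textbook form rather than the authors' exact model.

$$ s^{\alpha} \approx K \prod_{k=-N}^{N} \frac{s+\omega'_k}{s+\omega_k}, \qquad \omega'_k = \omega_b \left(\frac{\omega_h}{\omega_b}\right)^{\frac{k+N+\frac{1-\alpha}{2}}{2N+1}}, \qquad \omega_k = \omega_b \left(\frac{\omega_h}{\omega_b}\right)^{\frac{k+N+\frac{1+\alpha}{2}}{2N+1}}, \qquad K = \omega_h^{\alpha}, $$

valid for $$0 < \alpha < 1$$ over the frequency band $$[\omega_b, \omega_h]$$ with $$2N+1$$ zero-pole pairs.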

ToMAL: Token-Based Fair Mutual Exclusion Algorithm

Token-based mutual exclusion (ME) algorithms for distributed systems have gained much attention over the years due to their inherent safety property. The safety property ensures that only one process executes the critical section at any instant of time. Raymond et al. proposed a token-based ME algorithm that uses an inverted-tree topology. The solution is simple, fast, and widely accepted by the community. However, a major drawback of Raymond's algorithm is that it fails to satisfy the fairness property in terms of a first-come-first-served policy among equal-priority processes requesting the token. Several attempts have been made to resolve this issue. In this work, we provide a new token-based ME algorithm (ToMAL) that, like Raymond's algorithm, works on an inverted-tree topology. The proposed solution not only ensures the fairness property but also requires very little additional storage at a node. We have compared our proposed approach, ToMAL, with another existing work. The comparative performances are studied in terms of storage space and control messages.

Debdita Kar, Mandira Roy, Nabendu Chaki
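To make the inverted-tree structure concrete, here is a simplified, single-process sketch of the per-node state used by Raymond-style token algorithms, which ToMAL builds on: a holder pointer towards the token and a FIFO request queue. Message passing and ToMAL's fairness bookkeeping are omitted, and the stub methods are placeholders, not the paper's implementation.

```python
# Simplified sketch of per-node state in a Raymond-style inverted-tree token algorithm.
from collections import deque

class TreeNode:
    def __init__(self, node_id, holder=None):
        self.node_id = node_id
        self.holder = holder            # parent pointing towards the token; None = holds token
        self.request_q = deque()        # FIFO queue of pending requesters

    def request_cs(self):
        """Called when this node wants the critical section."""
        self.request_q.append(self.node_id)
        if self.holder is not None and len(self.request_q) == 1:
            self.send_request(self.holder)     # ask the parent for the token

    def receive_token(self):
        """Token arrived: grant it to the oldest queued request."""
        next_node = self.request_q.popleft()
        if next_node == self.node_id:
            self.enter_critical_section()
        else:
            self.holder = next_node            # forward the token towards the requester
            self.send_token(next_node)

    # network and application stubs, left empty in this sketch
    def send_request(self, to): pass
    def send_token(self, to): pass
    def enter_critical_section(self): pass
```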

FPGA in Core Calculation for Big Datasets

The rough set theory developed by Prof. Z. Pawlak is one of the tools used in intelligent systems for data analysis and processing. In modern systems, the amount of collected data is increasing quickly, so computation speed becomes the critical factor. This paper presents an FPGA- and softcore-CPU-based hardware solution for core calculation on big datasets, focusing on rough set methods. The core represents the attributes that cannot be removed without affecting the classification power of all condition attributes. The presented architectures have been tested on real datasets by running the solutions on two different FPGA chips. The datasets had 1 000 to 1 000 000 objects. The same operations were performed in a software implementation. The results show up to a 15.83-fold improvement in computation time when using hardware-supported core generation in comparison to the pure software implementation.

Maciej Kopczyński

Resource-Workflow Petri Nets with Time Tokens and Their Application in Process Management

Resource-workflow Petri nets with time tokens (RWPNTT) are a new class of low-level process Petri nets introduced in this article. The theory of RWPNTT can be successfully applied especially in modeling processes for which the lower and upper limits of the duration of their activities are specified, and in determining the lower and upper duration limits of the time-optimal critical steps and paths. RWPNTT theory also generalizes the original process model used by the CPM method in that each activity may require, for its exclusive use and successful completion, a limited set of resources (e.g., energy, financial, or material) that are shared with other activities of the same process. The concept of global network time is not explicitly introduced in the definition of the RWPNTT class; it is used only implicitly within the transition firing mechanism. The new RWPNTT class also enables the analysis of other process properties using well-known methods of Petri net theory. These properties of the RWPNTT class are demonstrated on simple process examples in this article.

Ivo Martiník

Digital Device Design by ASMD-FSMD Technique

Recently, there has been an increase in the complexity of digital device designs and in the requirements on development time and product reliability. Developing new techniques for designing digital devices is one direction for solving this problem. This paper proposes a technique for designing digital devices based on finite state machines with datapath (FSMD), where the device functioning is described by an algorithmic state machine with datapath (ASMD). Different techniques for designing digital devices are compared when implementing a synchronous multiplier on a field programmable gate array (FPGA). The effectiveness of the ASMD-FSMD technique is compared to the traditional approach in terms of cost (area) and performance. The ASMD-FSMD technique, compared to the traditional approach, reduces the area by 28.6% to 39.7% and increases the speed of some designs by up to 17.6%. In addition, using the ASMD-FSMD technique reduces design time and increases design reliability by a factor of at least 2.5.

Valery Salauyou, Adam Klimowicz

Bus Demand Forecasting for Rural Areas Using XGBoost and Random Forest Algorithm

In recent years, mobility solutions have experienced a significant upswing. Consequently, forecasting the number of passengers and determining the associated demand for vehicles have become increasingly important. In contrast to other work that predicts just a single bus route, we analyze all bus routes in a rural area. Some differences between bus routes in rural areas and those in cities are highlighted and substantiated with case study data, using Roding, a town in the rural district of Cham in northern Bavaria, as an example. From the collected data, we selected a random forest model that lets us determine passenger demand, bus line effectiveness, and general user behavior. The prediction accuracy of the selected model is currently 87%. The collected data helps to build new mobility-as-a-service solutions, such as on-call buses or dynamic route optimizations, as we show with our simulation.

Timo Stadler, Amitrajit Sarkar, Jan Dünnweber
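As an illustration of the random-forest forecasting step mentioned above, the sketch below trains a regressor on simple calendar and route features; the column names, data frame layout, and hyperparameters are assumptions rather than the project's code.

```python
# Illustrative sketch: forecasting per-trip passenger counts for rural bus
# routes with a random forest on calendar and route features.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def train_demand_model(trips: pd.DataFrame):
    features = ["route_id", "weekday", "hour", "is_school_day", "is_holiday"]
    X = pd.get_dummies(trips[features], columns=["route_id"])
    y = trips["passengers"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
    return model, r2_score(y_test, model.predict(X_test))
```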

Core Computation and Artificial Fish Swarm Algorithm in Rough Set Data Reduction

Reducing redundant attributes is an important preprocessing step in data mining. In this paper, a novel search algorithm, COREplusAFSA, for minimal attribute set reduction based on rough set theory and the artificial fish swarm algorithm is proposed. First, the algorithm identifies the attributes of the core. Second, the artificial fish swarm algorithm is applied. Some well-known data sets from the UC Irvine Machine Learning Repository were selected to verify the proposed algorithm. The results of the experiments show that the investigated method COREplusAFSA is a better solution to the attribute set reduction problem than applying the artificial fish swarm algorithm alone.

Jaroslaw Stepaniuk

Performance Analysis of a QoS System with WFQ Queuing Using Temporal Petri Nets

The paper presents the results of analysis and modelling of differentiated services using Petri nets. Mechanisms and methods of implementing QoS (Quality of Service) services in packet networks are discussed. A network model supporting data transfers related to different traffic classes was designed and studied. Traffic shaping mechanisms based on the WFQ (Weighted Fair Queuing) system used in QoS were studied. The impact of the traffic shaping mechanism was examined and the performance of the modelled systems was evaluated. The application of simulation tools in the form of TPN (Temporal Petri Nets) was aimed at verifying the traffic shaping mechanisms and evaluating the performance of the studied WFQ system. Our simulation results show that the number of high-priority flows has a critical impact on average waiting times and queue lengths.

Dariusz Strzęciwilk, Rafik Nafkha, Rafał Zawiślak
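For readers unfamiliar with WFQ, the mechanism modelled above can be summarised in a few lines of code: each arriving packet receives a virtual finish time proportional to its length divided by its flow weight, and packets are served in increasing finish-time order. The virtual-clock update below is deliberately simplified; the paper models this with Temporal Petri Nets rather than code.

```python
# Minimal WFQ scheduling sketch: serve packets in order of virtual finish times.
import heapq

class WFQScheduler:
    def __init__(self, weights):
        self.weights = weights          # flow_id -> weight (higher = more bandwidth)
        self.last_finish = {f: 0.0 for f in weights}
        self.virtual_time = 0.0
        self.queue = []                 # heap of (finish_time, seq, flow_id, length)
        self.seq = 0

    def enqueue(self, flow_id, length):
        start = max(self.virtual_time, self.last_finish[flow_id])
        finish = start + length / self.weights[flow_id]
        self.last_finish[flow_id] = finish
        heapq.heappush(self.queue, (finish, self.seq, flow_id, length))
        self.seq += 1

    def dequeue(self):
        finish, _, flow_id, length = heapq.heappop(self.queue)
        self.virtual_time = finish      # simplified virtual-time update
        return flow_id, length

# wfq = WFQScheduler({"voice": 4, "video": 2, "best_effort": 1})
```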

Runtime Sensor Network Schedule Adaptation to Varying Temperature Conditions

We study runtime adaptation methods of sensor activity schedules for wireless sensor networks. Adaptation is necessary when the network operating conditions differ from the ones assumed in the scheduling phase. Usually, ideal temperature conditions are assumed. When the system has to operate at a lower temperature, sensor batteries discharge faster, resulting in an inability to complete the schedule created under that assumption. We deal with this problem by careful selection of the slot to be executed next, from the very beginning of the network activity. We test several slot selection strategies based on battery load levels for all the sensors active exclusively in a given slot. Our experiments showed that in the majority of cases, a tournament-based approach gave the best results. Moreover, we propose and verify experimentally a new selection strategy, which uses the standard deviation of the whole network's battery levels as a decision attribute.

Krzysztof Trojanowski, Artur Mikitiuk, Jakub A. Grzeszczak
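Two of the selection ideas mentioned above can be sketched as simple scoring rules over battery levels; the discharge model, the interpretation of the standard-deviation rule, and the data layout below are one possible reading offered for illustration, not the authors' simulator.

```python
# Simplified sketch of runtime slot selection.  battery[s] is the remaining
# charge of sensor s; slots[i] is the set of sensors active exclusively in slot i.
import statistics

def select_slot_max_min(slots, battery):
    # prefer the slot whose weakest exclusive sensor still has the most charge
    return max(range(len(slots)), key=lambda i: min(battery[s] for s in slots[i]))

def select_slot_min_stddev(slots, battery, drain=1):
    # prefer the slot whose execution leaves the network's battery levels most even
    def stddev_after(i):
        projected = {s: battery[s] - (drain if s in slots[i] else 0) for s in battery}
        return statistics.pstdev(projected.values())
    return min(range(len(slots)), key=stddev_after)

# slots = [{"s1", "s2"}, {"s3"}, {"s2", "s4"}]
# battery = {"s1": 40, "s2": 55, "s3": 10, "s4": 70}
# print(select_slot_max_min(slots, battery), select_slot_min_stddev(slots, battery))
```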

Backmatter
