
About this Book

This volume constitutes the refereed proceedings of the 13th Asian Conference on Intelligent Information and Database Systems, ACIIDS 2021, held in Phuket, Thailand, in April 2021.

The 35 full papers accepted for publication in these proceedings were carefully reviewed and selected from 291 submissions. The papers are organized in the following topical sections: data mining and machine learning methods; advanced data mining techniques and applications; intelligent and contextual systems; natural language processing; network systems and applications; computational imaging and vision; decision support and control systems; data modelling and processing for Industry 4.0.

Table of Contents

Frontmatter

Data Mining and Machine Learning Methods

Frontmatter

CNN Based Analysis of the Luria’s Alternating Series Test for Parkinson’s Disease Diagnostics

Deep-learning-based image classification is applied in this study to Luria's alternating series tests to support the diagnostics of Parkinson's disease. Luria's alternating series tests belong to the family of fine-motor drawing tests and have been used in neurology and psychiatry for nearly a century. The introduction of digital tablets and later tablet PCs has made it possible to deviate from the classical paper-and-pen setting and to observe kinematic and pressure parameters describing the test. While such settings have led to highly accurate machine learning models, the visual component of the tests is left unused: the shapes of the drawn lines are not used to classify the drawings, which has eventually shifted the assessment paradigm from visual inspection to numeric parameters. The approach proposed in this paper combines the two assessment paradigms by augmenting the initial drawings with the kinematic and pressure parameters. The paper demonstrates that the resulting network has an accuracy similar to that of a human practitioner.

Sergei Zarembo, Sven Nõmm, Kadri Medijainen, Pille Taba, Aaro Toomela
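The augmentation idea above can be pictured as stacking the rasterized drawing with per-pixel kinematic and pressure maps into a single multi-channel input, so a standard image-classification CNN can consume it. The following is a minimal sketch of that idea; the array names, shapes, and random data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative inputs (random placeholders, not real test data)
H, W = 128, 128
drawing  = np.random.rand(H, W)   # rasterized pen trace (grayscale)
velocity = np.random.rand(H, W)   # per-pixel velocity map
pressure = np.random.rand(H, W)   # per-pixel pen-pressure map

# Stack along a channel axis -> shape (H, W, 3), analogous to an RGB
# image, so an ordinary image-classification CNN can take it as input.
augmented = np.stack([drawing, velocity, pressure], axis=-1)
print(augmented.shape)  # (128, 128, 3)
```

The same pattern extends to any number of kinematic channels by adding more maps to the stack.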

Examination of Water Temperature Accuracy Improvement Method for Forecast

In aquaculture, fishery workers and researchers attach great importance to water temperature, because knowing it makes it possible to take measures against fish diseases and to predict the occurrence of red tide. In collaboration with fishery researchers, we constructed a multi-depth sensor network consisting of 16 water-temperature observation devices across the Uwa Sea in Ehime Prefecture. In addition, we developed a Web system that instantly visualizes the water temperature measured on this network and provides it to fishery workers in the Uwa Sea. However, the water temperature information provided by this Web system covers only the present or the past, and fishery workers have requested near-future forecasts one or two weeks ahead. Therefore, in this research, we implement a new function of this Web system that predicts near-future water temperature from past water-temperature information and provides the forecast. This paper examines the steps required to solve the seawater-temperature prediction problem in this system and, among those steps, reports on the method for improving the accuracy of water temperature information and its evaluation experiment.

Yu Agusa, Keiichi Endo, Hisayasu Kuroda, Shinya Kobayashi

Assembly Process Modeling Through Long Short-Term Memory

This paper studies Long Short-Term Memory as a component of an adaptive assembly assistance system that suggests the next manufacturing step. The final goal is an assistive system able to help inexperienced workers in their training stage, or even experienced workers who prefer such support in their manufacturing activity. In contrast with the previously analyzed context-based techniques, Long Short-Term Memory can be applied in unknown scenarios. The evaluation was performed on data collected previously in an experiment with 68 participants assembling a customizable modular tablet as the target product. We are interested in identifying the most accurate method of next-assembly-step prediction. The results show that prediction based on Long Short-Term Memory is better fitted to new (previously unseen) data.

Stefan-Alexandru Precup, Arpad Gellert, Alexandru Dorobantiu, Constantin-Bala Zamfirescu

CampusMaps: Voice Assisted Navigation Using Alexa Skills

Voice assistants have become commonplace, present in our smartphones, speakers, and dedicated devices such as Google Home and Amazon Echo. Over the years, these voice assistants have become more intelligent, and the number of tasks they can perform has increased. Among the many voice assistants that exist, Amazon's Alexa is one of the most extensible: it can be programmed to suit specific use cases with Amazon's Alexa Skills Kit. In this paper, we leverage Alexa's voice assistance to design a navigation system for our college campus that allows users to request directions in the most intuitive way possible. It is a cost-effective and scalable solution based on Amazon Web Services (AWS) Lambda.

Revathi Vijayaraghavan, Ritu Gala, Vidhi Rambhia, Dhiren Patel
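An Alexa skill of this kind ultimately returns a JSON response envelope from its Lambda handler. The sketch below builds such an envelope as a plain dictionary; the intent name, slot key, and directions text are hypothetical illustrations, not taken from the paper, and only the outer response shape follows the standard Alexa Skills response format.

```python
import json

def build_alexa_response(speech_text):
    """Minimal Alexa Skills response envelope (PlainText speech)."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": True,
        },
    }

# Hypothetical campus-directions handler; the intent name and the
# hard-coded directions are purely illustrative.
def handle_intent(intent_name, slots):
    if intent_name == "GetDirectionsIntent":
        dest = slots.get("destination", "the main gate")
        return build_alexa_response(f"To reach {dest}, walk straight ahead.")
    return build_alexa_response("Sorry, I did not understand that.")

print(json.dumps(handle_intent("GetDirectionsIntent",
                               {"destination": "library"}), indent=2))
```

In a deployed skill, a function like `handle_intent` would be wired into the AWS Lambda entry point and the destinations would come from a campus map database.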

Advanced Data Mining Techniques and Applications

Frontmatter

Solving Reduction Problems in Cover Lattice Based Decision Tables

Covering-based rough sets are an important extension of Pawlak's traditional rough sets. Reduction is a typical application of rough sets, including the traditional, covering-based, and other extensions. Although several proposals exist for this task, it remains an open problem for decision systems (tables) in covering-based rough sets. This paper focuses on the reduction problem for the condition lattice and the fitting problem for the decision lattice in decision tables based on cover lattices. A corresponding algorithm is proposed for each problem. Two examples, illustrating a covering-based decision table and the two related problems, show the applications of these concepts.

Thanh-Huyen Pham, Thi-Ngan Pham, Thuan Ho, Thi-Hong Vuong, Tri-Thanh Nguyen, Quang-Thuy Ha

Forecasting System for Solar-Power Generation

Environmental protection is a pressing and thought-provoking issue, and how to generate electricity has become a major conundrum for all mankind. Renewable (green) energy is an ideal solution for environmentally friendly power generation. Among renewable energy sources, solar power has low cost and a small footprint, which makes it available throughout our lives. The major uncontrollable factor in solar-power generation is the amount of solar radiation, which completely dominates the electricity generated by a solar panel. In this paper, we design prediction models for solar radiation using data mining techniques and machine learning algorithms, and derive precision prediction models (PPM) and light prediction models (LPM). Experimental results show that the PPM (LPM) with random forest regression obtains an R-squared of 0.841 (0.828) and a correlation coefficient of 0.917 (0.910). Compared with highly cited studies, our models outperform them in all measurements, which demonstrates the robustness and effectiveness of the proposed models.

Jia-Hao Syu, Chi-Fang Chao, Mu-En Wu
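The two evaluation metrics quoted above, R-squared and the (Pearson) correlation coefficient, are standard and easy to compute from scratch. A minimal sketch, with toy numbers that are purely illustrative:

```python
import math

def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def pearson(y_true, y_pred):
    """Pearson correlation coefficient between two samples."""
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return cov / (st * sp)

# Toy radiation values (illustrative only, not the paper's data)
y_true = [3.0, 4.5, 6.0, 8.0, 10.0]
y_pred = [2.8, 4.7, 5.9, 8.3, 9.8]
print(round(r_squared(y_true, y_pred), 3),
      round(pearson(y_true, y_pred), 3))
```

Note that a high correlation does not by itself imply a high R-squared: correlation ignores bias and scale, while R-squared penalizes them.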

A Recommender System for Insurance Packages Based on Item-Attribute-Value Prediction

Finding a proper insurance package is a challenging issue for new customers because of the variety of insurance packages and the many factors, from both package policies and user profiles, that must be considered. This paper introduces a recommender model named INSUREX that analyzes historical data from application forms and contract documents. Machine learning techniques based on item-attribute-value prediction are then adopted to find patterns among the attributes of insurance packages, and the model suggests several relevant packages to users. The model achieves high performance in terms of HR@K and F1-score. In addition, a web-based proof-of-concept application has been developed using the INSUREX model to recommend insurance packages and riders based on a user-supplied profile. The user evaluation demonstrates that the recommender model helps users get started in choosing the right insurance plans.

Rathachai Chawuthai, Chananya Choosak, Chutika Weerayuthwattana
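The HR@K metric mentioned above (hit rate at K) measures the fraction of users for whom at least one relevant item appears in their top-K recommendations. A minimal sketch with hypothetical package IDs (the data are illustrative, not from the paper):

```python
def hit_rate_at_k(recommended, relevant, k):
    """HR@K: fraction of users whose top-K recommendation list
    contains at least one relevant item."""
    hits = 0
    for recs, rel in zip(recommended, relevant):
        if any(item in rel for item in recs[:k]):
            hits += 1
    return hits / len(recommended)

# Hypothetical package IDs: per-user ranked recommendations and
# ground-truth relevant packages.
recommended = [["P3", "P1", "P7"], ["P2", "P5", "P9"], ["P4", "P8", "P6"]]
relevant    = [{"P1"},             {"P5"},             {"P6"}]
print(hit_rate_at_k(recommended, relevant, 2))  # 2 of 3 users hit in top-2
```

HR@K is monotone in K: enlarging the cut-off can only add hits, never remove them.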

Application of Recommendation Systems Based on Deep Learning

With the rapid development of the mobile Internet and self-media, obtaining information is becoming more and more convenient, and the problem of information overload increasingly affects the user experience. Recommendation systems can help solve information overload, but in the big-data environment, traditional recommendation technology can no longer meet the need to deliver more personalized, more real-time, and more accurate information to users. In recent years, deep learning has made breakthroughs in natural language understanding, speech recognition, and image processing, and its integration into recommendation systems has also achieved satisfactory results. However, integrating massive multi-source heterogeneous data and building more accurate user and item models to improve the performance and user satisfaction of recommendation systems remains the main task of deep-learning-based recommendation. This article reviews the recent research progress of recommendation systems based on deep learning and analyses how they differ from traditional recommendation systems.

Li Haiming, Wang Kaili, Sun Yunyun, Mou Xuefeng

Data Reduction with Distance Correlation

Data reduction is a technique used in big-data applications: the volume, velocity, and variety of data introduce time- and space-complexity problems for computation. Among the several approaches used for data reduction, dimension reduction and redundancy removal are common; in those approaches, data are treated as points in a large space. This paper considers the scenario of analyzing a topic for which similar multi-dimensional data are available from different sources, so the problem can be stated as data reduction by source selection. The paper examines distance correlation (DC) as a technique for determining similar data sources. For demonstration, COVID-19 in the United States (US) is considered as the topic of analysis, as it is a topic of considerable interest, with data reported by the US states as the data sources. We define and use a variation of concordance for validation analysis.

K. M. George
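Distance correlation can be computed directly from double-centered pairwise distance matrices. The following is a minimal sketch of the standard (Székely-style) sample statistic for one-dimensional series; the two example series are invented placeholders, not the paper's COVID-19 data.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each distance matrix (subtract row/col means, add grand mean)
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x = (A * A).mean()
    dvar_y = (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

# Two hypothetical state-level daily series (illustrative numbers only)
s1 = [10, 12, 15, 20, 26, 33]
s2 = [ 8, 11, 13, 19, 25, 30]
print(round(float(distance_correlation(s1, s2)), 3))
```

Unlike Pearson correlation, distance correlation is zero only under independence, which makes it suitable for ranking sources by overall similarity rather than just linear similarity.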

Intelligent and Contextual Systems

Frontmatter

A Subtype Classification of Hematopoietic Cancer Using Machine Learning Approach

Hematopoietic cancer is a malignant transformation of immune-system cells. It usually occurs in hematopoietic organs such as the bone marrow and lymph nodes, and it is a frightening disease that collapses the immune system through its mobile characteristics. Hematopoietic cancer is characterized by the cells that are expressed, which are usually difficult to detect in the hematopoiesis process. For this reason, we focused on five subtypes of hematopoietic cancer and studied their classification by applying machine learning algorithms in both contextual and non-contextual approaches. First, we applied PCA to extract features suited to building the subtype-classification model. We then used four machine learning classification algorithms (support vector machine, k-nearest neighbor, random forest, neural network) together with the synthetic minority oversampling technique to generate the models. Most classifiers performed better when oversampling was applied, and the best result was random forest with oversampling, which achieved 95.24% classification performance.

Kwang Ho Park, Van Huy Pham, Khishigsuren Davagdorj, Lkhagvadorj Munkhdalai, Keun Ho Ryu

Design of Text and Voice Machine Translation Tool for Presentations

In this paper, a machine translation tool for presentations is presented. This virtual translation tool is a novel approach to generating text or voice in other languages; the proposed system is expected to assist audiences in understanding foreign-language content in live presentations. In this study, the conventional translator was replaced by neural machine translation, and human-machine interaction was improved significantly by using text-to-speech and speech recognition. Experimental results on the Vietnamese-English pair showed the effectiveness of the proposed system design and deployment approach.

Thi-My-Thanh Nguyen, Xuan-Dung Phan, Ngoc-Bich Le, Xuan-Quy Dao

Hybrid Approach for the Semantic Analysis of Texts in the Kazakh Language

In this paper, the authors propose a hybrid approach for the semantic analysis of text resources and documents in the Kazakh language, together with an overview of the difficulties of analyzing Kazakh. The developed approach consists of two main parts: the first extracts keywords (phrases) from the text, and the second, based on the data obtained, builds an annotated summary of the text. To implement the first part, the TF-IDF algorithm was applied to extract keywords and phrases from texts, and the cosine similarity between sentences in Kazakh was calculated to determine their similarity; with the help of these similarities, semantic links in the text are determined. On the basis of the data obtained, the second part, the summarization of texts, is performed. The number of annotations depends directly on the size of the document. A linguistic corpus of the Kazakh language was collected for the experiments and calculations, and a study of various approaches, as well as the hybrid approach, for the semantic analysis of Kazakh was carried out. The practical part was implemented in Python, and the article presents the results of the experimental calculations.

Diana Rakhimova, Asem Turarbek, Leila Kopbosyn
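The TF-IDF plus cosine-similarity pipeline described above can be sketched in a few lines of pure Python. The toy sentences below are in English for brevity (the paper applies this to Kazakh text), and the smoothed IDF variant is one common choice, not necessarily the paper's exact formula.

```python
import math
from collections import Counter

def tfidf_vectors(sentences):
    """Build TF-IDF vectors for whitespace-tokenized sentences (toy sketch)."""
    docs = [s.lower().split() for s in sentences]
    vocab = sorted({w for d in docs for w in d})
    n = len(docs)
    df = {w: sum(1 for d in docs if w in d) for w in vocab}
    idf = {w: math.log(n / df[w]) + 1.0 for w in vocab}   # smoothed IDF
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append([tf[w] / len(d) * idf[w] for w in vocab])
    return vecs

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

sents = ["the cat sat on the mat",
         "the cat lay on the rug",
         "stock prices fell sharply today"]
v = tfidf_vectors(sents)
print(round(cosine(v[0], v[1]), 3), round(cosine(v[0], v[2]), 3))
```

Sentence pairs with high cosine similarity are the candidates for the semantic links used when assembling the summary.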

Natural Language Processing

Frontmatter

Short Text Clustering Using Generalized Dirichlet Multinomial Mixture Model

The field of Artificial Intelligence is under the spotlight because of its wide use and efficiency in solving real-world problems. This decade has witnessed a notable rise in the amount of collected data made available to the public, which has allowed the emergence of many research problems, among them working with short texts and their particular challenges. In this paper, we propose the collapsed Gibbs sampling algorithm for the generalized Dirichlet Multinomial Mixture model for short-text clustering (GSDMM). The proposed approach has been evaluated on the Google News dataset, where it proved more efficient than related works and succeeded in overcoming the common challenges that come with short texts.

Samar Hannachi, Fatma Najar, Nizar Bouguila
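For intuition, the collapsed Gibbs sampler for the *basic* Dirichlet Multinomial Mixture (the "movie group process" underlying GSDMM) can be written compactly: each document is removed from its cluster, a conditional probability is computed for every cluster, and the document is reassigned by sampling. This is a toy sketch of that baseline, not the paper's generalized-Dirichlet variant, and the tiny corpus is invented.

```python
import random
from collections import defaultdict

def gsdmm(docs, K=4, alpha=0.1, beta=0.1, iters=15, seed=0):
    """Toy collapsed Gibbs sampler for a Dirichlet Multinomial Mixture."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})          # vocabulary size
    z = [rng.randrange(K) for _ in docs]           # cluster of each doc
    m = [0] * K                                    # docs per cluster
    n = [0] * K                                    # words per cluster
    nw = [defaultdict(int) for _ in range(K)]      # word counts per cluster
    for d, c in zip(docs, z):
        m[c] += 1; n[c] += len(d)
        for w in d: nw[c][w] += 1
    for _ in range(iters):
        for i, d in enumerate(docs):
            c = z[i]                               # take doc out of its cluster
            m[c] -= 1; n[c] -= len(d)
            for w in d: nw[c][w] -= 1
            weights = []
            for c in range(K):                     # conditional for each cluster
                p = m[c] + alpha
                for j, w in enumerate(d):
                    p *= (nw[c][w] + beta) / (n[c] + V * beta + j)
                weights.append(p)
            c = rng.choices(range(K), weights=weights)[0]
            z[i] = c; m[c] += 1; n[c] += len(d)
            for w in d: nw[c][w] += 1
    return z

# Tiny illustrative corpus with two obvious topics
docs = [s.split() for s in [
    "apple banana fruit", "banana fruit juice", "fruit apple juice",
    "goal match football", "football match score", "score goal match"]]
print(gsdmm(docs, K=4))  # one cluster id per document
```

A useful property of this sampler is that clusters can empty out, so K only needs to be an upper bound on the true number of clusters.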

Comparative Study of Machine Learning Algorithms for Performant Text Analysis in a Real World System

This work illustrates how text analysis is done on the "ManageEngine ServiceDesk Plus" (MSP) ticketing-engine database of XYZ GmbH (the company's name was changed for privacy) using the modern technology offered by Azure ML. We use Azure Machine Learning Studio to apply text analysis techniques and machine learning algorithms to the data set obtained from MSP. This paper walks through the process of data extraction, data processing, creation of a word cloud, and keyword extraction. We compare modern machine learning algorithms available in Azure ML, namely neural networks, averaged perceptron, and boosted decision trees, to train and score the model and to predict the probability of a ticket being assigned to the correct department.

Faraz Islam, Doina Logofătu

Contour: Penalty and Spotlight Mask for Abstractive Summarization

Transferring accurate information, presented as proper nouns or exclusive phrases, from the input document to the output summary is a requirement of the abstractive summarization task. To address this problem, we propose Contour, which emphasizes the most suitable word in the original document containing crucial information at each prediction step. Contour contains two independent parts: Penalty and Spotlight. Penalty penalizes inapplicable words at both training and inference time; Spotlight increases the potential of important related words. We examined Contour on datasets of multiple sizes and languages: large-scale (CNN/DailyMail) for English, medium-scale (VNTC-Abs) for Vietnamese, and small-scale (Livedoor News Corpus) for Japanese. Contour not only significantly outperforms the baselines on all three ROUGE scores but also accommodates different datasets.

Trang-Phuong N. Nguyen, Nhi-Thao Tran

Heterogeneous Information Access System with a Natural Language Interface in the Context of Organization of Events

The article presents an innovative approach to providing information to event and conference participants using a natural language interface and chatbot. The proposed solution, in the form of chatbots, allows users to communicate with conference/event participants and provide them with personalized event-related information in a natural way. The proposed system architecture facilitates the exploitation of various communication agents, including integration with several instant messaging platforms. The heterogeneity and flexibility of the solution have been validated during functional tests conducted in a highly scalable on-demand cloud environment (Google Cloud Platform); moreover, the efficiency and scalability of the resulting solution have proven sufficient for handling large conferences and events.

Piotr Nawrocki, Dominik Radziszowski, Bartlomiej Sniezynski

Network Systems and Applications

Frontmatter

Single Failure Recovery in Distributed Social Network

This paper proposes a single-failure recovery method for a distributed social network. Single-node and multi-node recovery are both non-trivial tasks. We assume that each user is treated as a single independent server with its own independent storage, and we focus only on single-user failure, where failure refers to the failure of the server as a whole. With these fundamental assumptions, a new fast recovery strategy for a single failed user is proposed. In this method, partial replication of node data is used for recovery of the node, to reduce the storage requirement.

Chandan Roy, Dhiman Chakraborty, Sushabhan Debnath, Aradhita Mukherjee, Nabendu Chaki

Improving Reading Accuracy in Personal Information Distribution Systems Using Smartwatches and Wireless Headphones

There is a huge amount of news information on the Internet, and users have the advantage of being able to access a large amount of information; however, users are not interested in all of it. In our previous research, we developed a smartphone application that provides information via a smartwatch in order to address this news-information overload. However, given the screen size of the smartwatch, the information that could be provided had to be limited, and we could not provide the information users wanted. Therefore, we propose an application that provides news information by voice and allows users to have the news read out from a smartwatch. By learning the user's interests, this application automatically selects articles and gives priority to news the user is interested in. In addition, by providing information by voice, news can be obtained on a crowded train or while driving a car, which is expected to improve the convenience of the system. In the evaluation experiment, we were able to provide more information to users than conventional applications, but there was a problem of misreading. In this paper, we examine these misreadings and discuss countermeasures.

Takuya Ogawa, Keiichi Endo, Hisayasu Kuroda, Shinya Kobayashi

Cyber Security Aspects of Digital Services Using IoT Appliances

In the modern digital world, we have observed the emergence of endless potential for electronic communication using diverse forms of data transmission between subscriber devices. A proper networking infrastructure to sustain this communication has therefore been a crucial factor for further development, but today, the successful advancement of networks and systems depends on the appropriate state of cyber security in networking and services. This article describes threats to the customer premises network, and explores both the security mechanisms and the network's vulnerability to attacks resulting from the use of devices associated with IoT applications.

Zbigniew Hulicki, Maciej Hulicki

Computational Imaging and Vision

Frontmatter

Simple Methodology for Eye Gaze Direction Estimation

A simple methodology (based on standard detection algorithms and a shallow neural network) is developed for estimating the eye-gaze direction in natural (mostly indoor) environments. The methodology is primarily intended for evaluating human interests or preferences (e.g. in interaction with smart systems). First, techniques for detecting faces, facial landmarks, eyes, and irises are employed. The results are converted into numbers representing the resolution-normalized locations of those landmarks, and eventually six numbers (comprehensively combining the detection results) are used as the NN inputs. The NN outputs represent 17 different directions of eye gaze, where the neutral direction is defined by the virtual line linking the monitoring camera with the face center; the identified gaze direction thus results from the combined motions of eyeballs and heads. For a feasibility study, a small dataset was created (showing diversified configurations of head and eyeball orientations of 10 people looking in the specified directions), and data augmentation was used to increase its volume. Using 67% of the data for training (in 3-fold validation), we have reached an accuracy of 91.35%.

Suood Al Mazrouei, Andrzej Śluzek

Dropout Regularization for Automatic Segmented Dental Images

Deep neural networks are networks with a large number of parameters, and thus form the core of deep learning systems. A challenge that arises from these systems is how they perform against training and/or validation datasets: due to the number of parameters involved, the networks tend to consume a lot of time and to over-fit. This approach proposes introducing a dropout layer between the input and the first hidden layer of a model. This is quite specific, and differs from the traditional use of dropout in other fields, which introduces dropout in every hidden layer of the network model to deal with over-fitting. Our approach involves a pre-processing step with data augmentation, to compensate for the limited number of dental images, and erosion morphology, to remove noise from the images. Additionally, segmentation is done to extract edge-based features using the Canny edge detection method. The neural network employs the sequential model from Keras to combine iterations from the edge-segmentation step into one model. Two parallel evaluations of the model are carried out: one without dropout, and one with a dropout input layer of rate 0.3. Introducing dropout into the model as a weight-regularization technique improved the evaluation accuracy from 89.0% for the model without dropout to 91.3% for the model with dropout, for both precision and recall values.

Vincent Majanga, Serestina Viriri
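The input-layer dropout described above can be sketched as an "inverted dropout" mask applied to the flattened input features. This is a generic sketch of the mechanism, assuming illustrative batch and feature sizes; it is not the paper's Keras model.

```python
import numpy as np

def dropout_layer(x, rate=0.3, training=True, rng=None):
    """Inverted dropout: zero a fraction `rate` of activations and scale
    the survivors by 1/(1-rate) so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

# Sketch of the paper's idea: apply dropout to the *input* layer only,
# before the first hidden layer (sizes here are illustrative).
x = np.ones((4, 8))                # a batch of 4 flattened feature vectors
h_in = dropout_layer(x, rate=0.3)  # ~30% of inputs zeroed during training
print(h_in.shape)
```

At inference time (`training=False`) the layer is a no-op, which is why the inverted scaling is applied during training rather than at test time.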

Towards Exploiting Convolutional Features for Remote Sensing Images Scene Classification

Developments in deep learning have shifted the research directions of remote sensing image scene understanding and classification from pixel-to-pixel handcrafted methods to scene-level image-semantics analysis for scene classification tasks. The pixel-level methods rely on handcrafted feature extraction, which yields low accuracy when the extracted features are fed to a Support Vector Machine for the scene classification task. This paper proposes a generic extraction technique based on convolutional features in the context of remote sensing scene classification. Experimental evaluation with convolutional features on the public datasets WHU-RS, UC Merced, and RESISC45 attains scene classification accuracies of 92.4%, 88.78%, and 75.65% respectively. This demonstrates that convolutional features are powerful for feature extraction, achieving superior classification results compared to the low-level and mid-level feature extraction methods on the same datasets.

Ronald Tombe, Serestina Viriri

Contrast Enhancement in Deep Convolutional Neural Networks for Segmentation of Retinal Blood Vessels

The segmentation of blood vessels from retinal fundus images is known to be complicated. The difficulty results from visual complexities of retinal fundus images such as low contrast, uneven illumination, and noise. These complexities can hamper the optimal performance of Deep Convolutional Neural Network (DCNN) based methods, regardless of their ground-breaking success in computer vision and in segmentation tasks in the medical domain. To alleviate these problems, image contrast enhancement becomes inevitable for improving the visibility of every minute object in retinal fundus images, particularly the tiny vessels, for accurate analysis and diagnosis. This study investigates the impact of image contrast enhancement on the performance of a DCNN-based method. The network is trained with the raw DRIVE dataset in RGB format and with the enhanced version of the DRIVE dataset, using the same configuration. Using the enhanced DRIVE dataset in the segmentation task achieves a remarkable performance gain of 2.60% in sensitivity, besides other slight improvements in accuracy, specificity, and AUC, when validated on the contrast-enhanced DRIVE dataset.

Olubunmi Sule, Serestina Viriri, Mandlenkosi Gwetu

Real-Time Social Distancing Alert System Using Pose Estimation on Smart Edge Devices

This paper focuses on developing a social-distancing alert system using pose estimation on smart edge devices. With the rapid development of deep learning models for computer vision, a vision-based automatic real-time warning system for social distancing has become an emergent issue. In this study, unlike previous works, we propose a new framework for distance measurement using pose estimation. Moreover, the system is developed on smart edge devices and is able to deal with moving cameras instead of the fixed cameras of surveillance systems. Specifically, our method includes three main processes: video pre-processing, pose estimation, and object distance estimation. Experiments on a Coral board, an AI accelerator device, provide promising results for the proposed method, whose accuracies exceed 85% on different datasets.

Hai-Thien To, Khac-Hoai Nam Bui, Van-Duc Le, Tien-Cuong Bui, Wen-Syan Li, Sang Kyun Cha
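The final distance-estimation step reduces to comparing an estimated inter-person distance against a threshold. A simplified sketch, assuming two people's ground positions (e.g. ankle-keypoint midpoints) in image coordinates and a known pixels-per-meter scale; the paper's actual derivation from pose keypoints is more involved, and the numbers below are hypothetical.

```python
import math

def too_close(p1, p2, pixels_per_meter, min_dist_m=2.0):
    """Flag a social-distancing violation from two people's estimated
    positions in image coordinates (simplified flat-ground assumption)."""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    dist_m = math.hypot(dx, dy) / pixels_per_meter
    return dist_m < min_dist_m

# Hypothetical values: two ankle midpoints 150 px apart, 100 px ~ 1 m
print(too_close((400, 600), (520, 690), pixels_per_meter=100))  # True (1.5 m)
```

With a moving camera, the pixels-per-meter scale must be re-estimated per frame (for instance from the apparent size of detected bodies) rather than calibrated once.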

Automatic Video Editor for Reportages Assisted by Unsupervised Machine Learning

In this paper we explain the methodology for building a computer program that automatically edits videos with the only input being files containing raw video interviews. The program produces two outputs: an MP4 video file and an XML file compatible with the most popular video editing programs. The MP4 file is the edited video the program makes, and the XML file contains the timeline information the program puts together, including clip durations, in and out points on the timeline, and information about the timeline itself. Such a program is useful for two general types of users. The first are video editors, for whom it saves time: after generating the XML, they can continue improving the cut without manually cutting and sorting clips. The second are people who are not professional video editors and would use the program as a simple tool that enables them to quickly edit the interviews they shot and publish them on the web. In processing the input video files the program uses, among other technologies, unsupervised machine learning. The program is written in Python and uses FFmpeg for converting audio and video files. It has been tested on a limited sample of video interviews carried out in English.

Leo Vitasovic, Zdravko Kunic, Leo Mrsic

Residual Attention Network vs Real Attention on Aesthetic Assessment

Photo aesthetics assessment is a challenging problem. Deep Convolutional Neural Network (CNN) based algorithms have achieved promising results for aesthetics assessment in recent times. Lately, a few efficient and effective attention-based CNN architectures have been proposed that improve learning efficiency by adaptively adjusting the weight of each patch during the training process. In this paper, we investigate how real human attention, instead of CNN-based synthetic attention, affects image aesthetic assessment. A dataset consisting of a large number of images along with eye-tracking information has been developed for our research using an eye-tracking device ( https://www.tobii.com/group/about/this-is-eye-tracking/ ) powered by sensor technology; this is the first study of its kind in image aesthetic assessment. We adopted Residual Attention Network and ResNet architectures, which achieve state-of-the-art performance in image recognition tasks on benchmark datasets. We report our findings on photo aesthetics assessment with two datasets, consisting of the original images and of images with masked attention patches, and demonstrate higher accuracy compared to the state-of-the-art methods.

Ranju Mandal, Susanne Becken, Rod M. Connolly, Bela Stantic

ICDWiz: Visualizing ICD-11 Using 3D Force-Directed Graph

WHO released ICD-11 in 2018, and it will come into use in 2022. ICD-11 has changed drastically in its complex structure, its enormous size, its multiple sources of classifications and terminologies, and its multiplication of disease codes. The transition from ICD-10 to ICD-11 will be an extremely intricate and long process, especially for international members. In this paper, we present the 3D visualization part of the ICDWiz system, which uncovers ICD-11 using our modified 3D force-directed graph to visualize the information and relationships of the multi-knowledge-source biomedical concepts, terms, strings, atoms, and phrases. The construction of the visualization and its functions, such as the initial graph, the collapse function, ring notation, and constructed text labels, are described. Testing is elaborated using the ICDWiz database, which is developed based on the UMLS Metathesaurus structure. The result reveals a complex visualization of ICD-11 medical information, such as diseases, symptoms, and relationships, integrated from multiple medical classifications such as ICD-11, ICD-10, ICD-10-TM (Thai Modification), MeSH, and SNOMED-CT. The mapping between ICD-10 and ICD-11 is visualized as well.

Jarernsri Mitrpanont, Wudhichart Sawangphol, Wichayapat Thongrattana, Suthivich Suthinuntasook, Supakorn Silapadapong, Kanrawi Kitkhachonkunlaphat
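The core of a force-directed layout, in 2D or 3D, is iterating two forces: repulsion between all node pairs and spring attraction along edges. The following is a generic Fruchterman-Reingold-style sketch, not the ICDWiz implementation; the mini-graph of terminologies is a hypothetical illustration.

```python
import math, random

def force_layout_3d(nodes, edges, iters=200, k=1.0, seed=0):
    """Minimal 3-D force-directed layout (generic sketch)."""
    rng = random.Random(seed)
    pos = {v: [rng.uniform(-1, 1) for _ in range(3)] for v in nodes}
    for _ in range(iters):
        force = {v: [0.0, 0.0, 0.0] for v in nodes}
        for i, u in enumerate(nodes):              # repulsion: k^2 / d
            for v in nodes[i + 1:]:
                d = [pos[u][a] - pos[v][a] for a in range(3)]
                dist = max(math.sqrt(sum(x * x for x in d)), 1e-6)
                f = k * k / dist
                for a in range(3):
                    force[u][a] += f * d[a] / dist
                    force[v][a] -= f * d[a] / dist
        for u, v in edges:                         # attraction: d^2 / k
            d = [pos[u][a] - pos[v][a] for a in range(3)]
            dist = max(math.sqrt(sum(x * x for x in d)), 1e-6)
            f = dist * dist / k
            for a in range(3):
                force[u][a] -= f * d[a] / dist
                force[v][a] += f * d[a] / dist
        for v in nodes:                            # capped displacement step
            disp = math.sqrt(sum(f * f for f in force[v])) or 1e-6
            lim = min(disp, 0.05) / disp
            for a in range(3):
                pos[v][a] += lim * force[v][a]
    return pos

# Hypothetical mini-graph of ICD-11 and related terminologies
nodes = ["ICD-11", "ICD-10", "MeSH", "SNOMED-CT"]
edges = [("ICD-11", "ICD-10"), ("ICD-11", "MeSH"), ("ICD-11", "SNOMED-CT")]
layout = force_layout_3d(nodes, edges)
print({v: [round(c, 2) for c in p] for v, p in layout.items()})
```

Capping the per-iteration displacement (here at 0.05) plays the role of the cooling schedule in Fruchterman-Reingold and keeps the layout from oscillating.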

Decision Support and Control Systems

Frontmatter

An Integrated Platform for Vehicle-Related Services and Records Management Using Blockchain Technology

This paper proposes an integrated system for vehicle-related services and records management using blockchain technology, where various stakeholders (vehicle owners, regional transport offices, insurance issuers, pollution control test centers, traffic police) can avail or provide various services (vehicle registration, driving license issuance, insurance and pollution-under-control certificate issuance, automated insurance claims, an auditable trail of vehicle document access and verification) in a hassle-free manner without any untrusted intermediaries. While the use of blockchain technology increases accountability, transparency, and trust in the system, it also allows a cohesive integration of the proposed system with other smart-city digital infrastructures. We develop a prototype of our proposal as a proof of concept using Hyperledger Fabric, adopting an Attribute-Based Access Control (ABAC) policy, and we present an experimental evaluation to demonstrate the performance of the system. To the best of our knowledge, this proposal is the first of its kind to provide a common blockchain-based platform for all vehicle-related services and records management.

Arnab Mukherjee, Raju Halder
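The core idea of Attribute-Based Access Control (ABAC), which this paper adopts, can be sketched in a few lines of Python (a generic illustration of the ABAC principle, not the authors' Hyperledger Fabric policy code; all names and the policy format are assumptions):

```python
def abac_allow(subject, resource, action, policies):
    """Return True if any policy grants `action` when the subject's and
    resource's attributes match the policy's required attributes."""
    for p in policies:
        if (action in p["actions"]
                and all(subject.get(k) == v for k, v in p["subject"].items())
                and all(resource.get(k) == v for k, v in p["resource"].items())):
            return True
    return False

# Hypothetical policy: insurance issuers may read and approve claims.
policies = [{"subject": {"role": "insurer"},
             "resource": {"type": "claim"},
             "actions": {"read", "approve"}}]
```

Access decisions thus depend on attributes of the requester and the record rather than on fixed per-user permission lists, which suits a platform with many stakeholder categories.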

Increase Driver’s Comfort Using Multi-population Evolutionary Algorithm

The paper presents an application of the authors' own algorithm for reducing optimization calculation time. The algorithm, called the Distributed Multi-Population Evolutionary Algorithm, uses a genetic algorithm with real-coded genes in the chromosome. A client-server architecture was used for processing many populations with simultaneous data exchange between the populations. Each generation consists of the standard genetic operators: natural selection, one-point crossover, and uniform mutation, plus a data exchange between the computational units. This method was used to select values of the damping coefficients of a driver's seat sub-assembly to improve driving comfort. Numerical results for a vehicle driving with different variants of velocity and obstacles are presented in the paper.

Szymon Tengler, Kornel Warwas
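The multi-population (island) scheme with the operators named in the abstract can be sketched as follows (a minimal single-process illustration with the listed operators, not the authors' distributed client-server implementation; all names and parameter values are assumptions):

```python
import random

def evolve_islands(fitness, bounds, n_islands=4, pop_size=20,
                   generations=50, migrate_every=10, pm=0.1):
    """Multi-population GA minimizing `fitness` over box constraints
    `bounds` (real-coded genes; selection, one-point crossover,
    uniform mutation, periodic migration between islands)."""
    dim = len(bounds)
    rand_ind = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    islands = [[rand_ind() for _ in range(pop_size)] for _ in range(n_islands)]
    for g in range(generations):
        for isl in islands:
            isl.sort(key=fitness)
            parents = isl[:pop_size // 2]                 # natural selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, dim) if dim > 1 else 0
                child = a[:cut] + b[cut:]                 # one-point crossover
                for i, (lo, hi) in enumerate(bounds):     # uniform mutation
                    if random.random() < pm:
                        child[i] = random.uniform(lo, hi)
                children.append(child)
            isl[:] = parents + children
        if (g + 1) % migrate_every == 0:                  # data exchange
            best = [min(isl, key=fitness) for isl in islands]
            for i, isl in enumerate(islands):
                isl[-1] = best[(i + 1) % n_islands]
    return min((min(isl, key=fitness) for isl in islands), key=fitness)
```

In the distributed variant each island runs on its own computational unit, and the migration step becomes the network data exchange described in the abstract.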

The Impact of Cybersecurity on the Rescue System of Regional Governments in Smart Cities

The article addresses the issue of the cyber security of IoT in Smart Cities while ensuring the security of citizens through the cooperation of regional governments and safety services (including police, firefighters, and paramedics). The global COVID pandemic has highlighted the need for safety services and regional governments to ensure the safety of citizens using information technology and IoT in Smart Cities strategies. The pandemic revealed conceptual shortcomings of IoT in the security forces and an inadequate approach to cyber security, which manifested itself, for example, in the response to cyberattacks on hospitals around the world. The article focuses on evaluating the concepts of coordination and the use of available technologies employed by safety services. It points to the current problems (positive and negative impacts) of IoT in connection with cyber security. It emphasizes the need to address this issue in more detail and to integrate it into Smart Cities strategies as a separate segment. In the case of the currently resolved pandemic, it is evident that the mutual coordination of regional self-government and safety services using smart technologies and IoT saves many human lives.

Hana Svecova, Pavel Blazek

A General Modeling Framework for Robust Optimization

This paper introduces pyropt, a Python package for robust optimization. The package provides a convenient way of expressing robust optimization problems using simple and concise syntax, based on an extension of the popular cvxpy domain-specific language. Moreover, it offers easy access to a collection of solution algorithms for robust optimization, which utilize state-of-the-art reformulation and cut-generation methods, leveraging a variety of mathematical programming solvers. In particular, the package provides a readily available modeling and solution framework for min-max linear programming, min-max regret linear programming with polyhedral uncertainty, and min-max regret binary programming with interval uncertainty.

Maciej Drwal
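The reformulation idea behind such robust-LP frameworks can be illustrated for interval (box) uncertainty (a generic sketch of the standard robust counterpart, not pyropt's actual API; the function name is an assumption):

```python
def robust_feasible(x, a_bar, delta, b):
    """Check a.x <= b for ALL coefficient vectors a with
    |a_j - a_bar_j| <= delta_j (box uncertainty).

    The worst case of a.x over the box is
        a_bar.x + sum_j delta_j * |x_j|,
    so the robust constraint stays linear in x after introducing
    auxiliary variables for |x_j| -- the reformulation used to turn a
    robust LP into an ordinary LP.
    """
    worst = sum(ab * xj + d * abs(xj)
                for ab, xj, d in zip(a_bar, x, delta))
    return worst <= b
```

A solver-backed package would apply this transformation to every uncertain constraint automatically, then hand the resulting deterministic LP to a standard solver.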

Unsupervised Barter Model Based on Natural Human Interaction

Human interaction is a natural process in business management. In various indigenous cultures, the natives still use a barter system to reach the consensus or balances that determine the essence of their economies. This paper presents an unsupervised model based on pure barter. The main contribution is to visualize the balance achieved in an unsupervised environment by two entities that are close to reaching an agreement. Both game theory and Walrasian theory deal with the problem of the exchange of goods. However, the objective here is to show the barter model from its simplest foundations, building an unsupervised machine learning scheme in which a system of agent pairs represents a basic model for decision making that guarantees an agreement.

Yasmany Fernández-Fernández, Leandro L. Lorente-Leyva, Diego H. Peluffo-Ordóñez, Ridelio Miranda Pérez, Elia N. Cabrera Álvarez

Data Modelling and Processing for Industry 4.0

Frontmatter

Predictive Maintenance for Sensor Enhancement in Industry 4.0

This paper presents an effort to handle 400+ GB of sensor data in a timely manner in order to produce Predictive Maintenance (PdM) models. We follow a data-driven methodology, using state-of-the-art Python libraries that can handle big data, such as Dask and Modin. We use Dynamic Time Warping to describe sensor behavior, an anomaly detection method (Matrix Profile), and forecasting methods (AutoRegressive Integrated Moving Average - ARIMA, Holt-Winters, and Long Short-Term Memory - LSTM). The data was collected by various sensors in an industrial context and is composed of attributes that define their activity and characterize the environment in which they are installed, e.g. optical, temperature, pollution, and working-hours readings. We successfully managed to highlight aspects of all sensors' behaviors and to produce forecast models for distinct series of sensors, despite the dimension of the data.

Carla Silva, Marvin F. da Silva, Arlete Rodrigues, José Silva, Vítor Santos Costa, Alípio Jorge, Inês Dutra
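Dynamic Time Warping, the similarity measure this paper uses to describe sensor behavior, can be sketched in its textbook dynamic-programming form (an illustrative implementation, not the paper's code, which would use an optimized library at this data scale):

```python
def dtw_distance(s, t):
    """Dynamic Time Warping distance between two 1-D series:
    the minimum cumulative cost of aligning s to t, allowing
    elements to be stretched (matched more than once)."""
    n, m = len(s), len(t)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch t
                                 D[i][j - 1],      # stretch s
                                 D[i - 1][j - 1])  # match one-to-one
    return D[n][m]
```

Unlike Euclidean distance, DTW scores two sensor traces as similar even when one runs slightly faster or slower than the other, which is why it suits behavior comparison across sensors.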

Cloud Based Business Process Modeling Environment – The Systematic Literature Review

The article presents the results of a systematic literature review of business process modeling environments based on cloud computing, and the research methodology is developed. A research set of 3901 articles is presented and examined; the sources used for the analyses came from the Scopus, Web of Science, and EBSCOhost databases. The aim of the paper is to show the impact of the cloud computing environment on business process modeling based on the literature review. The research subject area is limited to the Computer Science category, in the cloud computing, business process modeling, and business process management subcategories. The research data set was limited to 313 items, based on the chosen literature sources indexed from 2009 to 2020. The development of cloud-based models in business process modeling and management, known as BPaaS, eBPM, and BPMaaS, is described. The paper shows the state of the art of business process lifecycle management and presents some classic methods used in modeling tools based on BPMN, UML, and Petri net notations. The research methodology uses different methods of data gathering and search algorithms, supported by computer programming. The final findings, based on the research questions (RQ1–RQ6), are described and presented in tables and figures. The conclusion and future research ideas are also presented.

Anna Sołtysik-Piorunkiewicz, Patryk Morawiec

Terrain Classification Using Neural Network Based on Inertial Sensors for Wheeled Robot

In this article, a method of terrain recognition for robotic applications is described. The main goal of the research is to support the robot's motor system in recognizing the environment and adjusting the motion parameters to it, and to support the localization system in critical situations. The proposed procedure uses differences between calculated statistics to detect the type and quality of the ground on which a wheeled robot moves. In the research, an IMU (Inertial Measurement Unit) was used as the main source of data, in particular its 3-axis accelerometer and gyroscope. The experiment involved collecting data with a sensor mounted on a remotely controlled wheeled robot. The data was collected from 4 hand-made platforms that simulated different types of terrain. For terrain recognition, a neural network-based analytical model is proposed. The authors present the results obtained from applying the model to the experimental data. The paper describes the structure of the neural network and the whole analytical process in detail. Finally, a comparison of the obtained results with those of other methods demonstrates the value of the proposed method.

Artur Skoczylas, Maria Stachowiak, Paweł Stefaniak, Bartosz Jachnik
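The "calculated statistics" that such a pipeline feeds to the neural network are typically windowed vibration features. A minimal sketch (the exact feature set used by the authors is not specified in the abstract; these are common choices, and all names are assumptions):

```python
import math, statistics

def imu_features(window):
    """Statistical features of one window of IMU samples
    (e.g. vertical acceleration) used as neural-network inputs."""
    mean = statistics.mean(window)
    std = statistics.pstdev(window)
    rms = math.sqrt(sum(x * x for x in window) / len(window))
    return {
        "mean": mean,
        "std": std,
        "rms": rms,                                   # vibration energy
        "peak_to_peak": max(window) - min(window),    # shock amplitude
        "crest_factor": max(abs(x) for x in window) / rms if rms else 0.0,
    }
```

Rough terrain raises the standard deviation, RMS, and peak-to-peak values relative to smooth ground, which is what lets a small classifier separate the terrain types.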

Backmatter
