
2024 | Book

Digital Ecosystems: Interconnecting Advanced Networks with AI Applications


About this book

This book covers several cutting-edge topics and provides a direct follow-up to earlier publications such as “Intent-based Networking” and “Emerging Networking”, bringing together the latest network technologies and advanced AI applications. Typical subjects include 5G/6G, clouds, fog, leading-edge LLMs, large-scale distributed environments with specific QoS requirements for IoT, robots, machine and deep learning, chatbots, and further AI solutions. The combination of smart applications, network infrastructure, and AI is highly promising and offers real synergy. Special aspects of current importance, such as energy efficiency, reliability, sustainability, security and privacy, telemedicine, e-learning, and image recognition, are addressed as well. The book is suitable for students, professors, and advanced lecturers in networking, system architecture, and applied AI. Moreover, it serves as a basis for research and as inspiration for professionals looking for new challenges.

Table of Contents

Frontmatter
Revolutionizing Digital Ecosystems with Artificial Intelligence: Challenges, Concepts, and Future Directions

The progressive development of artificial intelligence (AI) combined with advanced information technologies is driving a new era of innovation, transforming human interaction through various digital ecosystems. The chapter highlights the importance and challenges of integrating AI into digital ecosystems. A concept of AI-enhanced ecosystems is proposed, and its impact on diverse sectors is discussed, including smart cities, industrial Internet of Things (IoT), healthcare, agriculture, autonomous transportation, and more. The paper addresses the issues of incompatibility and limited interaction among numerous technological devices and IoT systems in the construction of intelligent digital ecosystems. To solve this problem, the open-source Home Assistant digital platform was used. It facilitates the integration of artificial intelligence applications and various devices from different vendors into a single digital ecosystem. We provide an overview of the Home Assistant platform and its design, describe the step-by-step integration of an AI library, and present the results of various original test cases, including motion and object recognition, intelligent temperature monitoring, server health, power grid management, and a robot vacuum cleaner—all within a single platform. As a result, it is emphasized that the future of digital ecosystems with AI promises revolutionary changes, making their functionality more intelligent, adaptive, and comfortable for users.

Mykola Beshley, Mikhailo Klymash, Halyna Beshley, Yuriy Shkoropad, Yuriy Bobalo
AI Affected Job Replacements and Accompanying Ethical Problems

With ChatGPT, artificial intelligence has invaded the everyday life of people in the industrialized world almost overnight. Within a few months, ChatGPT became known and feared as a revolutionary technology. The OpenAI company and its boss, Sam Altman, suddenly became known to a large public. After just a few months, Sam Altman's AI software already had over 100 million users. Alongside the huge chances and possibilities, however, fears, threats, and fantasies of dangers appear that are currently not at all foreseeable. Opportunities and risks are suddenly in an unknown dimension like the famous pink elephant in the room. On the one hand, the architects of the software outdo each other in describing capabilities, performance options, and new opportunities. On the other hand, there are widespread warnings about the danger of abuse, loss of control, and harm. Some countries have introduced controls that others consider to be grossly excessive. There is a lot of ambiguity and uncertainty. The article “Job losses caused by artificial intelligence and associated ethical problems” deals with specific forecasts and expected changes in the world of work. For this purpose, the first concrete changes in the world of work and economic influences are collected on the basis of experience in dealing with AI. These influences are then examined more closely from an economic-ethical point of view. Finally, questions about the intelligence and consciousness of artificial intelligence are discussed in connection with Maslow's needs model and the question of whether we have to be afraid of artificial intelligence or not.

Jürgen Smettan
Generative AI-Language Models in Didactics and Communication for Inclusiveness

In the future, the use of large language models offers the potential to support people in all activities and areas of life, both privately and professionally. People who experience disabilities encounter far greater barriers in everyday life than people without disabilities. Large language models could break down language barriers in the future and also be an additional driver for inclusion since they enable people who experience disabilities to gain new access to all areas of social life and give them the opportunity to participate more. They can also enable more individualized learning in education, which facilitates inclusive teaching. Nevertheless, large language models also carry the risk of discriminating against this group of people in particular, for example by not recognizing them or systematically excluding people who experience disabilities. Therefore, when developing and maintaining large language models, all ethical considerations must be taken into account to prevent bias, discrimination, and harassment. Since training data historically includes data on discrimination against people who experience disabilities, it is necessary that more up-to-date data is collected from this heterogeneous group of people in order to obtain algorithms that enable a reliable mapping of minorities in our society.

Verena A. Müller, Juliane Heidelberger
How AI Meets Networking and Networks Meet AI Applications

This work aims to explore the potential of advanced network technologies to support Artificial Intelligence (AI) applications and so-called Digital Ecosystems, with a focus on interconnecting both while improving user Quality of Experience (QoE). The work provides an analysis of current opportunities, challenges, and case studies and examines ongoing models and algorithms for future Digital Ecosystems: architectures, platforms, smart applications, and services. The slogan is: “Networks meet AI as well as AI meets Networking!”

Andriy Luntovskyy
A Survey of Deep Learning for Remote Sensing, Earth Intelligence and Decision Making

This article presents a comprehensive overview of the applications of deep learning techniques in remote sensing. It discusses various neural network architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), and graph neural networks, and their utilization in tasks such as land cover mapping, object detection, image segmentation, time series analysis, and change detection. Specifically focusing on Ukraine, the article highlights successful applications of deep learning, such as multi-temporal crop classification using Sentinel data, object detection for disaster monitoring, agricultural yield forecasting through RNNs and GANs, and the potential of graph neural networks in various remote sensing applications. Future prospects include further innovation in deep learning architectures, particularly transformers, and the adoption of hybrid models for Earth Intelligence and decision-making support.

Nataliia Kussul, Volodymyr Kuzin, Andrii Shelestov
Intelligent Hierarchical Coordination Fault-Tolerant Routing Method Under End-to-End Quality of Service Protection in Multidomain Softwarized Networks

The paper presents research on a delay-based intelligent hierarchical coordination fault-tolerant inter-domain QoS-routing method designed for use in softwarized networks with a multidomain architecture. The method aims at ensuring QoS in terms of bandwidth and average end-to-end packet delay. It protects the inter-domain router while calculating the primary and backup paths. The essence of the intelligent hierarchical-coordination fault-tolerant QoS-routing method is the concurrent computation of primary and backup inter-domain routes such that the QoS requirements for bandwidth and average end-to-end delay are met for each flow. The flow-based optimization model of inter-domain QoS routing with the protection of inter-domain border routers underlies the method. The optimal inter-domain routes are calculated using the two-level goal coordination method. The domain controllers at the lower level calculate the set of primary and backup paths considering the selected routing metric. At the same time, the network controller at the upper level coordinates lower-level decisions by ensuring inter-domain route connectivity and end-to-end delay. Therefore, the task of guaranteeing end-to-end QoS is solved not at the level of the domain controllers but at the SDN controller level by performing gradient coordination procedures and machine learning algorithms. Hence, the offered method increases inter-domain routing flexibility and ensures QoS and fault tolerance.

Oleksandr Lemeshko, Oleksandra Yeremenko, Maryna Yevdokymenko, Mykola Maiba
Peculiarities of Classification of Lossy Compressed Multichannel Remote Sensing Images Using Trained Neural Networks

Many modern remote sensing images are multichannel and, due to this as well as to their high resolution, occupy quite a large space. This causes problems in their transfer and storage and leads to the necessity of applying compression, where lossy compression is mainly used. The compressed images can then be processed in different ways, where classification is a typical operation for which trained neural networks are widely used. Classifier performance depends on many factors, including which images are employed in training. We have earlier shown that if compressed images are planned to be classified, it is worth using compressed images for training as well. However, the images used for training and those employed in classification can be obtained by different compression techniques. Hence, in this paper, we analyze and compare the results of using the same and different coders for training and classified images. It is demonstrated that the difference in classification accuracy is not large, whether the same coder is used for the training and classification data or the compression techniques differ. The largest difference has been observed for situations when one coder is DCT-based and the other is wavelet-based.

Volodymyr Lukin, Fangfang Li, Galyna Proskura, Sergii Kryvenko, Benoit Vozel
Reducing the Impact of Unstable Connections Among Nodes of Wireless IIoT Clusters Using Machine Learning Methods

Distributed systems have become increasingly popular in recent years due to their ability to handle massive amounts of data and provide a high level of availability and scalability. However, designing and implementing distributed algorithms is a complex task that poses significant challenges. For example, an IoT cluster of sensors for environmental monitoring can utilize a consensus algorithm. In this scenario, multiple sensor nodes are deployed in various locations to collect environmental data such as temperature, humidity, air quality, and other parameters. Consensus algorithms are crucial in distributed systems as they enable all nodes in the system to agree on a single value, even in the presence of faults. To ensure data consistency and reliability in such a distributed IoT cluster, a consensus algorithm can be employed. However, existing consensus algorithms suffer from several issues that can impact their performance and efficiency. One such issue arises when the cluster has an unstable connection to the master node, preventing the algorithm from functioning: a follower node assumes the master has failed and starts the leader re-election process. This can cause the cluster to become stuck, with the leader re-election process constantly repeated. To overcome this challenge, we propose to utilize temporal analysis with a Seasonal Autoregressive Integrated Moving Average (SARIMA) machine learning model. Temporal analysis involves analyzing the response time from each node in the system. This approach can help predict delays and failures in distributed systems and enable the development of more efficient consensus algorithms. The proposed technique and model can help address one of the problems associated with existing consensus algorithms. By leveraging temporal analysis, the technique can mitigate the impact of unstable connections in the cluster. The model can help predict the response time of a functioning node that is cut off by a network link failure, enabling better management of the system’s performance.
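
As a rough illustration of this idea, the sketch below fits a seasonal ARIMA model to a hypothetical series of heartbeat round-trip times and uses the forecast to decide whether a re-election should be postponed. It assumes the statsmodels SARIMAX class; all data, model orders, and thresholds are illustrative rather than taken from the chapter.

```python
# A minimal sketch, not the chapter's implementation: forecast node response
# times with a seasonal ARIMA model so a follower can distinguish a
# slow-but-alive master from a failed one before starting re-election.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical history of heartbeat round-trip times (ms), one per second
rtt_history = np.array([12, 14, 13, 15, 40, 13, 14, 16, 13, 15,
                        42, 14, 13, 15, 14, 41, 15, 13, 14, 16], dtype=float)

# Seasonal period of 5 reflects an assumed periodic load pattern on the link
model = SARIMAX(rtt_history, order=(1, 0, 1), seasonal_order=(1, 0, 1, 5))
fitted = model.fit(disp=False)

# Predict the next few heartbeats; if the expected delay stays below the
# election timeout, the follower waits instead of triggering re-election
forecast = fitted.forecast(steps=3)
election_timeout_ms = 50.0
if np.all(forecast < election_timeout_ms):
    print("master likely reachable, postpone re-election:", forecast.round(1))
else:
    print("forecasted delays exceed timeout, start re-election:", forecast.round(1))
```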

Stanislav Zhuravel, Mikhailo Klymash, Olha Shpur, Vasyl Mrak
Ensemble Methods of Determining the Effective Activity of Enterprises

Ensemble methods, which are effective for determining the activities of enterprises, have been studied. They are based on the idea that a combination of several models can provide more accurate predictions than any single model. The application of ensemble methods provides more accurate and reliable results, which helps enterprises to make better decisions and improve their efficiency. The main types of ensemble methods are as follows. Random forest, one of the most popular ensemble methods, uses multiple decision trees to provide more accurate results; each tree is built on a subset of data and a subset of features. Gradient boosting also uses multiple models to provide more accurate predictions; each subsequent model tries to correct the errors of the previous one, which improves forecast quality. Stacking uses multiple models of different types; the results of several models serve as inputs for another model, which improves forecasting quality. Bagging uses multiple copies of the same model to analyze the data; each model is built on a random subset of data and a random subset of features. The study also used individual models, such as linear regression and decision trees, that were combined into an ensemble to obtain better results. Ensemble methods have been applied to determine enterprise performance in many industries, such as finance, marketing, and manufacturing. In this publication, an example of the implementation of ensemble methods on a dataset of enterprise functioning has been developed.
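
A minimal sketch of this kind of comparison is given below: random forest, gradient boosting, and a stacking ensemble built from linear regression and a decision tree are cross-validated with scikit-learn on a synthetic stand-in for an enterprise performance dataset. The data and hyperparameters are illustrative assumptions, not those used in the chapter.

```python
# Compare three ensemble strategies on a synthetic regression dataset.
from sklearn.datasets import make_regression
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              StackingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: rows are enterprises, columns are performance indicators
X, y = make_regression(n_samples=500, n_features=12, noise=10.0, random_state=0)

models = {
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
    "stacking": StackingRegressor(
        estimators=[("lr", LinearRegression()),
                    ("tree", DecisionTreeRegressor(max_depth=6, random_state=0))],
        final_estimator=LinearRegression()),
}

for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {score:.3f}")
```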

Mariia Nazarkevych, Vasyl Lytvyn, Dmytro Demchyk
The Energy Transition in Germany Requires an AI-Supported Dynamic Control of the Power Supply Network

This position paper is based on the best practices of Smart Grid, chosen case studies for energy efficiency, and an in-depth analysis of energy transition problems in Germany. In addition, real examples based on the authors' own experience with an energy-efficient Smart Home are presented. The focus is on the requirements for modern power grids arising from the volatility of renewable energies. Furthermore, we discuss dynamic network control problems that can be solved by infrastructure approaches such as Smart Grid, NGN with 5G and Beyond, and Smart Home, as well as by Artificial Intelligence (AI) methods, which will play an increasingly important role. In particular, we propose a proof-of-concept test-bed for AI-supported dynamic control of the smart grid. We have successfully developed a solar power generation forecasting model based on the Long Short-Term Memory (LSTM) algorithm and implemented an automatic underfloor heating control system that uses these forecasts to optimize energy usage in a private household. We achieved a high accuracy of 91% in forecasting solar power generation and ensured efficient, condition-based energy management. This is work in progress. The obtained results can also be extended to larger economic objects: power supply networks for communities, enterprises, geographic areas, and industries.
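
The sketch below shows what such an LSTM forecaster can look like in outline: a tiny Keras model trained on a synthetic generation curve predicts the next hour of solar output, which a heating controller could then use for scheduling. The network size, window length, and data are assumptions for illustration only and do not reproduce the authors' model or its 91% accuracy.

```python
# A minimal sketch: LSTM forecast of the next hour of solar generation
# from a 24-hour window of past generation values (synthetic data).
import numpy as np
import tensorflow as tf

WINDOW = 24  # hours of history per training sample

# Hypothetical generation series (kW); in practice this comes from the inverter
series = np.abs(np.sin(np.linspace(0, 60, 2000))) * 3.0
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]
X = X[..., np.newaxis]  # shape (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Forecast for the latest window; a heating controller can then decide
# whether to pre-heat while surplus solar power is expected
next_hour_kw = model.predict(X[-1:], verbose=0)[0, 0]
print(f"forecast for next hour: {next_hour_kw:.2f} kW")
```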

Dietbert Guetter, Andriy Luntovskyy, Pavlo Beshley, Mykola Beshley
Artificial Intelligence and MicroRNA: Role in Cancer Evolution

MicroRNAs play an important role in genetic cellular regulation that determines the development of various clinical diseases, in particular cancer. They strongly determine gene expression, that is, protein formation, which means that if this mechanism does not work properly, the development of malignant proteins is inevitable. Our work is dedicated to the application of deep neural network learning techniques to build predictive models for the diagnosis and clinical characterization of malignant tumors. As datasets for deep learning, we used the open clinical database dbDEMC 3.0—the functional exploration of differentially expressed miRNAs in cancers of human and model organisms. A technique for non-invasive biomarkers based on miRNA expression has been developed for all subtypes of lung and renal cancer. New AI-generated miRNA expressions for lung cancer subtypes are predicted, with the selection of the 6 most characteristic expressions per subtype, which forms the basis of a new generalized biomarker of the disease and its clinical conditions. Subsequent experimental verification of the generalized biomarkers is expected.

Dimitri Koroliouk, Maurizio Mattei, Maxym Zoziuk, Carla Montesano, Roberta Bernardini, Marina Potestà, Laure Deutou Wondeu, Stefano Pirrò, Andrea Galgani, Vittorio Colizzi
A Hybrid Collaborative Filtering Based Recommender Model Using Modified Funk SVD Algorithm

Big data processing is a growing problem for information and communication systems. Based on the identification of features and relationships in arrays of information, the most effective scenarios for further actions are determined. Examples of systems that use big data analysis to improve the quality of user service are the Industrial Internet of Things, smart cities, and intelligent commercial structures. Since smart production systems cover both the processes of direct production of items and their sale on the market, it is necessary to constantly monitor demand. Determining current offers for goods or services promotes the interest of customers in the products of the system. Algorithms and methods of big data processing in information and communication systems are analyzed in the article. A comparative description of the main approaches to distributed collection, storage, and processing of information from different users is carried out. Features of functioning and basic requirements for the operation of Industrial Internet of Things systems, the use of machine learning methods, and mathematical statistics for determining the main features for further data processing and the rejection of redundancy are considered. The use of recommender systems to solve the problem of finding relationships between data is examined. A collaborative filtering model using the Funk SVD method for more efficient data processing is proposed. The Funk SVD algorithm is modified using the neighborhood-based collaborative filtering method to compute on less data and reduce the processing time while maintaining the accuracy of the result. A simulation of the operation of the modified algorithm is carried out, the results of which demonstrate high speed and accuracy compared to the unmodified one. The use of a distributed FedSVD algorithm is proposed to improve the efficiency of recommender systems, aggregate calculations, and reduce the load on a single device. It is determined that the results of the research and the proposed algorithms can be used to create a hybrid recommender system that processes big data while flexibly determining the optimal parameters of operation.
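
For orientation, the sketch below implements plain (unmodified) Funk SVD by stochastic gradient descent on a toy rating matrix; the chapter's neighborhood-based modification and the distributed FedSVD variant are not reproduced, and all hyperparameters are illustrative assumptions.

```python
# Plain Funk SVD: factorize a sparse rating matrix into user and item
# latent factors by SGD, then predict the missing ratings.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical user-item ratings; 0 means "unknown"
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)
users, items = R.shape
k, lr, reg, epochs = 2, 0.01, 0.02, 2000

P = rng.normal(scale=0.1, size=(users, k))   # user latent factors
Q = rng.normal(scale=0.1, size=(items, k))   # item latent factors

observed = [(u, i) for u in range(users) for i in range(items) if R[u, i] > 0]
for _ in range(epochs):
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]
        p_old = P[u].copy()
        P[u] += lr * (err * Q[i] - reg * P[u])   # SGD update of user factors
        Q[i] += lr * (err * p_old - reg * Q[i])  # SGD update of item factors

print(np.round(P @ Q.T, 2))  # filled-in rating predictions
```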

Mikhailo Klymash, Olena Hordiichuk-Bublivska, Yaroslav Pyrih, Oksana Urikova
The Use of Artificial Intelligence Models in the Automated Knowledge Assessment System

A functional structure of an intelligent system for the linguistic analysis of an extended text response utilizing artificial intelligence models has been developed and tested in this research paper. An algorithm of fuzzy semantic comparison of textual information—answers to questions submitted by a student in natural language—with variants of correct answers has been elaborated, which formalizes the description of the linguistic structure of the study content and the answers. The algorithm provides automatic conversion of the student’s responses from natural language into an intersystem form and the formation of lexical units of the text, followed by morphological, syntactic, semantic, and pragmatic analysis. Artificial intelligence models for comparing the content of textual information are used at the stages of semantic and pragmatic analysis. As a result of the semantic analysis, a semantic network is constructed—a framework for knowledge representation in the form of nodes connected by arcs (links). In the pragmatic analysis, it is determined whether a response belongs to a particular subject area. These stages are proposed for implementation through the use of neural networks. The advantage of using neural networks is their versatility: the same network structure can be adapted (trained) to compare texts from various subject areas. In contrast to the known methods of semantic and pragmatic analysis, algorithms based on artificial intelligence models will provide more opportunities to inspect automated text responses given in free form in natural language with greater certainty.

Ivan Katerynchuk, Oksana Komarnytska, Andrii Balendr
Development of a Universal High-Performance Machine Learning Framework for Finding Cybersecurity Anomalies in Big Data

The main aim of the investigation is to simplify day-to-day operations at a Cyber Security Operations Center (CSOC). The management of an array of cybersecurity tools, such as Intrusion Detection System (IDS), Endpoint Detection and Response (EDR), Next-Generation Firewall (NGFW), Data Loss Prevention (DLP), and Cloud Access Security Broker (CASB), poses a great challenge for a CSOC. Additionally, there is a massive inflow of raw data from various systems and applications that adds to this problem. This circumstance puts significant stress on CSOC analysts, resulting in mishandled events, delayed response times, longer event processing timeframes, and false alarms, further adding complexity. Two pragmatic alternatives are available to effectively solve these problems: increase CSOC funding by hiring additional analysts, or simplify the identification of and response to anomalies.

Roman Karpiuk, Yaryna Kokovska, Petro Venherskyi
Method for Detecting Phishing Sites

Phishing attacks are a serious threat to the security of confidential data. Traditional user training is not able to fully counter unknown threats. Machine learning is currently the most effective mechanism for detecting phishing sites. Therefore, the presentation of a method in which detection is realized on the basis of fuzzy clustering, ensuring the automated operation of the algorithm without the intervention of an observer, is a relevant topic. The article examines current threats and methods of countering phishing attacks that use fake websites. The market for solutions that use machine learning methods to detect phishing sites is analyzed. An improved model for detecting phishing sites is proposed, based on fuzzy C-means clustering of compromise indicators and on merging the closest clusters, provided that the model efficiency is highest and the number of clusters is greater than two. The proposed model has the potential to become an effective tool for detecting phishing sites and ensuring the security of confidential data. However, further research and improvement of the model, including parameter optimization and the use of additional machine learning techniques, is necessary to achieve optimal results. In general, the article aims to improve the level of protection against phishing attacks by using machine learning techniques and developing a new detection model based on fuzzy clustering. It makes an important contribution to the field of cybersecurity and helps to protect confidential user data from phishing attacks.

Serhii Buchyk, Serhii Toliupa, Oleksandr Buchyk, Anatolii Shevchenko
Using of Computer Vision with Stroboscopic Imaging in 5G

The paper investigates the possibility of detecting the rotational movement of objects with axial symmetry based on stroboscopic shooting of the initial image. A dataset for the subsequent machine learning process has been developed using computer graphics methods: an array of binary images with axes of symmetry of the second, third, and fourth orders is generated. To study the effect of stroboscopic shooting, this array is supplemented with binary graphic images with the effects of 2-, 3-, … and 10-fold stroboscopic shooting. Since the objects have axial symmetry, circular trajectories are used to scan the image plane to investigate their properties. The scan trajectories add two more parameters to the problem: the radius of the trajectory and the offset of the centre of the trajectory from the centre of the object. The scan then produces linear binary arrays consisting of 360 elements, defined by one complete rotation around the centre of the circle in the image plane. These arrays are the initial data for selecting characteristics for machine learning algorithms. One such characteristic, in particular, is the fraction of bits that have the value «0». Studies of the effect of the main model parameters (scanning radius and multiplicity of stroboscopic shooting) on this value have shown its effectiveness for use as a machine learning characteristic of the system. On the basis of such a dependence, an optimization function is built, which is the sum of squares of mixed second-order derivatives with respect to the variables corresponding to the main model parameters. The stability of the method against errors of the second kind (false detection) is studied. The robustness of the method against errors arising both in the process of receiving input data and in the process of transferring the binary arrays obtained by scanning the initial images is investigated. The practical value of the developed method lies in the fact that it can be applied to the development and manufacture of optical sensors for detecting the rotational movement of objects with axes of symmetry of the second and higher orders.
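
The sketch below illustrates the scanning step only: a binary image is sampled along a circular trajectory of 360 points, and the fraction of zero bits is returned as a feature. The test image, radius, and offset are illustrative assumptions rather than the chapter's generated dataset.

```python
# Sample a binary image on a circle (360 points) and compute the
# fraction of '0' pixels, which the chapter uses as an ML feature.
import numpy as np

def circular_scan_zero_fraction(image, center, radius, offset=(0, 0), samples=360):
    """Scan the image along a circular trajectory and return the zero-bit fraction."""
    cy, cx = center[0] + offset[0], center[1] + offset[1]
    angles = np.deg2rad(np.arange(samples))            # one sample per degree
    rows = np.clip(np.round(cy + radius * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.round(cx + radius * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
    scan = image[rows, cols]                           # linear binary array, 360 elements
    return float(np.mean(scan == 0))

# Hypothetical test image: a white disk on a black background
img = np.ones((128, 128), dtype=np.uint8)
yy, xx = np.mgrid[0:128, 0:128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 > 40 ** 2] = 0

print(circular_scan_zero_fraction(img, center=(64, 64), radius=30))
print(circular_scan_zero_fraction(img, center=(64, 64), radius=50, offset=(5, -3)))
```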

Ruslan Politanskyi, Andrii Samila, Vitalii Vlasenko
Fuzzy Logic Models for Technological and Communication Electronic Control Systems

In this chapter, fuzzy-logic models of intelligent technological and communication electronic systems are considered. Fuzzy-logic models are treated as a separate class of problems within artificial intelligence tasks. All presented models are based on the formation of membership functions, the formulation of fuzzy rules, and the fuzzification and defuzzification of controlled parameters using the corresponding mathematical operations of fuzzy logic theory through the analysis of fuzzy statements. As examples of fuzzy logic models for electronic systems, a model for controlling the temperature of substrates in technological equipment for thin film deposition, a model for controlling the current of high-voltage glow-discharge electron guns, and a model for a radar tracking system are considered. Basic theoretical principles of fuzzy logic control systems, such as membership functions, fuzzy rules, and the fuzzification and defuzzification of input and output variables, are considered, and the corresponding definitions are presented. It has been proven that the use of fuzzy logic algorithms for electronic control systems in technological and communication equipment makes it possible to eliminate the effects of overshoot and unwanted bursts of controlled values. The membership functions are formed in accordance with the physical properties of the control elements and the control object.
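
To make the fuzzification, inference, and defuzzification pipeline concrete, the sketch below evaluates one crisp input against two triangular membership functions, applies two Mamdani-style rules, and defuzzifies by the centroid method. The linguistic terms, triangle parameters, and the heater-power example are illustrative assumptions, not the chapter's substrate-temperature model.

```python
# Triangular fuzzification, a two-rule Mamdani inference, and centroid
# defuzzification for a single crisp input (toy temperature controller).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

universe = np.linspace(0, 100, 1001)   # output universe: heater power, %
error = 12.0                           # crisp input: temperature error, K

# Fuzzification of the input with two linguistic terms ("small", "large")
mu_small = tri(error, 0, 5, 20)
mu_large = tri(error, 10, 25, 40)

# Mamdani rules: small error -> low power, large error -> high power
low_power  = np.minimum(tri(universe, 0, 10, 50), mu_small)
high_power = np.minimum(tri(universe, 50, 90, 100), mu_large)
aggregated = np.maximum(low_power, high_power)

# Centroid defuzzification yields the crisp control action
power = float(np.sum(universe * aggregated) / np.sum(aggregated))
print(f"heater power: {power:.1f} %")
```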

Igor Melnyk, Serhii Tuhai, Mykhailo Skrypka, Iryna Shved
Membership Root-Polynomial Functions

In the chapter, fuzzy logical models with root-polynomial membership functions are considered and analyzed. The properties of root-polynomial dependences are examined, and relevant examples of using such dependences as membership functions in fuzzy logic tasks for real technological and communication electronic control systems are given. The analytical relationships for calculating the coefficients of fifth-order and sixth-order root-polynomial functions are given, and computer software for calculating the coefficients of different types of root-polynomial functions used as membership functions has been developed. Generally, root-polynomial functions are considered in the chapter as membership functions. The proposed approach is very effective for implementation in digital precision intelligent control systems, which are elaborated using the philosophy of artificial intelligence.

Igor Melnyk, Serhii Tuhai, Mykhailo Skrypka, Alina Pochynok
Computational Intelligence for IoT and Smart Home Based on Piezo-Electric Sensors and Fuzzy Logic

In this chapter, the problem of creating computational intelligence solutions for IoT and Smart Home tasks is considered. Firstly, the cross-layered design model for Smart Home and Smart Office, as well as the associated standards and protocols, is considered. The main criteria for the efficiency of using different approaches in Smart Home standards, as well as their compatibility, are also discussed. The possibilities of forming intelligent Smart Home systems are considered in two main aspects: the use of suitable new types of temperature and pressure piezo-sensors based on Zirconate-Titanate compounds, and the solving of fuzzy logic tasks using novel types of membership functions, such as root-polynomials. Both local and cloud realizations of the proposed fuzzy-logic control system for IoT, Smart Office, and Smart Home applications are possible. The main advantages of the proposed system are its low cost, the application of effective and cheap piezo-sensors, its low energy consumption, and the advanced approach of using root-polynomial membership functions in the digital control system of the Smart Home. Corresponding computer software for obtaining information from sensors and for calculating the coefficients of root-polynomial membership functions has been elaborated using the advanced mathematical and graphic libraries of the Python interpreter for solving scientific tasks, installed manually in a virtual environment on a local computer.

Igor Melnyk, Vladyslav Klymenko, Mykhailo Skrypka, Tetiana Khyzhniak, Alina Pochynok
AI-Driven E2E Testing and Cucumber Test Generation: A GPT-Powered Approach for Improved Software Quality and Collaboration

This research proposes leveraging Generative Pretrained Transformer (GPT) to train on software requirements and create a custom model. The model will power an intelligent chatbot for developers and generate Cucumber test scenarios, improving E2E testing efficiency and collaboration. This approach enhances test coverage, reduces maintenance overhead, and accelerates test case creation, providing valuable insights into AI-driven testing in software development.

Ulrich Winkler
OrgPad Tool and Its Role for Elementary School STEM Didactics

The use of diagramming tools and graphical organizers in elementary school STEM education is well known. The digital revolution, focusing on interdisciplinary learning, critical thinking, and effective communication forces us to find more effective tools and methods. Through a unique combination of approaches called free structuring, the graph diagramming tool OrgPad.com increases the effectiveness of teaching and learning and computer-aided understanding of information and relationships between information by humans. Specific didactic methods and the results of their practical application are presented.

Martina Maněnová, Adam Kalisz, Vít Kalisz
Prompt Engineering for Domain-Oriented AI Support Tools: Ontologies, Mind Maps, Namespaces, Source Code Fragments

Chat AI systems, recently popularised by ChatGPT, allow interactions by exchanging linear text messages. Graph-like structures introduced by Z.Hedrlín represent and transfer information better than the classical linear form. We examine the possibility of using these structures to improve communication with these AI systems. We show that it is possible to create a simple graph-like structure about a topic that better captures and transfers understanding of the AI system.

Vít Kalisz, Adam Kalisz
Agent-Based Autonomous Robotic System Using Deep Reinforcement and Transfer Learning

Real robots have different constraints, such as limited battery capacity, hardware cost, etc., which make it harder to train models and conduct experiments on physical robots. Transfer learning can be used to bypass those constraints by training a self-driving system in a simulated environment with the goal of running it later in the real world. The simulated environment should resemble the real one as much as possible to enhance the transfer process. This paper proposes a specification of an autonomous robotic system using an agent-based approach. It is modular and consists of various types of components (agents), which vary in functionality and purpose. Thanks to the system's general structure, it may be transferred to other environments with minimal adjustments to the agents' modules. The autonomous robotic system is implemented and trained in simulation and then transferred to a real robot and evaluated on a model of a city. A two-wheeled robot uses a single camera to obtain observations of the environment in which it operates. These images are then processed and given as input to a deep neural network that predicts the appropriate action in the current state. Additionally, the simulator provides a reward for each action, which is used by the reinforcement learning algorithm to optimize the weights of the neural network in order to improve overall performance.

Vladyslav Kyryk, Maksym Figat, Marian Kyryk
Investigation of a Genetic Algorithm for Solving the Travelling Salesman Problem

The paper describes in detail the stages of the genetic algorithm (GA), including population initialisation, assessment of the fitness of the resulting solution, selection, crossover, and mutation. The features of using the most common selection, crossover, and mutation operators are analysed. The mathematical apparatus of the procedure for calculating the geographical distance between cities based on the Haversine formula for the travelling salesman problem (TSP) is presented. It is proposed to use a combined selection operator, tournament and elite, to improve the efficiency of the genetic algorithm in solving the TSP. The proposed selection operator allows for a variety of solutions (tournament selection) and the preservation of the best individuals in the population from generation to generation (elite selection). This facilitates the search for new and unpredictable solutions that can be competitive and ensures that the best solutions are not lost during evolution. To evaluate the proposed solution, we investigated its use in a genetic algorithm designed to build the shortest route for 29 cities in Bavaria, in accordance with the TSP. The obtained simulation results demonstrate the effectiveness of using the combined selection operator to find the shortest route in comparison with the tournament and elite operators used separately. Thus, the paper demonstrates the solution of the TSP by applying a GA whose main components are the proposed combined selection operator, the ordered crossover operator, and the inversion mutation operator.
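
Two of these building blocks are easy to show in isolation. The sketch below computes the Haversine distance between city coordinates and performs one combined elite-plus-tournament selection step over a toy population of tours; the city list, population size, and operator parameters are illustrative assumptions, not the 29-city Bavarian instance from the chapter.

```python
# Haversine distance and a combined elite + tournament selection step,
# two components of a GA for the travelling salesman problem.
import math
import random

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two points given in decimal degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def combined_selection(population, fitness, elite=2, tournament_k=3):
    """Keep the best individuals unchanged, fill the rest by tournaments."""
    ranked = sorted(population, key=fitness)
    selected = ranked[:elite]                              # elitism
    while len(selected) < len(population):
        contenders = random.sample(population, tournament_k)
        selected.append(min(contenders, key=fitness))      # tournament winner
    return selected

# Hypothetical usage: tours over three Bavarian cities (lat, lon)
cities = {"Munich": (48.14, 11.58), "Nuremberg": (49.45, 11.08), "Augsburg": (48.37, 10.90)}

def tour_length(order):
    return sum(haversine_km(*cities[a], *cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

population = [random.sample(list(cities), len(cities)) for _ in range(6)]
print(combined_selection(population, tour_length)[0])  # best tour so far
```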

Yaroslav Pyrih, Andrii Masiuk, Yuliia Pyrih, Oksana Urikova
Traffic Control System Based on Neural Network

The advantages of the proposed system over competitors lie in the use of the latest development technologies in this field, which have already proven their superiority not only under experimental conditions but also in real examples. The proposed system uses a neural network trained by reinforcement learning. It is proposed to take the Advantage Actor-Critic (A2C) algorithm as a basis. It is an optimization of the Deep Q-Learning (DQN) algorithm that performs just as well but reduces the time required for computation and training by incorporating parallelism in reinforcement learning. The chosen method learns quickly, which allows for rapid implementation and practical application. A separate advantage is easy integration with already operating systems, such as “Safe City”; to do this, one only needs to obtain the current road map through the API. One of the main advantages of the system is the use of the best algorithm for calculating traffic light phases, which was experimentally confirmed.

Bohdan Zhurakovskyi, Oleksiy Nedashkivskiy, Mikhailo Klymash, Oleksandr Pliushch, Volodymyr Saiko
Intelligent Mixed-Signal Embedded Systems for Electromagnetic Tracking Applications

Real-time analytics is the main advantage of Edge Computing (EC) in the concept of Artificial Intelligence (AI). AI EC brings high-performance computing capabilities to the edge, where the sensors of Internet of Things (IoT) devices are located and data is processed locally, which significantly improves processing speed as compared to cloud computing. The core of AI EC is based on real-time embedded systems. This work presents intelligent embedded systems for electromagnetic tracking (EMT) applications. EMT is a novel development trend in sensor electronics and object navigation for IoT, Virtual, and Augmented Reality. The EMT technology relies on calculating the position of a tracked object based on dynamic measurement of reference electromagnetic field vectors. In contrast to optical tracking systems, EMT does not require a clear line of sight (LOS) and therefore is not susceptible to occlusions. Inertial sensor tracking (IST) systems, which are based on inertial measurement units (IMUs), suffer from measurement errors intrinsic to accelerometers and gyroscopes, primarily bias drift and random walk. EMT systems are free of these errors and surpass inertial tracking systems in accuracy. This provides the opportunity to obtain high-precision measurements of the sensor coordinates in the 3D EM field space formed by the actuator array. Nevertheless, EMT systems have a limited application area. Such limitations are caused by significant signal changes and strict requirements on the environment where the reference EM fields are supposed to be formed. We have shown that one faces two main issues when designing an EMT system. Firstly, one should ensure a wide measurement range (up to 120 dB). Secondly, highly noise-immune measurements are required under specific operating conditions. The novelty of the presented solution is the intelligent algorithm of noise-immune conversion in conditions when the signal chain parameters are switched dynamically. This solution creates the necessary prerequisites for enhancing measurement accuracy and expanding the volume in which tracking takes place. The signal chain was implemented using the PSoC 5LP and thus conforms to the EC concept of Programmable Systems-on-Chip.

Roman Holyaka, Tetyana Marusenkova, Olha Shpur
Method of Coding Video Images Based on Meta-Determination of Segments

This chapter is devoted to the scientific problem of increasing the video availability level while ensuring its integrity. To solve the formulated problem, a direction related to coding methods is used; such methods make it possible to reduce video traffic intensity. An analysis of various standardized platforms for encoding video data is carried out, and a conclusion is made regarding the need to modify the standardized coding process. The following requirements are substantiated. The first is the use of a mechanism for establishing the informative load of segments with regard to preserving their integrity; this direction is implemented based on the technology of segment meta-determination using intellectual analysis of components. The second is the creation of methods for segment-differentiated processing depending on their level of informative load. A method for segment-differentiated coding based on structural-statistical (SS) saturation is developed. Differential processing of video frame segments is carried out considering their preliminary typification, depending on their significance for preserving the semantic integrity (SI) of the video information resource (VIR). This is achieved through syntactic description correction technologies applied according to the visual perception model. The main stages of creating a model for assessing the level of VIR bit-volume reduction in the process of ensuring its availability are described. The main differences of the created method are as follows: the process of eliminating redundancies and their types is selected depending on the presence of microsegments with different SS levels; the properties of the codegrams of permissible correction chains (CPC) of areas of spectral components (ASC) with respect to the presence of excess bits are taken into account; and the filling of excess bits in the ASC CPC codegrams is carried out depending on the importance of the segments' influence on preserving the SI of the VIR.

Vladimir Barannik, Valeriy Barannik, Yurii Babenko, Vitalii Kolesnyk, Pavlo Zeleny, Kirill Pasynchuk, Vladyslav Ushan, Andrii Yermachenkov, Maksym Savchuk
An Approach to Restore the Proper Functioning of Embedded Systems Due to Adverse Effects

The paper proposes an approach to restore the proper functioning of specialized microprocessor-based control systems at the element base level. This approach includes two stages. The first stage involves assessing the technical state of the microprocessor system by applying existing control methods in order to identify faults, as well as localizing faults by applying methods of testing and functioning diagnostics of digital devices. At the second stage, the proper functioning of the microprocessor control system is restored by reconfiguring its internal structure at the level of logical elements. The implementation of the proposed approach will increase fault tolerance (cyber resilience) of the embedded system – the ability to maintain operability after failure of one or more of its components due to adverse effects, including cyber-attacks.

Serhii Shtanenko, Yurii Samokhvalov, Serhii Toliupa, Oleksiy Silko
ChatGPT and Other AI Tools for Academic Research and Education

This chapter explores the use of AI tools for academic research and education. The history of artificial neural networks (ANNs) is discussed, including their recent development and use in various industries. The emergence of ChatGPT, a generative language model developed by OpenAI, is also highlighted. The chapter presents a list of freely available AI tools that can perform tasks related to academic activities, including research, analysis, text generation and processing, translation, and correction of errors in the text. The capabilities of each tool are demonstrated through examples, and the tools are categorized based on their purpose. Universal AI tools, such as Bing and Perplexity, which have access to the internet, can generate and process text, and translate, are highlighted. Other tools, such as ChatGPT and Notion, are also discussed for their ability to work with text and translate. The chapter concludes by providing an overview of the capabilities of each tool and its suitability for various academic tasks. Researchers and educators are encouraged to explore these tools to improve their productivity and efficiency.

Oleksandr Shtykalo, Iuliia Yamnenko
Intelligent Mobile Product Recognition for Augmented Reality in Smart Shopping

Many shop owners struggle to digitalize their physical offerings to achieve business continuity in times of shop closure and to achieve higher sales with shoppers around. Due to business dynamics, they often cannot rely on an Enterprise Resource Planning (ERP) system to retrieve digitalized product information and thus need to start the digitalization from scratch. A purely optical digitalization is promising, as it is cost-efficient to implement and enables further functionality such as Augmented Reality (AR) for increased customer engagement. This paper discusses three optical approaches to help shop owners digitalize their products based on images and videos and to develop digital channels such as e-commerce websites and AR applications on that basis. The first approach uses Common Objects in Context (COCO) Single Shot Detector (SSD) transfer learning to identify the products on the shelf via a real-time video stream and a user-friendly website. The second approach offers a Mask R-CNN (Region-Based Convolutional Neural Network) model to identify the objects and compare them against the market database using the Structural Similarity Index Measure (SSIM). The last approach implements an Optical Character Recognition (OCR) model to detect and recognize label information from an image of the product shelf.
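
The comparison step of the second approach can be sketched in a few lines: two images are normalized to the same grayscale size and scored with SSIM from scikit-image. The synthetic stand-in images and the match threshold below are hypothetical placeholders, not the paper's pipeline.

```python
# Compare a detected product crop against a catalogue image via SSIM.
import numpy as np
from skimage import color, transform
from skimage.metrics import structural_similarity as ssim

def product_similarity(img_a, img_b, size=(224, 224)):
    """Normalize two images to equally sized grayscale and return their SSIM."""
    prepared = []
    for img in (img_a, img_b):
        if img.ndim == 3:
            img = color.rgb2gray(img[..., :3])       # drop alpha channel if present
        prepared.append(transform.resize(img, size, anti_aliasing=True))
    return ssim(prepared[0], prepared[1], data_range=1.0)

# Hypothetical stand-ins: a "catalogue" image and a noisy "detected crop"
rng = np.random.default_rng(0)
catalogue = np.zeros((200, 150, 3))
catalogue[40:160, 30:120, :] = [0.9, 0.6, 0.2]        # a plain product box
detected = np.clip(catalogue + rng.normal(0, 0.05, catalogue.shape), 0, 1)

score = product_similarity(detected, catalogue)
print("match" if score > 0.7 else "no match", f"(SSIM = {score:.3f})")
```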

Mehmet Cihan Sakman, Josef Spillner
Deep Learning Application in Continuous Authentication

This paper reviews deep learning architectures used in continuous authentication, emphasizing sensor-based or implicit device authentication methods such as motion patterns and keystroke dynamics. We review fundamental architectures (RNN, CNN, attention mechanisms) and more complex architectures like autoencoders and Siamese networks, which can incorporate various architectural units, and their applications for continuous authentication. We discuss each architecture's benefits, limitations, and potential, as well as options for hybrid architectures. Further, we review the applications of these architectures in biometric verification (e.g., ECG), behavioral measures (e.g., keystroke or mouse movement), and behavioral biometrics using IMU data (e.g., accelerometer or gyroscope). We then summarize our previous research and propose an end-to-end methodology for continuous authentication with autoencoder-based models. We discuss the challenges and future trends in utilizing deep learning for continuous authentication, considering sensitive data privacy concerns and the need for developing highly robust and secure applications. Overall, the paper aims to provide a general view of the usage of deep learning for continuous authentication, highlighting future directions and presenting the most commonly used approaches in the field for researchers.

Mariia Havrylovych, Valeriy Danylov
Smart Planning, Design, and Optimization of Mobile Networks Ecosystem Using AI-Enhanced Atoll Software

This chapter explores the importance of new smart approaches to planning, designing and optimizing the mobile network ecosystem. The paper focuses on the use of the Atoll planning software system developed by Forsk, which has significant potential for effective mobile network management. By analyzing various aspects of network planning, such as coverage, capacity, optimization, and resource management, it is shown that Atoll can successfully solve these problems. The software enables coverage simulation, equipment configuration, and network evolution estimation. However, the chapter also points out the lack of a built-in artificial intelligence (AI) module in Atoll and the need for such a module to automate data analysis and planning processes. We propose to use mobile traffic monitoring reports from sFlow software to develop a training set for the traffic volume forecasting module and integrate it into Atoll. A comprehensive analysis of actual mobile network traffic is conducted to effectively adapt to fluctuations in traffic volumes resulting from diverse factors. Next, we discuss the proposal to integrate artificial intelligence (AI) into the Atoll software to achieve more intelligent planning and optimization of mobile networks. Particular attention is paid to the development of an AI module based on the LSTM algorithm for accurate real-time traffic volume analysis. Our main novelty in this work was to make the existing LSTM mobile traffic forecasting model more suitable for real traffic volumes and provide more accurate forecasts specific to the Ukrainian region. The proposed LSTM traffic forecasting model showed higher accuracy (5.01% increase) and significantly improved forecasting speed (36% decrease) compared to the existing model. These enhancements enable operators to access more precise and quicker network load forecasts, promoting efficient resource management, heightened productivity, and an improved user experience.

Halyna Beshley, Michal Gregus, Oksana Urikova, Ilona Scherm, Mykola Beshley
Current Opportunities and Prospects of Artificial Intelligence in Evolutionary Development

The current possibilities of using artificial intelligence (AI) systems in various fields of human activity are considered. Some problems with AI systems that require further research are shown. A presentation of the development of AI systems from initial structures to highly organized systems with distributed parameters is given. Examples of AI systems at medium levels of organization relevant for practical implementation are presented. The regularities of the organization of technical systems in solving specific problems in practice are determined. Directions for increasing the intellectual level of the systems being created are noted. The prospects for the development of AI systems are presented.

Serhii Melnykov, Tofig G. Kazimov, Aydin Gasanov, Petro Malezhyk
Computational Intelligence for Voice Call Security: Encryption and Mutual User Authentication

5G networks are being deployed in more and more countries of the world, and the appearance of new releases makes them more mature, more reliable, and more convenient for the user. At the same time, some security aspects remain unresolved from past generations of networks. For voice traffic, encryption is performed only on the segment up to the base station: there is no possibility of end-to-end encryption between the endpoints, and there is no reliable protection against a man-in-the-middle attack. In addition, much attention is paid to authentication in these networks, but when a call is set up, only device authentication is carried out; in some cases mutual authentication does not occur at all, only authentication of the user device. This work describes solutions for ensuring security in 5G networks, and an overview of existing threats is given. A method of ensuring mutual authentication of users during a call is proposed; it is implemented by using biometric authentication and changes to the call flow. Another method proposed in this work is the use of end-to-end encryption during a call to increase user confidentiality. This method is based on the asymmetric Diffie-Hellman protocol for key exchange and symmetric encryption to protect voice traffic. The main security metrics for the mobile network are also given, and their improvement when using the proposed methods is evaluated. The proposed methods make it possible to increase the security metrics by approximately 20%.
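
A minimal sketch of the end-to-end encryption idea is given below, using the Python "cryptography" library: an X25519 Diffie-Hellman exchange derives a shared session key, and AES-GCM encrypts a voice frame. The call-flow integration, biometric mutual authentication, and key-exchange signalling described in the chapter are out of scope, and the frame contents are placeholders.

```python
# Ephemeral Diffie-Hellman (X25519) key agreement plus AES-GCM encryption
# of a single hypothetical voice frame between two call endpoints.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each endpoint generates an ephemeral key pair and exchanges public keys
alice_priv, bob_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

# Derive a 256-bit session key from the shared secret
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"e2e-voice-session").derive(alice_shared)

# Encrypt one (placeholder) voice frame end to end with AES-GCM
aead = AESGCM(session_key)
nonce = os.urandom(12)
voice_frame = b"\x01\x02codec-payload..."
ciphertext = aead.encrypt(nonce, voice_frame, None)
assert AESGCM(session_key).decrypt(nonce, ciphertext, None) == voice_frame
```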

Andrii Astrakhantsev, Larysa Globa, Oleksii Astrakhantsev
An Approach to the Formation of Artificial Intelligence Development Threat Indicators

This chapter explores the threats associated with the development of artificial intelligence and the risks of its implementation. For the first time, a foundation is established for defining indicators of threats and risks related to AI. Emphasis is placed on the use of AI in the interest of ensuring national security and state defense. The chapter highlights essential aspects to consider when formulating strategies for using artificial intelligence (AI) and their practical implementation.

Yuriy Danyk, Mikhailo Klymash, Valery Shestakov, Andrii Masiuk
Federated Learning: A Solution for Improving Anomaly Detection Accuracy of Autonomous Guided Vehicles in Smart Manufacturing

Autonomous Guided Vehicles (AGVs) play an instrumental role in smart manufacturing, particularly in the transport of materials and finished goods. Yet, unexpected anomalies with AGVs can pose safety risks and impede production. Conventional anomaly detection techniques are often hamstrung by data privacy concerns and an absence of centralized data access. Federated learning emerges as a viable alternative, championing distributed data analysis without the prerequisite of centralized data repositories. In this paper, we delve into the efficacy of federated learning in amplifying the precision of AGV anomaly detection within the ambit of smart manufacturing. Our empirical findings attest to the capability of federated learning in bolstering AGV anomaly detection accuracy, all while upholding the tenets of data privacy and security.
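
As a rough illustration of the federated setting, the sketch below runs federated averaging (FedAvg) of simple linear anomaly-scoring models over three simulated AGVs, so that only model weights, never raw sensor data, reach the aggregator. The data, model, and number of rounds are assumptions for illustration and do not reflect the authors' experiments.

```python
# FedAvg over three simulated AGVs with plain NumPy: each client trains
# locally on its private data; the server only averages model weights.
import numpy as np

rng = np.random.default_rng(42)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """A few epochs of local gradient descent on one AGV's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Hypothetical fleet: 3 AGVs, each with its own vibration/current features
# and a local anomaly score used as the regression target
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 4))
    y = X @ np.array([0.5, -1.0, 0.2, 0.8]) + 0.05 * rng.normal(size=200)
    clients.append((X, y))

global_w = np.zeros(4)
for round_id in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)      # server aggregates weights only

print("global model after FedAvg:", np.round(global_w, 2))
```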

Bohdan Shubyn, Taras Maksymyuk, Juraj Gazda, Bohdan Rusyn, Dariusz Mrozek
Expanding the Possibilities of Video Surveillance Monitoring and Recovery Procedures for Post-stroke Patients

Examples of smart technologies for patients requiring long-term recovery and remote monitoring are considered. The application of an automated monitoring and recovery module is proposed. The composition of the module is considered, its settings are studied, and the joint operation of various variants of its structural elements is analyzed. An artificial neural network is considered for use with video surveillance and a manipulator. Using the example of three layers of its fragment, the problem arising from the simultaneous processing of quantitative and qualitative information flows is studied. Using the multiplicative algorithm as a basis of artificial intelligence for building algorithms that process flows of quantitative and qualitative information described by fuzzy sets resolves the contradictions in ensuring the operation of the two-dimensional function of the appropriateness of actions in automated control systems. As the experiments show, to effectively connect several webcams it is better to use dedicated programs for working with video images, which offer new setting options, an easy-to-use interface, and the possibility of creating a virtual webcam. Adjusting the appearance of the video stream and launching a virtual camera at the press of a button provides video communication in common communication programs.

Alexandr Trunov, Ivanna Dronyuk, Vladyslav Martynenko, Ivan Skopenko, Maksym Skoroid
Developing the Fuzzy Logic Rules Based on Clustering Algorithms for Data Analysis in the IoT Network

The rapid development of digital communication systems has led to the formation of a class of IoT systems that control various production processes, in particular traditional transport systems that must function in real time. However, the computational complexity of processing data for analysis and decision-making can be significant, requiring a compromise between computational complexity and reducing the data processing time. The problem of traffic forecasting under limited time and computing resources in an IoT system can be solved by using logical rules that are developed from statistical data sets collected by the IoT system and that are close to human perception. However, these rules are fuzzy, because the limits within which the parameter values change cannot be determined unambiguously from the way the rules are constructed. This paper proposes an approach to developing a fuzzy knowledge base using machine learning methods to reduce computational complexity in the decision-making process. The data processing scenario uses Fuzzy C-Means clustering, the KMeans++ algorithm for the primary initialization of cluster centers, and a genetic algorithm for defining the set of fuzzy knowledge base rules. The F-score metric is used to evaluate the quality of the obtained fuzzy logical rules. The database of fuzzy logical rules can be used to analyze data in real-time systems to improve the performance and reliability of the decision-making process, significantly reducing the analysis time for data that cannot be immediately interpreted unambiguously due to its significant volume. The proposed approach to developing a fuzzy knowledge base made it possible to construct fuzzy logical rules from statistical datasets, create a fuzzy knowledge base, periodically apply the procedures for learning and retraining the set of fuzzy logical rules, and also assess the quality of the developed fuzzy logical rules on the example of the problem of predicting the occurrence of traffic jams, with an F-score of 0.875. In practice, this allows for more efficient use of road networks and resources, namely, distributing traffic across different routes, planning traffic light schedules, and other optimization measures to reduce congestion.
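
The clustering step can be sketched compactly: below, a plain NumPy implementation of fuzzy C-means partitions synthetic traffic observations into overlapping clusters whose memberships could serve as rule antecedents. The KMeans++ initialization and the genetic search over rules described in the chapter are not reproduced, and the data and parameters are illustrative assumptions.

```python
# Fuzzy C-means clustering of synthetic traffic observations.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_U = 1.0 / (dist ** (2 / (m - 1)))
        new_U /= new_U.sum(axis=1, keepdims=True)
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return centers, U

# Hypothetical features per road segment: [vehicles/min, mean speed km/h]
X = np.vstack([np.random.default_rng(1).normal([10, 55], [2, 5], (50, 2)),
               np.random.default_rng(2).normal([40, 20], [5, 4], (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print("cluster centers (flow, speed):\n", np.round(centers, 1))
```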

Larysa Globa, Andrii Liashenko
Large Language Models and Digital Ecosystems in Business Scenarios

The digital landscape will experience the next disruption through the rise of Large Language Models (LLMs), which will be most probably adopted in practice in the coming years. Most prominently known from ChatGPT’s success, LLMs have produced a boom in different communities and inspired tons of possible usage scenarios. Now, when the situation has settled a bit, it is time to reflect on them and see which additional scenarios will be enabled by them and how they will influence the existing digital ecosystems. Thus, we classify the most promising scenarios of LLMs to be adopted in the coming years, but also sketch which challenges are appearing, and show how they will amend the current scenarios. The scenarios will be demonstrated in case studies of different business and technical scenarios.

Andriy Luntovskyy, Volodymyr Vasyutynskyy
Research and Design of Fog Network Architecture with Smart Control System

The article analyses the methods and algorithms for providing the necessary resources for telecommunications networks (TCN). A comparative description of the application options of adaptive algorithms with decision-making management goals is carried out to determine the resources required for a dynamic TCN. The peculiarities of the architecture design of a dynamic fog TCN, as one of the most relevant network types today in places with available computing and telecommunication resources and limited or missing access to Internet resources, are considered. The architecture of a dynamic automated control system (ACS) for the fog TCN is proposed, on the basis of which the characteristics of the fog TCN with ACS are given. Performance indicators of the fog TCN with ACS are presented, a mathematical model is proposed, and the characteristics of the fog network with ACS are investigated.

Leonid Uryvsky, Oleksandr Budishevskyi, Serhii Osypchuk
UAV Connectivity Maintenance in Wireless Sensor Networks

The chapter is dedicated to maintaining UAV connectivity in wireless sensor networks (WSNs). The developed method differs from known ones in that, for the first time, new approaches to network clustering are proposed (a new set of metrics for selecting head nodes to achieve certain target functions of monitoring data collection management, the use of improved rules for finding energy-efficient cluster topologies by the method of directed search of options, and energy-dependent metrics for route selection), which improves the results for certain indicators by 5 to 15% compared to existing data collection methods of the corresponding class.

Ihor Sushyn, Oleksandr Lysenko, Valery Romaniuk, Valerii Yavisya, Volodymyr Kyselov, Valeriy Novikov
Peculiarities of the Neural Controller Synthesis Based on the Object Inverse Dynamics Reproduction

The features of constructing and using dynamic neural networks, realized on the basis of the inverse dynamics of the controlled object, for control processes in automatic control systems are considered in this work.

Markiyan Nakonechnyi, Orest Ivakhiv, Yurii Nakonechnyi, Oleksandr Viter
Comparative Model for Voice Control Systems: Embedded AI and Cloud APIs Services

This chapter discusses the growing importance of voice control systems in today’s digital world. They are becoming an integral part of our daily lives. They are used in a variety of areas, including smartphones, smart devices, industry, and transportation, changing the way we interact with our environment. In this context, special attention is paid to exploring different strategies for deploying such systems in terms of business requirements, scalability, flexibility, reliability and cost. For this purpose, we developed a comparative model to estimate the cost and performance of existing voice control systems. Two main voice control paradigms are compared in detail: embedded artificial intelligence (AI) and cloud services. The comparison has shown that during the initial months of development, the cost of a system with embedded AI per user is higher due to the need for additional resources. However, over time, these costs become similar to those incurred when using cloud services. The study also included an experimental evaluation of the accuracy and processing speed of the embedded artificial intelligence model and the cloud-based Speech-to-Text API. The choice between these approaches depends on the specific requirements and limitations of the application. If speed is crucial, embedded artificial intelligence may be the better option, but if precision and flexibility are priorities, then the cloud-based Speech-to-Text API with a custom algorithm can offer more advantages.

Volodymyr Pastukh, Mykola Beshley, Mikhailo Klymash, Halyna Beshley, Andriy Luntovskyy, Ilona Scherm
Backmatter
Metadata
Title
Digital Ecosystems: Interconnecting Advanced Networks with AI Applications
Editors
Andriy Luntovskyy
Mikhailo Klymash
Igor Melnyk
Mykola Beshley
Alexander Schill
Copyright Year
2024
Electronic ISBN
978-3-031-61221-3
Print ISBN
978-3-031-61220-6
DOI
https://doi.org/10.1007/978-3-031-61221-3