
2021 | Book

Technological Innovation for Applied AI Systems

12th IFIP WG 5.5/SOCOLNET Advanced Doctoral Conference on Computing, Electrical and Industrial Systems, DoCEIS 2021, Costa de Caparica, Portugal, July 7–9, 2021, Proceedings


About this book

This book constitutes the refereed proceedings of the 12th IFIP WG 5.5/SOCOLNET Advanced Doctoral Conference on Computing, Electrical and Industrial Systems, DoCEIS 2021, held in Costa de Caparica, Portugal, in July 2021.*

The 34 papers presented were carefully reviewed and selected from 92 submissions. The papers present selected results produced in engineering doctoral programs and focus on technological innovation for industry and service systems. Research results and ongoing work are presented, illustrated and discussed in the following areas: collaborative networks; smart manufacturing; cyber-physical systems and digital twins; intelligent decision making; smart energy management; communications and electronics; classification systems; smart healthcare systems; and medical devices.

*The conference was held virtually.

Table of Contents

Frontmatter

Open Access

Correction to: Characteristics of Adaptable Control of Production Systems and the Role of Self-organization Towards Smart Manufacturing

Chapter “Characteristics of Adaptable Control of Production Systems and the Role of Self-organization Towards Smart Manufacturing” was previously published non-open access. It has now been changed to open access under a CC BY 4.0 license and the copyright holder updated to ‘The Author(s)’. The book has also been updated with this change.

Luis Alberto Estrada-Jimenez, Sanaz Nikghadam-Hojjati, Jose Barata

Open Access

Correction to: Predictive Manufacturing: Enabling Technologies, Frameworks and Applications

Chapter “Predictive Manufacturing: Enabling Technologies, Frameworks and Applications” was previously published non-open access. It has now been changed to open access under a CC BY 4.0 license and the copyright holder updated to ‘The Author(s)’. The book has also been updated with this change.

Terrin Pulikottil, Luis Alberto Estrada-Jimenez, Sanaz Nikghadam-Hojjati, Jose Barata

Collaborative Networks

Frontmatter
AI and Simulation for Performance Assessment in Collaborative Business Ecosystems

Artificial Intelligence advances have enabled smarter systems which, in the business world, particularly in Collaborative Business Ecosystems (CBE), can lead to more streamlined, effective, and sustainable processes. Moreover, the use of well-defined performance indicators to assess the organisations' collaboration level can influence their behaviour, with the expectation of improving their performance and that of the ecosystem. This paper presents a case study using a simulation and agent-based model to represent the organisations' behaviour. Real data gathered from three IT industry organisations operating in the same business ecosystem was used to shape the model with three classes of agents with different collaboration willingness levels. On this basis, several scenarios are simulated and discussed, considering a CBE populated with a given combination of organisations and variations in the weighting of the adopted performance indicators.

Paula Graça, Luís M. Camarinha-Matos
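
A minimal sketch of the kind of agent-based simulation described above, assuming three hypothetical agent classes with different collaboration willingness and a small set of weighted indicators; the class names, weights, and update rule are illustrative placeholders, not the authors' calibrated model.

```python
import random

# Hypothetical agent classes with different collaboration willingness (not the authors' calibration)
AGENT_CLASSES = {"low": 0.2, "medium": 0.5, "high": 0.8}

def simulate_cbe(population, indicator_weights, steps=50, seed=42):
    """Simulate a Collaborative Business Ecosystem (CBE) and return the
    weighted collaboration indicator per agent class, averaged over time."""
    rng = random.Random(seed)
    agents = [{"cls": cls, "will": AGENT_CLASSES[cls]}
              for cls, n in population.items() for _ in range(n)]
    scores = {cls: 0.0 for cls in population}
    for _ in range(steps):
        for a in agents:
            # Each step an agent collaborates with probability given by its willingness;
            # contribution, prestige and innovation are crude stand-ins for real indicators.
            contribution = rng.random() < a["will"]
            indicators = {"contribution": float(contribution),
                          "prestige": a["will"] * rng.random(),
                          "innovation": rng.random() * 0.5}
            scores[a["cls"]] += sum(indicator_weights[k] * v for k, v in indicators.items())
    return {cls: scores[cls] / (steps * population[cls]) for cls in population}

if __name__ == "__main__":
    result = simulate_cbe({"low": 10, "medium": 10, "high": 10},
                          {"contribution": 0.5, "prestige": 0.3, "innovation": 0.2})
    print(result)
```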
The Benefits of Applying Social Network Analysis to Identify Collaborative Risks

It is often argued that efficient collaboration is the key to successful organizations. However, achieving efficient collaboration still represents a challenge for most organizations. Hidden involuntary behaviors (also known as behavioral risks) often threaten efficient collaboration in the blink of an eye. Whether due to the lack of supportive models to manage collaboration or to misunderstandings of what the different dimensions of collaboration are and represent, most organizations fear engaging in collaborative approaches because of the high chances of failure. In this work, a heuristic model is proposed to identify organizational collaborative risks by applying social network analysis. This work aims to illustrate how organizations can benefit from the application of social network analysis in the efficient identification and management of collaborative risks. A case study illustrates the application of the proposed model.

Marco Nunes, António Abreu
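
As a small illustration of how social network analysis can surface collaborative risks, the sketch below builds an interaction graph and flags potential bottlenecks via betweenness centrality; the graph, threshold, and risk rule are hypothetical and not the heuristic model proposed in the paper.

```python
import networkx as nx

def flag_collaborative_risks(interactions, centrality_threshold=0.3):
    """Flag nodes whose betweenness centrality exceeds a threshold: such nodes
    concentrate information flow and are potential collaboration bottlenecks."""
    g = nx.Graph()
    g.add_weighted_edges_from(interactions)          # (person_a, person_b, n_interactions)
    centrality = nx.betweenness_centrality(g)        # structural importance of each member
    peripheral = [n for n, d in g.degree() if d <= 1]  # weakly connected members
    bottlenecks = [n for n, c in centrality.items() if c > centrality_threshold]
    return {"bottlenecks": bottlenecks, "peripheral": peripheral, "centrality": centrality}

if __name__ == "__main__":
    emails = [("ana", "rui", 12), ("ana", "marco", 3), ("rui", "marco", 7),
              ("marco", "ines", 1), ("ines", "joao", 2)]
    print(flag_collaborative_risks(emails))
```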
A Mixed Method for Assessing the Reliability of Shared Knowledge in Mass Collaborative Learning Community

The recent trends in open, informal, and collective education have gradually been shaping an innovative approach to learning called “mass collaborative learning”, in which a large and open-ended number of scattered but interested people (with different backgrounds and levels of knowledge) join a networked community aiming to learn new things interactively. Even though the potential benefits of mass collaborative learning for learners are enormous, the process is by nature prone to being harmed by the sharing of unhealthy materials within the community. In order to minimize the dissemination of disinformation and promote the quality of shared content in mass collaborative learning communities, this study proposes a mixed method that can help the learners involved to assess the reliability and quality of shared knowledge or information through a multi-user, multilevel evaluation approach. Preliminary findings of this research work are discussed.

Majid Zamiri, Luis M. Camarinha-Matos

Smart Manufacturing

Frontmatter

Open Access

Characteristics of Adaptable Control of Production Systems and the Role of Self-organization Towards Smart Manufacturing

Self-adaptive control of production systems has attracted a great deal of research in recent years. Nevertheless, most of these approaches are still unable to meet current manufacturing expectations: they are tailored to a particular case study, are at an initial stage of research, or do not apply the concept of self-organization and its properties in the strong sense. This leaves the systems without the robustness, adaptability, or emergence that are highly desirable given current market requirements. Therefore, the purpose of this work is to identify some of the important characteristics that have been applied in past studies and that can be considered together as a baseline for building future manufacturing frameworks.

Luis Alberto Estrada-Jimenez, Sanaz Nikghadam-Hojjati, Jose Barata

Open Access

Predictive Manufacturing: Enabling Technologies, Frameworks and Applications

The impact of globalization and recent advancements in Information and Communication Technologies have pushed the manufacturing sector towards a new transformation. With the help of recent advances in Cloud Computing, Artificial Intelligence, and the Internet of Things, manufacturers are moving towards a new class of intelligent systems called Predictive Manufacturing Systems (PMS). These systems can be used in a wide array of applications, including proactive maintenance, improved quality control and higher performance. This paper provides an overview of recent trends in Predictive Manufacturing Systems and discusses the developed frameworks, enabling technologies and various applications of PMS.

Terrin Pulikottil, Luis Alberto Estrada-Jimenez, Sanaz Nikghadam-Hojjati, Jose Barata
Control of Manufacturing Systems by HMS/EPS Paradigms Orchestrating I4.0 Components Based on Capabilities

The fourth industrial revolution has driven initiatives worldwide in the Industry 4.0 (I4.0) context, requiring better integration and stronger relationships between elements. This work applies the Reference Architecture Model Industry 4.0 (RAMI 4.0) to present new models for standardizing entities in the I4.0 context, migrating legacy systems and standardizing assets based on I4.0 Components (I4.0C). The management and orchestration of I4.0C can be achieved by attaching Artificial Intelligence (AI) concepts through intelligent entities describing behavior and resource relationships, applying Multi-Agent Systems (MAS), and adding self-organization, reconfiguration, pluggability, adaptation, and reasoning. Therefore, a manufacturing systems control framework based on capabilities and the application of HMS/EPS is proposed to orchestrate I4.0C.

Jackson T. Veiga, Marcosiris A. O. Pessoa, Fabrício Junqueira, Paulo E. Miyagi, Diolino J. dos Santos Filho
A Framework for Self-configuration in Manufacturing Production Systems

Intelligence in manufacturing enables the optimization and configuration of processes, and a goal of future smart manufacturing is to enable processes to configure themselves – called self-configuration. This paper describes a framework for utilising data to make decisions for the self-configuration of a production system device in a smart production environment. A data pipeline is proposed that connects the production system via a gateway to a cloud computing platform for machine learning and data analytics. Agent technology is used to implement the framework for this data pipeline. This is illustrated by a data-oriented self-configuration solution for an industrial use case based on a device used at a testing station in a production system. This research presents a possible direction towards realising self-configuration in production systems.

Hamood Ur Rehman, Jack C. Chaplin, Leszek Zarzycki, Svetan Ratchev

Cyber-Physical Systems and Digital Twins

Frontmatter
Verification of the Boundedness Property in a Petri Net-Based Specification of the Control Part of Cyber-Physical Systems

A method for analysing the control part of a cyber-physical system described by a Petri net is presented in the paper. In particular, the boundedness of the system is examined. Contrary to other well-known techniques, the proposed idea requires neither the computation of all place invariants nor of all reachable states of the net. Therefore, it is possible to check the boundedness of a net more effectively and efficiently than with traditional, well-known methods. Furthermore, the proposed algorithm has been examined experimentally with a set of 243 benchmarks (Petri nets). The research results show the high efficiency of the proposed method, since a solution was found even for nets for which popular techniques were not able to analyse the boundedness of the system. Finally, the presented idea is illustrated by a real-life case study.

Marcin Wojnakowski, Remigiusz Wiśniewski
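
For contrast with the invariant-free method summarized above (which is not reproduced here), the following is a naive explicit-state baseline: it enumerates the reachable markings of a place/transition net and checks k-boundedness directly. The net encoding and the example net are illustrative only.

```python
def enabled(marking, pre, t):
    return all(marking.get(p, 0) >= w for p, w in pre[t].items())

def fire(marking, pre, post, t):
    m = dict(marking)
    for p, w in pre[t].items():
        m[p] = m.get(p, 0) - w
    for p, w in post[t].items():
        m[p] = m.get(p, 0) + w
    return m

def is_k_bounded(m0, pre, post, k):
    """Explicit enumeration of the reachability set.
    Returns True if every reachable marking keeps every place at <= k tokens,
    False as soon as some reachable marking exceeds k somewhere."""
    frontier, seen = [m0], {tuple(sorted(m0.items()))}
    while frontier:
        m = frontier.pop()
        if any(v > k for v in m.values()):
            return False
        for t in pre:
            if enabled(m, pre, t):
                m2 = fire(m, pre, post, t)
                key = tuple(sorted(m2.items()))
                if key not in seen:
                    seen.add(key)
                    frontier.append(m2)
    return True

if __name__ == "__main__":
    # t1 puts a token into p2 without ever removing one: p2 grows without limit.
    pre = {"t1": {"p1": 1}}
    post = {"t1": {"p1": 1, "p2": 1}}
    print(is_k_bounded({"p1": 1, "p2": 0}, pre, post, k=5))  # False
```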
Collaborative Cyber-Physical Systems Design Approach: Smart Home Use Case

The growing trend of moving from isolated services to dynamically integrated/composed ones, in a context where the cyber and physical worlds are interlinked, has led to the emergence of the concept of Collaborative CPSs (CCPSs). These systems rely on collaboration among internal and external components. An important aspect in this regard is the establishment of a design methodology for such systems. To satisfy agility requirements, the design process should be accomplished in a modular way, so that the system can be updated by adding or replacing modules. In traditional ICT systems, the design process can be split into two parts/phases: the computational model design, i.e., the functionality modules, and the design of a shell or service layer providing the auxiliary services needed to utilize the computational model, e.g., security, human-machine interface, etc. In the case of CCPS design, the process must also consider the collaborative aspects within the design workflow. In this work, we provide a model and a design pattern (a framework and a set of steps) for building Collaborative CPSs. To illustrate the approach, a smart home use case is presented.

Artem A. Nazarenko, Luis M. Camarinha-Matos
Digital Twin for Supply Chain Master Planning in Zero-Defect Manufacturing

Recently, many novel paradigms, concepts and technologies that lay the foundation for a new revolution in manufacturing environments have emerged, making it faster to address critical decisions in Supply Chain 4.0 (SC4.0) under flexibility, resilience, sustainability and quality criteria. The current power of computational resources enables intelligent optimisation algorithms to process manufacturing data in such a way that simulating supply chain (SC) planning performance in real time is now possible, allowing relevant information to be acquired and SC nodes to be digitally interconnected. This paper proposes a conceptual framework based on a digital twin (DT) to model, optimise and prescribe an SC's master production schedule (MPS) in a zero-defect environment. The proposed production technologies focus on the scientific development and resolution of new models and optimisation algorithms for the MPS problem in SC4.0.

Julio C. Serrano, Josefa Mula, Raúl Poler

Intelligent Decision Making

Frontmatter
Matheuristic Algorithms for Production Planning in Manufacturing Enterprises

Production systems are moving towards new levels of smart manufacturing, which means that production processes become more autonomous, sustainable, and agile. Additionally, the increasing complexity and variety of individualized products makes manufacturing a challenge, since products must be produced while consuming the fewest resources and remaining profitable. From a mathematical perspective, most production planning problems, such as real-world scheduling and sequencing problems, are classified as NP-hard, and there is most likely no polynomial-time algorithm for these kinds of problems. In addition, advances in information and communication technologies (ICT) are reinforcing an existing trend towards real-time scheduling. In this context, the aim of this research is to develop matheuristic algorithms for the optimization of production planning in the supply chain. The development of matheuristic algorithms allows efficient solutions to be found in shorter computational times, providing companies with the smart manufacturing skills to respond quickly to market needs.

Eduardo Guzman, Beatriz Andres, Raul Poler
Assessment of Sentinel-2 Spectral Features to Estimate Forest Height with the New GEDI Data

The unprecedented availability of vertical forest structure data provided by the NASA Global Ecosystem Dynamics Investigation (GEDI) allows the validation of new methodologies, based on other Remote Sensing (RS) sensors, for monitoring forest parameters such as Forest Height (FH). Previously, studies on FH estimation relied on in-situ measurements or on acquiring LiDAR data, which was limited and expensive. In contrast to the sampling nature of the GEDI mission, Sentinel-2 optical products have a high revisit frequency and a high spatial resolution, favouring the implementation of forest monitoring methodologies. This work presents a study on the correlation and usability of linear and exponential regressions for estimating FH from Sentinel-2 imagery. The advantages of making estimations by study area or by specific land cover type were also exploited. Overall, an R2 of 0.66 and an RMSE of 2.91 m were achieved; when the vegetation type was specified, these figures improved to 0.83 and 2.40 m, respectively.

João E. Pereira-Pires, André Mora, Valentine Aubard, João M. N. Silva, José M. Fonseca
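
A sketch of the kind of regression evaluated in the paper, assuming that GEDI-derived forest heights and a Sentinel-2 spectral feature (here an NDVI-like index) have already been co-located into two arrays; the synthetic data and the exponential form are placeholders, not the paper's dataset or fitted models.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
feature = rng.uniform(0.2, 0.9, 500)                          # NDVI-like Sentinel-2 feature (synthetic)
height = 4.0 * np.exp(1.8 * feature) + rng.normal(0, 2, 500)  # synthetic GEDI forest height [m]

# Linear model: FH = a * x + b
lin = LinearRegression().fit(feature.reshape(-1, 1), height)
pred_lin = lin.predict(feature.reshape(-1, 1))

# Exponential model: FH = a * exp(b * x)
def exp_model(x, a, b):
    return a * np.exp(b * x)

(a, b), _ = curve_fit(exp_model, feature, height, p0=(1.0, 1.0))
pred_exp = exp_model(feature, a, b)

for name, pred in [("linear", pred_lin), ("exponential", pred_exp)]:
    rmse = mean_squared_error(height, pred) ** 0.5
    print(f"{name}: R2={r2_score(height, pred):.2f}, RMSE={rmse:.2f} m")
```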
Assessing Normalization Techniques for TOPSIS Method

In recent years, data normalization has been receiving considerable attention due to its essential role in decision problems. In particular, with new developments in Big Data and Artificial Intelligence for handling heterogeneous sensor data, normalization's role as a preprocessing step for complex decision problems has become more prominent. However, selecting the best normalization technique among the several techniques introduced in the literature is still an open issue. In this study, we focus on evaluating normalization techniques in Multi-Criteria Decision Making (MCDM) methods, namely the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), in order to recommend the most suitable technique. A small numerical example, borrowed from the literature, is used to show the applicability of the proposed assessment framework, which uses several metrics to recommend the most suitable technique. This study helps decision makers improve the accuracy of the final ranking in decision problems by selecting the best normalization technique for the case study at hand.

Nazanin Vafaei, Rita A. Ribeiro, Luis M. Camarinha-Matos
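
A compact TOPSIS sketch with interchangeable normalization steps (vector and min-max shown), to make concrete what "assessing normalization techniques" means in practice; the decision matrix, weights, and criteria directions are illustrative and not taken from the paper's numerical example.

```python
import numpy as np

def vector_norm(X):
    return X / np.linalg.norm(X, axis=0)

def minmax_norm(X):
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def topsis(X, weights, benefit, normalize):
    """Rank alternatives (rows of X) by closeness to the ideal solution.
    `benefit` marks criteria to maximize; `normalize` is a pluggable function."""
    V = normalize(X.astype(float)) * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)        # higher = closer to ideal

if __name__ == "__main__":
    X = np.array([[250, 16, 12], [200, 16, 8], [300, 32, 16]])  # illustrative decision matrix
    w = np.array([0.4, 0.35, 0.25])
    benefit = np.array([False, True, True])                     # cost, benefit, benefit
    for name, norm in [("vector", vector_norm), ("min-max", minmax_norm)]:
        print(name, np.round(topsis(X, w, benefit, norm), 3))
```

Running both variants on the same matrix shows how the choice of normalization alone can reorder the closeness scores, which is the effect the paper's assessment framework quantifies.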
How Can e-Grocers Use Artificial Intelligence Based on Technology Innovation to Improve Supply Chain Management?

The digital transformation of grocery sales is in full swing. However, some retailers are struggling to adapt to technological innovation in the grocery industry and achieve digital excellence. The purpose of this article is to analyse artificial intelligence systems applied in e-commerce that could be implemented in online grocery sales. Unlike other online businesses, grocery sales face logistical challenges that differentiate them, such as fresh product conservation and tight delivery times. Through a literature review, this study aims to provide researchers and practitioners with a starting point for selecting technological innovations to solve e-grocery problems.

Mar Vazquez-Noguerol, Carlos Prado-Prado, Shaofeng Liu, Raul Poler
A Conceptual Framework of Human-System Interaction Under Uncertainty Based on Shadow System Perspective

Instability is the norm, and emergency management has gradually become a routine situation for organizations. Because enterprise business systems are developed to handle well-defined business under planned and controlled assumptions, they cannot deal with uncertain business. Instead, a large amount of uncertain business relies mainly on human interaction, which brings diverse perspectives and shared knowledge to bear on a solution. This paper interprets the representation of the shadow system under uncertainty and further analyzes one type of obstacle to human-system interaction based on empirical analysis. Finally, the paper designs a conceptual framework for the feedback and enabling relationship between the shadow system and the legitimate system to address this obstacle.

Qingyu Liang, Juanqiong Gou
A New Challenge for Machine Ethics Regarding Decision-Making in Manufacturing Systems

In order to deal with increasingly complex manufacturing systems, we need to make sophisticated decisions. A new challenge emerges in presenting a decision-making model that merges off-line data (production data, including staff, machinery and materials) with on-line data (sensors, actuators) to render a shared responsibility for a decision's consequences between machine and human by giving weight to the decisions taken. Undoubtedly, to make accurate and fair predictions, such a model should follow ethical rules. However, most past research on the relationship between machines and ethics has focused on humans and their responsibility in applying technology, with only humans engaged in ethical issues. In the digital era, especially with the application of AI and machine learning, a new approach is needed for the interplay between machine, ethics, and human, by adding an ethical dimension to machines involved in decision making.

Esmaeil Kondori, Rui Neves-Silva

Smart Energy Management

Frontmatter
Towards a Hybrid Model for the Diffusion of Innovation in Energy Communities

The need for comprehensive models to simulate the diffusion of innovations in communities or social systems has become evident. Existing models such as the Diffusion of Innovation expound how ideas, products, or innovations gain acceptance and spread over time. Other analogous models, such as the Transtheoretical Model, also claim that people do not change behaviours quickly and decisively; instead, they change in a continuous and cyclical process of internal decision-making. Although these two models are both used to address adoption, one addresses it from a behavioural point of view while the other does so from the perspective of the innovation's characteristics. In this study, we propose and develop a hybrid model based on these two. We expect that a composite model with enhanced attributes can be deployed as a decision support tool to enhance the diffusion of innovations, particularly within energy communities. The study conducts a structural and contextual analysis of these models, seeking reasonable grounds for merging them.

Kankam O. Adu-Kankam, Luis M. Camarinha-Matos
Towards Extension of Data Centre Modelling Toolbox with Parameters Estimation

Modern data centres consume a significant amount of electricity and therefore require techniques for improving energy efficiency and reducing energy waste. The most promising energy-saving methods are those which adapt the system's energy use to resource requirements at run-time. Such techniques require testing of their performance, reliability and effect on power consumption in data centres. Generally, real data centres cannot be used as a test site because such experiments may violate safety and security protocols. Therefore, examining the performance of different energy-saving strategies requires a model which can replace the real data centre. The model is expected to accurately estimate the energy consumption of data centre components depending on their utilisation. This work presents a toolbox for data centre modelling. The toolbox is a set of building blocks representing individual components of a typical data centre. The paper concentrates on parameter estimation methods, which use data collected from a real data centre and adjust the parameters of the building blocks so that the model represents the data centre as accurately as possible. The paper also demonstrates the results of parameter estimation on the example of the EDGE module of the SICS ICE data centre located in Luleå, Sweden.

Yulia Berezovskaya, Chen-Wei Yang, Valeriy Vyatkin
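
A minimal sketch of the parameter-estimation step, assuming a commonly used power model P(u) = P_idle + (P_max − P_idle) · u^r for a server building block and measured (utilisation, power) pairs; the model form, parameter names, and data are placeholders, not the toolbox's actual building blocks.

```python
import numpy as np
from scipy.optimize import curve_fit

def server_power(u, p_idle, p_max, r):
    """Power draw [W] as a function of CPU utilisation u in [0, 1]."""
    return p_idle + (p_max - p_idle) * u ** r

# Measured (utilisation, power) samples would normally come from the real data centre;
# here they are synthesised around known parameters plus noise.
u = np.linspace(0, 1, 50)
rng = np.random.default_rng(1)
p_measured = server_power(u, 110, 280, 1.3) + rng.normal(0, 4, u.size)

params, _ = curve_fit(server_power, u, p_measured,
                      p0=(100, 250, 1.0),
                      bounds=([0, 0, 0.1], [500, 1000, 3.0]))
print("estimated (P_idle, P_max, r):", np.round(params, 2))
```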
Power Transformer Design Resorting to Metaheuristics Techniques

Power transformers play a key role in power system grids. Their manufacturing and design must consider several aspects, such as technical limits, legal constraints, security constraints and manufacturing price. Considering only the power transformers' active parts, it is possible to identify 20 manufacturing-specific parameters and, from an economic point of view, a further 13 variables. In the classical approach, variables are chosen in accordance with the defined constraints, followed by a sensitivity analysis performed on each variable to optimize the manufacturing cost. This procedure can be time consuming and the optimum may not be reached. In this paper, genetic algorithms are used instead. An innovative approach, introducing the concept of genetic compensation in the mutation operator, is detailed. Results point to increased performance and consistency when compared with the classical approach.

Pedro Alves, P. M. Fonte, R. Pereira
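
A generic genetic-algorithm loop of the kind applied in the paper, sketched for a toy cost function over a handful of bounded design variables; the "genetic compensation" mutation operator is the paper's contribution and is not reproduced here — plain Gaussian mutation is used instead, and the variables and cost are illustrative.

```python
import random

BOUNDS = [(0.2, 0.6), (50, 120), (1.5, 2.0), (10, 40)]  # toy design variables (not the 20 real ones)

def cost(x):
    """Stand-in manufacturing cost: penalises deviation from an arbitrary target design."""
    target = [0.35, 80, 1.7, 25]
    return sum(((xi - ti) / (hi - lo)) ** 2 for xi, ti, (lo, hi) in zip(x, target, BOUNDS))

def clamp(x):
    return [min(max(xi, lo), hi) for xi, (lo, hi) in zip(x, BOUNDS)]

def genetic_algorithm(pop_size=40, generations=100, p_mut=0.2, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]                          # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # arithmetic crossover
            if rng.random() < p_mut:                          # Gaussian mutation
                i = rng.randrange(len(child))
                lo, hi = BOUNDS[i]
                child[i] += rng.gauss(0, 0.1 * (hi - lo))
            children.append(clamp(child))
        pop = elite + children
    best = min(pop, key=cost)
    return best, cost(best)

if __name__ == "__main__":
    print(genetic_algorithm())
```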

Communications and Electronics

Frontmatter
Detection of Signaling Vulnerabilities in Session Initiation Protocol

This paper investigates the detection of abnormal sequences of signaling packets purposely generated to perpetrate signaling-based attacks in computer networks. The problem is studied for the Session Initiation Protocol (SIP) using a dataset of signaling packets exchanged by multiple end-users. The paper starts by briefly characterizing the adopted dataset and introducing a few definitions, and then proposes a deep learning-based approach to detect possible attacks. The solution is based on the definition of an orthogonal space capable of representing the sampling space for each time step, which is then used to train a recurrent neural network to classify the type of SIP dialog from the sequence of packets observed so far. When a sequence of observed SIP messages is unknown, it represents a possible exploitation of a vulnerability and should be classified accordingly. The proposed classifier is based on supervised learning over two sets of anomalous and non-anomalous sequences, and is then tested to quantify the detection performance on unknown SIP sequences. Experimental results are presented to assess the proposed solution, validating its ability to rapidly detect signaling-based attacks.

Diogo Pereira, Rodolfo Oliveira
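
A minimal recurrent classifier over SIP message sequences, assuming each message type (INVITE, 180 Ringing, 200 OK, ACK, BYE, ...) has been mapped to an integer token and each padded dialog is labelled normal or anomalous; the tokenisation, architecture, and toy data are placeholders, not the orthogonal-space encoding or network proposed in the paper.

```python
import numpy as np
import tensorflow as tf

VOCAB = 16      # number of distinct SIP message types (assumed)
MAX_LEN = 12    # dialogs padded/truncated to this many messages

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB, 8),              # token -> dense vector
        tf.keras.layers.LSTM(32),                         # summarise the dialog so far
        tf.keras.layers.Dense(1, activation="sigmoid"),   # P(anomalous dialog)
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.integers(1, VOCAB, size=(200, MAX_LEN))       # toy tokenised dialogs
    y = rng.integers(0, 2, size=(200,))                   # toy normal/anomalous labels
    model = build_model()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=3, batch_size=32, verbose=0)
    print(model.predict(x[:3], verbose=0).ravel())        # anomaly scores for 3 dialogs
```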
Interference Power Characterization in Directional Networks and Full-Duplex Systems

This paper characterizes the aggregate interference power considering both directional millimeter-wave (mmWave) and In-Band Full-Duplex (IBFDX) communications. The considered scenario admits random locations of the interferers. The analysis considers a general distance-based path loss with a sectored antenna model. The interference caused to a single node also takes into account the residual self-interference due to IBFDX operation. The main contribution of the paper is the characterization of the interference caused by both transmitting nodes and full-duplex operation for different parameters and scenarios.

Ayman T. Abusabah, Rodolfo Oliveira, Luis Irio
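
A Monte Carlo sketch of aggregate interference under a distance-based path loss and a simple two-level sectored antenna gain, with a constant residual self-interference term added for full-duplex operation; all parameter values are illustrative and the analytical characterization derived in the paper is not reproduced.

```python
import numpy as np

def aggregate_interference(n_trials=10000, density=1e-4, radius=500.0,
                           alpha=3.0, p_tx=1.0, g_main=10.0, g_side=0.1,
                           beamwidth_deg=30.0, self_interf=1e-7, seed=0):
    """Sample the aggregate interference power (linear units) at a receiver at the
    origin, from Poisson-distributed interferers with sectored antennas, plus a
    fixed residual self-interference term for in-band full-duplex operation."""
    rng = np.random.default_rng(seed)
    totals = np.empty(n_trials)
    p_main = beamwidth_deg / 360.0                 # probability a beam points at the receiver
    for i in range(n_trials):
        n = rng.poisson(density * np.pi * radius ** 2)
        r = radius * np.sqrt(rng.random(n))        # uniform points in a disc
        r = np.maximum(r, 1.0)                     # avoid the path-loss singularity
        gains = np.where(rng.random(n) < p_main, g_main, g_side)
        totals[i] = np.sum(p_tx * gains * r ** (-alpha)) + self_interf
    return totals

if __name__ == "__main__":
    samples = aggregate_interference()
    print("mean:", samples.mean(), "95th percentile:", np.quantile(samples, 0.95))
```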
FEM-Parameterized Sensorless Vector Control of PMSM Using High-Frequency Voltage Injection

The authors present a high-frequency voltage injection-based sensorless vector control method in which the test signals are injected in the estimated common coordinate system. During the modeling process, a custom-designed permanent magnet synchronous machine is used and its parameters are calculated from measurements combined with the finite element method (FEM). The authors present an enhanced PLL-based estimator and detail the limitations of the sensorless algorithm using the FEM results. Simulation results are presented in which the sensorless method is combined with an incremental encoder, demonstrating the low-frequency-region performance of the proposed algorithm.

Gergely Szabó, Károly Veszprémi

Classification Systems

Frontmatter
Deep Learning-Based Automated Detection of Inappropriate Face Image Attributes for ID Documents

A face photo forms a fundamental element of almost every identity document, such as national ID cards and passports. The governmental agencies issuing such documents may set slightly different requirements for a face image to be acceptable. Nevertheless, some requirements are too critical to be waived, such as a closed mouth, open eyes, and no veil over the face. In this paper, we aim to fully automate the inspection of these three characteristics, thereby enabling face-capturing devices to determine, as soon as a face image is taken, whether any of them is violated. To accomplish this, we propose a deep learning-based approach, defining model architectures that are lightweight enough to enable real-time inference on resource-constrained devices while maintaining a particular focus on prediction accuracy. Lastly, we showcase the performance and efficiency of our approach, which is found to surpass two well-known off-the-shelf solutions in terms of overall precision.

Amineh Mazandarani, Pedro Miguel Figueiredo Amaral, Paulo da Fonseca Pinto, Seyed Jafar Hosseini Shamoushaki
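
A lightweight CNN of the general kind described, framed here as three sigmoid outputs (mouth closed, eyes open, no face veil); the layer sizes, input resolution, and multi-label framing are assumptions, not the authors' architectures.

```python
import tensorflow as tf

def build_attribute_model(input_shape=(96, 96, 3)):
    """Small multi-label CNN: one sigmoid output per face-image attribute."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(3, activation="sigmoid"),  # mouth closed, eyes open, no veil
    ])

model = build_attribute_model()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])
model.summary()
```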
Automatic Cognitive Workload Classification Using Biosignals for Distance Learning Applications

Current e-learning platforms provide recommendations by applying Artificial Intelligence algorithms to model users' preferences based on content, by collaborative filtering, or both, and thus do not consider users' states, such as boredom. Biosignals and Human-Computer Interaction are used in this study to objectively assess the state of the user during a learning task. Preliminary data were obtained from a small sample of young adults using physiological sensors (e.g., electroencephalography, EEG, and functional near-infrared spectroscopy, fNIRS) and computer interfaces (e.g., mouse and keyboard) during cognitive tasks and a Python tutorial. Using Machine Learning (ML), cognitive workload was classified from the EEG and fNIRS data. The results show that it is possible to automatically distinguish cognitive states with an accuracy of around 84%. This procedure will be applied to adjust the difficulty level of learning tasks, model user preferences, and ultimately optimize the distance learning process in real time in a future e-learning platform.

Rui Varandas, Hugo Gamboa, Inês Silveira, Patrícia Gamboa, Cláudia Quaresma
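
A small sketch of the classification step, assuming window-level EEG/fNIRS features have already been extracted into a feature matrix with low/high workload labels; the synthetic features, labels, and classifier choice are illustrative, not the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Toy feature matrix: e.g. EEG band powers and fNIRS HbO/HbR means per window (assumed).
X = np.vstack([rng.normal(0.0, 1.0, size=(60, 6)),      # low-workload windows
               rng.normal(0.8, 1.0, size=(60, 6))])     # high-workload windows
y = np.array([0] * 60 + [1] * 60)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
```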
Design of an Attention Tool Using HCI and Work-Related Variables

The project Prevention of Occupational Disorders in Public Administrations based on Artificial Intelligence (PrevOccupAI) aims to identify and characterize profiles of work-related disorders (WRD) and of daily working activities. WRD have major impacts on the well-being and quality of life of individuals, as well as on productivity and absenteeism. Thus, to increase individuals' quality of life and productivity, a tool focusing on human attention is being developed, integrating the insights of workers of AT (Autoridade Tributária) and the literature on attention and time management. By inputting Human-Computer Interaction (HCI) and work-related variables into an Artificial Intelligence (AI) layer, a dashboard system will provide workers with information on causes of loss of focus they may not be aware of. Additionally, another layer will provide recommendations, such as mindfulness-based tips, to assist in the management of feelings, emotions or work-related concerns, possibly increasing awareness and focus on the present. This manuscript presents the preliminary design of this tool.

Patricia Gamboa, Cláudia Quaresma, Rui Varandas, Helena Canhão, Rute Dinis de Sousa, Ana Rodrigues, Sofia Jacinto, João Rodrigues, Cátia Cepeda, Hugo Gamboa

Smart Healthcare Systems

Frontmatter
Assessment of Visuomotor and Visual Perception Skills in Children: A New Proposal Based on a Systematic Review

Vision is a dominant sense in humans and its performance is a critical factor in children's development and learning. For the health sciences, it is essential to identify the tools most used in the last decade to assess visuomotor and visual perception skills in children under 6 years of age. For that reason, a systematic review was conducted according to the PRISMA criteria. The search was performed using B-on and PubMed, and articles published between 2011 and 2021 were included. Seventeen articles were included, covering two different samples: children with typical development and children with atypical development. The assessment protocols differ between studies, and a large diversity of tests was observed, usually involving “paper and pencil” tasks. Surprisingly, the association of neuroimaging or electrophysiological techniques with traditional assessment is not common in clinical settings. This aspect will be fully analyzed and a proposal for change will be presented.

Ana Isabel Ferreira, Carla Quintão, Cláudia Quaresma
Benefits, Implications and Ethical Concerns of Machine Learning Tools Serving Mental Health Purposes

In recent years, Healthcare and Mental Health have been in the spotlight for the creation of tools using Artificial Intelligence (AI). Mental Health technologies have proliferated, from social robotics to self-help tools and internet-based intervention programs. In particular, machine learning algorithms have been embedded into applications, for example, to provide a probability or classification regarding risk behaviours (e.g., suicide prevention tools). This paper reflects on the use of AI in mental healthcare tools to assist clinicians in the diagnosis, differential diagnosis and monitoring of patients with brain-related disorders. The benefits of using these technologies in mental healthcare are discussed, as well as some limitations and ethical concerns.

Patricia Gamboa, Cláudia Quaresma, Rui Varandas, Hugo Gamboa
Multi-agent System Architecture for Distributed Home Health Care Information Systems

In recent years, the aging population has increased. Home Health Care (HHC) intervention has been an asset; however, it requires technological innovation due to its high level of complexity and requirements. Innovation in the HHC system is crucial, since management still occurs manually using classical methods that are usually centralized and static. The mapping of real HHC problems enables an application model of a distributed intelligent system, considering operational planning needs and promoting a digital and sustainable ecosystem. This work aims to specify a flexible architecture for routing and scheduling tasks in distributed HHC. It relies on multi-agent systems technology to guarantee a fast response to condition changes in existing planning, merged with optimization algorithms that allow optimal solutions to be achieved. Collaboratively, the digitalization of information for real-time monitoring will coordinate a socialized solution using different tools and techniques, ensuring robustness and responsiveness in a domain with emerging needs.

Filipe Alves, Ana Maria A. C. Rocha, Ana I. Pereira, Paulo Leitão

Medical Devices

Frontmatter
Analysis of Electromyography Signals for Control Models of Power-Assisted Stroke Rehabilitation Devices of Upper Limb System

Stroke is a significant affliction that can affect people with varying degrees of severity. One of the most common consequences of stroke is the impairment of muscular motor function to some degree, with two-thirds of patients being affected by upper-limb paralysis. For those cases, the most effective way of regaining muscular motor function is through rehabilitation therapy, which traditionally must be done in a clinical environment. Developments in robotics, batteries and electronics have made accessible the prototyping, production, and use of exoskeleton-type devices technically adapted for personal and residential rehabilitation. This paper presents and discusses EMG signals from the biceps brachii muscle of the upper limb, obtained from a cohort of healthy volunteers. The testing methodology is presented and explained, and a preliminary discussion of the obtained data is provided. Some control considerations, variables and methods are also presented and discussed.

Paulo Bonifacio, Valentina Vassilenko, Guilherme Marques, Diogo Casal
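
A common preprocessing step for EMG-driven control is extracting a smooth envelope from the raw signal; the sketch below band-passes, rectifies, and low-pass filters a synthetic biceps EMG trace. The sampling rate, cut-off frequencies, and synthetic burst are assumptions, not the paper's acquisition protocol.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # sampling rate [Hz] (assumed)

def emg_envelope(raw, fs=FS, band=(20.0, 450.0), lp_cut=5.0):
    """Band-pass, full-wave rectify, then low-pass filter to get the EMG envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw)
    rectified = np.abs(filtered)
    b, a = butter(4, lp_cut / (fs / 2), btype="low")
    return filtfilt(b, a, rectified)

if __name__ == "__main__":
    t = np.arange(0, 4, 1 / FS)
    rng = np.random.default_rng(0)
    burst = ((t > 1) & (t < 3)).astype(float)          # simulated 2 s contraction
    raw = burst * rng.normal(0, 1, t.size) + rng.normal(0, 0.05, t.size)
    env = emg_envelope(raw)
    print("peak envelope amplitude:", round(float(env.max()), 3))
```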
AI-Based Classification Algorithm of Infrared Images of Patients with Spinal Disorders

Infrared thermal imaging is a non-destructive, non-invasive technique that has been shown to be effective in the detection and pre-clinical diagnosis of a variety of disorders. Nowadays, some medical applications of thermography have already been implemented successfully in pre-clinical diagnostics, using AI algorithms to support decision-based medical tasks. However, the massive variety of image types and diseases, together with the numerous individual anatomical features of the human body, continues to pose challenges that still need to be solved. This paper proposes a novel methodology using a convolutional neural network (CNN) to analyse infrared thermal images of the spine region with high accuracy, for quick screening and disease classification of patients.

Anna Poplavska, Valentina Vassilenko, Oleksandr Poplavskyi, Diogo Casal
Improvements on Signal Processing Algorithm for the VOPITB Equipment

The pulse signal obtained non-invasively through an oscillometric method can be used to accurately measure the Cardio-Ankle Vascular Index (CAVI) and Pulse Wave Velocity (PWV), two valuable physiological markers of arterial stiffness and cardiovascular health. The VOPITB device is designed to obtain these markers, whose accuracy depends heavily on the correctness of feature extraction from the pulse wave signals. Typically, a threshold method is employed, making detection success overly dependent on the established level. To overcome this limitation, two signal processing methods are proposed: one based on a modified version of the Pan-Tompkins algorithm and the other centred on a wavelet approach. A statistical study assessing the accuracy of both methods is presented. The new algorithms are presented as an alternative to the simple thresholding method.

Filipa E. Cardoso, Valentina Vassilenko, Arnaldo Batista, Paulo Bonifácio, Sergio Rico Martin, Juan Muñoz-Torrero, Manuel Ortigueira
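
A Pan-Tompkins-style sketch (band-pass, differentiate, square, moving-window integrate, threshold-based peak picking) applied to a synthetic pulse waveform; the filter settings and the synthetic signal are placeholders, and this is neither the modified algorithm nor the wavelet method evaluated in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 500.0  # sampling rate [Hz] (assumed)

def pan_tompkins_like(signal, fs=FS):
    """Classic Pan-Tompkins stages adapted to pulse-wave signals."""
    b, a = butter(3, [0.5 / (fs / 2), 8.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    squared = np.diff(filtered) ** 2                       # derivative + squaring
    window = int(0.15 * fs)                                # 150 ms integration window
    integrated = np.convolve(squared, np.ones(window) / window, mode="same")
    threshold = 0.5 * integrated.max()                     # simple signal-dependent threshold
    peaks, _ = find_peaks(integrated, height=threshold, distance=int(0.4 * fs))
    return peaks

if __name__ == "__main__":
    t = np.arange(0, 10, 1 / FS)
    pulse = np.maximum(np.sin(2 * np.pi * 1.2 * t), 0) ** 3   # synthetic 72 bpm pulse wave
    noisy = pulse + 0.02 * np.random.default_rng(0).normal(size=t.size)
    print("detected beats:", len(pan_tompkins_like(noisy)), "(expected ~12)")
```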
Pilot Study for Validation and Differentiation of Alveolar and Esophageal Air

Breath analysis is an expanding scientific field with great potential for creating personalized and non-invasive health screening and diagnostic techniques. However, the wide range of contradictory results in breath analysis is explained by the lack of an optimal standard procedure for selective breath sampling. Recently, we developed novel instrumentation for selective breath sampling, enabling the precise collection of a pre-determined portion of exhaled air using an AI (Machine Learning) algorithm. This work presents pilot study results for the validation of the developed technology through the differentiation of alveolar and esophageal air obtained from a healthy population (n = 31). The samples were analyzed in-situ by a Gas Chromatography-Ion Mobility Spectrometry (GC-IMS) apparatus, and the obtained spectra were processed with appropriate multivariate classification tools. The results show a promising performance of the proposed AI-based technology for breath sampling adapted to users' age, gender, and physiological condition.

Paulo Santos, Valentina Vassilenko, Carolina Conduto, Jorge M. Fernandes, Pedro C. Moura, Paulo Bonifácio
Application of Machine Learning Methods to Raman Spectroscopy Technique in Dentistry

Raman spectroscopy is nowadays regarded as a practical optical method and non-destructive photonic tool that can be applied in several biomedical fields for analyzing molecular composition. The technique is considered appropriate for the characterization of human dental tissues, from caries detection to the evaluation of demineralization caused by acidic external agents. Discrimination techniques (linear regression) and classification techniques (neural networks) are often used for spectroscopic data analysis in disease detection and identification. Usually, raw Raman spectra obtained from teeth are processed using baseline correction, smoothing and normalization to remove noise, fluorescence and shot noise, and are subsequently analysed using principal component analysis to reduce the dimensionality of the variables. Raman chemical images can be constructed with another simple unsupervised machine learning method, k-means clustering, enabling the identification of similar areas/features and the classification of the acquired spectral data. The choice of machine learning method always depends on the type and amount of information provided by the Raman spectra. In this paper, Principal Component Analysis was applied to the analysis and interpretation of several parameters extracted from Raman spectra acquired before and after a simulated acid challenge of human enamel. These parameters and their correlations allow the protective effect of a fluoride-based dental varnish to be assessed.

Iulian Otel, J. M. Silveira, V. Vassilenko, A. Mata, S. Pessanha
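
A sketch of the unsupervised pipeline mentioned above: PCA for dimensionality reduction followed by k-means clustering of spectra, run here on synthetic single-peak Gaussian spectra standing in for Raman acquisitions; band positions, cluster count, and preprocessing are assumptions, not the paper's dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
shift = np.linspace(200, 1200, 400)                       # Raman shift axis [cm^-1] (illustrative)

def synthetic_spectrum(center):
    peak = np.exp(-((shift - center) ** 2) / (2 * 25 ** 2))
    return peak + 0.05 * rng.normal(size=shift.size)

# Two groups of spectra, e.g. "before" and "after" an acid challenge (synthetic stand-ins).
spectra = np.array([synthetic_spectrum(430) for _ in range(30)] +
                   [synthetic_spectrum(960) for _ in range(30)])

scores = PCA(n_components=3).fit_transform(spectra)        # reduce 400 variables to 3 PCs
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print("cluster sizes:", np.bincount(labels))
```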
Gas Chromatography-Ion Mobility Spectrometry Instrument for Medical Applications: A Calibration Protocol for ppb and ppt Concentration Range

Medical diagnosis research is driving the development of non-invasive diagnostic devices centered on fast and precise analytical tools and instrumentation. This has led to Volatile Organic Compounds (VOCs) being identified as metabolomic biomarkers for several diseases, including respiratory infections, cancer and even non-invasive COVID-19 testing. While VOCs give direct access to physiological states, their applicability requires detection at low concentration ranges (ppbv-pptv). Moreover, their clinical success is strongly dependent on precise and robust calibration methods. In this work, we describe a calibration protocol for volatile organic compounds in the low concentration range (ppbv-pptv) for analytical GC-IMS technology, which offers quick in-situ results for medical diagnosis. The calibration is based on permeation tubes, which are monitored using thermogravimetric methods to estimate the mass-loss rate over time and thereby establish the emitted concentrations. Notwithstanding future improvements, the calibration methodology results presented here are a promising step forward for medical diagnosis and its applications.

Jorge M. Fernandes, Valentina Vassilenko, Pedro C. Moura, Viktor Fetter
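
Permeation-tube calibration rests on a standard relation between the gravimetric mass-loss rate, the molar volume of an ideal gas, and the dilution flow; the sketch below computes the emitted concentration for an illustrative compound, assuming 25 °C and 1 atm (molar volume ≈ 24.46 L/mol). The numbers are examples, not the paper's calibration values.

```python
MOLAR_VOLUME = 24.46  # L/mol for an ideal gas at 25 °C and 1 atm

def permeation_concentration_ppb(mass_loss_ng_per_min, molar_mass_g_per_mol, dilution_flow_l_per_min):
    """Concentration [ppbv] emitted by a permeation tube diluted in a carrier-gas flow:
    C = (rate * Vm) / (M * F), with rate in ng/min, Vm in L/mol, M in g/mol, F in L/min."""
    return (mass_loss_ng_per_min * MOLAR_VOLUME) / (molar_mass_g_per_mol * dilution_flow_l_per_min)

# Example: an acetone tube (M = 58.08 g/mol) losing 120 ng/min into a 0.5 L/min carrier flow.
print(round(permeation_concentration_ppb(120, 58.08, 0.5), 1), "ppbv")  # ~101 ppbv
```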
Backmatter
Metadata
Title
Technological Innovation for Applied AI Systems
Editors
Prof. Dr. Luis M. Camarinha-Matos
Pedro Ferreira
Guilherme Brito
Copyright Year
2021
Electronic ISBN
978-3-030-78288-7
Print ISBN
978-3-030-78287-0
DOI
https://doi.org/10.1007/978-3-030-78288-7
