
2023 | Book

Artificial Intelligence Applications and Innovations

19th IFIP WG 12.5 International Conference, AIAI 2023, León, Spain, June 14–17, 2023, Proceedings, Part I


About this book

This two-volume set of IFIP-AICT 675 and 676 constitutes the refereed proceedings of the 19th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2023, held in León, Spain, during June 14–17, 2023. This event was held in hybrid mode.

The 75 regular papers and 17 short papers presented in this two-volume set were carefully reviewed and selected from 185 submissions. The papers cover the following topics: Deep Learning (Reinforcement/Recurrent Gradient Boosting/Adversarial); Agents/Case Based Reasoning/Sentiment Analysis; Biomedical - Image Analysis; CNN - Convolutional Neural Networks YOLO CNN; Cyber Security/Anomaly Detection; Explainable AI/Social Impact of AI; Graph Neural Networks/Constraint Programming; IoT/Fuzzy Modeling/Augmented Reality; LEARNING (Active-AutoEncoders-Federated); Machine Learning; Natural Language; Optimization-Genetic Programming; Robotics; Spiking NN; and Text Mining/Transfer Learning.

Table of Contents

Frontmatter

Deep Learning (Reinforcement/Recurrent Gradient Boosting/Adversarial)

Frontmatter
A Deep Learning-Based Methodology for Detecting and Visualizing Continuous Gravitational Waves

Since the Gravitational Waves’ initial direct detection, a veil of mystery from the Universe has been lifted, ushering in a new era of intriguing physics, astronomy, and astrophysics research. Unfortunately, not much progress has been reported since then, because so far all of the detected Gravitational Waves fall only into the Binary bursting wave type (B-GWs), which are created by spinning binary compact objects such as black holes. Nowadays, astronomy scientists seek to detect a new type of gravitational waves called Continuous Gravitational Waves (C-GWs). Unlike the complicated burst nature of B-GWs, C-GWs have an elegant and much simpler form, able to provide higher-quality information for exploring the Universe. Nevertheless, C-GWs are much weaker compared to B-GWs, which makes them considerably harder to detect. For this task, we propose a novel Deep-Learning-based methodology, sensitive enough to detect and visualize C-GWs, based on Short-Time-Fourier data provided by LIGO. In extensive experimental simulations, our approach significantly outperformed the state-of-the-art approaches for every applied experimental configuration, revealing the efficiency of the proposed methodology. We expect that this work can help scientists improve their detection sensitivity, leading to new astrophysical discoveries through the incorporation of Data-Mining and Deep-Learning sciences.

Emmanuel Pintelas, Ioannis E. Livieris, Panagiotis Pintelas
A Sharpe Ratio Based Reward Scheme in Deep Reinforcement Learning for Financial Trading

Deep Reinforcement Learning (DRL) is becoming increasingly popular for developing financial trading agents. However, the extreme volatility of financial markets, combined with the difficulty of optimizing DRL agents, leads agents to make riskier trades. As a result, while agents can earn higher profits, they are also vulnerable to significant losses. To evaluate the performance of a financial trading agent, the Profit and Loss (PnL) is usually calculated, and it is also used as the agent’s reward. However, in addition to PnL, traders often take into account other aspects of the agent’s behavior, such as the risk associated with the positions it opens. A widely used metric that captures the risk-related component of an agent’s performance is the Sharpe ratio, which evaluates a portfolio’s risk-adjusted performance. In this paper, we propose a Sharpe ratio-based reward shaping approach that optimizes DRL agents by taking into account both PnL and the Sharpe ratio, with the objective of improving the overall performance of the portfolio by mitigating the risk in the agent’s decisions. The effectiveness of the proposed method in improving different performance metrics is illustrated using a dataset provided by Speedlab AG, which contains 14 instruments.

Georgios Rodinos, Paraskevi Nousi, Nikolaos Passalis, Anastasios Tefas
Algorithmic Forex Trading Using Q-learning

The forex market is a difficult market in which to succeed. Its high noise and volatility make it very hard for traders to open and close positions accurately. Many approaches have been proposed to overcome these difficulties, including algorithmic trading. This research proposes a framework for algorithmic trading using Q-learning with the help of an LSTM. The proposed framework uses a finite state space in reinforcement learning to exploit holding time and higher-timeframe market data. The state space is designed so that the agent can open and close positions flexibly, without being restricted by a fixed time window, allowing it to take profits and avoid losses. The framework was trained and tested on 15 years of historical data for the EUR/USD currency pair at a 5-min timeframe. The system was evaluated on various metrics such as profit, drawdown, Sharpe ratio, holding time, and delta time. The results show that, with its finite state space and flexible time window, the proposed framework achieved consistent profits, reduced losses, and increased overall profits, suggesting that it may be a suitable solution for forex market trading.

Hasna Haifa Zahrah, Jimmy Tirtawangsa
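The tabular Q-learning update at the core of such a framework is Q(s,a) ← Q(s,a) + α·(r + γ·max_a′ Q(s′,a′) − Q(s,a)). A minimal sketch with the state and action encodings left abstract (the dictionary layout below is illustrative, not the paper's representation):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step on a dict-of-dicts table.

    Q maps state -> {action: value}. Returns the updated Q(state, action).
    """
    # Bootstrap from the greedy value of the successor state.
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    td_target = reward + gamma * best_next
    Q[state][action] += alpha * (td_target - Q[state][action])
    return Q[state][action]
```

In a trading setting, `state` would encode features such as holding time and higher-timeframe signals, and `action` would be open/close/hold decisions.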
Control of a Water Tank System with Value Function Approximation

We consider a system of two identical rectangular water tanks. A source of constant water inflow is available, which may be directed to only one tank at a time. The objective is to find a control policy that maximizes the sum of the water levels at some terminal time T, subject to a minimum water-level constraint on each tank. Water exits each tank according to Torricelli’s law (i.e., the outflow velocity depends on the current water level). We derive a closed-form dynamic programming solution in discrete time for this problem without the water-level threshold constraints. Subsequently, we implement the value iteration algorithm on a set of support points to find a control policy with the threshold constraints, where a random forest regressor is iteratively used to update the value function. Our results show consistency between the dynamic programming solution and the value iteration solution.

Shamal Lalvani, Aggelos Katsaggelos
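A minimal sketch of the discrete-time dynamics and finite-horizon objective described above, assuming a Torricelli outflow proportional to √h. All constants (`inflow`, `c`, `dt`) are illustrative, and brute-force enumeration over control sequences stands in for the paper's value iteration with a random forest regressor:

```python
import math

def step(levels, choice, inflow=1.0, c=0.3, dt=0.1):
    """One Euler step: the inflow goes only to the chosen tank,
    and each tank drains at rate c*sqrt(h) (Torricelli's law)."""
    return tuple(
        max(0.0, h + dt * ((inflow if i == choice else 0.0) - c * math.sqrt(h)))
        for i, h in enumerate(levels)
    )

def best_final_sum(levels, horizon):
    """Exhaustive finite-horizon search maximizing the final sum of levels
    (ignoring the minimum-level constraints, as in the closed-form case)."""
    if horizon == 0:
        return sum(levels)
    return max(best_final_sum(step(levels, u), horizon - 1) for u in (0, 1))
```

Value iteration replaces the exhaustive recursion with a value function fitted on support points, which scales to long horizons where enumeration does not.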
Decentralized Multi Agent Deep Reinforcement Q-Learning for Intelligent Traffic Controller

Recent developments in deep reinforcement learning models have impacted many fields, especially decision-based control systems. Urban traffic signal control minimizes traffic congestion as well as overall traffic delay. In this work, we use a decentralized multi-agent reinforcement learning model with a novel state and reward function. In comparison to single-agent models reported in the literature, this approach uses minimal data collection to control the traffic lights. Our model is assessed using synthetically generated traffic data. Additionally, we compare the outcomes to those of existing models and employ the Monaco SUMO Traffic (MoST) Scenario to examine real-time traffic data. Finally, we use statistical model checking (specifically, MultiVeStA) to verify performance properties. Our model performs well on all synthetically generated data and real-time data.

B. Thamilselvam, Subrahmanyam Kalyanasundaram, M. V. Panduranga Rao
Deep Learning Based Employee Attrition Prediction

Employee attrition is a critical issue for the business sector, as departing employees cause various difficulties for a company. Some studies exist on examining the reasons for this phenomenon and predicting it with Machine Learning algorithms. In this paper, the causes of employee attrition are explored in three datasets, one of them being our own novel dataset and the others obtained from Kaggle. Employee attrition was predicted with multiple Machine Learning and Deep Learning algorithms with feature selection and hyperparameter optimization, and their performances were evaluated with multiple metrics. Deep Learning methods showed superior performance on all of the datasets we explored. SMOTE Tomek Links were utilized to oversample minority classes and effectively tackle the problem of class imbalance. The best-performing methods were Deep Random Forest on the HR Dataset from Kaggle and a Neural Network for the IBM and Adesso datasets, with F1 scores of 0.972, 0.642, and 0.853, respectively.

Kerem Gurler, Burcu Kuleli Pak, Vehbi Cagri Gungor
Deep Reinforcement Learning for Robust Goal-Based Wealth Management

Goal-based investing is an approach to wealth management that prioritizes achieving specific financial goals. It is naturally formulated as a sequential decision-making problem as it requires choosing the appropriate investment until a goal is achieved. Consequently, reinforcement learning, a machine learning technique appropriate for sequential decision-making, offers a promising path for optimizing these investment strategies. In this paper, a novel approach for robust goal-based wealth management based on deep reinforcement learning is proposed. The experimental results indicate its superiority over several goal-based wealth management benchmarks on both simulated and historical market data.

Tessa Bauman, Bruno Gašperov, Stjepan Begušić, Zvonko Kostanjčar
DeNISE: Deep Networks for Improved Segmentation Edges

This paper presents Deep Networks for Improved Segmentation Edges (DeNISE), a novel data enhancement technique that uses edge detection and segmentation models to improve the boundary quality of segmentation masks. DeNISE utilizes the inherent differences between two sequential deep neural architectures to improve the accuracy of the predicted segmentation edge. DeNISE applies to all types of neural networks and is not trained end-to-end, allowing rapid experiments to discover which models complement each other. We test and apply DeNISE to building segmentation in aerial images. Aerial images are known for difficult conditions, as they have low resolution and optical noise such as reflections, shadows, and visual obstructions. Overall, the paper demonstrates the potential of DeNISE: using the technique, we improve the baseline results with a building IoU of 78.9%.

Sander Jyhne, Jørgen Åsbu Jacobsen, Morten Goodwin, Per-Arne Andersen
Detecting P300-ERPs Building a Post-validation Neural Ensemble with Informative Neurons from a Recurrent Neural Network

We introduce a novel approach for detecting the sample-level temporal structure of P300 event-related potentials. It consists of extracting the most informative neurons from a Recurrent Neural Network and building a post-validation neural ensemble (PVNE). The weights connecting the recurrent and the output layers are used to rank the recurrent neurons according to their relevance when generating the network’s output. A set of neurons is selected according to their positions in this ranking, and their individual predictions are then combined to obtain the final model’s output. This procedure discards neurons whose role could be more related to maintaining the network’s hidden state than to detecting the P300 events, with an overall performance increase. The use of L1 regularization notably emphasizes this effect. We compare the performance of this approach with both Elman and LSTM RNNs and show that the PVNE is able to detect the sample-level temporal structure of P300 event-related potentials, outperforming the standard models. Sample-level prediction also allows for real-time monitoring of the EEG signal generation related to ERPs.

Christian Oliva, Vinicio Changoluisa, Francisco B. Rodríguez, Luis F. Lago-Fernández
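The selection step described above, ranking recurrent units by the magnitude of their output-layer weights and combining the individual predictions of the top-ranked units, can be sketched as follows (a simplified, hypothetical view; the paper's exact ranking and combination details may differ):

```python
def select_informative(output_weights, k):
    """Rank recurrent units by |output-layer weight| and keep the top-k.

    Units with near-zero output weights are assumed to maintain hidden
    state rather than contribute to detection, so they are discarded.
    """
    ranked = sorted(range(len(output_weights)),
                    key=lambda i: abs(output_weights[i]), reverse=True)
    return ranked[:k]

def ensemble_predict(unit_predictions, selected):
    """Combine the selected units' predictions by simple averaging."""
    return sum(unit_predictions[i] for i in selected) / len(selected)
```

L1 regularization, as noted in the abstract, drives uninformative output weights toward zero and makes this ranking sharper.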
Energy Efficiency of Deep Learning Compression Techniques in Wearable Human Activity Recognition

Deploying deep learning (DL) models on low-power devices for Human Activity Recognition (HAR) is gaining momentum because of the pervasive adoption of wearable sensor devices. However, the outcome of such deployment needs exploration, not only because the topic is still in its infancy, but also because of the wide range of combinations of low-power devices, deep models, and available deployment strategies. We have investigated the application of three compression techniques, namely lite conversion, dynamic quantization, and full-integer quantization, that allow the deployment of deep models on low-power devices. This paper describes how those three compression techniques impact accuracy and energy consumption on an ESP32 device. In terms of accuracy, the full-integer technique incurs an accuracy drop between 2% and 3%, whereas dynamic quantization and lite conversion result in a negligible accuracy drop. In terms of power efficiency, dynamic and full-integer quantization save almost 30% of energy. The adoption of one of those two quantization techniques is recommended to obtain an executable network model, and we advise dynamic quantization given its negligible accuracy drop (Chiara Contoli is a researcher co-funded by the European Union - PON Research and Innovation 2014-2020.).

Chiara Contoli, Emanuele Lattanzi
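Full-integer quantization of the kind evaluated above maps floating-point weights and activations to int8 via an affine scale and zero-point. A simplified illustration of the arithmetic, not tied to any particular toolchain:

```python
def quantize_params(xmin, xmax, qmin=-128, qmax=127):
    """Affine (asymmetric) int8 quantization parameters for range [xmin, xmax].

    The representable range is widened to include 0.0 so that zero
    is exactly representable, as integer-only inference requires.
    """
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a float to an int8 code, rounding and clamping to [qmin, qmax]."""
    return max(qmin, min(qmax, round(x / scale + zero_point)))

def dequantize(q, scale, zero_point):
    """Recover the approximate float value from an int8 code."""
    return scale * (q - zero_point)
```

The round-trip error is bounded by about half the scale, which is the source of the small accuracy drop reported for full-integer quantization.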
Enhancing Medication Event Classification with Syntax Parsing and Adversarial Learning

In this paper, we introduce a method for extracting detailed information from raw medical notes that could help medical providers more easily understand a patient’s medication history and make more informed medical decisions. Our system uses NLP techniques for finding the names of medications and details about the changes to their disposition in unstructured clinical notes. The system was created to extract data from the Contextualized Medication Event Dataset in three subtasks. Our system utilizes a solution based on a large language model enriched with adversarial examples for the medication extraction and event classification tasks. To extract more detailed contextual information about the medication changes, we were motivated by aspect-based sentiment analysis and used the local context focus mechanism to highlight the relevant parts of the context and extended it with information from dependency syntax. Both adversarial learning and the syntax-enhanced local focus mechanism improved the results of our system.

Zsolt Szántó, Balázs Bánáti, Tamás Zombori
Generating Synthetic Vehicle Speed Records Using LSTM

Quality assurance testing of automotive electronic components such as navigation or infotainment displays requires data from genuine car rides. However, traditional static on-site testing methods are time-consuming and costly. To address this issue, we present a novel approach to generating synthetic ride data using Bidirectional LSTM, which offers a faster, more flexible, and environmentally friendly testing process. In this paper, we demonstrate the effectiveness of our approach by generating synthetic vehicle speed along a given route and evaluating the fidelity of the generated output using objective and subjective methods. Our results show that our approach achieves high levels of fidelity and offers a promising solution for quality assurance testing in the automotive industry. This work contributes to the growing research on generative machine learning models and their potential applications in the automotive industry.

Jiri Vrany, Michal Krepelka, Matej Chumlen
Measuring the State-Observation-Gap in POMDPs: An Exploration of Observation Confidence and Weighting Algorithms

The objective of this study is to measure the discrepancy between states and observations within the context of the Partially Observable Markov Decision Process (POMDP). The gap between states and observations is formulated as a State-Observation-Gap (SOG) problem, represented by the symbol $$\varDelta$$, where states and observations are treated as sets. The study also introduces the concept of Observation Confidence (OC), which serves as an indicator of the reliability of the observation, and it is established that there is a positive correlation between OC and $$\varDelta$$. To calculate the cumulative entropy $$\lambda$$ of rewards in $$\langle o, a, \cdot \rangle$$, we propose two weighting algorithms, namely Universal Weighting and Specific Weighting. Empirical and theoretical assessments carried out in the Cliff Walking environment attest to the effectiveness of both algorithms in determining $$\varDelta$$ and OC.

Yide Yu, Yan Ma, Yue Liu, Dennis Wong, Kin Lei, José Vicente Egas-López
Predicting Colour Reflectance with Gradient Boosting and Deep Learning

Colour matching remains a labour-intensive task that requires a combination of the colourist’s skills and a time-consuming trial-and-error process, even when employing Kubelka-Munk, the standard analytical model for colour prediction. The goal of this study is to develop a system that can accurately predict spectral reflectance for variations of colourant-concentration recipes, which could be used to assist the colour matching process. In this study we use a dataset of paint recipes which includes over 10,000 colour samples mixed from more than 40 different colourants. The framework we propose is based on a novel hybrid approach combining an analytical model and a Machine Learning model, where a Machine Learning algorithm is used to correct the spectral reflectance predictions made by the Kubelka-Munk analytical model. To identify the optimal Machine Learning method for our hybrid approach, we evaluate several optimised models including Elastic Net, eXtreme Gradient Boosting, and Deep Learning. The performance stability of the models is studied by performing computationally intensive Monte Carlo validation. In this work we demonstrate that our hybrid approach based on an eXtreme Gradient Boosting regressor achieves superior performance in colour prediction, with good stability and error rates as low as 0.48 for average $$dE_{CMC}$$ and 1.06 for RMSE.

Asei Akanuma, Daniel Stamate, J. Mark Bishop
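The hybrid scheme described above trains a learner on the residuals of the analytical model, so the final prediction is the analytical output plus a learned correction. A deliberately tiny sketch, with a 1-D least-squares line standing in for the gradient-boosting regressor used in the paper:

```python
def fit_residual_correction(features, targets, analytical):
    """Fit a 1-D least-squares line to the residuals target - analytical(x).

    Returns a hybrid predictor: analytical(x) + learned correction.
    In the paper's framework the corrector is an XGBoost regressor over
    full recipes; a straight line is used here purely for illustration.
    """
    resid = [t - analytical(x) for x, t in zip(features, targets)]
    n = len(features)
    mx = sum(features) / n
    mr = sum(resid) / n
    sxx = sum((x - mx) ** 2 for x in features)
    slope = sum((x - mx) * (r - mr) for x, r in zip(features, resid)) / sxx
    intercept = mr - slope * mx
    return lambda x: analytical(x) + slope * x + intercept
```

The appeal of the design is that the analytical model carries the physics while the learner only has to model its systematic error, which is a much easier target.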
The Importance of the Current Input in Sequence Modeling

The latest advances in sequence modeling are mainly based on deep learning approaches. The current state of the art involves variations of the standard LSTM architecture, combined with several adjustments that improve the final prediction rates of the trained neural networks. However, in some cases these adaptations might be overly tuned to the particular problems being addressed. In this article, we show that a very simple idea, adding a direct connection between the input and the output that skips the recurrent module, leads to an increase in prediction accuracy on sequence modeling problems related to natural language processing. Experiments carried out on different problems show that adding this kind of connection to a recurrent network always improves the results, regardless of the architecture and training-specific details. When this idea is introduced into the models that lead the field, the resulting networks achieve a new state-of-the-art perplexity on language modeling problems.

Christian Oliva, Luis F. Lago-Fernández
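The idea described above can be reduced to a scalar toy example: the output layer receives the raw current input directly, in addition to the recurrent hidden state (all weights below are illustrative scalars, not a trained model):

```python
import math

def rnn_step(x, h, w_in, w_rec, w_out, w_skip):
    """One Elman-style step with a direct input-output skip connection.

    The output y sees both the new hidden state and the raw current
    input; the w_skip * x term bypasses the recurrent module entirely.
    """
    h_new = math.tanh(w_in * x + w_rec * h)
    y = w_out * h_new + w_skip * x
    return y, h_new
```

With `w_out = 0` the cell degenerates to a pure pass-through of the current input, which makes explicit what the skip path contributes on top of the recurrence.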
Towards Historical Map Analysis Using Deep Learning Techniques

This paper presents methods for automatic analysis of historical cadastral maps. The methods are developed as part of a complex system for map digitisation, analysis, and processing. Our goal is to detect important features in individual map sheets to allow their further processing and to connect the sheets into one seamless map that can be better presented online. We concentrate on detection of the map frame, which delimits the important segment of the map sheet. Other crucial features are the so-called inches that define the measuring scale of the map. We also detect the actual map area. We assume that standard computer vision methods can improve the results of deep learning methods. Therefore, we propose novel segmentation approaches that combine standard computer vision techniques with neural nets (NNs). For all the above-mentioned tasks, we evaluate and compare our so-called “Combined methods” with state-of-the-art methods based solely on neural networks. We show that combining standard computer vision techniques with NNs can outperform the state-of-the-art approaches in the scenario where only a little training data is available. We have also created a novel annotated dataset that is used for network training and evaluation. This corpus is freely available for research purposes, which represents another contribution of this paper.

Ladislav Lenc, Josef Baloun, Jiří Martínek, Pavel Král

Agents/Case Based Reasoning/Sentiment Analysis

Frontmatter
Analysis of the Lingering Effects of Covid-19 on Distance Education

Education has been severely impacted by the spread of the COVID-19 virus. To prevent the spread of the virus and maintain education in the current climate, governments compelled the public to adopt online platforms. Consequently, this decision has affected numerous lives in various ways. To investigate the impact of COVID-19 on students’ education, we amassed a dataset consisting of 10,000 tweets. The motivations of the study are: (i) to analyze the positive, negative, and neutral effects of COVID-19 on education; (ii) to analyze the opinions of stakeholders in their tweets about the transition from formal education to e-learning; (iii) to analyze people’s feelings and reactions to these changes; and (iv) to analyze the effects of different training methods on different groups. We constructed emotion recognition models utilizing shallow and deep techniques, including Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Random Forest (RF), Naive Bayes (NB), Support Vector Machine (SVM), and Logistic Regression (LR). RF algorithms with a bag-of-words model performed best, achieving over 80% accuracy in recognizing emotions.

Büşra Kocaçınar, Nasibullah Qarizada, Cihan Dikkaya, Emirhan Azgun, Elif Yıldırım, Fatma Patlar Akbulut
Exploring the Power of Failed Experiences in Case-Based Reasoning for Improved Decision Making

Case-based reasoning (CBR) is a popular approach for problem-solving and decision-making that involves using previous cases as a basis for reasoning about new situations. While CBR has shown promise in many domains, it is not immune to errors and failures. One limitation of the approach is that it tends to focus primarily on successful cases, ignoring the potential value of failed cases as a source of learning and insight. While many studies have focused on the role of successful cases in CBR, less attention has been given to the value of analyzing failed cases. In this paper, we explore the benefits of reasoning from both successful and failed cases in CBR. We argue that by examining both types of cases, we can identify patterns and insights that can help to refine CBR methods, improve their accuracy and efficiency, and reduce the likelihood of future failures. Using a combination of theoretical modeling and empirical analysis, we demonstrate that failed cases can provide valuable insights into identifying potential solutions that might otherwise be overlooked. To illustrate our approach, we present a case study in which we apply our reasoning methodology to a real-world problem in the field of energy management. Our analysis demonstrates that by considering both successful and failed cases, we can identify new and more effective solutions to the problem at hand.

Fateh Boulmaiz, Patrick Reignier, Stephane Ploix
Improving Customer Experience in Call Centers with Intelligent Customer-Agent Pairing

Customer experience plays a critical role for any profitable organisation or company. A satisfied customer corresponds to higher rates of customer retention and better representation in the market. One way to improve customer experience is to optimize the functionality of the call center. In this work, we have collaborated with the largest provider of telecommunications and Internet access in the country, and we formulate the customer-agent pairing problem as a machine learning problem. The proposed learning-based method yields a significant performance improvement of about $$215\%$$ compared to a rule-based method.

Stylianos Filippou, Andreas Tsiartas, Petros Hadjineophytou, Spyros Christofides, Kleanthis Malialis, Christos G. Panayiotou
Mind the Gap: Addressing Incompleteness Challenge in Case-Based Reasoning Applications

Data quality is a crucial aspect of case-based reasoning (CBR), and incomplete data is a ubiquitous challenge that can significantly affect the accuracy and effectiveness of CBR systems. Incompleteness arises when a case lacks relevant information needed to solve a problem. Existing CBR systems often struggle to handle such cases, leading to sub-optimal solutions, and making it challenging to apply CBR in real-world settings. This paper highlights the importance of data quality in CBR and emphasizes the need for systems to handle incomplete data effectively. The authors provide for the first time a framework for addressing the issue of incompleteness under the open-world assumption. The proposed approach leverages a combination of data-driven and knowledge-based techniques to detect incompleteness. The approach offers a promising solution to the incompleteness dimension of data quality in CBR and has the potential to improve the practical utility of CBR systems in various domains as illustrated by the results of a real data-based evaluation.

Fateh Boulmaiz, Patrick Reignier, Stephane Ploix
Sentiment Analysis of Tweets on Online Education during COVID-19

The global coronavirus disease (COVID-19) pandemic has devastated public health, education, and the economy worldwide. As of December 2022, more than 524 million individuals have been diagnosed with the new coronavirus, and nearly 6 million people have perished as a result of this deadly sickness, according to the World Health Organization. Universities, colleges, and schools are closed to prevent the coronavirus from spreading. Therefore, distance learning became a required method of advancing the educational system in contemporary society. Adjusting to the new educational system was challenging for both students and instructors, which resulted in a variety of complications. People began to spend more time at home; thus, social media usage rose globally throughout the epidemic. On social media channels such as Twitter, people discussed online schooling. Some individuals viewed online schooling as superior, while others viewed it as a failure. This study analyzes the attitudes of individuals toward distance education during the pandemic. Sentiment analysis was performed using natural language processing (NLP) and deep learning methods. Recurrent neural network (RNN) and one-dimensional convolutional neural network (1DCNN)-based network models were used during the experiments to classify neutral, positive, and negative contents.

Elif Yıldırım, Harun Yazgan, Onur Özbek, Ahmet Can Günay, Büşra Kocaçınar, Öznur Şengel, Fatma Patlar Akbulut

Biomedical - Image Analysis

Image Classification Using Class-Agnostic Object Detection

Human-in-the-loop interfaces for machine learning provide a promising way to reduce the annotation effort required to obtain an accurate machine learning model, particularly when it is used with transfer learning to exploit existing knowledge gleaned from another domain. This paper explores the use of a human-in-the-loop strategy that is designed to build a deep-learning image classification model iteratively using successive batches of images that the user labels. Specifically, we examine whether class-agnostic object detection can improve performance by providing a focus area for image classification in the form of a bounding box. The goal is to reduce the amount of effort required to label a batch of images by presenting the user with the current predictions of the model on a new batch of data and only requiring correction of those predictions. User effort is measured in terms of the number of corrections made. Results show that the use of bounding boxes always leads to fewer corrections. The benefit of a bounding box is that it also provides feedback to the user because it indicates whether or not the classification of the deep learning model is based on the appropriate part of the image. This has implications for the design of user interfaces in this application scenario.

Geoffrey Holmes, Eibe Frank, Dale Fletcher
Ince-PD Model for Parkinson’s Disease Prediction Using MDS-UPDRS I & II and PDQ-8 Score

Parkinson’s disease (PD) is one of the most prevalent and complex neurodegenerative disorders. Timely and accurate diagnosis is essential for the effectiveness of the initial treatment and improvement of the patients’ quality of life. Since PD is an incurable disease, early intervention is important to delay the progression of symptoms and the severity of the disease. This paper presents Ince-PD, a new, highly accurate model for PD prediction based on Inception architectures for time-series classification, using wearable data derived from IoT sensor-based recordings and surveys from the mPower dataset. The feature selection process was based on the clinical knowledge shared by the medical experts through the course of the EU-funded project ALAMEDA. The algorithm predicted total MDS-UPDRS I & II scores with a mean absolute error of 1.97 per time window and 2.27 per patient, as well as PDQ-8 scores with a mean absolute error of 2.17 per time window and 2.96 per patient. Our model demonstrates a more effective and accurate method for predicting Parkinson’s disease compared to some of the most significant deep learning algorithms in the literature.

Nikos Tsolakis, Christoniki Maga-Nteve, Georgios Meditskos, Stefanos Vrochidis, Ioannis Kompatsiaris
Medical Knowledge Extraction from Graph-Based Modeling of Electronic Health Records

The variety and dimensionality of health-related data cannot be handled by human perception alone to arrive at useful knowledge or conclusions for proposing individualized treatment, diagnosis, or prognosis for a disease. Treating this wealth of heterogeneous data in a tabular manner deprives us of the knowledge hidden in the interactions between the different types of data. In this paper, the potential of graph-based data modeling and management is explored. Entities such as patients, encounters, observations, and immunizations are structured as graph elements with meaningful connections and are, consequently, encoded to form graph embeddings. The graph embeddings contain information about the graph structure in the vicinity of the node. This vicinity contains multiple low-level graph embeddings that are further encoded into a single high-level vector for utilization in downstream tasks by applying higher-order statistics on Gaussian Mixture Models. With reference to the Covid-19 pandemic, we make use of synthetic data to predict the risk of a patient’s fatality, with a focus on preventing hospital overcrowding. Initial results demonstrate that utilizing networks of health data entities to generate compact medical representations has a positive impact on the performance of machine learning tasks. Since the generated Electronic Health Record vectors are label independent, they can be utilized for any classification or clustering task.

Athanasios Kallipolitis, Parisis Gallos, Andreas Menychtas, Panayiotis Tsanakas, Ilias Maglogiannis
Predicting Alzheimer's Disease with AI and Brain Imaging Data

According to the latest estimates, one in three elderly people will suffer from dementia in 2050, with the majority of cases being Alzheimer's disease. This study proposes a novel deep-learning CNN architecture for the prediction of Alzheimer's disease, utilizing the OASIS-2 dataset. On average, the proposed approach achieves a testing accuracy of 96.37% for 100-fold cross validation, outperforming related works such as VGG-19 [4], AlexNet [3], and GoogLeNet [3] by 0.55%, 4.97%, and 3.35%, respectively. Experimental results provide positive evidence that the proposed approach, with much fewer parameters, has strong potential to tackle the prediction problem of Alzheimer's disease. In the future, a specialized system can be developed to handle Alzheimer's disease and alleviate the diagnostic burden of doctors. This intelligent system can collaborate with doctors in the early stages of diagnosis, reducing the burden of consultations. If the system achieves higher accuracy, it can enable early detection and treatment, delaying the progression of symptoms. Such a system can detect Alzheimer's disease early, monitor the patient's condition over time, provide personalized treatment plans, and achieve better results.

Chun-Cheng Peng, Guan-Wei Lin, Jian-Min Lin, Guan-Ting Chen, Wei-Chen Liu
Pre-trained Model Robustness Against GAN-Based Poisoning Attack in Medical Imaging Analysis

Deep learning revolutionizes healthcare, particularly medical image classification, where analysis performance is aided by public architectures and transfer learning from pre-trained models. However, these models are vulnerable to adversarial attacks as they rely on learned parameters, and their unexplainable nature can make it challenging to identify and fix the root cause of a model's failure after an attack. Given the increasing use of pre-trained models in life-critical domains like healthcare, testing their robustness against attacks is essential. Evasion and poisoning attacks are the two primary attack types; poisoning attacks have a broader range of poison sample-generating methods, making it more critical to test model robustness under them than under evasion attacks. Poisoning attacks do not require an attacker to have a complete understanding of the model to corrupt it, making them more likely to occur in the real world. This paper evaluates the robustness of well-known pre-trained models trained as binary classifiers under a poisonous-label attack. The attacks use GANs to generate mislabeled fake images and feed poison samples to the model in a black-box manner. The model's robustness is evaluated by the degree of performance degradation, measured with classification metrics. We found that the ConvNeXt architecture is the most robust against this type of attack, suggesting that transformer architecture can be used to build a more robust deep-learning model.

Pakpoom Singkorapoom, Suronapee Phoomvuthisarn
Semi-supervised Brain Tumor Segmentation Using Diffusion Models

Semi-supervised learning can be a promising approach in expediting the process of annotating medical images. In this paper, we use diffusion models to learn visual representations from multi-modal medical images in an unsupervised setting. These learned representations are then employed for the challenging downstream task of brain tumor segmentation. To avoid feature selection when using pixel-level classifiers, we propose fine-tuning the noise predictor network for semantic segmentation. We compare these methods against a supervised baseline over a varying number of training samples and evaluate their performance on a substantially larger test set. Our results show that, with less than 20 training samples, all methods outperform the supervised baseline across all tumor regions. Additionally, we present a practical use-case for patient-level tumor segmentation using limited supervision. The code we used and our trained diffusion model are publicly available (https://github.com/risc-mi/braintumor-ddpm).

Ahmed Alshenoudy, Bertram Sabrowsky-Hirsch, Stefan Thumfart, Michael Giretzlehner, Erich Kobler

Classification

Frontmatter
A Methodology for Emergency Calls Severity Prediction: From Pre-processing to BERT-Based Classifiers

Emergency call centers are often required to properly assess and prioritise emergency situations pre-intervention, in order to provide the required assistance to callers efficiently. In this paper, we present an end-to-end pipeline for emergency call analysis. Such a tool can prove useful, as it is possible for the intervention team to misinterpret the severity of the situation or mis-prioritise callers. The data used throughout this work is one week's worth of emergency call recordings provided by the French SDIS 25 firemen station, located in the Doubs. We pre-process the calls and evaluate several artificial intelligence models in classifying a caller's situation as either severe or non-severe. We demonstrate through our results that it is possible, with the right selection of algorithms, to predict whether a call will result in a serious injury with 71% accuracy, based on the caller's speech only. This shows that it is indeed possible to assist emergency centers with an autonomous tool capable of analysing a caller's description of their situation and assigning an appropriate priority to their call.

Marianne Abi Kanaan, Jean-François Couchot, Christophe Guyeux, David Laiymani, Talar Atechian, Rony Darazi
A Temporal Metric-Based Efficient Approach to Predict Citation Counts of Scientists

Citation count is one of the essential factors in understanding and measuring the impact of a scientist or a publication. Estimating the future impact of scientists or publications is crucial, as it assists in making decisions about potential awardees of research grants, appointing researchers to scientific positions, etc. Many studies have been proposed to estimate a publication's future citation count; however, limited research has been conducted on forecasting the citation-based influence of scientists. The authors of scientific manuscripts are connected through common publications, which can be captured in dynamic network structures with multiple features on the nodes and the links. The topological structure is an essential factor to consider, as it reveals important information about such dynamic networks, such as the rise and fall of node properties like in-degree over time. In this work, we have developed an approach for predicting the citation count of scientists using topological information from dynamic citation networks and relevant content of individual publications. Citation count prediction is formulated as a node classification task, which is accomplished by using seven machine learning-based classification models for various class categories. The highest average accuracy of 85.19% is achieved with the XGBoost classifier on the High Energy Physics - Theory citation network dataset.
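As an illustrative sketch (not the authors' pipeline), the temporal topological signal the abstract mentions can be derived from a timestamped citation edge list: per node, an in-degree count for each year, which then serves as a feature vector for node classification. Names and data are invented.

```python
from collections import defaultdict

def temporal_in_degree(edges, years):
    """edges: (citing, cited, year) triples -> {node: [in-degree per year]}."""
    feats = defaultdict(lambda: [0] * len(years))
    index = {y: i for i, y in enumerate(years)}
    for _citing, cited, year in edges:
        if year in index:
            feats[cited][index[year]] += 1
    return dict(feats)

# Toy citation network: p3 cites both earlier papers in 2020
edges = [("p2", "p1", 2019), ("p3", "p1", 2020), ("p3", "p2", 2020)]
print(temporal_in_degree(edges, [2019, 2020]))
# {'p1': [1, 1], 'p2': [0, 1]}
```

The rise and fall of such per-year counts is exactly the kind of dynamic-network property the paper feeds into its classifiers, alongside publication content features.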

Saumya Kumar Dewangan, Shrutilipi Bhattacharjee, Ramya D. Shetty
Comparing Vectorization Techniques, Supervised and Unsupervised Classification Methods for Scientific Publication Categorization in the UNESCO Taxonomy

A comparison of classification strategies for scientific articles using the UNESCO taxonomy for categorization is presented. An annotated set of articles was vectorized using TF-IDF, Doc2Vec, BERT and SPECTER, and it was established that, among those options, SPECTER provided the best separability properties, using quantitative metrics as well as qualitative inspection of 2D projections using t-SNE. When pairing the best-performing vectorization strategy with classical machine learning strategies for the classification task, such as multi-layer perceptrons and support vector machines, comparable results are found, concluding that the choice of text representation strategy exerts a greater impact than the choice of classifier. The most problematic areas for classification were identified, and a cascading classification strategy was implemented and evaluated. Unsupervised methods were also tested to consider the case when annotated data is not readily available and to assess their suitability. Two different unsupervised methods were used, and it was determined that k-means yielded the best results when considering 3 times the number of categories as the optimal number of clusters.
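Of the vectorization options compared above, TF-IDF is the only one that needs no pretrained model, so it can be sketched in a few lines. This is a stdlib-only illustration; the smoothing variant of the IDF term is our choice, not necessarily the paper's.

```python
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: weight} vector per tokenized document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log((1 + n) / (1 + df[t]))
                        for t, c in tf.items()})
    return vectors

docs = [["neural", "networks"], ["neural", "taxonomy"], ["unesco", "taxonomy"]]
vecs = tfidf(docs)
# "neural" appears in two documents, so it is down-weighted
# relative to the rarer "networks" within the first document
print(vecs[0]["neural"] < vecs[0]["networks"])  # True
```

Doc2Vec, BERT, and SPECTER instead map documents into dense learned spaces, which is why the paper finds they separate UNESCO categories differently from this sparse weighting.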

Neil Villamizar, Jesús Wahrman, Minaya Villasana
Decoding Customer Behaviour: Relevance of Web and Purchasing Behaviour in Predictive Response Modeling

In this paper, an approach is presented to improve the online direct marketing process with respect to predictive customer response modeling. Customer response models usually face a class imbalance problem, due to a low conversion rate relative to the total number of targeted offers. To avoid bias towards the negative class in the machine learning process, this paper proposes a combination of random undersampling and the Support Vector Machine method for data pre-processing, increasing the predictive performance of the subsequently used classifiers – Decision Tree and Random Forest. In addition, different attribute groups are tested in terms of their influence on model performance, which enables marketing decision makers to better understand which attributes have the highest impact in determining a customer's response to a direct marketing campaign. The results showed that the proposed method successfully resolved the class imbalance and significantly increased the accuracy of the response prediction, and that web behavior is the most significant factor when predicting the customer's response.
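The random-undersampling step mentioned above can be sketched as follows (an assumed minimal version, not the authors' exact implementation): the majority class is subsampled down to the minority class size before the classifiers are trained.

```python
import random

def random_undersample(X, y, seed=0):
    """Balance classes by randomly discarding majority-class samples."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_min = min(len(v) for v in by_class.values())
    Xb, yb = [], []
    for label, samples in by_class.items():
        for xi in rng.sample(samples, n_min):
            Xb.append(xi)
            yb.append(label)
    return Xb, yb

X = [[i] for i in range(10)]
y = [0] * 8 + [1] * 2          # 8 non-responders vs 2 responders
Xb, yb = random_undersample(X, y)
print(yb.count(0), yb.count(1))  # 2 2
```

The paper additionally filters with an SVM during pre-processing; the point of the sketch is only the class-balancing that removes the bias towards the negative (non-responder) class.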

Sunčica Rogić, Ljiljana Kašćelan
Exploring Pairwise Spatial Relationships for Actions Recognition and Scene Graph Generation

Visual scene understanding is a fundamental problem and a complex task in computer vision, which not only requires identifying objects in isolation, but also the ability to understand and recognize the relationships between them. These relationships can be abstracted into a semantic representation of the form ⟨subject, predicate, object⟩, resulting in a scene graph that captures much of the visual information and semantics in the scene. In recent years, scene graph generation with the message-passing mechanism [1] has been an active area of research, as it has the potential to capture global dependencies between objects and their relationships. Inspired by these developments, this paper introduces a novel scene graph generation approach based on spatial relationships. Our approach performs a classification of the spatial relationship between each pair of objects to generate the initial scene graph. Then, based on the semantic features, the model detects action relationships in the scene and updates the scene graph by applying the message-passing mechanism. We conclude this paper by comparing the proposed method with the state-of-the-art approaches [1–7] and demonstrate the effectiveness of our method on the Visual Genome [1] dataset.

Anfel Amirat, Nadia Baha, Lamine Benrais
Fake News Detection Utilizing Textual Cues

Easy and quick information diffusion on the web, and especially in social media, has rapidly proliferated during the past decades. As information is posted without any verification of its veracity, fake news has become a problem of great influence in our information-driven society. Thus, to mitigate the consequences of fake news and its propagation, automated approaches to detect malicious content were created. This paper proposes an effective framework that utilizes only the text features of the news. We evaluate several features for differentiating fake from real news, and we identify the best-performing feature set that maximizes performance, using feature selection techniques. Text representation features were also explored as a potential solution. Additionally, the most popular Machine Learning and Deep Learning models were tested to identify the model that achieves the maximum accuracy. Our findings reveal that a combination of linguistic features and text-based word vector representations through ensemble methods can predict fake news with high accuracy. eXtreme Gradient Boosting (XGB) outperformed all other models, while linear Support Vector Machine (SVM) achieved comparable results.
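Hand-crafted linguistic features of the kind evaluated in frameworks like the one above can be computed directly from the text; the exact feature set below is ours for illustration, not the paper's.

```python
import string

def text_features(text):
    """A few toy linguistic cues often used in fake-news detection."""
    tokens = text.split()
    n = max(len(tokens), 1)
    return {
        "n_tokens": len(tokens),
        "avg_word_len": sum(len(t.strip(string.punctuation)) for t in tokens) / n,
        "exclam_ratio": text.count("!") / max(len(text), 1),
        "caps_ratio": sum(t.isupper() for t in tokens) / n,
    }

feats = text_features("SHOCKING!! You won't BELIEVE this cure!")
print(feats["n_tokens"], feats["caps_ratio"] > 0)  # 6 True
```

Such per-document vectors would then be concatenated with word-vector representations and fed to the classifiers (XGB, SVM, etc.) compared in the paper.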

Vasiliki Chouliara, Paraskevas Koukaras, Christos Tjortjis
Fusion of Learned Representations for Multimodal Sensor Data Classification

Time-Series data collected using body-worn sensors can be used to recognize activities of interest in various medical applications such as sleep studies. Recent advances in other domains, such as image recognition and natural language processing have shown that unlabeled data can still be useful when self-supervised techniques such as contrastive learning are used to generate meaningful feature space representations. Labeling data for Human Activity Recognition (HAR) and sleep disorder diagnosis (polysomnography) is difficult and requires trained professionals. In this work, we apply learned feature representation techniques to multimodal time-series data. By using signal-specific representations, based on self-supervised and supervised learning, the channels can be evaluated to determine if they are likely to contribute to correct classification. The learned representation embeddings are then used to process each channel into a new feature space that serves as input into a neural network. This results in a better understanding of the importance of each signal modality as well as the potential applicability of newer self-supervised techniques to time-series data.

Lee B. Hinkle, Gentry Atkinson, Vangelis Metsis
Hyperspectral Classification of Recyclable Plastics in Industrial Setups

The development of the circular economy has attracted significant research interest in recent years. The present work explores the use of HyperSpectral Imaging (HSI) sensors and Machine Learning (ML) techniques for the categorization of recyclable plastics in challenging industrial conditions. Specifically, we present the pipeline for the pre- and post- processing of the spectral signals and we compare four well-known classifiers in categorizing plastics into seven material types, according to the international standards of the circular economy and material recycling in particular. The obtained results show that hyperspectral technology can contribute to the successful categorization of plastics in industrial conditions.

Georgios Alexakis, Michail Maniadakis
Mining the Discussion of Monkeypox Misinformation on Twitter Using RoBERTa

The monkeypox outbreak in 2022 raised uncertainty leading to misinformation and conspiracy narratives in social media. The belief in misinformation leads to poor judgment, decision making, and even to unnecessary loss of life. The ability of misinformation to spread through social media may worsen the harms of different emergencies, and fighting it is therefore critical. In this work, we analyzed the discussion of misinformation related to monkeypox on Twitter by training different classifiers that differentiate between tweets that spread and tweets that counter misinformation. We collected over 1.4M tweets related to the discussion of monkeypox on Twitter from over 500K users and calculated word and sentence embeddings using Natural Language Processing (NLP) methods. We trained multiple machine learning classification models and fine-tuned a Robustly Optimized BERT Pretraining Approach (RoBERTa) model on a set of 3K hand-labeled tweets. We found that the fine-tuned RoBERTa model provided superior results and used it to classify the complete dataset into three categories, namely misinformation, counter misinformation and neutral. We analyzed the behavioral patterns and domains that were used in misinformation and counter misinformation tweets. The findings provide insights into the scale of misinformation within the discussion on monkeypox and the behavior of tweets and users that spread and counter misinformation over time. In addition, the findings allow us to derive policy recommendations to address misinformation in social media.

Or Elroy, Dmitry Erokhin, Nadejda Komendantova, Abraham Yosipof
Multi-feature Transformer for Multiclass Cyberbullying Detection in Bangla

Cyberbullying detection is a global issue that must be addressed to improve the cyberspace for millions of online users, services, and organizations. Online harassment of the general public and celebrities is now commonplace on social media, particularly in Bangladesh. In this paper, we present a novel multi-feature transformer followed by a deep neural network for multi-dimensional cyberbullying detection. Using online Bangla textual data, we introduce the user's social profile, the lexical features, the contextual embedding, and the semantic similarities among word associations in Bangla in order to develop an effective and robust cyberbullying detection system. Our proposed method can detect cyberbullying in Bangla with a 98% detection accuracy for threats and a 90% detection accuracy for sarcastic comments. The aggregate accuracy of all six multiclass labels is 86.3%. In addition, the experimental results show that the proposed technique outperforms the state-of-the-art methods for detecting cyberbullying in Bangla.

Zaman Wahid, Abdullah Al Imran
Optimizing Feature Selection and Oversampling Using Metaheuristic Algorithms for Binary Fraud Detection Classification

Identifying fraudulent transactions and preventing unauthorized individuals from revealing credit card information are essential tasks for financial entities. Fraud detection systems perform this task by distinguishing fraudulent transactions from normal ones. Usually, the data used for fraud detection is imbalanced, containing many more instances of normal transactions than fraudulent ones. This diminishes classification results, because it is hard to train a classifier that distinguishes between the classes. Another problem arises from the large number of features under study for the fraud detection task. This paper utilizes different metaheuristic algorithms for feature selection, to solve the problem of unneeded features, and uses the Synthetic Minority Oversampling TEchnique (SMOTE) to solve the imbalance problem of the data, using different classification algorithms. The metaheuristic algorithms include Particle Swarm Optimization (PSO), the Salp Swarm Algorithm (SSA), the Grey Wolf Optimizer (GWO), and the Multi-Verse Optimizer (MVO), whereas the classification algorithms include the Logistic Regression (LR), Decision Tree (DT), and Naive Bayes (NB) algorithms. The results show that applying the oversampling technique generated better G-Mean and Recall values, while the feature selection process enhanced the results of almost all the classification algorithms.
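The core idea of SMOTE, used above for the fraud minority class, is to create synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbours. The following is a minimal plain-Python sketch of that idea, not the standard library implementation.

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Create synthetic points by interpolating each sample toward one
    of its k nearest minority-class neighbours (SMOTE-style)."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        nbrs = sorted((m for m in minority if m is not x),
                      key=lambda m: dist(x, m))[:k]
        nb = rng.choice(nbrs)
        lam = rng.random()  # position along the segment x -> nb
        synthetic.append([xi + lam * (ni - xi) for xi, ni in zip(x, nb)])
    return synthetic

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]  # toy fraud samples
new_pts = smote_like(minority, n_new=4)
print(len(new_pts))  # 4
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the region the fraud cases occupy, which is what lifts G-Mean and Recall in the paper's results.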

Mariam M. Biltawi, Raneem Qaddoura, Hossam Faris
Predicting Student Performance with Virtual Resources Interaction Data

E-learning can act where traditional education cannot, thanks to its ease of interaction with virtual resources. In this work, the possibility of predicting the final outcome of students based solely on their interaction with virtual resources is tested. The study aims to evaluate the effectiveness of various machine learning and deep learning models in predicting student performance based on interactions with these virtual resources. The OULA dataset is used to evaluate the proposed models, predicting not only whether a student will pass or fail, but also whether the student will receive a distinction or drop out of the course prematurely. Some of the models trained in this paper, such as Random Forest, have achieved high accuracy levels, up to 96% for binary classification and up to 80% for multiclass classification. These results indicate that it is possible to predict student performance based exclusively on interactions during the course, and to make predictions for each course individually. They also demonstrate the effectiveness of the proposed models and the potential of virtual resources in predicting student performance.
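A prediction pipeline of this kind starts by aggregating raw interaction logs into one feature vector per student. The sketch below is hypothetical (field names and resource types are ours, not the dataset's schema), showing only the aggregation step.

```python
from collections import Counter

def student_features(clicks, resource_types):
    """clicks: (student, resource_type, n_clicks) rows -> per-student vectors."""
    totals = {}
    for student, rtype, n in clicks:
        totals.setdefault(student, Counter())[rtype] += n
    return {s: [c.get(r, 0) for r in resource_types] for s, c in totals.items()}

# Toy interaction log for two students across two virtual resource types
clicks = [("s1", "quiz", 5), ("s1", "forum", 2), ("s2", "quiz", 1)]
print(student_features(clicks, ["quiz", "forum"]))
# {'s1': [5, 2], 's2': [1, 0]}
```

Vectors like these, computed per course, are the kind of input the classifiers above would receive for the pass/fail/distinction/withdrawal prediction.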

Alex Martínez-Martínez, Raul Montoliu, Jesús Aguiló Salinas, Inmaculada Remolar
Probabilistic Decision Trees for Predicting 12-Month University Students Likely to Experience Suicidal Ideation

Environmental stressors combined with a predisposition to experience mental health problems increase the risk of SI (Suicidal Ideation) among college/university students. However, university health and wellbeing services know little about machine learning methods and techniques for identifying, as early as possible, students at higher risk. We developed an algorithm to identify university students with suicidal thoughts and behaviours using features universities already collect. We used data collected in 2020 from the American College Health Association (ACHA), a cross-sectional population-based survey including 50,307 volunteer students. A state-of-the-art parallel Markov Chain Monte Carlo (MCMC) decision tree was used to overcome overfitting problems and to efficiently target classes with fewer representatives. Two models were fitted to the survey data featuring a range of demographic and clinical risk factors measured on the ACHA survey. The first model included variables universities would typically collect from their students (e.g., key demographics, residential status, and key health conditions). The second model included these same variables plus additional suicide-risk variables which universities would not typically measure as standard practice (e.g., students' sense of belonging at university). Model performance was measured using precision, recall, F1 score, and accuracy metrics to identify any potential overfitting of the data.

Efthyvoulos Drousiotis, Dan W. Joyce, Robert C. Dempsey, Alina Haines, Paul G. Spirakis, Lei Shi, Simon Maskell

CNN - Convolutional Neural Networks YOLO CNN

Frontmatter
3D Attention Based YOLO-SWINF for Real-Time Video Object Detection

Video object detection has many applications that require detections in real-time, but these applications are unable to leverage the high accuracy of current SOTA video object detection models due to their high computational requirements. A popular approach to overcome this limitation is to reduce the frame sampling rate, but this comes at the cost of losing important temporal information from the skipped frames. Thus, the most widely used object detection models for real-time applications are image-based single-stage models. There is therefore a need for a model that can capture the temporal information from the other frames in a video to boost detection results while still staying real-time. To this end, we propose a YOLOX-based video object detection model, YOLO-SWINF. In particular, we introduce a 3D-attention based module that uses a three-dimensional window to capture information across the temporal dimension. We integrate this module with the YOLOX backbone to take advantage of the single-stage nature of YOLOX. We extensively test this module on the ImageNet-VID dataset and show an improvement of 3 AP points over the baseline with less than a 1 ms increase in inference time. Our model is comparable to current real-time SOTA models in accuracy while being the fastest. Our YOLO-SWINF-X model achieves 80.4% AP at 38 FPS on an NVIDIA 1080Ti GPU.

Pradeep Moturi, Mukund Khanna, Kunal Singh
Analysis of Data Augmentation Techniques for Mobile Robots Localization by Means of Convolutional Neural Networks

This work presents an evaluation of the use of data augmentation to carry out the rough localization step within a hierarchical localization framework. The method consists of two steps: first, the robot captures an image, which is introduced into a CNN in order to estimate the room where it was captured (rough localization). After that, a holistic descriptor is obtained from the network and compared with the descriptors stored in the model. The most similar image provides the position where the robot captured the image (fine localization). Regarding the rough localization, it is essential that the CNN achieve high accuracy, since an error in this step would imply a considerable localization error. With this aim, several visual effects were separately analyzed in order to assess their impact on the CNN when data augmentation is applied. The results permit designing a data augmentation strategy useful for training a CNN that solves the localization problem in real operating conditions, including changes in lighting conditions.

Orlando José Céspedes, Sergio Cebollada, Juan José Cabrera, Oscar Reinoso, Luis Payá
Intrusion Detection Using Attention-Based CNN-LSTM Model

With the rise of sophisticated cyberattacks and the advent of complex and diverse technological systems, traditional methods of intrusion detection have become insufficient. The inability to prevent intrusions poses a severe threat to the credibility of security services, which may result in the compromise of data confidentiality, integrity, and availability. To address this challenge, research has proposed the use of Artificial Intelligence (AI) and deep learning (DL) models to enhance the effectiveness of intrusion detection. In this study, we present an Intrusion Detection System (IDS) that utilizes attention-based Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models. The attention mechanism of the model allows for the identification of significant features in network traffic data for more precise predictions. Using the benchmark dataset UNSW-NB15, we validate the robustness and effectiveness of our model, achieving a detection rate of over 95%. Our results emphasize the robustness and effectiveness of the proposed system, demonstrating the immense potential of AI and DL models in bolstering intrusion detection.

Ban Al-Omar, Zouheir Trabelsi
Real-Time Arabic Digit Spotting with TinyML-Optimized CNNs on Edge Devices

TinyML is a rapidly evolving field at the intersection of machine learning and embedded systems. This paper describes and evaluates a TinyML-optimized convolutional neural network (CNN) for real-time digit spotting in the Arabic language when executed on three different computational platforms. The proposed system is designed to recognize a set of Arabic digits from a continuous audio stream in real-time, enabling the development of intelligent voice-activated applications on edge devices. Our results show that our TinyML-optimized CNN model can achieve 90%–93% inference accuracy, within 0.06–38 ms, while occupying only 19–139 KB of memory. These results demonstrate the feasibility of deploying a CNN-based Arabic digit spotting system on resource-constrained edge devices. They also provide insights into the trade-offs between performance and resource utilization on different hardware platforms. This work has important implications for the development of intelligent voice-activated applications in the Arabic language on edge devices, which enables new opportunities for real-time speech processing at the edge.

Yasmine Abu Adla, Mazen A. R. Saghir, Mariette Awad
Sleep Disorder Classification Using Convolutional Neural Networks

Sleep disorders can cause many problems, such as mental fatigue, poor concentration, emotional instability, memory loss, reduced work efficiency, and increased accident rates. Among them, obstructive sleep apnea (OSA) is a common sleep disorder characterized by repeated apnea and snoring during the night, affecting sleep quality, daily life, and work. Therefore, predicting obstructive sleep apnea events can help people better identify and treat sleep disorders and improve quality of life and work efficiency. To enhance the performance of predicting obstructive sleep apnea, we use the MIT-BIH polysomnographic database in this article. We used deep learning methods, specifically transfer learning with the AlexNet framework, to predict the outcome of OSA events. The results show that the best optimization algorithm is SGDM. The accuracy rate is 86.63%, the sensitivity is 92.20%, the precision is 90.55%, and the AUC is 91.95%. This study demonstrates the strong potential of using artificial intelligence techniques, specifically deep learning and transfer learning, to predict OSA events from ECG signals, which could provide valuable information for diagnosing and treating sleep disorders.

Chun-Cheng Peng, Chu-Yun Kou
The Effect of Tensor Rank on CNN’s Performance

The goal of this work is to combine existing convolutional layers (CLs) to design a computationally efficient Convolutional Neural Network (CNN) for image classification tasks. The current limitations of CNNs in terms of memory requirements and computational cost have driven the demand for a simplification of their architecture. This work investigates the use of two consecutive CLs with 1-D filters to replace one layer of full-rank 2-D filters. First, we provide the mathematical formalism, derive the properties of the equivalent tensor, and calculate the rank of the tensor's slices in closed form. We apply this architecture with several parameterizations to the well-known AlexNet without transfer learning and experiment with three different image classification tasks, which are compared against the original architecture. Results showed that, for most parameterizations, the achieved reduction in dimensionality, which yields lower computational complexity and cost, maintains equivalent, or even marginally better, classification accuracy.
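The core identity behind replacing a 2-D filter layer with two 1-D layers can be checked numerically: a rank-1 2-D kernel (the outer product of a vertical and a horizontal 1-D filter) applied in one pass equals applying the two 1-D filters in sequence. The sketch below (plain Python, cross-correlation convention, toy data) only illustrates this equivalence, not the paper's full tensor analysis.

```python
def conv2d_valid(img, ker):
    """'Valid' 2-D cross-correlation over nested lists."""
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + a][j + b] * ker[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

v, h = [1, 2], [1, -1]                       # vertical and horizontal 1-D filters
ker2d = [[vi * hj for hj in h] for vi in v]  # their outer product: a rank-1 2-D kernel
img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

direct = conv2d_valid(img, ker2d)                                 # one 2-D pass
two_pass = conv2d_valid(conv2d_valid(img, [h]), [[vi] for vi in v])  # two 1-D passes
print(direct == two_pass)  # True
```

A full-rank 2-D filter cannot be factored this way exactly, which is why the paper's closed-form rank analysis of the equivalent tensor matters: it quantifies what the two-layer 1-D architecture can and cannot represent.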

Eleftheria Vorgiazidou, Konstantinos Delibasis, Ilias Maglogiannis
Tracking-by-Self Detection: A Self-supervised Framework for Multiple Animal Tracking

Animal tracking is a crucial aspect of animal phenotyping, and industries are using computer vision-based methods to enhance their products. In this paper, we adopt the tracking-by-detection approach and propose a self-supervised framework for multiple animal tracking. Self-supervised learning techniques have recently been employed to train models using unlabeled data and have demonstrated improved accuracy on benchmark datasets. Our proposed framework utilizes an EfficientDet detector that was pre-trained with self-supervised learning using a modified Barlow twins method. The detected animals are associated with tracks using our proposed variant of Deepsort, which utilizes appearance information to improve the detection-to-track association. We trained and tested the framework on a customized dataset from a Norwegian pig farm, which consisted of four test and four train sequences, as well as a detection dataset containing 1674 labelled frames and 3000 unlabeled images for self-supervised learning. To evaluate the performance of our framework, we used standard tracking metrics such as HOTA (Higher order tracking accuracy), MOTA (Multiple object tracking accuracy), and IDF1 (Identification metrics). The implementation of our framework is publicly available at https://github.com/DeVcB13d/Animal_tracking_with_ssl.

C. B. Dev Narayan, Fayaz Rahman, Mohib Ullah, Faouzi Alaya Cheikh, Ali Shariq Imran, Christopher Coello, Øyvind Nordbø, G. Santhosh Kumar, Madhu S. Nair
Backmatter
Metadata
Title
Artificial Intelligence Applications and Innovations
Editors
Ilias Maglogiannis
Lazaros Iliadis
John MacIntyre
Manuel Dominguez
Copyright Year
2023
Electronic ISBN
978-3-031-34111-3
Print ISBN
978-3-031-34110-6
DOI
https://doi.org/10.1007/978-3-031-34111-3
