
2024 | Book

Emerging Trends and Applications in Artificial Intelligence

Selected papers from the International Conference on Emerging Trends and Applications in Artificial Intelligence (ICETAI)

Edited by: Fausto Pedro García Márquez, Akhtar Jamil, Alaa Ali Hameed, Isaac Segovia Ramírez

Publisher: Springer Nature Switzerland

Book series: Lecture Notes in Networks and Systems


About this book

The book presents the proceedings of the International Conference on Emerging Trends and Applications in Artificial Intelligence (ICETAI), held at Istanbul Medipol University, Turkey, on 24–25 August 2023. It offers a comprehensive compilation of papers covering the forefront of artificial intelligence, encapsulating state-of-the-art models, innovative methodologies applied to benchmark datasets, and incisive analyses addressing contemporary challenges. Encompassing four pivotal tracks—Artificial Intelligence and Machine Learning, Big Data and Cloud Computing, Internet of Things and Sensor Technology, and Applications of Artificial Intelligence—this volume serves as a vital resource for researchers, scholars, and professionals navigating the multifaceted landscape of AI advancements and their real-world applications across diverse domains.

Table of contents

Frontmatter
Simultaneous Optimization of Ride Comfort and Energy Harvesting Through a Regenerative, Active Suspension System Using Genetic Algorithm

Active suspension systems have long been recognized as an effective means of improving ride comfort and vehicle handling. However, high energy consumption and a lack of economic justification have hindered their commercial adoption in the industry. To address these challenges, this research proposed an innovative control structure that utilizes linear electromagnetic actuators capable of functioning in both motor and generator modes. To implement the proposed method, a suitable vehicle dynamic model available within the Adams software was selected. An analytical model corresponding to the software model was then extracted and verified to ensure its accuracy and reliability for use in genetic algorithm (GA) optimization. Assuming only ride maneuvers, a feedback control structure based on meaningful terms in vehicle dynamics was developed, and the ride comfort and energy harvesting criteria were then simultaneously optimized using the GA. Finally, by exploiting the most suitable set of coefficients in the developed control structure, the suspension system showed the ability to recover up to 650 watts of power on rough roads while delivering a 45% improvement in ride comfort.

Hassan Sayyaadi, Jamal Seddighi
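
The multi-objective trade-off described above can be illustrated with a generic genetic algorithm. The sketch below is not the authors’ vehicle model: the fitness function, gain ranges, and GA parameters are hypothetical stand-ins for the comfort and energy-harvesting criteria, showing only the selection–crossover–mutation loop over a vector of controller gains.

```python
import random

random.seed(0)

def fitness(gains):
    # Hypothetical stand-in for the suspension simulation: penalize
    # deviation of the gains from an assumed comfort optimum, reward
    # "harvested power", and combine the two as a weighted sum.
    comfort_penalty = sum((g - 1.0) ** 2 for g in gains)
    harvested_power = sum(abs(g) for g in gains)
    return 0.8 * comfort_penalty - 0.2 * harvested_power  # lower is better

def evolve(pop_size=30, n_genes=4, generations=40):
    pop = [[random.uniform(-2, 2) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:           # small Gaussian mutation
                i = random.randrange(n_genes)
                child[i] += random.gauss(0, 0.2)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

In the paper, the fitness would instead be evaluated by simulating the verified Adams-derived analytical model rather than this toy function.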
Demystifying Deep Learning Techniques in Knee Implant Identification

Accurate identification of an orthopedic implant before revision surgery is very important and helps both physicians and patients in numerous aspects. The proposed system uses a novel framework to identify five different total knee arthroplasty implants from plain X-ray images using deep learning techniques. Anterior-posterior and lateral images are used together in this study to make identification much more accurate. The proposed system identifies the five knee implants with an accuracy of 86.25% and an Area Under the Curve (AUC) of 0.974.

Shaswat Srivastava, A. Ramanathan, Puthur R. Damodaran, C. Malathy, M. Gayathri, Vineet Batta
Artificial Neural Network Model of Nonlinear Behavior of Micro-ring Gyroscopes

The investigation is concerned with developing a neural network to model a micro-ring gyroscope considering the exact term for the electrical force. To this end, Hamilton’s principle alongside Ritz’s method has been utilized to obtain the nonlinear system of equations governing the dynamics of the micro-ring. The equations are then numerically solved using the fourth-order Runge-Kutta method. It has been observed that using the Taylor series expansion of the electrical force may lead to misleading results in the case of large deformations. Furthermore, gathering a dataset from numerical solutions incurs high computational costs and time, so a fast method is required to obtain a sizable dataset with good accuracy. To this end, the system has been modelled with an artificial neural network. To form the neural network, 1720 examples have been gathered with five input features. It is shown that the proposed neural network can perfectly predict the behavior of the micro-ring gyroscope.

Hassan Sayyaadi, Mohammad Ali Mokhtari Amir Majdi
A Framework for Knowledge Representation Integrated with Dynamic Network Analysis

Understanding the information residing in any system is of crucial importance. Knowledge Graphs are a tool for this purpose, as they hold the semantic interactions across entities and connect them via links in a more representable way. In this paper, we propose a dynamic network analysis framework for understanding the evolution of Knowledge Graphs across timelines. To validate our findings, we performed a thorough analysis of a movie recommendation Knowledge Graph, considering different snapshots of it: past (historical information), present (current snapshot), and future (predictions based on historical data). For the predictions, we employ Graph Neural Network (GNN) modeling. We also compared our recommendation model with the latest related studies and achieved considerable results.

Siraj Munir, Stefano Ferretti, Rauf Ahmed Shams Malick
Time Series Forecasting Using Parallel Randomized Fuzzy Cognitive Maps and Reservoir Computing

Fuzzy Cognitive Maps (FCMs) have been widely employed as nonlinear forecasting methods that are easily interpretable. They have a remarkable capability to enhance accuracy and are well-equipped to handle uncertainty and emulate the dynamics of complex systems. The main goal of this article is to present a new randomized multiple-input multiple-output (MIMO) FCM-based forecasting technique named M-PRFCM to forecast real-world high-dimensional time series in Internet of Things (IoT) applications. M-PRFCM is a first-order forecasting method integrating the concepts of randomized FCMs, Echo State Networks (ESNs), and Kernel Principal Component Analysis (KPCA). The training process of M-PRFCM is accelerated as a result of utilizing the ESN weight initialization trick, which randomly selects weights. The obtained results show the efficacy and validity of the proposed technique in terms of accuracy when compared with other existing approaches.

Omid Orang, Hugo Vinicius Bitencourt, Petrônio Cândido de Lima e Silva, Frederico Gadelha Guimarães
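
The “ESN weight initialization trick” mentioned above can be sketched in a few lines: the input and reservoir weight matrices are drawn at random once and never trained; only a readout layer (omitted here) would be fitted. All sizes and weight scalings below are illustrative assumptions, not M-PRFCM’s actual configuration.

```python
import math
import random

random.seed(1)

N_RES, N_IN = 20, 3   # reservoir and input sizes (illustrative)

# Random, fixed weights -- the ESN trick: these are never trained.
W_in = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_RES)]
W_res = [[random.uniform(-0.1, 0.1) for _ in range(N_RES)] for _ in range(N_RES)]

def step(state, u):
    """One reservoir update: x' = tanh(W_in u + W_res x)."""
    new = []
    for i in range(N_RES):
        s = sum(W_in[i][j] * u[j] for j in range(N_IN))
        s += sum(W_res[i][j] * state[j] for j in range(N_RES))
        new.append(math.tanh(s))
    return new

# Drive the reservoir with a toy input sequence; the resulting states
# would serve as features for a cheap linear readout.
state = [0.0] * N_RES
for t in range(50):
    u = [math.sin(0.1 * t), math.cos(0.2 * t), 1.0]
    state = step(state, u)
```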
Review of Offensive Language Detection on Social Media: Current Trends and Opportunities

Offensive language is defined as derogatory or obscene language that takes various forms, such as hate speech or cyberbullying. Automated detection of offensive language is gaining traction due to the high and growing scale of social media user input. In this paper, we provide an overview of the field, including background and recent research, with a focus on natural language processing. We present a synopsis of the ambiguity in the definition and categorization of offensive language, application areas of an automated system, shared tasks organized in this field, dataset creation, and the evolution of models over time through machine learning and deep learning algorithms. Finally, challenges and gaps in research are discussed.

Lütfiye Seda Mut Altın, Horacio Saggion
Text Mining and Sentimental Analysis to Distinguish Systems Thinkers at Various Levels: A Case Study of COVID-19

Limited research exists on how experts’ Systems Thinking (ST) skills can be linked to their tweets and sentiments. This study employs text mining and social media analysis to explore the relationship between experts’ ST and their tweets, specifically focusing on COVID-19. Twitter is crucial for information dissemination, but misinformation can spread during a pandemic like COVID-19. By analyzing the emotional and sentimental aspects of tweets from 55 COVID-19 experts, we identified three distinct clusters with significant differences in emotions and sentiments. This study introduces a novel framework using NLP, text mining, and sentiment analysis to assess the systems thinking skills of COVID-19 experts.

Mohammad Nagahisarchoghaei, Morteza Nagahi, Harun Pirim
ADHD Prediction in Children Through Machine Learning Algorithms

Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder that affects approximately 5% of children worldwide. It is typically diagnosed based on the presence of inattentive and hyperactive symptoms. Our objective is to identify ADHD from a Machine Learning (ML) perspective, utilizing symptom information and features such as socioeconomic status, social behavior, academic competence, and quality of life. We conducted extensive experiments using the CAP dataset and various machine learning algorithms, including logistic regression, k-nearest neighbors, Support Vector Machines (SVMs), Random Forest, XGBoost, and an Artificial Neural Network (ANN). The ANN model demonstrated the highest accuracy, achieving an AUC metric of 0.99. As a result, we conclude that using ML algorithms to predict ADHD provides a better understanding of the etiological factors associated with the disorder and has the potential to form the basis for a more precise diagnostic approach. The code is available at: GitHub Repository.

Daniela Andrea Ruiz Lopez, Harun Pirim, David Grewell
Commonsense Validation and Explanation for Arabic Sentences

Commonsense understanding poses a significant challenge, especially in complex languages like Arabic. However, recent advancements in deep learning have facilitated improvements in various language tasks, including the ability to distinguish commonsense in sentences. This research focuses on participating in the SemEval 2020 Task 4 (ComVE) competition by developing classification and text generation models tailored for the Arabic language. The competition comprises three subtasks: Subtask A involves choosing the sentence that makes sense between two given sentences, Subtask B requires selecting the most appropriate reason from multiple choices for a sentence that goes against common sense, and Subtask C entails generating an explanation and reason for a sentence violating common sense. Our models leverage a set of multilingual pre-trained transformer models and have achieved remarkable performance in the competition. In Subtask A, our accuracy reached 84.7%, surpassing the performance of other works in Arabic. Similarly, in Subtask B, our approach outperformed other multilingual approaches, achieving a score of 79.3% compared to the state-of-the-art BERT model’s 61%. In Subtask C, our model generated explanations with a BLEU score of 24, which is considered acceptable in the domain of text generation, particularly in the context of Arabic.

Farah Alshanik, Ibrahim Al-Sharif, Mohammad W. Abdullah
Predicting Students Answers Using Data Science: An Experimental Study with Machine Learning

In today’s data-driven world, the abundance of information provides us with opportunities to explore the relationships between various data points, leading to progress in multiple domains. For instance, in the field of education, we can leverage students’ past course performance and academic records to offer tailored guidance, allowing them to concentrate their efforts on specific areas for academic growth. By employing machine learning techniques, we can analyze data relations and predict future events based on historical data. In this study, we applied machine learning techniques to the educational dataset from NeurIPS 2020, aiming to improve the prediction of upcoming student performance by adding valuable features. To accomplish this, we explored several classification algorithms, including SVM, Naive Bayes, Logistic Regression, and Decision Tree, and additionally considered ensemble methods such as Boosting, Bagging, and Voting. By assessing the optimal hyperparameter values for these algorithms, we aimed to optimize their performance. Our findings revealed that augmenting the dataset with more correlated features significantly improved prediction accuracy. Among the classifiers examined, Decision Tree, XGBoost, and Voting exhibited the best performance, achieving an accuracy rate of 74%.

Malak Abdullah, Naba Bani Yaseen, Mohammad Makahleh
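
The Voting ensemble mentioned above can be illustrated with a minimal hard-voting sketch: each base classifier predicts a label, and the majority label wins. The three threshold “classifiers” and the feature names below are hypothetical, not taken from the NeurIPS 2020 dataset.

```python
from collections import Counter

# Three hypothetical base classifiers, each a simple decision rule on a
# feature vector (past_score, attempts, time_spent_minutes).
def clf_score(x):    return 1 if x[0] >= 0.5 else 0
def clf_attempts(x): return 1 if x[1] <= 2 else 0
def clf_time(x):     return 1 if x[2] >= 30 else 0

def hard_vote(x, classifiers=(clf_score, clf_attempts, clf_time)):
    """Majority vote over the base classifiers' predicted labels."""
    votes = [c(x) for c in classifiers]
    return Counter(votes).most_common(1)[0][0]

# A student with a strong past score and few attempts: two of three
# classifiers vote 1, so the ensemble predicts a correct answer.
pred = hard_vote((0.8, 1, 10))
```

Soft voting would instead average predicted probabilities; the loop structure is the same.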
Arabic News Articles Classification Using Different Word Embeddings

With the accelerated growth of the internet, vast repositories of unstructured textual data have emerged, necessitating automated categorization algorithms for organization and insight extraction. The Arabic language, however, poses particular challenges due to its inflected nature, large vocabulary, and varying forms. This study targets the development of robust automated classification systems for Arabic text, a language increasingly adopted online. In this paper, we propose a comparison of four prevalent pre-trained word embeddings: Word2Vec (represented by Aravec), GloVe, FastText, and BERT (represented by ARBERTv2), using the widely-adopted SANAD dataset of Arabic news articles. We provide a comprehensive comparison by applying a fixed deep learning architecture across all four word embeddings to ensure fairness. The motivation behind this comparison is to bridge the knowledge gap observed in the usage of popular word embeddings for Arabic news classification. Despite the state-of-the-art results from transformer models, a significant inclination towards older methodologies still persists. Hence, we aim to highlight the efficiencies of modern techniques. Results indicate that ARBERTv2 outperforms the other embeddings, achieving 95.81%, 98.68%, and 99.30% accuracy on the Akhbarona, Alkhaleej, and Alarabiya subsets of SANAD, respectively. Despite its large number of parameters, ARBERT’s context-based word embeddings seem to offer superior performance. FastText stood out as the top performer among non-contextualized word embeddings due to its ability to capture morphological similarities and handle out-of-vocabulary words. Following closely behind was GloVe, and then came Aravec.

M. Moneb Khaled, Muhammad Al-Barham, Osama Ahmad Alomari, Ashraf Elnagar
Tree Fruit Load Calculation with Image Processing Techniques

Turkey holds a significant position in global olive production, with olives being a crucial component of its agricultural industry. The fruit load on trees directly correlates with olive tree yield, which in turn determines productivity. The Tabit Smart Agriculture R&D Center, located in the Koçarlı district of Aydın within Turkey’s Aegean region, conducted a study using the YOLOv3 Convolutional Neural Network model to estimate olive tree loads. The primary aim of this research was to offer a more precise and objective perspective on olive harvesting, moving away from subjective assumptions based on predictions. Olive trees, playing a significant role in Turkey’s agricultural output, are cultivated across various regions in the country. However, olive sales in Turkey still rely on approximations. To tackle this, image processing techniques were employed to introduce a more technological and practical approach to agricultural applications, particularly in estimating olive tree loads. Throughout the study, real-time datasets were generated by capturing images of olive trees at the Tabit Smart Agriculture R&D Center in Aydın. The focus was on accurately detecting and counting olives. After 6000 iterations, the obtained results were as follows: mAP 61%, Precision 70%, Recall 45%. The results of the study proved that the agricultural industry can actively shape the trajectory of future farming practices by adeptly embracing image processing techniques and deep learning models.

Merve Aral, Nada Misk, Gökhan Silahtaroğlu
Prediction and Analysis of Water Quality Using Machine Learning Techniques

Water is an essential resource for human beings and all other living organisms in daily life. Water contamination by pollutants has noxious effects and has become a new route for introducing disease to people. Machine learning therefore plays a vital role in predicting whether water is in excellent or imperfect condition. Machine learning algorithms are used to predict the water quality, and statistical accuracy parameters are calculated for each algorithm. SVM, KNN, Decision Tree classifiers, XGBoost, AdaBoost, and feed-forward neural networks are used to predict water conditions. Compared to conventional machine learning algorithms, the feed-forward neural network predicts the water quality with 98% accuracy. Precision and further statistical parameters are also measured and compared alongside the accuracy values.

Reshmy Krishnan, A. Stephen Sagayaraj, S. Elango, R. Kaviya Nachiyar, T. Indhuja, J. Kanishma, A. Mohamed Uvaise, G. Kalaiarasi
Comparative Analysis of Feature Selection Techniques with Metaheuristic Grasshopper Optimization Algorithm

Feature selection (FS) is a critical step used to identify the most relevant and informative features in a given dataset. It plays a crucial role in dimensionality reduction, improving model performance, enhancing model interpretability, and reducing computational complexity. In this study, we conducted a comparative analysis of feature selection approaches: three traditional approaches, namely Recursive Feature Elimination (RFE), Mutual Information, and K-Best, and one metaheuristic approach, the Grasshopper Optimization Algorithm (GOA). To evaluate their performance, we applied them to classification and regression problems on three distinct datasets: Zillow Home Value Prediction, Breast Cancer Wisconsin, and Adult Income. The results of this study indicate that the performance of feature selection methods varies depending on the dataset, and we observed varying levels of effectiveness across the datasets used. For the Zillow dataset, the Grasshopper Optimization Algorithm yielded the best performance, with a Mean Absolute Error (MAE) of 6.666. Similarly, for the Breast Cancer dataset, the Grasshopper Optimization Algorithm again emerged as the top-performing method, achieving an accuracy of 99.122%, while for the Adult Income dataset, Mutual Information exhibited the best performance, achieving an accuracy of 86.150%. These findings highlight the importance of considering the complexities and characteristics of the dataset when choosing a feature selection method; optimal feature selection and subsequent model performance may therefore require trying different approaches.

Qanita Bani Baker, Moayyad F. Alajlouni
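
A filter-style method such as K-Best can be sketched in a few lines: score each feature independently against the target and keep the top k. The sketch below uses absolute Pearson correlation as the scoring function (an assumption; Mutual Information or another score would slot in the same way) on a toy dataset.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

def select_k_best(X, y, k):
    """Rank feature columns by |correlation with y| and keep the top k."""
    n_feat = len(X[0])
    scores = []
    for j in range(n_feat):
        col = [row[j] for row in X]
        scores.append((abs(pearson(col, y)), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]

# Toy data: feature 0 tracks the target exactly, feature 1 is noise-like.
X = [[1, 7], [2, 3], [3, 9], [4, 1], [5, 5]]
y = [2, 4, 6, 8, 10]
kept = select_k_best(X, y, k=1)   # keeps feature index 0
```

Wrapper methods such as GOA would instead search over feature subsets and score each subset with a trained model, which is costlier but can capture feature interactions.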
Supermarket Shopping with the Help of Deep Learning

This study presents the development of an innovative system designed to assist customers, especially the elderly and people with disabilities, in their shopping experience. The proposed solution employs deep learning for product identification and obstacle detection, in combination with a smartphone app that serves as the user interface. The solution offers functionalities such as product selection, shortest-path calculation, automatic shopping list fulfillment, and obstacle detection. The presented solution is part of a greater system that also consists of a self-propelled cart with indoor localization capabilities and a supermarket cloud platform.

Ioannis Symeonidis, Panagiotis Chatzigeorgiou, Christos Antonopoulos, Ignatios Fotiou, Mary Panou
A Decision Support System for Detecting FIP Disease in Cats Based on Machine Learning Methods

Cats are close friends who live with us in all aspects of life, and many diseases endanger the quality of life of the cats that live with us. One of the most dangerous is feline infectious peritonitis (FIP), a coronavirus-caused disease that affects a cat’s overall metabolism. There is no specific treatment for FIP, and existing drugs are difficult to find and very expensive; therefore, early detection is very important. The most important thing for early detection is to know the body changes caused by the disease, i.e., the symptoms, in order to take appropriate measures. By collecting and interpreting information such as the combination of symptoms, the ages at which the disease is most common, and the breeds most often affected, cat owners can take precautions even when they cannot be alert. Therefore, in this study, an early detection method for FIP disease in cats is introduced by making predictions with the Naive Bayes algorithm. The dataset includes the 300 FIP symptom records used by Jones et al. [11], together with data obtained from the Ümraniye Vita Veterinary Clinic on 150 cats that did not have FIP but visited the clinic for other diseases. This dataset is resampled using the SMOTE algorithm to enlarge it. A naive Bayes model is then built in Google Colab using the Python programming language, and it is shown that the model can predict FIP disease with 96% accuracy.

Ozge Doguc, Sevval Beyhan Bilgi, Seval Cagdas, Nevin Yilmazturk
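
The Naive Bayes step can be sketched as a tiny Bernoulli classifier over binary symptom vectors with Laplace smoothing. The symptoms, labels, and records below are invented for illustration only and are not the clinic’s data.

```python
import math
from collections import defaultdict

# Toy binary symptom vectors (fever, fluid_in_abdomen, weight_loss) with a
# hypothetical FIP / healthy label -- illustrative, not the study's dataset.
data = [
    ([1, 1, 1], "FIP"), ([1, 1, 0], "FIP"), ([0, 1, 1], "FIP"),
    ([0, 0, 0], "healthy"), ([1, 0, 0], "healthy"), ([0, 0, 1], "healthy"),
]

def train(rows):
    """Count class frequencies and per-class symptom occurrences."""
    counts, feat = defaultdict(int), defaultdict(lambda: defaultdict(int))
    for x, label in rows:
        counts[label] += 1
        for j, v in enumerate(x):
            feat[label][j] += v
    return counts, feat

def predict(x, counts, feat):
    """Pick the class maximizing the log-posterior under independence."""
    total = sum(counts.values())
    best, best_lp = None, -math.inf
    for label, n in counts.items():
        lp = math.log(n / total)
        for j, v in enumerate(x):
            p1 = (feat[label][j] + 1) / (n + 2)   # Laplace smoothing
            lp += math.log(p1 if v else 1 - p1)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

counts, feat = train(data)
pred = predict([1, 1, 1], counts, feat)
```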
A Numerical Simulation for the Ankle Foot Orthosis Using the Finite Element Technique with the Aid of an Experimental Program

An ankle-foot orthosis (AFO) is a device that supports the ankle and foot when there is muscle weakness or nerve damage; AFOs are prescribed to individuals with minimal spinal cord injury and excellent trunk muscle control. In this paper, two types of composite laminates were used in the experimental program. The first layer sequence, (2 Perlon + 1 Carbon fiber + 2 Perlon + 1 Kevlar + 2 Perlon), is called Sequence 1, and the other, (2 Perlon + 3 Carbon fiber + 2 Perlon + 3 Kevlar + 2 Perlon), is called Sequence 2. In the numerical investigation, the performance of the AFO materials is evaluated using mechanical qualities such as fatigue and tensile testing. This study uses the finite element method (FEM) as a numerical technique to demonstrate the impact of fatigue performance on a structural element with the assistance of the ANSYS Workbench 14 software, which is used to predict the total deformation, maximum stress, fatigue life, and safety factor. The experimental results showed that the ultimate tensile stress was 67 MPa and 80 MPa for the first and second types of layers, respectively. The patient was 176 cm tall, weighed 78 kg, was approximately 39 years old, and suffered from drop foot. Using FEM (ANSYS, Von Mises), the equivalent stress and fatigue safety factor were calculated for the provided AFO model. The ANSYS findings show that the fatigue safety factors are 2.86211 for the Sequence 1 composite AFO, 3.68318 for the Sequence 2 composite AFO, and 1.90683 for the polypropylene AFO. The margin between the composite materials’ yield stresses and the highest stresses produced in the orthosis indicates that composite materials can support the patient’s weight and serve as an alternative to the materials currently employed to make the AFO: the highest stresses of the PP and Sequence 1 composite AFOs are 18.033 MPa, and of the Sequence 2 composite AFO 17.583 MPa, while the yield strengths are 50 MPa for Sequence 1 and 63 MPa for Sequence 2, compared with polypropylene’s yield stress of 24.3 MPa.

Maryam I. Abduljaleel, Muhsin J. Jweeg, Ahmed K. Hassan
Numerical and Experimental Simulations of Damage Identification in Carbon/Kevlar Hybrid Fiber-Reinforced Polymer Plates Using the Free Vibration Measurements

Damage identification is an essential program in the structural health monitoring of mechanical, civil, and aeronautical structures. Vibration measurement is one of the techniques used in this respect, on the basis that any defect reduces stiffness and therefore correspondingly reduces the natural frequency. In this work, a rectangular composite cantilever plate was designed and analysed using the ANSYS package alongside an experimental program. Vibration sensors were used to detect the effect of crack length in carbon/Kevlar hybrid fiber-reinforced polymer composites, and a numerical analysis was performed with the ANSYS software to examine different crack scenarios. Finite element analysis (FEA) was applied to determine the most significant notch length in the carbon/Kevlar hybrid fiber-reinforced polymer composite structure, with the investigation performed at several different lengths and locations, including vertical and horizontal incisions. The experimental results agreed with those obtained numerically, with a discrepancy of no more than 5%, which shows that the inexpensive numerical solution is adequate for predicting the failure mode of the structure and can be used for the health monitoring of composite structures.

Dhia A. Alazawi, Muhsin J. Jweeg, Mohammed J. Abbas
Computer Modelling of the Gait Cycle Patterns for a Drop Foot Patient for the Composite a Polypropylene Ankle-Foot Orthoses

An ankle-foot orthosis (AFO) is a device that supports the ankle and foot when there is muscle weakness or nerve damage; AFOs are prescribed to individuals with minimal spinal cord injury and excellent trunk muscle control. In this work, the design and manufacturing of an AFO was carried out experimentally to obtain the gait cycle shape using the suggested types of composite layering. The experimental program includes the manufacture of two types of AFO from Carbon, Perlon, Kevlar, and Kenaf fibers. Two sequences are used in this work, based on the results of the examinations of the samples: in Sequence 1 the layer order is (2 Perlon + 1 Carbon fiber + 2 Perlon + 1 Kevlar + 2 Perlon), and in Sequence 2 it is (2 Perlon + 3 Carbon fiber + 2 Perlon + 3 Kevlar + 2 Perlon). The patient’s gait cycle data, including the pressure distribution measured with an F-socket and the Ground Reaction Force (GRF) measured with a force plate, were collected. The case study was a patient suffering from drop foot, 176 cm tall, weighing 78 kg, and approximately 39 years old. All the gait cycle readings differed when the patient was not wearing the AFO because of the foot drop caused by severe nerve damage, so the readings of the right and left foot differed; the difference decreased when the patient wore the AFO, because the AFO helped the patient to walk and the readings therefore converged toward those of a normal person. Finally, the findings on patient balance and gait cycle show that the composite-material AFO is superior to the PP one.

Maryam I. Abduljaleel, Muhsin J. Jweeg, Ahmed K. Hassan
Arabic Sign Language Alphabet Classification via Transfer Learning

The integration of artificial intelligence (AI) has addressed the challenges associated with communication with the deaf community, which requires proficiency in various sign languages. This research paper presents the RGB Arabic Alphabet Sign Language (ArASL) dataset, the first publicly available high-quality RGB dataset. The dataset consists of 7,856 meticulously labeled RGB images representing the Arabic sign language alphabets. Its primary objective is to facilitate the development of practical Arabic sign language classification models. The dataset was carefully compiled with the participation of over 200 individuals, considering factors such as lighting conditions, backgrounds, image orientations, sizes, and resolutions. Domain experts ensured the dataset’s reliability through rigorous validation and filtering. Four models were trained using the ArASL dataset, with ResNet18 achieving the highest accuracy of 96.77%. The accessibility of ArASL on Kaggle encourages its use by researchers and practitioners, making it a valuable resource in the field (https://www.kaggle.com/datasets/muhammadalbrham/rgb-arabic-alphabets-sign-language-dataset).

Muhammad Al-Barham, Osama Ahmad Alomari, Ashraf Elnagar
Evaluation of Chemical Data by Clustering Techniques

Applying mathematical techniques to extract more useful information from chemical data obtained by different methods can be defined as chemometrics. The development of computer-equipped instruments makes it possible to obtain large amounts of data in the field of chemistry, and statistical methods and data mining principles are needed for the processing and evaluation of these data. Chemometrics is, briefly, the investigation of how to perform meaningful calculations on data. In most cases, these calculations are too complex to be performed manually, and many different techniques are used in these processes, such as classification, clustering, data summarization, learning classification rules, finding dependency networks, variability analysis, and anomaly detection. In data mining, classification and curve fitting are defined as predictive methods, while methods such as clustering and association analysis are described as descriptive. Classification is the examination of the attributes of data and the assignment of these data to a predefined class; the important point here is that the properties of each class are determined in advance. Clustering is the grouping of data according to their proximity or distance to each other; there are no pre-defined group boundaries, but the process can be guided by specifying the number of groups. In this context, this study aimed to group the measurement data obtained from analyses of raw wastewater samples from wastewater treatment plants using the clustering method, in order to determine which cluster newly measured data belong to and to estimate the associated BOD5 value without experimental measurement.

Gonca Ertürk, Oğuz Akpolat
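
The clustering step described above can be sketched with plain k-means on scalar readings: assign each measurement to its nearest center, recompute the centers as member means, and repeat. The readings and the choice of k=2 below are toy assumptions, not the treatment plant’s data.

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain k-means on scalar measurements (e.g. BOD5-related readings).

    Centers are seeded with the min and max values (so this sketch
    assumes k=2); each iteration reassigns points and updates centers.
    """
    centers = [min(values), max(values)][:k]
    assign = [0] * len(values)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        for c in range(k):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, assign

# Toy wastewater readings forming two obvious groups.
readings = [2.1, 2.4, 2.0, 9.8, 10.3, 9.9]
centers, labels = kmeans_1d(readings, k=2)
```

A new measurement would then be assigned to its nearest center, and the cluster’s mean (or a per-cluster model) would supply the BOD5 estimate.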
Novel Quantum Key Distribution Method Based on Blockchain Technology

Quantum key distribution (QKD) methods, one of the working subjects in the field of quantum technologies, are among the solutions with great potential that will be used both today and in the post-quantum era, because they are among the unique solutions that come to the fore in both true random number generation and secure key distribution. In addition, quantum attack methods are being studied against the security of quantum key distribution techniques, and there are different proposals against man-in-the-middle attacks, one of the important attack methods. In this study, a two-stage protocol that employs blockchain technology without affecting the quantum mechanics is proposed as a defense against attacks such as man-in-the-middle on QKD systems. In the first phase of the protocol, a basic identification capability between quantum devices (parties) is established using the Hyperledger Fabric permissioned blockchain protocol. In the second stage, the standard QKD reconciliation process for BB84 is performed. In this study, blockchain technology has been added to the quantum key distribution process in an optimal way, bringing identification of the parties to quantum systems and offering a viable solution against man-in-the-middle attacks.

Faruk Takaoğlu, Mustafa Takaoğlu, Taner Dursun, Tolga Bağcı
Smart Parking System Based on Dynamic and Optimal Resource Allocation

Smart parking uses the Internet of Things (IoT) to collect and control data to efficiently manage parking spaces. Smart Parking Systems (SPS) in smart cities can reduce traffic congestion, the time spent searching for available parking spots, and CO2 emissions, among other benefits, but efficient allocation of resources (parking spots) remains a challenge. In this article, we present an allocation approach using genetic algorithms (GA) with a multi-criteria objective function. We optimize four criteria, namely distance, parking space utilization, user preferences, and travel time, while considering a set of parking lots in a smart city. As conceptual support for our approach, we propose a meta-model of concepts for smart parking management in an IoT context, as well as a four-layer SPS architecture for a smart city. Our goal is to maximize parking space utilization, minimize travel time and distance, and satisfy driver preferences. We conducted a set of simulations that highlight significant improvements in the overall optimization of parking spot allocation. The results showed that the GA satisfied driver preferences while minimizing the distance and travel time to reach a parking spot and maximizing parking spot utilization. Furthermore, comparisons with results obtained from the FCFS algorithm showed the effectiveness of the GA.
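The heart of such an approach is the multi-criteria objective. The sketch below scores a candidate spot for a driver with a weighted combination of the four criteria named above; the weights, field names, and the greedy selection (standing in for the GA's evolutionary search) are all assumptions for illustration.

```python
# Hedged sketch of a multi-criteria parking objective: penalize distance and
# travel time, reward lot utilization and preference match. Weights and the
# spot/driver fields are illustrative assumptions, not the paper's model.
def fitness(spot, driver, w_dist=0.3, w_time=0.3, w_util=0.2, w_pref=0.2):
    """Higher is better."""
    return (-w_dist * spot["distance_m"]
            - w_time * spot["travel_time_s"]
            + w_util * spot["lot_utilization"] * 100
            + w_pref * (100 if spot["covered"] == driver["prefers_covered"] else 0))

def best_spot(spots, driver):
    """Greedy stand-in for the GA's selection step: pick the fittest spot."""
    return max(spots, key=lambda s: fitness(s, driver))

driver = {"prefers_covered": True}
spots = [
    {"distance_m": 200, "travel_time_s": 60, "lot_utilization": 0.4, "covered": False},
    {"distance_m": 250, "travel_time_s": 70, "lot_utilization": 0.9, "covered": True},
]
print(best_spot(spots, driver)["covered"])
```

A real GA would evolve whole driver-to-spot assignments under this kind of fitness rather than scoring one driver at a time, which is what allows it to trade criteria off across the whole city.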

Khadidja Tair, Lylia Benmessaoud, Saida Boukhedouma
Marine Predatory Algorithm for Feature Selection in Speech Emotion Recognition

In recent times, the recognition of human emotional states expressed through speech has garnered significant interest among researchers in human-computer interaction. Various systems have been developed to categorize speech emotion states using features extracted from spoken utterances. Feature extraction plays a vital role in speech emotion recognition systems, as the performance of the learning model improves when the extracted features are reliable and capture the emotional characteristics of speech samples. However, some of the extracted features may be redundant, irrelevant, or noisy, which can diminish the classification performance of speech emotion recognizers. To address this issue and select efficient and precise speech emotion features, this paper introduces an effective feature selection method called MPA-KNN, which combines the Marine Predators Algorithm with a KNN classifier. The proposed method's performance is evaluated on three distinct speech databases: the Surrey Audio-Visual Expressed Emotion database (SAVEE), the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), and the Arabic Emirati-accented speech database. The results showed that MPA-KNN improved the accuracy of speech emotion recognition compared to classical machine learning and optimization-based feature selection methods. Moreover, it outperforms several approaches in the literature that use the same datasets.
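The wrapper idea behind MPA-KNN can be sketched compactly: each candidate is a binary feature mask, scored by the accuracy of a KNN classifier restricted to the selected features. Below, random search stands in for the Marine Predators Algorithm, and the toy data is invented; only the wrapper evaluation pattern is what the abstract describes.

```python
# Wrapper-style feature selection sketch: score a binary feature mask by
# 1-NN leave-one-out accuracy. Random search replaces the Marine Predators
# Algorithm for brevity; data is a toy example.
import math, random

def loo_1nn_accuracy(X, y, mask):
    feats = [i for i, keep in enumerate(mask) if keep]
    if not feats:
        return 0.0
    correct = 0
    for i in range(len(X)):
        j = min((k for k in range(len(X)) if k != i),
                key=lambda k: math.dist([X[i][f] for f in feats],
                                        [X[k][f] for f in feats]))
        correct += (y[j] == y[i])
    return correct / len(X)

def random_search(X, y, n_features, iters=50, seed=0):
    rng = random.Random(seed)
    return max(([rng.randint(0, 1) for _ in range(n_features)] for _ in range(iters)),
               key=lambda m: loo_1nn_accuracy(X, y, m))

# Toy data: feature 0 separates the classes; feature 1 is pure noise.
X = [(0.0, 5.0), (0.1, -3.0), (1.0, 4.0), (1.1, -2.0)]
y = [0, 0, 1, 1]
mask = random_search(X, y, n_features=2)
print(mask, loo_1nn_accuracy(X, y, mask))
```

The search should discover that dropping the noise feature yields perfect leave-one-out accuracy, which is exactly the "redundant, irrelevant, or noisy features" problem the paper targets.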

Osama Ahmad Alomari, Muhammad Al-Barham, Ashraf Elnagar
Machine Learning Algorithms are Used for Fake Review Detection

An internet business's revenue can be significantly impacted by user reviews, since online users read reviews before choosing products or services. A company's profitability and reputation are therefore directly affected by the reliability of internet reviews, which leads some businesses to pay spammers to post false reviews. These false reviews mislead customers' purchase decisions and distort ratings. This study examines the feature extraction approaches currently in use and applies sentiment analysis (SA), currently the area of text analysis attracting the greatest interest. One of the main open issues in SA is how to differentiate between negative, neutral, and positive opinion reviews. In this article, we contrast supervised and unsupervised machine learning in particular. Our research demonstrates that supervised machine learning is more accurate and effective than unsupervised learning for fake review detection.

Wesam Hameed Asaad, Ragheed Allami, Yossra Hussain Ali
Development and Research of Models for Optimization Information Flow in Interactive Analysis Big Data in Geographic Information Systems

Many application areas of geographic information systems (GIS) involve the continuous accumulation of data. The need to record the state of an observed object or phenomenon generates an intense flow of heterogeneous data over long time intervals. The storage period of the received information is in many cases unbounded, limited solely by the technical capabilities of archiving. For this reason, GIS databases fit modern notions of big data: they have a significant and constantly growing volume, are heterogeneous in presentation formats, and have a value determined by a reliable representation of the objects, phenomena, and events of the real world. A new model is proposed that represents the analysis workspace by two informational components, a skeleton and an environment. The skeleton is formed by deterministic queries to the GIS database, while the environment is built by intelligent procedures as a shell around the skeleton. The distinguishing feature of the proposed concept is the use of a utility function whose parameters are the complexity of the skeleton and of the environment. Applying the proposed concept allows optimizing the quality of decision-making by maximizing the utility function of the workspace.

Ali Abdulkarem Habib Alrammahi, Farah Abbas Obaid Sari, Bushra Kamil Hilal
Comparison of Text Summarization Methods in Turkish Texts

In this study, focusing on extractive automatic text summarization, popular text summarization algorithms commonly used in other languages were compared. Because Turkish is a suffix-based (agglutinative) language, these algorithms may not be as effective on it as on extensively studied languages such as English and Chinese. Accordingly, the most commonly used extractive text summarization approaches were investigated, and some of them were tested and compared on Turkish texts. Five summaries were generated using the TextRank, LexRank, and Luhn algorithms together with two word-frequency-based summarization algorithms that we developed, based on a dataset of 130 news texts summarized by three individuals. Similarity was calculated with the ROUGE metric by comparing the output summaries with the reference summaries. The comparison showed that the algorithm based on sentence selection using the frequency of word stems achieved the highest similarity value. The study's outcome is the identification of the most suitable automatic summarization algorithm for Turkish. In this context, conclusions can be drawn about the applicability of the various methods, the potential for achieving better results from specific angles, and the aspects requiring reinforcement, with the aim of facilitating proficient Turkish-specific summarization.
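The word-frequency family of summarizers compared above follows a simple recipe: score each sentence by the summed frequency of its words and keep the top-scoring ones. The sketch below is a minimal English illustration of that recipe; a Turkish system would stem words first, which is precisely why stem frequency mattered in the study.

```python
# Minimal frequency-based extractive summarizer sketch: rank sentences by the
# summed corpus frequency of their words and return the top-k sentences.
from collections import Counter

def summarize(sentences, k=1):
    norm = lambda w: w.lower().strip(".,")
    freq = Counter(norm(w) for s in sentences for w in s.split())
    scored = sorted(sentences,
                    key=lambda s: sum(freq[norm(w)] for w in s.split()),
                    reverse=True)
    return scored[:k]

doc = [
    "Cats sleep all day.",
    "Cats chase mice and cats purr.",
    "The weather was cold.",
]
print(summarize(doc, k=1))
```

Real systems add stop-word removal and length normalization so that long sentences and function words do not dominate the scores.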

Semih Marangoz, Ahmet Sayar
Formation of a Speech Database in the Karakalpak Language for Speech Synthesis Systems

The article deals with speech synthesis, the conversion of arbitrary text in a natural language into spoken form in that language, an issue that has interested mankind since ancient times. The paper describes mechanical, electrical, articulatory, formant, linear predictive, concatenative, unit-selection, and statistical parametric synthesis methods, and explains that a high-quality speech database is essential for creating speech synthesis systems. While such databases have been created for many languages, no speech database exists for converting text in the Karakalpak language into speech, so creating one is an urgent task. Furthermore, the article describes the problems related to creating a speech database in the Karakalpak language and their solutions.

N. S. Mamatov, K. M. Jalelov, B. N. Samijonov, A. N. Samijonov, A. D. Madaminjonov
Approaches to Solving Problems of Markov Modeling Training in Speech Recognition

The article discusses approaches to solving the training problems of Markov modeling in speech recognition. A Markov process is a stochastic process consisting of a sequence of random states, where the probability of transition from one state to another depends only on the current state and not on previous states. Observing such a process yields the sequence of states the system passes through during the observation period. Model training is considered the most difficult task when using Markov models in recognition systems, since no unique, universal way to solve it is known and the quality of recognition depends on the training result. Therefore, special attention must be paid to training the model.
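The transition probabilities that define such a model can, in the fully observed case, be estimated by simple normalized bigram counts over a state sequence, as sketched below with an invented sequence. (The hard training problem the article refers to arises when the states are hidden and iterative procedures such as Baum-Welch are needed; that is beyond this sketch.)

```python
# Sketch of the counting step behind Markov model training: estimate
# transition probabilities from an observed state sequence.
from collections import Counter, defaultdict

def transition_probs(states):
    counts = defaultdict(Counter)
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

seq = ["sil", "a", "a", "b", "a", "b", "b"]
P = transition_probs(seq)
print(P["a"])  # transition distribution out of state "a"
```

Each row of `P` sums to one, matching the definition above: the next state depends only on the current one.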

D. T. Muxamediyeva, N. A. Niyozmatova, R. A. Sobirov, B. N. Samijonov, E. Kh. Khamidov
Spatio-Angular Resolution Trade-Off in Face Recognition

Ensuring robustness in face recognition systems across various challenging conditions is crucial for their versatility. State-of-the-art methods often incorporate additional information, such as depth, thermal, or angular data, to enhance performance. However, light field-based face recognition approaches that leverage angular information face computational limitations. This paper investigates the fundamental trade-off between spatio-angular resolution in light field representation to achieve improved face recognition performance. By utilizing macro-pixels with varying angular resolutions while maintaining the overall image size, we aim to quantify the impact of angular information at the expense of spatial resolution, while considering computational constraints. Our experimental results demonstrate a notable performance improvement in face recognition systems by increasing the angular resolution, up to a certain extent, at the cost of spatial resolution.

Muhammad Zeshan Alam, Sousso Kelowani, Mohamed Elsaeidy
Bridging the Gap Between Technology and Farming in Agri-Tech: A Bibliometric Analysis

Agri-tech, or the application of technology to agriculture, has the power to transform farming methods and find solutions to the problems the industry faces. With an emphasis on comprehending the significance of numerous disciplines, including Artificial Intelligence (AI), this study presents a bibliometric analysis that attempts to analyze the trends and important disciplines engaged in bridging the gap between technology and farming in agri-tech. The analysis highlights the growing interest and research activity in agri-tech and its related fields by looking at a wide range of academic publications. The results show that AI is becoming a key discipline in agri-tech, with an increase in publications highlighting its potential to boost production, improve resource management, and support sustainable farming. The analysis emphasizes the necessity for interdisciplinary cooperation among researchers, practitioners, policymakers, and farmers in order to close the gap between technology and farming. The agriculture sector may unleash the potential of cutting-edge technologies, resulting in more effective, sustainable, and fruitful farming techniques, by utilizing AI and encouraging interdisciplinary cooperation. The conclusions drawn from this bibliometric analysis lay the groundwork for additional agri-tech research and innovation, opening the door to a revolutionary future for agriculture.

Fatma Serab Onursal, Sabri Öz
Eye Tracking Review: Importance, Tools, and Applications

Eye tracking technology has evolved into a powerful and flexible tool, providing critical insights into human visual activity and cognitive processes across a range of disciplines. This article reviews the importance of eye tracking and the hardware components used in eye tracking systems. The benefits and applications of different types of eye trackers, such as remote and head-mounted devices, are discussed; considerations such as spatial resolution, sampling rate, and accuracy assist researchers in selecting the best equipment for their study objectives. The study also discusses software essential for evaluating and interpreting eye movement data, and then examines the major eye movements and metrics used in eye tracking studies, such as fixation duration, saccades, and pupil dilation. These measures offer useful insights into visual attention and cognitive processes, allowing researchers to better understand how people respond to visual stimuli. Finally, eye tracking applications in psychology, marketing, human-computer interaction, and medical research are highlighted. From analyzing consumer behavior to improving user interfaces, eye tracking demonstrates its adaptability and importance in understanding human behavior in real-world scenarios.
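One of the metrics named above, fixation duration, is derived from raw gaze samples. A common simplification is a dispersion-based rule (in the spirit of I-DT): consecutive samples within a small spatial window form one fixation. The sketch below uses invented gaze coordinates and an illustrative dispersion threshold.

```python
# Hedged sketch of dispersion-based fixation detection: group consecutive
# gaze samples whose bounding box stays small, and report fixation lengths
# in samples. Thresholds and data are illustrative.
def fixation_durations(samples, max_disp=30, min_len=3):
    """samples: list of (x, y) gaze points; returns fixation lengths."""
    fixations, start = [], 0
    for i in range(1, len(samples) + 1):
        xs, ys = zip(*samples[start:i])
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_disp:
            if i - 1 - start >= min_len:
                fixations.append(i - 1 - start)
            start = i - 1
    if len(samples) - start >= min_len:
        fixations.append(len(samples) - start)
    return fixations

gaze = [(100, 100), (102, 101), (101, 99), (103, 100),   # first fixation
        (300, 250), (301, 251), (299, 250), (300, 249)]  # second fixation
print(fixation_durations(gaze))
```

Multiplying each length by the tracker's sampling interval converts these counts into the fixation durations reported in eye tracking studies.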

Taisir Alhilo, Akeel Al-Sakaa
Using Machine Learning to Control Congestion in SDN: A Review

Congestion is a major issue in networks, as it decreases efficiency and wastes bandwidth. While the basic operations of TCP remain the same, different flavors of TCP have been developed for specific network environments to help control congestion by updating the window size and data transmission. Software-defined networking (SDN) provides centralized control of network traffic, and the huge amount of data received by SDN controllers can serve as an input dataset for machine learning algorithms to extract information that helps improve network performance. To process these data, machine learning (ML) has been suggested for improving network performance and reinforcement learning (RL) for predicting congestion. This article reviews recent ML algorithms for congestion control in SDNs, starting with a brief overview of SDN, ML, and congestion control, and then reviewing recent works that apply ML to control congestion. Based on this comprehensive review, we conclude that the RL actor-critic algorithm is the most efficient approach for preventing congestion in SDN networks. In addition, the random forest ML algorithm has successfully classified flow types and detected flows that may cause congestion.

Tabarak Yassin, Omar Ali
Deployment Yolov8 Model for Face Mask Detection Based on Amazon Web Service

Since the COVID-19 pandemic, many concerns have been expressed regarding public health and safety, leading to the widespread adoption of protective measures such as wearing face masks. To ensure adherence to safety regulations, the development of reliable and effective solutions for automatic mask detection is of the utmost importance. In this paper, we use Amazon SageMaker to build and train a YOLOv8 model; testing and validation were performed on the MJFR dataset, which we collected. The model's performance is evaluated using the following metrics: precision (P), recall (R), mean average precision (mAP@.5), and mAP@.5:.95. The deployment of the face mask detection system using the YOLOv8 object detection algorithm and its integration into a Flask web application running on an Amazon EC2 instance is thoroughly studied in this work.

Muna Jaffer Al-Shamdeen, Fawziya Mahmood Ramo
The Contribution of the Texturing in the Processing of Optical Data

This study delves into the application of three distinct classification methodologies for extracting information from satellite imagery. The first utilizes traditional techniques with three channels: TM1, TM3, and TM4. The second combines textural indices from occurrence matrices with additive channels and raw radiometric channels. The third integrates these channels with Gabor filter imagery. The aim is to decipher the satellite images’ intrinsic data and comparatively analyze the methodologies’ effectiveness. Our results demonstrate that textural classification, especially with the Gabor filters, amplifies the discriminative capability, achieving a 5% enhancement in the classification rate. This is particularly impactful in differentiating themes like Urban and Sebkha1, emphasizing the potential of textural features in refining satellite image classification processes.

Abdelrafik Touzen, Sarah Ghardaoui, Hadria Fizazi, Meriem Abidi, Nourredine Boudali, Belhadj K. Oussama
Modeling Automobile Credit Scoring Using Machine Learning Models

This study applied ML models to determine variables related to automobile credit scoring in a financial institution. We used four machine learning methods, logistic regression (LR), artificial neural network (ANN), random forest (RF), and Extreme Gradient Boosting (XGBoost), with 10-fold cross-validation repeated 10 times. The RF algorithm achieved the best performance on all metrics, with an area under the receiver operating characteristic curve (AUROC) of 96.2%. AUROC scores for the other models were 91.7% for LR, 91.6% for XGBoost, and 91.5% for ANN. SHAP (SHapley Additive exPlanations) values were also calculated to better explain the importance of the indicators. The most relevant features of the model were the debt-to-income ratio score, documented wealth score, and down payment rate score. In summary, this study may help automobile credit providers and applicants in their credit evaluation process.
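The AUROC reported above has a direct rank interpretation: the probability that a randomly chosen positive (bad-outcome) case receives a higher score than a randomly chosen negative one. The sketch below computes it from that definition, on invented scores.

```python
# Sketch of AUROC via its rank formulation: fraction of (positive, negative)
# score pairs where the positive scores higher (ties count half).
# Labels and scores are hypothetical.
def auroc(y_true, scores):
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0, 0]          # 1 = bad credit outcome
s = [0.9, 0.6, 0.7, 0.3, 0.2]  # model risk scores
print(auroc(y, s))
```

This pairwise form is O(P*N); libraries compute the same quantity faster from a sorted ranking, but the value is identical.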

Pakize Yiğit
Exploring Lightweight Blockchain Solutions for Internet of Things: Review

The world is witnessing a major digital transformation and is moving towards more interaction, connectivity, ease, and intelligence through the Internet of Things (IoT). The IoT offers these advantages to the world by linking necessary devices with each other, making it easier to manage and deal with those devices. However, the IoT faces many challenges, such as authentication, privacy, security, and access management. The application of blockchain technology may provide a solution to these challenges. Nevertheless, applying blockchain technology may face limitations, such as the limited resources of the IoT devices used and the resource-intensive requirements of the blockchain. Therefore, to overcome these limitations, several studies have proposed using a lightweight blockchain; this blockchain is specifically designed for resource-limited IoT devices. In this paper, a comprehensive review has been made on the uses of lightweight blockchain in the IoT. Moreover, we identified some of the challenges facing the application of blockchain technologies in the IoT and the future directions.

Omar Ayad Ismael, Mohammed Majid Abdulrazzaq, Nehad T. A. Ramaha, Yasir Adil Mukhlif, Mustafa Ali Sahib Al Zakitat
Harnessing Advanced Techniques for Image Steganography: Sequential and Random Encoding with Deep Learning Detection

This study delves into the intricacies of steganography, a method employed for concealing information within a clandestine medium to enhance data security during transmission. Given that information is often represented in various forms, such as text, audio, video, or images, steganography offers a distinctive advantage over conventional cryptography by focusing on concealing the very existence of the message, rather than merely its content. This research introduces a novel steganographic technique that places equal emphasis on both message concealment and security enhancement. This study highlights two primary steganographic methods: sequential encoding and random encoding. By employing both encryption and image compression, these techniques fortify data security while preserving the visual integrity of cover images. Advanced deep learning models, namely Vgg-16 and Vgg-19, are proposed for the detection of image steganography, with their accuracy and loss rates rigorously evaluated. The significance of steganography extends across various sectors, including the military, government, and online domains, underscoring its pivotal role in contemporary data communication and security.

Mustafa Ali Sahib Al Zakitat, Mohammed Majid Abdulrazzaq, Nehad T. A. Ramaha, Yasir Adil Mukhlif, Omar Ayad Ismael
Performance Analysis for Web Scraping Tools: Case Studies on Beautifulsoup, Scrapy, Htmlunit and Jsoup

Web scraping has become an indispensable technique for extracting valuable data from websites. With the growing demand for efficient and reliable web scraping tools, it is crucial to assess their performance to guide developers and researchers in selecting the most suitable tool for their needs. In this paper, we present a comprehensive performance analysis of four popular web scraping tools: BeautifulSoup, Scrapy, HtmlUnit, and Jsoup. Our study focuses on evaluating these tools based on metrics such as execution time, memory usage, and scalability. We conducted experiments using various websites and datasets to provide a comprehensive evaluation of the tools’ performance. The results highlight the strengths and limitations of each tool, allowing users to make informed decisions when choosing a web scraping tool based on performance requirements. Additionally, we discuss real-world use cases and the impact of website structures on tool performance. This paper aims to assist developers and researchers in selecting the most appropriate web scraping tool for their specific needs, and it also identifies avenues for future research to further enhance the performance of these tools.
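The execution-time and memory metrics used in the comparison above can be collected with a small harness like the one below, shown here with the Python standard-library HTML parser; the benchmarked tools would be swapped into the same harness. The HTML snippet is generated on the fly and is purely illustrative.

```python
# Sketch of a scraping benchmark harness: measure wall time and peak memory
# for extracting links from an HTML document with html.parser.
import time, tracemalloc
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

doc = '<ul>' + ''.join(f'<li><a href="/p/{i}">item {i}</a></li>'
                       for i in range(1000)) + '</ul>'

tracemalloc.start()
t0 = time.perf_counter()
p = LinkCollector()
p.feed(doc)
elapsed = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"{len(p.links)} links, {elapsed*1000:.2f} ms, peak {peak/1024:.0f} KiB")
```

For fair comparisons the same document set should be parsed several times per tool and the runs averaged, since single-run timings are noisy.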

Yılmaz Dikilitaş, Çoşkun Çakal, Ahmet Can Okumuş, Halime Nur Yalçın, Emine Yıldırım, Ömer Faruk Ulusoy, Bilal Macit, Aslı Ece Kırkaya, Özkan Yalçın, Ekin Erdoğmuş, Ahmet Sayar
Exploring Spreaders in a Retweet Network: A Case from the 2023 Kahramanmaraş Earthquake Sequence

Two massive earthquakes struck the Kahramanmaraş district of Türkiye on 6 February 2023, causing loss of life and damage on a catastrophic scale. Many blamed the government for its inefficiency in dealing with the disaster, and #devletyok (there is no government) was a hashtag used in the aftermath on social networking sites. We analyze the retweet network around the hashtag on 24 February, two weeks after the disaster, aiming to extract the topological characteristics of the network, the influential spreaders in the network, and the source of the diffusion. We make use of centrality measures, the HITS algorithm, the PageRank algorithm, and k-shell decomposition to detect the influential spreaders. The social network analysis here differs from much of the previous research in that we explore the central roles in an information diffusion on a network where all nodes are active, representing already diffused information. In-degree centrality, betweenness centrality, and the HITS algorithm provide useful results in detecting spreaders in our network, while closeness centrality, PageRank, and k-shell decomposition supply no additional knowledge. We identify three nodes in the network with central roles in the diffusion, one being the source node. Checking the account of this source node reveals an anonymous user who does not declare his/her identity. The study has useful future implications for political and governmental studies. Moreover, the procedure applied to detect influential spreaders has many potential use cases in other fields such as marketing and sociology.
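In-degree centrality, one of the measures that proved useful above, is trivial to compute on a retweet edge list. In the sketch below each edge points from the retweeter to the original author (an assumed orientation), so high in-degree marks accounts whose posts were widely retweeted; the edge list is invented.

```python
# Sketch of in-degree centrality on a retweet network: edges run
# (retweeter -> author), so in-degree counts how often an account's
# posts were retweeted. Hypothetical edge list.
from collections import Counter

def in_degree(edges):
    return Counter(dst for _, dst in edges)

edges = [("u1", "src"), ("u2", "src"), ("u3", "src"),
         ("u2", "hub"), ("u4", "hub"), ("u4", "u1")]
top = in_degree(edges).most_common(1)[0]
print(top)
```

Betweenness and HITS require traversing the whole graph, but they start from exactly this kind of directed edge list.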

Zeynep Adak, Ahmet Çetinkaya
Verifying the Facial Kinship Evidence to Assist Forensic Investigation Based on Deep Neural Networks

Evidence from a criminal incident can be considered the substance of the search for the perpetrator. Even when the offender ultimately receives the deserved punishment, the shorter the time required to catch the culprit, the higher society's confidence in security and justice. Thus, justice agencies should adopt any new technology that contributes to this process as soon as possible. Kinship verification is an interesting and challenging area of study in computer vision and computational forensics. Facial kinship verification predicts whether two people are related by kinship based on facial images or videos. It has a variety of real-world applications, including forensic investigations, helping resolve missing person cases, social media analysis, and genealogy research. The proposed approach verifies the relationship between the provided facial images using a three-dimensional convolutional neural network and involves the following stages: face preprocessing, deep feature extraction, and classification. Extensive experiments revealed promising results compared with many state-of-the-art approaches; the accuracy of the proposed system reached 89.25% on the KinFaceW-I dataset.

Ruaa Kadhim Khalaf, Noor D. Al-Shakarchy
Software Defects Detection in Explainable Machine Learning Approach

In the era of ubiquitous software systems, the complexity and urgency in software production have often led to compromises in quality. Traditional testing methods are increasingly inadequate, demanding more automated solutions. This research explores the application of machine learning (ML) for Software Defect Prediction (SDP), specifically focusing on binary classification of defective and non-defective software components. Leveraging state-of-the-art ML models such as Random Forest, Artificial Neural Network (ANN), and XGBoost, the study rigorously evaluates their effectiveness on the Promise CM1 dataset. Moreover, the paper addresses the “black box” challenge by employing Explainable AI (XAI) techniques; SHapley Additive exPlanations (SHAP) is used to elucidate the models’ decision-making processes. This approach balances predictive accuracy with interpretability, fostering trust, and promoting responsible usage of automated defect prediction. The research findings offer significant advancements in software quality assurance and provide an insightful perspective on the alignment between prediction capabilities and comprehensible models.

Muayad Khaleel Al-Isawi, Hasan Abdulkader
Explainable AI for Predicting User Behavior in Digital Advertising

Online advertising has ushered in a new era of digital communication and business transformation. However, the inundation of digital content necessitates a deeper understanding of user behavior to ensure meaningful engagement. This paper investigates the potential of machine learning in predicting and analyzing user behavior in the realm of online advertising. Utilizing a dataset encompassing user interactions with advertisements, we deployed three machine learning models: Random Forest, Logistic Regression, and Gradient Boosting. Our findings highlight that the Random Forest model outperformed with an accuracy of 97.67%, followed closely by Logistic Regression and Gradient Boosting. Furthermore, recognizing the opaque nature of machine learning models, our research leverages SHAP and LIME, tools of explainable AI, ensuring that our models' decisions remain interpretable. This study underscores the power of a data-driven approach in online advertising, emphasizing the necessity for both precision and transparency in this digital age.

Ashraf Al-Khafaji, Oguz Karan
CryptStego: Powerful Blend of Cryptography and Steganography for Securing Communications

In today's era, security is one of the most critical issues in the development of electronic communications applications, especially when sending private data. The data may be encrypted with several algorithms; however, an extra layer of security can improve protection significantly. Therefore, in this paper we developed an application, CryptStego, that secures data using two techniques, cryptography and steganography, to transmit it securely. The original data is encrypted using the Blowfish algorithm, a cryptographic technique, and the encrypted data is then hidden using Least Significant Bit (LSB) embedding, a steganographic technique. Combining both techniques offers an extensive level of security, since an intruder must first locate the hidden data within the image to obtain the encrypted text and then decrypt it to recover the original message. Any intruder must therefore overcome multiple levels of security to obtain the original message from the cipher image.
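The LSB layer of such a scheme can be sketched in a few lines: write each bit of the (already encrypted) payload into the least significant bit of successive pixel values, and read them back in the same order. The pixel array below is a stand-in for real image data, and the Blowfish encryption step is omitted.

```python
# Minimal LSB steganography sketch: hide bytes in the least significant bits
# of pixel values and recover them. In CryptStego-style pipelines, `secret`
# would be ciphertext (e.g. Blowfish output), not plaintext.
def embed(pixels, secret):
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the LSB only
    return out

def extract(pixels, n_bytes):
    return bytes(sum((pixels[b * 8 + i] & 1) << i for i in range(8))
                 for b in range(n_bytes))

cover = list(range(64))        # stand-in for grayscale pixel values
stego = embed(cover, b"hi")
print(extract(stego, 2))
```

Because only the lowest bit of each pixel changes, the visual difference between cover and stego images is imperceptible, which is the property the abstract relies on.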

Shraiyash Pandey, Pashupati Baniya, Parma Nand, Alaa Ali Hameed, Bharat Bhushan, Akhtar Jamil
Applications and Associated Challenges in Deployment of Software Defined Networking (SDN)

SDN, a rising technology within the realm of Internet of Things (IoT), has been increasingly well-received in recent times. This article presents a summary of SDN along with its different elements, advantages, and difficulties. The paper aims to provide practical solutions for introducing OpenFlow into commercial routers without hardware modifications and extending the integration of OpenFlow with legacy control protocols and control planes. In addition, the paper presents a refactoring process for migrating traditional network applications to OpenFlow-based ones, focusing on the security challenges and techniques of open technologies like SDN, OpenROADM, and SDN-based Mobile Networks (SDMN). The document also examines the advantages and possible uses of SDMN in enhancing network adaptability, streamlining network administration, and bolstering network security. The article also discusses O-RAN network innovations and difficulties, such as AI and ML workflows that are made possible by the architecture and interfaces, security concerns, and, most importantly, standardization issues.

Pashupati Baniya, Atul Agrawal, Parma Nand, Bharat Bhushan, Alaa Ali Hameed, Akhtar Jamil
Combining Text Information and Sentiment Dictionary for Sentiment Analysis on Twitter During COVID

The presence of huge, heterogeneous data has led to the 'big data' era. Recently, Twitter usage has increased at an unprecedented rate; social media platforms like Twitter have broken boundaries and generate enormous volumes of unstructured data, opening great research opportunities for analyzing data and mining valuable information. Sentiment analysis is a demanding, versatile line of research for learning user viewpoints, and society's current trends can be easily observed through social networking websites. These opportunities bring challenges that have led to a proliferation of tools. This research analyzes sentiments in Twitter data using Hadoop technology. It explores Hadoop as a big data tool, explains the need for Hadoop in the present scenario, and describes its role in storing and analyzing large amounts of data; the Hadoop cluster, the Hadoop Distributed File System (HDFS), and Hive are also discussed in detail. The dataset used in the experiment is presented, the implementation and workflow are explained thoroughly, and the experimental results are analyzed. Finally, the last section concludes the paper, states its purpose, and discusses how it can be used in future research.
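The sentiment-dictionary component named in the title can be sketched simply: sum the polarities of a tweet's words against a lexicon and label the result. The lexicon entries below are invented for illustration; real systems use published dictionaries and would run this scoring as a Hive UDF or map-side function over HDFS-stored tweets.

```python
# Sketch of dictionary-based sentiment scoring: sum word polarities from a
# small, illustrative lexicon and label the tweet. Lexicon is hypothetical.
LEXICON = {"good": 1, "great": 2, "safe": 1, "bad": -1, "fear": -2, "sick": -2}

def label(tweet):
    score = sum(LEXICON.get(w.strip("#.,!").lower(), 0) for w in tweet.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(label("Vaccines are great and safe"))
print(label("So much fear in the news"))
```

Combining this dictionary score with text features (n-grams, hashtags) is what the title's "combining text information and sentiment dictionary" refers to.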

Vidushi, Anshika Jain, Ajay Kumar Shrivastava, Bharat Bhushan, Alaa Ali Hameed, Akhtar Jamil
Cyber Threat Analysis and Mitigation in Emerging Information Technology (IT) Trends

Cybersecurity is essential for the information technology sector. One of the main issues in the modern world is transmitting information from one system to another without leaking it. Online crimes, which are on the rise daily, are the first thing that comes to mind when we think about cybersecurity, and various governments and businesses are adopting a number of measures to stop these cybercrimes; even so, many individuals remain quite worried about cybersecurity despite the many safeguards. This study's primary goal is to examine the difficulties that modern technology-based cybersecurity faces, especially in light of the rising adoption of cutting-edge innovations such as serverless computing, blockchain, and artificial intelligence (AI). The aim of this paper is to give readers a good overview of the most recent cybersecurity trends, ethics, and strategies. Through a thorough investigation of the existing literature and actual case studies, this study focuses on the present state of cybersecurity and the steps that may be taken to address the rising dangers posed by modern technology.

Mohsin Imam, Mohd Anas Wajid, Bharat Bhushan, Alaa Ali Hameed, Akhtar Jamil
Backmatter
Metadata
Title
Emerging Trends and Applications in Artificial Intelligence
Edited by
Fausto Pedro García Márquez
Akhtar Jamil
Alaa Ali Hameed
Isaac Segovia Ramírez
Copyright year
2024
Electronic ISBN
978-3-031-56728-5
Print ISBN
978-3-031-56727-8
DOI
https://doi.org/10.1007/978-3-031-56728-5
