
2025 | Book

Information and Communication Technologies

12th Ecuadorian Conference, TICEC 2024, Loja, Ecuador, October 16–18, 2024, Proceedings

Edited by: Santiago Berrezueta-Guzman, Rommel Torres, Jorge Luis Zambrano-Martinez, Jorge Herrera-Tapia

Publisher: Springer Nature Switzerland

Book series: Communications in Computer and Information Science


About this Book

This book constitutes the refereed proceedings of the 12th Ecuadorian Conference on Information and Communication Technologies, TICEC 2024, held in Loja, Ecuador, during October 16–18, 2024.

The 24 full papers presented here were carefully reviewed and selected from 74 submissions. They were organized in the following topical sections: Image Processing, Classification, and Segmentation; Artificial Intelligence and Machine Learning Applications; IoT, Embedded Systems, and Applications in Healthcare and Industrial Environments.

Table of Contents

Frontmatter

Image Processing, Classification, and Segmentation

Frontmatter
Image Classification of Peach Leaves Using a Siamese Neural Network
Abstract
The growing global population and the increasing demand for food have made food production a critical concern. To meet this challenge, techniques like vertical farming and aquaponics have been proposed to maximize output while conserving resources and space. However, there is still room for improvement in crop care processes. Deep learning, particularly Convolutional Neural Networks (CNNs), has emerged as a leading approach for addressing various agricultural issues. This paper explores the use of Siamese CNNs to classify the front and back faces of six peach leaf varieties, a crucial step in detecting bacteriosis, a common disease in peach farming. Building on prior work in Siamese CNNs for peach leaf classification, this study examines several enhancements, including increased convolutional layers, three activation functions, and the application of pre-trained CNN models. The proposed architecture, combined with the ReLU activation function, improved the accuracy of the reference model by 3.48%. Among the pre-trained models, ResNet performed best, reaching an accuracy of 0.9841, which is 7.83% higher than the benchmark model’s top result. These findings indicate that the benchmark model had significant room for improvement, which was effectively addressed through the experiments conducted.
Mateo David Coello-Andrade, Iván Reyes-Chacón, Paulina Vizcaino-Imacaña, Manuel Eugenio Morocho-Cayamcela
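The pairwise comparison at the heart of the Siamese approach above reduces to measuring the distance between the embeddings that the twin networks produce and penalizing it with a contrastive loss. A minimal, framework-free sketch; the embedding vectors, margin, and function names are illustrative, not taken from the paper:

```python
import math

def euclidean_distance(a, b):
    """Distance between two embedding vectors produced by the twin networks."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(a, b, same_class, margin=1.0):
    """Classic contrastive loss: pull same-class pairs together,
    push different-class pairs at least `margin` apart."""
    d = euclidean_distance(a, b)
    if same_class:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Toy embeddings: two images of the same leaf variety, one of another variety.
front = [0.1, 0.9]
back = [0.2, 0.8]
other = [0.9, 0.1]
loss_same = contrastive_loss(front, back, same_class=True)   # small: close pair
loss_diff = contrastive_loss(front, other, same_class=False) # zero: already far apart
```

During training, minimizing this loss shapes the embedding space so that a simple distance threshold can classify leaf faces and varieties.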
Pancreas Segmentation Using SRGAN Combined with U-Net Neural Network
Abstract
Pancreatic cancer, a serious disease, is challenging to detect early, hindering timely treatment. Medical imaging, especially computed tomography, is vital for early diagnosis. This study focused on accurate pancreatic segmentation in medical images using subtle changes to the 2D U-Net architecture to aid diagnosis and treatment. Additionally, we proposed using a super-resolution generative adversarial network (SRGAN) to obtain better-resolution images. Evaluated with the Dice-Sørensen coefficient (DSC), the proposed methodology showed superior values: \(0.921163 \pm 0.012\) with subtle changes to the U-Net architecture, and \(0.5354 \pm 0.0002\) with SRGAN combined with the proposed U-Net at low training time. Furthermore, a recall of \(0.6351 \pm 0.0159\) and a precision of \(0.6422 \pm 0.0056\) were obtained. These results are important for the progression of medical imaging analysis for complex decision-making.
Mayra Elizabeth Tualombo, Iván Reyes, Paulina Vizcaino-Imacaña, Manuel Eugenio Morocho-Cayamcela
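The Dice-Sørensen coefficient used to score segmentations like the one above has a compact definition: twice the overlap divided by the total mask sizes. A minimal sketch on binary masks flattened to 0/1 lists; the sample masks are invented for illustration:

```python
def dice_coefficient(pred, truth):
    """Dice-Sørensen coefficient: 2|A ∩ B| / (|A| + |B|) on binary masks."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# 1 marks pancreas pixels, 0 marks background (toy 6-pixel masks).
prediction   = [1, 1, 0, 0, 1, 0]
ground_truth = [1, 0, 0, 0, 1, 1]
dsc = dice_coefficient(prediction, ground_truth)  # 2*2 / (3+3) = 0.666...
```

A DSC of 1.0 means perfect overlap; the convention of returning 1.0 when both masks are empty avoids division by zero on slices without pancreas.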
Using Artificial Intelligence and X-ray Images to Train and Predict COVID-19 and Pneumonia: Tool for Diagnosis and Treatment
Abstract
In January 2022, Ecuador experienced a peak in COVID-19 cases, with 890,541 confirmed cases and 35,658 deaths. During 2019-2020, influenza and pneumonia also ranked among the top causes of death. Traditional chest X-rays and chest CT scans are commonly used for diagnosing COVID-19, but studies by Wong et al. indicate that chest X-rays are less sensitive than CT scans unless artificial intelligence is utilized. Tahir et al. highlighted that AI models such as U-Net and neural networks achieve high diagnostic accuracy, with U-Net++ and ResNet18 models showing sensitivities above 99% and perfect specificity on a large dataset of 33,920 chest X-ray images. The rapid detection of symptoms could have helped prioritize critical care, potentially reducing deaths. The current study demonstrates that combining AI with chest X-rays can achieve a binary accuracy above 98% for COVID-19 detection using transfer learning with the Xception, VGG16, and VGG19 neural networks. Using 27,052 chest X-ray images, the VGG19 model achieved an excellent F1-score of 98.53% for COVID-19 versus normal classification. The VGG19 model also performed well in multiclass classification, with an F1-score above 89%. The study concludes that AI-enhanced diagnostic tools such as VGG19 are valuable for hospitals in diagnosing COVID-19, and future improvements might include larger datasets and enhanced segmentation techniques.
Bryan Juárez-Gonzalez, Fernando Villalba-Meneses, Jonathan Cruz-Varela, Andrés Tirado-Espín, Paulina Vizcaino-Imacaña, Carolina Cadena-Morejon, Cesar Guevara, Diego Almeida-Galárraga
Region of Interest Features and Classification of MRI Brain Lesions
Abstract
Nowadays, the diagnosis of numerous diseases is facilitated by medical imaging. In that context, the identification of brain lesions presenting as White Matter Hyperintensities (WMHs) and their related diseases is essential for a correct diagnosis. Machine and deep learning (subfields of artificial intelligence) could support diagnosis (especially in complex medical images) by leveraging the structure and regularities within the imaging data. This project presents a technique for classifying WMHs related to ischemia and demyelination through the analysis of region of interest (ROI) features of magnetic resonance images. To do that, we analyzed radiomic features using a combination of principal component analysis (PCA) and support vector machine (SVM) classification. Next, we used a transfer-learning fine-tuned ResNet18 model to more thoroughly analyze and classify lesioned ROIs. For that, we used patient data alone and additional synthetic data (generated using spectral generative adversarial networks, SNGAN). The results show a mean accuracy of 0.96 without data augmentation, while we obtained 0.54 using synthetic data; a similar value was obtained with radiomics-informed SVM classification (0.56). These findings constitute a starting point for future projects exploring different ways of informing and fine-tuning artificial intelligence models to detect, classify, and segment MRI pathologies characterized by small lesions.
Darwin Castillo, Ricardo J. Alejandro, Santiago García, María José Rodríguez-Álvarez, Vasudevan Lakshminarayanan
Deep Learning-Based Leukemia Diagnosis from Bone Marrow Images
Abstract
Identifying and classifying features in Bone Marrow Aspirate Smear (BMAS) images is essential for diagnosing various leukemias, such as Acute Myeloid Leukemia. The complexity of microscopy image analysis necessitates a computational tool to automate this process, reducing the workload on hematologists. Our study introduces a Deep Learning-based method designed to efficiently detect and classify cell characteristics in BMAS images. Current systems struggle with cell and nucleus segmentation due to variations in cell size, appearance, texture, and overlapping cells, often influenced by different microscopy conditions. We addressed these challenges by experimenting with the Munich AML Morphology Dataset and a custom dataset from Hospital 12 de Octubre in Madrid. The proposed system achieved over 90% accuracy and 92% precision in identifying and classifying leukemia cells, marking a substantial advancement in supporting clinical specialists in their decision-making processes when traditional analysis methods are insufficient.
Luis Zhinin-Vera, Alejandro Moya, Elena Pretel, Jaime Astudillo, Javier Jiménez-Ruescas
Breast Thermographic Image Augmentation Using Generative Adversarial Networks (GANs)
Abstract
Breast thermography captures infrared radiation images to monitor skin surface temperature changes non-invasively. This data, when combined with artificial intelligence, facilitates early breast cancer diagnosis and detection. However, training deep learning algorithms such as convolutional neural networks is challenging due to the limited number of images. The primary objective of this study is to create a set of synthetic breast thermographic images using segmentation and data augmentation techniques. In this work, we propose 1) Using public breast thermography databases, 2) Segmenting the region of interest with the U-Net network, 3) Increasing the variety of thermographic images using the SNGAN model, and 4) Evaluating the performance and accuracy of the previous algorithms with statistical metrics. The results indicate that the U-Net achieved an IoU of 0.96 and a Dice coefficient of 0.97. The SNGAN network generated 2000 synthetic images, reflected in a KID value of 4.54. In conclusion, U-Net is highly effective for segmenting regions of interest in thermographic images, and SNGAN shows promising results in synthetic image generation.
Ramiro Israel Vivanco Gualán, Yuliana del Cisne Jiménez Gaona, Darwin Patricio Castillo Malla, María José Rodríguez-Alvarez, Vasudevan Lakshminarayanan
Evaluating Histopathological Cancer Detection: A Comparative Analysis of CNN Architectures for Tumor Detection in Lymph Node Pathology
Abstract
This study evaluates four well-known convolutional neural networks: VGG19, EfficientNetB0, ResNet50, and InceptionV3, for tumor detection in lymph node pathology images. Using a significant subset of the PCam dataset, models were trained on binary classification tasks focused on identifying tumor tissue, with data augmentation applied to enhance generalization. The methodology involved a rigorous process of data preparation, model selection, training, and evaluation under limited hardware resources, using a standard laptop. The dataset was split into training and validation sets with an 80/20 ratio, and models were trained using the Adam optimizer with a learning rate of 0.001 over multiple epochs. VGG19 achieved the highest validation accuracy at 77.38% and AUC of 85.35% but required substantial computational time and exhibited overfitting. EfficientNetB0, though faster to train (20 m 40 s), showed lower validation accuracy (58.91%) and AUC (60.14%). ResNet50 performed well during training but faced generalization challenges. InceptionV3 demonstrated a balanced performance with a validation accuracy of 70.74% and AUC of 76.41%, making it a promising option across varied datasets. These findings highlight the strengths and limitations of different CNN architectures in enhancing cancer diagnosis in lymph node pathology, providing insights for future research and clinical applications.
Ana Marcillo-Vera, Karen Cáceres-Benítez, Diego Almeida-Galárraga, Andrés Tirado-Espín
ViTSigat: Early Black Sigatoka Detection in Banana Plants Using Vision Transformer
Abstract
This paper presents ViTSigat, a Vision Transformer-based model for detecting black Sigatoka at early stages, and compares its performance with a proposed model based on a Convolutional Neural Network (CNN). As a first step, pre-processing is applied to a dataset of 98 videos recorded in a banana plantation using a mobile phone with a high-resolution camera. The frames extracted from each video yielded a total of 1500 images, which were normalized and divided into four stages/categories. Different parameter settings were used to train the ViTSigat and CNN models; both models used the same optimizer, loss function, and learning rate. The results were evaluated using the accuracy, precision, recall, and F1-score metrics, together with confusion matrices and ROC curves. A saliency map is used to show the relevant areas where the leaves could be affected by black Sigatoka infection. The experimental results show that the ViTSigat model, based on a transformer encoder, achieves better performance because its attention modules focus on the important features of the input data rather than on non-useful information.
Jorge L. Charco, Angela Yanza-Montalvan, Johanna Zumba-Gamboa, Jose Alonso-Anguizaca, Edgar Basurto-Cruz
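The evaluation metrics named above (accuracy, precision, recall, F1-score) all derive from confusion-matrix counts. A small sketch for the binary case; the counts below are hypothetical, not results from the paper:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical counts for an "infected leaf" class of a detector.
m = classification_metrics(tp=80, fp=10, fn=20, tn=90)
```

Precision penalizes false alarms while recall penalizes missed infections; F1, their harmonic mean, is the usual single-number compromise when classes are imbalanced.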
Automatic Parking Space Segmentation Using K-Means Clustering and Image Processing Techniques
Abstract
Proper management of parking spaces is essential in urban environments. This study proposes an approach to parking space segmentation using the K-means algorithm and the OpenCV library. The main objective is to determine the trapezoid describing the parking area by analyzing data previously collected from multiple photographs. These images contain several vehicles parked in different arrangements and at different moments in time. For this, the coordinates of the four leading edges of each car were considered. The collected data were then used to estimate the trapezoid defining the parking zone in each photograph. This approach combines segmentation and image processing techniques to delimit parking spaces in urban environments.
Anthony Xavier Romero Gonzalez, Kevin Sebastian Campoverde Ambrosi, Patricio Eduardo Ramon Celi, Alexandra Bermeo, Marcos Orellana, Jorge Luis Zambrano-Martinez, Patricio Santiago García-Montero
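The clustering step can be illustrated with a plain NumPy k-means that groups detected vehicle-corner coordinates into spatial clusters. This is a generic sketch, not the authors' implementation, and the sample coordinates are invented:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # distance from every point to every centroid, then nearest assignment
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        new_centroids = np.array([points[labels == j].mean(axis=0)
                                  for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Invented corner coordinates from cars parked in two distinct bays.
corners = np.array([[12.0, 30.0], [13.0, 31.0], [80.0, 29.0], [81.0, 30.5]])
labels, centroids = kmeans(corners, k=2)
```

The resulting cluster centroids (or extreme points per cluster) could then feed the trapezoid estimation that delimits each parking zone.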

Artificial Intelligence and Machine Learning Applications

Frontmatter
Neural Agents with Continual Learning Capacities
Abstract
Contemporary Artificial Neural Networks (ANNs) often suffer from catastrophic forgetting, in which learned parameters are overwritten by new tasks. This paper presents a novel approach using a Reinforcement Learning (RL) agent with Continual Learning (CL) capabilities to navigate a visual robotic structure, achieving advanced proficiency in Tic-Tac-Toe. The system integrates a webcam for environmental perception, specialized neural blocks for feature extraction, and a communication bus linking self-taught agents with advisors. A knowledge protection mechanism prevents the loss of acquired parameters during new learning iterations. The methodology was validated on a physical robot, implemented with C++ and OpenCV, demonstrating its ability to retain knowledge and enhance gameplay, effectively emulating intelligent children’s learning strategies. The proposed system was tested in a real-world setting, achieving an average accuracy of 92% in task completion and demonstrating a 15% improvement in task retention over traditional methods.
Luis Zhinin-Vera, Elena Pretel, Alejandro Moya, Javier Jiménez-Ruescas, Jaime Astudillo
Centinela: An Intelligent System Based on an Integrated Architecture for Supporting Scholars
Abstract
Research and discovery are fundamental to the advancement of visionary universities. In the modern era, the impact of scientific research on development underscores the critical need for enhanced collaboration among experts. In Ecuador, both public and private universities are spearheading high-quality research initiatives that engage hundreds of scholars, highlighting an increased momentum in scientific output. This surge necessitates an efficient computational tool to support these academic communities in networking, finding collaborators, and selecting joint research topics. To address these needs, we have developed an architecture underpinning a new application named “Centinela”. This architecture is structured into three layers: i. a data layer formed by two operational databases and one analytical database, which, by combining multiple databases, optimizes the retrieval, processing, and presentation of research data; ii. Django for the back-end to handle business logic; and iii. Angular for the front-end to support user interaction. Centinela integrates data from the Scopus API, focusing on the contributions of Ecuadorian researchers, and builds networks of co-authorship, acts as a search engine for academic publications, allows researchers to form groups, provides recommendations on research topics, and facilitates group decision-making processes. This integration showcases the application’s robustness in streamlining the management of researchers’ collective expertise and fostering in-depth knowledge across both foundational and advanced subjects.
Lorena Recalde, Gabriela Suntaxi, Diana Martinez-Mosquera, Rommel Masabanda, Danny Cabrera
Optimizing Predictive Models in Healthcare Using Artificial Intelligence: A Comprehensive Approach with a COVID-19 Case Study
Abstract
The ongoing global healthcare challenges underscore the need for accurate and efficient predictive tools capable of assessing disease severity. This study introduces a broadly applicable artificial intelligence (AI) framework designed to enhance disease severity predictions, focusing on analyzing feature importance and optimizing hyperparameters using the Hyperband method. Utilizing a dataset of 1215 patients, we employed L1 regularization to pinpoint a minimal yet highly informative set of biomarkers: oxygen saturation (O2SAT), partial pressure of carbon dioxide (PCO2), age, percentage of lymphocytes, and percentage of neutrophils. These biomarkers were found sufficient to predict disease severity with 95% accuracy. The Hyperband method facilitated efficient and effective tuning of our neural network models, enhancing their predictive capabilities. Although initially developed for COVID-19, our streamlined model can be adapted to other diseases with similar data characteristics, thereby improving predictive efficiency and impacting healthcare resource allocation and patient management during pandemics. Our findings demonstrate the potential of targeted AI applications to not only refine response strategies during health crises but also provide insights that could lead to more informed and effective healthcare practices globally.
Juan Pablo Astudillo León, Kevin Chamorro, Santiago J. Ballaz
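The Hyperband method mentioned above builds on successive halving: evaluate many hyperparameter configurations cheaply, discard the worst, and re-evaluate the survivors with a larger budget. A toy sketch of that core loop; the candidate learning rates and the synthetic scoring function are illustrative, not the paper's search space:

```python
def successive_halving(configs, evaluate, budget=1, eta=3, rounds=3):
    """Core of Hyperband: score all configs on a small budget, keep the
    best 1/eta fraction, and repeat with eta times the budget."""
    survivors = list(configs)
    for _ in range(rounds):
        ranked = sorted(survivors, key=lambda c: evaluate(c, budget), reverse=True)
        survivors = ranked[:max(1, len(ranked) // eta)]
        budget *= eta
    return survivors[0]

# Toy objective standing in for validation accuracy: peaks at lr = 0.01,
# and larger budgets nudge the score up (purely illustrative).
def evaluate(lr, budget):
    return -abs(lr - 0.01) + 0.001 * budget

candidates = [0.0001, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 3.0]
best = successive_halving(candidates, evaluate, budget=1, eta=3, rounds=3)
```

Full Hyperband runs several such brackets with different initial budgets to hedge against configurations that only shine when trained longer.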
Predictive Model Proposal in Telemetry Using Machine Learning Techniques to Anticipate Water Degradation in Aquaculture
Abstract
This research presents an innovative strategy to improve water quality management in aquaculture environments by employing machine learning methods. A telemetric system was used to collect and monitor environmental data such as the power of Hydrogen (pH), Temperature, Turbidity, and Total Dissolved Solids (TDS). The centrepiece of this study is the development of a telemetry-based predictive model, which leverages these data to predict changes in water quality. Several machine learning algorithms were tested, including Random Forest, Gradient Boosting, Polynomial Regression, Linear Regression, and k-Nearest Neighbors. After extensive testing, it was determined that Random Forest stood out as the most effective algorithm, achieving an extraordinary accuracy of 0.999 with a training time of just 12 s. This model allows for the proactive detection of conditions that could lead to the degradation of the aquatic habitat, enabling early warning of possible incidents. By anticipating these events, aquaculture resource managers can take corrective action in a timely manner, reducing the risks associated with water degradation. The results obtained underline the effectiveness and feasibility of applying machine learning methods to water quality management in aquaculture environments, showing that the combination of telemetry and appropriate algorithms can offer accurate and rapid solutions to improve sustainability and efficiency in this field.
Néstor Rafael Salinas-Buestán, Francisco Alexander Zambrano-Varela, Ángel Iván Torres-Quijije, Diego Fernando Intriago-Rodríguez, Diego Patricio Peña-Banegas
Evaluation of the Use of Artificial Intelligence Techniques in the Mitigation of the Broadcast Storm Problem in FANET Networks
Abstract
UAVs are nodes that can fly autonomously or be operated remotely. A cluster of UAVs forms what is known as a FANET. In a FANET, node mobility is much higher than in a MANET, which results in more frequent changes to the network topology. Consequently, packet broadcasts are also expected to occur more frequently, which can cause redundancy, contention, and packet collisions, also known as the Broadcast Storm Problem (BSP). In this work, we propose integrating artificial intelligence techniques into the analysis of packet retransmissions in a FANET. In this regard, various models commonly used in telecommunications networks were trained. As a result, an AI-based protocol that reduces the BSP is presented. Specifically, the AI-based protocol selects between two hybrid BSP reduction models. The results show that the proposal is meaningful in achieving greater efficiency in retransmission savings and thus in energy consumption.
Andrés Sánchez, Patricia Ludeña-González, Katty Rohoden
Adaptation Dynamics of Galápagos Finches: Evolutionary Responses to Climate Variation Explored through Machine Learning
Abstract
Climate change is accelerating the evolution of species as global warming intensifies and natural environments undergo significant alterations. This study addresses the urgent need to understand how these environmental changes impact avian populations, specifically focusing on the physiological adaptation of finch beaks on the Galápagos Islands. By employing multiple models (logistic regression, XGBoost, and a Random Forest classifier) on the most recent data, the study investigates how changes in climate conditions correlate with variations in beak dimensions. The analysis reveals significant relationships between temperature, precipitation, and finch beak characteristics, indicating that climatic factors have a notable impact on beak morphology. The study aims to clarify the evolutionary dynamics at play and to predict future adaptations based on projected climate scenarios. The results highlight the robust correlation between climatic variables and finch physiology, providing deeper insights into the complex interplay between climate change and evolutionary biology. These findings are crucial for developing targeted conservation and adaptation strategies, helping to safeguard biodiversity in the face of ongoing climate shifts. The insights gained from this study are essential for guiding policymakers and stakeholders in implementing effective measures to mitigate the impacts of climate change on ecosystems and species survival.
Ariana Deyaneira Jiménez Narváez, Dánely Leonor Sánchez Vera, Iván Reyes, Paulina Vizcaino-Imacaña, Manuel Eugenio Morocho-Cayamcela
Incident Alert Priority Levels Classification in Command and Control Centre Using Word Embedding Techniques
Abstract
This research analyzes the textual content of emergency calls that arrive at command and control centers to classify them according to their priority. The aim is to respond promptly and make appropriate decisions in situations that require immediate attention. Text mining techniques are used to construct a computational model, with preprocessing steps such as tokenization, case folding, and stop-word removal. The calls are then represented using Word Embeddings with the Skip-Gram architecture to obtain word vectors, and a clustering algorithm is applied for classification. The results show an improvement in classification accuracy, achieving 95% precision in classification tests using high- and low-priority categories and 81% in classification tests using four alert categories. These techniques enhance the syntactic and semantic understanding of emergency calls and reduce the risk of loss of human life.
Marcos Orellana, Jonnathan Emmanuel Cubero Lupercio, Juan Fernando Lima, Patricio Santiago García-Montero, Jorge Luis Zambrano-Martinez
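The Skip-Gram architecture named above is trained on (target, context) word pairs drawn from a sliding window over each preprocessed call transcript. A small sketch of that pair-generation step; the tokenized call fragment is invented:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) pairs as used to train Skip-Gram embeddings."""
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

# Invented fragment of an emergency call after tokenization and stop-word removal.
tokens = ["fire", "reported", "downtown", "building"]
pairs = skipgram_pairs(tokens, window=1)
# [('fire', 'reported'), ('reported', 'fire'), ('reported', 'downtown'),
#  ('downtown', 'reported'), ('downtown', 'building'), ('building', 'downtown')]
```

A Skip-Gram model then learns vectors such that a word's embedding predicts its context words, giving calls with similar wording similar vector representations for the clustering stage.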
Physics Informed Neural Networks and Gaussian Processes-Hamiltonian Monte Carlo to Solve Ordinary Differential Equations
Abstract
Non-linear systems of differential equations are vital in fields like biology, finance, ecology, and engineering for modeling dynamic systems. This paper explores two advanced function approximation techniques, Physics-Informed Neural Networks (PINNs) and Gaussian Processes (GPs) combined with Hamiltonian Monte Carlo (HMC), for solving Ordinary Differential Equations (ODEs) that represent complex physical phenomena. The proposed approach integrates PINNs and GP-HMC, demonstrated through two synthetic models (Lotka-Volterra and FitzHugh-Nagumo) and a real dataset (COVID-19 SIR model). The results show that the methodology effectively estimates parameters with low Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). For example, in the Lotka-Volterra model, GP-HMC achieved an RMSE of 0.044 and MAE of 0.041 for one state variable, while PINNs yielded an RMSE of 0.106 and MAE of 0.081. These results highlight the robustness of the methodology in accurately reconstructing system states across varying levels of variability.
Roberth Chachalo, Jaime Astudillo, Saba Infante, Israel Pineda
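The RMSE and MAE figures reported above follow the standard definitions; a minimal sketch, where the sample trajectory values are invented for illustration:

```python
def rmse(y_true, y_pred):
    """Root Mean Squared Error: sqrt of the mean squared residual."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def mae(y_true, y_pred):
    """Mean Absolute Error: mean of the absolute residuals."""
    n = len(y_true)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n

# Invented true state trajectory vs. its reconstruction.
truth = [1.0, 2.0, 3.0, 4.0]
estimate = [1.1, 1.9, 3.2, 3.8]
err_rmse = rmse(truth, estimate)
err_mae = mae(truth, estimate)  # (0.1 + 0.1 + 0.2 + 0.2) / 4 = 0.15
```

RMSE weights large residuals more heavily than MAE, which is why papers often report both: a gap between them signals a few large reconstruction errors.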

IoT, Embedded Systems, and Applications in Healthcare and Industrial Environments

Frontmatter
Pre-processing of the Text of ECU 911 Emergency Calls
Abstract
The function of the Integrated Security Service (ECU 911) is to receive emergency calls and coordinate the response through various emergency services. These interactions are recorded and stored as a backup of each call. The present research proposes pre-processing the text contained in these calls by applying selected text formalization techniques, such as the use of dictionaries of idioms, consultation of the Dictionary of the Royal Spanish Academy (RAE), spell checkers, and Named Entity Recognition (NER). A bitmap is proposed as an auxiliary tool to facilitate decision-making and correct formalization during the process. Finally, the score obtained by a semantic comparison model is evaluated, in which the informal transcribed text of the calls is compared with the formalized text to determine whether text formalization improves the performance of the Artificial Intelligence models. The results demonstrated that performance was improved or maintained in 74% of the test cases thanks to the combination of these techniques.
Marcos Orellana, Pablo Andres Molina Pinos, Patricio Santiago García-Montero, Jorge Luis Zambrano-Martinez
Secure BLE Communication Between Android Devices and Embedded Systems for IoT Applications
Abstract
Data privacy is a key concern in current Internet of Things (IoT) communications. In this work, a novel mechanism for secure Bluetooth Low Energy communication between Android and IoT devices is proposed, based on a literature review of vulnerabilities and potential attacks on this protocol. The vulnerabilities and attacks found in the literature were analyzed and, on this basis, a mechanism was proposed that includes a series of application-level security measures to ensure secure communication. This mechanism involves the generation, sharing, and renewal of both symmetric and asymmetric security keys, as well as time and authenticity control through timestamps and unique keys, among other measures. As a result, according to the methodology, the proposal provides resistance against 71.43% of the attacks found in the analyzed works. The remaining 28.57% of attacks, not covered by the present proposal, do not impact the privacy or authenticity of the transmitted data, ensuring secure communication.
Raúl Armas, Marlon Navia
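One of the application-level measures described above, timestamped and authenticated messages to resist replay and tampering, can be sketched with Python's standard library. The key, skew window, and message format below are illustrative assumptions, not the authors' exact scheme:

```python
import hashlib
import hmac

def sign_message(key: bytes, payload: bytes, timestamp: float) -> str:
    """Tag = HMAC-SHA256 over timestamp || payload, sent alongside the message."""
    msg = str(timestamp).encode() + b"|" + payload
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_message(key: bytes, payload: bytes, timestamp: float,
                   tag: str, now: float, max_skew: float = 5.0) -> bool:
    """Reject stale timestamps (replayed packets) and forged tags (tampering)."""
    if abs(now - timestamp) > max_skew:
        return False
    expected = sign_message(key, payload, timestamp)
    return hmac.compare_digest(expected, tag)

key = b"shared-session-key"  # illustrative pre-shared key
tag = sign_message(key, b"unlock", timestamp=100.0)
fresh = verify_message(key, b"unlock", 100.0, tag, now=102.0)     # accepted
replayed = verify_message(key, b"unlock", 100.0, tag, now=300.0)  # rejected
```

Using `hmac.compare_digest` for the tag check avoids timing side channels; key renewal, as proposed in the paper, would limit the damage if a session key leaks.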
Modeling and Control of a Peltier Thermoelectric System Applying a Multi-objective Optimization Approach
Abstract
This paper proposes a design procedure based on multi-objective optimization for modeling a Peltier thermoelectric system using experimental data. A multi-objective evolutionary algorithm was used to identify a set of optimal parameters that satisfactorily characterize the system’s dynamics. The proposed methodology offers a designer valuable information about the dynamics of the temperatures of the Peltier’s hot and cold surfaces and the trade-offs between their design objectives (visualized in the Pareto fronts). In this way, a control engineer can be sufficiently informed to choose, according to their preferences, a model for the Peltier cell with the best performance (for different system operation scenarios). A Peltier cold-side temperature model was selected to tune a proportional-integral-derivative (PID) controller and evaluate the robustness of the model. The tuned PID controller works directly on the nonlinear model of the Peltier cell, allowing it to effectively control the temperature of the cold surface of the thermoelectric module. The methodology uses the Integral Absolute Error (IAE) as a performance index to evaluate the quality of system modeling. The results show that the methodological approach applied to model and control the system performs very satisfactorily.
Víctor Huilcapi, Geovanny García, Elias Ghia, Brian Soto
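A discrete PID loop of the kind tuned above can be sketched against a toy first-order thermal model. The plant constants and gains below are invented for illustration and are not the identified Peltier model from the paper:

```python
def pid_controller(kp, ki, kd, dt):
    """Return a stateful PID step function: u = kp*e + ki*integral(e) + kd*de/dt."""
    state = {"integral": 0.0, "prev_error": 0.0}
    def step(error):
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return step

# Toy cold-side model: ambient heat leak minus cooling driven by control effort.
dt, t_ambient, tau, gain = 0.05, 25.0, 10.0, 0.5  # invented plant constants
setpoint = 10.0        # desired cold-side temperature (degrees C)
temperature = 25.0     # cold side starts at ambient
step = pid_controller(kp=2.0, ki=0.5, kd=0.0, dt=dt)
for _ in range(4000):  # simulate 200 s with forward-Euler integration
    u = step(temperature - setpoint)  # signed drive; negative reverses the current
    temperature += dt * ((t_ambient - temperature) / tau - gain * u)
```

With these gains the loop is overdamped and the integral term removes the steady-state offset caused by the ambient heat leak; the IAE index used in the paper would simply accumulate `abs(error) * dt` over such a run.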
Wearable Device for Acquiring Biomechanical Variables Applied to the Analysis of Occupational Health Risks in Industrial Environments
Abstract
Activities performed in industrial environments are classified as high risk for causing occupational diseases because they demand high frequencies of movement execution or high physical effort from industrial operators. Currently, risk indexes are assessed only through observation and questionnaires, which adds subjectivity and unreliability to the results. The present study proposes the development and evaluation of a wearable technological device to acquire an operator's biomechanical variables in situ, so that these data serve to determine occupational risk indexes more objectively. The hardware architecture of the implemented wearable device consists of four modules: signal acquisition, processing, storage, and data transmission. The eleven (11) monitored variables are six myoelectric signals from the forearm, biceps, and back, two back inclination angles, heart rate, skin electrical conductance, and body temperature. To evaluate the performance of the implemented device, the eleven monitored variables were acquired, recorded, and visualized during the execution of three test exercises by three users with different physiological parameters, resulting in a set of 99 signals. Congruence was observed between the occurrence of signal peaks and the characteristics of the executed exercises; in addition, there was no loss of data between the capture, transmission, recording, and visualization stages.
Carlos Calderon-Cordova, Victor Puchaicela, Roger Sarango
Spectral Analysis of Powertrain Vibration in a Hybrid Vehicle Under Controlled Operating Conditions
Abstract
This paper deals with the characterization of the vibration frequency spectrum of the electric motor in the powertrain of a hybrid vehicle. This analysis is carried out using a vibration analyzer as the measurement instrument and Minitab software for processing the numerical data obtained. The numerical data collected in each experiment allow an accurate evaluation of the significant differences in power consumption in both the charging and discharging phases. To meet the proposed objectives, we characterize the vibration spectrum of the electric motor of the hybrid vehicle powertrain under controlled conditions, which include both the charging and discharging phases as well as the operation of the air conditioning system at its maximum capacity. Data acquisition is carried out using Dewesoft software. Subsequently, analysis of variance is used to process and filter the collected data. The null hypothesis was accepted, since the results exceeded the significance level set at 0.005, with a confidence level of 99.5% for the normal distribution and 99% for the ANOVA analysis.
Raquel de los A. Salas Ibarra, Alexander E. Torres Romero, David H. Cárdenas Villacrés
Evaluation of an Intelligent Educational Toy-Game Prototype for Toddlers’ Motor Stimulation and Learning
Abstract
Existing literature supports the relevance of educational toys in early child development, especially smart toys related to digital games that can promote cognitive and motor development in children. These toys not only entertain but also facilitate learning through interaction and personalization. This study validates a prototype of an educational toy using artificial intelligence designed to improve cognitive and motor skills in toddlers aged two to four years, specifically testing color learning and fine motor skills. It follows an applied research methodology based on the research-through-design paradigm, using participatory mixed methods and design thinking strategies. The validation was carried out in children's educational centers through observational tests and focus groups. The results show that the “Dori” prototype significantly improves hand-eye coordination and facilitates color recognition in children. In addition, it was observed that the toy captures children's attention and promotes effective learning through lights and sounds. The study concludes that intelligent educational toys are valuable tools for the integral development of children, combining playful learning with gaming themes and sensory stimuli. Areas for improvement of this type of toy were also identified, such as the alignment of the pieces and the volume of the sounds, highlighting the importance of integrating technologies into educational toys to promote children's integral development. Nevertheless, it is suggested to continue investigating its application in diverse cultural contexts and to expand the age range of the test participants.
Nayeth Idalid Solorzano Alcivar, Da Hee Park Kim, Jimmy Ernesto Canizares Pozo, Michael Xavier Arce Sierra, Andrea Paola Rubio Zurita
Changes in Academic Assessment Due to the Use of Artificial Intelligence
Abstract
This study seeks to identify effective evaluation methods in light of the use of artificial intelligence. It addresses the extent to which artificial intelligence tools and functionalities are used in secondary education, mainly by students and teachers. Within the methodological framework, the scientific literature on the use of artificial intelligence in academic assessment has been reviewed. In addition, a survey was conducted with students and teachers to gather information on the influence of artificial intelligence, its ethical use, and its application in different academic activities. It is concluded that artificial intelligence is used in various educational activities, allows for the optimization of teachers' work, and evaluates, improves, and personalizes the teaching-learning process of students. Notably, students who use artificial intelligence tools prefer to be assessed by human teachers. However, these resources should be used as an aid, with adequate follow-up. Finally, this work proposes practical evaluation methods using artificial intelligence.
Isaac Ojeda, Santiago Castro Arias
Backmatter
Metadata
Title
Information and Communication Technologies
Edited by
Santiago Berrezueta-Guzman
Rommel Torres
Jorge Luis Zambrano-Martinez
Jorge Herrera-Tapia
Copyright Year
2025
Electronic ISBN
978-3-031-75431-9
Print ISBN
978-3-031-75430-2
DOI
https://doi.org/10.1007/978-3-031-75431-9