
2024 | Book

Applied Informatics

6th International Conference, ICAI 2023, Guayaquil, Ecuador, October 26–28, 2023, Proceedings


About this Book

This book constitutes the proceedings of the 6th International Conference on Applied Informatics, ICAI 2023, which took place in Guayaquil, Ecuador, in October 2023.
The 30 papers presented in this volume were carefully reviewed and selected from 132 submissions. The contributions are divided into the following thematic blocks: Artificial Intelligence; Data Analysis; Decision Systems; Enterprise Information Systems Applications; Geoinformatics; Health Care Information Systems; Interdisciplinary Information Studies; Learning Management Systems; Virtual and Augmented Reality.

Table of Contents

Frontmatter
Correction to: Comparative Analysis of Spatial and Environmental Data in Informal Settlements, from Point Clouds and RPAS Images
Carlos Alberto Diaz Riveros, Andrés Cuesta Beleño, Julieta Frediani, Rocio Rodriguez Tarducci, Daniela Cortizo

Artificial Intelligence

Frontmatter
A Feature Selection Method Based on Rough Set Attribute Reduction and Classical Filter-Based Feature Selection for Categorical Data Classification

The main objective of feature selection in machine learning classification is to reduce the number of features by removing irrelevant and noisy ones, thereby improving the accuracy and efficiency of the classification model. As with continuous and mixed data, feature selection has been applied to improve categorical data classification. On large datasets with tens of features, however, existing feature selection methods perform worse in terms of accuracy metrics than baseline categorical data classification models that use the full feature set. This paper presents a feature selection method that integrates Rough Set Attribute Reduction with classical filter-based feature selection to improve the performance of categorical data classification. Two large categorical datasets from the UCI repository are used to evaluate the method, with Support Vector Machine, Random Forest, and Multilayer Perceptron algorithms as machine learning classifiers. The results show that the proposed method outperforms existing feature selection models in terms of Accuracy, Precision, Recall, and F-measure for individual classes and their weighted averages in both case studies. Benchmarked against the baseline classification models, the proposed method achieves its best overall performance with Random Forest.

Oluwafemi Oriola, Eduan Kotzé, Ojonoka Atawodi
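For illustration, a minimal sketch of the filter-based half of such a pipeline, assuming a one-hot-encoded UCI-style categorical dataset; a plain chi-squared filter stands in for the paper's rough-set reduction step, and the file and column names are hypothetical:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("categorical_dataset.csv")        # hypothetical UCI-style file
X = pd.get_dummies(df.drop(columns="class"))       # one-hot encode the categories
y = df["class"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

selector = SelectKBest(chi2, k=20).fit(X_tr, y_tr) # keep the 20 best-scoring features
clf = RandomForestClassifier(random_state=42).fit(selector.transform(X_tr), y_tr)

print(classification_report(y_te, clf.predict(selector.transform(X_te))))
```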
Enhancing Face Anti-spoofing Systems Through Synthetic Image Generation

This study introduces a strategy for synthetic image generation aimed at enhancing the detection capability of facial authentication systems (FAS). By employing various digital manipulation techniques, new synthetic fake images were generated from existing datasets. Through experiments and result analysis, the impact of these new fake samples on the detection accuracy of FAS was evaluated. The findings demonstrate the effectiveness of synthetic image generation in augmenting the diversity and complexity of the training data: fine-tuning on the enhanced datasets significantly improved detection accuracy across the evaluated systems. Nonetheless, the degree of improvement varied among systems, indicating varying susceptibility to specific types of attack.

César Vega, Ruben Manrique
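A toy sketch of how synthetic "recapture-style" fakes can be derived from bona fide images, approximating replay/print artefacts with down-sampling, blur, and heavy JPEG compression; the directory names are hypothetical and the manipulations are illustrative, not the paper's:

```python
from pathlib import Path
from PIL import Image, ImageFilter

def make_synthetic_fake(path: Path, out_dir: Path) -> None:
    img = Image.open(path).convert("RGB")
    w, h = img.size
    fake = img.resize((w // 4, h // 4)).resize((w, h))        # recapture detail loss
    fake = fake.filter(ImageFilter.GaussianBlur(radius=1.5))  # screen/print softness
    fake.save(out_dir / f"fake_{path.name}", quality=30)      # heavy JPEG artefacts

out = Path("synthetic_fakes")
out.mkdir(exist_ok=True)
for p in Path("bona_fide").glob("*.jpg"):                     # hypothetical folder
    make_synthetic_fake(p, out)
```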
Mapping Brand Territories Using ChatGPT

Nowadays, various tools powered by artificial intelligence technologies enable us to automate mundane daily tasks. A well-known recent example is ChatGPT, a large language model that generates seemingly coherent responses in conversations, which has paved the way for wider interaction between users and artificial intelligence technologies. In the field of advertising, meanwhile, defining brand territory plays an important role in market research and in the marketing or communication strategy that a brand implements with its customers. Brand territory groups together a set of characteristics, attributes, and values that help a brand establish a personality and differentiate itself from its competitors. Traditionally, mapping a brand territory is a largely manual, time-consuming process. In this work, we propose an approach to automate this process, creating a more efficient way of mapping brand territory. Our approach involves web scraping product reviews to obtain a data set for subsequent analysis using ChatGPT. In this manner, we automatically determine customer perceptions with regard to certain dimensions. By analyzing customer reviews with this large language model, we show that it is possible to obtain a broader view of how consumers perceive specific aspects of certain products or brands in an automated fashion.

Luisa Fernanda Rodriguez-Sarmiento, Ixent Galpin, Vladimir Sanchez-Riaño
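A minimal sketch of the review-analysis step, assuming scraped reviews are already in a list and using the OpenAI chat API; the model name and the brand dimensions are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

reviews = [         # would come from the web-scraping step
    "Great battery life, but the case feels cheap.",
    "Stylish and innovative, worth every penny.",
]
prompt = (
    "Based on the product reviews below, rate the brand from 1-5 on these "
    "dimensions: quality, innovation, value. Justify each score briefly.\n\n"
    + "\n".join(f"- {r}" for r in reviews)
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```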
Predictive Modeling for Detection of Depression Using Machine Learning

Predictive modeling techniques using artificial intelligence have shown promising potential for detecting and predicting depression, adding new perspectives to mental health assessment and treatment. This paper presents a predictive modeling approach to detect the presence of depression using machine learning techniques. The models are based on depression-related data from a student cohort, containing demographic and academic data along with depression information collected through the Beck Depression Inventory questionnaire. They also use the PHQ (Patient Health Questionnaire), GAD (Generalized Anxiety Disorder), and Epworth scores, which provide insight into the severity and impact of depressive symptoms, anxiety symptoms, and daytime sleepiness, respectively. The methodology involves data collection and preparation, feature selection, model selection, and model training using machine learning techniques. The results report the performance metrics of different predictive models on various dataset versions generated through preprocessing steps such as normalization, feature encoding, and selection. Comparing the best metrics, the Linear Discriminant Analysis model performed best in terms of AUC, F1 score, and other metrics in this specific cohort. Given recent advances in machine learning, incorporating such predictive modeling into clinical decision support systems would enable comprehensive prediction and analysis of depression across cohorts, serving as an assistive tool for mental health professionals.

Martín Di Felice, Ariel Deroche, Ilan Trupkin, Parag Chatterjee, María F. Pollo-Cattaneo
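A minimal sketch of the best-performing model, assuming a tabular cohort file with demographic/academic features plus PHQ, GAD, and Epworth scores and a binary depression label (all column names hypothetical):

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("student_cohort.csv")
X = df[["age", "gpa", "phq_score", "gad_score", "epworth_score"]]
y = df["depressed"]                     # 1 = depression present, 0 = absent

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, lda.predict_proba(X_te)[:, 1]))
print("F1 :", f1_score(y_te, lda.predict(X_te)))
```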
Stock Price Prediction: Impact of Volatility on Model Accuracy

This research paper focuses on predicting stock prices using neural networks and evaluating the impact of volatility on model accuracy. Two stocks, one non-volatile and one volatile, were selected to assess the effect of volatility on prediction precision using three types of neural networks: RNN, LSTM, and feedforward. The datasets used in this study include daily stock price information obtained from Yahoo Finance for the period from September 2020 to February 2023. Additionally, news articles were extracted to perform sentiment analysis: the NLTK sentiment library was used to classify sentiments as positive, negative, or neutral, and the results were averaged on a daily basis. The integration of these datasets aims to provide a comprehensive understanding of the factors influencing stock price behavior. The paper discusses the methodology used to train and evaluate the neural network models on the combined datasets. This research contributes to the field of stock price prediction and highlights the importance of considering volatility in achieving accurate predictions.

Juan Parada-Rodriguez, Ixent Galpin
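A sketch of the data-fusion step, assuming daily closes from Yahoo Finance (via the yfinance package) merged with daily-averaged VADER sentiment from NLTK; the headlines are placeholders:

```python
import pandas as pd
import yfinance as yf
from nltk.sentiment import SentimentIntensityAnalyzer   # needs the vader_lexicon

prices = yf.download("AAPL", start="2020-09-01", end="2023-02-28")[["Close"]]

sia = SentimentIntensityAnalyzer()
news = pd.DataFrame({                                   # placeholder headlines
    "date": ["2020-09-01", "2020-09-01", "2020-09-02"],
    "headline": ["Stock rallies on strong demand",
                 "Supply concerns weigh on outlook",
                 "Analysts stay neutral on shares"],
})
news["date"] = pd.to_datetime(news["date"])
news["compound"] = news["headline"].map(lambda t: sia.polarity_scores(t)["compound"])
daily_sent = news.groupby("date")["compound"].mean()    # averaged per day

features = prices.join(daily_sent.rename("sentiment"), how="left").fillna(0.0)
print(features.head())                                  # input for the networks
```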

Data Analysis

Frontmatter
Automated Diagnosis of Prostate Cancer Using Artificial Intelligence. A Systematic Literature Review

Prostate cancer is one of the most preventable causes of death. Periodic testing, supported by information on precursors such as living habits, heredity, and exposure to specific materials, helps healthcare providers achieve early detection, a desirable scenario that positively correlates with survival. However, the currently available diagnostic mechanisms leave considerable room for improvement in terms of invasiveness, sensitivity, and timing before patients reach advanced stages with a significant probability of metastasis. Supervised artificial intelligence enables early diagnosis and can spare patients unpleasant biopsies. In this work, we gathered information about methodologies, techniques, metrics, and benchmarks for early prostate cancer detection, including pipelines with associated patents and knowledge transfer mechanisms, intending to find the reasons precluding these solutions from being widely adopted in standards of care.

Salvador Soto, María F. Pollo-Cattaneo, Fernando Yepes-Calderon
From Naive Interest to Shortage During COVID-19: A Google Trends and News Analysis

Google Trends is a web-based tool for analyzing audience interests, tracking the popularity of events, and identifying emerging trends that may signal crowd purchase intentions. The tool combines trending queries and related topics at a given time and location to produce structured data, charts, and maps showing tendencies, the most relevant articles, and interest over time, among other outputs. We consider data from January 2018 to December 2022, covering three distinct periods: the normal period before the pandemic, the outbreak period, and the widespread period. The analysis explored when people were most interested in certain products and when news about their scarcity appeared. The results showed that the pandemic gradually changed people's shopping concerns, which spread most during the week the pandemic began. The case study used a validation process that compared real online data obtained from searches with offline data from official news portals of the same period. The comparative analysis established a relationship between trends and the scarcity of ivermectin and face masks.

Alix E. Rojas, Lilia C. Rojas-Pérez, Camilo Mejía-Moncayo
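A sketch of the trend-extraction step using pytrends, an unofficial Google Trends client; the query terms follow the paper, while the locale and timeframe settings are assumptions:

```python
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
pytrends.build_payload(["ivermectin", "face mask"],
                       timeframe="2018-01-01 2022-12-31", geo="EC")
interest = pytrends.interest_over_time()   # 0-100 relative interest index
print(interest.drop(columns="isPartial").head())
```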
Measuring the Impact of Digital Government Service: A Scientometric Analysis for 2023

This study explored the characteristics of digital government trends using research data from the Scopus database for 2012 to 2022, applying a qualitative descriptive method and the CiteSpace software to analyze the data. Digitalization helps communities obtain appropriate services, produces collaborative practices, and enables digital innovation. The public sector is essential in public service issues and influences the economy, as it has the authority to issue and enforce regulations and policies. This study found that the number of publications on digital government has increased over the last ten years, with the UK being the region with the most journals. The CiteSpace analysis revealed 11 related clusters, each with its own discussion; digital government transformation, efficient democratic responsiveness, transforming service delivery, and digitally-based enablers are discussed in detail. The research aims to identify best practices, media, and tools used to adopt or upgrade digital services in governments. It is hoped that the results of this review can help the government improve its digital services, from administration to meeting the needs of the Indonesian public.

Narendra Nafi Gumilang, Achmad Nurmandi, Muhammad Younus, Aulia Nur Kasiwi
Using Polarization and Alignment to Identify Quick-Approval Law Propositions: An Open Linked Data Application

From the return of democracy in 1990 until the end of 2020, Chile's Congress processed and approved 2404 laws, with an average processing time of 695 days from proposal to official publication. Recent political circumstances have made it urgent to identify the law propositions that might be shepherded to faster approval and those that will likely not be approved. This article proposes to classify law proposals, as well as parliamentarians and political parties, along two axes: polarization (lack of agreement on an issue) and (political) alignment (intra-party coincidence of a group's members regarding a specific opinion), yielding four quadrants: (a) "ideological stance" (high polarization, high alignment), (b) "personal interests" (high polarization, low alignment), (c) "thematic interest" (low polarization, low alignment), and (d) "technical consensus" (low polarization, high alignment). We used this scheme to analyze an existing open linked dataset that records parliamentarians' political parties and their voting on law proposals during 1990-2020. A simple visualization identifies a large set of propositions (1,643, or 68%) with technical consensus (i.e., low polarization and high alignment), which could have been quickly shepherded to approval but instead took 687 days on average (i.e., essentially the same time as the others). Wider adoption of this analysis may speed up legislative work and ultimately allow Congress to serve citizens more promptly.

Francisco Cifuentes-Silva, José Emilio Labra Gayo, Hernán Astudillo, Felipe Rivera-Polo
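A minimal sketch of the two proposed axes, assuming a long-format vote table with columns (proposition, party, legislator, vote coded 1 for aye and -1 for nay); the 0.5 quadrant thresholds are illustrative:

```python
import pandas as pd

votes = pd.read_csv("votes.csv")   # hypothetical export of the linked-data set

def polarization(g: pd.DataFrame) -> float:
    # 0 = unanimous chamber, 1 = perfect 50/50 split
    return 1.0 - abs(g["vote"].mean())

def alignment(g: pd.DataFrame) -> float:
    # mean intra-party cohesion: 1 = every party votes as a bloc
    return g.groupby("party")["vote"].mean().abs().mean()

axes = votes.groupby("proposition").apply(
    lambda g: pd.Series({"pol": polarization(g), "ali": alignment(g)}))

def quadrant(r: pd.Series) -> str:
    if r.pol < 0.5:
        return "technical consensus" if r.ali >= 0.5 else "thematic interest"
    return "ideological stance" if r.ali >= 0.5 else "personal interests"

axes["quadrant"] = axes.apply(quadrant, axis=1)
print(axes["quadrant"].value_counts())
```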
Utilizing Chatbots as Predictive Tools for Anxiety and Depression: A Bibliometric Review

This article addresses the societal impact of implementing medical chatbots as tools to predict mental health disorders, focusing on the high prevalence of depression and anxiety worldwide. It highlights the promising potential of AI and psychological software agents, such as chatbots, to improve psychological well-being in the digital environment. To analyze the scientific production related to the use of virtual assistants in the prediction of anxiety and depression, a comprehensive bibliometric review was conducted using the Scopus database. The study reveals growing interest in medical chatbot development and research, with Australia, China, and the United States making notable contributions, and identifies influential articles, authors, and journals that have significantly shaped this research domain. The analysis also underscores recurring keywords, with "depression" and "anxiety" emerging as central themes, confirming their paramount importance in chatbot-based mental health prediction efforts and their potential to address these widespread mental health challenges. In conclusion, this article emphasizes chatbots' promising role in enhancing mental well-being through accessible, personalized support. While acknowledging the study's inherent limitations, it also points to prospective research directions. As technological advancements persist, chatbots are poised to play a pivotal role in promoting better global mental health outcomes.

María de Lourdes Díaz Carrillo, Manuel Osmany Ramírez Pírez, Gustavo Adolfo Lemos Chang

Decision Systems

Frontmatter
A Bio-Inspired-Based Salp Swarm Algorithm Enabled with Deep Learning for Alzheimer’s Classification

Alzheimer’s disease is a progressive neurodegenerative disorder for which early identification is of paramount importance to a holistic treatment plan. Traditional methods of diagnosis require extensive manual intervention, limiting their scalability and reproducibility. This paper presents a novel Bio-Inspired Salp Swarm Algorithm (BI-SSA) technique enabled by Deep Learning for the classification of Alzheimer’s disease. The swarming behavior of salps served as inspiration for the BI-SSA optimization technique, which can identify useful solutions to complex problems with minimal manual intervention. This paper extends BI-SSA with Deep Learning, enabling it to produce a more accurate and reliable diagnostic model. The model incorporates Alzheimer’s disease-specific features such as age, gender, family history, and cognitive tests and employs an ensemble approach to improve accuracy. The proposed model is evaluated on the publicly available ADNI dataset. The results demonstrate that the model correctly classifies AD patients with an accuracy of 99.9%. Furthermore, our BI-SSA-based model outperforms traditional machine learning techniques with respect to sensitivity, precision, and classification accuracy.

Joseph Bamidele Awotunde, Sunday Adeola Ajagbe, Hector Florez
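A minimal standalone Salp Swarm Algorithm for generic minimisation (following Mirjalili et al.'s update rules); the paper's deep-learning coupling is out of scope here, so the sphere function stands in as the fitness:

```python
import numpy as np

def ssa(fitness, dim=10, n=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))              # positions of the salp chain
    food, food_fit = X[0].copy(), np.inf           # best solution found so far
    for t in range(1, iters + 1):
        fits = np.apply_along_axis(fitness, 1, X)
        if fits.min() < food_fit:
            food_fit, food = fits.min(), X[fits.argmin()].copy()
        c1 = 2 * np.exp(-((4 * t / iters) ** 2))   # exploration/exploitation decay
        for i in range(n):
            if i == 0:                             # leader moves around the food
                c2, c3 = rng.random(dim), rng.random(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                X[i] = np.where(c3 < 0.5, food + step, food - step)
            else:                                  # followers average with predecessor
                X[i] = (X[i] + X[i - 1]) / 2
        X = np.clip(X, lb, ub)
    return food, food_fit

best, val = ssa(lambda x: float(np.sum(x ** 2)))   # sphere function as stand-in
print(f"best fitness: {val:.2e}")                  # should approach 0
```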
A Scientometric Analysis of Virtual Tourism Technology Use in the Tourism Industry

The study aims to analyze the characteristics of virtual tourism technology in the tourism industry over the past ten years. The development of digital technology and the increasing number of Internet users have had a significant impact on the productivity of industry, especially the tourism sector. Virtual tourism uses simulations of travel sites in the form of images and videos containing sound and text, offering an alternative way to explore tourist destinations with only a smartphone and an Internet connection. The study uses scientometric analysis of data from 2013 to 2022, drawing on the Scopus database and the CiteSpace software. The Scopus search on trends in virtual tourism technology and the tourism industry found 244 documents, with publication numbers increasing overall over the last ten years. The data analysis shows that China (50 documents) has contributed most to this research, and the most prolific author in this research trend is Jung, TT (4 documents). The research found that virtual tourism technology trends in the tourism industry consist of five significant groups: technology acceptance, augmented reality, virtual reality experiences, the tourism sector, and technology readiness. This research will help in the development of digital tourism within the tourism industry.

Sri Sulastri, Achmad Nurmandi, Aulia Nur Kasiwi
A Tool to Predict Payment Default in Financial Institutions

Loans are financing services for clients of a bank and are among the main activities of a financial institution, since they are a primary means of making money. When a customer misses one or more payments, it causes serious problems for the bank, potentially to the point of collapse. The bank's loan manager decides whether or not to approve a loan application using the client's financial and personal information, and this decision always carries risk. Currently, to reduce the risks associated with loan approval and to take advantage of large repositories of historical client data, financial institutions are using machine learning algorithms to identify whether a client will comply with loan payments; that information helps managers in their decision-making process. This paper presents the development of an application to support the loan authorization process at the Acción Imbaburapak Savings and Credit Cooperative. The analytical process followed the phases proposed by the KDD methodology, and three supervised classification methods were trained and compared to choose the model used in the application: logistic regression, decision trees, and neural networks. Since the neural network showed the best results during evaluation, it was chosen to build the application.

D. Rivero, L. Guerra, W. Narváez, S. Arcinegas
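A sketch of the model-selection step, comparing the three classifiers on historical loan records; the feature columns are hypothetical:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("loans.csv")   # hypothetical historical loan records
X = df[["income", "amount", "term_months", "age", "arrears"]]
y = df["default"]               # 1 = missed payments, 0 = complied

models = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "decision tree":       DecisionTreeClassifier(max_depth=5),
    "neural network":      make_pipeline(StandardScaler(),
                                         MLPClassifier(max_iter=1000)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```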
Prediction Value of a Real Estate in the City of Quito Post Pandemic

Many real estate projects were paralyzed due to a lack of funding, and sales dropped significantly due to COVID-19. This article provides a method to predict the value of real estate in the city of Quito post-pandemic, using a methodology that compares different data mining techniques to achieve the best accuracy. In the end, it was possible to classify the properties in different sectors of the population under study with a good level of value prediction. It can be concluded that this study, based on a review of the appropriate literature, the comparison of different techniques, and the segmentation of the population, is a basis for further studies applying other techniques to improve the level of prediction.

Wladimir Vilca, Joe Carrion-Jumbo, Diego Riofrío-Luzcando, César Guevara
Simulation Model to Assess Household Water Saving Devices in Bogota City

This paper examines the Bogotá River basin in the Andes of Colombia, which is in a vulnerable state. In the medium term, it would be difficult to meet the growing demand for water due to population growth and the risk of low rainfall. Therefore, this paper aims to determine the contribution to water system sustainability by measuring the impact of demand management measures that reduce water wastage through the adoption of household water-saving devices. This is an interesting topic, as few demand management measures have been applied in the Bogotá River basin. A system dynamics model has been developed to simulate the urban water system and the effect on water conservation of demand management measures that promote efficient use. The results show that water-saving taps are the most efficient micro-component, achieving up to 21% of water savings per year per household, while eco-efficient washing machines allow savings of up to 17% and toilets 7%. Consequently, after years of the El Niño phenomenon, delays in works to expand supply, or continued growth in demand, the water system could avoid a deficit situation with a policy of installing water-saving taps in households.

Andrés Chavarro, Mónica Castañeda, Sebastian Zapata, Isaac Dyner
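A stripped-down sketch of the stock-and-flow logic behind such a model: household demand under logistic adoption of water-saving taps. All rates are illustrative assumptions, except the 21% tap saving reported above:

```python
import numpy as np

years = np.arange(2024, 2051)
households = 2.5e6 * 1.012 ** (years - years[0])   # assumed 1.2% annual growth
base_use = 120.0                                   # m3 per household per year, assumed

adoption = [0.0]                                   # fraction of homes with saving taps
for _ in years[1:]:
    a = adoption[-1]
    # stock accumulating a logistic adoption flow, seeded by a small constant
    adoption.append(min(1.0, a + 0.3 * a * (1 - a) + 0.002))
adoption = np.array(adoption)

demand = households * base_use * (1 - 0.21 * adoption)   # taps save up to 21%
print(f"{years[-1]}: {demand[-1] / 1e6:.0f} Mm3/yr at {adoption[-1]:.0%} adoption")
```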
Towards Reliable App Marketplaces: Machine Learning-Based Detection of Fraudulent Reviews

Online reviews significantly influence consumer decisions, making the increasing prevalence of fake reviews in app marketplaces a concern. These deceptive reviews distort the competitive landscape, providing unfair advantages or disadvantages to certain apps. Despite ongoing efforts to detect fake reviews, the sophistication of fake review generation continues to evolve, necessitating continuous improvements in detection models. Current models often focus on precision, potentially overlooking many fake reviews. This research addresses these challenges by developing a machine learning model based on experiments with app reviews published on a popular app marketplace. The developed model detects fake reviews from the textual content and the reviewer's behavior, offering a relevant approach to enhancing the integrity of app marketplaces.

Angel Fiallos, Erika Anton
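A sketch of a model that combines review text with reviewer-behaviour features, as the abstract describes; column names and labels are hypothetical, and class weighting addresses the recall concern noted above:

```python
import pandas as pd
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("app_reviews.csv")   # text, reviews_per_day, account_age_days, label

text = TfidfVectorizer(max_features=5000).fit_transform(df["text"])
behaviour = csr_matrix(df[["reviews_per_day", "account_age_days"]].to_numpy(float))
X = hstack([text, behaviour]).tocsr() # textual content + reviewer behaviour
y = df["label"]                       # 1 = fake, 0 = genuine

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```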

Enterprise Information Systems Applications

Frontmatter
Challenges to Use Role Playing in Software Engineering Education: A Rapid Review

Role playing is a teaching method widely used to enhance students' learning and engagement by allowing them to adopt specific roles and interact with others in simulated real-world scenarios, thus applying their theoretical knowledge in a practical context. In Software Engineering Education (SEE), role playing may help students develop key skills (like teamwork, problem-solving, and critical thinking), understand the complexities and challenges of software development, and appreciate the importance of collaboration and effective communication. To use role playing effectively, SEE teachers need to understand the challenges that arise from using it. This paper presents the design, execution, and results of a rapid literature review to identify these challenges. Searches of several well-known digital libraries (Web of Science, Scopus, and IEEE Xplore) yielded 44 articles, which inclusion/exclusion filters reduced to 23. Key findings are that: (1) role playing is mostly used to teach skills linked to software development and teamwork/"soft skills", and secondarily to software design, quality assurance, and process management, rather than project management; (2) challenges and generic considerations for implementing role playing were identified; and (3) challenges for applying role playing in SEE were identified according to SE specialty. In summary, role playing is a mature teaching technique in other fields, but has made only limited inroads in SEE, mainly in disciplines close to agile development (like development and teamwork).

Mauricio Hidalgo, Hernán Astudillo, Laura M. Castro
Team Productivity Factors in Agile Software Development: An Exploratory Survey with Practitioners

Agile software development (ASD) has benefited the software industry through early delivery of value to customers and certain advantages for work teams, including increased productivity. Productivity in ASD is a relevant concept that is still under study; it comprises a set of factors that determine the performance of each member of a team. The purpose of this article is to compare professionals' perceptions of team productivity in ASD with productivity factors identified in a preliminary systematic mapping study (SMS). The study follows Kitchenham and Pfleeger's protocol for constructing surveys in Software Engineering. As a result, the perceptions of 82 professionals working with agile methods were obtained; they associate productivity with improvement of the team's processes and with the fulfillment of objectives for a client, the latter aspect also recurring in the SMS. Finally, the professionals consider only 22 factors relevant for evaluating productivity, highlighting Velocity, Communication, Work Capacity, Commitment, Team Leader, and Quality, which are categorized into Meaning, Impact, Flexibility, and Socio-Human.

Marcela Guerrero-Calvache, Giovanni Hernández
Work-Life Interference on Employee Well-Being and Productivity. A Proof-of-Concept for Rapid and Continuous Population Analysis Using Evalu@

The impact of work-life interference on employee well-being, family, and productivity is a prevalent concern in today's fast-paced and demanding work environments. This document presents the results of a pilot study featuring coherent data gathering, an easily massified tracking instrument, and the possibility of continued observation to anticipate high-impact social disorders that affect the core of societies, the family. The research utilizes the SWING questionnaire, a widely used tool for measuring work-home interaction, to assess four types of synergies: positive work-to-home interaction, negative work-to-home interaction, positive home-to-work interaction, and negative home-to-work interaction. The study employs the Evalu@ data centralizer to gather information from populations that receive the tracking instruments on their smartphones. This approach enables fast and efficient data collection for posterior analysis, providing researchers and practitioners with previously unexplored information and valuable insights in any field suitable for inspection. The findings from the presented exercise, and other factors not included in this document, will have a double impact as the methodology spreads to a broader audience: the acquisition forms can continue pinpointing employees at risk of an unbalanced lifestyle, warning companies and individuals about low productivity with possible roots or consequences at home, or about home instability due to work factors. More importantly, the exercise is a successful proof-of-concept for enabling the participation of individuals in an organization with a hierarchical structure. We backed the findings with verifiable statistical procedures that take only minutes to set up and a few more to yield results, a dynamic participation mechanism not previously available.

Fernando Yepes-Calderon, Paulo Andrés Vélez Ángel

Geoinformatics

Frontmatter
Comparative Analysis of Spatial and Environmental Data in Informal Settlements, from Point Clouds and RPAS Images

Although platforms such as QGIS, ArcGIS, and MappingGIS provide access to extensive historical and current environmental and spatial data, paradoxically there is little analysis of the data from these platforms on urban structures to support decision making, and even less on informal settlements. We therefore start from this problem question: has the lack of knowledge, coupled with little use or implementation of geoinformatics in municipal planning offices, allowed the uncontrolled growth of informal settlements, so that their problems become more complex every day? In addition, could geoinformatics suggest possible solutions for informal settlements, guiding government authorities, supporting decision making, and promoting the welfare and protection of vulnerable communities? In this exercise, supported by geoinformatics, we present the results of a comparative analysis of critical environmental aspects in informal settlements, such as water courses and flood plains. Seven informal settlements are analyzed: three in La Plata (Argentina), two in Mocoa, and two in Villavicencio (Colombia). The method interrelates geometric information from point clouds with radiometric information from orthomosaic images captured by Remotely Piloted Aircraft Systems (RPAS), and then classifies the variables with the objective of generating new information derived from the analysis. The images resulting from this crossing of information, which generate new spatialities, will be made available to communities and to public and private entities.

Carlos Alberto Diaz Riveros, Andrés Cuesta Beleño, Julieta Frediani, Rocio Rodriguez Tarducci, Daniela Cortizo
Prospects of UAVs in Agricultural Mapping

The food security of most rural sectors depends on traditional agricultural practices. Geomatics tools contribute to these practices through Unmanned Aerial Vehicles (UAVs) in smart agriculture, agricultural mapping, and sustainable agricultural production. The study's objective is to explore the application of UAVs in agricultural mapping through a literature review of the use of this tool in agriculture. The review performs a literature search on the application of UAVs to explore the structure, dynamics, and domain of this area of knowledge. The analysis then covers scientific contributions, the most relevant authors, the evolution of themes, and trends in sustainable agriculture. The results show that the countries with the greatest scientific contribution in this area are the United States (USA) and China. The predominant themes are monitoring plant phenology/crop detection, status/evaluation of agricultural soils, irrigation applications/water resources, and agricultural yield. The use of UAVs, and their fusion with other technologies, is a technological trend contributing to agricultural mapping through thematic maps and object-based image analysis, digitising sustainable farming activities and practices.

Paulo Escandón-Panchana, Gricelda Herrera-Franco, Sandra Martínez Cuevas, Fernando Morante-Carballo

Health Care Information Systems

Frontmatter
Gross Motor Skills Development in Children with and Without Disabilities: A Therapist’s Support System Based on Deep Learning and Adaboost Classifiers

Fundamental or Gross Motor Skills (GMS) are a set of skills essential for both basic movement and physical activities. Developing them properly is vital for children to build a healthy lifestyle and prevent serious illnesses at an older stage of life, like obesity and cardio-respiratory problems. This is a problem for therapists because they must attend to many children lacking this skill set, and it is even more time-consuming with children with disabilities. Therefore, this work presents a system that can assist therapists in giving therapy to more children with and without disabilities. The system is divided into three phases. First, the data preprocessing phase, where images of three postures are collected (sitting, static crawling, and bound angle) and resized. Second, model construction: the MoveNet algorithm, which detects human posture through 17 key points of the body, is applied to the collected dataset to obtain the coordinates of the postures; an Adaboost model is then created, trained, and tested, and MoveNet is assembled with the Adaboost model to predict the three postures in live action. Third, model evaluation: the assembled model is evaluated at the Instituto de Parálisis Cerebral del Azuay (IPCA). Finally, the results of this evaluation are presented.

Adolfo Jara-Gavilanes, Vladimir Robles-Bykbaev
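A sketch of the two-stage pipeline: MoveNet (from TensorFlow Hub) extracts 17 body keypoints, and an AdaBoost classifier maps them to postures; the labelled image files are assumed to exist:

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from sklearn.ensemble import AdaBoostClassifier

movenet = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
infer = movenet.signatures["serving_default"]

def keypoints(image_path: str) -> np.ndarray:
    """Return the 17 (y, x, score) keypoints as a flat 51-value vector."""
    img = tf.io.decode_jpeg(tf.io.read_file(image_path))
    img = tf.image.resize_with_pad(tf.expand_dims(img, 0), 192, 192)
    out = infer(tf.cast(img, tf.int32))
    return out["output_0"].numpy()[0, 0].flatten()

# hypothetical labelled images of the three postures
paths = ["sitting_01.jpg", "crawling_01.jpg", "bound_angle_01.jpg"]
labels = ["sitting", "static crawling", "bound angle"]

X = np.stack([keypoints(p) for p in paths])
clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(X[:1]))   # live frames would be fed through the same pipeline
```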
TP53 Genetic Testing and Personalized Nutrition Service

TP53 is a tumor suppressor gene found to be highly correlated with human tumor development. The gene can sense stress or damage to cells and prevent cell division or trigger cell death, thereby preventing the proliferation of damaged cells. The P53 protein encoded by TP53 has an anti-tumor effect and is known as the "guardian of the genome"; mutation of the TP53 gene eliminates a key cellular safety mechanism, making it a trigger of cancer. In this paper, we first describe the relationship between the TP53 gene and tumors and discuss the application of TP53 in tumor prediction and treatment. We then present a TP53 genetic testing service that evaluates tumor-suppression capacity, helping individuals establish a scientific and reasonable lifestyle in a timely manner and take the initiative in managing their health. Further, we describe the interaction between the TP53 gene and nutrition and its impact on the occurrence and development of cancer. Finally, based on the analysis of genetic testing results and food frequency questionnaires, we developed a personalized nutrition service to reduce the risk of developing diseases with a high genetic risk score.

Jitao Yang

Interdisciplinary Information Studies

Frontmatter
Changes in the Adaptive Capacity of Livelihood Vulnerability to Climate Change in Ecuador’s Tropical Commodity Crops: Banana and Cocoa

Climate change can cause negative impacts on agriculture. This paper analyzes the changes in the adaptive capacity of livelihood vulnerability to climate change in Ecuador's tropical commodity crops, banana and cocoa. We used the adaptive capacity factor from the livelihood vulnerability index approach, drawing on secondary data from the Survey of Agricultural and Livestock Surface and Production of Ecuador from 2020 to 2022. The results showed that for banana crops, the livelihood strategy remained stable, the sociodemographic profile improved, and the social network worsened in 2021 during the pandemic. Regarding cocoa, the sociodemographic profile has the lowest values among the major components. Lastly, adaptive capacity was considerably better in 2022 for both crops, indicating that farmers were becoming better prepared for climate hazards, though there remains room for improvement. The policy implication is that agricultural assistance and insurance access should be improved to reduce climate vulnerability.

Elena Piedra-Bonilla, Yosuny Echeverría
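A schematic sketch of the adaptive-capacity factor in the livelihood vulnerability index approach: min-max-normalised indicators averaged into major components and then into a single score. Indicator names and values are placeholders, not the survey data:

```python
import pandas as pd

indicators = pd.DataFrame({            # one row per year, placeholder values
    "dependency_ratio":  [0.62, 0.55, 0.50],
    "pct_no_assistance": [0.70, 0.65, 0.40],
    "income_sources":    [2, 2, 3],
}, index=["2020", "2021", "2022"])

# min-max normalise each indicator to [0, 1]
norm = (indicators - indicators.min()) / (indicators.max() - indicators.min())
norm["income_sources"] = 1 - norm["income_sources"]   # more sources = less vulnerable

components = {                          # major components built from indicators
    "sociodemographic": ["dependency_ratio"],
    "social_networks":  ["pct_no_assistance"],
    "livelihood":       ["income_sources"],
}
adaptive_capacity = pd.DataFrame(
    {c: norm[cols].mean(axis=1) for c, cols in components.items()}).mean(axis=1)
print(adaptive_capacity)                # lower = better prepared, by convention here
```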
Context and Characteristics of Software Related to Ecuadorian Scientific Production: A Bibliometric and Content Analysis Study

Given the predominance of information technologies in different contexts, the use of software and data in scientific research is known to have increased; however, the environment and the uses given to them cannot be clearly determined. The present study proposes a bibliometric and content analysis of publications with Ecuadorian affiliation, which allows us to recognize the characteristics and context of software use as a work tool. The study was developed in four stages: selection of documents, bibliometric analysis, network analysis, and content analysis. A total of 4028 documents were extracted from the WoS and Scopus databases, 117 of which were analyzed at the content level. Among the main tools used were R Studio, VOSviewer, and QualCoder. The institutions generating this production include the Universidad Politécnica ESPOL, the Universidad Politécnica Salesiana, and the Universidad de las Fuerzas Armadas ESPE, and there is a high rate of collaboration with Spanish authors. Finally, the studies are strongly oriented towards "professional, scientific and technical activities", are mostly of the "experimental" type, and mainly refer to proprietary software.

Marcos Espinoza-Mina, Alejandra Colina Vargas, Javier Berrezueta Varas
Exploring the Potential of Genetic Algorithms for Optimizing Academic Schedules at the School of Mechatronic Engineering: Preliminary Results

The generation of schedules is a complex challenge, particularly in academic institutions aiming for equitable scheduling. The goal is to achieve fair and balanced schedules that meet the requirements of all parties involved, such as workload, class distribution, shifts, and other relevant criteria. To address this challenge, a genetic algorithm specifically designed for optimal schedule generation is proposed as a solution. Adjusting genetic algorithm parameters impacts performance, and employing parameter optimization techniques effectively tackles this issue. This work introduces a genetic algorithm for optimal schedule generation, using suitable encoding and operators and evaluating quality through a fitness function. The optimization effort reduced execution time and improved solution quality, yielding faster execution, fewer generations, increased stability, and convergence to optimal solutions.

Johan Alarcón, Samantha Buitrón, Alexis Carrillo, Mateo Chuquimarca, Alexis Ortiz, Robinson Guachi, D. H. Peluffo-Ordóñez, Lorena Guachi-Guachi
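A minimal genetic algorithm for timetabling in the spirit of the abstract, where each gene assigns a course to a (slot, room) pair and fitness penalises clashes; the encoding, operators, and problem size are illustrative assumptions:

```python
import random

COURSES, SLOTS, ROOMS = 12, 20, 3      # hypothetical problem size

def random_schedule():
    return [(random.randrange(SLOTS), random.randrange(ROOMS)) for _ in range(COURSES)]

def fitness(sch):                      # fewer room/slot clashes = higher fitness
    return -(len(sch) - len(set(sch)))

def crossover(a, b):                   # single-point crossover
    cut = random.randrange(1, COURSES)
    return a[:cut] + b[cut:]

def mutate(sch, rate=0.1):             # re-randomise genes with small probability
    return [(random.randrange(SLOTS), random.randrange(ROOMS))
            if random.random() < rate else gene for gene in sch]

pop = [random_schedule() for _ in range(100)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:20]                   # truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(80)]

best = max(pop, key=fitness)
print("clashes in best schedule:", -fitness(best))
```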
Machine Masquerades a Poet: Using Unsupervised T5 Transformer for Semantic Style Transformation in Poetry Generation

This paper presents a novel approach to automatically capturing the unique style of various poets and using it to convert given poems into those styles. The method combines web scraping with the T5 transformer (11 billion parameters). A dataset of poems was collected by scraping popular online libraries, such as Project Gutenberg and Open Library; these poems were then pre-processed to remove HTML tags and metadata, and the pre-trained T5-11B transformer was fine-tuned on the resulting corpus. The results of this study were highly promising: the proposed method accurately captured the styles of various poets, including their overall tone, ideologies, and poetic style. Given a starting poem, the model generated new poems in the style of a specific poet, successfully mimicking their unique writing characteristics. These findings highlight the potential of machine learning algorithms for understanding and reproducing the intricate nuances of poetic styles. This work opens avenues for automated poem generation, enabling individuals to experience the styles and voices of renowned poets in a novel way.

Agnij Moitra
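A sketch of the generation step with a small T5 checkpoint from Hugging Face (the paper fine-tunes the 11B variant, and a model must first be fine-tuned on a poetry corpus before such prompts are meaningful); the style-prefix format is an assumption:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")   # small stand-in for T5-11B
model = T5ForConditionalGeneration.from_pretrained("t5-small")

poem = "The city sleeps beneath a silver moon"
inputs = tokenizer(f"rewrite in the style of Emily Dickinson: {poem}",
                   return_tensors="pt")

# sampled decoding encourages stylistic variety over literal copying
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```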
Under the Spotlight! Facial Recognition Applications in Prison Security: Bayesian Modeling and ISO27001 Standard Implementation

This article highlights the importance of using Bayesian models and adhering to the ISO27001 standard in developing a web application to enhance prison security through facial recognition techniques. The proposed approach includes several key stages: 1. Identify the functional and non-functional requirements of the application, ensuring alignment with the desired objectives. 2. Design the application architecture and carefully select the facial recognition techniques and Bayesian models best suited to the intended purpose. 3. Implement the application and perform thorough unit and integration testing to ensure functionality and compatibility. 4. Perform an experimental evaluation of the application in a controlled test environment, using performance and security metrics as benchmarks. The results demonstrate that a web application integrated with a Bayesian model, in conjunction with adherence to the standardized practices outlined in ISO27001, enables the proactive identification of risks and threats. As a result, it serves as a valuable tool for mitigating prison insecurity.

Diego Donoso, Gino Cornejo, Carlos Calahorrano, Santiago Donoso, Erika Escobar
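A toy example of the kind of Bayesian reasoning such a system can apply: the posterior probability that a flagged face is a true watch-list match, given assumed recognition rates and base rate:

```python
def posterior_match(prior=0.001, tpr=0.98, fpr=0.01) -> float:
    """P(true match | alert) by Bayes' theorem.

    prior: assumed fraction of scanned faces on the watch list
    tpr:   assumed true-positive rate of the recognizer
    fpr:   assumed false-positive rate of the recognizer
    """
    p_alert = tpr * prior + fpr * (1 - prior)   # total probability of an alert
    return tpr * prior / p_alert

print(f"P(true match | alert) = {posterior_match():.2%}")   # ~8.9% at these rates
```

Even a highly accurate recognizer yields a modest posterior when genuine matches are rare, which is why such alerts are treated as risk indicators rather than conclusive identifications.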

Learning Management Systems

Frontmatter
Comparative Quality Analysis of GPT-Based Multiple Choice Question Generation

Assessment is an essential part of education, both for teachers who assess their students and for learners who evaluate themselves. A popular type of assessment item is the multiple-choice question (MCQ), as MCQs can be automatically graded and can cover a wide range of learning items. However, creating high-quality MCQ items is nontrivial. With the advent of the Generative Pre-trained Transformer (GPT), considerable effort has recently been made in Automatic Question Generation (AQG). While metrics have been applied to evaluate linguistic quality, an evaluation of generated questions against best practices for MCQ creation has been missing so far. In this paper, we propose an analysis of the quality of automatically generated MCQs from three different GPT-based services. After producing 150 MCQs in the domain of computer science, we analyse them according to common multiple-choice item-writing guidelines and annotate them with identified docimological issues. The dataset of annotated MCQs is available in Moodle XML format. We discuss the different flaws and propose solutions for AQG service developers.

Christian Grévisse
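A sketch of GPT-based MCQ generation with item-writing guidelines encoded in the prompt; the model name and the JSON reply schema are assumptions:

```python
import json
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment
prompt = (
    "Generate one multiple-choice question about binary search trees. "
    "Follow item-writing guidelines: one unambiguous correct answer, three "
    "plausible distractors, no 'all/none of the above'. Reply only with JSON: "
    '{"stem": "...", "options": ["..."], "answer_index": 0}'
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
mcq = json.loads(resp.choices[0].message.content)

print(mcq["stem"])
for i, opt in enumerate(mcq["options"]):
    print(f"{'*' if i == mcq['answer_index'] else ' '} {opt}")
```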

Virtual and Augmented Reality

Frontmatter
Design and Validation of a Virtual Reality Scenery for Learning Radioactivity: HalDron Project

Immersive technologies can be used to teach complex concepts, particularly scientific concepts that are hard to visualize. This paper presents how a team comprising a physics professor and undergraduate physics and mathematics students with basic knowledge of C#/C++ developed a Virtual Reality (VR) experience using Blender and Unity, delivered through the Oculus Quest 2 headset. The objective was to teach the basic concepts of natural radioactivity and nucleosynthesis to undergraduate students with the help of a 3D-modeled Virtual Learning Companion, and to test whether immersive technologies help students better understand and retain information through an experience not available to them in experimental laboratories. An intervention was conducted with 143 undergraduate students to validate the system as a learning method. Participants received a prior-knowledge assessment through a Google Forms test emailed to them, then experienced a short virtual reality immersion (under 10 minutes), and finally received a new theoretical assessment the day after the VR experience, along with a user experience test. The results show that students' average grades went from 4.9 to 9.8 (out of 15 points), with a high completion rate, even among those unfamiliar with VR or head-mounted displays.

Silvio Perez, Diana Olmedo, Fancois Baquero, Veronica Martinez-Gallego, Juan Lobos
Backmatter
Metadata
Title
Applied Informatics
edited by
Hector Florez
Marcelo Leon
Copyright Year
2024
Electronic ISBN
978-3-031-46813-1
Print ISBN
978-3-031-46812-4
DOI
https://doi.org/10.1007/978-3-031-46813-1
