
Computer Science – CACIC 2024

30th Argentine Congress of Computer Science, La Plata, Argentina, October 7–10, 2024, Revised Selected Papers

  • 2026
  • Book

About this Book

This book constitutes the refereed proceedings of the 30th Argentine Congress of Computer Science, CACIC 2024, held in La Plata, Argentina, during October 7–10, 2024. The 29 full papers in this book were carefully reviewed and selected from 201 submissions. They are organized in the following topical sections: Agents and Systems; Distributed and Parallel Processing; Technology Applied to Education; Graphic Computation, Images and Visualization; Databases and Data Mining; Software Engineering; Hardware Architectures, Networks and Operating Systems; Innovation in Software Systems; Signal Processing and Real-Time Systems; Innovation in Computer Science Education; Computer Security; and Digital Governance and Smart Cities.

Table of Contents

Frontmatter

Agents and Systems

Frontmatter
Analyzing Effectiveness and Interpretability of Machine Learning Models for Stress Detection
Abstract
Beyond the effectiveness of machine learning methods, a second aspect should be ensured: the interpretability of the decision-making process, as well as the explanation of the results. For safety-critical domains such as mental health, it is mandatory that Artificial Intelligence (AI) systems be transparent and trustworthy to both practitioners and patients. This paper presents a systematic review revealing how interpretability, explainability, and ethics are considered in AI techniques for healthcare. We evaluate the effectiveness of machine learning models in the particular case of detecting stress. We consider methods with different levels of support for interpretability: logistic regression (inherently interpretable), SS3 (adequate interpretability), and BERT/MentalBERT (black boxes). The results of the experimental study show that logistic regression and SS3 have slightly lower predictive performance than transformer-based models, which are complex and difficult to explain. Based on the interpretable models, we can confirm the conclusions presented in previous works about the role that personal pronouns and self-references play as significant indicators of stress. This was possible due to the ability of logistic regression to assess the individual importance of each input token and the ability of SS3 to hierarchically classify and interpret input words/sentences/paragraphs. On the other hand, the same conclusion was derived from the attention analysis performed for the transformer-based methods despite their inherent opacity.
Leticia C. Cagnina, Lautaro Borrovinsky, Marcelo L. Errecalde
A Novel Cooperative Co-Evolutionary Algorithm for Large-Scale Constrained Optimization
Abstract
Cooperative Coevolution (CC) is a widely used framework for solving large-scale continuous constrained optimization problems. This approach decomposes this kind of problem into smaller, more manageable subproblems by grouping the objective function and constraint variables. The efficiency of the CC framework depends mainly on the group size and the grouping strategy. In this paper, we propose a CC-based algorithm, which in its first part uses a recently published grouping method called CEDG, which groups variables into subcomponents of appropriate size with high accuracy and low computational cost. In the second part, we use a variant of differential evolution called CSaNSDE to solve the generated subproblems. The CEDG and CSaNSDE methods are used under the CC framework. The resulting algorithm, called CC-CEDG, is evaluated on a scalable benchmark function set with constraints, up to 1000 dimensions. Results show that CC-CEDG outperforms other advanced algorithms for solving large constrained optimization problems.
Fabiola Paz, Guillermo Leguizamón, Efrén Mezura-Montes

Distributed and Parallel Processing

Frontmatter
An Adaptative Agent-Based Model Metasystem for Emergency Departments
Abstract
Hospital Emergency Departments (EDs) are among the most complex units of the healthcare system, requiring coordination of human and technical resources for managing situations effectively. This article establishes the basic principles to design primitives of an Agent-Based Modeling and Simulation (ABMS) modular system, inspired by the modularity of Lego blocks, which allows the creation of computational models that can be employed as Decision Support Systems (DSS). The ABMS modular system has shifted from a monolithic approach to an adjustable system. This means that the system allows the description of the metasystem and agent box used to build the computational models (simulator) that, used as DSS, can help ED managers to achieve the highest possible level of service quality, given the available resources.
Francisco Mesas, Manel Taboada, Dolores Rexachs, Francisco Epelde, Alvaro Wong, Emilio Luque
Enhancing Patient-Centric Synthetic Data Generation: A Comparative Analysis for Chronic Kidney Disease
Abstract
Access to medical data is often restricted due to privacy and security policies. The generation of synthetic data from real data is a widely adopted technique to address these limitations. This research presents the enhancement of a patient-centric methodology for generating synthetic data, specifically designed for patients diagnosed with Chronic Kidney Disease (CKD), by comparing the results with synthetic data generated using algorithms from the Synthetic Data Vault (SDV) library. The key advantage of the methodology proposed by the authors lies in its explainability and the traceability of the results, as it relies on statistical methods and data analysis rather than AI algorithms. The MIMIC-III clinical dataset serves as the foundation for generating synthetic patients in this study. This article outlines the data preprocessing and filtering applied to this dataset. Subsequently, synthetic data for CKD patients is generated using the proposed methodology. A comparison is then made between the synthetic data and the real data. Furthermore, the synthetic data is compared with the results obtained using the AI algorithm known as SMOTE. Finally, an extensive comparative analysis is conducted with the results obtained using the Gaussian Copula and CTGAN algorithms from the SDV library.
Candelaria Alvarez, Jose Ibeas, Javier Balladini, Remo Suppi
An Extended Technological Ecosystem for the Detection of Risk of Dehydration in Cattle Based on Precision Livestock Farming
Abstract
Detecting the risk of dehydration in cattle is a way to preserve animal welfare and prevent loss of life in extreme cases. This article presents a technological ecosystem based on precision livestock farming, an extension of the E-SED ecosystem. Its hardware and software components consist of smart Edge Computing devices connected to a Cloud Computing infrastructure and the Internet of Things. These devices are required to operate an application based on an Expert System for the Detection of Dehydration Risk model. A preselection of the data to be sent to the Cloud is considered in order to reduce network traffic. The ecosystem is proposed in particular for a forested rural area called Monte, in central Argentina, dedicated to extensive livestock farming.
Silvia M. Molina, Emilio Luque, Dolores Rexachs

Technology Applied to Education

Frontmatter
Open Resources to Introduce Data Science in Secondary Schools
Abstract
Data Science has emerged as a field of significant relevance in recent years. The strong connection between Data Science and programming, along with its growing integration into secondary education, has driven the development of innovative teaching strategies to effectively introduce fundamental concepts in the classroom. This article details the pedagogical methodologies used in these experiences, focusing on the computer tools employed. The importance of having simple tools that facilitate the understanding of the concepts is highlighted. Pandalyze, a tool in the testing phase and the result of a final degree project, is presented. The experiences show that this topic is of interest and offers high scalability in the teaching of programming and of advanced aspects of Data Science.
Eliana Sofía Martin, Claudia Banchoff, Paula Venosa, Liliana Hurtado
Innovation in Virtual Teaching Training: Virtual Reality Treasure Hunt
Abstract
This work builds on the paper presented at the XXX Argentine Congress of Computer Science (CACIC 2024) [20], which reports the experience carried out in November 2023 by the National University of the Northwest of the Province of Buenos Aires, where the First Virtual Camp on Digital Education, an innovative initiative for teacher training that brought together twenty-three participants, was organized. The activities were designed to enhance digital skills, highlighting the adaptation to hybrid teaching environments. The objectives of the event included familiarization with digital tools, the development of effective pedagogical strategies, and the promotion of collaboration among teachers. The platforms used allowed for the creation and sharing of virtual reality spaces in an accessible and collaborative way, facilitating real-time interactions and the personalization of content. The experience promoted skills such as spatial orientation, problem solving, and teamwork. In addition, it offered a safe environment to face frustrations and anxieties, stimulating creativity in the design of educational activities. This resulted in the ability to work with abstract concepts visually, carry out educational simulations, promote active and personalized learning, and encourage peer collaboration. In this context, the Virtual Reality Treasure Hunt game, the subject of this article, and an educational Escape Room were implemented. This work expands on what was presented at CACIC 2024.
Claudia Cecilia Russo, Tamara Ahmad, Gustavo Gnazzo, Paula Lencina, Hugo Ramón, Florencia Castro, Pilar Traverso

Graphic Computation, Images and Visualization

Frontmatter
Evaluating Input Modalities for Low-Cost XR: Head, Hand, and Voice-Based Interactions
Abstract
Interactions in low-cost Extended Reality (XR) environments present significant challenges due to the limitations of available hardware, particularly in motion tracking and object manipulation. This work explores and evaluates three interaction techniques designed for low-cost XR systems: head movement tracking using smartphone sensors, hand tracking via a custom low-cost controller using microcontrollers, and voice-based interaction using a cellphone’s microphone. The user study analyzes the performance of these modalities in an object manipulation scenario with basic commands. Experimental results with user participation indicate that head movement tracking offers the highest precision and efficiency, whereas the hand tracking method suffers from drift and noise due to hardware constraints. Voice interaction, while effective for basic commands, shows limitations in terms of ease of use. These findings provide insights into the trade-offs between interaction fidelity, cost, and usability in low-cost XR environments, contributing to the development of more accessible and efficient XR interaction techniques.
Rodrigo N. Herlein, María Luján Ganuza, Matías N. Selzer

Databases and Data Mining

Frontmatter
BOLDSC: A New Dynamic, Secondary-Memory Metric Index
Abstract
Metric space searching addresses the problem of efficient similarity searching across diverse applications, in particular for unstructured objects such as natural language text or images. Although promising, this approach is still immature in several aspects that are well established in traditional databases. In particular, most indexing schemes are not dynamic, as they cannot efficiently handle insertions into an ongoing index without significant performance degradation. Moreover, very few of them work efficiently in secondary memory. The List of Clusters (LC) has proven to be a competitive index in main memory due to its simplicity and good search performance in high-dimensional metric spaces. We introduce a new dynamic, secondary-memory LC variant. Our new index efficiently handles the secondary-memory scenario and achieves competitive search and insertion times compared to the state of the art, making it a practical alternative for large-scale database applications. Also, our ideas are applicable to other secondary-memory indexes where it is possible to control disk page occupation.
Rodrigo Paredes, Nora Reyes, Karina Figueroa, Manuel Hoffhein
Optimization of the K-Means Algorithm: Evaluation of Computational Savings and Its Relationship with Clustering Quality
Abstract
Information Retrieval Systems use clustering techniques to improve the organization and access to documents. K-Means is widely employed in these environments due to its simplicity and effectiveness, but its high computational cost limits scalability. This work evaluates an optimization of the K-Means algorithm that significantly reduces the number of calculations without affecting clustering quality.
Building on previous research, the analysis is extended by incorporating new evaluation metrics, such as silhouette, the Davies-Bouldin index, the Dunn index, and inertia, to study the relationship between computational savings and clustering quality. Experiments conducted in different dimensional configurations confirm that the optimization maintains high clustering quality while reducing computational cost. Additionally, it is observed that the reduction in calculations is directly related to the stability of clusters and the dispersion of centroids, validating the effectiveness of the proposed approach.
Osvaldo Mario Spositto, Viviana Alejandra Ledesma, Sebastian Quevedo, Lorena Romina Matteo, Julio César Bossero

Software Engineering

Frontmatter
Evaluating Feature Importance in Post Classification Using Chi-Square Analysis
Abstract
This study builds upon previous research on classifying posts in discussion forums based on their role within threads. First, we extend Bhatia et al.’s model by incorporating additional features related to content, structure, user behavior, and sentiment, and evaluate the enhanced model on two datasets: Ubuntu Forum (UB) and TripAdvisor New York (NYC), showing improved classification accuracy, particularly in social forums, where sentiment features played a significant role.
Later, we refine the previous model by introducing a feature ranking approach based on Chi-square values, to identify the most influential variables for classification. This analysis determines which features contribute most to the model’s performance, enabling a more optimized and efficient classification process.
This work not only improves classification accuracy but also introduces a methodological advancement through the application of Chi-square analysis for feature selection. By identifying and prioritizing the most impactful features, we provide a more efficient and interpretable framework for classifying posts in online discussion forums.
Valeria Zoratto, Gabriela N. Aranda
Tracking the Evolution of Large Language Models in Requirements Elicitation and Analysis
Abstract
Artificial intelligence, particularly large language models, supports various disciplines, including software engineering. This article presents a case study to analyze the evolution and feasibility of using LLMs in the software engineering process, employing various tools for generating interviews during the elicitation phase and for requirement analysis and specification through user stories. Furthermore, the study highlights significant evolutionary improvements in model reasoning and modular segmentation, leading to more structured and nuanced outputs. By comparing experimental phases, the case study demonstrates how these advancements enhance the overall applicability and effectiveness of LLMs in complex software development tasks.
Ailén Panigo, Kristian Petkoff Bankoff, Ariel Pasini, Patricia Pesado
Tools for Quantum Programming: A Review
Abstract
Currently, a limit is being reached in the miniaturization of integrated circuits. When operating at the nanometer scale, due to a phenomenon known as the “tunnel effect,” electrons escape from the channels through which they must circulate. This phenomenon is associated with quantum physics. Therefore, traditional computing is limited because it has reached scales that are difficult to reduce further. In this context, in recent years, research into quantum technology has grown exponentially. Various organizations and companies invest time and money in the development of quantum computers. Quantum computing represents a radical change with respect to binary computing. In this article, a review of the tools currently used for quantum programming is presented, aimed at highlighting their most relevant aspects according to the application context within the computing field.
Luciano Marrero, Verena Olsowy, Juan Fernández Sosa, Fernando Tesone, Leonardo Corbalán, Pablo Thomas, Patricia Pesado
Not User Stories or Use Cases: Requirements!
Abstract
Before the Agile Manifesto's promulgation, one of the main challenges in software development was the production of requirement specifications, often accompanied by numerous inefficiencies. Among the four core values of the Agile Manifesto, one emphasized “working software over comprehensive documentation”. However, this value was frequently misinterpreted, leading to minimal documentation practices that prioritized “doing agility” over building quality software. Consequently, user stories replaced use cases. Over time, it became apparent that one problem was merely substituted for another: a significant decline in the quality of requirements. This work seeks to underscore two key points: 1) The issue does not stem from favoring one technique over the other but rather from the need to properly define and specify requirements; 2) These techniques are not interchangeable; they address distinct aspects of requirements. To achieve the best results, they are most effective when used in combination.
Fernando Pinciroli
State of the Art on User Story Mapping and Scenarios to Specify Software Requirements
Abstract
Requirements engineering is vital to software development, as it ensures that software products comply with clients’ needs. However, it faces significant challenges: failures due to deficiencies in requirements specification account for up to 47% of all failures in software projects. In order to mitigate these problems, adequately capturing the user’s domain knowledge and needs through Natural Language models, such as User Story Mapping (USM) and Scenarios, becomes essential. This paper presents the state of the art on these two tools for software requirements specification. First, a systematic mapping study (SMS) is conducted, complemented by an exploratory study based on a survey, in order to collect evidence on the current state of practice in the use of User Story Mapping and Scenarios for requirements specification in the software industry. The results of both studies show that USM and Scenarios are key tools in requirements specification and that using them in a complementary manner enhances accuracy and clarity. Their direct relationship makes it possible to efficiently structure and prioritize needs, optimizing communication between teams and enhancing the ability of projects to adapt to the client’s requirements, thus establishing an innovative path in requirements engineering.
Andrea Alegretti, Marisa Panizzi, Leandro Antonelli
ValReCo: A Requirements Validation Process
Abstract
This paper presents the application of the collaborative requirements validation process known as ValReCo. This process, grounded in standards and good practices, is organized into subprocesses that form part of the requirements validation cycle. It applies entity extraction techniques based on natural language processing in a collaborative environment. Each defined subprocess includes a set of specific activities and tasks, as well as associated roles, guidelines, and work products. The objective of this paper is to describe and illustrate a case study of the ValReCo process applied in a real domain, highlighting its main strengths and weaknesses.
Sonia Santana, Leandro Antonelli, Pablo Thomas, Alejandro Fernandez

Hardware Architectures, Networks, and Operating Systems

Frontmatter
I+IoT RLab: Design and Development of a Remote Laboratory for the Industrial Internet of Things
Abstract
The Internet of Things (IoT) and Industrial Internet of Things (IIoT) networks have undergone remarkable advancements in recent years, driven by their extensive range of applications, diverse functionalities, and ease of implementation. This study presents the design and development of a remote laboratory to facilitate the teaching, exploration, and advancement of techniques, architectures, and protocols for end-to-end IIoT systems. This approach enables the creation of systems encompassing the entire IoT ecosystem, from sensor deployment to cloud computing and big data integration. Within this IoT framework, developers can configure, program, visualize, and test individual components, fostering the implementation of a fully integrated IIoT system. The development process will leverage virtualized infrastructure to manage resource allocation and reservations dynamically.
Sebastián Tobar, Mariano Zapata, Ana Laura Diedrichs, Gustavo Mercado, Cristian Bernoco, Carlos Taffernaberry, Cristian Pérez Monte
CRANE: A Tool for Deploying Containerized Applications in Local Environments. Lessons Learned and Proposed Improvements
Abstract
CRANE is a tool designed to simplify the local deployment of containerized applications, enabling developers and students to test distributed environments efficiently. Its lightweight design incorporates automatic scaling capabilities, making it a good solution for creating and deploying full application stacks within a controlled environment. Furthermore, CRANE aims to enhance DevOps skill development, streamlining the software development process within a continuous delivery framework. This paper introduces the implementation of CRANE as an API that manages the creation and deployment of Docker services while providing features for monitoring, defining scaling policies, and handling alerts. These functionalities support decision-making processes. Additionally, it discusses key lessons learned from the implementation, outlines observed results, and suggests potential improvements for future development.
José Miguel Silva Pavón, Franco Bellino, Patricia Bazán, Alejandra B. Lliteras, Nicolás del Rio

Innovation in Software Systems

Frontmatter
People with Disability Immersive Virtual-Reality Tool Architecture for Social skills training
Abstract
This paper presents a description of the architecture of AgileMotion, a tool designed for the generation of interactive virtual scenes, focused on active and passive micro-learning. The objective of the development is to perform a real proof of concept in rehabilitation and social inclusion with people with autism and other developmental, intellectual, and psychosocial disabilities, allowing the assessment, learning, and training of social skills in a caring environment. The tool uses natural language specifications to facilitate the creation of scenarios by users who are not software development experts, such as therapists or educators. It offers scene generation that is flexible and adaptable to the specific needs of each user, using natural language as the main interface and allowing medical personnel to interact in the scene through their own avatar. The main tools used are Unity, RASA, and Photon. The components and their interconnections are described, showing how they combine into a powerful tool.
Nelson Acosta, Analía Amandi, Patricia Salguero, Santiago Faiela, Franco Viduzzi
On Using Large Language Models for Ontology-Based Data Access
Abstract
Ontology-Based Data Access (OBDA) focuses on representing legacy data sources through ontologies, enabled by a modern, distributed, and standardized data format like OWL. This approach facilitates intelligent querying and processing using SPARQL. However, implementing OBDA requires significant effort to develop software capable of interpreting and transforming data into ontologies. Recently, large language models (LLMs) have emerged as powerful tools for generating solutions from user-provided natural language input. In this paper, we examine the potential of LLMs to automate OBDA. Our hypothesis is that LLMs, such as ChatGPT and LLaMA, can effectively perform OBDA tasks. To test this, we analyzed their responses to various OBDA-related challenges. Our findings indicate that both ChatGPT and LLaMA can generate ontologies from free-text descriptions and structured data, such as tables in text or CSV format. Additionally, they can construct SPARQL queries, convert relational tables into ontologies, and correct integrity constraint violations when given appropriate instructions. However, limitations exist: the free version of ChatGPT struggles with processing large datasets, while LLaMA often provides only partial results.
Sergio Alejandro Gómez, Pablo Rubén Fillottrani
Emotion Recognition System by Using Skin Conductance, Heart Rate Variation, and Facial Expressions
Abstract
Affective computing is an emerging discipline that aims to develop systems and devices capable of recognizing, interpreting, processing, and stimulating human emotions. This paper presents an extended version of the work presented at CACIC 2024, considering the background in terms of instruments, methods, and models in the field of affective computing. It describes a multimodal emotional framework that uses data from physiological sensors (heart rate and skin conductance) and facial expressions to predict a subject's emotional state. Additionally, a BCI (Brain-Computer Interface) is used to obtain relaxation and attention levels. Finally, the general results of experiments conducted in a static environment using IAPS images and in a dynamic environment using a flight simulator are presented and discussed.
Sofia Roldan, Matias Gramajo, Jorge Ierache

Signal Processing and Real-Time Systems

Frontmatter
Application of the Profile of Mood States (POMS) to Participants Before a Dance Competition. A Case Study
Abstract
A new version of an experiment that relates mood states and sports performance is presented, taking preparation for a dance competition as a case study. The POMS (Profile of Mood States), first developed in 1971, was used for the analysis of emotions, in this case using 7 scales and 63 items, as explained in the methodology. The analysis combined data on a participant's heart rate during the last two weeks of preparation for the dance competition, seeking to determine the relationship between the objective data (heart rate) and the subjective data (the POMS evaluation of mood). Subsequently, the study was extended to a larger number of competitors (12) with very similar results. On this occasion, the Conclusions analyze these results and present a more detailed description of the previous work, including the results of the 12 participants, with a specific focus on the classification of POMS mood states.
Luis Arturo Espín Pazmiño, Armando De Giusti
Performance of MSK with Walsh and Reed-Solomon Coding
Abstract
In this work, the performance of the combination of MSK (Minimum Shift Keying) modulation and Walsh coding concatenated with Reed-Solomon coding is presented using analytical results and simulation. That combination of techniques is employed in applications like Mode 5 to ensure secure military communications.
Oscar Bria, Javier Giacomantone

Innovation in Computer Science Education

Frontmatter
Creating with Digital Technologies for the Development in Computational Thinking. The Case of the Seedbed of Innovators
Abstract
This paper presents a proposal of activities for middle school students to bring them closer to the creation of digital technologies, with the aim of developing computational thinking skills. This initiative is linked to current trends in approaching these types of skills, which are fundamental for training students in problem solving, creativity, and innovation. A pilot experience with nine middle school students who validated the proposed methodology and showed interest in this type of project is described. The results allow us to recognize the strengths of the proposal and aspects to improve in future implementations. One of the main results has been the acceptance of and motivation generated by the proposal, as well as the application of computational thinking skills, which the participants valued positively.
Cecilia Sanz, Verónica Artola, Santiago Medina, Matías Zeballos, Sabrina Lombardo

Computer Security

Frontmatter
HACONTI and Its Challenges for Learning About Smart Contract Security
Abstract
The increasing adoption of blockchain-based systems has highlighted the need to address potential vulnerabilities in Smart Contracts. Past incidents involving multimillion-dollar losses emphasize the importance of understanding and mitigating risks in these implementations. This article introduces HACONTI, a web-based platform featuring cybersecurity challenges centered around Smart Contracts, where each challenge requires exploiting a specific vulnerability to be solved. The primary goal of our application is to help individuals develop Smart Contracts securely while also enhancing their ability to conduct penetration tests on them, thereby contributing to cybersecurity.
Juan Schällibaum, Paula Venosa, Nicolás Macia
Management of Non-custodian Digital Evidence: An ISO/IEC 27050 Standards-Based Approach
Abstract
This paper addresses the management of non-custodian data sources external to the organization within the framework of ISO/IEC 27050 standards, which provide a structured approach for their identification, preservation, and analysis. As new technologies continue to proliferate, the rigorous handling of this type of evidence becomes critical in legal and forensic contexts. The challenges and opportunities associated with the application of these standards are examined, highlighting the importance of personnel training and interdisciplinary collaboration to ensure the integrity and legality of data collection. The study emphasizes that the effective implementation of ISO/IEC 27050 standards is essential for mitigating risks related to evidence tampering, thereby promoting transparency and trust in judicial processes.
Maria Eugenia Casco, Santiago Enrique Roatta

Digital Governance and Smart Cities

Frontmatter
Evaluation of Key Components for Green Hydrogen Certification: Hardware, COAP and Blockchain
Abstract
In the face of rising concerns about climate change, the energy sector is undergoing a digital transformation. IoT and Blockchain technologies have emerged as promising solutions to ensure security, traceability, and efficiency in energy management, particularly in certifying the origin of energy. Various projects in the energy sector are discussed, highlighting how these initiatives are transforming the way energy is produced, distributed, and consumed. The paper explores the design, implementation, and testing of independent units within a green hydrogen certification system. This system is composed of hardware that measures hydrogen flow and temperature, a central node that handles data via the COAP protocol, and a blockchain node for secure transactions.
Leandro Jaimes Soria, Julio Cervera, Rodrigo Spano, Alan Arrimondi, Ignacio Zaradnik
University Citizen Satisfaction Screening Instrument Applied to the Electronic File System at Three Argentine Universities
Abstract
This paper introduces the concept of university citizen satisfaction (UCS) and proposes an instrument for the evaluation of such satisfaction. Based on the identification of 11 services belonging to the electronic file system of the UNLP (SUDOCU), an evaluation of the current UCS is made, which allows planning possible improvements in the provision of services by universities to the members of their communities. Results at the UNLP are compared with those obtained at the UNGS and UNA universities.
Miguel Angel Marafuschi Phillips, Ariel Pasini, Ana Catalina Lacunza
LoRaWAN Technology Applied to Maritime Beaconing at La Plata Port
Abstract
This paper describes the implementation of a monitoring and control system for maritime beaconing in the access channel to La Plata Port using LoRaWAN technology. An IoT sensor network was deployed on the port’s buoys, featuring long-range communication capabilities and low power consumption. The system enables remote supervision of beacon status (particularly batteries and photovoltaic panels), fault detection in signaling electronics, flash synchronization, and real-time alert generation for critical events such as impacts or inclination changes.
The design of the solution was carried out in two stages: (i) evaluating the feasibility of the LoRaWAN network in the maritime environment and (ii) developing the hardware and software for the monitoring nodes, compatible with pre-existing beacons. The results demonstrate that the LoRaWAN network provides optimal performance for maritime applications, with a low packet loss rate and effective coverage even under adverse weather conditions. The implemented solution has enhanced beaconing management efficiency, reduced operational costs, and optimized safety in port navigation.
Javier Díaz, Agustín Candia, Jorge Bellavita, Laura A. Fava
Backmatter
Title
Computer Science – CACIC 2024
Edited by
Patricia Pesado
Pablo Thomas
Copyright Year
2026
Electronic ISBN
978-3-032-00718-6
Print ISBN
978-3-032-00717-9
DOI
https://doi.org/10.1007/978-3-032-00718-6

The PDF files of this book have been created in accordance with the PDF/UA-1 standard to improve accessibility. This includes support for screen readers, described non-textual content (images, graphs), bookmarks for easy navigation, keyboard-friendly links and forms, and searchable and selectable text. We recognize the importance of accessibility and welcome inquiries regarding the accessibility of our products. For accessibility questions or requirements, please contact us at accessibilitysupport@springernature.com.
