
2024 | Book

Engineering of Computer-Based Systems

8th International Conference, ECBS 2023, Västerås, Sweden, October 16–18, 2023, Proceedings


About this book

This book constitutes the refereed proceedings of the 8th International Conference on Engineering of Computer-Based Systems, ECBS 2023, which was held in Västerås, Sweden, in October 2023.
The 11 full papers included in this book were carefully reviewed and selected from 26 submissions and present software, hardware, and communication perspectives of systems engineering through its many facets. This year's special theme is “Engineering for Responsible AI”.


How to Be An Ethical Technologist
Many of us got involved in computing because programming was fun. The advantages of computing seemed intuitive to us. We truly believed that computing yields tremendous societal benefits; for example, the life-saving potential of driverless cars is enormous! Recently, however, computer scientists realized that computing is not a game: it is real, and it brings with it not only societal benefits but also significant societal costs, such as labor polarization, disinformation, and smartphone addiction.
A common reaction to this crisis is to label it as an “ethics crisis” and talk about “corporate responsibility” and “machine ethics”. But corporations are driven by profits, not ethics, and machines are built by people. We should not expect corporations or machines to act ethically; we should expect people to act ethically. In this talk, the speaker will discuss how technologists can act ethically.
Moshe Y. Vardi
Toward Responsible Artificial Intelligence Systems: Safety and Trustworthiness
This short paper, associated with the invited lecture, introduces two key concepts essential to artificial intelligence (AI): trustworthy AI and responsible AI systems, both fundamental to understanding the technological, ethical, and legal context of the current framework of debate and regulation of AI. The aim is to understand their dimensions and their interrelation with the rest of the elements involved in the regulation and auditability of AI algorithms in order to achieve safe and trusted AI. We highlight concepts in bold to mark the moment when they are described in context.
Francisco Herrera
Ambient Temperature Prediction for Embedded Systems Using Machine Learning
In this work, we use two well-established machine learning algorithms, Random Forest (RF) and XGBoost, to predict the ambient temperature of a baseband board. After providing an overview of the related work, we describe how we train the two ML models and identify the optimal training and test datasets to avoid under- and over-fitting. Given this train/test split, the trained RF and XGBoost models provide temperature predictions with errors below one degree Celsius, i.e., far better than any other approach that we used in the past. Our feature importance assessments reveal that the temperature sensors contribute significantly more towards predicting the ambient temperature than the power and voltage readings. Furthermore, the RF model appears less volatile than XGBoost on our training data. As the results demonstrate, our predictive temperature models allow for accurate error prediction as a function of the baseband board's sensors.
Selma Rahman, Mattias Olausson, Carlo Vitucci, Ioannis Avgouleas
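The sub-degree accuracy target above can be illustrated with a minimal evaluation sketch (our illustration, not the paper's code; the temperature values are invented):

```python
# Illustrative sketch: checking a temperature predictor against the
# sub-degree (< 1 °C) target reported in the abstract above.
def mean_absolute_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical measured vs predicted ambient temperatures (°C)
actual = [24.1, 25.3, 26.0, 27.2, 25.8]
predicted = [24.4, 25.1, 26.5, 27.0, 25.6]

mae = mean_absolute_error(actual, predicted)
print(f"MAE: {mae:.2f} °C, within 1 °C target: {mae < 1.0}")
```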
A Federated Learning Algorithms Development Paradigm
At present, many distributed and decentralized frameworks for federated learning algorithms are already available. However, developing such a framework for the smart Internet of Things in edge systems is still an open challenge. A solution to that challenge, named Python Testbed for Federated Learning Algorithms (PTB-FLA), appeared recently. This solution is written in pure Python, supports both centralized and decentralized algorithms, and its usage was validated and illustrated by three simple algorithm examples. In this paper, we present a federated learning algorithms development paradigm based on PTB-FLA. The paradigm comprises four phases, named after the code they produce: (1) the sequential code, (2) the federated sequential code, (3) the federated sequential code with callbacks, and (4) the PTB-FLA code. The development paradigm is validated and illustrated in a case study on logistic regression, where both centralized and decentralized algorithms are developed.
Miroslav Popovic, Marko Popovic, Ivan Kastelan, Miodrag Djukic, Ilija Basicevic
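The "federated sequential" phase can be sketched as follows (a FedAvg-style toy in pure Python; all function names and data are our illustration, not PTB-FLA's actual API):

```python
import math

# Sketch of the federated sequential idea: each client takes one local
# gradient step of 1-D logistic regression and a server averages the
# resulting weights. Illustrative only, not the paper's code.
def client_update(w, local_data, lr=0.1):
    grad = 0.0
    for x, y in local_data:
        p = 1.0 / (1.0 + math.exp(-w * x))  # sigmoid prediction
        grad += (p - y) * x
    return w - lr * grad / len(local_data)

def server_aggregate(client_weights):
    # FedAvg-style mean of the clients' updated weights
    return sum(client_weights) / len(client_weights)

w = 0.0
clients = [[(1.0, 1), (2.0, 1)], [(-1.0, 0), (-2.0, 0)]]
for _ in range(10):  # federated rounds
    w = server_aggregate([client_update(w, d) for d in clients])
print(round(w, 3))
```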
Machine Learning Data Suitability and Performance Testing Using Fault Injection Testing Framework
Creating resilient machine learning (ML) systems has become necessary to ensure production-ready ML systems that acquire user confidence seamlessly. The quality of the input data and the model highly influence successful end-to-end testing in data-sensitive systems. However, testing approaches for input data are few and less systematic than those for model testing. To address this gap, this paper presents the Fault Injection for Undesirable Learning in input Data (FIUL-Data) testing framework, which tests the resilience of ML models to multiple intentionally triggered data faults. Data mutators explore vulnerabilities of ML systems against the effects of different fault injections. The proposed framework is designed around three main ideas: the mutators are not random, one data mutator is applied at a time, and the selected ML models are optimized beforehand. This paper evaluates the FIUL-Data framework using data from analytical chemistry, comprising retention time measurements of anti-sense oligonucleotides. The empirical evaluation is carried out in a two-step process in which the responses of the selected ML models to data mutation are analyzed individually and then compared with each other. The results show that the FIUL-Data framework allows the evaluation of the resilience of ML models. In most experiments, ML models show higher resilience with larger training datasets, and gradient boosting performed better than support vector regression on smaller training sets. Overall, the mean squared error metric is useful for evaluating the resilience of models due to its higher sensitivity to data mutation.
Manal Rahal, Bestoun S. Ahmed, Jörgen Samuelsson
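Data mutators of the kind described above can be sketched in a few lines (the mutator names and data are our illustration, not FIUL-Data's actual implementation):

```python
import random

# Illustrative data mutators in the spirit of FIUL-Data: each mutator
# injects exactly one fault type, applied one at a time per experiment.
def remove_values(data, fraction, rng):
    # fault: drop a fraction of the training samples
    keep = max(1, int(len(data) * (1 - fraction)))
    return rng.sample(data, keep)

def add_noise(data, scale, rng):
    # fault: perturb every value with bounded uniform noise
    return [x + rng.uniform(-scale, scale) for x in data]

rng = random.Random(42)  # seeded: mutations are deliberate, not random
clean = [float(i) for i in range(100)]
mutated = remove_values(clean, 0.2, rng)  # one mutator per experiment
print(len(clean), len(mutated))
```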
IDPP: Imbalanced Datasets Pipelines in Pyrus
We showcase and demonstrate IDPP, a Pyrus-based tool that offers a collection of pipelines for the analysis of imbalanced datasets. Like Pyrus, IDPP is a web-based, low-code/no-code graphical modelling environment for ML and data analytics applications. In a case study from the medical domain, we solve the challenge of re-using AI/ML models that do not address data with imbalanced classes by implementing ML algorithms in Python that do the re-balancing. We then use these algorithms and the original ML models in the IDPP pipelines. With IDPP, our low-code development approach to balancing datasets for AI/ML applications can be used by non-coders. It simplifies the data-preprocessing stage of any AI/ML project pipeline, which can potentially improve the performance of the models. The tool demo will showcase the low-code implementation and no-code reuse and repurposing of AI-based systems through end-to-end Pyrus pipelines.
Amandeep Singh, Olga Minguett
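One common re-balancing step of the kind mentioned above is random oversampling of the minority class; a minimal sketch (our illustration, not IDPP's actual pipeline code):

```python
import random
from collections import Counter

# Sketch of class re-balancing by random oversampling: minority classes
# are resampled with replacement until all classes match the majority.
def oversample(samples, labels, rng):
    counts = Counter(labels)
    target = max(counts.values())
    out_s, out_l = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [s for s, l in zip(samples, labels) if l == cls]
        out_s += [rng.choice(pool) for _ in range(target - n)]
        out_l += [cls] * (target - n)
    return out_s, out_l

rng = random.Random(0)
s, l = oversample([1, 2, 3, 4, 5], ["a", "a", "a", "a", "b"], rng)
print(Counter(l))
```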
Learning in Uppaal for Test Case Generation for Cyber-Physical Systems
We propose a test-case generation method for testing cyber-physical systems by using learning and statistical model checking. We use timed game automata for modelling. Different from other studies, we construct the model from the environment’s perspective. After building the model, we synthesize policies for different kinds of environments by using reinforcement learning in Uppaal and parse the policies for test-case generation. Statistical model checking enables us to analyse the test cases for finding the ones that are more likely to detect bugs.
Rong Gu
A Literature Survey of Assertions in Software Testing
Assertions are one of the most useful automated techniques for checking a program's behaviour and hence have been used for various verification and validation tasks. We provide an overview of the last two decades of research involving ‘assertions’ in software testing. Based on a term-based search, we filtered the relevant papers and synthesised them with respect to the problem addressed, the solution designed, and the evaluation conducted. The survey rendered 119 papers on assertions in software testing. After test oracles, the dominant problem focus is test generation, followed by engineering aspects of assertions. Solutions are typically embedded in tool prototypes and evaluated on a limited number of cases, while large-scale industrial settings are still a noticeable evaluation method. We conclude that assertions deserve more attention in future research, particularly regarding new and emerging demands (e.g., verification of programs with uncertainty), for effective, applicable, and domain-specific solutions.
Masoumeh Taromirad, Per Runeson
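A minimal example of assertions serving as test oracles, the dominant problem focus identified by the survey (the function itself is invented for illustration):

```python
# Assertions as executable oracles: a precondition guards the input,
# a postcondition checks the result without a hand-written expected value.
def normalize(v):
    s = sum(v)
    assert s != 0, "cannot normalize a zero-sum vector"  # precondition
    out = [x / s for x in v]
    assert abs(sum(out) - 1.0) < 1e-9  # postcondition: result sums to 1
    return out

print(normalize([1, 3]))
```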
FPGA-Based Encryption for Peer-to-Peer Industrial Network Links
Securing company networks has become a critical aspect of modern industrial environments. With the recent rise of Industry 4.0 concepts, it has become essential to extend IT security across increasingly connected factories. However, in the highly specialised field of operational technology and embedded systems, not every device can run additional security measures, as many are old or designed with scarce resources. We introduce the concept of a “universal” encryption device that secures communication links in a direct peer-to-peer industrial setting using the AES-128 encryption standard. We propose a design for such an encryption device by developing a modular system architecture that decouples communication and cryptography. The resulting architecture is implemented as a proof of concept for Ethernet communication and tested both in simulation and on an FPGA device. The impact of the encryption device is briefly investigated in a lab setup, followed by conclusions on system stability and performance.
Florian Sprang, Tiberiu Seceleanu
Formalization and Verification of MQTT-SN Communication Using CSP
The MQTT-SN protocol is a lightweight version of the MQTT protocol and is customized for Wireless Sensor Networks (WSN). It removes the need for the underlying protocol to provide ordered and reliable connections during transmission, making it ideal for sensors in WSN with extremely limited computing power and resources. Due to the widespread use of WSN in various areas, the MQTT-SN protocol has promising application prospects. Furthermore, security is crucial for MQTT-SN, as sensor nodes applying this protocol are often deployed in uncontrolled wireless environments and are vulnerable to a variety of external security threats.
To ensure the security of the MQTT-SN protocol without compromising its simplicity, we introduce the ChaCha20-Poly1305 cryptographic authentication algorithm. In this paper, we formally model the MQTT-SN communication system using Communicating Sequential Processes (CSP) and then verify seven properties of this model using the Process Analysis Toolkit (PAT): deadlock freedom, divergence freedom, data reachability, client security, gateway security, broker security, and data leakage. According to the verification results in PAT, our model satisfies all of these properties. Therefore, we can conclude that the MQTT-SN protocol is secure with the introduction of ChaCha20-Poly1305.
Wei Lin, Sini Chen, Huibiao Zhu
Detecting Road Tunnel-Like Environments Using Acoustic Classification for Sensor Fusion with Radar Systems
Radar systems equipped with Misalignment Monitoring and Adjustment (MM&A) face challenges in functioning accurately within complex environments, particularly tunnels. Standard radar system design assumes constant background activity of the MM&A throughout a host vehicle's ignition cycle, monitoring for misaligned radar sensors and mitigating issues associated with faulty radar measurements. However, the presence of tunnels and other unfavorable driving conditions can influence the MM&A, thereby affecting its performance.
To address this issue, it is crucial to develop a reliable method for detecting tunnel-like environments and appropriately adjusting the MM&A system. This research paper focuses on a novel acoustic sensing system called SONETE (Sonic Sensing for Tunnel Environment), which classifies acoustic signatures recorded by a pressure zone microphone to accurately identify tunnel environments.
The study explores acoustic features and classification algorithms to distinguish between road and tunnel environments and, using sensor fusion with radar systems, suspend the MM&A system accordingly. By tackling this problem, the research contributes to the advancement of intelligent transportation systems by enhancing radar technology's robustness in complex environments and ensuring effective MM&A adjustments in tunnels.
Overall, this paper demonstrates the potential of using acoustic signatures as a complementary sensor for tunnel detection in vehicles where traditional sensors have limitations.
Nikola Stojkov, Filip Tirnanić, Aleksa Luković
Comparative Analysis of Uppaal SMC, ns-3 and MATLAB/Simulink
IoT networks connect everyday devices to the internet to communicate with one another and with humans. It is more cost-effective to analyse and verify the performance of a designed prototype before deploying these complex networks. Network Simulator 3 (ns-3), MATLAB/Simulink, and Uppaal SMC are three industry-leading tools that simulate communicating models, each with strengths and weaknesses. ns-3 is suitable for large-scale network simulations, MATLAB/Simulink is suitable for complex models and data analysis, and Uppaal SMC is efficient for real-time probabilistic systems with complex timing requirements. This paper presents a comparative analysis of ns-3, MATLAB/Simulink, and Uppaal SMC, based on a Sigfox case study focusing on the behaviour of a single Sigfox node. The comparison is drawn on ease of use, flexibility, and scalability. The results can help researchers make informed decisions when designing and evaluating simulation experiments. They demonstrate that the choice of tool depends on the specific requirements of the simulation project and requires careful consideration of the strengths and weaknesses of each tool.
Muhammad Naeem, Michele Albano, Kim Guldstrand Larsen, Brian Nielsen
Using Automata Learning for Compliance Evaluation of Communication Protocols on an NFC Handshake Example
Near-Field Communication (NFC) is a widely adopted standard for embedded low-power devices in very close proximity. To ensure a correct system, an implementation has to comply with the ISO/IEC 14443 standard. This paper concentrates on the low-level part of the protocol (ISO/IEC 14443-3) and presents a method and a practical implementation that complement traditional conformance testing. We infer a Mealy state machine of the system under test using active automata learning. This automaton is checked for bisimulation with a specification automaton modelled after the standard, which provides a strong verdict of conformance or non-conformance. As a by-product, we share some observations on the performance of different learning algorithms and calibrations in the specific setting of ISO/IEC 14443-3, namely the difficulty of learning models of systems that a) consist of two very similar structures and b) very frequently give no answer (i.e., a timeout as an output).
Stefan Marksteiner, Marjan Sirjani, Mikael Sjödin
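For deterministic Mealy machines, the bisimulation check described above reduces to a synchronous product search that compares outputs on every input; a toy sketch (the two machines are invented, not the ISO/IEC 14443-3 models from the paper):

```python
from collections import deque

# Check two deterministic Mealy machines for equivalence by exploring
# their synchronous product; any differing output is a counterexample.
def equivalent(m1, m2, s1, s2, inputs):
    seen, queue = set(), deque([(s1, s2)])
    while queue:
        a, b = queue.popleft()
        if (a, b) in seen:
            continue
        seen.add((a, b))
        for i in inputs:
            (na, oa), (nb, ob) = m1[a][i], m2[b][i]
            if oa != ob:
                return False  # outputs disagree: not bisimilar
            queue.append((na, nb))
    return True

# machine: state -> input -> (next_state, output); toy NFC-flavoured example
spec = {"idle": {"req": ("active", "atqa")}, "active": {"req": ("active", "timeout")}}
sut = {"s0": {"req": ("s1", "atqa")}, "s1": {"req": ("s1", "timeout")}}
print(equivalent(spec, sut, "idle", "s0", ["req"]))
```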
Towards LLM-Based System Migration in Language-Driven Engineering
In this paper, we show how our approach of extending Language-Driven Engineering (LDE) with natural language-based code generation supports system migration: the characteristic decomposition of LDE into tasks that are solved with dedicated domain-specific languages divides the migration into portions adequate for applying LLM-based code generation. We illustrate this effect by migrating a low-code/no-code generator for point-and-click adventures from JavaScript to TypeScript in a way that maintains an important property: generated web applications can automatically be validated via automata learning and model analysis by design. In particular, this makes it easy to test the correctness of the migration by learning the difference automaton for the generated products of the source and the target system of the migration.
Daniel Busch, Alexander Bainczyk, Bernhard Steffen
Synthesizing Understandable Strategies
The result of reinforcement learning is often obtained in the form of a q-table mapping state-action pairs to expected future rewards. We propose to use SMT solvers and strategy trees to generate a representation of a learned strategy in a format that is understandable to a human. We present the methodology and demonstrate it on a small game.
Peter Backeman
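The goal stated above can be illustrated by turning a q-table into a human-readable decision list via greedy policy extraction (a simplification of the paper's SMT-based approach; the states and actions are invented):

```python
# Extract the greedy policy from a q-table and print it as a readable
# decision list. Illustrative sketch, not the paper's strategy trees.
q_table = {
    ("low_fuel", "refuel"): 1.0, ("low_fuel", "drive"): -0.5,
    ("full", "refuel"): -0.1, ("full", "drive"): 0.8,
}

def greedy_strategy(q):
    states = {s for s, _ in q}
    # for each state, pick the action with the highest expected reward
    return {s: max((a for st, a in q if st == s), key=lambda a: q[(s, a)])
            for s in states}

for state, action in sorted(greedy_strategy(q_table).items()):
    print(f"if state is {state!r} then {action!r}")
```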
ReProInspect: Framework for Reproducible Defect Datasets for Improved AOI of PCBAs
Today, the production of printed circuit board assemblies (PCBAs) is growing rapidly, and this process requires cutting-edge debugging and testing of the boards. The Automatic Optical Inspection (AOI) process detects defects in the boards, components, or solder pads using image processing and machine learning (ML) algorithms. Although state-of-the-art approaches for identifying defects are well developed, three main issues prevent the ML algorithms and datasets from being fully integrated into industrial plants: privacy limitations on sharing data, the distribution shifts in the PCBA industry, and the absence of a degree of freedom for reproducible and modifiable synthetic datasets.
This paper addresses these challenges and introduces “ReProInspect”, a comprehensive framework designed to meet these requirements. ReProInspect uses fabrication files from the designed PCBs in the manufacturing line to automatically generate 3D models of the PCBAs. By incorporating various techniques, the framework introduces controlled defects into the PCBAs, thereby creating reproducible and differentiable defect datasets. The quality data produced by this framework enables improved detection and classification for AOI in industrial applications. Initial results from ReProInspect are demonstrated and discussed through detailed examples. Finally, the paper highlights future work to improve the current state of the framework.
Ahmad Rezaei, Johannes Nau, Detlef Streitferdt, Jörg Schambach, Todor Vangelov
Cyber-Physical Ecosystems: Modelling and Verification
In this paper, we set up a mathematical framework for the modelling and verification of complex cyber-physical ecosystems. In our setting, cyber-physical ecosystems are highly connected cyber-physical systems of systems. These are networked systems that combine cyber-physical systems with a mechanism for interacting with other systems and the environment (ecosystem capability). Our contribution follows two streams: (i) modelling the constituent systems and their interfaces, and (ii) local/global verification of cyber-physical ecosystems. We introduce the concept of a basic model, whose skeleton is a Markov decision process, and we propose a verification-based abstraction methodology.
Manuela L. Bujorianu
Integrating IoT Infrastructures in Industrie 4.0 Scenarios with the Asset Administration Shell
The Asset Administration Shell (AAS) specifies digital twins to enable unified access to all data and services available for a physical asset to cope with heterogeneous and fragmented data sources. The setup of an AAS infrastructure requires the integration of all relevant devices and their data. As the devices often already communicate with an IoT backend, we present three approaches to integrate an IoT backend with an AAS infrastructure, share insights into an implementation project, and briefly discuss them.
Sven Erik Jeroschewski, Johannes Kristan, Milena Jäntgen, Max Grzanna
A Software Package (in progress) that Implements the Hammock-EFL Methodology
This poster paper presents a software package (in progress) that implements the Hammock-EFL approach for Project Management and Parallel Programming, written in Python.
Moshe Goldstein, Oren Eliezer
Dynamic Priority Scheduling for Periodic Systems Using ROS 2
In this paper, a novel dynamic priority scheduling algorithm for ROS 2 systems is proposed. The algorithm determines the deadlines of callbacks by taking the buffer sizes and update rates of channels into account. The efficacy of the scheduling algorithm is demonstrated on an illustrative example, where the required buffer size is reduced in comparison to the conventional single-threaded executor in ROS 2.
Lukas Dust, Saad Mubeen
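The deadline idea above can be sketched as follows: a callback must run before its channel buffer overflows, so its deadline follows from the buffer size and update period (our simplification for illustration, not the paper's exact formula; channel names and values are invented):

```python
# Derive callback deadlines from channel buffer sizes and update rates,
# then dispatch earliest-deadline-first. Illustrative sketch only.
def callback_deadline(buffer_size, update_period, elapsed):
    # time remaining until the channel buffer would overflow
    return buffer_size * update_period - elapsed

channels = {"lidar": (5, 0.1), "imu": (10, 0.01)}  # (buffer size, period in s)
elapsed = 0.05  # time since each callback was last served
deadlines = {name: callback_deadline(size, period, elapsed)
             for name, (size, period) in channels.items()}
order = sorted(deadlines, key=deadlines.get)  # EDF dispatch order
print(order, deadlines)
```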
Continuous Integration of Neural Networks in Autonomous Systems
The perception module of the FS223's autonomous driving software, a low-level sensor fusion of lidar and camera data, requires a neural network for image classification. To keep the neural network up to date with updates in the training data, we introduce a Continuous Integration (CI) pipeline to re-train the network. The network is then automatically validated and integrated into the code base of the autonomous system. The introduction of proper CI methods in these high-speed embedded software applications is an application of state-of-the-art MLOps techniques that aim to provide rapid generation of production-ready models. It further serves to professionalize the otherwise script-based software production, which is redone almost completely every year as the teams change from one year to the next.
Bruno Steffen, Jonas Zohren, Utku Pazarci, Fiona Kullmann, Hendrik Weißenfels
Building a Digital Twin Framework for Dynamic and Robust Distributed Systems
Digital Twins (DTs) serve as the backbone of Industry 4.0, offering virtual representations of actual systems and enabling accurate simulations, analysis, and control. These representations help in predicting system behaviour, facilitating multiple real-time tests, and reducing risks and costs while identifying optimization areas. DTs meld the cyber and physical realms, accelerating the design and modelling of sustainable innovations. Despite their potential, the complexity of DTs presents challenges to their industrial application. We sketch here an approach towards an adaptable and trustworthy framework for building and operating DT systems, which is the basis for the academia-industry project A Digital Twin Framework for Dynamic and Robust Distributed Systems (D-RODS). D-RODS addresses these challenges, aiming to advance industrial digitalization in areas such as system efficiency, and incorporating AI and verification techniques with formal support.
Tiberiu Seceleanu, Ning Xiong, Eduard Paul Enoiu, Cristina Seceleanu
A Simple End-to-End Computer-Aided Detection Pipeline for Trained Deep Learning Models
Recently, there has been a significant rise in research and development focused on deep learning (DL) models within healthcare. This trend arises from the availability of extensive medical imaging data and notable advances in graphics processing unit (GPU) computational capabilities. Trained DL models show promise in supporting clinicians with tasks like image segmentation and classification. However, the advancement of these models into clinical validation remains limited due to two key factors. Firstly, DL models are trained in off-premises environments by DL experts using Unix-like operating systems (OS). These systems rely on multiple libraries and third-party components, demanding complex installations. Secondly, the absence of a user-friendly graphical interface for model outputs complicates validation by clinicians. Here, we introduce a conceptual Computer-Aided Detection (CAD) pipeline designed to address these two issues and enable non-AI experts, such as clinicians, to use trained DL models offline on Windows OS. The pipeline divides tasks between DL experts and clinicians: the experts handle model development, training, inference mechanisms, Grayscale Softcopy Presentation State (GSPS) object creation, and containerization for deployment, while the clinicians execute a simple script to install the necessary software and dependencies. Hence, they can use a universal image viewer to analyze results generated by the models. This paper illustrates the pipeline's effectiveness through a case study on pulmonary embolism detection, showcasing successful deployment on a local workstation by an in-house radiologist. By simplifying model deployment and making it accessible to non-AI experts, this CAD pipeline bridges the gap between technical development and practical application, promising broader healthcare applications.
Ali Teymur Kahraman, Tomas Fröding, Dimitrios Toumpanakis, Mikael Fridenfalk, Christian Jamtheim Gustafsson, Tobias Sjöblom
Astrocyte-Integrated Dynamic Function Exchange in Spiking Neural Networks
This paper presents an innovative methodology for improving the robustness and computational efficiency of Spiking Neural Networks (SNNs), a critical component in neuromorphic computing. The proposed approach integrates astrocytes, a type of glial cell prevalent in the human brain, into SNNs, creating astrocyte-augmented networks. To achieve this, we designed and implemented an astrocyte model in two distinct platforms: CPU/GPU and FPGA. Our FPGA implementation notably utilizes Dynamic Function Exchange (DFX) technology, enabling real-time hardware reconfiguration and adaptive model creation based on current operating conditions. The novel approach of leveraging astrocytes significantly improves the fault tolerance of SNNs, thereby enhancing their robustness. Notably, our astrocyte-augmented SNN displays near-zero latency and theoretically infinite throughput, implying exceptional computational efficiency. Through comprehensive comparative analysis with prior works, it’s established that our model surpasses others in terms of neuron and synapse count while maintaining an efficient power consumption profile. These results underscore the potential of our methodology in shaping the future of neuromorphic computing, by providing robust and energy-efficient systems.
Murat Isik, Kayode Inadagbo
Correct Orchestration of Federated Learning Generic Algorithms: Formalisation and Verification in CSP
Federated learning (FL) is a machine learning setting where clients keep the training data decentralised and collaboratively train a model either under the coordination of a central server (centralised FL) or in a peer-to-peer network (decentralised FL). Correct orchestration is one of the main challenges. In this paper, we formally verify the correctness of two generic FL algorithms, a centralised and a decentralised one, using the Communicating Sequential Processes calculus (CSP) and the Process Analysis Toolkit (PAT) model checker. The CSP models consist of CSP processes corresponding to generic FL algorithm instances. PAT automatically proves the correctness of the two generic FL algorithms by proving their deadlock freedom (safety property) and successful termination (liveness property). The CSP models are constructed bottom-up by hand as a faithful representation of the real Python code and are automatically checked top-down by PAT.
Ivan Prokić, Silvia Ghilezan, Simona Kašterović, Miroslav Popovic, Marko Popovic, Ivan Kaštelan
CareProfSys - Combining Machine Learning and Virtual Reality to Build an Attractive Job Recommender System for Youth: Technical Details and Experimental Data
This article presents CareProfSys, an innovative job recommender system (RS) for youth, which integrates several emergent technologies, such as machine learning (ML) and virtual reality on the web (WebVR). The recommended jobs are the ones provided by the well-known European Skills, Competences, Qualifications, and Occupations (ESCO) framework. The ML-based recommendation mechanism uses a K-Nearest Neighbors (KNN) algorithm: the data needed to train the model was based on the Skills Occupation Matrix Table offered by ESCO, as well as on data collected by our project team. This two-source approach ensured a robust and varied dataset, facilitating accurate recommendations. Each job was described in terms of the features needed by individuals to be good professionals: skill levels for working with computers, constructing, management, working with machinery and specialized equipment, assisting and caring, and communication, collaboration, and creativity are just a few of the directions considered to define a profession profile. The recommended jobs are presented in a modern manner, by allowing the users to explore various WebVR scenarios with specific professional activities. The article provides the technical details of the system, the difficulties of building a stack of such diverse technologies (ML, WebVR, semantic technologies), as well as validation data from experiments with real users: a group of high school students from less-developed cities in Romania, interacting for the first time with modern technologies.
Maria-Iuliana Dascalu, Andrei-Sergiu Bumbacea, Ioan-Alexandru Bratosin, Iulia-Cristina Stanica, Constanta-Nicoleta Bodea
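The KNN recommendation step described above can be sketched in pure Python (the skill vectors and job labels are invented for illustration, not ESCO data):

```python
import math
from collections import Counter

# Minimal k-nearest-neighbours recommendation: find the k profiles whose
# skill vectors are closest to the user's and return the majority job label.
def knn_recommend(profiles, user, k=3):
    # profiles: list of (skill_vector, job_label)
    nearest = sorted(profiles, key=lambda p: math.dist(p[0], user))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

profiles = [
    ([0.9, 0.1, 0.2], "software developer"),
    ([0.8, 0.2, 0.1], "software developer"),
    ([0.1, 0.9, 0.8], "construction manager"),
    ([0.2, 0.8, 0.9], "construction manager"),
]
print(knn_recommend(profiles, [0.85, 0.15, 0.2]))
```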
Engineering of Computer-Based Systems
Edited by
Jan Kofroň
Tiziana Margaria
Cristina Seceleanu