
Open Access | 2025 | Book

Special Topics in Information Technology


About this book

This open access book presents outstanding doctoral dissertations in Information Technology from the Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Italy. Information technology has always been highly interdisciplinary, as many aspects have to be considered in IT systems.

The doctoral studies program in IT at Politecnico di Milano emphasizes this interdisciplinary nature, which is becoming more and more important in recent technological advances, in collaborative projects, and in the education of young researchers.

Accordingly, the focus of advanced research is on pursuing a rigorous approach to specific research topics starting from a broad background in various areas of Information Technology, especially computer science and engineering, electronics, systems and control, and telecommunications.

Each year, more than 50 Ph.D. students graduate from the program. This book gathers the outcomes of the best theses defended in 2023–24 and selected for the IT Ph.D. award. Each of the authors provides a chapter summarizing his/her findings, including an introduction, description of methods, main achievements, and future work on the topic. Hence, the book provides a cutting-edge overview of the latest research trends in information technology at Politecnico di Milano, presented in an easy-to-read format that will also appeal to non-specialists.

Table of Contents

Frontmatter

Computer Science and Engineering

Frontmatter

Open Access

Viral Data Integration and Knowledge Discovery Methods for Current and Future Pandemics
Abstract
Viral genomics is an interesting and challenging field of science. The vast amount of data, coupled with the intrinsic variability of viruses, demands robust data management and computational methods that support domain experts in studying this complex domain. This chapter addresses the demand for data and knowledge integration as a way to analyse and discover new insights into SARS-CoV-2 and other viruses, including through artificial intelligence. Finally, a novel method for detecting recombination events in RNA viruses is presented. This method offers significant advantages over existing approaches and represents a valuable resource for public health preparedness. Overall, this work contributes significantly to viral genomics by addressing important challenges in data integration, knowledge modeling, and recombination detection.
Tommaso Alfonsi

Open Access

Learning Optimal Equilibria and Mechanisms Under Information Asymmetry
Abstract
Multi-agent environments are a ubiquitous application domain for game-theoretic techniques. The default game model involves a large number of agents with asymmetric information about the environment who interact sequentially. Algorithmic Game Theory starts from particular notions of equilibria, i.e., sets of strategies for the players such that no one has an incentive to deviate unilaterally, and studies the development of algorithms for computing or approximating them. The milestones achieved by researchers in the field over the last decades have made it clear that, in order to deploy such game-theoretic techniques successfully in complex real-world settings, it is of utmost importance to formulate learning algorithms that find approximately optimal solutions of the games. Most research has focused on the design of learning algorithms for simple scenarios, e.g., two-player games, while algorithms for more general cases are still far from adequate performance. The goal of this manuscript is to advance research in this direction. In particular, we investigate different multi-agent scenarios, which we differentiate based on the role of the players holding information about the environment, focusing on the definition of suitable learning algorithms for finding optimal players’ strategies. In the first part of the manuscript, we study cases in which the informed agents are active, i.e., they can leverage their information to take informed actions in the game. In this context, we tackle two distinct cases: team games, which model two teams of agents competing against each other, and the broader class of general-sum games, in which we make no particular assumption on the players.
For team games, we introduce a simple transformation that uses a correlation protocol based on public information to obtain a compact formulation of the teams’ strategy sets. The transformation yields an equivalent two-player zero-sum game, which naturally yields the first no-regret learning-based algorithm for computing equilibria in team games. Then, inspired by previous literature, we lay the groundwork for adapting to team games popular techniques that proved crucial for achieving strong performance in two-player games, i.e., stochastic regret minimization and subgame solving. For general-sum games, instead, we observe that the mainstream approach, which uses decentralized and coupled learning dynamics to approximate different types of correlated equilibria, suffers from a major drawback: it offers no guarantee on the type of equilibrium reached. To mitigate this issue, we take the perspective of a mediator issuing action recommendations to the players and design a centralized learning dynamic that guarantees convergence to the set of optimal correlated equilibria in sequential games. The second part of the manuscript is devoted to cases in which the informed agents are passive, i.e., they cannot directly take actions in the game but can only report their information (possibly untruthfully) to influence the behavior of another, uninformed party. This setting corresponds to information acquisition, in which we take the perspective of the uninformed agent (the principal) who is interested in gathering information from the agents, incentivizing their behavior by means of mechanisms composed of an action policy and/or payment functions.
In this context, we separately study the cases in which the principal’s mechanisms consist exclusively of action policies or of payment schemes and, for both cases, we provide algorithms for learning optimal mechanisms via interactions with the agents.
Federico Cacciamani
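No-regret dynamics of the kind discussed in this chapter can be illustrated on a toy matrix game. The sketch below is an illustrative assumption, not the chapter's team-game or mediator algorithms: it runs classical regret matching in self-play on a small two-player zero-sum game, where the time-averaged strategies are known to converge to a Nash equilibrium.

```python
def action_utils(p, payoff, opp):
    """Expected utility of each pure action of player p against the
    opponent's mixed strategy opp, in a zero-sum game described by the
    row player's payoff matrix."""
    n = len(payoff)
    if p == 0:  # row player maximizes payoff[a][b]
        return [sum(payoff[a][b] * opp[b] for b in range(n)) for a in range(n)]
    return [sum(-payoff[b][a] * opp[b] for b in range(n)) for a in range(n)]

def regret_matching_selfplay(payoff, iters=50000):
    """Self-play regret matching: each player mixes proportionally to
    positive cumulative regrets; in two-player zero-sum games the
    time-averaged strategies converge to a Nash equilibrium."""
    n = len(payoff)
    regrets = [[0.0] * n for _ in range(2)]
    strat_sum = [[0.0] * n for _ in range(2)]
    for _ in range(iters):
        strats = []
        for p in range(2):
            pos = [max(r, 0.0) for r in regrets[p]]
            tot = sum(pos)
            strats.append([x / tot for x in pos] if tot > 0 else [1.0 / n] * n)
        for p in range(2):
            utils = action_utils(p, payoff, strats[1 - p])
            ev = sum(sp * u for sp, u in zip(strats[p], utils))
            for a in range(n):
                regrets[p][a] += utils[a] - ev
                strat_sum[p][a] += strats[p][a]
    return [[x / iters for x in strat_sum[p]] for p in range(2)]

# Zero-sum 2x2 game whose unique Nash has both players mixing (1/3, 2/3).
game = [[3.0, -1.0], [-1.0, 1.0]]
avg = regret_matching_selfplay(game)
```

The average strategies approach the equilibrium mixture at rate O(1/√T); the chapter's contributions concern the far harder team and general-sum settings where such simple decoupled dynamics lose their guarantees.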

Open Access

Technology and Applications of Compiler-Based Precision Tuning
Abstract
In many computer architectures, high-precision calculations are inefficient and power-hungry. As a result, it is often valuable to exploit the tradeoff between precision and performance to utilize the hardware in ways that would otherwise not be possible. Precision tuning is the practice of taking advantage of this tradeoff, and it is very labour-intensive for the programmer to perform manually. Hence, there is increasing interest in compiler-based autotuners, which, however, are still imperfect and hard to use in practice. The underlying issue is that the analyses and transformations required for precision tuning lack the generality needed to be applicable to most existing programs. We attempt to improve the state of the art in precision-tuning compilers by tackling this aspect, introducing a novel data type allocation methodology and approaches for handling mathematical functions and non-single-threaded programming. We also demonstrate the applicability of precision tuning to applications based on machine learning and to safety-critical systems.
Daniele Cattaneo
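The precision/accuracy tradeoff that such tuners exploit can be seen in a few lines. The sketch below is a hypothetical hand-written illustration, not the chapter's compiler machinery: a dot product is re-executed in fixed-point (Q-format) integer arithmetic, and the error shrinks as fractional bits are added.

```python
def to_fixed(x, frac_bits):
    """Quantize a float to a signed fixed-point integer with the given
    number of fractional bits (rounding to nearest)."""
    return round(x * (1 << frac_bits))

def fixed_dot(xs, ys, frac_bits):
    """Dot product carried out entirely in integer arithmetic.
    Each product carries 2*frac_bits fractional bits; we rescale once
    at the end, mimicking what a tuned kernel would do in hardware."""
    acc = 0
    for x, y in zip(xs, ys):
        acc += to_fixed(x, frac_bits) * to_fixed(y, frac_bits)
    return acc / (1 << (2 * frac_bits))

xs = [0.125, -0.5, 0.75, 0.3]
ys = [0.25, 0.1, -0.2, 0.4]
exact = sum(x * y for x, y in zip(xs, ys))
approx12 = fixed_dot(xs, ys, 12)  # Q.12: cheap, error bounded by the quantization step
approx20 = fixed_dot(xs, ys, 20)  # Q.20: more bits, tighter error
```

An autotuner automates exactly this choice of bit allocation per variable, subject to an accuracy budget, across a whole program.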

Open Access

Resource Allocation and Scheduling Problems in Computing Continua for Artificial Intelligence Applications
Abstract
The problem of optimizing the execution of Artificial Intelligence (AI) and Deep Learning (DL) applications in the Computing Continuum has gained remarkable popularity in recent years, owing both to the widespread adoption of AI in real-life scenarios and to the challenging environment introduced by a distributed Edge-to-Cloud paradigm. We tackled the resource selection, scheduling, and placement problem from both a design-time and a runtime perspective, considering, on one hand, AI inference applications characterized by complex workflows with multiple heterogeneous components and, on the other hand, resource-demanding DL training jobs executed on public or private GPU-accelerated clusters.
Federica Filippini

Electronics

Frontmatter

Open Access

Analog Circuit Design for In-Memory Linear Algebra Accelerators
Abstract
Since the introduction of von Neumann’s architecture in 1945, computing systems have been built around the physical separation of memory and computing units, predicated on grounds of flexibility and generality. However, the increasingly data-driven workloads of modern-day applications exacerbate the energy and latency overheads associated with data shuttling. In-memory computing (IMC) radically subverts the classical paradigm by performing computation in situ within the memory elements, unlocking theoretically unrivaled throughput and energy efficiency. Among the wide spectrum of IMC architectures, closed-loop in-memory computing (CL-IMC) has attracted interest for its capability to accelerate computationally heavy operations of increasing use, such as matrix inversion. This chapter focuses on analog closed-loop circuits for in-memory accelerators. A mathematical framework is derived to develop a matrix-based circuit simulator providing orders-of-magnitude speedups with respect to SPICE solvers. New circuits for the acceleration of regularized regressions and linear quadratic estimation are characterized in terms of accuracy and speed, showing improvements over digital solvers for baseband processing in 6G systems and for Kalman filtering. Experimental demonstrations finally provide a real-world implementation of CL-IMC topologies. The obtained results strengthen the position of CL-IMC as a promising candidate for next-generation energy-efficient algebraic accelerators.
Piergiulio Mannocci, Daniele Ielmini
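The closed-loop principle can be caricatured in a few lines of Python. The toy dynamics below are an assumed illustration, not the chapter's simulator: a feedback loop continuously drives the residual b − Ax toward zero, so the "circuit" computes x = A⁻¹b by settling to its fixed point rather than by digital iteration.

```python
def cl_imc_settle(A, b, dt=0.01, steps=5000):
    """Toy dynamical model of a closed-loop in-memory solver: the node
    state x is nudged by the residual b - A x until it settles at the
    solution of A x = b (A is assumed positive definite, so the
    continuous-time loop is stable)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(steps):
        for i in range(n):
            residual = b[i] - sum(A[i][j] * x[j] for j in range(n))
            x[i] += dt * residual  # feedback drives the residual to zero
    return x

# 2x2 symmetric positive-definite example; exact solution is (0.8, 1.4).
x = cl_imc_settle([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

In the analog circuit the "settling" happens at the speed of the op-amp loop; the chapter's matrix-based simulator replaces transient SPICE analysis of exactly such loops with direct matrix computations.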

Open Access

Localized LO Phase Shifting for Phased Array Systems
Abstract
Multiple-input multiple-output (MIMO) is a promising technology to enable spatial multiplexing and improve throughput in wireless communication systems. The phase shifter needed in each element of the phased array to perform the electronic steering of the beam typically introduces a non-ideal transfer function in the signal path and consumes significant area and power, making it a major source of cost and dissipation. To address these issues, this chapter describes a novel technique referred to as localized LO phase shifting, where the array of phase shifters in the receiver and in the transmitter is replaced by an array of synchronized PLLs, each providing the local oscillator signal to one path with fine and inherently linear phase regulation. This approach not only helps reduce power consumption and area occupation in modern CMOS nodes, but also improves phase noise, since the beam is formed by the combination of uncorrelated noise sources. To demonstrate the concept, a dual-element LO phase-shifting system, based on fractional-N digital PLLs in the 8.5-to-10.0-GHz range, is implemented in a standard 28-nm CMOS process. Each element occupies 0.23 mm\(^2\) of area and dissipates 20 mW of power. An arbitrary phase shift between the LO outputs can be set over the full 360\(^{\circ }\) range with a resolution of 0.7 millidegrees. The measured rms phase accuracy is 0.76\(^{\circ }\), and the peak-to-peak phase error is 2.1\(^{\circ }\), without requiring any linearity or gain calibration. Combining the outputs of the two elements, the measured integrated random jitter scales down from 58.5 to 44.6 fs rms.
Francesco Tesolin, Salvatore Levantino
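The jitter improvement from combining elements follows from a simple power argument, sketched below as a sanity check on the reported figures (the 1/√N scaling assumes perfectly uncorrelated noise between elements, an idealization).

```python
import math

# Combining the outputs of N elements: the wanted LO signal adds coherently
# (amplitude scales with N) while uncorrelated random jitter adds in power
# (scales with sqrt(N)), so the rms jitter ideally shrinks by 1/sqrt(N).
single_element_jitter = 58.5  # fs rms, one element (figure from the chapter)
ideal_two_element = single_element_jitter / math.sqrt(2)
```

The ideal two-element figure is about 41.4 fs rms; the measured 44.6 fs sits close to this bound, the remaining gap presumably coming from noise contributions that are not fully uncorrelated between the two elements.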

Systems and Control

Frontmatter

Open Access

Data-Based Control Design for Recurrent Neural Network Models with Stability Guarantees
Abstract
This brief discusses the application of data-driven methods to the control of systems described by recurrent neural network models, which are known to be universal approximators of dynamical systems. The unified hybrid approach outlined here fills a significant gap in the current literature by combining the strengths of direct and indirect methodologies so as to ensure, in a purely data-driven fashion, both the desired performance and closed-loop stability.
William D’Amico
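As a concrete miniature of the model class (a hypothetical toy, not the chapter's design procedure): a recurrent model is a nonlinear state-space map, and when its state matrix is a contraction, trajectories started from different initial conditions forget their past under the same input — the kind of incremental-stability property that stability-guaranteed control designs for RNN models build on.

```python
import math

def rnn_step(x, u, A, B):
    """One step of a simple recurrent model x+ = tanh(A x + B u).
    tanh is 1-Lipschitz, so if the induced norm of A is below 1 the map
    is contractive: state trajectories under the same input converge."""
    n = len(x)
    return [math.tanh(sum(A[i][j] * x[j] for j in range(n)) + B[i] * u)
            for i in range(n)]

A = [[0.3, 0.1], [0.0, 0.2]]      # infinity-norm 0.4 < 1: contractive
B = [0.5, 0.2]
xa, xb = [1.0, -1.0], [0.0, 0.0]  # two very different initial states
for k in range(100):
    u = math.sin(0.1 * k)          # same input sequence for both copies
    xa, xb = rnn_step(xa, u, A, B), rnn_step(xb, u, A, B)
gap = max(abs(p - q) for p, q in zip(xa, xb))
```

After 100 steps the two trajectories are numerically indistinguishable; the gap shrinks by at least the contraction factor at every step.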

Open Access

In Silico Modelling, Analysis, and Control of Complex Diseases: Addressing Clinical Questions, Personalized Treatments, and Healthcare Management
Abstract
Human diseases are complex and dynamic. Understanding and controlling diseases require interdisciplinary approaches, aided by advances in digital technology, data analysis, and computational power. Specifically, in his Ph.D. thesis, Matteo Italia has developed in silico models to study cancers, Restless Legs Syndrome (RLS), and COVID-19. The goals are to answer clinical questions, optimize treatments, and manage healthcare. For cancers, the developed models suggest that dynamic and personalized protocols can overcome drug resistance more effectively than static protocols. For neuroblastoma, the MYCN gene’s role in treatment outcomes is explored. For melanoma, promising drug combinations are identified to overcome vemurafenib resistance. In RLS, the first mathematical model supports the hypothesis that a single neuronal generator triggers periodic leg movements, aiding disease understanding. For COVID-19, a new compartmental model, including vaccination policies and waning protection, emphasizes the importance of global equitable vaccine access to mitigate the pandemic. Overall, this ensemble of works highlights the importance of a systematic computational methodology in healthcare, a sort of engineered modus operandi that combines data analysis, systems and control, mathematics, optimization, simulations, and coding, among others.
Matteo Italia, Fabio Dercole
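The chapter's COVID-19 model is considerably richer; the minimal sketch below only illustrates the compartmental idea, with a vaccinated class and waning protection under assumed (hypothetical) rates.

```python
def sirv_step(s, i, r, v, beta, gamma, nu, omega, dt):
    """One forward-Euler step of a minimal SIR model extended with a
    vaccinated compartment (vaccination rate nu) and waning protection
    (rate omega). All compartments are population fractions, s+i+r+v = 1."""
    new_inf = beta * s * i
    ds = -new_inf - nu * s + omega * v   # susceptibles: infected, vaccinated, or re-exposed by waning
    di = new_inf - gamma * i             # infectious: new cases minus recoveries
    dr = gamma * i                       # recovered
    dv = nu * s - omega * v              # vaccinated, with protection fading over time
    return s + dt * ds, i + dt * di, r + dt * dr, v + dt * dv

s, i, r, v = 0.99, 0.01, 0.0, 0.0
for _ in range(2000):  # 200 days at dt = 0.1 day
    s, i, r, v = sirv_step(s, i, r, v,
                           beta=0.3, gamma=0.1, nu=0.01, omega=0.005, dt=0.1)
```

With these rates the epidemic wave peaks and subsides within the simulated window; policy questions (who to vaccinate, when, where) turn such simulations into the control problems the chapter addresses.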

Open Access

Control of Large-Scale MLD Systems via Multi-agent Reformulation and Decentralized Optimization
Abstract
Motivated by the challenges emerging in the energy sector, this brief presents a comprehensive framework for the optimal operation of large-scale Mixed Logical Dynamical (MLD) systems modeling engineering systems characterized by interleaved physical, logical, and digital components and subject to operational constraints. When the performance index is linear, the problem translates into a Mixed Integer Linear Program (MILP) that is NP-hard and becomes prohibitive as the size of the system increases. In the case of multi-agent systems that are constraint-coupled, decentralized schemes with provable feasibility guarantees are introduced to recover computational tractability by reducing the global MILP to multiple smaller ones that are iteratively solved in parallel. If the matrix modeling the MILP constraints is sparse, a method is proposed to possibly recover a hidden constraint-coupled multi-agent structure to which the decentralized resolution schemes can then be applied. Finally, multi-agent constraint-coupled MILPs with uncertain local constraints are considered and probabilistic feasibility guarantees are derived for their data-based decentralized solution. The framework is tested on an application in the energy sector concerning the provision of ancillary services to the power distribution grid.
Lucrezia Manieri
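The flavor of the decentralized resolution schemes can be conveyed with classical dual (price-based) decomposition on a toy constraint-coupled problem — an illustration only, since the chapter's schemes additionally provide feasibility guarantees that plain dual decomposition lacks. Each agent here solves a trivially small local "MILP" (a binary on/off choice) in parallel, coordinated only by a price on the shared resource.

```python
def solve_agent(profit, usage, price):
    """Each agent's local problem reduces here to a binary choice:
    activate iff the profit beats the resource cost at the current price."""
    return 1 if profit - price * usage > 0 else 0

def dual_decomposition(profits, usages, capacity, steps=200, lr=0.05):
    """Decentralized scheme for the coupled problem
        max sum_i profit_i x_i  s.t.  sum_i usage_i x_i <= capacity, x_i in {0,1}.
    A central price (dual variable) is updated by projected subgradient
    ascent; agents respond independently and in parallel."""
    price = 0.0
    for _ in range(steps):
        xs = [solve_agent(p, u, price) for p, u in zip(profits, usages)]
        excess = sum(u * x for u, x in zip(usages, xs)) - capacity
        price = max(0.0, price + lr * excess)  # raise price while the resource is oversubscribed
    return xs, price

# Three agents, shared capacity for two of them: the price settles where
# exactly the two most profitable agents stay active.
xs, price = dual_decomposition(profits=[5.0, 4.0, 3.0],
                               usages=[3.0, 3.0, 3.0], capacity=6.0)
```

The global MILP never has to be assembled: only the scalar price is exchanged, which is what makes such schemes attractive for large-scale energy systems.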

Telecommunications

Frontmatter

Open Access

Forensic Analysis of Satellite Imagery: Challenges and Solutions
Abstract
Satellite images are now a widespread asset that is easily obtainable on the Web. Many portals offer these images for free, and their role in sensitive applications, such as natural disaster response, intelligence, and military operations, is becoming paramount. For these reasons, satellite images can be a target for malicious manipulations. Multimedia Forensics (MMF) is the discipline concerned with assessing the authenticity of multimedia data. Satellite images pose new challenges to MMF due to (i) being an inherently multimodal data asset, with some modalities, such as Synthetic Aperture Radar (SAR) signals, never before investigated by the community; and (ii) having a complex processing pipeline in which forensic traces are challenging to model. In this chapter, we tackle the forensic analysis of satellite images, namely panchromatic and SAR imagery, and propose using Convolutional Neural Networks (CNNs) to extract forensic information. In particular, we consider the problems of source attribution and image splicing localization. In both situations, CNNs prove effective and outperform techniques developed for standard digital pictures, substantiating the need for forensic tools tailored to remote sensing data.
Edoardo Daniele Cannas
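The chapter's tools are CNN-based; as a point of reference, classical source attribution rests on the idea that each sensor leaves a fixed noise fingerprint in its images. The toy below (synthetic "fingerprints" and residuals, purely illustrative) shows the simplest correlation detector behind that idea.

```python
import random

def correlation(a, b):
    """Normalized (Pearson) correlation between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

random.seed(0)
n = 4096
# Two candidate "sensor fingerprints" (in practice estimated from many images).
fp_a = [random.gauss(0, 1) for _ in range(n)]
fp_b = [random.gauss(0, 1) for _ in range(n)]
# Noise residual of a test image taken with sensor A: fingerprint buried in noise.
residual = [f + random.gauss(0, 3) for f in fp_a]

score_a = correlation(residual, fp_a)
score_b = correlation(residual, fp_b)
```

The residual correlates with the true sensor's fingerprint and not with the other's; the chapter's CNNs learn far richer, modality-specific traces where such fixed-pattern models break down (e.g., in SAR processing pipelines).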

Open Access

Digital Signal Processing Methodologies for Audio System Modeling and Audio Output Enhancement
Abstract
This chapter summarizes some of the main results I obtained during my Ph.D. studies at Politecnico di Milano, under the supervision of Professor Augusto Sarti and Professor Alberto Bernardini. Audio systems have nowadays become pervasive in many market sectors, such as consumer electronics and biomedical devices. To accurately represent, digitally replicate, and process the signals of such complex systems, it is crucial to develop multiphysics models that capture their nonlinear behaviors. In this chapter, I introduce novel multiphysics models of audio systems together with innovative digital signal processing techniques, with the ultimate goal of improving their acoustic response. By leveraging the efficiency and accuracy of Wave Digital Filters, I propose novel modeling methods to incorporate the various physical domains involved in audio systems. I introduce iterative methods for streamlined emulation and parallel implementation, addressing, at the same time, complex nonlinearities such as magnetic saturation and hysteresis. I present new linearization and virtualization algorithms to manipulate the audio device behavior, leveraging the newly introduced multiphysics models. Finally, I combine psychoacoustic methodologies with deep-learning models to tackle operating conditions affected by very strict physical constraints. With this investigation, I cover diverse audio signal processing tasks, offering fresh insights and practical solutions across various application scenarios.
Riccardo Giampiccolo
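At the core of the Wave Digital approach is a change of variables from Kirchhoff pairs (voltage, current) to wave pairs at each port. The sketch below shows only this basic mapping, not the chapter's multiphysics machinery.

```python
def to_waves(v, i, R):
    """Wave Digital Filters replace the Kirchhoff pair (v, i) at a port
    with an incident/reflected wave pair, given a free reference
    (port) resistance R: a = v + R i, b = v - R i."""
    return v + R * i, v - R * i

def to_kirchhoff(a, b, R):
    """Inverse mapping: port voltage and current recovered from waves.
    With a well-chosen R, reactive elements become trivial in the wave
    domain (e.g., a capacitor with R = T/(2C) under the bilinear
    transform reduces to a one-sample delay, b[n] = a[n-1])."""
    return (a + b) / 2, (a - b) / (2 * R)

v, i, R = 1.5, 0.02, 50.0
a, b = to_waves(v, i, R)
v2, i2 = to_kirchhoff(a, b, R)
```

The freedom in choosing each port resistance is what lets WDF structures absorb delay-free loops and remain efficient, which the chapter exploits when coupling electrical, mechanical, and acoustic domains.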

Open Access

Optimized ISAC Waveform Design in 6G Networks
Abstract
Sixth-generation (6G) wireless networks introduce Integrated Sensing and Communication (ISAC) technology, enabling simultaneous communication and sensing through shared time, frequency, space, and energy resources. Designing ISAC waveforms is challenging due to the need to support both high-capacity communication and accurate sensing. This work presents a dual-domain waveform design that integrates classical Orthogonal Frequency Division Multiplexing (OFDM) with a tailored sensing signal in the delay-Doppler domain, with carefully adjusted power to enhance sensing resolution without significantly affecting the communication rate. Moreover, the work tackles practical challenges such as high sidelobes and decreased sensing accuracy due to underutilized frequency-time resources. To address these issues, an optimal resource allocation over time, frequency, and energy is defined, along with a novel interpolation technique based on Schatten-p quasi-norm matrix completion. Numerical results show that these approaches outperform current methods, demonstrating the potential for improving 6G networks.
Silvia Mura
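The classical OFDM sensing step that such dual-domain designs build on can be sketched directly (an illustration only, not the chapter's optimized waveform): dividing received subcarrier symbols by the transmitted ones strips the communication data, and an IDFT of the resulting channel estimates peaks at the target's delay bin.

```python
import cmath

def range_profile(tx, rx):
    """Classical OFDM radar processing: per-subcarrier division removes
    the data symbols; the IDFT of the channel estimates yields a range
    (delay) profile that peaks at the target's delay bin."""
    n = len(tx)
    h = [r / t for r, t in zip(rx, tx)]
    return [abs(sum(h[k] * cmath.exp(2j * cmath.pi * k * d / n)
                    for k in range(n)) / n)
            for d in range(n)]

n = 16
# QPSK-like data on each subcarrier (the communication payload).
tx = [cmath.exp(1j * (k % 4) * cmath.pi / 2) for k in range(n)]
# Single simulated target at delay bin 3: each subcarrier k is rotated
# by exp(-j 2 pi k * 3 / n).
rx = [tx[k] * cmath.exp(-2j * cmath.pi * k * 3 / n) for k in range(n)]
profile = range_profile(tx, rx)
```

With fully occupied subcarriers the profile is a clean unit peak; the sidelobe and interpolation problems the chapter addresses arise precisely when parts of the frequency-time grid are reassigned or left unused.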
Metadata
Title
Special Topics in Information Technology
Edited by
Simone Garatti
Copyright Year
2025
Electronic ISBN
978-3-031-80268-3
Print ISBN
978-3-031-80267-6
DOI
https://doi.org/10.1007/978-3-031-80268-3