
Integrated Formal Methods

20th International Conference, iFM 2025, Paris, France, November 19–21, 2025, Proceedings

  • 2026
  • Book

About this book

This book constitutes the refereed proceedings of the 20th International Conference on Integrated Formal Methods, iFM 2025, which took place in Paris, France, during November 19–21, 2025.

The 23 full papers presented together with 1 invited paper in this volume were carefully reviewed and selected from 69 submissions. The conference focuses on all aspects of formal methods, including deductive verification, model checking, reachability analysis, model-based testing, synthesis, timing and stochastic modelling, cyber-physical systems, autonomous systems, security and blockchain.

Table of Contents

Frontmatter

Deductive Verification

Frontmatter
Unfolding Iterators: Specification and Verification of Higher-Order Iterators in OCaml
Abstract
Although iteration is a central notion of every programming language, reasoning about it formally and modularly proves to be a non-trivial feat, especially in the context of higher-order iteration. In this paper, we present a generic approach to the specification and deductive verification of higher-order iterators written in the OCaml language. Our methodology rests on two building blocks: first, the use of the Gospel specification language to describe the general behaviour of any iteration schema; second, the use of the Cameleer framework to deductively verify that every iteration client is correct with respect to its logical specification. To validate our approach, we develop a set of verified case studies, ranging from classic list iterators to graph algorithms implemented in the widely used OCamlGraph library.
Ion Chirica, Mário Pereira
When Separation Arithmetic is Enough
Abstract
In the practice of deductive program verification, it is desirable to make proofs as automated as possible. Today, the best tools for that are SMT solvers, which are able to handle both first-order logic and linear arithmetic. However, SMT solvers are not well suited for inductive reasoning, which is often needed to deal with recursive data structures such as linked lists or trees. In this paper, we propose a technique for specifying and proving imperative programs manipulating pointer-based recursive data structures, which stays within reach of first-order provers. The idea is to map a recursive structure onto a flat integer-indexed sequence, in such a way that separation and frame properties can be expressed using only simple arithmetic relations. We illustrate this approach with two examples: an original variant of list reversal and Morris’s algorithm for constant-space traversal of a binary tree.
Jean-Christophe Filliâtre, Andrei Paskevich, Olivier Danvy
Verified Implementation of Associative Containers with Iterators Using Threaded Red-Black Trees
Abstract
We address the verification of abstract data types (ADTs) implementing associative containers like maps and sets with fine-grained specifications for iterators, which allow for multiple readers and writers in client code. Our verified implementations are written in the verification-aware programming language Dafny. In this paper we introduce new methodological contributions in verifying heap-allocated data structures, such as separation between heap-based components and functional behavior, and fine-grained control of verification proofs. Additionally, we have implemented a library of associative containers in Dafny, including a verified implementation of threaded red-black trees with iterators. These contributions aim to reduce proof complexity in order to improve verification times, while maintaining proper encapsulation.
Jorge Blázquez, Manuel Montenegro, Clara Segura
Formal Verification of Legal Contracts: A Translation-Based Approach
Abstract
Stipula is a domain-specific programming language designed to model legal contracts with enforceable properties, especially those involving asset transfers and obligations. This paper presents a methodology to formally verify the correctness of Stipula contracts through translation into Java code annotated with Java Modeling Language specifications. As a verification backend, the deductive verification tool KeY is used. Both the translation and the verification of partial and total correctness for a large subset of Stipula contracts (those with disjoint cycles) are fully automatic. Our work demonstrates that a general-purpose deductive verification tool can be used successfully in a translation approach.
Reiner Hähnle, Cosimo Laneve, Adele Veschetti

Model Checking

Frontmatter
CTL Model Checking Partially Specified Systems
Abstract
The behavioral specifications of a system are often partial at early stages of development due to pending design decisions. We view such specifications as being parameterized by these decisions and consider the problem of identifying a possible concretization, i.e., taking a subset of these decisions, to ensure that the system conforms to the desired requirements. We capture such a partially specified system as a partially-labeled Kripke structure (plKS), where certain propositions labeling the states of the Kripke structure correspond to pending design decisions and thus may be unknown. We then reduce the verification problem to model checking different plKS instances, each corresponding to a specific set of design decisions taken, inducing a semi-lattice on the instances. Central to our solution strategy is the effective and efficient exploration of this semi-lattice and the application of model checking techniques to plKSs with 3-valued semantics of temporal properties, to take into account unknown state labels. We also address the problem of identifying an optimal instance of a partially specified system that satisfies a desired property, where the cost of design decisions drives the optimality criterion. We use a prototype implementation of our strategy to validate its viability on a multi-objective path planning problem.
Eshita Zaman, Christopher Johannsen, Andrew S. Miner, Gianfranco Ciardo, Samik Basu
Extending Timed Automata with Clock Derivatives
Abstract
The increasing complexity of safety-critical systems in domains like aerospace, robotics, and industrial control demands precise modeling and verification methods. While Timed Automata (TA) and Distributed Timed Automata (DTA) are standard formalisms for real-time systems, they assume synchronized clocks or lack the expressiveness to capture clock drift and indirect timing dependencies. To overcome these limits, we propose Timed Automata with Clock Derivatives (idTA), extending TA with rate constraints to model independent clock evolution. We also introduce DLν, a temporal logic over Multi-Timed Labeled Transition Systems (MLTS), capturing properties of systems with unsynchronized clocks. We show that model checking for DLν is EXPTIME-complete. Finally, we present MIMETIC, a model checking tool supporting idTA and DLν, providing a platform for analyzing clock interactions and verifying Distributed Real-time Systems (DRTS).
David Cortés, Jean Leneutre, Vadim Malvone, James Ortiz, Pierre-Yves Schobbens
Model Checking Buffered Durable Linearizability in CSP
Abstract
Non-volatile memory (NVM) is a high-performance memory technology that supports persistency (i.e., durability) of data in case of a system crash (e.g., power failure). For implementations of concurrent objects (e.g., concurrent data structures such as stacks and hash maps), NVM poses a correctness problem: can the implementation recover to a consistent state after a crash and continue execution? In this paper, we study a notion of correctness known as buffered durable linearizability for implementations of periodically persistent concurrent objects, and develop a refinement-based proof method for buffered durable linearizability. To support our proofs, we develop a generic abstract specification of a buffered durable linearizable object that is generated from the object’s sequential specification. Our main case study is a concurrent hash map known as Dalí, which we show satisfies buffered durable linearizability. Specifically, we use FDR, a model checker based on CSP, that offers a fully automatic method for checking trace refinement.
Chelsea Edmonds, John Derrick, Brijesh Dongol, Gerhard Schellhorn, Heike Wehrheim

Cyber-Physical Systems

Frontmatter
Safe Temperature Regulation: Formally Verified and Real-World Validated
Abstract
This paper presents a case study in the design and formal verification of a safe controller for the generic two-element lumped-capacitance model of temperature regulation, using formal cyber-physical system (CPS) theorem proving. The coupled dynamics and the absence of a fallback state reflect the complexities of real-world control systems and make it a representative challenge for theorem proving. A controller is developed using a design-by-invariant methodology. Verification with the axiomatic theorem prover KeYmaera X reveals critical assumptions for safety and pushes the frontier of CPS theorem proving. The parametric, general, and provably safe controller can be applied to a wide range of temperature regulation tasks and is validated by deploying an instance of the verified controller on a physical system, demonstrating its robustness under model inaccuracies and confirming its real-world usability.
Carlos Isasa, Noah Abou El Wafa, Claudio Gomes, Peter Gorm Larsen, André Platzer
Online Model Checking for Anomaly Detection in Industrial Control Systems
Abstract
Cyber attacks on Industrial Control Systems (ICSs) are becoming increasingly sophisticated, undermining the ability of these systems to manage critical processes and compromising the availability of key public infrastructure. Detecting system anomalies is an important element in the identification of cyber attacks, allowing the rapid deployment of crucial incident-response activities. In this paper, we introduce a novel anomaly detection approach that integrates SPIN model checking into ICS environments to detect anomalies in live system data. Our approach uses the application code extracted from Programmable Logic Controllers (PLCs) to generate the dynamic system model, requiring only a small amount of test data to validate its design. We evaluate our approach by generating models using a representative physical hydroelectric dam testbed containing real PLCs. These models are used to analyse synthetic data containing potential irregularities that could occur within the dam as a result of false data injection attacks. Our approach was shown to identify anomalies and verify normal system behaviour. Our evaluation shows that the models achieved high performance while maintaining explainability, delivering 99.99% precision, 99.05% recall, a 99.52% F1-score, and 99.05% accuracy.
Douglas Fraser, Alice Miller, Marco Cook, Dimitrios Pezaros
Using Bayesian Inference and Flowpipe Construction to Bound Predictions of Biogas Production at Wastewater Treatment Plants
Abstract
In this paper, we use a novel combination of probabilistic programming and flowpipe construction to predict bounds on future biogas production for a wastewater treatment plant given operational data from the past. The operation of the anaerobic digester of a wastewater treatment plant is modeled through an Ordinary Differential Equation (ODE) model with unknown parameters and unobservable internal states. We are given data from the plant’s operation that includes the daily measurement of the incoming waste volumes and concentrations along with the volume of biogas produced. We formalize our problem as first estimating the unknown parameters and initial conditions using Bayesian inference, such that the past behavior of the system is “compatible” with the observed data. Next, we propagate those input parameter estimates forward using flowpipe construction. To enable rapid and accurate flowpipe construction, we exploit the monotonicity property of the dynamical model of the plant. The procedure yields an over-approximation of the upper and lower bounds on biogas production, given the inputs. As a result, it can be used to formally bound future predictions that might inform facility operations. We implemented this procedure using a first-order kinetics model of hydrolysis to model the anaerobic digester of a real-world case study facility. We demonstrate how this method constructs realistic bounds for biogas prediction from the historical data that contain the actual ground-truth data 100% of the time. Our approach outperforms the standard approach that computes a posterior predictive distribution from samples both in terms of time and accuracy.
Fletcher Chapin, Ankur Varma, Samuel Akinwande, Meagan Mauter, Sriram Sankaranarayanan

Reachability

Frontmatter
CHC-Based Reachability Analysis via Cycle Summarization
Abstract
Modern reachability analysis techniques are highly effective when applied to software safety verification. However, they still struggle with certain classes of problems, particularly the verification of programs with complex control flow and deep nested loops. In this paper, we introduce Cycle Summarization-based Reachability Analysis (CSRA), a new Constrained Horn Clause (CHC) based approach for reachability analysis of nested-loop software. Our technique relies on the generation and refinement of cycle summaries within the CHC system. CSRA analyzes cycles in a modular manner, constructing summaries and cycle unrollings. Cycle summaries in our approach are used both to prove safety and detect potential safety violations. This enables more efficient exploration of nested loops. The prototype of CSRA is implemented within the Golem CHC solver. An empirical comparison with other reachability analysis techniques demonstrates that our approach is highly competitive in both proving safety and constructing counterexamples.
Konstantin Britikov, Grigory Fedyukovich, Natasha Sharygina
Reachability Analysis of Function-as-a-Service Scheduling Policies
Abstract
Functions-as-a-Service (FaaS) is a Serverless Cloud paradigm where a platform manages the execution (e.g., scheduling, runtime environments) of stateless functions. Recently, domain-specific languages like APP and aAPP have been developed to express per-function scheduling policies, e.g., enforcing the allocation of functions on nodes that enjoy low data-access latencies thanks to proximity and connection pooling.
Reachability analysis of FaaS scheduling policies is fundamental to checking quality-of-service properties, like preventing the scheduling of functions on workers that cannot sustain expected performance levels, and to verifying security properties, e.g., that safety-critical functions cannot run on nodes with untrusted ones. We investigate the complexity of reachability analysis for APP and aAPP; the latter extends APP with constraints for placing functions according to the absence/presence of other (anti-)affine functions on workers. We show that reachability analysis has linear time complexity in APP, while the addition of affinities (in aAPP) places reachability in PSPACE. Given the computational complexity correspondence between reachability analysis in aAPP and automatic planning problems (both are in PSPACE), we investigate the exploitability of planners, i.e., tools specialised in solving planning problems, for realising static analysers of reachability-like problems for aAPP.
Giuseppe De Palma, Saverio Giallorenzo, Jacopo Mauro, Matteo Trentin, Gianluigi Zavattaro
From Zonotopes to Proof Certificates: A Formal Pipeline for Safe Control Envelopes
Abstract
Synthesizing controllers that enforce both safety and actuator constraints is a central challenge in the design of cyber-physical systems. While zonotope-based reachability methods deliver impressive scalability, only parts of these methods have been formalized. Consequently, no practical tool provides a fully verified end-to-end pipeline, leaving an assurance gap for safety-critical systems. Deductive verification with the hybrid system prover KeYmaera X could, in principle, resolve this assurance gap, but the high-dimensional set representations required for reachability analysis overwhelm its reasoning based on quantifier elimination. To close this gap, we develop a verification pipeline that combines scalability with formal rigor by computing control-invariant sets using high-performance reachability algorithms and verifying them using novel logical proof rules. Computationally intensive zonotope containment tasks are offloaded to efficient numerical backends, which return compact witnesses that KeYmaera X can validate rapidly. We show the practical utility of our approach through representative case studies.
Jonathan Hellwig, Lukas Schäfer, Long Qian, André Platzer, Matthias Althoff

Autonomous Systems

Frontmatter
Formal Modeling of Trust in Autonomous Delivery Vehicles
Abstract
Trust modeling is critical for the safe deployment of autonomous systems, yet existing approaches that rely primarily on historical performance data fail to capture dynamic operational contexts and real-time agent capabilities. This paper introduces a formal framework for modeling actual trust in Autonomous Delivery Vehicles (ADVs)—a context-aware trust model that evaluates an agent’s current ability, knowledge state, and commitment to task completion rather than relying solely on past behavior. We present a systematic refinement-based approach using Event-B formal methods to model trust in ADV task delegation scenarios. Our methodology progresses through five refinement levels, transitioning from an untrusted baseline model to a comprehensive trust framework that integrates three key dimensions: (1) strategic trust (capability verification), (2) epistemic trust (knowledge-based assessment), and (3) commitment trust (availability and willingness evaluation). Each refinement level addresses specific failure modes identified in traditional delegation systems where tasks may be assigned to incapable, unknown, or unavailable vehicles. The formal model is verified using the Rodin theorem prover with 93 proof obligations, achieving 90% automatic verification. Our approach demonstrates how actual trust can be systematically integrated into autonomous systems through correctness-by-construction refinement, ensuring that task assignments occur only when trust conditions are formally verified. The framework provides a foundation for trustworthy task delegation in multi-agent autonomous systems and offers insights for developing reliable AI-driven delivery networks.
Manar Altamimi, Asieh Salehi Fathabadi, Vahid Yazdanpanah
The CAISAR Platform: Extending the Reach of Machine Learning Specification and Verification
Abstract
The formal specification and verification of machine learning models have advanced remarkably in less than a decade, leading to a profusion of verification tools that provide mathematical guarantees about model properties. However, this growing diversity risks ecosystem fragmentation, making it difficult to compare tools beyond narrowly defined benchmarks. Moreover, much of the progress to date has focused on a limited class of properties, particularly local robustness. While existing tools are increasingly effective at verifying such properties, more complex ones, such as those involving multiple neural networks, remain beyond their capabilities: these properties cannot currently be expressed in their specification languages, nor can they be directly verified. This applies even to the winning verification tools of the International Verification of Neural Networks Competition (VNN-Comp).
In this tool paper, we present CAISAR, an open-source platform for specifying and verifying properties of machine learning models, with particular focus on neural networks and support vector machines. CAISAR provides a high-level language for specifying complex properties and integrates several state-of-the-art verifiers for their automatic verification. Through concrete use cases, we show how CAISAR leverages automated graph-editing techniques to translate high-level specifications into queries for the supported verifiers, bridging the (embedding) gap between user specifications and the corresponding ones that are actually verified.
Michele Alberti, François Bobot, Julien Girard-Satabin, Alban Grastien, Aymeric Varasse, Zakaria Chihani

Security and Blockchain

Frontmatter
Security of the Lightning Network: Model Checking a Stepwise Refinement with TLA+
Abstract
Payment channel networks such as the Lightning Network are an approach to improve the scalability of blockchain-based cryptocurrencies. The complexity of Lightning, the Lightning Network’s protocol, makes it hard to assess whether the protocol is secure. To enable computer-aided security verification of Lightning, we formalize the protocol in TLA+ and formally specify the security property that honest users are guaranteed to retrieve their correct balance. Model checking provides a fully automated verification of the security property; however, the state space of the protocol’s specification is so large that direct model checking is infeasible. We make model checking of Lightning possible using two refinement steps that we verify using proofs. In the first step, we abstract the model of time; in the second step, we use compositional reasoning to separately model check a single payment channel and multi-hop payments. These refinements reduce the state space sufficiently to allow for model checking Lightning with small finite models. Our results indicate that the current specification of Lightning is secure.
Matthias Grundmann, Hannes Hartenstein
Formal Verification of PKCS#1 Signature Parser Using Frama-C
Abstract
Message parsing is a complex, security-critical problem, as demonstrated by numerous real-world exploits on parsers, e.g. on the PKCS#1 (Public-Key Cryptography Standard) v1.5 signature, the X.509 certificate chain, or, infamously, on a TLS extension during the Heartbleed attack. In this case study, we perform formal verification of a PKCS#1 v1.5 signature parser using Frama-C, where the verification of the parser is realized for the first time directly over the actual implementation in C. This brings the highest guarantees of security and functional properties, while leaving developers the flexibility to adapt the code to the project’s specific requirements. We present the proven properties, our verification approach, and results. In particular, this work rules out applications of any variants of Bleichenbacher’s signature forgery and ensures that we are able to detect potential parser incompatibilities. This work opens the door to future extensions to other protocols, for example, for parsing the DER ASN.1 encoding of X.509 certificates and CRLs (Certificate Revocation Lists).
Martin Hána, Nikolai Kosmatov, Virgile Prevosto, Julien Signoles
Game Modeling of Blockchain Protocols
Abstract
Reasoning about incentives in a blockchain protocol can be captured by game-theoretic modeling. We present modeling principles sufficient to create a faithful representation of a blockchain protocol as an extensive form game. Such games are then suitable for automatically establishing game-theoretic security. We showcase the semi-automated generation of the game models for two parts of Bitcoin’s Lightning protocol: the closing of a channel and the routing of a payment along channels. Additionally, we provide a domain-specific language, which eases the implementation of the games. We believe our modeling principles and guidelines strengthen machine-supported modeling practices.
Sophie Rain, Anja Petković Komel, Michael Rawson, Laura Kovács
Concurrency Under Control: Systematic Analysis of SDN Races Hazards
Abstract
Race conditions in Software-Defined Networks (SDNs) pose a significant threat to the correctness and reliability of network behavior, particularly in dynamic and distributed control plane environments. This paper presents a formal approach to identifying and analyzing races in SDNs using vector clocks and DyNetKAT, a domain-specific language for specifying and reasoning about dynamic packet-processing policies. By modeling network execution traces with vector clocks, we detect concurrent events through incomparable clock states, which signal potential races. We then assess the harmfulness of these races by comparing the DyNetKAT expressions associated with the corresponding transitions, determining if the race affects the network’s behaviour. Our methodology enables systematic detection of harmful races that lead to packet drops, policy violations, or inconsistent forwarding behaviors. Through case studies and experimental validation on real network topologies, we demonstrate the effectiveness of our approach in uncovering subtle concurrency bugs that are often missed by traditional testing. This work provides a foundation for more robust SDN verification tools and contributes to the safe evolution of programmable network infrastructures.
Georgiana Caltais, Andrei Covaci, Hossein Hojjat

Model-Based Testing and Synthesis

Frontmatter
Model-Based Testing of an Intermediate Verifier Using Executable Operational Semantics
Abstract
Lightweight validation techniques, such as those based on random testing, are sometimes practical alternatives to full formal verification: they provide valuable benefits, such as finding bugs, without requiring a disproportionate effort. In fact, such validation techniques can be useful even for fully formally verified tools, by exercising the parts of a complex system that go beyond the reach of formal models.
In this context, this paper introduces bcc: a model-based testing technique for the Boogie intermediate verifier. bcc combines the formalization of a small, deterministic subset of the Boogie language with the generative capabilities of the PLT Redex language engineering framework. In essence, bcc uses PLT Redex to generate random Boogie programs and to execute them according to a formal operational semantics; it then runs the same programs through the Boogie verifier. Any inconsistency between the two executions (in PLT Redex and with Boogie) may indicate a potential bug in Boogie’s implementation.
To understand whether bcc can be useful in practice, we used it to generate three million Boogie programs. These experiments found 2% of cases indicative of completeness failures (i.e., spurious verification failures) in Boogie’s toolchain. These results indicate that lightweight analysis tools, such as those for model-based random testing, are also useful to test and validate formal verification tools such as Boogie.
Lidia Losavio, Marco Paganoni, Carlo A. Furia
Quick Theory Exploration for Algebraic Data Types via Program Transformations
Abstract
We present an approach to theory exploration, i.e., a lemma synthesis procedure which discovers algebraic laws about recursive functions over Algebraic Data Types (ADTs). The approach, LemmaCalc, builds on, adapts, and extends program calculation techniques known from the optimization of functional programs (fusion and accumulator removal). Our approach avoids the exponential search space of term enumeration (SyGuS), which can render state-of-the-art techniques prohibitively expensive or even useless on large theories with more than a handful of function symbols. In this paper we describe how this approach can be realized and contribute a robust implementation. The evaluation shows that the different methods have complementary strengths and that each can produce lemmas not found by the other, but LemmaCalc scales much better to larger theories.
Gidon Ernst, Grigory Fedyukovich
Auto-Generating Visual Editors for Formal Logics with Blockly
Abstract
Formal logics are central to the specification and verification of computational systems, yet their adoption outside highly specialised domains is hindered by steep learning curves and error-prone textual notations. Making these notations more approachable is particularly important in education and in settings where non-expert stakeholders need to engage with formal reasoning. We present a framework that automatically generates block-based visual editors for formal logics using the Blockly library. From a structured JSON specification of syntax and composition rules, our tool produces browser-based editors in which formulas are constructed by combining graphical blocks rather than writing code. This approach lowers syntactic barriers for learners and non-experts, while allowing experts to define new logics without manual interface design. Although integration with verification backends is planned, the tool already provides a reusable foundation for accessible and customisable logic editors.
Angelo Ferrando, Peng Lu, Vadim Malvone

Timing and Stochastic Modelling

Frontmatter
Automata Learning – Expect Delays!
Abstract
This paper studies active automata learning (AAL) in the presence of stochastic delays. We consider Mealy machines that have stochastic delays associated with each transition and explore how the learner can efficiently arrive at faithful estimates of those machines, the precision of which crucially relies on repetitive sampling of transition delays. While it is possible to naïvely integrate the delay sampling into AAL algorithms such as L*, this leads to considerable oversampling near the root of the state space. We address this problem by conceptually separating the learning of behavior and delays, so that the learner uses the information gained while learning the logical behavior to arrive at efficient input sequences for collecting the needed delay samples. We put emphasis on treating cases in which identical input/output behaviors might stem from distinct delay characteristics. Finally, we provide empirical evidence that our method outperforms the naïve baseline across a wide range of benchmarks and investigate its applicability in a realistic setting by studying the join order in a relational database.
Gabriel Dengler, Sven Apel, Holger Hermanns
Distributed Timed Scenarios
Abstract
We introduce the notion of a distributed timed scenario (DTS) and define the semantics of a DTS as a set of distributed behaviours. We define the notion of consistency of a DTS in terms of its semantics and develop a constructive method for determining the consistency. We define the semantic equivalence of two distributed scenarios and develop two different, but related, canonical representations for an equivalence class of distributed scenarios. Equipped with these results we tackle the problem of realisability of sets of sequential scenarios as distributed scenarios. We develop an algorithm that determines whether a set of sequential scenarios can be realised by a DTS, i.e., whether there is a DTS whose semantics is identical to the union of the behaviours allowed by the members of the set. If so, we produce such a DTS.
Neda Saeedloei, Feliks Kluźniak
Backmatter
Title
Integrated Formal Methods
Editors
Ferruccio Damiani
Marie Farrell
Copyright Year
2026
Electronic ISBN
978-3-032-10794-7
Print ISBN
978-3-032-10793-0
DOI
https://doi.org/10.1007/978-3-032-10794-7

PDF files of this book have been created in accordance with the PDF/UA-1 standard to enhance accessibility, including screen reader support, described non-text content (images, graphs), bookmarks for easy navigation, keyboard-friendly links and forms and searchable, selectable text. We recognize the importance of accessibility, and we welcome queries about accessibility for any of our products. If you have a question or an access need, please get in touch with us at accessibilitysupport@springernature.com.
