
Languages, Compilers, Analysis - From Beautiful Theory to Useful Practice

Essays Dedicated to Alan Mycroft on the Occasion of His Retirement

  • 2026
  • Book

About this book

This Festschrift is dedicated to Alan Mycroft, who formally retired from the University of Cambridge after almost 40 years. Following a BA in Mathematics at the University of Cambridge in 1977, Alan obtained the Diploma in Computer Science in 1978. He then completed a PhD in Edinburgh in 1981 on abstract interpretation and optimising transformations for applicative programs, supervised by Rod Burstall and Robin Milner. After an EPSRC postdoctoral fellowship in Edinburgh and an assistant professorship at Chalmers University, he returned to the University of Cambridge in 1984 and remained there until his retirement in 2023. Alan is known for pioneering contributions to programming-language theory and practice, covering both design and implementation. His work ranges from compilation and optimisation techniques, through semantics, static analysis, and type systems, to parallel, concurrent, and dataflow programming. Together with Arthur Norman he developed the Norcroft C compiler; he co-authored the book Java 8 in Action: Lambdas, Streams, and Functional-Style Programming; and he co-founded the Raspberry Pi Foundation, an enormously successful initiative for developing programming skills and mindsets across all age groups. Beyond his excellent research profile, Alan has been an inspiring teacher, mentor, and collaborator, always generous with encouragement and feedback. The impact and reach of his career are reflected in the breadth of topics in this volume, a fitting tribute to him.

Table of Contents

Frontmatter

Compilers and Code Generators

Frontmatter
The Norcroft Compiler at Arm
Abstract
In the mid-1980s, a ‘couple of academics wearing startup-company hats’ were developing a retargetable C compiler. The most prolific target for this compiler was the ARM processor, a novel RISC architecture initially designed by Acorn and subsequently developed by Arm with input from its partners. In this paper, we will review 1980s compiler technology and see how the Norcroft compiler significantly advanced the state of the art, particularly highlighting the friendly error messages, flexible intermediate representation and graph-colouring register allocation. We will review the reach and legacy of the Norcroft compiler over the past four decades.
Jeremy Singer, Lee Smith
When Obfuscations Preserve Constant-Time
Abstract
Obfuscating compilers are designed to protect a program by obscuring its meaning and impeding the reconstruction of its original source code. Usually, the main concern with such compilers is their correctness and their robustness against reverse engineering. By contrast, little attention is paid to ensuring that obfuscation introduces no attacks in the transformed program that were not present in the original one. Cryptographic libraries often resort to the so-called cryptographic constant-time property to guarantee that no attacker can learn any secret values by monitoring and analyzing program execution time.
Here, we propose a sufficient condition to check if a given obfuscation preserves the constant-time property. Checking this condition amounts to a simple and efficient static analysis that can be easily implemented. We consider several obfuscating transformations implemented in popular obfuscating compilers (e.g. the Tigress C compiler and O-MVLL). By relying on our condition we prove that some of them preserve constant-time, while others do not. For the latter we propose a translation validation procedure that applies our condition to the given program and verifies whether the constant-time property is nevertheless preserved.
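The constant-time property at stake can be illustrated with a small sketch (illustrative code, not taken from the paper): a selection that branches on a secret has secret-dependent control flow and thus secret-dependent timing, while a masked selection executes the same instructions regardless of the secret bit.

```python
# Illustrative sketch of the cryptographic constant-time property:
# select between a and b on a secret bit, leaky vs. constant-time style.
# Function names and the 32-bit width are assumptions for this example.

def select_branching(secret_bit, a, b):
    """Leaky: which branch executes (and thus timing) depends on the secret."""
    if secret_bit:
        return a
    return b

def select_masked(secret_bit, a, b):
    """Constant-time style: the same arithmetic runs for either secret value."""
    mask = -secret_bit & 0xFFFFFFFF          # all-ones if bit is 1, zero if 0
    return (a & mask) | (b & ~mask & 0xFFFFFFFF)
```

An obfuscation that rewrites `select_masked` back into a secret-dependent branch (for instance, an opaque-predicate or control-flow-flattening pass) is exactly the kind of transformation whose constant-time preservation the paper's condition is meant to check.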
Matteo Busi, Pierpaolo Degano, Letterio Galletta
A Symbolic Computing Perspective on Software Systems
Abstract
Symbolic mathematical computing systems have served as a canary in the coal mine of software systems for more than sixty years. They have introduced or have been early adopters of programming language ideas such as dynamic memory management, arbitrary precision arithmetic and dependent types. These systems have the property of being highly complex while at the same time operating in a domain where results are well-defined and clearly verifiable. These software systems span multiple layers of abstraction with concerns ranging from instruction scheduling and cache pressure up to algorithmic complexity of constructions in algebraic geometry. All of the major symbolic mathematical computing systems include low-level code for arbitrary precision arithmetic, memory management and other primitives, a compiler or interpreter for a bespoke programming language, a library of high-level mathematical algorithms, and some form of user interface. Each of these parts raises multiple deep issues. We present some lessons learned from this environment and free-flowing opinions on topics including:
  • Portability of software across architectures and decades;
  • Infrastructure to embrace and infrastructure to avoid;
  • Choosing base abstractions upon which to build;
  • How to get the most out of a small code base;
  • How developments in compilers both to optimise and to validate code have always been and remain of critical importance, with plenty of remaining challenges;
  • The need for individuals, Alan Mycroft in particular, who can span from hand-crafting Z80 machine code up to the most abstruse high-level code analysis techniques; and
  • Why it is important to teach full-stack thinking to the next generation.
The key message is that the real world is often messier than its presentation in papers, and we need to be able to cross between very low and very high levels of abstraction to deal with this, as Alan Mycroft has done in his work.
Arthur C. Norman, Stephen M. Watt
GPU Implementations for Midsize Integer Addition and Multiplication
Abstract
This paper explores practical aspects of using a high-level functional language for GPU-based arithmetic on “midsize” integers. By this we mean integers of up to about a quarter million bits, which is sufficient for most practical purposes. The goal is to understand whether it is possible to support efficient nested-parallel programs with a small, flexible code base. We report on GPU implementations for addition and multiplication of integers that fit in one CUDA block, thus leveraging temporal reuse from scratchpad memories. Our key contribution resides in the simplicity of the proposed solutions: We recognize that addition is a straightforward application of scan, which is known to allow efficient GPU implementation. For quadratic multiplication we employ a simple work-partitioning strategy that offers good temporal locality. For FFT multiplication, we efficiently map the computation in the domain of integral fields by finding “good” primes that enable almost-full utilization of machine words. In comparison, related work uses complex tiling strategies—which feel too big a hammer for the job—or uses the computational domain of reals, which may degrade the magnitude of the base in which the computation is carried. We evaluate the performance in comparison to the state-of-the-art CGBN library, authored by NvidiaLab, and report that our CUDA prototype outperforms CGBN for integer sizes larger than 32K bits, while offering comparable performance for smaller sizes. Moreover, we are, to our knowledge, the first to report that FFT multiplication outperforms the classical one on the larger sizes that still fit in a CUDA block. Finally, we examine Futhark’s strengths and weaknesses for efficiently supporting such computations and find that the significant overheads and scalability issues of the Futhark implementation are mainly caused by the absence of a compiler pass aimed at efficient sequentialization of excess parallelism.
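The "addition is an application of scan" observation can be sketched with a tiny sequential model: each digit position emits a carry generate/propagate/kill flag, and a prefix scan with an associative operator yields every position's carry-in in one pass. The base, flag encoding, and function names below are illustrative, not the paper's actual GPU code.

```python
# Big-integer addition via a carry scan (sequential model of the GPU idea).
# Numbers are little-endian digit lists of equal length in a tiny base.

BASE = 10                   # readable toy base; a GPU version uses machine words
GEN, PROP, KILL = 2, 1, 0   # carry status: generate / propagate / kill

def combine(left, right):
    """Associative carry-lookahead operator: PROP defers to the flag on its left."""
    return left if right == PROP else right

def add(a, b):
    """Add digit lists a and b by scanning carry flags."""
    sums = [x + y for x, y in zip(a, b)]
    flags = [GEN if s >= BASE else PROP if s == BASE - 1 else KILL for s in sums]
    # Exclusive scan of `combine` gives each position its carry-in flag;
    # on a GPU this is the parallel scan primitive the abstract refers to.
    carry_in, acc = [], KILL
    for f in flags:
        carry_in.append(acc)
        acc = combine(acc, f)
    out = [(s + (c == GEN)) % BASE for s, c in zip(sums, carry_in)]
    if acc == GEN:          # final carry-out extends the result
        out.append(1)
    return out
```

Because `combine` is associative, the sequential loop above can be replaced by a work-efficient parallel scan, which is what makes the GPU formulation attractive.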
Cosmin E. Oancea, Stephen M. Watt

Static Analysis

Frontmatter
The Contributions of Alan Mycroft to Abstract Interpretation
Abstract
We summarize and contextualize the most influential contributions of Alan Mycroft in the field of static analysis. Some of Alan’s seminal contributions are well-known such as his work on strictness analysis and completeness, but others have been largely overlooked, namely his thesis work on linked data structures and his precursory work on predicate abstraction.
Patrick Cousot
A Fixed Point Iteration Technique for Proving Correctness of Slicing for Probabilistic Programs
Abstract
When proving the correctness of a method for slicing probabilistic programs, it was previously discovered by the authors that for a fixed point iteration to work one needs a non-standard starting point for the iteration. This paper presents and explores this technique in a general setting; it states the lemmas that must be established to use the technique to prove the correctness of a program transformation, and sketches how to apply the technique to slicing of probabilistic programs.
Torben Amtoft, Anindya Banerjee
On Graded Coeffect Types for Information-Flow Control
Abstract
Graded types are an overarching paradigm that provides fine-grained reasoning by reflecting the structure of typing rules into a system of type annotations. A significant subset of graded type systems is that of coeffect systems, originally introduced by Petricek, Orchard, and Mycroft as a dual to effect systems, capturing the dataflow of values in a calculus by annotating variables and function types with elements of a semiring. A particularly useful instance of these graded coeffect systems is to capture security properties of data to enforce information-flow control. We examine this particular use case and give a new characterisation of a subclass of semirings which enable the key non-interference theorem of information-flow control: that less privileged observers are unable to distinguish the dependence of computations on more privileged inputs. The result relies on a logical relations proof and is mechanised in Agda. We consider its relationship to other characterisations of non-interference in the recent literature on graded types and in the historical context of coeffect and graded systems. We leverage these results for programming with security in the Granule programming language, a research language for graded types. We conclude with extensions to Granule that go beyond non-interference to declassification, leveraging graded types to control deliberate information leakage.
Vilem-Benjamin Liepelt, Danielle Marshall, Dominic Orchard, Vineet Rajani, Michael Vollmer

Hardware

Frontmatter
Static Analysis for Hardware Design
Abstract
Implementing algorithms in hardware can be a substantial engineering challenge. Hardware accelerators for some algorithms may be a way to achieve better time and energy efficiency for computational problems. We explore some possible applications of static analysis in the design phase of hardware accelerators for algorithms targeting field-programmable gate arrays (FPGAs).
Drawing inspiration from Alan Mycroft’s 2007 invited talk on static analysis and subsequent articles discussing the connection between hardware evolution, language design, and static analysis, we explore the usage of static analysis as a tool to facilitate the realization of hardware accelerators for algorithms. We examine methodologies for analyzing communication and data flows within the hardware design, thereby enhancing our understanding of these aspects in the pursuit of efficient FPGA-based algorithm implementations.
Mads Rosendahl, Maja H. Kirkeby
Context-Aware Air Quality Analytics Using Low-Cost Sensors and First-Order Logic
Abstract
Low-cost sensors have changed the pollution monitoring paradigm due to their ability to measure air pollution ad hoc, close to the sources and in great spatio-temporal resolution. Our team has been engaged in the development of highly reliable air quality sensing devices, using low-cost sensors. Air quality measurements are transmitted to a scalable dynamic knowledge-base maintenance system for representing and reasoning with knowledge about Sentient Computing, extended with air quality predicates. The system functions similarly to a two-level cache: the lower layer maintains knowledge about air quality at the sensor level by continually processing a high rate of observations from the measuring devices. The higher layer maintains easily-retrievable, user-defined abstract knowledge about current and historical states of air quality along with temporal properties such as the time of occurrence of observation time series and their length. Our approach uses deductive systems in an unusual way: by creating a model of real-world air quality we are able to prove quality aspects of logical predicates both at the sensor level - such as the existence of outliers, stuck-at-zero values or whether a time series is additive or multiplicative - and at the derived predicate level - such as the AQI index, a person’s aggregated exposure to pollutants or analytical trends and patterns, paving the way to explainability. Furthermore, by embedding the lower layer of the architecture inside the monitoring devices we support in-network processing, lowering the number of published messages and extending the device lifetime. The preliminary results provide useful insights on both pollution level and the intensity of industrial activity in the dairy industry.
Eleftheria Katsiri

Software

Frontmatter
Sustainable Software Development: New Challenges for Programming, Language Design and Analysis
Abstract
Energy consumption and the associated \(CO_{2}\) footprint of Information and Communication Technology (ICT) have become a major concern. Some estimates suggest that 4–6% of global electricity consumption in 2020 was spent on ICT and, although the ICT industry is very good at using green energy, \(CO_{2}\) emissions from ICT are on par with or have overtaken \(CO_{2}\) emissions from aviation. Pessimistic forecasts suggest that electricity consumption from ICT may rise to 20% by 2030.
Clearly software does not emit \(CO_{2}\), but software is executed on hardware, and hardware consumes energy when executing software. In recent years there has been a huge effort in understanding the relationship between software and energy consumption of the underlying hardware. There is now evidence that the structure of the software, the program constructs used in the software, and even the programming languages and compilers used for developing the software influence the energy consumption when the software is executed. There is a huge global effort on raising awareness of sustainable software development, and there is a growing body of knowledge covering many aspects.
However, the literature on how programming language design and analysis can impact energy consumption of the underlying hardware is sparse.
In a seminal presentation at SAS’07, Alan gave an overview of the changes going on in hardware and outlined his view on the implications of this for programming language design and analysis research. In this paper we follow in Alan’s footsteps and outline our view on the implications of energy consumption for programming language design and analysis research.
Bent Thomsen, Lone Leth Thomsen, Thomas Bøgholm
On the Limits of Making Programming Easy
Abstract
A lot of programming research shares the same basic motivation: how can we make programming easier? Alas, this problem is difficult to tackle directly. Programming is a tangle of conceptual models, programming languages, user interfaces, and more, and we cannot advance all of these at the same time. Moreover, we have no good metric for measuring whether programming is easy. As a result, we usually give up on the original motivation and pursue narrow tractable research for which there is a rigorous methodology.
In this paper, we investigate the limits of making programming easy. We use a dialectic method to circumscribe the design space within which easier programming systems may exist. In doing so, we bring together not only ideas on open-source software, self-sustainable systems, and visual programming languages, but also the analysis of limits by Fred Brooks in his classic “No Silver Bullet” essay. We sketch a possible path towards easier programming of the future, but more importantly, we argue for the importance of proto-theories as a method for tackling the original motivating basic research question.
Tomas Petricek, Joel Jakubovic
Triemaps that Match
Abstract
The trie data structure is a good choice for finite maps whose keys are data structures (trees) rather than atomic values. But what if we want the keys to be patterns, each of which matches many lookup keys? Efficient matching of this kind is well studied in the theorem prover community, but much less so in the context of statically typed functional programming. Doing so yields an interesting new viewpoint—and a practically useful design pattern, with good runtime performance.
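The trie-on-trees idea can be sketched as follows, in Python rather than the paper's statically typed Haskell, using an illustrative two-constructor expression type (the class and method names are assumptions for this example): a map keyed by expressions nests one trie per constructor field, so looking up `App f x` means looking up `f` in an outer trie whose values are themselves tries indexed by `x`.

```python
# Minimal sketch of a trie map keyed by expression trees.
# Expressions are either Var(name) or App(fun, arg).

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

class ExprMap:
    """Trie over expressions: one sub-structure per constructor."""
    def __init__(self):
        self.vars = {}       # Var keys: variable name -> value
        self.apps = None     # App keys: ExprMap whose values are ExprMaps

    def insert(self, key, value):
        if isinstance(key, Var):
            self.vars[key.name] = value
        else:  # App: index by the function first, then by the argument
            if self.apps is None:
                self.apps = ExprMap()
            inner = self.apps.lookup(key.fun)
            if inner is None:
                inner = ExprMap()
                self.apps.insert(key.fun, inner)
            inner.insert(key.arg, value)

    def lookup(self, key):
        if isinstance(key, Var):
            return self.vars.get(key.name)
        if self.apps is None:
            return None
        inner = self.apps.lookup(key.fun)
        return inner.lookup(key.arg) if inner is not None else None
```

Lookup walks the key once, sharing common prefixes between stored keys; the paper's contribution is extending this design pattern so that stored keys may be patterns matching many lookup keys.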
Simon Peyton Jones, Sebastian Graf
Backmatter
Title
Languages, Compilers, Analysis - From Beautiful Theory to Useful Practice
Edited by
Dominic Orchard
Tomas Petricek
Jeremy Singer
Copyright Year
2026
Electronic ISBN
978-3-032-08187-2
Print ISBN
978-3-032-08186-5
DOI
https://doi.org/10.1007/978-3-032-08187-2

The PDF files of this book were created in accordance with the PDF/UA-1 standard to improve accessibility. This includes screen-reader support, described non-textual content (images, graphs), bookmarks for easy navigation, keyboard-friendly links and forms, and searchable and selectable text. We recognise the importance of accessibility and welcome enquiries about the accessibility of our products. If you have any questions or accessibility needs, please contact us at accessibilitysupport@springernature.com.
