
Languages, Compilers, Analysis - From Beautiful Theory to Useful Practice

Essays Dedicated to Alan Mycroft on the Occasion of His Retirement

  • 2026
  • Book

About this book

This Festschrift is dedicated to Alan Mycroft who formally retired from the University of Cambridge after almost 40 years. Following a BA in Mathematics from the University of Cambridge in 1977, Alan took the Diploma in Computer Science in 1978. He then completed a PhD in Edinburgh in 1981 on Abstract Interpretation and Optimising Transformations for Applicative Programs under the supervision of Rod Burstall and Robin Milner. After an EPSRC postdoctoral fellowship at Edinburgh and a research assistant position at Chalmers University, he returned to the University of Cambridge in 1984 and remained there until his retirement in 2023.

Alan is well-known for pioneering contributions to programming language theory and applications, covering both design and implementation. His work ranges from compilation and optimisation techniques, through semantics, static analysis, and type systems, to parallel, concurrent, and dataflow programming. He co-created the Norcroft C compiler with Arthur Norman, he co-authored the book Java 8 in Action: Lambdas, Streams, and Functional-Style Programming, and he co-founded the Raspberry Pi Foundation, a hugely successful initiative to develop programming skills and thinking for all ages.

In addition to his excellent research profile, Alan has been an inspiring teacher, mentor, and collaborator, always generous with encouragement and feedback. The impact and scope of his career is reflected in the breadth of topics in this volume, a fitting tribute to him.

Table of Contents

  1. Frontmatter

  2. Compilers and Code Generators

    1. Frontmatter

    2. The Norcroft Compiler at Arm

      Jeremy Singer, Lee Smith
      Abstract
      In the mid-1980s, a ‘couple of academics wearing startup-company hats’ were developing a retargetable C compiler. The most prolific target for this compiler was the ARM processor, a novel RISC architecture initially designed by Acorn and subsequently developed by Arm with input from its partners. In this paper, we will review 1980s compiler technology and see how the Norcroft compiler significantly advanced the state of the art, particularly highlighting the friendly error messages, flexible intermediate representation, and graph-colouring register allocation. We will review the reach and legacy of the Norcroft compiler over the past four decades.
    3. When Obfuscations Preserve Constant-Time

      Matteo Busi, Pierpaolo Degano, Letterio Galletta
      Abstract
      Obfuscating compilers are designed to protect a program by obscuring its meaning and impeding the reconstruction of its original source code. Usually, the main concern with such compilers is their correctness and their robustness against reverse engineering. By contrast, little attention is paid to ensuring that obfuscation introduces no attacks in the transformed program that were not present in the original one. Cryptographic libraries often resort to the so-called cryptographic constant-time property to guarantee that no attacker can learn secret values by monitoring and analyzing program execution time.
      Here, we propose a sufficient condition to check whether a given obfuscation preserves the constant-time property. Checking this condition amounts to a simple and efficient static analysis that can be easily implemented. We consider several obfuscating transformations implemented in popular obfuscating compilers (e.g. the Tigress C compiler and O-MVLL). By relying on our condition we prove that some of them preserve constant-time, while others do not. In the latter case we propose a translation validation procedure that applies our condition to the given program and verifies whether the constant-time property is nonetheless preserved.
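The cryptographic constant-time property discussed in this abstract can be made concrete with a small example. The Python sketch below is our own illustration, not code from the paper; `leaky_eq` and `ct_eq` are hypothetical names. It contrasts an equality check whose running time reveals the position of the first mismatch with one whose timing is independent of the secret data.

```python
def leaky_eq(a: bytes, b: bytes) -> bool:
    # Returns early at the first mismatch, so running time depends
    # on how long the matching prefix is -- NOT constant-time.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_eq(a: bytes, b: bytes) -> bool:
    # Accumulates differences with XOR/OR and always scans the whole
    # input, so timing does not depend on where the bytes differ.
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y
    return acc == 0
```

Python's standard library provides `hmac.compare_digest` for exactly this purpose; the sketch only shows the style of timing behaviour an obfuscating transformation must be careful not to destroy.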
    4. A Symbolic Computing Perspective on Software Systems

      Arthur C. Norman, Stephen M. Watt
      Abstract
      Symbolic mathematical computing systems have served as a canary in the coal mine of software systems for more than sixty years. They have introduced or have been early adopters of programming language ideas such as dynamic memory management, arbitrary precision arithmetic and dependent types. These systems have the property of being highly complex while at the same time operating in a domain where results are well-defined and clearly verifiable. These software systems span multiple layers of abstraction with concerns ranging from instruction scheduling and cache pressure up to algorithmic complexity of constructions in algebraic geometry. All of the major symbolic mathematical computing systems include low-level code for arbitrary precision arithmetic, memory management and other primitives, a compiler or interpreter for a bespoke programming language, a library of high-level mathematical algorithms, and some form of user interface. Each of these parts raises multiple deep issues. We present some lessons learned from this environment and free-flowing opinions on topics including:
      • Portability of software across architectures and decades;
      • Infrastructure to embrace and infrastructure to avoid;
      • Choosing base abstractions upon which to build;
      • How to get the most out of a small code base;
      • How developments in compilers both to optimise and to validate code have always been and remain of critical importance, with plenty of remaining challenges;
      • Why individuals who, like Alan Mycroft, can span from hand-crafting Z80 machine code up to the most abstruse high-level code analysis techniques are needed; and
      • Why it is important to teach full-stack thinking to the next generation.
      The key message is that the real world is often messier than its presentation in papers, and we need to be able to cross between very low and very high levels of abstraction to deal with this, as Alan Mycroft has done in his work.
    5. GPU Implementations for Midsize Integer Addition and Multiplication

      Cosmin E. Oancea, Stephen M. Watt
      Abstract
      This paper explores practical aspects of using a high-level functional language for GPU-based arithmetic on “midsize” integers. By this we mean integers of up to about a quarter million bits, which is sufficient for most practical purposes. The goal is to understand whether it is possible to support efficient nested-parallel programs with a small, flexible code base. We report on GPU implementations for addition and multiplication of integers that fit in one cuda block, thus leveraging temporal reuse from scratchpad memories. Our key contribution resides in the simplicity of the proposed solutions: We recognize that addition is a straightforward application of scan, which is known to allow efficient GPU implementation. For quadratic multiplication we employ a simple work-partitioning strategy that offers good temporal locality. For FFT multiplication, we efficiently map the computation into the domain of integral fields by finding “good” primes that enable almost-full utilization of machine words. In comparison, related work uses complex tiling strategies—which feel too big a hammer for the job—or uses the computational domain of reals, which may degrade the magnitude of the base in which the computation is carried. We evaluate the performance in comparison to the state-of-the-art cgbn library, authored by NvidiaLab, and report that our cuda prototype outperforms cgbn for integer sizes larger than 32K bits, while offering comparable performance for smaller sizes. Moreover, we are, to our knowledge, the first to report that FFT multiplication outperforms the classical one on the larger sizes that still fit in a cuda block. Finally, we examine Futhark’s strengths and weaknesses for efficiently supporting such computations and find that the significant overheads and scalability issues of the Futhark implementation are mainly caused by the absence of a compiler pass aimed at efficient sequentialization of excess parallelism.
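The claim that addition "is a straightforward application of scan" can be illustrated in a few lines. The sequential Python sketch below is our own assumption-laden illustration, not the paper's Futhark or CUDA code; `badd` and the status constants are hypothetical names. It expresses carry propagation as a prefix computation over an associative kill/propagate/generate operator, the shape a GPU scan would parallelise.

```python
from itertools import accumulate

KILL, PROP, GEN = 0, 1, 2  # carry statuses per digit position

def badd(a, b, base=10):
    """Add two equal-length little-endian digit lists.

    Carry propagation is phrased as a scan with an associative
    combine, so the accumulate below could become a parallel scan."""
    s = [x + y for x, y in zip(a, b)]
    # Status of each position before any carry arrives:
    # GEN produces a carry, PROP passes one on, KILL absorbs it.
    st = [GEN if d >= base else (PROP if d == base - 1 else KILL) for d in s]
    # Associative combine: a PROPAGATE position forwards the status
    # coming from the less significant side.
    def comb(lo, hi):
        return lo if hi == PROP else hi
    inc = list(accumulate(st, comb))      # carry status OUT of each position
    carry_in = [KILL] + inc[:-1]          # exclusive scan: carry INTO each
    digits = [(d + (c == GEN)) % base for d, c in zip(s, carry_in)]
    if inc and inc[-1] == GEN:            # final carry out
        digits.append(1)
    return digits
```

Because `comb` is associative, replacing `accumulate` with a work-efficient parallel scan leaves the result unchanged, which is what makes this formulation GPU-friendly.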
  3. Static Analysis

    1. Frontmatter

    2. The Contributions of Alan Mycroft to Abstract Interpretation

      Patrick Cousot
      Abstract
      We summarize and contextualize the most influential contributions of Alan Mycroft in the field of static analysis. Some of Alan’s seminal contributions are well known, such as his work on strictness analysis and completeness, but others have been largely overlooked, namely his thesis work on linked data structures and his precursory work on predicate abstraction.
    3. A Fixed Point Iteration Technique for Proving Correctness of Slicing for Probabilistic Programs

      Torben Amtoft, Anindya Banerjee
      Abstract
      When proving the correctness of a method for slicing probabilistic programs, the authors previously discovered that, for a fixed point iteration to work, one needs a non-standard starting point for the iteration. This paper presents and explores this technique in a general setting; it states the lemmas that must be established to use the technique to prove the correctness of a program transformation, and sketches how to apply the technique to slicing of probabilistic programs.
    4. On Graded Coeffect Types for Information-Flow Control

      Vilem-Benjamin Liepelt, Danielle Marshall, Dominic Orchard, Vineet Rajani, Michael Vollmer
      Abstract
      Graded types are an overarching paradigm that provides fine-grained reasoning by reflecting the structure of typing rules into a system of type annotations. A significant subset of graded type systems is that of coeffect systems, originally introduced by Petricek, Orchard, and Mycroft as a dual to effect systems, capturing the dataflow of values in a calculus by annotating variables and function types with elements of a semiring. A particularly useful instance of these graded coeffect systems is to capture security properties of data to enforce information-flow control. We examine this particular use case and give a new characterisation of a subclass of semirings which enable the key non-interference theorem of information-flow control: that less privileged observers are unable to distinguish the dependence of computations on more privileged inputs. The result relies on a logical relations proof and is mechanised in Agda. We consider its relationship to other characterisations of non-interference in the recent literature on graded types and in the historical context of coeffect and graded systems. We leverage these results for programming with security in the Granule programming language, a research language for graded types. We conclude with extensions to Granule that go beyond non-interference to declassification, leveraging graded types to control deliberate information leakage.
  4. Hardware

    1. Frontmatter

    2. Static Analysis for Hardware Design

      Mads Rosendahl, Maja H. Kirkeby
      Abstract
      Implementing algorithms in hardware can be a substantial engineering challenge. Hardware accelerators for some algorithms may be a way to achieve better time and energy efficiency for computational problems. We explore some possible applications of static analysis in the design phase of constructing hardware designs for algorithms targeting field-programmable gate arrays (FPGAs).
      Drawing inspiration from Alan Mycroft’s 2007 invited talk on static analysis and subsequent articles discussing the connection between hardware evolution, language design, and static analysis, we explore the usage of static analysis as a tool to facilitate the realization of hardware accelerators for algorithms. We examine methodologies for analyzing communication and data flows within the hardware design, thereby enhancing our understanding of these aspects in the pursuit of efficient FPGA-based algorithm implementations.
    3. Context-Aware Air Quality Analytics Using Low-Cost Sensors and First-Order Logic

      Eleftheria Katsiri
      Abstract
      Low-cost sensors have changed the pollution monitoring paradigm due to their ability to measure air pollution ad hoc, close to the sources, and at great spatio-temporal resolution. Our team has been engaged in the development of highly reliable air quality sensing devices using low-cost sensors. Air quality measurements are transmitted to a scalable dynamic knowledge-base maintenance system for representing and reasoning with knowledge about Sentient Computing, extended with air quality predicates. The system functions similarly to a two-level cache: the lower layer maintains knowledge about air quality at the sensor level by continually processing a high rate of observations from the measuring devices. The higher layer maintains easily retrievable, user-defined abstract knowledge about current and historical states of air quality, along with temporal properties such as the time of occurrence of observation timeseries and their length. Our approach uses deductive systems in an unusual way: by creating a model of real-world air quality we are able to prove quality aspects of logical predicates both at the sensor level - such as the existence of outliers, stuck-at-zero values, or whether a timeseries is additive or multiplicative - and at the derived predicate level - such as the AQI index, a person’s aggregated exposure to pollutants, or analytical trends and patterns - paving the road to explainability. Furthermore, by embedding the lower layer of the architecture inside the monitoring devices we support in-network processing, lowering the number of published messages and extending the device lifetime. The preliminary results provide useful insights on both pollution levels and the intensity of industrial activity in the dairy.
  5. Software

    1. Frontmatter

    2. Sustainable Software Development: New Challenges for Programming, Language Design and Analysis

      Bent Thomsen, Lone Leth Thomsen, Thomas Bøgholm
      Abstract
      Energy consumption and the associated CO₂ footprint of Information and Communication Technology (ICT) have become a major concern. Some estimates suggest that 4–6% of global electricity consumption in 2020 was spent on ICT and, although the ICT industry is very good at using green energy, CO₂ emissions from ICT are on par with or have overtaken CO₂ emissions from aviation. Pessimistic forecasts suggest that ICT’s share of electricity consumption may rise to 20% by 2030.
      Clearly, software does not emit CO₂, but software is executed on hardware, and hardware consumes energy when executing software. In recent years there has been a huge effort in understanding the relationship between software and the energy consumption of the underlying hardware. There is now evidence that the structure of the software, the program constructs used in the software, and even the programming languages and compilers used for developing the software influence the energy consumption when the software is executed. There is a huge global effort on raising awareness of sustainable software development, and there is a growing body of knowledge of many aspects.
      However, the literature on how programming language design and analysis can impact energy consumption of the underlying hardware is sparse.
      In a seminal presentation at SAS’07 Alan gave an overview of the changes going on in hardware and outlined his view on the implications of this on programming language design and analysis research. In this paper we follow in Alan’s footsteps and outline our view on the implications of energy consumption on programming language design and analysis research.
    3. On the Limits of Making Programming Easy

      Tomas Petricek, Joel Jakubovic
      Abstract
      A lot of programming research shares the same basic motivation: how can we make programming easier? Alas, this problem is difficult to tackle directly. Programming is a tangle of conceptual models, programming languages, user interfaces, and more, and we cannot advance all of these at the same time. Moreover, we have no good metric for measuring whether programming is easy. As a result, we usually give up on the original motivation and pursue narrow tractable research for which there is a rigorous methodology.
      In this paper, we investigate the limits of making programming easy. We use a dialectic method to circumscribe the design space within which easier programming systems may exist. In doing so, we bring together not only ideas on open-source software, self-sustainable systems, and visual programming languages, but also the analysis of limits by Fred Brooks in his classic “No Silver Bullet” essay. We sketch a possible path towards easier programming of the future, but more importantly, we argue for the importance of proto-theories as a method for tackling the original motivating basic research question.
    4. Triemaps that Match

      Simon Peyton Jones, Sebastian Graf
      Abstract
      The trie data structure is a good choice for finite maps whose keys are data structures (trees) rather than atomic values. But what if we want the keys to be patterns, each of which matches many lookup keys? Efficient matching of this kind is well studied in the theorem prover community, but much less so in the context of statically typed functional programming. Doing so yields an interesting new viewpoint—and a practically useful design pattern, with good runtime performance.
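The core triemap idea is one map level per constructor of the key type, so that keys sharing a prefix share structure. The Python sketch below is our own illustration of the plain, non-matching case, not the paper's typed Haskell design; `Var`, `App`, and `ExprMap` are hypothetical names for a toy expression key type.

```python
from dataclasses import dataclass

# A tiny expression type to serve as trie keys.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

class ExprMap:
    """Finite map keyed by expression trees: Var keys live in a
    plain dict, App keys in an ExprMap whose values are ExprMaps."""
    def __init__(self):
        self.vars = {}     # Var name -> value
        self.apps = None   # ExprMap of ExprMap, lazily allocated

    def insert(self, key, val):
        if isinstance(key, Var):
            self.vars[key.name] = val
        else:  # App: descend on the function, then on the argument
            if self.apps is None:
                self.apps = ExprMap()
            inner = self.apps.lookup(key.fun)
            if inner is None:
                inner = ExprMap()
                self.apps.insert(key.fun, inner)
            inner.insert(key.arg, val)

    def lookup(self, key):
        if isinstance(key, Var):
            return self.vars.get(key.name)
        if self.apps is None:
            return None
        inner = self.apps.lookup(key.fun)
        return None if inner is None else inner.lookup(key.arg)
```

Lookup walks the key once, one dict probe per constructor, and two keys such as `App(Var("f"), Var("x"))` and `App(Var("f"), Var("y"))` share the `Var("f")` level of the trie.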
  6. Backmatter

Title
Languages, Compilers, Analysis - From Beautiful Theory to Useful Practice
Editors
Dominic Orchard
Tomas Petricek
Jeremy Singer
Copyright Year
2026
Electronic ISBN
978-3-032-08187-2
Print ISBN
978-3-032-08186-5
DOI
https://doi.org/10.1007/978-3-032-08187-2

PDF files of this book have been created in accordance with the PDF/UA-1 standard to enhance accessibility, including screen reader support, described non-text content (images, graphs), bookmarks for easy navigation, keyboard-friendly links and forms and searchable, selectable text. We recognize the importance of accessibility, and we welcome queries about accessibility for any of our products. If you have a question or an access need, please get in touch with us at accessibilitysupport@springernature.com.
