
About this book

This book constitutes the proceedings of the Third International Workshop on Foundational and Practical Aspects of Resource Analysis, FOPARA 2013, held in Bertinoro, Italy, in August 2013. The 9 papers presented in this volume were carefully reviewed and selected from 12 submissions. They deal with traditional approaches to complexity analysis, differential privacy, and probabilistic analysis of programs.



Certified Complexity (CerCo)

We provide an overview of the FET-Open Project CerCo (‘Certified Complexity’). Our main achievement is the development of a technique for analysing non-functional properties of programs (time, space) at the source level with little or no loss of accuracy and a small trusted code base. The core component is a C compiler, verified in Matita, that produces an instrumented copy of the source code in addition to generating object code. This instrumentation exposes, and tracks precisely, the actual (non-asymptotic) computational cost of the input program at the source level. Untrusted invariant generators and trusted theorem provers may then be used to compute and certify the parametric execution time of the code.
Roberto M. Amadio, Nicolas Ayache, Francois Bobot, Jaap P. Boender, Brian Campbell, Ilias Garnier, Antoine Madet, James McKinna, Dominic P. Mulligan, Mauro Piccolo, Randy Pollack, Yann Régis-Gianas, Claudio Sacerdoti Coen, Ian Stark, Paolo Tranquilli

On the Modular Integration of Abstract Semantics for WCET Analysis

We propose a modular resource analysis constructed around a rewrite-based formal specification of an embedded system. Designing and analyzing embedded systems involves both hardware and software behavioral aspects, which we capture using the modular notion of a system configuration. We therefore use a configuration-based design methodology and instantiate parts of the configuration to accommodate data- and control-flow abstractions. These instantiations require no modifications of the original formal specification. In this manner we implement a particular resource analysis, namely worst-case execution time (WCET), and evaluate it with respect to a reusability metric.
Mihail Asăvoae, Irina Măriuca Asăvoae

Can a Light Typing Discipline Be Compatible with an Efficient Implementation of Finite Fields Inversion?

We focus on the fragment TFA of the \(\lambda \)-calculus, which contains only terms that normalize in polynomial time. Into TFA we translated BEA, a well-known, imperative, and fast algorithm that computes multiplicative inverses in binary finite fields. The translation suggests how to categorize the operations of BEA into sets that drive the design of a variant we call DCEA. On several common architectures we show that the two algorithms have comparable performance, while on UltraSPARC and ARM architectures the variant we synthesized from a purely functional source can run considerably faster than BEA.
Daniele Canavese, Emanuele Cesena, Rachid Ouchary, Marco Pedicini, Luca Roversi

Probabilistic Analysis of Programs: A Weak Limit Approach

We present an approach to probabilistic analysis which is based on program semantics and exploits the mathematical properties of the semantical operators to ensure a form of optimality for the analysis. As in the algorithmic setting, where analysis results are used to help design efficient algorithms, the purpose of our framework is to offer static analysis techniques usable for resource optimisation.
Alessandra Di Pierro, Herbert Wiklicky

Predicative Lexicographic Path Orders

An Application of Term Rewriting to the Region of Primitive Recursive Functions
In this paper we present a novel termination order, the predicative lexicographic path order (PLPO for short), a syntactic restriction of the lexicographic path order. Like lexicographic path orders, PLPOs can orient several non-trivial primitive recursive equations, e.g., primitive recursion with parameter substitution, unnested multiple recursion, and simple nested recursion. The PLPO, however, induces only primitive recursive upper bounds on the derivation lengths of compatible rewrite systems. This yields an alternative proof of the classical fact that the class of primitive recursive functions is closed under these recursion schemes.
Naohi Eguchi
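The recursion schemes the abstract names can be written as rewrite rules; for orientation, primitive recursion with parameter substitution has the standard shape (the symbols \(f, g, h, p\) are schematic, as usual in the literature):

\begin{align*}
f(0, y) &\to g(y)\\
f(s(x), y) &\to h\bigl(x,\, y,\, f(x,\, p(x, y))\bigr)
\end{align*}

Here the recursive call \(f(x, p(x, y))\) modifies the parameter \(y\) via \(p\), which ordinary primitive recursion forbids. A termination order such as the PLPO orients these rules by showing each left-hand side greater than its right-hand side, and the primitive recursive bound on derivation lengths then witnesses that the defined function stays primitive recursive.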

A Hoare Logic for Energy Consumption Analysis

Energy-inefficient software implementations may cause battery drain in small systems and high energy costs in large systems. Dynamic energy analysis is often applied to mitigate these issues, but it is typically hardware-specific and requires repetitive measurements with special equipment.
We present a static analysis deriving upper bounds for energy consumption, based on a newly introduced energy-aware Hoare logic. Software is considered together with models of the hardware it controls. The Hoare logic is parametric with respect to the hardware: energy models of hardware components can be specified separately from the logic. Parametrised with one or more such component models, the analysis can statically produce a sound (over-approximated) upper bound for the energy usage of the hardware controlled by the software.
Rody Kersten, Paolo Parisen Toldin, Bernard van Gastel, Marko van Eekelen
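A generic shape for such an energy-aware judgement can be sketched as follows; the notation is illustrative and the paper's actual judgement forms may differ:

\[
\{\, P \wedge E \le e \,\}\;\; S \;\;\{\, Q \wedge E \le e + \mathit{ub}_{\delta}(S) \,\}
\]

Here \(P\) and \(Q\) are ordinary pre- and postconditions, \(E\) is the energy accumulated so far, and \(\mathit{ub}_{\delta}(S)\) is an upper bound on the energy consumed by statement \(S\), computed from a hardware component model \(\delta\) supplied as a parameter. Swapping in a different component model changes only \(\mathit{ub}_{\delta}\), not the logic itself, which is the modularity the abstract emphasises.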

Reasoning About Resources in the Embedded Systems Language Hume

In this paper we present an instrumented program logic for the embedded systems language Hume, suitable for reasoning about resource consumption. Matching the structure of Hume programs, it integrates two logics: a VDM-style program logic for the functional language and a TLA-style logic for the coordination language of Hume. We present a soundness proof of the program logic and demonstrate the usability of both logics by proving resource bounds for a Hume program. Both logics, the soundness proof, and the example have been fully formalised in the Isabelle/HOL theorem prover.
Hans-Wolfgang Loidl, Gudmund Grov

On Paths-Based Criteria for Polynomial Time Complexity in Proof-Nets

Several variants of linear logic have been proposed to characterize complexity classes in the proofs-as-programs correspondence. Light linear logic (\(LLL\)) ensures a polynomial bound on reduction time and in this way characterizes the class \(Ptime\). In this paper we study the complexity of linear logic proof-nets and propose two sufficient criteria, one for elementary-time soundness and the other for \(Ptime\) soundness, based on the study of paths inside the proof-net. These criteria offer several benefits: they provide a bound for any reduction sequence, and they can be used to prove the complexity soundness of several variants of linear logic. As an example, we use our criteria to show a strong polytime bound for the system \(L^4\) (light linear logic by levels).
Matthieu Perrinel

Collected Size Semantics for Strict Functional Programs over General Polymorphic Lists

Size analysis can be an important part of heap consumption analysis. This paper is part of ongoing work on typing support for checking output-on-input size dependencies of function definitions in a strict functional language. A significant restriction of our earlier results is that inner data structures (e.g. in a list of lists) must all have the same size. Here, we take a significant step forward by overcoming this limitation: we introduce higher-order size annotations that can express varying sizes of inner data structures. In this way the analysis becomes applicable to general, polymorphic nested lists.
Olha Shkaravska, Marko van Eekelen, Alejandro Tamalet
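The gain from higher-order annotations can be illustrated on a flattening function over nested lists; the notation below is schematic, not the paper's concrete syntax. Under the earlier uniform-size restriction one can only type

\[
\mathit{flatten} : L_{n}\bigl(L_{m}(\alpha)\bigr) \to L_{n \cdot m}(\alpha),
\]

forcing every inner list to have the same length \(m\). With a higher-order size annotation, the inner sizes are given by a function \(f\) of the position, so varying inner lengths become expressible:

\[
\mathit{flatten} : L_{n}\bigl(L_{f}(\alpha)\bigr) \to L_{\sum_{i=1}^{n} f(i)}(\alpha).
\]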

