
2008 | Book

Compiler Construction

17th International Conference, CC 2008, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2008, Budapest, Hungary, March 29 - April 6, 2008. Proceedings


Table of Contents

Frontmatter

Papers from Invited Talks

Design Choices in a Compiler Course or How to Make Undergraduates Love Formal Notation
Abstract
The undergraduate compiler course offers a unique opportunity to combine many aspects of the Computer Science curriculum. We discuss the many design choices that are available for the instructor and present the current compiler course at the University of Aarhus, the design of which displays at least some decisions that are unusual, novel, or just plain fun.
Michael I. Schwartzbach
Improved Memory-Access Analysis for x86 Executables
Abstract
Over the last seven years, we have developed static-analysis methods to recover a good approximation to the variables and dynamically allocated memory objects of a stripped executable, and to track the flow of values through them. It is relatively easy to track the effects of an instruction operand that refers to a global address (i.e., an access to a global variable) or that uses a stack-frame offset (i.e., an access to a local scalar variable via the frame pointer or stack pointer). In our work, our algorithms are able to provide useful information for close to 100% of such “direct” uses and defs.
It is much harder for a static-analysis algorithm to track the effects of an instruction operand that uses a non-stack-frame register. These “indirect” uses and defs correspond to accesses to an array or a dynamically allocated memory object. In one study, our approach recovered useful information for only 29% of indirect uses and 33% of indirect defs. However, using the technique described in this paper, the algorithm recovered useful information for 81% of indirect uses and 90% of indirect defs.
Thomas Reps, Gogul Balakrishnan
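A minimal illustration of the direct/indirect distinction drawn above; the x86 operand forms in the comments are typical examples chosen for this sketch, not taken from the paper.

```c
/* Illustrative only: typical x86 operand forms a compiler might emit for
 * each kind of access; exact instructions vary by compiler and options. */

int global_counter;                   /* "direct": fixed global address     */

int sum(const int *a, int n) {
    int total = 0;                    /* "direct": stack-frame offset,      */
                                      /* e.g. mov dword [ebp-4], 0          */
    for (int i = 0; i < n; i++)
        total += a[i];                /* "indirect": non-stack register,    */
                                      /* e.g. add eax, [ebx+esi*4]          */
    global_counter++;                 /* "direct": inc dword [global_counter] */
    return total;
}
```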

Analyses and Transformations

A System for Generating Static Analyzers for Machine Instructions
Abstract
This paper describes the design and implementation of a language for specifying the semantics of an instruction set, along with a run-time system to support the static analysis of executables written in that instruction set. The work advances the state of the art by creating multiple analysis phases from a specification of the concrete operational semantics of the language to be analyzed.
Junghee Lim, Thomas Reps
IDE Dataflow Analysis in the Presence of Large Object-Oriented Libraries
Abstract
A key scalability challenge for interprocedural dataflow analysis comes from large libraries. Our work addresses this challenge for the general category of interprocedural distributive environment (IDE) dataflow problems. Using pre-computed library summary information, the proposed approach significantly reduces the cost of whole-program IDE analyses without any loss of precision. We define an approach for library summary generation by using a graph representation of dataflow summary functions, and by abstracting away redundant dataflow facts that are internal to the library. Our approach also handles object-oriented features, by employing an IDE type analysis as well as special handling of polymorphic library call sites whose target methods depend on the future (unknown) client code. Experimental results show that dramatic cost savings can be achieved with the help of these techniques.
Atanas Rountev, Mariana Sharp, Guoqing Xu
An Adaptive Strategy for Inline Substitution
Abstract
Inline substitution is an optimization that replaces a procedure call with the body of the procedure that it calls. Inlining has the immediate benefit of reducing the overhead associated with the call, including register saves and restores, parameter evaluation, and activation record setup and teardown. It has secondary benefits that arise from providing greater context for global optimizations. These benefits can be offset by the effects of increased code size, and by deleterious interactions with other optimizations, such as register allocation.
The difficult aspect of inline substitution is choosing which calls to inline. Previous work has focused on static, one-size-fits-all heuristics. This paper presents a feedback-driven adaptive scheme that derives a program-specific inlining heuristic. The key contributions of this work are: (1) a novel parameterization scheme for the inliner that makes it susceptible to fine-grained external control, (2) a scheme for discretizing large integer parameter spaces, and (3) effective search techniques for the resulting search space. This work provides a proof of concept that can provide insight into the design of adaptive controllers for other optimizations with complex decision heuristics. Our goal in this work is not to exhibit the world’s best inliner. Instead, we present evidence to suggest that a program-specific, adaptive scheme is needed to achieve the best results.
Keith D. Cooper, Timothy J. Harvey, Todd Waterman
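As a reminder of the transformation whose profitability is being tuned, here is a minimal before/after sketch of inline substitution in C; it illustrates the call-overhead and context benefits mentioned above, not the paper's adaptive heuristic.

```c
/* Hand-written "after" version shown for illustration; an inliner would
 * produce the equivalent substitution automatically. */

static int square(int x) { return x * x; }

int caller_before(int n) {
    return square(n + 1);        /* call overhead: argument setup, call/ret  */
}

int caller_after(int n) {
    int x = n + 1;               /* callee body substituted at the call site */
    return x * x;                /* now visible to global optimization       */
}
```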
Automatic Transformation of Bit-Level C Code to Support Multiple Equivalent Data Layouts
Abstract
Portable low-level C programs must often support multiple equivalent in-memory layouts of data, due to the byte or bit order of the compiler, architecture, or external data formats. Code that makes assumptions about data layout often consists of multiple highly similar pieces of code, each designed to handle a different layout. Writing and maintaining this code is difficult and bug-prone: Because the differences among data layouts are subtle, implicit, and inherently low-level, it is difficult to understand or change the highly similar pieces of code consistently.
We have developed a small extension for C that lets programmers write concise declarative descriptions of how different layouts of the same data relate to each other. Programmers then write code assuming only one layout and rely on our translation to generate code for the others. In this work, we describe our declarative language for specifying data layouts, how we perform the automatic translation of C code to equivalent code assuming a different layout, and our success in applying our approach to simplify the code base of some widely available software.
Marius Nita, Dan Grossman
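To make the problem concrete, here is a hypothetical pair of near-duplicate C readers of the kind the abstract describes, differing only in byte order; with the paper's extension the programmer would write one variant plus a declarative layout relation instead of maintaining both. The function names and layout are assumptions for illustration.

```c
#include <stdint.h>

/* Two near-identical readers of the same 32-bit field; keeping such
 * duplicates consistent by hand is the maintenance problem described above. */

uint32_t read_u32_be(const uint8_t *p) {      /* big-endian layout    */
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

uint32_t read_u32_le(const uint8_t *p) {      /* little-endian layout */
    return ((uint32_t)p[3] << 24) | ((uint32_t)p[2] << 16) |
           ((uint32_t)p[1] << 8)  |  (uint32_t)p[0];
}
```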

Compiling for Parallel Architectures

Control Flow Emulation on Tiled SIMD Architectures
Abstract
Heterogeneous multi-core and streaming architectures such as the GPU, Cell, ClearSpeed, and Imagine processors have better power/performance ratios and memory bandwidth than traditional architectures. These types of processors are increasingly being used to accelerate compute-intensive applications. Their performance advantage is achieved by using multiple SIMD processor cores but limiting the complexity of each core, and by combining this with a simplified memory system. In particular, these processors generally avoid the use of cache coherency protocols and may even omit general-purpose caches, opting for restricted caches or explicitly managed local memory.
We show how control flow can be emulated on such tiled SIMD architectures and how memory access can be organized to avoid the need for a general-purpose cache and to tolerate long memory latencies. Our technique uses streaming execution and multipass partitioning. Our prototype targets GPUs. On GPUs the memory system is deeply pipelined and caches for read and write are not coherent, so reads and writes may not use the same memory locations simultaneously. This requires the use of double-buffered streaming. We emulate general control flow in a way that is transparent to the programmer and include specific optimizations in our approach that can deal with double-buffering.
Ghulam Lashari, Ondřej Lhoták, Michael McCool
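For background on why control flow is problematic on SIMD hardware, the sketch below shows the classic flattening of a branch into both sides plus a select; the paper's multipass partitioning and streaming scheme handles general control flow far beyond this simple case.

```c
/* Per-lane branches diverge on SIMD hardware; the classic workaround
 * computes both sides and keeps one result with a select. This is
 * background, not the multipass-partitioning technique of the paper. */

float with_branch(float a, float b, int cond) {
    if (cond) return a * 2.0f;
    else      return b + 1.0f;
}

float flattened(float a, float b, int cond) {
    float t = a * 2.0f;          /* both arms executed unconditionally */
    float f = b + 1.0f;
    return cond ? t : f;         /* select keeps the wanted result     */
}
```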
Generating SIMD Vectorized Permutations
Abstract
This paper introduces a method to generate efficient vectorized implementations of small stride permutations using only vector load and vector shuffle instructions. These permutations are crucial for high-performance numerical kernels including the fast Fourier transform. Our generator takes as input only the specification of the target platform’s SIMD vector ISA and the desired permutation. The basic idea underlying our generator is to model vector instructions as matrices and sequences of vector instructions as matrix formulas using the Kronecker product formalism. We design a rewriting system and a search mechanism that applies matrix identities to generate those matrix formulas that have vector structure and minimize a cost measure that we define. The formula is then translated into the actual vector program for the specified permutation. For three important classes of permutations, we show that our method yields a solution with the minimal number of vector shuffles. Inserting the generated permutations into a fast Fourier transform yields a significant speedup.
Franz Franchetti, Markus Püschel
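As background on the formalism, the block below recalls the standard Kronecker-product notation and a small stride permutation; these are textbook definitions, not results from the paper.

```latex
% Kronecker product: A \otimes B = [a_{ij} B].
% The 4-point stride-2 permutation L^4_2 (the linearized transpose of a
% 2 x 2 array) is the matrix
\[
L^4_2 =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
L^4_2\,(x_0,x_1,x_2,x_3)^{\mathsf T} = (x_0,x_2,x_1,x_3)^{\mathsf T}.
\]
% Formulas of the shape A \otimes I_\nu act identically on all \nu lanes
% and thus map directly onto \nu-way vector instructions, which is what
% makes matrix formulas a convenient target for the rewriting system
% described above.
```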
Automatic Transformations for Communication-Minimized Parallelization and Locality Optimization in the Polyhedral Model
Abstract
The polyhedral model provides powerful abstractions to optimize loop nests with regular accesses. Affine transformations in this model capture a complex sequence of execution-reordering loop transformations that can improve performance by parallelization as well as locality enhancement. Although a significant body of research has addressed affine scheduling and partitioning, the problem of automatically finding good affine transforms for communication-optimized coarse-grained parallelization together with locality optimization for the general case of arbitrarily-nested loop sequences remains a challenging problem.
We propose an automatic transformation framework to optimize arbitrarily-nested loop sequences with affine dependences for parallelism and locality simultaneously. The approach finds good tiling hyperplanes by embedding a powerful and versatile cost function into an Integer Linear Programming formulation. These tiling hyperplanes are used for communication-minimized coarse-grained parallelization as well as for locality optimization. The approach enables the minimization of inter-tile communication volume in the processor space, and minimization of reuse distances for local execution at each node. Programs requiring one-dimensional versus multi-dimensional time schedules (with scheduling-based approaches) are all handled with the same algorithm. Synchronization-free parallelism, permutable loops or pipelined parallelism at various levels can be detected. Preliminary studies of the framework show promising results.
Uday Bondhugula, Muthu Baskaran, Sriram Krishnamoorthy, J. Ramanujam, Atanas Rountev, P. Sadayappan
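For readers new to the model, the block below states the standard legality condition for a tiling hyperplane together with a tiny example; the paper's contribution is the ILP-based cost function that chooses among such hyperplanes, which is only summarized in the abstract.

```latex
% A one-dimensional affine transform of the iteration vector \vec{i} is
%   \phi(\vec{i}) = \vec{c}\cdot\vec{i} + c_0 .
% Tiling along \phi is legal iff every dependence from source iteration
% \vec{s} to target iteration \vec{t} is weakly satisfied:
\[
\phi(\vec{t}) - \phi(\vec{s}) \;\ge\; 0
\qquad \text{for all dependences } \vec{s} \to \vec{t}.
\]
% Example: for the loop nest
%   for (i) for (j) A[i][j] = A[i-1][j] + A[i][j-1];
% the dependence vectors are (1,0) and (0,1), so \phi_1(i,j) = i and
% \phi_2(i,j) = j are both legal tiling hyperplanes. The framework's ILP
% then minimizes a bound on \phi(\vec{t}) - \phi(\vec{s}), which models
% communication volume and reuse distance.
```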

Runtime Techniques and Tools

How to Do a Million Watchpoints: Efficient Debugging Using Dynamic Instrumentation
Abstract
Application debugging is a tedious but inevitable chore in any software development project. An effective debugger can make programmers more productive by allowing them to pause execution and inspect the state of the process, or monitor writes to memory to detect data corruption. This paper introduces the new concept of Efficient Debugging using Dynamic Instrumentation (EDDI). The paper demonstrates for the first time the feasibility of using dynamic instrumentation on-demand to accelerate software debuggers, especially when the available hardware support is lacking or inadequate. As an example, EDDI can simultaneously monitor millions of memory locations without crippling the host processing platform. It does this in software and hence provides a portable debugging environment. It is also well suited for interactive debugging because of its low overhead. EDDI provides a scalable and extensible debugging framework that can substantially increase the feature set of current debuggers.
Qin Zhao, Rodric Rabbah, Saman Amarasinghe, Larry Rudolph, Weng-Fai Wong
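To make "millions of watchpoints in software" concrete, here is a hypothetical sketch of a bitmap check that a dynamic instrumentation system could insert before each store; the names, granularity, and layout are assumptions for illustration, not EDDI's actual mechanism.

```c
#include <stdint.h>

/* Hypothetical watchpoint bitmap: one bit per 8-byte granule of the
 * monitored address range (illustrative layout only). */
extern uint8_t watch_bitmap[];
extern void on_watchpoint_hit(void *addr);   /* hand control to the debugger */

static inline void check_store(void *addr) {
    uintptr_t g = (uintptr_t)addr >> 3;      /* 8-byte granule index         */
    if (watch_bitmap[g >> 3] & (1u << (g & 7)))
        on_watchpoint_hit(addr);             /* watched location written     */
}

/* Instrumented form of "*p = v;" as the tool might rewrite it. */
static inline void instrumented_store(int *p, int v) {
    check_store(p);                          /* inserted check               */
    *p = v;                                  /* original store               */
}
```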
Compiler-Guaranteed Safety in Code-Copying Virtual Machines
Abstract
Virtual Machine authors face a difficult choice between low-performance, cheap interpreters and specialized, costly compilers. A method able to bridge this wide gap is the existing code-copying technique that reuses chunks of the VM’s binary code to create a simple JIT. This technique is not reliable without a compiler guaranteeing that copied chunks are still functionally equivalent despite aggressive optimizations. We present a proof-of-concept, minimal-impact modification of a highly optimizing compiler, GCC. A VM programmer marks chunks of VM source code as copyable. The chunks of native code resulting from compilation of the marked source become addressable and self-contained. Chunks can be safely copied at VM runtime, concatenated, and executed together. This allows code-copying VMs to safely achieve speedups of up to 3 times (1.67 on average) over direct interpretation. This maintainable enhancement makes the code-copying technique reliable and thus practically usable.
Gregory B. Prokopski, Clark Verbrugge
Hardware JIT Compilation for Off-the-Shelf Dynamically Reconfigurable FPGAs
Abstract
JIT compilation is an execution model that translates critical parts of the program at run time to a low-level representation. Typically a JIT compiler produces machine code from an intermediate bytecode representation. This paper considers a hardware JIT compiler targeting FPGAs, which are digital circuits configurable as needed to implement application-specific circuits. Recent FPGAs in the Xilinx Virtex family are particularly attractive for hardware JIT because they are reconfigurable at run time, they contain both CPUs and reconfigurable logic, and their architecture strikes a balance of features.
In this paper we discuss the design of a hardware architecture and compiler able to dynamically enhance the instruction set with hardware specialized instructions. A prototype system based on the Xilinx Virtex family supporting hardware JIT compilation is described and evaluated.
Etienne Bergeron, Marc Feeley, Jean Pierre David
Visualization of Program Dependence Graphs
Abstract
The analysis of a compiler’s intermediate data structures helps in debugging complex optimizations. We present a graphical tool for analyzing the program dependence graph of Sun Microsystems’ Java HotSpot™ server compiler. The tool saves snapshots of the graph during the compilation. It displays the graphs and provides filtering mechanisms based on customizable JavaScript code and regular expressions. High performance and sophisticated navigation possibilities enable the tool to handle large graphs with thousands of nodes.
Thomas Würthinger, Christian Wimmer, Hanspeter Mössenböck

Analyses

On the Relative Completeness of Bytecode Analysis Versus Source Code Analysis
Abstract
We discuss the challenges faced by bytecode analyzers designed for code verification compared to similar analyzers for source code. While a bytecode-level analysis brings many simplifications, e.g., fewer cases, independence from source syntax, name resolution, etc., it also introduces precision loss that must be recovered either via preprocessing, more precise abstract domains, more precise transfer functions, or a combination thereof.
The paper studies the relative completeness of a static analysis for bytecode compared to the analysis of the program source. We illustrate it through examples originating from the design and the implementation of Clousot, a generic static analyzer based on Abstract Interpretation for the analysis of MSIL.
Francesco Logozzo, Manuel Fähndrich
Efficiency, Precision, Simplicity, and Generality in Interprocedural Data Flow Analysis: Resurrecting the Classical Call Strings Method
Abstract
The full call strings method is the most general, simplest, and most precise method of performing context-sensitive interprocedural data flow analysis. It remembers contexts using call strings. For full precision, all call strings up to a prescribed length must be constructed. Two limitations of this method are (a) it cannot be used for frameworks with infinite lattices, and (b) the prescribed length is quadratic in the size of the lattice, resulting in an impractically large number of call strings. These limitations have resulted in a proliferation of ad hoc methods which compromise on generality, precision, or simplicity.
We propose a variant of the classical full call strings method which reduces the number of call strings, and hence the analysis time, by orders of magnitude, as corroborated by our empirical measurements. It reduces the worst-case call string length from quadratic in the size of the lattice to linear. Further, unlike the classical method, this worst-case length need not be reached. Our approach retains the precision, generality, and simplicity of the call strings method without imposing any additional constraints. It can accommodate demand-driven approximations and hence can be used for frameworks with infinite lattices.
Uday P. Khedker, Bageshri Karkare
Java Bytecode Verification for @NonNull Types
Abstract
Java’s annotation mechanism allows us to extend its type system with non-null types. However, checking such types cannot be done using the existing bytecode verification algorithm. We extend this algorithm to verify non-null types using a novel technique that identifies aliasing relationships between local variables and stack locations in the JVM. We formalise this for a subset of Java Bytecode and report on experiences using our implementation.
Chris Male, David J. Pearce, Alex Potanin, Constantine Dymnikov
Efficient Context-Sensitive Shape Analysis with Graph Based Heap Models
Abstract
The performance of heap analysis techniques has a significant impact on their utility in an optimizing compiler. Most shape analysis techniques perform interprocedural dataflow analysis in a context-sensitive manner, which can result in analyzing each procedure body many times (causing significant increases in runtime even if the analysis results are memoized). To improve the effectiveness of memoization (and thus speed up the analysis) project/extend operations are used to remove portions of the heap model that cannot be affected by the called procedure (effectively reducing the number of different contexts that a procedure needs to be analyzed with). This paper introduces project/extend operations that are capable of accurately modeling properties that are important when analyzing non-trivial programs (sharing, nullity information, destructive recursive functions, and composite data structures). The techniques we introduce are able to handle these features while significantly improving the effectiveness of memoizing analysis results (and thus improving analysis performance). Using a range of well known benchmarks (many of which have not been successfully analyzed using other existing shape analysis methods) we demonstrate that our approach results in significant improvements in both accuracy and efficiency over a baseline analysis.
Mark Marron, Manuel Hermenegildo, Deepak Kapur, Darko Stefanovic

Atomicity and Transactions

Coqa: Concurrent Objects with Quantized Atomicity
Abstract
This paper introduces a new language model, Coqa, for deeply embedding concurrent programming into objects. Every program written in our language has the desirable behaviors of atomicity, mutual exclusion, and race freedom automatically built in. A key property of our model is the notion of quantized atomicity: every concurrent program execution can be viewed as being divided into quantum regions of atomic execution, greatly reducing the number of interleavings to consider. Rather than building atomicity locally, i.e., declaring some code blocks as atomic blocks and leaving other code segments with no guarantee of any atomicity property, we build it in globally, so that a form of atomicity, quantized atomicity, ubiquitously exists at all program points. We justify our approach both theoretically, by showing that a formal representation, KernelCoqa, has provable quantized atomicity properties, and practically, by implementing CoqaJava, a Java extension incorporating all of the Coqa features.
Yu David Liu, Xiaoqi Lu, Scott F. Smith
Keep Off the Grass: Locking the Right Path for Atomicity
Abstract
Atomicity provides strong guarantees against errors caused by unanticipated thread interactions, but is difficult for programmers to implement with low-level concurrency primitives. With the introduction of multicore processors, the problems are compounded. Atomic sections are a high level language feature that programmers can use to designate the blocks of code that need to be free from unanticipated thread interactions, letting the language implementation handle the low-level details such as deadlock. From a language designer’s point of view, the challenge is to implement atomic sections without compromising performance.
We propose an implementation of atomic sections that inserts locks transparently into object-oriented programs. The main advantages of our approach are: (1) We infer path expressions (that at run-time resolve to actual objects) for many more accesses in the atomic section than previous work could infer. (2) We use multi-granularity locking for guarding iterative traversals. (3) We ensure freedom from deadlock by rolling back the lock acquisition phase. (4) We release locks as early as possible. In summary, our approach uses a finer-grained locking discipline than previous lock inference techniques.
Dave Cunningham, Khilan Gudka, Susan Eisenbach
Supporting Legacy Binary Code in a Software Transaction Compiler with Dynamic Binary Translation and Optimization
Abstract
Transactional memory (TM) has been shown to be a promising programming model for multi-core systems. We developed a Software-based Transactional Memory (STM) compiler that generates efficient transactional code for transactions to run on an STM runtime without the need for transactional hardware support. Since real-world applications often invoke third-party libraries available only in binary form, it is imperative for our STM compiler to support legacy binary functions and provide an efficient solution to convert those invoked inside transactions to the corresponding transactional code. Our STM compiler employs a Lightweight Dynamic Binary Translation and Optimization Module (LDBTOM) to automatically convert legacy binary functions to transactional code. In this paper, we describe our LDBTOM system, which 1) seamlessly integrates the translated code with the STM compiler generated code to run on the STM runtime, and 2) optimizes the translated code taking advantage of dynamic optimization opportunities and STM runtime information. Although the binary code is inherently harder to optimize than high-level source code, our experiment shows that it can be translated and optimized into efficient transactional code by LDBTOM.
Cheng Wang, Victor Ying, Youfeng Wu
Backmatter
Metadata
Title
Compiler Construction
Edited by
Laurie Hendren
Copyright Year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-78791-4
Print ISBN
978-3-540-78790-7
DOI
https://doi.org/10.1007/978-3-540-78791-4
