About this Book

This book constitutes the refereed proceedings of the 4th International Conference on Tools and Methods for Program Analysis, TMPA 2017, held in Moscow, Russia, on March 3-4, 2017.
The 12 revised full papers and 5 revised short papers, presented together with three abstracts of keynote talks, were carefully reviewed and selected from 51 submissions. The papers deal with topics such as software test automation; static program analysis; verification; dynamic methods of program analysis; testing and analysis of parallel and distributed systems; testing and analysis of high-load and high-availability systems; analysis and verification of hardware and software systems; methods of building quality software; and tools for software analysis, testing, and verification.

Table of Contents

Frontmatter

Distributed Analysis of the BMC Kind: Making It Fit the Tornado Supercomputer

Abstract
Software analysis is becoming increasingly important as a means of software quality assurance. Most work in this area focuses on a single-machine scenario, in which the analysis is implemented and run on a single processing node, as this fits current software development methodologies well. We argue that in some cases it is reasonable to employ high-performance computing (HPC) for software analysis, if the performance gains justify the increase in computational requirements. In this paper we present our experience implementing an HPC version of the bounded model checker Borealis, the major problems we encountered together with their solutions, and evaluation results on a number of different real-world projects.
Azat Abdullin, Daniil Stepanov, Marat Akhin
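The paper's distribution scheme is specific to Borealis and the Tornado supercomputer, but the underlying idea, treating each verification task as an independent job spread across processing nodes, can be pictured in a few lines. Everything below is illustrative: check_function is a hypothetical stand-in for a per-function bounded model checking run, and Python's multiprocessing stands in for the cluster scheduler.

```python
from multiprocessing import Pool

def check_function(fn_name: str) -> tuple[str, str]:
    # A real worker would encode the function body up to a bound and
    # query an SMT solver; this placeholder just reports a verdict.
    return fn_name, "safe"

def analyze(functions: list[str], workers: int = 4) -> dict[str, str]:
    # Each function is an independent task with no shared state,
    # so the work distributes naturally across nodes.
    with Pool(workers) as pool:
        return dict(pool.map(check_function, functions))

if __name__ == "__main__":
    print(analyze(["main", "parse_input", "handle_request"]))
```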

Predicate Abstraction Based Configurable Method for Data Race Detection in Linux Kernel

Abstract
The paper presents a configurable method for static data race detection. The method is based on a lightweight approach that implements the Lockset algorithm with a simplified memory model. The paper contributes two heavyweight extensions which allow adjusting the precision of the analysis by choosing a balance between the resources spent and the number of false alarms. The extensions are (1) counterexample-guided refinement based on predicate abstraction and (2) thread analysis. The approach was implemented in the CPALockator tool and applied to Linux kernel modules. Real races found by the tool have been confirmed and fixed by Linux kernel developers.
Pavel Andrianov, Vadim Mutilin, Alexey Khoroshilov
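For readers unfamiliar with the lightweight base analysis, here is a minimal sketch of the classic Lockset idea (Python, for illustration only): the candidate lock set of each shared variable is intersected with the locks held at every access, and an empty set signals a possible race. CPALockator's actual implementation, with its simplified memory model and the two extensions above, is considerably more involved.

```python
class Lockset:
    def __init__(self):
        self.held = {}        # thread id -> locks currently held
        self.candidates = {}  # variable -> locks that guarded every access so far

    def acquire(self, tid, lock):
        self.held.setdefault(tid, set()).add(lock)

    def release(self, tid, lock):
        self.held.get(tid, set()).discard(lock)

    def access(self, tid, var):
        locks = self.held.get(tid, set())
        if var not in self.candidates:
            self.candidates[var] = set(locks)   # first access: all held locks are candidates
        else:
            self.candidates[var] &= locks       # keep only locks held at every access
        if not self.candidates[var]:
            print(f"possible race on {var!r}, accessed by thread {tid}")

d = Lockset()
d.acquire("T1", "L"); d.access("T1", "x"); d.release("T1", "L")
d.access("T2", "x")   # no common lock guards x any more -> possible race
```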

Extended Context-Free Grammars Parsing with Generalized LL

Abstract
Parsing plays an important role in static program analysis: during this step a structural representation of the code is created, upon which further analysis is performed. Parser generator tools, when provided with a syntax specification, automate parser development. Language documentation often acts as such a specification. Documentation usually takes the form of an ambiguous grammar in Extended Backus-Naur Form (EBNF), which most parser generators fail to process. Automatic grammar transformation generally degrades parsing performance. Some approaches support EBNF grammars natively, but they all fail to handle ambiguous grammars. On the other hand, the Generalized LL (GLL) parsing algorithm handles arbitrary context-free grammars and achieves good performance, but cannot handle EBNF grammars. The main contribution of this paper is a modification of the GLL algorithm that can process grammars in a form closely related to EBNF (Extended Context-Free Grammar). We also show that the modification improves parsing performance compared to the grammar-transformation-based approach.
Artem Gorokhov, Semyon Grigorev
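To see why the transformation-based alternative degrades performance, consider the standard rewrite it relies on: every EBNF repetition is eliminated by introducing a fresh right-recursive nonterminal, inflating the grammar the parser must then handle. Below is a toy version of that rewrite, with rules as (lhs, rhs) tuples and ('*', body) marking a starred group; this representation is invented for the sketch.

```python
fresh = 0
def fresh_nt():
    global fresh
    fresh += 1
    return f"_Rep{fresh}"

def eliminate_star(rules):
    """Rewrite ('*', body) items into fresh right-recursive nonterminals."""
    out = []
    for lhs, rhs in rules:
        new_rhs = []
        for sym in rhs:
            if isinstance(sym, tuple) and sym[0] == "*":
                nt = fresh_nt()
                out.append((nt, list(sym[1]) + [nt]))  # Rep -> body Rep
                out.append((nt, []))                   # Rep -> epsilon
                new_rhs.append(nt)
            else:
                new_rhs.append(sym)
        out.append((lhs, new_rhs))
    return out

# list : '[' (item ',')* ']'  becomes three rules with a fresh nonterminal
print(eliminate_star([("list", ["[", ("*", ["item", ","]), "]"])]))
```

The modified GLL algorithm described in the paper avoids this blow-up by handling the EBNF-like constructs directly.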

Defect Report Classification in Accordance with Areas of Testing

Abstract
Thousands of software defects can be found during testing and submitted to a bug-tracking system. This paper aims to show the importance of distinguishing different areas of testing in order to perform further meaningful manipulations with defects: computing various metrics and classifying or clustering bugs. An area of testing is made up of a group of software components. The Component/s field in a bug-tracking system usually indicates what area a defect belongs to. However, the field can sometimes be empty or may not include all the necessary elements. Moreover, every defect belongs to one or several areas, which is why the classes can overlap in the classification. It therefore becomes necessary to use the Summary field, which contains brief information about the defect. Both fields have text format and require natural language processing. This paper introduces techniques to classify defect reports according to areas of testing, using the data of these two text fields together with natural language processing methods and tools.
Anna Gromova
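A minimal sketch of the multi-label setup described above, with invented area names and keywords: an area is a group of components, a defect may fall into several areas, and the Summary text serves as a fallback when Component/s is empty or incomplete. The paper's actual NLP pipeline is, of course, richer than this keyword matching.

```python
AREAS = {
    "UI": {"components": {"widgets", "dialogs"},
           "keywords": {"button", "screen", "layout"}},
    "Network": {"components": {"http", "sync"},
                "keywords": {"timeout", "connection", "offline"}},
}

def classify(components: set[str], summary: str) -> set[str]:
    labels = set()
    words = set(summary.lower().split())
    for area, spec in AREAS.items():
        if components & spec["components"] or words & spec["keywords"]:
            labels.add(area)   # classes may overlap: a defect can hit several areas
    return labels

# empty Component/s field: the Summary alone yields {'Network', 'UI'}
print(classify(set(), "Connection timeout when the settings screen is open"))
```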

Technology and Tools for Developing Industrial Software Test Suites Based on Formal Models and Implementing Scalable Testing Process on Supercomputer

Abstract
This article proposes an approach to obtaining verified test scenarios for a software system under development without losing the model's semantics. The problem of generating test cases for real software systems is addressed by using a multi-level paradigm to obtain the real system's signals, transactions, and states. Test scenarios are executed on a supercomputer, where each scenario runs on its own computing node. The paper also suggests a software tool consisting of several modules for detailing test scenarios and distributing them across compute nodes. The process of translating abstract data structures into the detailed data structures used in the system implementation is presented with examples.
Vsevolod Kotlyarov, Pavel Drobintsev, Nikita Voinov, Ivan Selin, Alexey Tolstoles
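The multi-level detailing step can be pictured as a mapping from abstract model actions to the concrete signals of the implementation, applied while preserving the model-level ordering. The action names and signal formats below are invented for illustration.

```python
# hypothetical mapping from abstract actions to concrete implementation signals
DETAILING = {
    "connect":    ["SIG_OPEN(ch=1)", "SIG_ACK(ch=1)"],
    "send_data":  ["SIG_DATA(ch=1, payload='hello')"],
    "disconnect": ["SIG_CLOSE(ch=1)"],
}

def detail(abstract_scenario: list[str]) -> list[str]:
    # each abstract action expands to the concrete signals that realize it,
    # keeping the model-level ordering (and hence the model's semantics)
    return [sig for action in abstract_scenario for sig in DETAILING[action]]

print(detail(["connect", "send_data", "disconnect"]))
```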

Dl-Check: Dynamic Potential Deadlock Detection Tool for Java Programs

Abstract
Deadlocks are one of the main problems in multithreaded programming. This paper presents a novel approach to detecting potential deadlocks at run time. In contrast to many dynamic methods based on analyzing execution traces, it detects a potential deadlock immediately at the point of the first lock-hierarchy violation during execution, so the full run-time context is available at the point of detection. The approach is based on instrumenting the target program to capture lock acquisition and release operations. An acyclic lock-order graph is maintained incrementally, so cycles are detected during graph maintenance. The presented algorithm is based on topological order maintenance and uses various heuristics and concurrent data structures to improve performance and scalability. Experimental results show that Dl-Check is efficient for large-scale multithreaded applications.
Nikita Koval, Dmitry Tsitelov, Roman Elizarov
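The core invariant is easy to state: keep a lock-order graph with an edge u → v whenever v is acquired while u is held, and report a potential deadlock the moment an inserted edge would close a cycle. The sketch below (Python, illustrative only) checks cycles with plain DFS reachability; Dl-Check's incremental topological-order maintenance achieves the same effect far more efficiently.

```python
class LockOrderGraph:
    def __init__(self):
        self.succ = {}   # lock -> locks ever acquired while it was held

    def _reaches(self, src, dst, seen):
        if src == dst:
            return True
        seen.add(src)
        return any(n not in seen and self._reaches(n, dst, seen)
                   for n in self.succ.get(src, ()))

    def add_edge(self, held, acquired):
        # a cycle appears iff `held` is already reachable from `acquired`
        if self._reaches(acquired, held, set()):
            print(f"potential deadlock: {acquired} -> ... -> {held} -> {acquired}")
            return
        self.succ.setdefault(held, set()).add(acquired)

g = LockOrderGraph()
g.add_edge("A", "B")   # thread 1 acquires B while holding A
g.add_edge("B", "A")   # thread 2 acquires A while holding B -> reported
```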

A Survey on Model-Based Testing Tools for Test Case Generation

Abstract
Compared to traditional testing methods, Model-Based Testing (MBT) can manage and accomplish testing tasks more cheaply and efficiently. A number of MBT tools have been developed in the past few years to support MBT activities, but their characteristics vary largely from one to another, and users without prior knowledge can hardly choose appropriate tools. This paper provides a survey of the emerging MBT tools following a list of criteria that emphasizes test case generation while also illustrating aspects of test data and test script generation. First, we introduce the general MBT process for a common understanding; we then present a list of criteria oriented to test case generation covering four dimensions, i.e., model specification, test generation, test description, and overall support; following the proposed criteria, we survey and characterize the emerging MBT tools; finally, we summarize the current limitations based on our survey and shed light on further directions for MBT tool development.
Wenbin Li, Franck Le Gall, Naum Spaseski

Functional Parser of Markdown Language Based on Monad Combining and Monoidal Source Stream Representation

Abstract
The main goal of this work is to develop flexible and expressive methods of parser construction based on modern techniques for structuring effectful computations. We compare two approaches to describing effectful computations, monad transformers and extensible effects, with respect to the construction of parser combinator libraries. We develop two parser combinator libraries, one based on monad transformers and another on top of extensible effects, and a Markdown-to-HTML translator with embedded formula blocks based on the first library.
Georgiy Lukyanov, Artem Pelenitsin
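The paper's libraries are built in Haskell on monad transformers and extensible effects; the following language-neutral sketch only conveys the underlying monadic idea: a parser maps an input string to (value, rest) results, and bind sequences parsers while threading the remaining input. All names are local to this sketch.

```python
def pure(x):
    return lambda s: [(x, s)]

def bind(p, f):
    # run p, then feed each result into the parser chosen by f
    return lambda s: [r for v, rest in p(s) for r in f(v)(rest)]

def char(c):
    return lambda s: [(c, s[1:])] if s[:1] == c else []

def many(p):
    def run(s):
        out, rest = [], s
        while True:
            rs = p(rest)                 # p must consume input on success
            if not rs:
                return [(out, rest)]
            v, rest = rs[0]
            out = out + [v]
    return run

rest_of_input = lambda s: [(s, "")]      # consume everything that is left

# a Markdown-flavoured toy: count '#' characters, then take the title text
header = bind(many(char("#")), lambda hs:
         bind(rest_of_input, lambda text:
         pure((len(hs), text.strip()))))
print(header("## Title"))                # [((2, 'Title'), '')]
```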

Simple Type-Based Alias Analysis for a VLIW Processor

Abstract
Major C/C++ compilers use their own implementations of type-based alias analysis; however, the impact of such algorithms on VLIW (Very Long Instruction Word) processors has not been explored. This paper describes an implementation and the effects of a simple type-based alias analysis known as strict aliasing in the C/C++ compiler for the Elbrus processor. Elbrus is a general-purpose VLIW microprocessor that relies on compiler optimizations for good performance. The paper demonstrates that strict aliasing is a fast and scalable alias analysis technique. Measurements on the SPEC CPU2006 benchmarks showed a mean execution speed improvement of 28.6% for the INT tests and 13.3% for the FP tests in per-module compilation mode. In whole-program mode, the measurements showed a 4.3% execution speed improvement for the FP tests and a 1.9% improvement for the INT tests. These results illustrate that strict aliasing can serve as a substitute for traditional analysis in per-module build mode and can also disambiguate additional pointers in whole-program build mode.
Aleksey Markin, Alexandr Ermolitsky
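The strict-aliasing rule itself is compact: two accesses may alias only if their types are compatible, with character types allowed to alias anything, and it is this disambiguation that lets a VLIW compiler schedule loads and stores more aggressively. A toy version over simplified type names (the compiler's real type system is far richer):

```python
CHAR_TYPES = {"char", "signed char", "unsigned char"}

def may_alias(t1: str, t2: str) -> bool:
    if t1 in CHAR_TYPES or t2 in CHAR_TYPES:
        return True            # character accesses may alias any object
    return t1 == t2            # otherwise only compatible types may alias

# the optimizer may keep a float value in a register across a store through int*
print(may_alias("float", "int"))   # False -> the accesses cannot interfere
print(may_alias("char", "int"))    # True
```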

5W+1H Static Analysis Report Quality Measure

Abstract
Modern development best practices rank static analysis quite high among quality assurance methods. Static analyzers report the errors they find and help improve software quality. However, the quality of the reports themselves is rarely evaluated, if at all. In this paper we generalize analyzer output messages and explore ways to improve the reliability of comparison results. We introduce informational value as a measure of report quality with respect to the 5W (What, When, Where, Who, Why) and 1H (How to Fix) questions, formulate and verify a hypothesis about its independence from generic quality measures, suggest a methodology for including it in static analysis benchmarking, and present observations from our testing that might help tool developers move towards more understandable reports.
Maxim Menshchikov, Timur Lepikhin
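A toy rendering of the informational-value idea: score a report by the fraction of the 5W+1H facets it answers. The facet-detection rules and field names below are invented placeholders; the paper derives the measure more carefully.

```python
FACETS = {
    "what":  lambda m: bool(m.get("message")),
    "where": lambda m: bool(m.get("file") and m.get("line")),
    "when":  lambda m: bool(m.get("trace")),      # execution path to the defect
    "who":   lambda m: bool(m.get("variable")),   # entity blamed by the analyzer
    "why":   lambda m: bool(m.get("rule")),       # violated rule / rationale
    "how":   lambda m: bool(m.get("fix_hint")),
}

def informational_value(report: dict) -> float:
    return sum(check(report) for check in FACETS.values()) / len(FACETS)

r = {"message": "null dereference", "file": "a.c", "line": 42, "rule": "CWE-476"}
print(informational_value(r))   # 0.5 -> answers What, Where, Why
```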

Layered Layouts for Software Systems Visualization Using Nested Petri Nets

Abstract
Modern software systems can be large, often with a complex architecture and non-trivial behaviour, which is why different modelling techniques are used for their design and analysis. Illustrative visualization of a system architecture can facilitate better analysis and give useful insights. In this paper we propose an approach to visualizing software system models. We introduce layered layouts, aimed at combining the advantages of 2D and 3D visualizations. The paper describes the concept of layered layouts and shows how they can represent different model characteristics in a single view. A prototype tool for visualizing hierarchical and multi-agent software system models in the nested Petri nets notation illustrates the idea.
Alexey A. Mitsyuk, Yaroslav V. Kotylev

Unity Application Testing Automation with Appium and Image Recognition

Abstract
This work addresses the problem of integrating simple functional tests (smoke tests) into an automated continuous integration pipeline. While functional testing is typically performed manually by QA staff, a large number of scenarios can be automated, and readily available instruments, such as Appium and Calabash, are designed for this task. Automated smoke testing is especially challenging for applications with a nonstandard GUI, such as games made with Unity. The only viable option in this case is to analyze the screen content as a flat image and identify GUI elements with pattern-matching algorithms. This approach is not novel, but its practical applicability and limitations are rarely discussed. We consider a case study of a mobile tennis game project developed in Unity and covered with a suite of Appium-supported functional tests. We show how the image matching capabilities of the OpenCV library can be used in Appium tests to build a reliable automated QA pipeline.
Maxim Mozgovoy, Evgeny Pyshkin
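The screen-as-image approach reduces to a few OpenCV calls: match a template of the GUI element against a screenshot and, if the score clears a threshold, hand the element's centre to an Appium tap. The file names and threshold below are placeholders.

```python
import cv2

def locate(screenshot_path: str, template_path: str, threshold: float = 0.8):
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    button = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scores = cv2.matchTemplate(screen, button, cv2.TM_CCOEFF_NORMED)
    _, best, _, top_left = cv2.minMaxLoc(scores)   # best match and its position
    if best < threshold:
        return None                                # element not on screen
    h, w = button.shape
    return top_left[0] + w // 2, top_left[1] + h // 2   # centre, for a tap

# pos = locate("screen.png", "play_button.png")
# if pos: driver.tap([pos])   # Appium's tap takes a list of (x, y) positions
```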

Compositional Process Model Synthesis Based on Interface Patterns

Abstract
Coordinating several distributed system components is an error-prone task, since the interaction of a few simple components can generate rather sophisticated behavior. Verification of such systems is very difficult or even impossible because of the so-called state space explosion problem: the size of the system's reachability set grows exponentially with the number of interacting agents. To overcome this problem, several approaches to constructing correct models of interacting agents in a compositional way have been proposed in the literature. They define different properties and conditions to ensure the correct behavior of interacting agents; checking these conditions may, in turn, be quite a problem.
In this paper we propose patterns for the correct composition of component models. To justify these patterns we use special net morphisms; however, to apply the patterns the user does not need to be familiar with the underlying theory.
Roman A. Nesterov, Irina A. Lomazova

Using Functional Directives to Analyze Code Complexity

Abstract
Functional programming languages bring a set of benefits, programmability and absence of mutable state to name a few, that are missing from typical procedural languages. In recent years, efforts have been made to introduce functional concepts into procedural languages (as in C++11 or Python). Research projects such as POLCA have developed means of adding functional-style semantic annotations to procedural code that describe the algorithmic behavior of the application. This allows compilers to perform aggressive code transformations that increase performance and enable portability across different architectures. In this work we describe how static analysis of these functional semantics directives can be used to determine algorithmic complexity.
Daniel Rubio Bonilla, Colin W. Glass
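A toy illustration of the principle: once directives expose that a block is, say, a map over an array, or a nesting of such traversals, its asymptotic complexity follows from the annotations alone. The directive vocabulary below is a hypothetical stand-in, not actual POLCA syntax.

```python
def complexity(directive_tree) -> str:
    """directive_tree: ('map', child) or ('fold', child) nests a linear
    traversal; the string 'scalar' marks a constant-time block."""
    depth, node = 0, directive_tree
    while isinstance(node, tuple) and node[0] in ("map", "fold"):
        depth += 1
        node = node[1]
    return "O(1)" if depth == 0 else f"O(n^{depth})" if depth > 1 else "O(n)"

print(complexity("scalar"))                    # O(1)
print(complexity(("map", "scalar")))           # O(n)   -- a single traversal
print(complexity(("map", ("map", "scalar")))) # O(n^2) -- nested traversals
```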

Generating Cost-Aware Covering Arrays for Free

Abstract
Software systems generally have a large number of configurable options interacting with each other. Such systems are more likely to be prone to errors, crashes, and faulty executions that are usually caused by option interactions. To avoid such errors, testing all possible configurations during the development phase is usually not feasible, since the number of all possible configurations is exponential in the number of options. A t-way covering array (CA) is a 2-dimensional combinatorial object that helps to efficiently cover all t-length option interactions of the system under test. Generating a CA with a small number of configurations is important to shorten the testing phase. However, the testing cost (e.g. the testing time) may differ from one configuration to another. Currently, most sequential tools can generate optimum CAs in terms of the number of configurations, but they are not cost-aware, i.e., they cannot handle the varying costs of configurations. In this work, we implement a parallel, cost-aware CA-generation tool based on the sequential tool Jenny to generate lower-cost CAs faster. Experimental results show that our cost-aware CA construction approach generates CAs that are on average 32% and 21% lower in cost for t = 2 and t = 3, respectively, compared to state-of-the-art CA-generation tools. Moreover, the cost-awareness comes for free, i.e., we speed up our algorithm by leveraging parallel computation. The cost models and cost reduction techniques we propose could also be adapted to other existing CA generation tools.
Mustafa Kemal Taş, Hanefi Mercan, Gülşen Demiröz, Kamer Kaya, Cemal Yilmaz
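A greedy sketch of cost-aware pairwise (t = 2) covering-array construction: each candidate configuration is scored by newly covered pairs per unit cost, so cheaper configurations win when coverage gains are equal. Jenny and the paper's parallel tool work differently; the domains and cost function here are invented, and the exhaustive candidate enumeration only suits toy inputs.

```python
from itertools import combinations, product

def pairs(row):
    # the set of 2-way option interactions a configuration covers
    return {((i, row[i]), (j, row[j])) for i, j in combinations(range(len(row)), 2)}

def greedy_ca(domains, cost):
    uncovered = set()
    for row in product(*domains):
        uncovered |= pairs(row)
    ca = []
    while uncovered:
        best = max(product(*domains),
                   key=lambda row: len(pairs(row) & uncovered) / cost(row))
        ca.append(best)
        uncovered -= pairs(best)
    return ca

# three options; the third value of the last option is expensive to test
domains = [[0, 1], [0, 1], [0, 1, 2]]
cost = lambda row: 1.0 + (2.0 if row[2] == 2 else 0.0)
for row in greedy_ca(domains, cost):
    print(row, cost(row))
```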

Statically Checking Conventionality of Array Objects in JavaScript

Abstract
A JavaScript array object is just like any other kind of object except that it has a property named length that is automatically updated. An array object may also have other property names, such as non-numeric strings and string representations of negative integers. A conventional array is an array that has a property named length and whose other property names are all (string representations of) non-negative integers. This paper presents a conventionality analysis of array objects in JavaScript programs. The analysis provides useful information for program development and understanding, since any non-conventional use of an array could indicate a potential programming error unless it is intended by the programmer. The analysis is built upon a novel abstract string domain that captures array index information more precisely than existing ones. Our experiments on a set of 98 benchmark programs show that arrays are used in a conventional way in 95% of the benchmark programs.
Astrid Younang, Lunjin Lu, Nabil Almashfi
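The conventionality condition itself is straightforward to state over concrete property names, as in the sketch below; the paper's contribution is deciding it statically, over an abstract string domain that tracks which strings can denote array indices.

```python
def is_index(name: str) -> bool:
    # a canonical non-negative integer string: "0" or "7", but not "07" or "-1"
    return name.isdigit() and (name == "0" or not name.startswith("0"))

def is_conventional(prop_names: set[str]) -> bool:
    return "length" in prop_names and all(
        is_index(n) for n in prop_names - {"length"})

print(is_conventional({"length", "0", "1"}))    # True
print(is_conventional({"length", "0", "foo"}))  # False: non-index property
print(is_conventional({"length", "-1"}))        # False: negative index
```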

A Survey of High-Performance Computing for Software Verification

Abstract
Automatic software verification is a method in high demand for software quality assurance, since it makes it possible to formally prove the correctness of programs. However, any precise software verification technique requires a considerable amount of computational resources, such as memory and CPU time. Existing hardware architectures like multi-core CPUs, distributed systems, clouds, and GPUs can boost verification performance impressively. Developers of verification tools have to adapt their algorithms to the peculiarities of modern hardware in order to employ it productively. In this survey, we consider case studies of high-performance computing for speeding up and scaling the verification of programs and program models.
Ilja Zakharov

Backmatter
