About this Book

This book constitutes the refereed proceedings of the 10th International Symposium on Search-Based Software Engineering, SSBSE 2018, held in Montpellier, France, in September 2018.

The 12 full papers and 7 short papers presented here, together with 3 keynotes, 2 tutorials, and 1 anniversary paper, were carefully reviewed and selected from 21 submissions. SSBSE welcomes not only applications from throughout the software engineering lifecycle but also a broad range of search methods, from exact Operational Research techniques to nature-inspired algorithms and simulated annealing.

Chapter "Deploying Search Based Software Engineering with Sapienz at Facebook" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.

Table of Contents

Frontmatter

Keynotes

Frontmatter

Open Access

Deploying Search Based Software Engineering with Sapienz at Facebook

Abstract
We describe the deployment of the Sapienz Search Based Software Engineering (SBSE) testing system. Sapienz has been deployed in production at Facebook since September 2017 to design test cases, localise and triage crashes to developers and to monitor their fixes. Since then, running in fully continuous integration within Facebook’s production development process, Sapienz has been testing Facebook’s Android app, which consists of millions of lines of code and is used daily by hundreds of millions of people around the globe.
We continue to build on the Sapienz infrastructure, extending it to provide other software engineering services, applying it to other apps and platforms, and hope this will yield further industrial interest in and uptake of SBSE (and hybridisations of SBSE) as a result.
Nadia Alshahwan, Xinbo Gao, Mark Harman, Yue Jia, Ke Mao, Alexander Mols, Taijin Tei, Ilya Zorin

Evolving Living Technologies—Insights from the EvoEvo Project

Abstract
The EvoEvo project was a 2013–2017 FP7 European project aiming at developing new evolutionary approaches in information science and producing novel algorithms based on the current understanding of molecular and evolutionary biology, with the ultimate goals of addressing open-ended problems in which the specifications are either unknown or too complicated to express, and of producing software able to operate even in unpredictable, varying conditions. Here we present the main rationales of the EvoEvo project and propose a set of design rules to evolve adaptive software systems.
Guillaume Beslon, Santiago F. Elena, Paulien Hogeweg, Dominique Schneider, Susan Stepney

Tutorials

Frontmatter

Ultra-Large Repair Search Space with Automatically Mined Templates: The Cardumen Mode of Astor

Abstract
Astor is a program repair library which has different modes. In this paper, we present the Cardumen mode of Astor, a repair approach based on mined templates that has an ultra-large search space. We evaluate the capacity of Cardumen to discover test-suite-adequate patches (aka plausible patches) over the 356 real bugs from Defects4J [11]. Cardumen finds 8935 patches over 77 bugs of Defects4J. This is the largest number of automatically synthesized patches ever reported, all patches being available in an open-science repository. Moreover, Cardumen identifies 8 unique patches, that is, patches for Defects4J bugs that had never been repaired in the whole history of program repair.
Matias Martinez, Martin Monperrus

Special Tenth SSBSE papers – “Best of Previous SSBSEs”

Frontmatter

How Can Metaheuristics Help Software Engineers?

Abstract
This paper is a brief description of the revamped presentation based on the original one I had the honor to deliver back in 2009 during the very first SSBSE in London. At that time, the many international forces dealing with search, optimization, and learning (SOL) met software engineering (SE) researchers in person, all of them looking for a quantified manner of modeling and solving problems in software. The contents of this work, as in the original one, build on the basics of metaheuristics to highlight the many good ways in which they can help to create a well-grounded domain where the construction, assessment, and exploitation of software are not just based on human expertise, but enhanced with intelligent automatic tools. Since the whole story started well before the first SSBSE in 2009, we mention a few earlier applications of intelligent algorithms to software engineering, and discuss the present interest and future challenges of the domain, structured into both short- and long-term goals. If we understand this as a cross-fertilization task between research fields, then we can learn a wider and more useful lesson for innovative research. In short, we offer a semantic perspective on the old times (before SBSE), the recent years of SBSE, and the many avenues for future research and development spinning around this exciting clash of stars. A new galaxy has been born out of the body of knowledge in SOL and SE, creating forever a new class of researchers able to build unparalleled tools and deliver scientific results for the benefit of software, that is, of modern societies.
Enrique Alba

A Tutorial on Using and Extending the EvoSuite Search-Based Test Generator

Abstract
EvoSuite is an automated unit test generation tool for Java. It takes a Java class under test as input and produces, as output, JUnit tests optimised for code coverage and enhanced with regression assertions. This paper is a tutorial on how to use EvoSuite to generate tests, how to build and extend EvoSuite, and how to use it to run experiments on search-based testing.
Gordon Fraser
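
To make the output format concrete, the following hand-written sketch shows the shape of test that EvoSuite typically generates: a plain JUnit 4 test class whose method exercises the class under test and ends in regression assertions capturing the observed behaviour. The class under test (java.util.Stack) and the scenario are stand-ins chosen for illustration, not actual EvoSuite output.

```java
import static org.junit.Assert.assertEquals;

import java.util.Stack;
import org.junit.Test;

// Illustrative sketch of the kind of test EvoSuite generates; the class under
// test (java.util.Stack) and the test body are stand-ins, not real tool output.
public class Stack_ESTest {

  @Test
  public void testPushThenPeek() {
    Stack<Integer> stack = new Stack<>();
    stack.push(42);
    stack.push(7);
    // Regression assertions record the behaviour observed on the current
    // version, so later changes that alter it make the test fail.
    assertEquals(2, stack.size());
    assertEquals(Integer.valueOf(7), stack.peek());
  }
}
```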

A Preliminary Systematic Mapping Study of Human Competitiveness of SBSE

Abstract
Search Based Software Engineering (SBSE) seeks to reformulate complex Software Engineering problems as search problems to be optimized through artificial intelligence techniques. As pointed out by Harman in 2007, in his seminal paper on the current state and future of SBSE, it would be very attractive to have convincing examples of human-competitive results in order to champion the field. A landmark effort in this direction was made by Souza and others in the paper titled “The Human Competitiveness of Search Based Software Engineering”, published at SSBSE’2010 and voted by the SBSE community, at the 10th anniversary of SSBSE in 2018, as the most influential paper of the past editions. This paper presents a preliminary systematic mapping study that provides an overview of the current state of human competitiveness of SBSE, carried out via a snowball reading of Souza’s paper. The analysis of the 29 selected papers showed a growing interest in this topic, especially since 2010. Seven of those papers presented relevant experimental results, thus demonstrating the human competitiveness of results produced by SBSE approaches.
Jerffeson Souza, Allysson Allex Araújo, Raphael Saraiva, Pamella Soares, Camila Maia

Main Track Papers

Frontmatter

Search-Based Stress Testing the Elastic Resource Provisioning for Cloud-Based Applications

Abstract
One of the main benefits of cloud computing is to enable customers to deploy their applications on a cloud infrastructure that provisions resources (e.g., memory) to these applications on an as-needed basis. Unfortunately, certain workloads can cause customers to pay for resources that are provisioned to, but not fully used by, their applications; as a result, application performance deteriorates beyond acceptable thresholds and the benefits of cloud computing may be significantly reduced or even completely obliterated. We propose a novel approach to automatically discover such workloads in order to stress test elastic resource provisioning for cloud-based applications. We experimented with four non-trivial applications on the Microsoft Azure cloud to determine how effectively and efficiently our approach explores a very large space of workload parameter values. The results show that, in a search space of over \(10^{40}\) input combinations, our approach discovers the first irregular workload faster than the random approach, and it discovers more irregular workloads that result in much higher costs and performance degradation for applications in the cloud.
Abdullah Alourani, Md. Abu Naser Bikas, Mark Grechanik

Injecting Social Diversity in Multi-objective Genetic Programming: The Case of Model Well-Formedness Rule Learning

Abstract
Software modelling activities typically involve a tedious and time-consuming effort by specially trained personnel. This lack of automation hampers the adoption of the Model Driven Engineering (MDE) paradigm. Nevertheless, in recent years, much research work has been dedicated to learning MDE artifacts instead of writing them manually. In this context, mono- and multi-objective Genetic Programming (GP) has proven to be an efficient and reliable method to derive automation knowledge by using, as training data, a set of examples representing the expected behavior of an artifact. Generally, conformance to the training example set is the main objective guiding the search for a solution. Yet the single-fitness-peak problem, or local-optimum deadlock, one of the major drawbacks of GP, remains when GP is adapted to MDE and hinders the learning results. We aim to show in this paper that improving populations’ social diversity during the evolutionary computation leads to a more efficient search, faster convergence, and more generalizable results. We ascertain that the improvements are due to our changes to the search strategy with an empirical evaluation featuring the case of learning well-formedness rules in MDE with a multi-objective genetic algorithm. The obtained results are striking, and show that semantic diversity allows a rapid convergence toward near-optimal solutions. Moreover, when semantic diversity is used as the crowding distance, this convergence is uniform across a hundred runs.
Edouard Batot, Houari Sahraoui

Automated Optimization of Weighted Non-functional Objectives in Self-adaptive Systems

Abstract
A self-adaptive system (SAS) can reconfigure at run time in response to adverse combinations of system and environmental conditions in order to continuously satisfy its requirements. Moreover, SASs are subject to cross-cutting non-functional requirements (NFRs), such as performance, security, and usability, that collectively characterize how functional requirements (FRs) are to be satisfied. In many cases, the trigger for adapting an SAS may be due to a violation of one or more NFRs. For a given NFR, different combinations of hierarchically-organized FRs may yield varying degrees of satisfaction (i.e., satisficement). This paper presents Providentia, a search-based technique to optimize NFR satisficement when subjected to various sources of uncertainty (e.g., environment, interactions between system elements, etc.). Providentia searches for optimal combinations of FRs that, when considered with different subgoal decompositions and/or differential weights, provide optimal satisficement of NFR objectives. Experimental results suggest that using an SAS goal model enhanced with search-based optimization significantly improves system performance when compared with manually- and randomly-generated weights and subgoals.
Kate M. Bowers, Erik M. Fredericks, Betty H. C. Cheng

Comparison of Search-Based Algorithms for Stress-Testing Integrated Circuits

Abstract
This paper is concerned with the task of ‘stress testing’ an integrated circuit in its operational environment with the goal of identifying any circumstances under which the circuit might suffer from performance issues. Previous attempts to use simple hill-climbing algorithms to automate the generation of tests have faltered because the behaviour of the circuits can be subject to non-determinism, with a search space that can give rise to local maxima. In this paper we seek to work around these problems by experimenting with different search algorithms that ought to be better at handling such search-space properties: random-restart hill-climbing and simulated annealing. We evaluate these enhancements by applying the approach to test the Arm Cache Coherent Interconnect Unit (CCI) on a new 64-bit development platform, and show that both simulated annealing and random-restart hill-climbing outperform the simple hill-climbing algorithm.
Basil Eljuse, Neil Walkinshaw
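
As a reference point for the algorithms compared in this entry, the sketch below shows a generic simulated annealing loop over an integer parameter vector: the acceptance rule occasionally takes worsening moves, which is what lets the search escape the local maxima that trap simple hill climbing. The fitness function, neighbour move, and cooling schedule are placeholders, not the paper’s actual stress-testing setup.

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;

// Generic simulated annealing sketch over integer parameter vectors.
// Fitness, neighbour move, and cooling schedule are placeholders.
public class SimulatedAnnealingSketch {

  static int[] anneal(int[] start, ToDoubleFunction<int[]> fitness,
                      double startTemp, double coolingRate, int steps, long seed) {
    Random rnd = new Random(seed);
    int[] current = start.clone();
    double currentFit = fitness.applyAsDouble(current);
    double temp = startTemp;
    for (int i = 0; i < steps; i++) {
      int[] neighbour = current.clone();
      int idx = rnd.nextInt(neighbour.length);
      neighbour[idx] += rnd.nextBoolean() ? 1 : -1;   // small random perturbation
      double neighbourFit = fitness.applyAsDouble(neighbour);
      double delta = neighbourFit - currentFit;
      // Always accept improvements; accept worsening moves with a probability
      // that shrinks as the temperature cools, allowing escape from local maxima.
      if (delta >= 0 || rnd.nextDouble() < Math.exp(delta / temp)) {
        current = neighbour;
        currentFit = neighbourFit;
      }
      temp *= coolingRate;
    }
    return current;
  }
}
```

Random-restart hill-climbing can reuse the same neighbour move but restarts from a fresh random vector whenever no improving neighbour is found.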

Damage Reduction via White-Box Failure Shaping

Abstract
Emerging hardware that trades reliability guarantees for resource savings presents a challenge to software engineered for deterministic execution. Research areas like approximate computing, however, embrace non-determinism by abandoning strict correctness in favor of maximizing the probability and degree of correctness. Existing work has used stochastic failure sampling to perform white-box searches along software execution paths, producing criticality assessments of which selected operations are likely most damaging if they fail. Here, we apply these assessments to a new domain and employ them using failure shaping, an automated method for reducing a computation’s expected output damage in a model where failures can be relocated but not eliminated. In two case studies, we demonstrate error reductions of 38% to 63% on Strassen’s matrix multiplication algorithm despite a virtually identical failure count. We discuss how our framework helps provide a smooth landscape for performing the search-based software engineering that will be required to apply this technology to larger problems.
Thomas B. Jones, David H. Ackley

Automated Co-evolution of Metamodels and Transformation Rules: A Search-Based Approach

Abstract
Metamodels frequently change over time by adding new concepts or changing existing ones to keep track with the evolving problem domain they aim to capture. This evolution process impacts several dependent artifacts such as model instances, constraints, as well as transformation rules. As a consequence, these artifacts have to be co-evolved to ensure their conformance with new metamodel versions. While several studies addressed the problem of metamodel/model co-evolution (please note the potential name clash for the term co-evolution: in this paper, we refer to the problem of having to co-evolve different dependent artifacts when one of them changes; we are not referring to the application or adaptation of co-evolutionary search algorithms), the co-evolution of metamodels and transformation rules has been less studied. Currently, programmers have to manually change model transformations to make them consistent with new metamodel versions, which requires detecting which transformations to modify and how to change them properly. In this paper, we propose a novel search-based approach to recommend transformation rule changes that make transformations coherent with new metamodel versions by finding a trade-off between maximizing the coverage of metamodel changes and minimizing the number of static errors in the transformation and the number of applied changes to the transformation. We implemented our approach for the ATLAS Transformation Language (ATL) and validated it on four co-evolution case studies. We demonstrate that our approach outperforms the alternatives by comparing the quality of the co-evolution solutions automatically generated by NSGA-II with manually revised transformations, a mono-objective algorithm, and random search.
Wael Kessentini, Houari Sahraoui, Manuel Wimmer

Learning Without Peeking: Secure Multi-party Computation Genetic Programming

Abstract
Genetic Programming is widely used to build predictive models for defect proneness or development effort. The predictive modelling often depends on the use of sensitive data, related to past faults or internal resources, as training data. We envision a scenario in which revealing the training data constitutes a violation of privacy. To ensure organisational privacy in such a scenario, we propose SMCGP, a method that performs Genetic Programming as Secure Multiparty Computation. In SMCGP, one party uses GP to learn a model of training data provided by another party, without actually knowing each datapoint in the training data. We present an SMCGP approach based on the garbled circuit protocol, which is evaluated using two problem sets: a widely studied symbolic regression benchmark, and a GP-based fault localisation technique with real-world fault data from the Defects4J benchmark. The results suggest that SMCGP can be as accurate as normal GP, but keeping the training data hidden comes at the cost of roughly three orders of magnitude slower execution.
Jinhan Kim, Michael G. Epitropakis, Shin Yoo

Towards Minimizing the Impact of Changes Using Search-Based Approach

Abstract
Software maintenance is becoming more challenging with the increased complexity of the software and the frequently applied modifications. To manage this complexity, systems development is heading towards Model-driven engineering (MDE) and search-based software engineering (SBSE). Additionally, prior to applying a change to these complex systems, change impact analysis is usually performed in order to determine the scope of the change, its feasibility, and the time and resources required to implement it. The bigger the scope, the riskier the change is to the system. In this paper, we introduce a set of transformation rules for Extended Finite State Machine (EFSM) models of state-based systems. These transformation rules can be used as the basis for search-based model optimization in order to reduce the average impact of a potential change applied to an EFSM model. Assuming that Model-driven development is adopted for the implementation of a state-based system, reducing the change impact at the model level will lead to reducing the impact at the system level. An exploratory study is performed to measure the impact reduction for a given EFSM model when the transformation rules are applied by a search-based algorithm. The initial results show promising use of the transformation rules, which can reduce the initial average change impact of the model by more than 50%.
Bogdan Korel, Nada Almasri, Luay Tahat

Exploring Evolutionary Search Strategies to Improve Applications’ Energy Efficiency

Abstract
Energy consumption has become an important non-functional requirement for applications running on everything from battery-powered devices to data centers. Despite the increased interest in detecting and understanding what causes an application to be energy inefficient, few works focus on helping developers automatically make their applications more energy efficient based on their design and implementation decisions. This paper explores how search strategies based on genetic algorithms can help developers automatically find an energy-efficient version of an application based on transformations corresponding to developers’ high-level decisions (e.g., selecting API implementations). Our results show how different search strategies can help improve the energy efficiency of nine Java applications.
Irene Manotas, James Clause, Lori Pollock

Optimization Experiments in the Continuous Space: The Limited Growth Optimistic Optimization Algorithm

Abstract
Online controlled experiments are extensively used by web-facing companies to validate and optimize their systems, providing a competitive advantage in their business. As the number of experiments scales, companies aim to invest their experimentation resources in larger feature changes and leave smaller features to automated optimization techniques. Optimization experiments in the continuous space belong to the many-armed bandit class of problems. Although previous research provides algorithms for solving this class of problems, these algorithms have not been implemented in real-world online experimentation problems and do not consider application constraints, such as the time to compute a solution, the selection of a best arm, and the estimation of the mean-reward function. This work discusses online experiments in the context of the many-armed bandit class of problems and provides three main contributions: (1) a modification of an algorithm to include online experiment constraints, (2) an implementation of this algorithm in an industrial setting in collaboration with Sony Mobile, and (3) statistical evidence that supports the modification of the algorithm for online experiment scenarios. These contributions support the relevance of the LG-HOO algorithm in the context of optimization experiments and show how the algorithm can be used to support continuous optimization of online systems in stochastic scenarios.
David Issa Mattos, Erling Mårtensson, Jan Bosch, Helena Holmström Olsson

Incremental Control Dependency Frontier Exploration for Many-Criteria Test Case Generation

Abstract
Several criteria have been proposed over the years for measuring test suite adequacy. Each criterion can be converted into a specific objective function to optimize with search-based techniques in an attempt to generate test suites achieving the highest possible coverage for that criterion. Recent work has tried to optimize for multiple criteria at once by constructing a single objective function obtained as a weighted sum of the objective functions of the respective criteria. However, this solution suffers from the problem of sum scalarization: differences along the various dimensions being optimized get lost when those dimensions are projected onto a single value. Recent advances in SBST formulate coverage as a many-objective optimization problem rather than applying sum scalarization. Starting from this formulation, in this work we apply many-objective test generation that handles multiple adequacy criteria simultaneously. To scale the approach to the large number of objectives optimized at the same time, we adopt an incremental strategy in which only coverage targets on the control dependency frontier are considered until the frontier is expanded by covering a previously uncovered target.
Annibale Panichella, Fitsum Meshesha Kifetew, Paolo Tonella
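
A minimal sketch of the incremental frontier idea described in this entry, under assumed data structures: only targets on the control dependency frontier are exposed as active objectives, and covering a target expands the frontier to the targets that depend on it. The map-based representation and string target identifiers are illustrative, not the tool’s actual implementation.

```java
import java.util.*;

// Sketch of incremental control-dependency-frontier target selection.
// Graph structure and target type are placeholders.
public class FrontierSketch {

  // Maps each coverage target to the targets that are control-dependent on it.
  private final Map<String, List<String>> dependants;
  private final Set<String> covered = new HashSet<>();
  private final Set<String> frontier = new HashSet<>();

  FrontierSketch(Map<String, List<String>> dependants, Collection<String> roots) {
    this.dependants = dependants;
    this.frontier.addAll(roots);          // start from targets with no parents
  }

  // Objectives currently handed to the many-objective search.
  Set<String> activeTargets() {
    return Collections.unmodifiableSet(frontier);
  }

  // Called when the search covers a target: retire it and expose its dependants.
  void markCovered(String target) {
    if (covered.add(target)) {
      frontier.remove(target);
      frontier.addAll(dependants.getOrDefault(target, List.of()));
    }
  }
}
```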

Single-objective Versus Multi-objectivized Optimization for Evolutionary Crash Reproduction

Abstract
EvoCrash is a recent search-based approach to generate a test case that reproduces reported crashes. The search is guided by a fitness function that uses a weighted sum scalarization to combine three different heuristics: (i) code coverage, (ii) crash coverage and (iii) stack trace similarity. In this study, we propose and investigate two alternatives to the weighted sum scalarization: (i) the simple sum scalarization and (ii) multi-objectivization, which decomposes the fitness function into several optimization objectives in an attempt to increase test case diversity. We implemented the three alternative optimizations as an extension of EvoSuite, a popular search-based unit test generator, and applied them to 33 real-world crashes. Our results indicate that for complex crashes the weighted sum reduces the test case generation time compared to the simple sum, while for simpler crashes the effect is the opposite. Similarly, for complex crashes, multi-objectivization reduces test generation time compared to optimizing with the weighted sum; we also observe one crash that can be replicated only by multi-objectivization. Through our manual analysis, we found that when optimization of the original weighted function gets trapped in local optima, optimizing the decomposed objectives improves the search for crash reproduction. Overall, while multi-objectivization is under-explored, our results are promising and encourage further investigation of the approach.
Mozhan Soltani, Pouria Derakhshanfar, Annibale Panichella, Xavier Devroey, Andy Zaidman, Arie van Deursen
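
To illustrate the two formulations compared in this study, the sketch below contrasts a weighted-sum scalarization of the three heuristics with their decomposition into separate objectives. The heuristic names, normalisation, and weights are placeholders inferred from the abstract, not EvoCrash’s actual fitness code or EvoSuite integration.

```java
// Sketch contrasting weighted-sum scalarization with multi-objectivization.
// Heuristic names and weights are placeholders. Requires Java 16+ for records.
public class CrashFitnessSketch {

  // Heuristic scores for a candidate test, each assumed normalised to [0, 1],
  // where lower means closer to reproducing the reported crash.
  record Heuristics(double lineCoverageDistance,
                    double crashCoverageDistance,
                    double stackTraceSimilarityDistance) {}

  // (i) Weighted-sum scalarization: collapse the three heuristics into one value.
  static double weightedSum(Heuristics h, double w1, double w2, double w3) {
    return w1 * h.lineCoverageDistance()
         + w2 * h.crashCoverageDistance()
         + w3 * h.stackTraceSimilarityDistance();
  }

  // (ii) Multi-objectivization: keep the heuristics as separate objectives so a
  // many-objective algorithm can retain candidates that trade one off against
  // another, increasing diversity and helping escape local optima.
  static double[] decomposedObjectives(Heuristics h) {
    return new double[] {
        h.lineCoverageDistance(),
        h.crashCoverageDistance(),
        h.stackTraceSimilarityDistance()
    };
  }
}
```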

Hot off the Press Papers

Frontmatter

A New Approach for Search Space Reduction and Seeding by Analysis of the Clauses

Abstract
The search space of potential inputs of a program is very large, even for very small programs, and its size is a key factor affecting the performance of any search-based test data generation approach. However, despite the large volume of work on search-based test data generation, the literature contains little work that addresses this problem. In this paper, based on an analysis of the clauses of the program, we propose a new search space reduction strategy and introduce a new seeding approach.
Atieh Monemi Bidgoli, Hassan Haghighi

Learning Fault Localisation for both Humans and Machines Using Multi-objective GP

Abstract
Genetic Programming has been successfully applied to fault localisation to learn ranking models that place the faulty program element as near the top as possible. However, it is also known that, when localisation results are used by Automatic Program Repair (APR) techniques, higher rankings of faulty program elements do not necessarily result in better repair effectiveness. Since APR techniques tend to use localisation scores as weights for program mutation, low scores for non-faulty program elements are as important as high scores for faulty program elements. We formulate a multi-objective version of GP-based fault localisation to learn ranking models that not only aim to place the faulty program element higher in the ranking, but also aim to assign scores as low as possible to non-faulty program elements. The results show minor improvements in the suspiciousness score distribution. However, surprisingly, the multi-objective formulation also results in more accurate fault localisation ranking-wise, placing 155 out of 386 faulty methods at the top, compared to 135 placed at the top by the single-objective formulation.
Kabdo Choi, Jeongju Sohn, Shin Yoo

Mapping Class Dependencies for Fun and Profit

Abstract
Classes depend on other classes to perform certain tasks. By mapping these dependencies, we may be able to improve software quality. We have developed a prototype framework for generating optimized groupings of classes coupled to targets of interest. From a pilot study investigating the value of coupling information in test generation, we have seen that coupled classes generally have minimal impact on results. However, we found 23 cases where the inclusion of coupled classes improves test suite efficacy, with an average improvement of 120.26% in the likelihood of fault detection. Seven faults were detected only through the inclusion of coupled classes. These results offer lessons on how coupling information could improve automated test generation.
Allen Kanapala, Gregory Gay

Evolving Better Software Parameters

Abstract
Genetic improvement (GI) might be widely used to adapt existing numerical values within programs. Applying GI to embedded parameters in computer code can create new functionality. For example, CMA-ES can evolve 1024 real numbers in a GNU C library square root to implement a cube root routine for C.
William B. Langdon, Justyna Petke

On the Placebo Effect in Interactive SBSE: A Preliminary Study

Abstract
Search Based Software Engineering approaches have proven to be feasible and promising in tackling a number of software engineering problems. More recently, researchers have been considering the challenges and opportunities of involving users’ expertise in the resolution process, among other reasons, to deal with the mistrust or misunderstanding of fully automated optimisation approaches. This paper presents a preliminary study aimed at assessing users’ subjective perception when their preferences are considered in an Interactive SBSE approach. For the evaluation, we conducted a placebo-controlled study with 12 software engineering practitioners by simulating a Next Release Problem scenario. The results indicate that most (68%) of the gain achieved by the interactive approach can be attributed to the placebo effect, that is, strictly to the fact that the user felt part of the optimisation process. In addition, there was an important increase in confidence in the results, even in the placebo group.
Jerffeson Souza, Allysson Allex Araújo, Italo Yeltsin, Raphael Saraiva, Pamella Soares

EvoIsolator: Evolving Program Slices for Hardware Isolation Based Security

Abstract
To provide strong security support for today’s applications, microprocessor manufacturers have introduced hardware isolation, an on-chip mechanism that provides secure access to sensitive data. Currently, hardware isolation is still difficult for software developers to use because the process of identifying access points to sensitive data is error-prone and can lead to under- and over-protection of sensitive data. Under-protection can lead to security vulnerabilities. Over-protection can lead to an increased attack surface and excessive communication overhead. In this paper we describe EvoIsolator, a search-based framework to (i) automatically generate executable minimal slices that include all access points to a set of specified sensitive data; and (ii) automatically optimize (for small code block size and low communication overhead) the code modules for hardware isolation. We demonstrate, through a small feasibility study, the potential impact of our proposed code optimizer.
Mengmei Ye, Myra B. Cohen, Witawas Srisa-an, Sheng Wei

Challenge Paper

Frontmatter

Detecting Real Faults in the Gson Library Through Search-Based Unit Test Generation

Abstract
An important benchmark for test generation tools is their ability to detect real faults. We have identified 16 real faults in Gson—a Java library for manipulating JSON data—and added them to the Defects4J fault database. Tests generated using the EvoSuite framework are able to detect seven faults. Analysis of the remaining faults offers lessons in how to improve generation. We offer these faults to the community to assist future research.
Gregory Gay

Backmatter
