
2019 | Book

Software Technologies

13th International Conference, ICSOFT 2018, Porto, Portugal, July 26-28, 2018, Revised Selected Papers


About this Book

This book constitutes the thoroughly refereed post-conference proceedings of the 13th International Joint Conference on Software Technologies, ICSOFT 2018, held in Porto, Portugal, in July 2018.

The 18 revised full papers were carefully reviewed and selected from 117 submissions. The topics covered include business process modelling, IT service management, interoperability and service-oriented architecture, project management software, scheduling and estimating, software metrics, requirements elicitation and specification, and software and systems integration, among others.

Table of Contents

Frontmatter

Software Engineering and Systems Development

Frontmatter
Using Semantic Metrics to Predict Mutation Equivalence
Abstract
Equivalent mutants are a major nuisance in mutation testing because they introduce a significant amount of bias. But weeding them out is difficult because it requires a detailed analysis of the source code of the base program and the mutant. In this paper we argue that for most applications, it is not necessary to identify equivalent mutants individually; rather, it suffices to estimate their number. We then explore how this number can be estimated by a cursory, automatable analysis of the base program and the mutant generation policy.
Amani Ayad, Imen Marsit, Nazih Mohamed Omri, JiMeng Loh, Ali Mili
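The estimation idea above can be illustrated with a deliberately simple sketch (not the authors' semantic-metrics technique): mutants that survive an entire test suite are candidates for equivalence, giving a rough upper bound on their number. All programs, mutants, and tests below are invented toy examples.

```python
def run_tests(program, tests):
    """Return True if `program` passes every test (each test is a callable)."""
    return all(test(program) for test in tests)

def estimate_equivalent_mutants(mutants, tests):
    """Upper-bound estimate: mutants that survive all tests are candidates
    for equivalence (they may also simply be hard to kill)."""
    survivors = [m for m in mutants if run_tests(m, tests)]
    return len(survivors)

# Toy example: the base program doubles its input.
base = lambda x: 2 * x
mutants = [
    lambda x: x + x,      # semantically equivalent to `base`
    lambda x: 2 * x + 1,  # killable mutant (fails for x = 0)
    lambda x: x * 2,      # semantically equivalent to `base`
]
tests = [lambda p, i=i: p(i) == base(i) for i in range(10)]
print(estimate_equivalent_mutants(mutants, tests))  # → 2
```

In practice the survivors also include stubborn non-equivalent mutants, which is why the paper argues for statistical estimation rather than per-mutant analysis.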
A Rating Tool for the Automated Selection of Software Refactorings that Remove Antipatterns to Improve Performance and Stability
Abstract
Antipatterns are known to be bad solutions to recurring design problems. Detecting and removing antipatterns has proven to be a useful means of improving software quality. While several approaches detect antipatterns automatically, existing work often does not resolve the detected design problems automatically. Although some refactorings have the potential to significantly increase the quality of a program, it is hard to decide which refactorings effectively yield improvements with respect to performance and stability. In this paper, we present a rating tool that combines static antipattern detection with software profiling to automatically select refactorings that remove antipatterns and are promising candidates for improving performance and stability. Our key idea is to extend a previously proposed heuristic, which uses software properties determined by both static and dynamic analyses to compile a list of concrete refactorings sorted by their assessed potential to improve performance, with an approach to identify refactorings that may improve stability. We do not impose an order on the refactorings that may improve stability. We demonstrate the practical applicability of our overall approach with experimental results.
Nikolai Moesus, Matthias Scholze, Sebastian Schlesinger, Paula Herber
Model-Based On-the-Fly Testing of Web Applications and Multilingual Websites
Abstract
This paper examines techniques for the model-based testing of web applications and multilingual websites. For this purpose, the simple web game application GuessNumbers is used to explain the essential steps of a model-based test process that applies statistical usage models to generate appropriate test suites. We also discuss methods for performing on-the-fly testing by means of an executable usage model. Model-based techniques that provide graphical representations of usage models make it easy to set the test focus on specific regions of the system under test. In addition, adapted profiles support the selective generation of test suites. We also show how generic usage models that are adapted to specific environments during test execution enable multilingual websites to be tested. The TestPlayer tool chain makes this model-based testing approach easy to apply.
Winfried Dulz
On the Impact of Order Information in API Usage Patterns
Abstract
Many approaches have been proposed for learning Application Programming Interface (API) usage patterns from code repositories. Depending on the underlying technique, the mined patterns may (1) be strictly sequential, (2) consider partial order between method calls, or (3) not consider order information. Understanding the trade-offs between these pattern types with respect to real code is important in many applications (e.g. misuse detection), given that APIs often have usage constraints, such as restrictions on call order. API misuses, i.e., violations of these constraints, may lead to software crashes, bugs and vulnerabilities.
In this paper, we present the results of work that addresses this need. We have constructed a benchmark based on an episode mining algorithm that can be configured to learn three types of patterns: sequential, partial-order, and no-order patterns. We use the benchmark in two ways. First, we use it to empirically study the different types of mined API usage patterns based on three well-defined metrics: expressiveness, consistency, and generalizability. Second, we evaluate the effect of the different pattern types within the real application context of using them as input to a misuse detector. We run the benchmark on two existing datasets: (1) 360 C# code repositories, and (2) four Java projects. We use the C# dataset to empirically study the resulting API usage patterns, and the Java dataset to evaluate the effect of different pattern types on the application context of misuse detection. For this purpose, we build EMDetect for detecting API misuses in Java projects.
Our results show practical evidence that not only do partial-order patterns represent a generalized superset of sequential-order patterns, but partial-order mining also finds additional patterns missed by sequence mining, which are used by a larger number of developers across code repositories. Additionally, our study empirically quantifies the importance of the order information encoded in sequential and partial-order patterns for representing correct co-occurrences of code elements in real code. In the application context of misuse detection, our results show that sequential-order patterns perform better in terms of precision by ranking true positives higher among the top findings, while partial-order patterns perform better in terms of recall by finding more misuses in the source code. Last but not least, our benchmark can be used by other researchers to explore additional properties of API patterns and to build other applications based on API usage patterns.
Ervina Çergani, Mira Mezini
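The three pattern types compared above differ only in how much call-order information they retain. A minimal illustrative sketch (not the authors' episode mining algorithm or EMDetect; the API call names are invented) of matching one usage against each pattern type:

```python
def matches_no_order(calls, pattern):
    """No-order pattern: all pattern calls occur, in any order."""
    return set(pattern) <= set(calls)

def matches_sequential(calls, pattern):
    """Sequential pattern: the pattern is a subsequence of the calls."""
    it = iter(calls)
    return all(p in it for p in pattern)  # `in` consumes the iterator

def matches_partial_order(calls, must_precede):
    """Partial-order pattern: a set of (before, after) constraints.
    Assumes each call appears at most once (illustrative only)."""
    index = {c: i for i, c in enumerate(calls)}
    return all(a in index and b in index and index[a] < index[b]
               for a, b in must_precede)

usage = ["open", "lock", "write", "unlock", "close"]
print(matches_no_order(usage, ["close", "open"]))             # True
print(matches_sequential(usage, ["open", "write", "close"]))  # True
print(matches_partial_order(usage, [("lock", "unlock"),
                                    ("open", "close")]))      # True
print(matches_sequential(usage, ["close", "open"]))           # False
```

The last two lines show the trade-off the paper studies: the no-order check accepts a reversed pair that the sequential check rejects, while partial-order constraints sit in between.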
A Practical Approach for Constraint Solving in Model Transformations
Abstract
In model transformation scenarios, expressing a Constraint Satisfaction Problem (CSP) is a complex and error-prone activity. Indeed, transformation techniques do not provide fully integrated support for solving constraints, and external solvers are not well adapted. This chapter presents a practical approach to constraint solving in model transformations. The basic principle is to consider a pattern matching problem as a high-level specification of a CSP. In addition, a transformation infrastructure that underpins the conceptual proposal can be generated in a semi-automatic manner. This infrastructure provides support for pattern specification, match model search, and transformation into valid target models. An application case extracted from the Escape It! serious game has been selected to illustrate our contribution.
Youness Laghouaouta, Pierre Laforcade
An Integrated Requirements Engineering Framework for Agile Software Product Lines
Abstract
Requirements engineering (RE) techniques play a determinant role in Agile Product Line development methods; notably, they help establish whether adopting the product line approach is relevant for the production of software-intensive systems. This paper proposes an integrated goal- and feature-based meta-model for agile software product line development. The main objective is to permit the specification of requirements that precisely capture stakeholders' needs and intentions, as well as the management of product line variabilities. Adopting practices from requirements engineering, especially goal and feature models, helps in designing the domain and application engineering tiers of an agile product line. Such an approach allows a holistic perspective integrating human, organizational, and agile aspects to better understand the dynamic business environments of product lines. It helps bridge the gap between product line structures and requirements models, and proposes an integrated framework for all actors involved in the product line architecture. In this paper we show how our proposed meta-model can be applied to the requirements engineering stage of agile product line development, mainly for feature-oriented agile product lines such as our own methodology, AgiFPL.
Hassan Haidar, Manuel Kolp, Yves Wautelet
Systematic Refinement of Softgoals Using a Combination of KAOS Goal Models and Problem Diagrams
Abstract
Softgoals are goals that do not have a clear-cut criterion for their satisfaction (in contrast to so-called hardgoals). They are considered to be satisfied when there is sufficient positive and little negative evidence for this claim. Thus, they are expected to be satisfied within acceptable limits rather than absolutely. Examples of such softgoals are quality attributes such as safety, security, and trustworthiness. In a previous paper, we showed how the systematic refinement of goals can be supported by combining KAOS goal models and problem diagrams that are created based on the Six-Variable Model. There, we mainly focused on hardgoals. In this paper, we show how the systematic refinement of softgoals can be supported. We mainly focus on security as a softgoal and show how it can be refined in a systematic way. However, our method can be used in the same way to systematically decompose other softgoals as well. The benefit of our method is that it results not only in detailed security requirements but also helps in making explicit the expectations to be satisfied, e.g., by sensors, actuators, other systems, and users.
Nelufar Ulfat-Bunyadi, Nazila Gol Mohammadi, Roman Wirtz, Maritta Heisel
Simplifying the Classification of App Reviews Using Only Lexical Features
Abstract
User reviews submitted to app marketplaces contain information that falls into different categories, e.g., feature evaluation, feature request, and bug report. This information is valuable for developers seeking to improve the quality of mobile applications. However, due to the large volume of reviews received every day, manual classification of user reviews into these categories is not feasible. Therefore, developing automatic classification methods using machine learning approaches is desirable. In this study, we address the problem of automatically classifying app review sentences (as opposed to full reviews) into different categories. We compare the simplest textual machine learning classifier using only lexical features – the so-called Bag-of-Words (BoW) approach – with more complex models used in previous work adopting rich linguistic features. We find that the performance of the simple BoW model is very competitive and has the advantage of not requiring any external linguistic tools to extract the features. Moreover, we experiment with deep-learning-based Convolutional Neural Network (CNN) models that have recently achieved state-of-the-art results in many classification tasks. We find that, on average, the CNN models do not perform significantly better than the simple BoW model. Finally, our manual analysis of misclassification errors and data annotations suggests that review sentences in isolation do not always contain enough information to make a correct prediction. Thus, we suggest that adopting neural models that incorporate additional contextual knowledge might improve classification performance.
Faiz Ali Shah, Kairit Sirts, Dietmar Pfahl
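As a rough illustration of the BoW approach discussed above, the following sketch implements a tiny multinomial Naive Bayes classifier over word counts with add-one smoothing. It is not the authors' experimental setup; the review sentences and category labels are invented.

```python
import math
from collections import Counter, defaultdict

def tokenize(sentence):
    return sentence.lower().split()

class BowNaiveBayes:
    """Multinomial Naive Bayes over Bag-of-Words counts (add-one smoothing)."""
    def fit(self, sentences, labels):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter(labels)
        for s, y in zip(sentences, labels):
            self.word_counts[y].update(tokenize(s))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, sentence):
        def log_prob(y):
            total = sum(self.word_counts[y].values())
            lp = math.log(self.label_counts[y] / sum(self.label_counts.values()))
            for w in tokenize(sentence):
                lp += math.log((self.word_counts[y][w] + 1) /
                               (total + len(self.vocab)))
            return lp
        return max(self.label_counts, key=log_prob)

train = [
    ("the app crashes when i open the camera", "bug report"),
    ("crashes on startup every time", "bug report"),
    ("please add a dark mode option", "feature request"),
    ("would love an option to export data", "feature request"),
]
clf = BowNaiveBayes().fit([s for s, _ in train], [y for _, y in train])
print(clf.predict("app crashes after the update"))  # → bug report
```

Note that the features are purely lexical: no part-of-speech tags, parse trees, or other linguistic tools are needed, which is the practical advantage the paper highlights.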
Smart Measurements and Analysis for Software Quality Enhancement
Abstract
Requests to improve the quality of software are increasing due to competition in the software industry and the complexity of software development integrating multiple technology domains (e.g., IoT, Big Data, Cloud, Artificial Intelligence, Security Technologies). Measurement collection and analysis is a key activity for assessing software quality during the development life-cycle. To optimize this activity, our main idea is to periodically select relevant measures to be executed (among a set of possible measures) and to automate their analysis using a dedicated tool. The proposed solution is integrated into a PaaS platform called MEASURE. The tools supporting this activity are the Software Metric Suggester, which recommends metrics of interest based on artificial intelligence and several software development constraints, and the MINT tool, which correlates collected measurements and provides near real-time recommendations to software development stakeholders (i.e., the DevOps team, project manager, human resources manager, etc.) to improve the quality of the development process. To illustrate the efficiency of both tools, we created different scenarios to which both approaches are applied. Results show that the tools are complementary and can be used to improve the software development process and thus the final software quality.
Sarah Dahab, Stephane Maag, Wissam Mallouli, Ana Cavalli
Modular Programming and Reasoning for Living with Uncertainty
Abstract
Embracing uncertainty in software development is one of the crucial research topics in software engineering. In most projects, uncertain concerns are dealt with informally, through documents, mailing lists, or issue tracking systems. This task is tedious and error-prone. In particular, uncertainty in programming is a challenging issue to tackle, because it is difficult to verify the correctness of a program in the presence of uncertain user requirements, unfixed design choices, and alternative algorithms. If uncertainty can be dealt with modularly, we can add or delete uncertain concerns to/from code whenever they arise or are fixed into certain concerns. This paper proposes a new programming and reasoning style based on Modularity for Uncertainty. The iArch-U IDE (Integrated Development Environment) was developed to support uncertainty-aware software development. The combined usage of a type checker and a model checker in iArch-U plays an important role in verifying whether important properties are guaranteed even if uncertainty remains in a program. Our model checker is based on LTSA (Labelled Transition System Analyzer) and is implemented as an Eclipse plug-in. Agile methods embrace change to accommodate changeable user requirements; our approach, in turn, embraces uncertainty to support exploratory software development.
Naoyasu Ubayashi, Yasutaka Kamei, Ryosuke Sato

Software Systems and Applications

Frontmatter
Empowering Continuous Delivery in Software Development: The DevOps Strategy
Abstract
Continuous Delivery refers to a software development practice in which members of a team frequently integrate their work, so that delivery can be conducted easily. However, continuous integration and delivery require reliable collaboration between development and IT operations teams. DevOps practices support this collaboration by enabling operations staff to use the same infrastructure as developers for their systems work. Our study presents a practical DevOps implementation and analyzes how the process of software delivery and infrastructure changes was automated. Our approach follows the principles of infrastructure as code, where a configuration platform, PowerShell DSC, was used to automatically define reliable environments for continuous software delivery. In this context, we defined the concept of a "stage for dev", also using Docker technology, which involves all the elements that enable members of a team to have the same production environment locally configured on their personal machines, thus empowering the continuous integration and delivery of system releases.
Clauirton Siebra, Rosberg Lacerda, Italo Cerqueira, Jonysberg P. Quintino, Fabiana Florentin, Fabio B. Q. da Silva, Andre L. M. Santos
Can Commit Change History Reveal Potential Fault Prone Classes? A Study on GitHub Repositories
Abstract
Various studies have successfully utilized graph theory analysis to gain a high-level abstract view of software systems, such as constructing call graphs to visualize the dependencies among software components. The level of granularity and information shown by the graph usually depends on the input, such as variable, method, class, package, or a combination of multiple levels. However, very few studies have investigated how software evolution and change history can be used as a basis to model a software-based complex network. It is commonly understood that stable and well-designed source code receives fewer updates throughout the software development life-cycle; badly designed code tends to get updated due to broken dependencies, high coupling, or dependencies on other classes. This paper puts forward an approach to model a commit change-based weighted complex network from historical software change and evolution data captured from GitHub repositories, with the aim of identifying potential fault-prone classes. Four well-established graph centrality metrics were used as proxy metrics to discover fault-prone classes. Experiments on ten open-source projects showed that when all centrality metrics are used together, the approach yields reasonably good precision when compared against the ground truth.
Chun Yong Chong, Sai Peck Lee
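The commit change-based weighted network described above can be sketched in a few lines: each commit contributes co-change edges between the classes it touches, and a simple weighted degree centrality (just one of the several centrality metrics one might use; the commits and file names here are invented) then ranks candidate fault-prone classes.

```python
from collections import defaultdict
from itertools import combinations

def co_change_graph(commits):
    """Edge weight = number of commits in which two classes changed together."""
    weights = defaultdict(int)
    for changed in commits:
        for a, b in combinations(sorted(set(changed)), 2):
            weights[(a, b)] += 1
    return weights

def weighted_degree(weights):
    """Weighted degree centrality: sum of incident edge weights per node."""
    degree = defaultdict(int)
    for (a, b), w in weights.items():
        degree[a] += w
        degree[b] += w
    return degree

commits = [
    ["Order.java", "Invoice.java"],
    ["Order.java", "Invoice.java", "Cart.java"],
    ["Order.java", "Cart.java"],
    ["Util.java"],  # singleton commit contributes no edges
]
ranking = sorted(weighted_degree(co_change_graph(commits)).items(),
                 key=lambda kv: -kv[1])
print(ranking[0][0])  # → Order.java (highest co-change degree)
```

Classes that repeatedly change together with many others rise to the top, matching the intuition that badly designed, highly coupled code is updated most often.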
An Agent-Based Planning Method for Distributed Task Allocation
Abstract
In multi-agent systems, agents should socially cooperate with their neighboring agents to solve the task allocation problem in open and dynamic network environments. This paper proposes an agent-based architecture to handle different tasks; in particular, we focus on planning and distributed task allocation. In the proposed approach, each agent uses fuzzy logic to select among alternative plans. We also propose an efficient task allocation algorithm that takes agent architectures into consideration and allows both neighboring agents and indirectly related agents in the system to help perform a task. We illustrate our line of thought with a Benchmark Production System used as a running example to better explain our contribution. A set of experiments demonstrates the efficiency of our planning approach and the performance of our distributed task allocation method.
Dhouha Ben Noureddine, Atef Gharbi, Samir Ben Ahmed
Automatic Test Data Generation for a Given Set of Applications Using Recurrent Neural Networks
Abstract
To address the problem of automatic software testing against vulnerabilities, our work focuses on creating a tool capable of assisting users in generating automatic test sets for multiple programs under test at the same time. Starting with an initial set of inputs in a corpus folder, the tool works by clustering the inputs depending on their target application type, then produces a generative model for each of these clusters. The models belong to the recurrent neural network architecture class, and the TensorFlow framework is used for training and inference. The tool supports online learning, so models improve as new inputs for each application cluster are added to the corpus folder. Users can interact with the tool through an interface similar to that of expert systems: they can customize various parameters exposed per cluster, or override various function hooks for learning and inference, to gain finer control over the tool's backend. As the evaluation section shows, the tool is useful for creating substantial sets of new inputs, with good code coverage quality and lower resource consumption.
Ciprian Paduraru, Marius-Constantin Melemciuc, Miruna Paduraru
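To illustrate the idea of learning a generative model from a corpus of test inputs, the following sketch uses a character-level Markov chain as a much simpler stand-in for the paper's recurrent neural networks; the corpus samples are invented and the approach is only an approximation of what an RNN learns.

```python
import random
from collections import defaultdict

def train_markov(corpus, order=2):
    """Character-level Markov model: maps each context of `order` chars
    to the list of characters that followed it in the corpus."""
    model = defaultdict(list)
    for sample in corpus:
        padded = "^" * order + sample + "$"  # ^ = start pad, $ = end marker
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, max_len=50, rng=random):
    """Sample a new input by walking the learned transitions."""
    ctx, out = "^" * order, []
    while len(out) < max_len:
        nxt = rng.choice(model[ctx])
        if nxt == "$":
            break
        out.append(nxt)
        ctx = ctx[1:] + nxt
    return "".join(out)

corpus = ['{"id": 1}', '{"id": 42}', '{"name": "a"}']
model = train_markov(corpus)
random.seed(0)
print(generate(model))  # a JSON-like test input resembling the corpus
```

Like the tool's per-cluster models, the generator can be retrained whenever new inputs land in the corpus, and the generated strings stay structurally close to the inputs seen so far.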
Guiding the Functional Change Decisions in Agile Project: An Empirical Evaluation
Abstract
Agile methods are increasingly used in the software industry as a response to the challenges of managing frequent changes during the software life-cycle. However, a significant number of agile projects yield unsatisfactory results and end in failure, mainly due to the lack of a structured change control process. A well-defined change control process gives the software industry a significant competitive advantage. This paper describes an evaluation of functional changes affecting either an ongoing sprint or an implemented sprint. This evaluation can greatly assist development teams in making appropriate decisions. We quantitatively and qualitatively evaluate 15 software development projects using the agile (Scrum) method. We also investigate the use of the COSMIC Functional Size Measurement method for rapid quantification and evaluation of change requests.
Asma Sellami, Mariem Haoues, Nour Borchani, Nadia Bouassida
Wise Objects for IoT (WIoT): Software Framework and Experimentation
Abstract
Despite their expansion, Internet of Things (IoT) technologies remain young and require software technologies to ensure information management in order to deliver sophisticated services to their users. Users of IoT technologies particularly need systems that adapt to their use, not the reverse. To meet those requirements, we enriched our object-oriented framework WOF (Wise Object Framework) with a communication structure to interconnect WOs (Wise Objects) and the IoT. Things from the IoT are then able to learn, monitor, and analyze data in order to adapt their behavior. In this paper, we recall the underlying concepts of our framework and then focus on the interconnection between WOs and the IoT. This is enabled by a software bus-based architecture and IoT-related communication protocols. We designed a dedicated communication protocol for IoT objects. We show how IoT objects can benefit from the learning, monitoring, and analysis mechanisms provided by WOF to identify the usual behavior of a system and to detect unusual behavior. We illustrate our approach through two case studies in home automation. The first shows how a wise smart presence sensor learns about classroom occupation. The second shows how a wise system helps us see correlations among several WOs.
Ilham Alloui, Eric Benoit, Stéphane Perrin, Flavien Vernier
A Software Product Line Approach to Design Secure Connectors in Component-Based Software Architectures
Abstract
This paper describes a software product line approach to design secure connectors in distributed component-based software architectures. The variability of secure connectors is modelled by means of a feature model, which consists of security pattern and communication pattern features. Applying separation of concerns, each secure connector is designed as a composite component that encapsulates both security pattern and communication pattern components. Integration of these components within a secure connector is enabled by a security coordinator, the high-level template of which is customized based on the selected security pattern features.
Michael Shin, Hassan Gomaa, Don Pathirage
Towards an Automatic Verification of BPMN Model Semantic Preservation During a Refinement Process
Abstract
In this paper, we present a refinement approach for business processes specified with the Business Process Modeling Notation (BPMN). The business process or workflow refinement approach is a step-wise modeling approach composed of a set of abstraction levels, where each refinement step corresponds to an abstraction level of a BPMN model. For each refined workflow model, we automatically analyze the workflow change impact using the NuSMV model checker. The change impact concerns the semantic preservation of workflow models during the refinement process, in particular workflow data and control-flow dependencies. To realize this analysis, at each refinement level we transform the BPMN model into a Kripke structure, thus formalizing the semantics of the refined business process model.
Yousra Bendaly Hlaoui, Salma Ayari, Leila Jemni Ben Ayed
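The transformation-and-check idea above can be illustrated with a deliberately small sketch: a workflow encoded as a state-transition graph (a stripped-down Kripke-like structure without atomic-proposition labelling) and a reachability check standing in for the NuSMV verification. The workflow states and the refinement example are invented.

```python
def reachable(transitions, start, goal):
    """Search the workflow's state graph: can `goal` be reached from `start`?"""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        if state == goal:
            return True
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# An abstract BPMN-like model and one of its refinements (toy example).
abstract = {"start": ["approve"], "approve": ["end"]}
refined = {"start": ["check"], "check": ["approve", "reject"],
           "approve": ["end"], "reject": ["end"]}

# Preservation check: a behavior required at the abstract level
# (completion is reachable) must survive refinement.
assert reachable(abstract, "start", "end")
assert reachable(refined, "start", "end")
print("refinement preserves reachability of end")
```

A real model checker would verify richer temporal-logic properties over the Kripke structure (e.g., that every path through "check" eventually reaches "end"), but the principle is the same: re-verify the property after each refinement step.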
Backmatter
Metadata
Title
Software Technologies
edited by
Dr. Marten van Sinderen
Prof. Leszek A. Maciaszek
Copyright Year
2019
Electronic ISBN
978-3-030-29157-0
Print ISBN
978-3-030-29156-3
DOI
https://doi.org/10.1007/978-3-030-29157-0
