Performance evaluation of component-based software systems: A survey
Introduction
During the last ten years, researchers have proposed many approaches for evaluating the performance (i.e., response time, throughput, resource utilisation) of component-based software systems. These approaches deal with both performance prediction and performance measurement. The former analyse the expected performance of a component-based software design to avoid performance problems in the system implementation, which can lead to substantial costs for re-designing the component-based software architecture. The latter analyse the observable performance of implemented and running component-based systems to understand their performance properties, determine their maximum capacity, identify performance-critical components, and remove performance bottlenecks.
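The three metrics named above are linked by the standard operational laws of performance analysis (general background, not specific to this survey): throughput is completions per unit time, and utilisation equals throughput times per-request service demand. A minimal sketch with hypothetical measurement values:

```java
// Sketch of the operational laws linking throughput, utilisation and
// service demand; the measurement values below are hypothetical.
class OperationalLaws {
    // Utilisation Law: U = X * S (throughput times per-request service demand)
    static double utilisation(double throughput, double serviceDemand) {
        return throughput * serviceDemand;
    }

    public static void main(String[] args) {
        double completions = 1200.0; // requests completed in the window
        double window = 60.0;        // observation window in seconds
        double busyTime = 45.0;      // seconds the resource was busy

        double throughput = completions / window;      // X = C / T = 20 req/s
        double serviceDemand = busyTime / completions; // S = B / C = 0.0375 s
        System.out.println(utilisation(throughput, serviceDemand)); // U = 0.75
    }
}
```

Both prediction and measurement approaches ultimately reason about such quantities; the difference is whether they are derived from a design model or observed on a running system.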
Component-based software engineering (CBSE) is the successor of object-oriented software development [1], [2] and has been supported by commercial component frameworks such as Microsoft’s COM, Sun’s EJB or CORBA CCM. Software components are units of composition with explicitly defined provided and required interfaces [1]. Software architects compose software components based on specifications from component developers. While classical performance models such as queueing networks [3], stochastic Petri nets [4], or stochastic process algebras [5] can be used to model and analyse component-based systems, specialised component performance models are required to support the component-based software development process and to exploit the benefits of the component paradigm, such as reuse and division of work. The challenge for component performance models is that the performance of a software component in a running system depends on the context it is deployed into and its usage profile, which is usually unknown to the component developer creating the model of an individual component.
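The notion of explicitly defined provided and required interfaces [1] can be illustrated with a minimal sketch (all names are hypothetical, not taken from the survey):

```java
// Hypothetical component sketch: the provided interface is what the
// component offers; the required interface is what it expects its
// environment to supply at composition time.
interface ReportService {          // provided interface
    String generateReport(int id);
}

interface DataSource {             // required interface
    String fetchData(int id);
}

class ReportComponent implements ReportService {
    private final DataSource dataSource; // bound by the software architect

    ReportComponent(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public String generateReport(int id) {
        return "Report: " + dataSource.fetchData(id);
    }
}
```

Because the component names only its required interface, the same ReportComponent can be reused in any context that supplies a DataSource; the performance of generateReport then depends on which DataSource is bound and how often it is called, which is exactly the deployment-context and usage-profile dependency described above.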
The goal of this paper is to classify the performance prediction and measurement approaches for component-based software systems proposed during the last ten years. Unlike the tuning guides for commercial component frameworks currently used in practice (e.g., [6], [7], [8]), these approaches introduce specialised modelling languages for the performance of software components and aim at understanding the performance of a designed architecture rather than at code-centric performance fixes. In the future, software architects will be able to predict the performance of application designs based on the component performance specifications provided by component developers. This approach is more cost-effective than fixing late life-cycle performance problems [9].
Although many approaches have been proposed, they use different component notions (e.g., EJB, mathematical functions, etc.), aim at different life-cycle stages (e.g., design, maintenance, etc.), target different technical domains (e.g., embedded systems, distributed systems, etc.), and offer different degrees of tool support (e.g., textual modelling, graphical modelling, automated performance simulation). None of the approaches has gained widespread industrial use, due to still immature component performance models, limited tool support, and a lack of large-scale case study reports. Many software companies still rely on personal experience and hands-on approaches to deal with performance problems instead of using engineering methods [10]. Therefore, this paper presents a survey and critical evaluation of the proposed approaches to help select an appropriate approach for a given scenario.
Our survey is more detailed and up-to-date than existing survey papers and has a special focus on component-based performance evaluation methods. Balsamo et al. [11] reviewed model-based performance prediction methods for general systems, but did not analyse the special requirements for component-based systems. Putrycz et al. [12] analysed different techniques for performance evaluation of COTS systems, such as layered queueing, curve-fitting, or simulation, but did not give an assessment of component performance modelling languages. Becker et al. [13] provided an overview of component-based performance modelling and measurement methods and coarsely evaluated them for different properties. Woodside et al. [10] designed a roadmap for future research in the domain of software performance engineering and recommended exploiting techniques from Model-Driven Development [14] for performance evaluation of component-based systems.
The contributions of this paper are (i) a classification scheme for performance evaluation methods based on the expressiveness of their component performance modelling language, (ii) a critical evaluation of existing approaches for their benefits and drawbacks, and (iii) an analysis of future research directions. We exclude qualitative architectural evaluation methods, such as ATAM [15] and SAAM [16], from our survey, because they do not provide quantitative measures. We also exclude approaches without tool support or case studies (e.g., [17], [18]). Furthermore, we exclude performance modelling and measurement approaches in which monolithic modelling techniques have been used to assess the performance of component-based systems (e.g., [19], [20]).
This paper is structured as follows. Section 2 lays the foundations and elaborates on the special characteristics of performance evaluation methods for component-based systems. Section 3 summarises the most important methods in this area. Section 4 discusses general features of the approaches and classifies and evaluates them. Section 5 points out directions for future research. Section 6 concludes the paper.
Section snippets
Software component performance
This section introduces the most important terms and notions to understand the challenge of performance evaluation methods for component-based systems. Section 2.1 clarifies the notion of a software component, before Section 2.2 lists the different influence factors on the performance of a software component. Section 2.3 distinguishes between several life-cycle stages of a software component and breaks down which influence factors are known at which life-cycle stages. Based on this, Section 2.4
Performance evaluation methods
In the following, we provide short summaries of the performance evaluation methods for component-based software systems in the scope of this survey. We have grouped the approaches as depicted in Fig. 5.
Main approaches provide full performance evaluation processes, while supplemental approaches focus on specific aspects, such as measuring individual components or modelling the performance properties of component connectors (i.e., artefacts binding components). The groups have been chosen based on
Evaluation
This section first discusses the general features of the surveyed main approaches (Section 4.1.1), before classifying them according to the expressiveness of their modelling languages (Section 4.1.2). Sections 4.2.1 (prediction approaches based on UML), 4.2.2 (prediction methods based on proprietary meta-models), 4.2.3 (prediction methods with a focus on middleware), 4.2.4 (formal specification methods), and 4.2.5 (measurement methods for system implementations) additionally analyse the benefits and
Future directions
This survey has revealed many open issues and recommendations for future work in performance evaluation of component-based software systems:
Model Expressiveness: As shown by the survey and discussed in the previous section, there is limited consensus on a performance modelling language for component-based systems. Component performance can be modelled on different abstraction levels. The question is which details to include in the performance models because of their impact on timing and which
Conclusions
We have surveyed the state-of-the-art in the research of performance evaluation methods for component-based software systems. The survey classified the approaches according to the expressiveness of their performance modelling language and critically evaluated the benefits and drawbacks.
The area of performance evaluations for component-based software engineering has significantly matured over the last decade. Several features have been adopted as good engineering practice and should influence
Acknowledgements
The author would like to thank the members of the SDQ research group at the University of Karlsruhe, especially Anne Martens, Klaus Krogmann, Michael Kuperberg, Steffen Becker, Jens Happe, and Ralf Reussner for their valuable review comments. This work is supported by the EU-project Q-ImPrESS (grant FP7-215013).
References (102)
- Process algebra for performance evaluation. Theoretical Computer Science (2002).
- Filling the gap between design and performance/reliability models of component-based systems: A model-driven approach. Journal of Systems and Software (2007).
- Performance prediction of component-based applications. Journal of Systems and Software (2005).
- Component Software: Beyond Object-Oriented Programming (2002).
- Quantitative System Performance (1984).
- Stochastic Petri Nets—An Introduction to the Theory (2002).
- Performance Testing Microsoft .NET Web Applications (2002).
- J2EE Performance Testing (2003).
- Improving .NET Application Performance and Scalability (Patterns & Practices) (2004).
- The future of software performance engineering.
- Model-based performance prediction in software development: A survey. IEEE Transactions on Software Engineering.
- Performance techniques for COTS systems. IEEE Software.
- Performance prediction of component-based systems: A survey from an engineering perspective.
- Model-Driven Software Development: Technology, Engineering, Management.
- SAAM: A method for analyzing the properties of software architectures.
- Compositional generation of software architecture performance QN models.
- Performance engineering of component-based distributed software systems.
- Performance modeling and evaluation of distributed component-based systems using queueing Petri nets. IEEE Transactions on Software Engineering.
- Mass-Produced Software Components. Software Engineering Concepts and Techniques (NATO Science Committee).
- Software component models. IEEE Transactions on Software Engineering.
- The impact of software component adaptation on quality of service properties. L'objet.
- UML Components: A Simple Process for Specifying Component-based Software Systems.
- Reasoning on software architectures with contractually specified components.
- Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software.
- Towards automatic construction of reusable prediction models for component-based performance engineering.
- UML-based performance modeling framework for component-based distributed systems.
- CB-SPE tool: Putting component-based performance engineering into practice.
- Efficient performance models in component-based software engineering.
- Performance modeling from software components.
- The method of layers. IEEE Transactions on Software Engineering.
- Performance modeling and prediction of Enterprise JavaBeans with layered queuing network templates. ACM SIGSOFT Software Engineering Notes.
- Packaging predictable assembly.
- Model-driven performance analysis.
- The COMQUAD component model: Enabling dynamic selection of implementations by weaving non-functional aspects.
- From design to analysis models: A kernel language for performance and reliability analysis of component-based systems.
Cited by (256)
- Architectural support for software performance in continuous software engineering: A systematic mapping study. Journal of Systems and Software (2024).
- Probabilistic program performance analysis with confidence intervals. Information and Software Technology (2023).
- Mutation-based analysis of queueing network performance models. Journal of Systems and Software (2022).
- A state-of-the-art review on performance measurement Petri net models for safety critical systems of NPP. Annals of Nuclear Energy (2022).
- MS-QuAAF: A generic evaluation framework for monitoring software architecture quality. Information and Software Technology (2021).
Heiko Koziolek works for the Industrial Software Technologies Group of ABB Corporate Research in Ladenburg, Germany. He received his diploma degree (2005) and his Ph.D. degree (2008) in computer science from the University of Oldenburg, Germany. From 2005 to 2008 Heiko was a member of the graduate school ‘TrustSoft’ and held a Ph.D. scholarship from the German Research Foundation (DFG). He has worked as an intern for IBM and sd&m. His research interests include performance engineering, software architecture evaluation, model-driven software development, and empirical software engineering.