Performance Evaluation

Volume 67, Issue 8, August 2010, Pages 634-658

Performance evaluation of component-based software systems: A survey

https://doi.org/10.1016/j.peva.2009.07.007

Abstract

Performance prediction and measurement approaches for component-based software systems help software architects to evaluate their systems based on component performance specifications created by component developers. Integrating classical performance models such as queueing networks, stochastic Petri nets, or stochastic process algebras, these approaches additionally exploit the benefits of component-based software engineering, such as reuse and division of work. Although researchers have proposed many approaches in this direction during the last decade, none of them has attained widespread industrial use. On this basis, we have conducted a comprehensive state-of-the-art survey of more than 20 of these approaches assessing their applicability. We classified the approaches according to the expressiveness of their component performance modelling languages. Our survey helps practitioners to select an appropriate approach and scientists to identify interesting topics for future research.

Introduction

During the last ten years, researchers have proposed many approaches for evaluating the performance (i.e., response time, throughput, resource utilisation) of component-based software systems. These approaches deal with both performance prediction and performance measurement. The former analyse the expected performance of a component-based software design to avoid performance problems in the system implementation, which can lead to substantial costs for re-designing the component-based software architecture. The latter analyse the observable performance of implemented and running component-based systems to understand their performance properties, determine their maximum capacity, identify performance-critical components, and remove performance bottlenecks.
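As general background (not taken from the survey itself), the three metrics named above are linked by standard operational laws. The short LaTeX sketch below states Little's law and the utilisation law and works through one hypothetical numerical example; the symbols X (throughput), R (mean response time), N (mean population), U_k (utilisation of resource k), and S_k (service demand per request at resource k) are chosen here purely for illustration.

    % Standard operational laws (background material, not specific to this survey).
    % X = throughput, R = mean response time, N = mean number of requests in the system,
    % U_k = utilisation of resource k, S_k = mean service demand per request at resource k.
    \begin{align*}
      N   &= X \cdot R    && \text{(Little's law)} \\
      U_k &= X \cdot S_k  && \text{(utilisation law)}
    \end{align*}
    % Hypothetical example: at X = 200 requests/s and R = 0.05 s, on average N = 10
    % requests are in the system; a resource with S_k = 4 ms per request is then
    % utilised at U_k = 200 * 0.004 = 0.8, i.e. 80%.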

Component-based software engineering (CBSE) is the successor of object-oriented software development [1], [2] and has been supported by commercial component frameworks such as Microsoft’s COM, Sun’s EJB or CORBA CCM. Software components are units of composition with explicitly defined provided and required interfaces [1]. Software architects compose software components based on specifications from component developers. While classical performance models such as queueing networks [3], stochastic Petri nets [4], or stochastic process algebras [5] can be used to model and analyse component-based systems, specialised component performance models are required to support the component-based software development process and to exploit the benefits of the component paradigm, such as reuse and division of work. The challenge for component performance models is that the performance of a software component in a running system depends on the context it is deployed into and its usage profile, which is usually unknown to the component developer creating the model of an individual component.
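To make this challenge concrete, the following minimal Java sketch (illustrative only; the component, interface, and binding names such as ReportGenerator and DataSource are hypothetical and not taken from the survey) shows a component with one explicitly provided and one explicitly required interface, and how the same component implementation exhibits different response times depending on the context it is bound to at assembly time.

    // Illustrative sketch: the same component implementation, deployed in two
    // different contexts, shows different observable response times.
    import java.util.concurrent.TimeUnit;

    /** Required interface: the component states what it needs, not how it is provided. */
    interface DataSource {
        byte[] fetch(String query) throws InterruptedException;
    }

    /** Provided interface: what the component offers to its clients. */
    interface ReportService {
        byte[] generateReport(String query) throws InterruptedException;
    }

    /** The component: a unit of composition with explicit provided/required interfaces. */
    final class ReportGenerator implements ReportService {
        private final DataSource dataSource; // bound by the software architect at assembly time

        ReportGenerator(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        @Override
        public byte[] generateReport(String query) throws InterruptedException {
            byte[] raw = dataSource.fetch(query);  // time spent here depends on the binding
            TimeUnit.MILLISECONDS.sleep(5);        // stand-in for the component's own processing
            return raw;
        }
    }

    public class ContextDependenceDemo {
        public static void main(String[] args) throws InterruptedException {
            // Two deployment contexts for the same component implementation:
            DataSource inMemoryCache  = q -> new byte[128];                      // fast required service
            DataSource remoteDatabase = q -> { TimeUnit.MILLISECONDS.sleep(50);  // slow required service
                                               return new byte[128]; };

            for (DataSource binding : new DataSource[] { inMemoryCache, remoteDatabase }) {
                ReportService service = new ReportGenerator(binding);
                long start = System.nanoTime();
                service.generateReport("monthly-sales");
                long millis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
                System.out.println("Response time with this binding: ~" + millis + " ms");
            }
        }
    }

A component developer who models only ReportGenerator cannot know in advance which of these bindings, or which usage profile, the software architect will choose; this is exactly why parameterised component performance specifications are needed.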

The goal of this paper is to classify the performance prediction and measurement approaches for component-based software systems proposed during the last ten years. Going beyond the tuning guides for commercial component frameworks (e.g., [6], [7], [8]) that are currently used in practice, these approaches introduce specialised modelling languages for the performance of software components and aim at an understanding of the performance of a designed architecture instead of code-centric performance fixes. In the future, software architects should be able to predict the performance of application designs based on the component performance specifications provided by component developers. This approach is more cost-effective than fixing performance problems late in the life-cycle [9].

Although many approaches have been proposed, they use different component notions (e.g., EJB, mathematical functions, etc.), aim at different life-cycle stages (e.g., design, maintenance, etc.), target different technical domains (e.g., embedded systems, distributed systems, etc.), and offer different degrees of tool support (e.g., textual modelling, graphical modelling, automated performance simulation). None of the approaches has gained widespread industrial use due to still immature component performance models, limited tool support, and missing large-scale case study reports. Many software companies still rely on personal experience and hands-on approaches to deal with performance problems instead of using engineering methods [10]. Therefore, this paper presents a survey and critical evaluation of the proposed approaches to help select an appropriate approach for a given scenario.

Our survey is more detailed and up-to-date than existing survey papers and has a special focus on component-based performance evaluation methods. Balsamo et al. [11] reviewed model-based performance prediction methods for general systems, but did not analyse the special requirements of component-based systems. Putrycz et al. [12] analysed different techniques for performance evaluation of COTS systems, such as layered queueing, curve-fitting, or simulation, but did not give an assessment of component performance modelling languages. Becker et al. [13] provided an overview of component-based performance modelling and measurement methods and coarsely evaluated them for different properties. Woodside et al. [10] designed a roadmap for future research in the domain of software performance engineering and recommended exploiting techniques from Model-Driven Development [14] for the performance evaluation of component-based systems.

The contributions of this paper are (i) a classification scheme for performance evaluation methods based on the expressiveness of their component performance modelling language, (ii) a critical evaluation of existing approaches regarding their benefits and drawbacks, and (iii) an analysis of future research directions. We exclude qualitative architectural evaluation methods, such as ATAM [15] and SAAM [16], from our survey, because they do not provide quantitative measures. We also exclude approaches without tool support or case studies (e.g., [17], [18]). Furthermore, we exclude performance modelling and measurement approaches where monolithic modelling techniques have been used to assess the performance of component-based systems (e.g., [19], [20]).

This paper is structured as follows. Section 2 lays the foundations and elaborates on the special characteristics of performance evaluation methods for component-based systems. Section 3 summarises the most important methods in this area. Section 4 discusses general features of the approaches and classifies and evaluates them. Section 5 points out directions for future research. Section 6 concludes the paper.

Section snippets

Software component performance

This section introduces the most important terms and notions to understand the challenge of performance evaluation methods for component-based systems. Section 2.1 clarifies the notion of a software component, before Section 2.2 lists the different influence factors on the performance of a software component. Section 2.3 distinguishes between several life-cycle stages of a software component and breaks down which influence factors are known at which life-cycle stages. Based on this, Section 2.4

Performance evaluation methods

In the following, we provide short summaries of the performance evaluation methods for component-based software systems in the scope of this survey. We have grouped the approaches as depicted in Fig. 5.

Main approaches provide full performance evaluation processes, while supplemental approaches focus on specific aspects, such as measuring individual components or modelling the performance properties of component connectors (i.e., artefacts binding components). The groups have been chosen based on

Evaluation

This section first discusses the general features of the surveyed main approaches (Section 4.1.1), before classifying them according to the expressiveness of their modelling languages (Section 4.1.2). Sections 4.2.1 (prediction approaches based on UML), 4.2.2 (prediction methods based on proprietary meta-models), 4.2.3 (prediction methods with focus on middleware), 4.2.4 (formal specification methods), and 4.2.5 (measurement methods for system implementations) additionally analyse the benefits and

Future directions

This survey has revealed many open issues and recommendations for future work in performance evaluation of component-based software systems:

Model Expressiveness: As shown by the survey and discussed in the previous section, there is limited consensus about the performance modelling language for component-based systems. Component performance can be modelled on different abstraction levels. The question is which detail to include in the performance models because of its impact on timing and which

Conclusions

We have surveyed the state-of-the-art in research on performance evaluation methods for component-based software systems. The survey classified the approaches according to the expressiveness of their performance modelling languages and critically evaluated their benefits and drawbacks.

The area of performance evaluations for component-based software engineering has significantly matured over the last decade. Several features have been adopted as good engineering practice and should influence

Acknowledgements

The author would like to thank the members of the SDQ research group at the University of Karlsruhe, especially Anne Martens, Klaus Krogmann, Michael Kuperberg, Steffen Becker, Jens Happe, and Ralf Reussner for their valuable review comments. This work is supported by the EU-project Q-ImPrESS (grant FP7-215013).

Heiko Koziolek works for the Industrial Software Technologies Group of ABB Corporate Research in Ladenburg, Germany. He received his diploma degree (2005) and his Ph.D. degree (2008) in computer science from the University of Oldenburg, Germany. From 2005 to 2008 Heiko was a member of the graduate school ‘TrustSoft’ and held a Ph.D. scholarship from the German Research Foundation (DFG). He has worked as an intern for IBM and sd&m. His research interests include performance engineering, software architecture evaluation, model-driven software development, and empirical software engineering.

References (102)

  • Lloyd G. Williams, Connie U. Smith, Making the business case for software performance engineering, in: Proceedings of...
  • Murray Woodside et al., The future of software performance engineering
  • Simonetta Balsamo et al., Model-based performance prediction in software development: A survey, IEEE Trans. Softw. Eng. (2004)
  • Erik Putrycz et al., Performance techniques for COTS systems, IEEE Software (2005)
  • Steffen Becker et al., Performance prediction of component-based systems: A survey from an engineering perspective
  • Thomas Stahl et al., Model-Driven Software Development: Technology, Engineering, Management (2006)
  • Rick Kazman, M. Klein, Paul Clements, ATAM: Method for architecture evaluation, Technical Report CMU/SEI-2000-TR-004,...
  • Rick Kazman et al., SAAM: A method for analyzing the properties of software architectures
  • Antinisca DiMarco et al., Compositional generation of software architecture performance QN models
  • Hassan Gomaa et al., Performance engineering of component-based distributed software systems
  • Marcel Harkema, Bart Gijsen, Rob van der Mei, Bart Nieuwenhuis, Performance comparison of middleware threading...
  • Samuel Kounev, Performance modeling and evaluation of distributed component-based systems using queueing Petri nets, IEEE Trans. Softw. Eng. (2006)
  • Malcolm Douglas McIlroy, Mass-produced software components, in: Software Engineering Concepts and Techniques (NATO Science Committee) (1968)
  • Microsoft Corp., The COM homepage. http://www.microsoft.com/com/ (last retrieved...
  • Sun Microsystems Corp., The Enterprise Java Beans homepage. http://java.sun.com/products/ejb/, 2007 (last retrieved...
  • Object Management Group (OMG), CORBA Component Model, v4.0 (formal/2006-04-01)....
  • Kung-Kiu Lau et al., Software component models, IEEE Trans. Softw. Eng. (2007)
  • Steffen Becker et al., The impact of software component adaptation on quality of service properties, L'objet (2006)
  • John Cheesman et al., UML Components: A Simple Process for Specifying Component-based Software Systems (2001)
  • Ralf H. Reussner et al., Reasoning on software architectures with contractually specified components
  • Connie U. Smith, Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software (2002)
  • Thomas Kappler et al., Towards automatic construction of reusable prediction models for component-based performance engineering
  • Object Management Group (OMG), Unified Modeling Language: Infrastructure, version 2.1.1....
  • Object Management Group (OMG), UML Profile for Schedulability, Performance and Time....
  • Object Management Group (OMG), UML Profile for MARTE, Beta 1. http://www.omg.org/cgi-bin/doc?ptc/2007-08-04, August 2007...
  • Pekka Kähkipuro, UML-based performance modeling framework for component-based distributed systems
  • Antonia Bertolino et al., CB-SPE tool: Putting component-based performance engineering into practice
  • Simonetta Balsamo et al., Efficient performance models in component-based software engineering
  • Xiuping Wu et al., Performance modeling from software components
  • Jerome A. Rolia et al., The method of layers, IEEE Trans. Softw. Eng. (1995)
  • Jing Xu et al., Performance modeling and prediction of Enterprise JavaBeans with layered queuing network templates, SIGSOFT Softw. Eng. Notes (2006)
  • Scott A. Hissam et al., Packaging predictable assembly
  • Kurt Wallnau, James Ivers, Snapshot of CCL: A language for predictable assembly, Technical report, Software Engineering...
  • Scott Hissam, Mark Klein, John Lehoczky, Paulo Merson, Gabriel Moreno, Kurt Wallnau, Performance property theory for...
  • Gabriel Moreno et al., Model-driven performance analysis
  • Scott Hissam, Gabriel Moreno, Daniel Plakosh, Isak Savo, Marcin Stemarczyk, Predicting the behavior of a highly...
  • Steffen Göbel et al., The COMQUAD component model: Enabling dynamic selection of implementations by weaving non-functional aspects
  • Marcus Meyerhöfer, Christoph Neumann, TESTEJB - A measurement framework for EJBs, in: Proceedings of the 7th...
  • Marcus Meyerhöfer, Messung und Verwaltung von Software-Komponenten für die Performance-Vorhersage (Measurement and management of software components for performance prediction), Ph.D. Thesis,...
  • Vincenzo Grassi et al., From design to analysis models: A kernel language for performance and reliability analysis of component-based systems
