
Investigating Quality Trade-offs in Open Source Critical Embedded Systems

Published: 04 May 2015

ABSTRACT

During the development of Critical Embedded Systems (CES), quality attributes that are critical for them (e.g., correctness, security) must be guaranteed. This often leads to complex quality trade-offs, since non-critical qualities (e.g., reusability, understandability) may be compromised. In this study, we empirically investigate the existence of quality trade-offs in the implemented architecture across versions of open source CESs, and compare them with those of systems from other application domains. The results of the study suggest that in CESs, non-critical quality attributes are usually compromised in favor of critical ones. In contrast, we have not observed compromises of critical qualities in favor of non-critical ones, in either CESs or the other application domains. Furthermore, quality trade-offs are more frequent among critical quality attributes than among non-critical ones. Our study has implications both for practitioners who make such trade-offs in practice and for researchers who investigate quality trade-offs.
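The study examines trade-offs between quality attributes measured on the implemented architecture across successive versions. As a purely illustrative sketch (not the paper's measurement approach; the attribute names and scores below are hypothetical), one simple way to operationalize a pairwise trade-off is to flag version transitions in which one quality score improves while the other degrades:

# Illustrative sketch only: flag candidate trade-off events between two
# quality-attribute scores tracked per release (higher score = better).
# The attribute names and values are hypothetical, not data from the paper.
from typing import List, Tuple

def tradeoff_events(qa_a: List[float], qa_b: List[float]) -> List[Tuple[int, float, float]]:
    """Return (version index, delta_a, delta_b) for transitions where the
    two attributes move in opposite directions."""
    events = []
    for i in range(1, min(len(qa_a), len(qa_b))):
        delta_a = qa_a[i] - qa_a[i - 1]
        delta_b = qa_b[i] - qa_b[i - 1]
        if delta_a * delta_b < 0:  # one improves while the other degrades
            events.append((i, delta_a, delta_b))
    return events

# Example: a critical attribute (security) rising while a non-critical one
# (reusability) drops over the first two transitions.
security    = [0.60, 0.65, 0.72, 0.74]
reusability = [0.55, 0.50, 0.46, 0.47]
print(tradeoff_events(security, reusability))  # transitions 1 and 2 are flagged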

          Published in

          QoSA '15: Proceedings of the 11th International ACM SIGSOFT Conference on Quality of Software Architectures
          May 2015, 152 pages
          ISBN: 9781450334709
          DOI: 10.1145/2737182
          General Chair: Philippe Kruchten
          Program Chairs: Ipek Ozkaya, Heiko Koziolek

          Copyright © 2015 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 4 May 2015


          Qualifiers

          • research-article

          Acceptance Rates

          QoSA '15 Paper Acceptance Rate: 14 of 42 submissions, 33%
          Overall Acceptance Rate: 46 of 131 submissions, 35%
