Unit verification, including software inspections and unit tests, is usually the first code verification phase in the software development process. However, the principles of unit verification are weakly explored, mostly due to a lack of data: unit verification data are rarely collected systematically, and only a few industrial studies with such data have been published. We therefore explore the theory of fault distributions, originating in the quantitative analysis by Fenton and Ohlsson, in the weakly explored context of unit verification in large-scale software development. We conduct a quantitative case study on a sequence of four development projects on consecutive releases of the same complex software product line system for telecommunication exchanges. We replicate the operationalization from earlier studies and analyze hypotheses related to the Pareto principle of fault distribution, the persistence of faults, the effects of module size, and quality in terms of fault densities, now from the perspective of unit verification. The patterns in unit verification results resemble those of later verification phases, e.g., regarding the Pareto principle, and may thus be used for prediction and planning purposes. Using unit verification results as predictors may improve the quality and efficiency of software verification.
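The Pareto-style analysis mentioned in the abstract asks whether a small share of modules accounts for most of the faults. A minimal sketch of that check, using synthetic fault counts (hypothetical data, not from the study), could look like this:

```python
def pareto_share(fault_counts, module_fraction=0.20):
    """Return the fraction of all faults that falls in the top
    `module_fraction` of modules, ranked by descending fault count."""
    ranked = sorted(fault_counts, reverse=True)
    top_n = max(1, round(len(ranked) * module_fraction))
    total = sum(ranked)
    return sum(ranked[:top_n]) / total if total else 0.0

# Example: 10 modules with a skewed (hypothetical) fault distribution.
faults = [40, 25, 10, 5, 4, 3, 2, 1, 0, 0]
share = pareto_share(faults)
print(f"Top 20% of modules contain {share:.0%} of faults")  # prints "Top 20% of modules contain 72% of faults"
```

If the resulting share is near 80% for the top 20% of modules, the fault distribution follows the Pareto pattern discussed in the paper; the function and data here are illustrative only.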
Andersson, C., & Runeson, P. (2007). A replicated quantitative analysis of fault distributions in complex software systems. IEEE Transactions on Software Engineering, 33(5), 273–286.
Aurum, A., Petersson, H., & Wohlin, C. (2002). State-of-the-art: Software inspections after 25 years. Software Testing, Verification and Reliability, 12(3), 133–154.
Basili, V. R., & Perricone, B. T. (1984). Software errors and complexity: An empirical investigation. Communications of the ACM, 27(1), 42–52.
Basili, V. R., & Selby, R. W. (1987). Comparing the effectiveness of software testing strategies. IEEE Transactions on Software Engineering, 13(12), 1278–1296.
Bhat, T., & Nagappan, N. (2006). Evaluating the efficacy of test-driven development: Industrial case studies. In Proceedings of the International Symposium on Empirical Software Engineering. pp. 356–363.
Briand, L. C., El Emam, K., & Freimut, B. G. (2000). A comprehensive evaluation of capture-recapture models for estimating software defect content. IEEE Transactions on Software Engineering, 26(6), 518–540.
Briand, L., El Emam, K., Laitenberger, O., & Fussbroich, T. (1998). Using simulation to build inspection efficiency benchmarks for development projects. In Proceedings of the 20th International Conference on Software Engineering. pp. 340–349.
Carver, J. (2010). Towards reporting guidelines for experimental replications: A proposal. In Proceedings of the 1st International Workshop on Replication in Empirical Software Engineering Research (RESER). Cape Town, South Africa.
Catal, C., & Diri, B. (2009). A systematic review of software fault prediction studies. Expert Systems with Applications, 36(4), 7346–7354.
Concas, G., Marchesi, M., Murgia, A., Tonelli, R., & Turnu, I. (2011). On the distribution of bugs in the eclipse system. IEEE Transactions on Software Engineering, 37(6), 872–877.
El Emam, K., Laitenberger, O., & Harbich, T. (2000). The application of subjective estimates of effectiveness to controlling software inspections. The Journal of Systems and Software, 54(2), 119–136.
Engström, E., & Runeson, P. (2010). A qualitative survey of regression testing practices. In M. Ali Babar, M. Vierimaa, & M. Oivo (Eds.), Proceedings 11th international conference on product-focused software process improvement (PROFES), volume 6156 of lecture notes in computer science (pp. 3–16). Berlin/Heidelberg: Springer.
Fagan, M. (2002). Design and code inspections to reduce errors in program development. In Software pioneers. New York: Springer-Verlag New York Inc.
Fenton, N., & Neil, M. (1999). A critique of software defect prediction models. IEEE Transactions on Software Engineering, 25(5), 675–689.
Fenton, N. E., & Ohlsson, N. (2000). Quantitative analysis of faults and failures in a complex software system. IEEE Transactions on Software Engineering, 26(8), 797–814.
Galinac Grbac, T., & Huljenić, D. (2011). Defect detection effectiveness and product quality in global software development. In Proceedings of the 12th International Conference on Product-Focused Software Process Improvement (PROFES), Lecture Notes in Computer Science 6759. Springer: Torre Canne, Italy, 20–22 June 2011.
Galinac Grbac, T., & Huljenić, D. (2015). On the probability distribution of faults in complex software systems. Information and Software Technology, 58, 250–258.
Galinac Grbac, T., Car, Z., & Huljenić, D. (2012). Quantifying value of adding inspection effort early in the development process: A case study. Software IET, 6(3), 249–259.
Galinac Grbac, T., Car, Z., & Huljenić, D. (2015). A quality cost reduction model for large-scale software development. Software Quality Journal, 23, 363–390.
Galinac Grbac, T., Runeson, P., & Huljenić, D. (2013). A second replicated quantitative analysis of fault distributions in complex software systems. IEEE Transactions on Software Engineering, 39(4), 462–476.
Gilb, T., & Graham, D. (1993). Software inspection. Boston: Addison-Wesley.
Gómez, O. S., Juristo, N., & Vegas, S. (2014). Understanding replication of experiments in software engineering: A classification. Information and Software Technology, 56(8), 1033–1048.
Hall, T., Beecham, S., Bowes, D., Gray, D., & Counsell, S. (2012). A systematic literature review on fault prediction performance in software engineering. IEEE Transactions on Software Engineering, 38(6), 1276–1304.
Hannay, J. E., Sjoberg, D. I. K., & Dybå, T. (2007). A systematic review of theory use in software engineering experiments. IEEE Transactions on Software Engineering, 33(2), 87–107.
Hetzel, W. C. (1976). An experimental analysis of program verification methods. Ph.D. Dissertation, The University of North Carolina at Chapel Hill.
IEEE Std 610.12-1990 (1990). IEEE standard glossary of software engineering terminology. IEEE.
Juristo, N., Moreno, A. M., & Vegas, S. (2004). Reviewing 25 years of testing technique experiments. Empirical Software Engineering, 9(1), 7–44.
Juristo, N., Vegas, S., Solari, M., Abrahao, S., & Ramos, I. (2012). Comparing the effectiveness of equivalence partitioning, branch testing and code reading by stepwise abstraction applied by subjects. In Proceedings of Fifth IEEE International Conference on Software Testing, Verification and Validation, pp. 330–339.
Juristo Juzgado, N., Vegas, S., Solari, M., Abrahaõ, S., & Ramos, I. (2013). A process for managing interaction between experimenters to get useful similar replications. Information and Software Technology, 55(2), 215–225.
Kamsties, E., & Lott, C. M. (1995). An empirical evaluation of three defect-detection techniques. In Proceedings of the 5th European Software Engineering Conference, pp. 362–383.
Koru, A. G., Zhang, D., El Emam, K., & Liu, H. (2009). An investigation into the functional form of the size-defect relationship for software modules. IEEE Transactions on Software Engineering, 35(2), 293–304.
Kitchenham, B. A. (2008). The role of replications in empirical software engineering: A word of warning. Empirical Software Engineering, 13(2), 219–221.
Mäntylä, M. V., & Lassenius, C. (2009). What types of defects are really discovered in code reviews? IEEE Transactions on Software Engineering, 35(3), 430–448.
Miller, J. (2005). Replicating software engineering experiments: A poisoned chalice or the holy grail. Information and Software Technology, 47(4), 233–244.
Munir, H., Moayyed, M., & Petersen, K. (2014). Considering rigor and relevance when evaluating test driven development: A systematic review. Information and Software Technology. doi:10.1016/j.infsof.2014.01.002.
Myers, G. J. (1978). A controlled experiment in program testing and code walkthroughs/inspections. Communications of the ACM, 21(9), 760–768.
Nagappan, N., Maximilien, E. M., Bhat, T., & Williams, L. (2008). Realizing quality improvement through test driven development: Results and experiences of four industrial teams. Empirical Software Engineering, 13(3), 289–302.
Ohlsson, N., & Alberg, H. (1996). Predicting fault-prone software modules in telephone switches. IEEE Transactions on Software Engineering, 22(12), 886–894.
Petersson, H., Thelin, T., Runeson, P., & Wohlin, C. (2004). Capture-recapture in software inspections after 10 years research-theory, evaluation and application. The Journal of Systems and Software, 72(2), 249–264.
Runeson, P. (2006). A survey of unit testing practices. IEEE Software, 23(4), 22–29.
Runeson, P., Andersson, C., Thelin, T., Andrews, A., & Berling, T. (2006). What do we know about defect detection methods. IEEE Software, 23(3), 82–90.
Runeson, P., Höst, M., Rainer, A., & Regnell, B. (2012). Case study research in software engineering - guidelines and examples. New York: Wiley.
Runeson, P., Stefik, A., & Andrews, A. (2014). Variation factors in the design and analysis of replicated controlled experiments: Three (dis)similar studies on inspections versus unit testing. Empirical Software Engineering, 19(6), 1781–1808.
Shull, F. J., Carver, J. C., Vegas, S., & Juristo, N. (2008). The role of replications in empirical software engineering. Empirical Software Engineering, 13(2), 211–218.
Siy, H., & Votta, L. (2001). Does the modern code inspection have value? In Proceedings of the IEEE International Conference on Software Maintenance, pp. 281–289.
Sjøberg, D. I. K., Dybå, T., Anda, B., & Hannay, J. E. (2008). Building theories in software engineering. In Guide to advanced empirical software engineering. New York: Springer.
Strauss, S. H., & Ebenau, R. G. (1994). Software inspection process. New York: McGraw-Hill.
Wohlin, C., & Runeson, P. (1998). Defect content estimations from review data. In Proceedings of the 20th International Conference on Software Engineering, pp. 400–409.
Wood, M., Roper, M., Brooks, A., & Miller, J. (1997). Comparing and combining software defect detection techniques: A replicated empirical study. SIGSOFT Software Engineering Notes, 22(6), 262–277.
Zhang, H. (2008). On the distribution of software faults. IEEE Transactions on Software Engineering, 34(2), 301–302.
A quantitative analysis of the unit verification perspective on fault distributions in complex software systems: an operational replication
Tihana Galinac Grbac
Springer US