Article
DOI: 10.1145/1159733.1159787

Evaluating the efficacy of test-driven development: industrial case studies

Published: 21 September 2006

ABSTRACT

This paper discusses software development using the Test-Driven Development (TDD) methodology in two different environments (the Windows and MSN divisions) at Microsoft. In both case studies we collected context, product, and outcome measures to compare and evaluate the efficacy of TDD. We observed a significant increase in code quality (greater than two times) for projects developed using TDD compared to similar projects developed in the same organization in a non-TDD fashion. The TDD projects also took at least 15% extra upfront time for writing the tests. Additionally, the unit tests served as automatic documentation for the code when libraries/APIs had to be used, as well as an aid to code maintenance.
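As context for the methodology under study, below is a minimal sketch of the test-first (red-green-refactor) cycle using Python's standard unittest module. The ShoppingCart class and its tests are hypothetical illustrations, not code from the studied Microsoft projects.

```python
import unittest


# Step 1 (red): write a failing unit test that specifies the intended
# behaviour before the production code exists.
class ShoppingCartTest(unittest.TestCase):
    def test_total_starts_at_zero(self):
        self.assertEqual(ShoppingCart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add_item("book", 12)
        cart.add_item("pen", 3)
        self.assertEqual(cart.total(), 15)


# Step 2 (green): write just enough production code to make the tests pass.
class ShoppingCart:
    def __init__(self):
        self._prices = []

    def add_item(self, name, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)


# Step 3 (refactor): simplify while keeping the tests green, then repeat.
if __name__ == "__main__":
    unittest.main()
```

In an actual TDD cycle the failing test is written and run before the production class exists; both appear together here only to keep the sketch self-contained. The test method names double as a behavioural specification, which is the sense in which the abstract describes unit tests acting as documentation for the code.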



Reviews

Andrew Brooks, Online Computing Reviews Service

Test-driven development (TDD) reduces defect density by at least a factor of two at the expense of increasing coding time by 15 to 35 percent. Finally, we have measures on which to base project management decisions. Or do we? The measures derive from two industrial case studies at Microsoft, called project A and project B. Defect densities from these two projects were compared with defect densities from comparable projects that employed a non-TDD methodology. Managers supplied the estimates of how much employing TDD had increased coding time. However, we do not know about the operational profiles of the software. Would using the comparable projects more extensively have resulted in the discovery of more defects? Also, we do not know about other software quality assurance activities. What did the test teams do? Developer ability can vary considerably. Yet, project A had six developers, while its comparable project had two. Complexity typically correlates with lines of code (LOC). Yet, project B was one-fifth the size of its comparable project in terms of source LOC. Finally, we do not know about defect severity. Do measured improvements rely on the inclusion of minor defects? The notion of comparable project used by the authors is fatally flawed, and too much is unknown about the data. The reader cannot simply accept that TDD reduces defect density by at least a factor of two. This paper will only interest those researching test-driven development.
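The reviewer's objections center on how defect density was measured and compared. As a quick aid, the sketch below shows the standard defects-per-KLOC calculation and the ratio behind a "greater than two times" claim; the figures are hypothetical placeholders, not data from project A or B.

```python
def defect_density(defects: int, loc: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (loc / 1000)


# Hypothetical illustration only -- not the measured Microsoft data.
tdd_density = defect_density(defects=20, loc=40_000)      # 0.50 defects/KLOC
non_tdd_density = defect_density(defects=55, loc=40_000)  # 1.38 defects/KLOC

# A "greater than two times" quality improvement means this ratio exceeds 2.
improvement = non_tdd_density / tdd_density
print(f"TDD: {tdd_density:.2f}/KLOC  non-TDD: {non_tdd_density:.2f}/KLOC  "
      f"ratio: {improvement:.2f}x")
```

The ratio is only meaningful if the comparison project is genuinely comparable in size, staffing, operational profile, and testing effort, which is precisely the reviewer's concern.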

Published in

ISESE '06: Proceedings of the 2006 ACM/IEEE International Symposium on Empirical Software Engineering
September 2006
388 pages
ISBN: 1595932186
DOI: 10.1145/1159733

                Copyright © 2006 ACM


                Publisher

                Association for Computing Machinery

                New York, NY, United States

