DOI: 10.1145/2884781.2884812
Research article · Public Access

Belief & evidence in empirical software engineering

Published: 14 May 2016

ABSTRACT

Empirical software engineering has produced a steady stream of evidence-based results concerning the factors that affect important outcomes such as cost, quality, and interval. However, programmers often also have strongly held a priori opinions about these issues. These opinions matter, since developers are highly trained professionals whose beliefs doubtless affect their practice. As in evidence-based medicine, disseminating empirical findings to developers is a key step in ensuring that the findings impact practice. In this paper, we describe a case study of the prior beliefs of developers at Microsoft, and the relationship of these beliefs to actual empirical data from the projects on which these developers work. Our findings are that a) programmers do indeed have very strong beliefs on certain topics; b) their beliefs are primarily formed from personal experience rather than from findings in empirical research; and c) beliefs can vary from project to project, but do not necessarily correspond with the actual evidence in that project. Our findings suggest that more effort should be made to disseminate empirical findings to developers, and that more in-depth study of the interplay of belief and evidence in software practice is needed.
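The study itself is survey- and repository-based and includes no code, but the core comparison it describes, the strength of a stated belief versus the evidence for that belief in the believer's own project, can be sketched in a few lines. The sketch below is purely illustrative: the belief scores, the per-project effect sizes, and the use of a rank correlation are hypothetical stand-ins, not the authors' actual instruments or analysis.

```python
# Purely illustrative sketch: all data below is hypothetical, and this is
# not the paper's actual analysis. It compares mean developer belief scores
# (e.g., Likert 1-5 agreement with a claim such as "stronger code ownership
# reduces defects") against a per-project evidence measure for the same
# claim (e.g., an effect size mined from that project's history).
from scipy.stats import spearmanr

belief_scores = [4.6, 4.1, 3.8, 4.4, 2.9, 3.5]             # one survey mean per project
evidence_effects = [0.10, -0.05, 0.22, 0.02, 0.15, -0.08]  # matching per-project effects

rho, p = spearmanr(belief_scores, evidence_effects)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A rho near zero would echo finding (c): belief strength need not track
# the evidence in the believer's own project.
```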

Published in
ICSE '16: Proceedings of the 38th International Conference on Software Engineering
May 2016, 1235 pages
ISBN: 978-1-4503-3900-1
DOI: 10.1145/2884781
Copyright © 2016 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates
Overall acceptance rate: 276 of 1,856 submissions, 15%
