DOI: 10.1145/3132498.3132514
research-article

Revealing design problems in stinky code: a mixed-method study

Published: 18 September 2017

ABSTRACT

Developers often have to locate design problems in source code. Several types of design problem may manifest as code smells in the program. A code smell is a source code structure that may reveal a partial hint about the manifestation of a design problem. Recent studies suggest that developers should ignore smells occurring in isolation in a program location. Instead, they should focus on analyzing stinkier code, i.e., program locations (e.g., a class or a hierarchy) affected by multiple smells. The stinkier a program location is, the more likely it is to contain a design problem. However, there is limited understanding of whether developers can effectively identify design problems in stinkier code. Developers may struggle to make sense of inter-related smells affecting the same program location. To address this matter, we applied a mixed-method approach to analyze if and how developers can effectively find design problems when reflecting upon stinky code, i.e., a program location affected by multiple smells. We conducted an experiment and interviews with 11 professionals. Surprisingly, our analysis revealed that only 36.36% of the developers found more design problems when explicitly reasoning about multiple smells than when reasoning about single smells. On the other hand, 63.63% of the developers reported far fewer false positives. Developers reported that analyzing stinky code scattered across class hierarchies or packages is often difficult and time-consuming, and requires proper visualization support. Moreover, it remains time-consuming to discard stinky program locations that do not represent design problems.
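To make the central notion concrete, the sketch below (not the authors' tooling; the smell names, locations, and the two-smell threshold are hypothetical) groups detector output by program location and keeps only the "stinky" locations, i.e., those affected by multiple smells:

```python
# Illustrative sketch: flag program locations affected by multiple
# code smells, following the intuition that stinkier locations are
# more likely to contain design problems.
from collections import defaultdict

# Hypothetical detector output: (location, smell) pairs.
detected_smells = [
    ("ui.ReportView", "God Class"),
    ("ui.ReportView", "Feature Envy"),
    ("ui.ReportView", "Long Method"),
    ("util.Strings", "Long Method"),
    ("net.Client", "Shotgun Surgery"),
    ("net.Client", "Divergent Change"),
]

def stinky_locations(pairs, min_smells=2):
    """Group smells by location; keep locations with >= min_smells smells."""
    by_location = defaultdict(set)
    for location, smell in pairs:
        by_location[location].add(smell)
    return {loc: smells for loc, smells in by_location.items()
            if len(smells) >= min_smells}

for loc, smells in sorted(stinky_locations(detected_smells).items()):
    print(f"{loc}: {len(smells)} smells -> {sorted(smells)}")
```

Under this (assumed) criterion, `util.Strings` is discarded despite its isolated Long Method, while `ui.ReportView` and `net.Client` surface as candidates for design-problem analysis.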



Published in:
SBCARS '17: Proceedings of the 11th Brazilian Symposium on Software Components, Architectures, and Reuse
September 2017, 129 pages
ISBN: 9781450353250
DOI: 10.1145/3132498

        Copyright © 2017 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States



        Acceptance Rates

SBCARS '17 Paper Acceptance Rate: 12 of 39 submissions, 31%. Overall Acceptance Rate: 23 of 79 submissions, 29%.
