ABSTRACT
Code smells are characteristics of source code that indicate a possible design problem, and they have been proposed as a way for programmers to recognize when their software needs restructuring. Because code smells can go unnoticed while programmers are working, tools called smell detectors have been developed to alert programmers to the presence of smells in their code and to help them understand the causes of those smells. In this paper, we propose a novel smell detector called Stench Blossom that provides an interactive ambient visualization designed first to give programmers a quick, high-level overview of the smells in their code and then, if they wish, to help them understand the sources of those smells. We also describe a laboratory experiment with 12 programmers that tests several hypotheses about our tool. Our findings suggest that programmers can use our tool effectively to identify smells and to make refactoring judgments. This is partly because the tool serves as a memory aid, and partly because it is more reliable and easier to use than heuristics for analyzing smells.
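To make the idea of a smell detector concrete, the following is a minimal sketch (not Stench Blossom itself, whose visualization and smell set are described in the paper) of the kind of metric-based heuristic such tools apply, here flagging two classic Fowler smells in Python source. The threshold values are illustrative assumptions; real detectors make them configurable.

```python
import ast

# Illustrative thresholds (assumptions, not values from the paper).
MAX_STATEMENTS = 20  # "Long Method" heuristic
MAX_PARAMS = 4       # "Long Parameter List" heuristic

def detect_smells(source: str):
    """Return (smell_name, function_name) pairs found in the source text."""
    smells = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Count every statement nested in the function, excluding
            # the FunctionDef node itself.
            n_stmts = sum(isinstance(n, ast.stmt) for n in ast.walk(node)) - 1
            if n_stmts > MAX_STATEMENTS:
                smells.append(("Long Method", node.name))
            if len(node.args.args) > MAX_PARAMS:
                smells.append(("Long Parameter List", node.name))
    return smells

example = """
def configure(a, b, c, d, e, f):
    pass
"""
print(detect_smells(example))  # [('Long Parameter List', 'configure')]
```

A batch detector like this must interrupt the programmer with its findings; the paper's contribution is instead an ambient presentation of such results, so the programmer can glance at smell information without being forced to act on it.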
An interactive ambient visualization for code smells