DOI: 10.1145/3132498.3134268

Applying software metric thresholds for detection of bad smells

Published: 18 September 2017

ABSTRACT

Software metrics can be an effective measurement tool to assess software quality. The literature offers many metrics applicable to systems implemented in different paradigms, such as Object-Oriented Programming (OOP). To guide the use of these metrics in evaluating the quality of software systems, it is important to define their thresholds. The aim of this study is to investigate the effectiveness of thresholds in evaluating the quality of object-oriented software. To do so, we used a catalog of thresholds for 18 software metrics, derived from 100 software systems, to define detection strategies for five bad smells: Large Class, Long Method, Data Class, Feature Envy, and Refused Bequest. Using these strategies, we analyzed 12 software systems to investigate how effectively the thresholds support bad smell detection. The results obtained with the proposed strategies were compared with those of JDeodorant and JSPiRIT, two tools used to identify bad smells. This study shows that the metric thresholds were significantly effective in supporting the detection of bad smells.
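To make the approach concrete, the sketch below encodes one detection strategy of the kind the study defines: a Large Class rule expressed as a conjunction of metric comparisons against thresholds, in the style of Marinescu's detection strategies [21]. It is a minimal illustration in Python; the chosen metrics (LOC, WMC, TCC) and the threshold values are placeholder assumptions, not the catalog values of Filó et al. [13] that the paper actually uses.

    # Minimal sketch of a threshold-based detection strategy for the
    # Large Class bad smell. Metric names and thresholds are illustrative
    # assumptions, not the catalog values used in the paper.
    from dataclasses import dataclass

    @dataclass
    class ClassMetrics:
        loc: int    # lines of code
        wmc: int    # weighted methods per class
        tcc: float  # tight class cohesion, in [0, 1]

    # Hypothetical thresholds; the paper derives its values from a
    # benchmark of 100 software systems.
    LOC_HIGH, WMC_HIGH, TCC_LOW = 500, 47, 0.33

    def is_large_class(m: ClassMetrics) -> bool:
        # Flag a class that is both large and complex, and at the
        # same time poorly cohesive.
        return m.loc > LOC_HIGH and m.wmc > WMC_HIGH and m.tcc < TCC_LOW

    # Usage on two hypothetical classes:
    classes = {"OrderManager": ClassMetrics(1200, 80, 0.10),
               "Point": ClassMetrics(40, 5, 0.90)}
    for name, m in classes.items():
        if is_large_class(m):
            print("Large Class candidate:", name)

Analogous conjunctions over the cataloged metrics would express the detection strategies for the remaining four smells.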


References

  2. MuLATo tool. 2009. http://sourceforge.net/projects/mulato/
  3. Together. http://www.borland.com/us/products/together/
  4. Understand. http://www.scitools.com/
  5. T. L. Alves, C. Ypma, and J. Visser. 2010. Deriving metric thresholds from benchmark data. In International Conference on Software Maintenance (ICSM). IEEE, 10.
  6. S. Bellon, R. Koschke, G. Antoniol, J. Krinke, and E. Merlo. 2007. Comparison and evaluation of clone detection tools. IEEE Transactions on Software Engineering 33, 9 (2007), 577--591.
  7. Saida Benlarbi, Khaled El Emam, Nishith Goel, and Shesh Rai. 2000. Thresholds for Object-Oriented Measures. In Proceedings of the 11th International Symposium on Software Reliability Engineering. IEEE Computer Society, 24--38.
  8. I. M. Bertrán. 2009. Avaliação da qualidade de software com base em modelos UML [Software quality evaluation based on UML models]. Ph.D. Dissertation. Rio de Janeiro, Brazil.
  9. B. Cardoso and E. Figueiredo. 2015. Co-Occurrence of Design Patterns and Bad Smells in Software Systems: An Exploratory Study. In Proceedings of the Annual Conference on Brazilian Symposium on Information Systems, Vol. 46. 347--354.
  10. C. Couto, C. Maffort, R. Garcia, and M. T. Valente. 2013. COMETS: A Dataset for Empirical Research on Software Evolution Using Source Code Metrics and Time Series Analysis. ACM SIGSOFT Software Engineering Notes 38, 1 (2013), 1--3.
  11. E. Fernandes, J. Oliveira, G. Vale, T. Paiva, and E. Figueiredo. 2016. A review-based comparative study of bad smell detection tools. In 20th International Conference on Evaluation and Assessment in Software Engineering (EASE). ACM, 18.
  12. K. A. M. Ferreira, M. A. S. Bigonha, R. S. Bigonha, L. F. O. Mendes, and H. C. Almeida. 2012. Identifying Thresholds for Object-oriented Software Metrics. Journal of Systems and Software 85 (2012), 244--257.
  13. T. G. S. Filó, M. A. S. Bigonha, and K. A. M. Ferreira. 2015. A Catalogue of Thresholds for Object-Oriented Software Metrics. In Proceedings of the International Conference on Advances and Trends in Software Engineering (SOFTENG). 48--55.
  14. F. A. Fontana, V. Ferme, M. Zanoni, and A. Yamashita. 2015. Automatic metric thresholds derivation for code smell detection. In Proceedings of the Sixth International Workshop on Emerging Trends in Software Metrics. IEEE Press, 44--53.
  15. M. Fowler. 1999. Refactoring: Improving the Design of Existing Code. Addison-Wesley.
  16. S. Kaur, S. Singh, and H. Kaur. 2013. A quantitative investigation of software metrics threshold values at acceptable risk level. International Journal of Engineering Research and Technology 2 (2013).
  17. H. Liu, Z. Ma, W. Shao, and Z. Niu. 2012. Schedule of bad smell detection and resolution: A new way to save effort. IEEE Transactions on Software Engineering 38, 1 (2012).
  18. M. Lanza and R. Marinescu. 2010. Object-Oriented Metrics in Practice: Using Software Metrics to Characterize, Evaluate, and Improve the Design of Object-Oriented Systems (1st ed.). Springer Publishing Company, Incorporated.
  19. I. Macia, R. Arcoverde, A. Garcia, C. Chavez, and A. von Staa. 2012. On the Relevance of Code Anomalies for Identifying Architectural Degradation Symptoms. In Proceedings of the 16th European Conference on Software Maintenance and Reengineering. IEEE, 277--286.
  20. I. Macia, J. Garcia, D. Popescu, A. Garcia, N. Medvidovic, and A. von Staa. 2012. Are automatically-detected code anomalies relevant to architectural modularity? An exploratory analysis of evolving systems. In Proceedings of the 11th Annual International Conference on Aspect-Oriented Software Development. ACM, 167--178.
  21. R. Marinescu. 2004. Detection strategies: Metrics-based rules for detecting design flaws. In 20th International Conference on Software Maintenance. IEEE, 350--359.
  22. P. Oliveira, M. T. Valente, and F. P. Lima. 2014. Extracting relative thresholds for source code metrics. In Software Evolution Week - IEEE Conference on Software Maintenance, Reengineering, and Reverse Engineering (CSMR-WCRE). 254--263.
  23. J. Padilha, E. Figueiredo, C. Sant'Anna, and A. Garcia. 2013. Detecting God Methods with Concern Metrics: An Exploratory Study. In Proceedings of the 7th Latin-American Workshop on Aspect-Oriented Software Development (LA-WASP).
  24. T. Paiva, A. Damasceno, J. Padilha, E. Figueiredo, and C. Sant'Anna. 2015. Experimental evaluation of code smell detection tools. In III Workshop on Software Visualization, Evolution, and Maintenance (VEM).
  25. M. Riaz, E. Mendes, and E. Tempero. 2009. A Systematic Review of Software Maintainability Prediction and Metrics. In Proceedings of the 3rd International Symposium on Empirical Software Engineering and Measurement (ESEM). 367--377.
  26. D. Sahin, M. Kessentini, S. Bechikh, and K. Deb. 2014. Code-smell detection as a bilevel problem. ACM Transactions on Software Engineering and Methodology 24, 1 (2014), 6.
  27. V. Sales, R. Terra, L. F. Miranda, and M. T. Valente. 2013. Recommending move method refactorings using dependency sets. In 20th Working Conference on Reverse Engineering (WCRE). IEEE, 232--241.
  28. R. Shatnawi, W. Li, J. Swain, and T. Newman. 2010. Finding Software Metrics Threshold Values Using ROC Curves. Journal of Software Maintenance and Evolution: Research and Practice 22, 1 (2010), 1--16.
  29. S. Singh and K. Kahlon. 2014. Object oriented software metrics threshold values at quantitative acceptable risk level. CSI Transactions on ICT 2, 3 (2014), 191--205.
  30. I. Sommerville. 2011. Engenharia de Software [Software Engineering]. Pearson Education Brazil.
  31. Bruno L. Sousa, Mariza A. S. Bigonha, and Kecia Ferreira. 2017. Evaluating Co-Occurrence of GoF Design Patterns with God Class and Long Method Bad Smells. In XIII Brazilian Symposium on Information Systems (SBSI). 396.
  32. Bruno L. Sousa, P. P. Souza, E. Fernandes, K. A. M. Ferreira, and M. A. S. Bigonha. 2017. FindSmells: Flexible Composition of Bad Smell Detection Strategies. In 25th International Conference on Program Comprehension (ICPC). 360--363.
  33. Priscila Souza. 2016. A Utilidade dos Valores Referência de Métricas na Avaliação da Qualidade de Softwares Orientados por Objeto [The usefulness of metric reference values in the quality evaluation of object-oriented software]. Master's thesis. DCC-UFMG.
  34. E. Tempero, C. Anslow, J. Dietrich, T. Han, J. Li, M. Lumpe, H. Melton, and J. Noble. 2010. The Qualitas Corpus: A Curated Collection of Java Code for Empirical Studies. In Asia Pacific Software Engineering Conference. 336--345.
  35. R. Terra, L. F. Miranda, M. T. Valente, and R. S. Bigonha. 2013. Qualitas.class corpus: A compiled version of the Qualitas Corpus. ACM SIGSOFT Software Engineering Notes 38, 5 (2013), 1--4.
  36. N. Tsantalis, T. Chaikalis, and A. Chatzigeorgiou. 2008. JDeodorant: Identification and removal of type-checking bad smells. In 12th European Conference on Software Maintenance and Reengineering (CSMR). IEEE, 329--331.
  37. G. Vale, D. Albuquerque, E. Figueiredo, and A. Garcia. 2015. Defining metric thresholds for software product lines: A comparative study. In Proceedings of the 19th International Conference on Software Product Line. ACM, 176--185.
  38. S. Vidal, H. Vasquez, A. Díaz-Pace, and W. Oizumi. 2015. JSpIRIT: A flexible tool for the analysis of code smells. In 34th International Conference of the Chilean Computer Science Society (SCCC). 1--11.
  39. S. A. Vidal, C. Marcos, and J. A. Díaz-Pace. 2014. An approach to prioritize code smells for refactoring. Automated Software Engineering (2014).
  40. C. Wohlin, P. Runeson, M. Höst, M. C. Ohlsson, B. Regnell, and A. Wesslén. 2012. Experimentation in Software Engineering. Springer Science & Business Media.

• Published in

  SBCARS '17: Proceedings of the 11th Brazilian Symposium on Software Components, Architectures, and Reuse
  September 2017, 129 pages
  ISBN: 978-1-4503-5325-0
  DOI: 10.1145/3132498

  Copyright © 2017 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Publisher

Association for Computing Machinery, New York, NY, United States



            Qualifiers

            • research-article

            Acceptance Rates

SBCARS '17 Paper Acceptance Rate: 12 of 39 submissions, 31%
Overall Acceptance Rate: 23 of 79 submissions, 29%
