
Effectiveness of encapsulation and object-oriented metrics to refactor code and identify error prone classes using bad smells

Published: 30 September 2011

Abstract

To assist maintenance and evolution teams, work needs to be done at the onset of software development. One such facilitation is refactoring the code, making it easier to read, understand and maintain. Refactoring is done by identifying bad smell areas in the code. In this paper, based on empirical analysis, we develop a metrics model to identify smelly classes. The role of two new metrics (encapsulation and information hiding) is also investigated for identifying smelly and faulty classes in software code. The paper first presents a binary statistical analysis of the relationship between metrics and bad smells, the results of which show a significant relationship. Then, a metrics model (with significant metrics shortlisted from the binary analysis) for bad smell categorization (divided into five categories) is developed. To verify the model, we examine the open source Firefox system, which has strong industrial usage. The results show that the proposed metrics model can predict faulty classes with high accuracy, but in the case of the categorized model not all categories of bad smells could adequately identify the faulty and smelly classes. Due to certain limitations of our study, more experiments are required to generalize the results of bad smell and faulty class identification in software code.
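The binary analysis described above relates per-class metric values to the presence of a bad smell, in the style of a logistic regression classifier. The sketch below illustrates that idea only; the two metric names (a coupling count and an encapsulation ratio), the data values, and the training setup are assumptions for demonstration, not the paper's actual metrics, dataset, or fitted model.

```python
import math

# Synthetic per-class metric vectors: [coupling_count, encapsulation_ratio],
# with a binary "smelly" label per class. Illustrative values only.
X = [[12, 0.2], [3, 0.9], [15, 0.1], [2, 0.8], [10, 0.3], [4, 0.7]]
y = [1, 0, 1, 0, 1, 0]

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Fit a two-feature logistic model by plain stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b)
        err = p - yi                      # gradient of log-loss w.r.t. z
        w[0] -= lr * err * xi[0]
        w[1] -= lr * err * xi[1]
        b -= lr * err

def predict(xi):
    """Flag a class as smelly when the modeled probability exceeds 0.5."""
    return int(sigmoid(w[0] * xi[0] + w[1] * xi[1] + b) > 0.5)

print([predict(xi) for xi in X])
```

On this tiny separable example the fitted model recovers the labels; in practice, significance testing of each metric's coefficient (as in the paper's binary analysis) would decide which metrics enter the final model.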

