DOI: 10.1145/3127005.3127011

Multi-objective search-based approach to estimate issue resolution time

Published: 08 November 2017

ABSTRACT

Background: Resolving issues is central to modern agile software development, where software is developed and evolved incrementally through a series of issue resolutions. An issue may represent a requirement for new functionality, a report of a software bug, or a description of a project task.

Aims: Knowing how long it will take to resolve an issue is thus important to many stakeholders, including end-users, bug reporters, bug triagers, developers, and managers. This paper proposes a multi-objective search-based approach to estimating the time required to resolve an issue.

Methods: Using genetic programming (a meta-heuristic optimization method), we iteratively generate candidate estimation models and search for the model that best estimates issue resolution time. The search is guided simultaneously by two objectives: maximizing the accuracy of the estimation model while minimizing its complexity.
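
The abstract does not spell out the implementation, so the sketch below is only an illustration of the general technique rather than the authors' actual setup: it uses the DEAP library to evolve expression-tree models over a handful of hypothetical issue features, with NSGA-II selection driven by two minimised objectives, mean absolute error on training data and tree size as a proxy for model complexity. The feature count, training data, and evolutionary parameters are all placeholders.

    import operator
    import random

    import numpy as np
    from deap import algorithms, base, creator, gp, tools

    # Two objectives, both minimised: (estimation error, model complexity).
    creator.create("FitnessMulti", base.Fitness, weights=(-1.0, -1.0))
    creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMulti)

    N_FEATURES = 5  # placeholder: number of issue features (e.g. comment count, priority, ...)
    pset = gp.PrimitiveSet("ISSUE", N_FEATURES)
    pset.addPrimitive(operator.add, 2)
    pset.addPrimitive(operator.sub, 2)
    pset.addPrimitive(operator.mul, 2)   # division omitted to avoid divide-by-zero in the sketch
    pset.addEphemeralConstant("rand_const", lambda: random.uniform(-1.0, 1.0))

    toolbox = base.Toolbox()
    toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=3)
    toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr)
    toolbox.register("population", tools.initRepeat, list, toolbox.individual)
    toolbox.register("compile", gp.compile, pset=pset)

    # Placeholder training data: one row of features per issue, resolution time in hours.
    X_train = np.random.rand(200, N_FEATURES)
    y_train = np.random.rand(200) * 100.0

    def evaluate(individual):
        model = toolbox.compile(expr=individual)
        preds = np.array([model(*row) for row in X_train])
        mae = float(np.mean(np.abs(preds - y_train)))  # objective 1: accuracy (lower is better)
        complexity = len(individual)                   # objective 2: number of tree nodes
        return mae, complexity

    toolbox.register("evaluate", evaluate)
    toolbox.register("select", tools.selNSGA2)         # Pareto-based selection over both objectives
    toolbox.register("mate", gp.cxOnePoint)
    toolbox.register("expr_mut", gp.genFull, min_=0, max_=2)
    toolbox.register("mutate", gp.mutUniform, expr=toolbox.expr_mut, pset=pset)

    pop = toolbox.population(n=100)
    pop, _log = algorithms.eaMuPlusLambda(pop, toolbox, mu=100, lambda_=100,
                                          cxpb=0.8, mutpb=0.1, ngen=50, verbose=False)

    # The first Pareto front holds models that trade estimation error against simplicity.
    pareto_front = tools.sortNondominated(pop, k=len(pop), first_front_only=True)[0]
    for ind in pareto_front[:3]:
        print(ind.fitness.values, str(ind))

Under these assumptions, each individual is a readable arithmetic expression over the issue features, so keeping tree size as a second objective directly rewards simpler, easier-to-inspect estimation models.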

Results: Our evaluation on 8,260 issues from five large open source projects demonstrates that our approach significantly (p < 0.001) outperforms both the baselines and state-of-the-art techniques.
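
The abstract reports significance but not the specific test used; a common choice for this kind of paired comparison (assumed here, not taken from the paper) is the Wilcoxon signed-rank test applied to the absolute errors that two predictors produce on the same held-out issues, as sketched below with synthetic numbers.

    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(0)

    # Placeholder paired absolute errors (in hours) on the same held-out issues:
    # one array for the evolved GP model, one for a baseline predictor.
    abs_err_gp = rng.gamma(shape=2.0, scale=10.0, size=500)
    abs_err_baseline = abs_err_gp + rng.gamma(shape=2.0, scale=4.0, size=500)

    # Paired, non-parametric comparison of the two error distributions.
    stat, p_value = wilcoxon(abs_err_gp, abs_err_baseline)
    print(f"Wilcoxon signed-rank statistic = {stat:.1f}, p = {p_value:.3g}")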

Conclusions: Evolutionary search-based approaches offer an effective alternative for building estimation models for issue resolution time. Using multiple objectives, one measuring accuracy and the other complexity, helps produce estimation models that are both accurate and simple.


  • Published in

    PROMISE: Proceedings of the 13th International Conference on Predictive Models and Data Analytics in Software Engineering
    November 2017
    120 pages
    ISBN: 9781450353052
    DOI: 10.1145/3127005

    Copyright © 2017 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States




    Acceptance Rates

    PROMISE paper acceptance rate: 12 of 25 submissions, 48%. Overall acceptance rate: 64 of 125 submissions, 51%.
