Collecting product and process measures in software development projects, particularly in education and training environments, is important as a basis for assessing current performance and opportunities for improvement. However, analyzing the collected data manually is challenging because of the expertise required, the lack of benchmarks for comparison, the amount of data to analyze, and the time required to do the analysis. ProcessPAIR is a novel tool for automated performance analysis and improvement recommendation; based on a performance model calibrated with the performance data of many developers, it automatically identifies and ranks potential performance problems and root causes of individual developers. In education and training environments, it increases students’ autonomy and reduces instructors’ effort in grading and feedback. In this article, we present the results of a controlled experiment involving 61 software engineering master students, half of whom used ProcessPAIR in a Personal Software Process (PSP) performance analysis assignment, while the other half used a traditional PSP support tool (Process Dashboard) to perform the same assignment. The results show significant benefits in terms of students’ satisfaction (average score of 4.78 on a 1–5 scale for ProcessPAIR users, against 3.81 for Process Dashboard users), quality of the analysis outcomes (average grade of 88.1 on a 0–100 scale for ProcessPAIR users, against 82.5 for Process Dashboard users), and time required to do the analysis (average of 252 min for ProcessPAIR users, against 262 min for Process Dashboard users, but with much room for improvement).
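The benchmark-based analysis described above can be illustrated with a minimal sketch. This is not ProcessPAIR's actual implementation; it only shows the general idea of calibrating thresholds from many developers' data and ranking an individual's indicators by how far they fall into the problematic tail. All indicator names, data values, and the 67th-percentile cut-off are illustrative assumptions.

```python
from bisect import bisect_left

def percentile_rank(benchmark_sorted, value):
    """Fraction of benchmark values strictly below `value` (0.0 to 1.0)."""
    return bisect_left(benchmark_sorted, value) / len(benchmark_sorted)

def rank_problems(benchmark_data, developer_data, bad_above=0.67):
    """Flag indicators whose percentile in the benchmark exceeds `bad_above`
    (assuming higher values mean worse performance), ranked by severity."""
    problems = []
    for indicator, value in developer_data.items():
        bench = sorted(benchmark_data[indicator])
        pr = percentile_rank(bench, value)
        if pr >= bad_above:
            problems.append((indicator, value, pr))
    # Most severe (deepest into the bad tail) first.
    return sorted(problems, key=lambda p: p[2], reverse=True)

# Made-up benchmark data from "many developers" for two hypothetical
# indicators where higher is worse.
benchmark = {
    "defects_per_kloc": [10, 15, 20, 25, 30, 40, 55, 70, 90, 120],
    "estimation_error_pct": [5, 8, 10, 12, 15, 20, 25, 35, 50, 80],
}
developer = {"defects_per_kloc": 85, "estimation_error_pct": 14}

for name, value, pr in rank_problems(benchmark, developer):
    print(f"{name}: value={value}, percentile={pr:.0%}")
# → defects_per_kloc: value=85, percentile=80%
```

Here only the defect density is flagged (80th percentile), while the estimation error (40th percentile) is within the benchmark's normal range, so the tool's user can focus on the most likely problem first.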
Assisting software engineering students in analyzing their performance in software development
João Pascoal Faria
Springer US
Software Quality Journal
Print ISSN: 0963-9314
Electronic ISSN: 1573-1367