12-05-2018 | Experience Report

Successes, challenges, and rethinking – an industrial investigation on crowdsourced mobile application testing

Authors: Ruizhi Gao, Yabin Wang, Yang Feng, Zhenyu Chen, W. Eric Wong

Published in: Empirical Software Engineering | Issue 2/2019

Abstract

Crowdsourcing – a compound of crowd and outsourcing – is a new paradigm for utilizing the power of crowds of people to facilitate large-scale tasks that are costly or time-consuming with traditional methods. This paradigm offers mobile application companies the possibility of outsourcing their testing activities to crowdsourced testers (crowdtesters) who have various testing facilities and environments, as well as different levels of skill and expertise. With this so-called Crowdsourced Mobile Application Testing (CMAT), some of the well-recognized issues in testing mobile applications, such as the multitude of mobile devices, the fragmentation of device models, the variety of OS versions, and the diversity of testing scenarios, could be mitigated. However, how effective is CMAT in practice? What challenges and issues arise in the process of applying CMAT? How can these issues and challenges be overcome and CMAT be improved? Although CMAT has attracted attention from both academia and industry, these questions have not been addressed or researched in depth through a large-scale, real-life industrial study. Since June 2015, we have worked with Mooctest, Inc., a CMAT intermediary, on testing five real-life Android applications using its CMAT platform, Kikbug. Throughout the process, we have collected 1013 bug reports from 258 crowdtesters and found 247 bugs in total. This paper presents our industrial study thoroughly and gives an insightful analysis of the successes and challenges of applying CMAT.


Footnotes
1
In this paper, we use “bug” and “fault” interchangeably.
 
2
The functionalities of different CMAT platforms may vary. However, the impact on the general CMAT workflow is insignificant.
 
3
You may visit http://mooctest.net/wiki for a test trial with more detailed instructions.
 
4
For the rest of the paper, a “detected bug” is one that is reported and also approved by the customers. A “reported bug” is not necessarily a “detected bug” unless the customers approve it.
 
5
STQA is sponsored by the NSF I/UCRC (Industry/University Cooperative Research Centers Program). You may visit http://paris.utdallas.edu/stqa for more detailed information about STQA.
 
Metadata
Title
Successes, challenges, and rethinking – an industrial investigation on crowdsourced mobile application testing
Authors
Ruizhi Gao
Yabin Wang
Yang Feng
Zhenyu Chen
W. Eric Wong
Publication date
12-05-2018
Publisher
Springer US
Published in
Empirical Software Engineering / Issue 2/2019
Print ISSN: 1382-3256
Electronic ISSN: 1573-7616
DOI
https://doi.org/10.1007/s10664-018-9618-5
