Characterizing mobile apps from a source and test code viewpoint

https://doi.org/10.1016/j.infsof.2018.05.006

Highlights

  • One third of the analyzed apps contain automated tests.

  • Automated tests rely on frameworks for unit testing, mocking, and GUI.

  • We found a medium correlation between project size and presence of tests.

  • Tests for connectivity, GUI, sensors, and multiple configurations are scarce.

  • There is no correlation between automated tests and app popularity.

Abstract

Context: while the mobile computing market has expanded and become critical, the amount and complexity of mobile apps have also increased. To assure reliability, these apps require software engineering methods, mainly verification, validation, and testing. However, mobile app testing is a challenging activity due to the diversity and limitations of mobile devices. Characterizing mobile apps can therefore assist in the definition of more efficient and effective testing approaches.

Objective: this paper aims to identify and quantify specific characteristics of mobile apps so that testers can draw from this knowledge and tailor software testing activities to mobile apps. We investigate the presence of automated tests, adopted frameworks, external connectivity, graphical user interface (GUI) elements, sensors, and different system configurations.

Method: we developed a tool to support the automatic extraction of characteristics from Android apps and conducted an empirical study with a sample of 663 open source mobile apps.

Results: we found that one third of the projects perform automated testing. The frameworks used in these projects fall into three groups: unit testing, GUI testing, and mocking. There is a medium correlation between project size and test presence. Specific features of mobile apps (connectivity, GUI, sensors, and multiple configurations) are present in the projects; however, they are not fully covered by tests.

Conclusion: automated tests are still not developed in a systematic way. Interestingly, measures of app popularity (number of downloads and rating) do not seem to be correlated with the presence of tests. However, the results show that project size and more critical domains are correlated with the existence of automated tests. Although challenges such as connectivity, sensors, and multiple configurations are present in the examined apps, only one tool has been identified to support testing them.

Introduction

Mobile computing has gone mainstream due to the many technological advances over the past decades. According to an Ericsson mobility report, there are 2.6 billion smartphone subscriptions in the world, and this number is expected to hit 6.1 billion by 2020 [1]. Smartphones, along with tablets, e-readers, and wearables, run mobile apps on specific operating systems (OSes), such as Google Android [2], Apple iOS [3], and Windows Phone [4]. Among them, Android was the first platform to be provided as open source, allowing not only the development of mobile apps [5], but also the conduct of empirical studies.

This appealing technology and its massive market have attracted many information technology (IT) professionals to develop software for mobile computing [6], [7]. Mobile apps were initially developed for entertainment purposes, but they have since reached critical domains such as health, finance, and industry [8]. Recent reports have shown that Google Play, the most popular mobile app market for the Android platform, currently offers 2.8 million apps across multiple domains. Similarly, Apple’s App Store, the app market for the iOS platform, has 2.2 million apps available [9].

The mobile app business is booming; thus, one way to gain an advantage over competitors is to deliver reliable apps. Naturally, developers resort to software testing approaches to assure the quality of their products. Gao et al. [10] define mobile app testing as all test activities that, by means of well-defined methods and tools, intend to assure quality in functionalities, behaviors, performance, and services, as well as mobility, connectivity, security, usability, privacy, and interoperability. In particular, approaches and tools have been developed both to automate the execution of test cases created by developers and to automatically generate test cases [11], [12]. In this paper, however, we focus only on approaches that automate the execution of predefined test cases (as opposed to automatically generated ones) in a systematic and formal fashion [13].
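As a concrete illustration of this distinction, the following minimal sketch shows test cases that are predefined by a developer and then executed automatically by a simple runner, which is the style of automation this study focuses on. The function under test and the cases are hypothetical examples, not taken from the paper:

```python
# Illustrative sketch: automated *execution* of predefined test cases,
# as opposed to automatic test *generation*. The function under test and
# the cases below are hypothetical, not taken from the study.
def format_price(cents):
    """Format an integer amount of cents as a dollar string."""
    return "$%d.%02d" % (cents // 100, cents % 100)

# Test cases predefined by the developer: (input, expected output).
CASES = [(199, "$1.99"), (5, "$0.05"), (1000, "$10.00")]

def run_suite():
    """Execute every predefined case; return the list of failures."""
    return [(inp, exp, format_price(inp))
            for inp, exp in CASES
            if format_price(inp) != exp]

if __name__ == "__main__":
    failures = run_suite()
    print("PASS" if not failures else failures)  # → PASS
```

In practice, Android projects delegate this runner role to frameworks such as JUnit, and a continuous integration job re-executes the predefined suite on every build.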

Mobile app development is relatively new, so researchers and practitioners have come up with different approaches for dealing with the testing challenges posed by mobile apps [7], [8], [14]. For instance, researchers have been investigating how to test native apps that use specific features of the devices (e.g., camera, sensors, accelerometers, and geolocation) [8], [15], [16], [17], [18]. However, there is a lack of studies that characterize mobile apps with respect to the challenges they present to the testing activity. The literature also lacks studies reporting whether and how such challenges have been addressed in mobile development projects.

In this context, we set out to identify and quantify the characteristics of mobile apps that are relevant from a software testing viewpoint. In particular, we conducted an empirical study with 663 open source projects, aiming to extract information about the presence of automated tests, the frameworks adopted, the presence of testing challenges (namely, rich GUIs, sensors, connectivity, and multiple configurations), and whether these challenges are tested. Moreover, we correlated the presence of tests with project size, category, and popularity measures (i.e., number of downloads and rating). We automated the data extraction process by developing a static analysis tool able to extract the aforementioned information from Android projects.
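The extraction step can be pictured with a small static-analysis sketch. This is not the authors' tool; the framework import prefixes and the heuristic itself are assumptions made purely for illustration:

```python
"""Illustrative sketch (not the study's actual tool): detecting evidence
of automated tests and testing frameworks in an Android project by
statically scanning its source tree. The import prefixes below are
assumed markers for the three framework groups the study reports."""
import os
import re

FRAMEWORK_MARKERS = {
    "unit": ["org.junit"],                               # unit testing
    "gui": ["android.support.test.espresso", "com.robotium"],  # GUI testing
    "mock": ["org.mockito"],                             # mocking
}

def scan_project(root):
    """Return the set of framework groups imported anywhere under root."""
    found = set()
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(".java"):
                continue
            with open(os.path.join(dirpath, name), encoding="utf-8") as fh:
                source = fh.read()
            for group, prefixes in FRAMEWORK_MARKERS.items():
                if any(re.search(r"import\s+" + re.escape(p), source)
                       for p in prefixes):
                    found.add(group)
    return found

if __name__ == "__main__":
    import tempfile
    # Build a toy project containing one JUnit + Mockito test file.
    with tempfile.TemporaryDirectory() as tmp:
        test_dir = os.path.join(tmp, "app", "src", "test")
        os.makedirs(test_dir)
        with open(os.path.join(test_dir, "FooTest.java"), "w") as fh:
            fh.write("import org.junit.Test;\nimport org.mockito.Mockito;\n")
        print(sorted(scan_project(tmp)))  # → ['mock', 'unit']
```

Import scanning is only a heuristic: it detects that a framework is referenced, not how thoroughly it is used, which is why such a tool must be complemented by the size and coverage measures discussed in the study.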

Preliminary results of this paper were published in Silva et al. [19]. The improvements made to this paper are fourfold: (i) we adopted a much larger sample of apps (in the previous version of this study, only 19 mobile apps were taken into account); (ii) we revisited and refined the research questions; (iii) we developed a tool that automates the process of extracting the information that is relevant to answer our research questions; and (iv) we provide a comprehensive analysis and discussion of the results, which takes into account a larger sample comprising 663 apps.

The remainder of this paper is organized as follows. Section 2 covers background and challenges related to mobile app testing. Section 3 describes the setup of our empirical study. Section 4 presents an analysis of results and Section 5 further discusses our results. Section 6 outlines related work. Section 7 presents the conclusion and outlines future work.

Section snippets

Background

Mobile computing is the manipulation of portable devices through mobile apps to exchange information regardless of their physical location [20]. Thus, a mobile app is any software developed to run on mobile devices [21], [22]. Currently, such apps run on specific platforms, such as Android, Apple iOS, or Windows Phone. The Android platform is an open source, layered software environment based on the Linux kernel [5]. Android was primarily designed for smartphones,

Study setting

We devised an empirical study to characterize mobile apps from a software testing viewpoint. Table 1 shows the research questions (RQs) we set out to answer in this paper.

The rationale behind the first RQ is to probe whether app projects present any evidence of test automation (RQ1), how extensive the test cases are (RQ1.1), and which testing frameworks are most popular (RQ1.2). Additionally, we set out to investigate whether there is any correlation between the presence of test automation and

Analysis of results

This section presents the results we obtained from analyzing MA663 (Section 3.1), a sample of 663 open source apps available to end users in Google Play. Initially, we characterize MA663 in terms of size and GitHub-related metrics; then we present the analysis performed to answer the RQs.

Discussion

This section discusses the results of our investigation and shows how they can guide potential research directions for both researchers and practitioners. The discussion is split into three parts: the test automation culture among app developers; the correlation between test automation and several metrics; and the testing challenges.
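The kind of correlation analysis referred to here can be sketched as follows. This is an illustrative Spearman rank-correlation computation on synthetic numbers; it is not the study's data, nor necessarily its statistical tooling:

```python
# Illustrative sketch: Spearman rank correlation between a project-size
# metric (e.g., KLOC) and the presence of tests. Numbers are synthetic.
def _ranks(values):
    """Rank values starting at 1, averaging the ranks of ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Pearson correlation applied to the rank vectors of xs and ys."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

if __name__ == "__main__":
    kloc = [2, 5, 8, 12, 40, 75]    # synthetic project sizes
    has_tests = [0, 0, 1, 0, 1, 1]  # 1 = automated tests present
    print(round(spearman(kloc, has_tests), 2))  # → 0.68
```

Because the test-presence variable is binary, Spearman's ρ here behaves like a rank-biserial measure; a value in this range would be read as a medium correlation, matching the paper's finding for project size.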

Related work

As previous research indicated, developing software that runs on mobile devices is challenging because of a number of technical issues: wide variety of display sizes, low-processing power, short battery life, and wireless communications. As pointed out by Wasserman [22], while many “classic” software engineering techniques can be easily transferred to the mobile app domain, there are still many areas for research and development. Despite the ubiquity of mobile devices among end-users, there is

Concluding remarks

In this paper, we conducted an exploratory study with 663 open source Android applications. The main goal was to shed light on how automated tests have been implemented in mobile apps developed by open source communities. We analyzed the testing approaches and frameworks adopted, as well as the relation between elements of production and test code. Finally, we investigated whether the testing of the following challenges has been automated: connectivity, rich GUIs, limited resources, sensors,

Acknowledgments

Andre T. Endo was partially financially supported by CNPq/Brazil (Grant number 445958/2014-6). Marcelo M. Eler is partially financially supported by FAPESP/Brazil (Grant number 2014/08713-9). The authors are grateful to the anonymous reviewers for their useful comments and suggestions.

References (41)

  • Ericsson, Ericsson mobility report, 2016, ...
  • Android, 2017, ...
  • Apple, 2017, ...
  • W. Phone, 2017, ...
  • W.F. Ableson, Android in Action (2012)
  • P. Bhattacharya et al., An empirical analysis of bug reports and bug fixing in open source Android apps, 2011 15th European Conference on Software Maintenance and Reengineering (2013)
  • M.E. Joorabchi et al., Real challenges in mobile app development, ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM) (2013)
  • H. Muccini, A. Di Francesco, P. Esposito, Software testing of mobile applications: challenges and future...
  • Statista, Number of apps available in leading app stores as of May 2017, 2017, ...
  • J. Gao et al., Mobile application testing: a tutorial, Computer (2014)
  • S.R. Choudhary et al., Automated test input generation for Android: are we there yet? (E), Proceedings of the 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE) (2015)
  • K. Mao et al., Sapienz: multi-objective automated testing for Android applications, Proceedings of the 25th International Symposium on Software Testing and Analysis (ISSTA) (2016)
  • C.S. Jensen et al., Automated testing with targeted event sequence generation, Proceedings of the 2013 International Symposium on Software Testing and Analysis (ISSTA) (2013)
  • P.S. Kochhar et al., Understanding the test automation culture of app developers, 2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST) (2015)
  • G. Chen et al., A Survey of Context-Aware Mobile Computing Research, Technical Report, Hanover, NH, USA (2000)
  • IDC, Voice of the next-generation mobile developer, Appcelerator/IDC Q3 2012 mobile developer report, 2017, ...
  • M. Satyanarayanan, Fundamental challenges in mobile computing, Proceedings of the Fifteenth Annual ACM Symposium on Principles of Distributed Computing (1996)
  • B. Schilit et al., Context-aware computing applications, Proceedings of the 1994 First Workshop on Mobile Computing Systems and Applications (1994)
  • D.B. Silva et al., An analysis of automated tests for mobile Android applications, 2016 XLII Latin American Computing Conference (CLEI) (2016)
  • J. Jing et al., Client-server computing in mobile environments, ACM Comput. Surv. (1999)