DOI: 10.1145/2594368.2594377

Automatic and scalable fault detection for mobile applications

Published: 02 June 2014

ABSTRACT

This paper describes the design, implementation, and evaluation of VanarSena, an automated fault finder for mobile applications ("apps"). The techniques in VanarSena are driven by a study of 25 million real-world crash reports of Windows Phone apps reported in 2012. Our analysis indicates that a modest number of root causes are responsible for many observed failures, but that they occur in a wide range of places in an app, requiring a wide coverage of possible execution paths. VanarSena adopts a "greybox" testing method, instrumenting the app binary to achieve both coverage and speed. VanarSena runs on cloud servers: the developer uploads the app binary; VanarSena then runs several app "monkeys" in parallel to emulate user, network, and sensor data behavior, returning a detailed report of crashes and failures. We have tested VanarSena with 3000 apps from the Windows Phone store, finding that 1108 of them had failures; VanarSena uncovered 2969 distinct bugs in existing apps, including 1227 that were not previously reported. Because we anticipate VanarSena being used in regular regression tests, testing speed is important. VanarSena uses two techniques to improve speed. First, it uses a "hit testing" method to quickly emulate an app by identifying which user interface controls map to the same execution handlers in the code. Second, it generates a ProcessingCompleted event to accurately determine when to start the next interaction. These features are key benefits of VanarSena's greybox philosophy.
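
The sketch below illustrates the two speed techniques named in the abstract; it is a minimal Python sketch, not VanarSena's implementation (which instruments .NET app binaries), and all names (Monkey, page.controls, page.resolve_handler, control.tap, wait_for_processing_completed) are hypothetical placeholders. The idea: a monkey groups the controls on a page by the event handler they would invoke ("hit testing") and interacts with only one representative per handler, then waits for a processing-completed signal from the instrumented app instead of a fixed timeout before the next interaction.

```python
# Minimal sketch, assuming a hypothetical instrumented-app interface;
# not VanarSena's actual API.
from collections import defaultdict


class Monkey:
    def __init__(self, app):
        self.app = app  # handle to the instrumented app under test (assumed)

    def explore_page(self, page):
        # Hit testing: resolve each UI control to the handler it would
        # invoke, then keep one representative control per distinct handler.
        by_handler = defaultdict(list)
        for control in page.controls():              # assumed enumeration hook
            handler = page.resolve_handler(control)  # assumed instrumentation hook
            by_handler[handler].append(control)

        for handler, controls in by_handler.items():
            controls[0].tap()  # interact with one representative control
            # Wait until the instrumented app signals that all work triggered
            # by this interaction (async calls, UI updates) has completed,
            # rather than sleeping for a fixed interval.
            self.app.wait_for_processing_completed()
```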


      Published in

      MobiSys '14: Proceedings of the 12th annual international conference on Mobile systems, applications, and services
      June 2014
      410 pages
      ISBN: 9781450327930
      DOI: 10.1145/2594368
      Copyright © 2014 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Acceptance Rates

      MobiSys '14 paper acceptance rate: 25 of 185 submissions, 14%. Overall acceptance rate: 274 of 1,679 submissions, 16%.
