
The AI-Based Cyber Threat Landscape: A Survey

Published: 06 February 2020

Abstract

Recent advancements in artificial intelligence (AI) technologies have induced tremendous growth in innovation and automation. Although these AI technologies offer significant benefits, they can also be used maliciously. Highly targeted and evasive attacks in benign carrier applications, such as DeepLocker, have demonstrated the intentional use of AI for harmful purposes. Threat actors are constantly changing and improving their attack strategies, with particular emphasis on applying AI-driven techniques in the attack process, known as AI-based cyber attacks, which can be used in conjunction with conventional attack techniques to cause greater damage. Despite several studies on AI and security, researchers have not yet summarized AI-based cyber attacks thoroughly enough to understand the adversary’s actions and to develop proper defenses against them. This study explores existing work on AI-based cyber attacks and maps it onto a proposed framework, providing insight into new threats. Our framework classifies several aspects of malicious uses of AI during the cyber attack life cycle and provides a basis for their detection to predict future threats. We also explain how to apply this framework to analyze AI-based cyber attacks in a hypothetical scenario of a critical smart grid infrastructure.
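
The framework summarized above maps malicious uses of AI onto phases of the cyber attack life cycle. As a rough, hypothetical illustration of that kind of mapping (not the survey's actual framework or data), the Python sketch below catalogs a few AI-driven techniques by an assumed life-cycle phase; the phase names, technique entries, and the techniques_for_phase helper are assumptions introduced here for illustration only.

# Illustrative sketch only: hypothetical kill-chain-style phase names and
# technique entries; not the survey's actual framework or data.
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class AttackTechnique:
    name: str       # short label for the malicious technique
    ai_method: str  # class of AI method reportedly involved
    phase: str      # attack life-cycle phase the technique supports


# Hypothetical catalog of AI-based techniques, keyed by life-cycle phase.
CATALOG: List[AttackTechnique] = [
    AttackTechnique("automated spear phishing", "text generation", "delivery"),
    AttackTechnique("ML-guided password guessing", "generative model", "exploitation"),
    AttackTechnique("adversarial domain generation", "GAN", "command and control"),
    AttackTechnique("target-conditioned payload unlocking", "deep neural network", "actions on objectives"),
]


def techniques_for_phase(phase: str) -> List[AttackTechnique]:
    """Return the cataloged AI-based techniques supporting one phase."""
    return [t for t in CATALOG if t.phase == phase]


if __name__ == "__main__":
    for phase in ("delivery", "command and control"):
        for t in techniques_for_phase(phase):
            print(f"{phase}: {t.name} ({t.ai_method})")

Grouping techniques by phase in this way is one simple means of reasoning about where along the attack life cycle a given AI capability could be detected or disrupted.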



    • Published in

      ACM Computing Surveys, Volume 53, Issue 1 (January 2021), 781 pages
      ISSN: 0360-0300
      EISSN: 1557-7341
      DOI: 10.1145/3382040

      Copyright © 2020 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 6 February 2020
      • Accepted: 1 November 2019
      • Revised: 1 October 2019
      • Received: 1 May 2019


      Qualifiers

      • Survey
      • Refereed
