ABSTRACT
In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have been made to study the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored. We consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV. We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process. Thus, we then explore the possibility of strategically controlling the spoofed attack to fool the machine learning model. We formulate this task as an optimization problem and design modeling methods for the input perturbation function and the objective function. We also identify the inherent limitations of directly solving the problem using optimization and design an algorithm that combines optimization and global sampling, which improves the attack success rates to around 75%. As a case study to understand the attack impact at the AV driving decision level, we construct and evaluate two attack scenarios that may damage road safety and mobility. We also discuss defense directions at the AV system, sensor, and machine learning model levels.
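The abstract describes an algorithm that combines local optimization with global sampling: because the spoofing objective is highly non-convex, purely local optimization can get stuck in poor local optima, so candidate starting points are drawn across the whole search space and each is refined locally. The sketch below illustrates that general strategy only; `confidence` is a hypothetical one-dimensional stand-in for the detector's confidence in a spoofed obstacle (the paper's actual objective is defined over LiDAR point-cloud perturbations), and the hill-climbing local search is a placeholder for the paper's optimization step.

```python
import math
import random

def confidence(theta):
    """Toy stand-in for the detector's confidence that a spoofed
    obstacle exists, as a function of one spoofing parameter theta.
    Deliberately multimodal: local search alone can get stuck."""
    return math.exp(-(theta - 3.0) ** 2) + 0.3 * math.sin(8 * theta)

def local_search(f, theta, step=0.05, iters=200):
    """Simple hill climbing: a placeholder for the local
    optimization step on the (non-differentiable) objective."""
    best, best_val = theta, f(theta)
    for _ in range(iters):
        cand = best + random.uniform(-step, step)
        val = f(cand)
        if val > best_val:
            best, best_val = cand, val
    return best, best_val

def attack(f, n_samples=50, lo=-5.0, hi=5.0, seed=0):
    """Global sampling + local optimization: draw random starting
    points across the search space, refine each locally, keep the best."""
    random.seed(seed)
    best, best_val = None, float("-inf")
    for _ in range(n_samples):
        theta0 = random.uniform(lo, hi)
        theta, val = local_search(f, theta0)
        if val > best_val:
            best, best_val = theta, val
    return best, best_val

theta, val = attack(confidence)
```

With enough global samples, at least one starting point lands in the basin of the best peak, so the combined method reliably outperforms a single local search from a random start; this is the same intuition behind standard global-optimization schemes such as basin hopping.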
Index Terms
- Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving