
AutoDriving CTF at DEF CON 29



    Autonomous Driving CTF (AutoDriving CTF)


    AutoDriving CTF Overview

    We propose a CTF contest focusing on the emerging security challenges in autonomous driving systems. Various levels of self-driving functionality, such as AI-powered perception, sensor fusion, and route planning, are entering the product portfolios of automobile companies. From a security perspective, these AI-powered components not only contain common security problems such as memory safety bugs, but also introduce new threats such as physical adversarial attacks and sensor manipulation. Two popular examples of physical adversarial attacks are camouflage stickers that interfere with vehicle detection systems and road graffiti that disturbs lane-keeping systems. AI-powered navigation and control rely on the fusion of multiple sensor inputs, many of which can be manipulated by malicious attackers. These manipulations, combined with logical bugs in autonomous driving systems, pose severe threats to road safety.

    We design the Autonomous Driving CTF (AutoDriving CTF) contest around the security challenges specific to these self-driving functions and components.
    The goals of the AutoDriving CTF are the following:
    1. Demonstrate security risks of poorly designed autonomous driving systems through hands-on challenges, raise awareness of such risks among security professionals, and encourage them to propose defense solutions and tools to detect such risks.
    2. Provide CTF challenges that allow players to learn attack and defense practices related to autonomous driving in a well-controlled, repeatable, and visible environment.
    3. Build a set of vulnerable autonomous driving components that can be used for security research and defense evaluation.
    The contest is a Jeopardy-style CTF game with a set of independent challenges.

    A typical contest challenge includes a backend that runs autonomous driving components in simulated or real environments, and a frontend that interacts with the players. For example, a lane-keeping challenge provides an interface for players to submit "graffiti" images; the contest organizer runs the evaluation at the backend and returns the simulation result. The result indicates attack success or failure (e.g., whether the autonomous vehicle was diverted into the wrong lane) along with a video of the resulting driving behavior in the simulator.
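
    To make this flow concrete, the sketch below shows what a player-side submission script might look like, in Python. The endpoint URL, form fields, and response keys are hypothetical placeholders for illustration, not the actual contest API.

        import requests

        # Hypothetical endpoint and field names -- illustrative only,
        # not the actual contest API.
        SUBMIT_URL = "https://autodriving-ctf.example.org/api/submit"

        def submit_patch(challenge_id, image_path, token):
            """Upload a graffiti/patch image and return the backend verdict."""
            with open(image_path, "rb") as f:
                resp = requests.post(
                    SUBMIT_URL,
                    data={"challenge": challenge_id, "token": token},
                    files={"patch": f},
                    timeout=60,
                )
            resp.raise_for_status()
            result = resp.json()
            # Assumed response fields: a success flag and a replay video URL.
            print("attack success:", result.get("success"))
            print("simulation replay:", result.get("video_url"))
            return result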

    The types of challenges include:
    • “attack”: such as constructing adversarial patches and spoofing fake sensor inputs
    • “forensics”: such as investigating a security incident related to autonomous driving
    • “detection”: such as detecting spoofed sensor inputs and fake obstacles
    • “crashme on road!”: such as creating dangerous traffic patterns to expose logical errors in autonomous driving systems

    We plan to develop the majority of challenges using game-engine-based autonomous driving simulators, such as LGSVL and CARLA. We also plan to introduce a few challenges involving real vehicles equipped with open-source autonomous driving software, such as Apollo, Autoware, and OpenPilot.
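
    For readers unfamiliar with these simulators, here is a minimal sketch using the CARLA Python API that connects to a running simulator and spawns a vehicle; challenge backends build scenario logic and scoring on top of this kind of setup (LGSVL exposes a comparable Python API). The vehicle blueprint and spawn point are arbitrary choices.

        import carla

        # Connect to a CARLA server already running on the default port.
        client = carla.Client("localhost", 2000)
        client.set_timeout(10.0)
        world = client.get_world()

        # Spawn a vehicle at one of the map's predefined spawn points.
        blueprint = world.get_blueprint_library().find("vehicle.tesla.model3")
        spawn_point = world.get_map().get_spawn_points()[0]
        vehicle = world.spawn_actor(blueprint, spawn_point)

        try:
            # Hand control to the built-in autopilot; a challenge backend
            # would instead run the AD stack under test and score its behavior.
            vehicle.set_autopilot(True)
            for _ in range(100):
                world.wait_for_tick()
        finally:
            vehicle.destroy()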

    The skills required to participate in the AutoDriving CTF vary depending on the challenge.
    Challenges such as adversarial patch attacks do not require any knowledge of autonomous driving software or adversarial machine learning, although such knowledge definitely helps. Other challenges, such as incident forensics and sensor spoofing, will likely require players to learn domain knowledge such as sensor data formats and how sensor fusion works. More information about the challenges and examples of past challenges are provided later in this document.


    The AutoDriving CTF focuses on the emerging issues introduced by these AI components in the autonomous driving context. Even though adversarial attacks are popular in academic research, most of that work has not been applied in physical environments. In addition, assessing the effect of adversarial attacks on vehicles requires considering how autonomous driving systems use AI-based perception and how driving decisions are made. AutoDriving CTF therefore designs challenges that integrate AI perception with driving software, since the effectiveness of physical adversarial attacks should be evaluated in an integrated way.

    Even for well-explored security topics, such as GPS spoofing and logical errors, AutoDriving CTF provides concrete scenarios that show the domain constraints related to such risks and the (sometimes) unexpectedly severe results of combining conventionally weak attacks.
    AutoDriving-CTF Challenge Format


    All challenges will be in Jeopardy style, with a challenge description and a site for players to submit answers. A submission can be an image, a program, or a hash, depending on the challenge. For most challenges, answer verification requires the organizers to run the submission against autonomous driving systems, either in a driving simulator or on a real vehicle.

    1 Challenges involving AutoDriving Simulators
    The first format type will involve autonomous driving simulators, such as LGSVL and CARLA, which allow integrated evaluation of software-level attacks/defenses on end-to-end driving behaviors.

    Specifically, we will design security-critical driving scenarios in AutoDriving simulators, e.g., keeping in the lane or safely passing an intersection. We will expose simulator interfaces that allow players to perturb the driving environment with various types of inputs.

    Depending on the challenge, the player inputs can be malicious traffic patterns, spoofed sensor inputs, road signs, or even graffiti. To score points, the player needs to cause the AD vehicle to crash into other road objects or to violate traffic rules.
    A few examples of these challenges will be provided in the next section.

    2 Challenges with Real Vehicles in the Loop
    The second challenge format will involve real vehicles, which can most realistically evaluate and showcase AD attacks in the physical world. In this setup, the vehicle under attack will be placed on a rack and the driving environment will be displayed on a monitor in front of the windshield camera. Players will interact with the vehicle by remotely manipulating the virtual surrounding environment (such as the projected road signs in front of the vehicle). The attack results will be judged based on system logs (for open-source systems, such as openpilot) or dashboard visualizations (for closed-source vehicles).

    Due to the COVID-19 pandemic, the real-vehicle challenges will likely run in a lab, with players interacting with the vehicles indirectly over the network. Also, considering the effort required to set up and judge such challenges, this year we plan to mainly adopt the simulator-based challenges.

    3 Challenges in the form of conventional software security CTF
    AutoDriving CTF will still have a small set of challenges in the form of software exploitation and forensics specific to Autonomous Driving (AD) software. Specifically, for the AD software exploitation challenges, we will develop vulnerable AD software components and ask the players to exploit software vulnerabilities to obtain a flag file. For the AD forensics challenges, we will provide execution logs related to an incident caused by an AD software defect and ask the players to diagnose the root cause of the incident.


    AutoDriving-CTF Challenge Genres


    Physical Adversarial Attacks
    This category covers adversarial attacks affecting autonomous driving vehicles. Although adversarial AI challenges have appeared in other CTF competitions as well, in this AutoDriving-CTF we design the challenges to be closely integrated with the autonomous driving context. The attack targets are deep neural network models for various driving-related tasks such as obstacle/lane/traffic light detection, trajectory prediction, etc. Players will be asked to generate adversarial perturbations to the model inputs, in the form of a malicious sticker or road sign graffiti, to cause false detections or incorrect predictions.
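
    As a minimal sketch of the optimization behind this genre, the PyTorch loop below performs gradient descent on a patch pasted into an image so as to suppress a classifier's score for a target class. The model, patch location, and class index are placeholders; a real challenge would target a detection model and add physical-world robustness constraints (e.g., expectation over transformations).

        import torch

        def optimize_patch(model, image, target_class,
                           top=50, left=50, size=64, steps=200, lr=0.01):
            """Gradient-descend a square patch pasted at (top, left) so the
            model's logit for target_class drops. `model` is any
            differentiable classifier mapping (1, 3, H, W) -> logits;
            `image` is a (3, H, W) float tensor in [0, 1]."""
            model.eval()
            patch = torch.rand(3, size, size, requires_grad=True)
            opt = torch.optim.Adam([patch], lr=lr)
            for _ in range(steps):
                x = image.clone()
                x[:, top:top + size, left:left + size] = patch  # paste patch
                logits = model(x.unsqueeze(0))
                loss = logits[0, target_class]   # minimize the target score
                opt.zero_grad()
                loss.backward()
                opt.step()
                patch.data.clamp_(0.0, 1.0)      # keep the patch a valid image
            return patch.detach()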

    Autonomous Driving Forensics
    For safety-critical systems such as AD systems, any accident that occurs during lab or road testing should be taken seriously and scrutinized carefully to understand its root cause. Thus, we design the forensics challenge genre, which asks players to discover specific software defects from the sensor traces and execution logs of malfunctioning AD testing runs.

    CrashMe, On Road!
    Autonomous Driving Vehicles (AVs) in the real world not only need to follow basic traffic rules such as lane keeping and traffic lights, but also need to react to the dynamic traffic around them. To address this, AD systems predict the future driving trajectories of the surrounding vehicles and calculate a safe driving plan based on those predicted trajectories. In this challenge genre, we provide simulation interfaces for the players to control other vehicles (NPCs) driving alongside the AV. The goal is to cause the AV to predict an incorrect traffic trajectory such that it reacts and crashes into other vehicles or the road curb. To prevent direct crashes initiated by the NPC, we set constraints such that the NPC's actual driving trajectory must not directly collide with the AV.
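
    A player-side script for this genre might look like the following sketch. The sim/NPC interface is hypothetical, standing in for whatever waypoint API the challenge exposes; the geometry simply illustrates a cut-in that squeezes the AV without violating the no-contact rule.

        # Hypothetical challenge interface: the backend exposes an NPC that
        # follows (x, y, speed) waypoints while the AV drives alongside.
        def build_cutin_trajectory(av_lane_y=0.0, npc_lane_y=3.5):
            """Gradual lane change ending 1.8 m from the AV's lane center:
            close enough to trigger a harsh reaction, but never touching
            the AV's own path (per the no-direct-collision constraint)."""
            waypoints = []
            for i in range(40):
                x = i * 2.0                              # advance 2 m per step
                frac = min(1.0, max(0.0, (i - 10) / 15.0))
                y = npc_lane_y + frac * ((av_lane_y + 1.8) - npc_lane_y)
                waypoints.append((x, y, 10.0))           # 10 m/s target speed
            return waypoints

        # npc = sim.spawn_npc("sedan")                   # hypothetical API
        # npc.follow(build_cutin_trajectory())           # hypothetical API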

    Sensor spoofing for Autonomous Driving
    Autonomous driving systems make safety-critical driving decisions based on real-time feedback from the surrounding environment. As the communication channel between the AD system and the physical world, sensor security has been an important topic in the AD security research community. In the past few years, almost all sensors available on AVs have been demonstrated to be vulnerable to spoofing attacks, including GPS, LiDAR, IMU, ultrasonic sensors, etc. As a mitigation, many AD components are designed to be robust against spoofing attacks by using sensor fusion. Thus, in this challenge category, we allow players to manipulate the sensor inputs to the AD system, aiming to disrupt the fusion process and cause the AV to exhibit unsafe driving behaviors such as crossing lane boundaries.

    Exploitation of Autonomous Driving Software
    We plan to include software exploitation as a genre in the AutoDriving CTF for completeness of coverage. Binary exploitation challenges are similar to those in traditional CTF competitions; however, the programs used in the challenges will be developed to mimic components in Autonomous Driving (AD) systems. To prepare the challenges, we will extract domain-specific code logic from AD systems, such as a GPS data parser, and embed vulnerabilities that can be exploited to reveal the content of the flag.
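
    To give a flavor of the planted bugs, the Python mock-up below sketches the kind of domain-specific parser such a challenge could be built around: it computes the standard NMEA XOR checksum for GPS sentences but, due to a logic slip, accepts sentences whose checksum field is missing entirely. Actual challenges would ship as compiled binaries with memory-safety bugs; this sketch only illustrates the genre.

        def nmea_checksum(payload):
            """XOR of all characters between '$' and '*' (standard NMEA)."""
            csum = 0
            for ch in payload:
                csum ^= ord(ch)
            return csum

        def parse_fix(sentence):
            """Parse a $GPGGA sentence into a position fix.
            BUG (planted): a sentence with no '*checksum' suffix skips
            verification entirely, so forged fixes slip through."""
            if not sentence.startswith("$"):
                return None
            body, star, given = sentence[1:].partition("*")
            if star and nmea_checksum(body) != int(given, 16):
                return None            # bad checksum: rejected
            # ...but a missing '*' falls through: accepted unverified!
            fields = body.split(",")
            if fields[0] != "GPGGA":
                return None
            return {"time": fields[1], "lat": fields[2], "lon": fields[4]}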


    Challenge Examples


    Road Graffiti and Lane Detection Challenge
    Lane detection is an essential component in low-autonomy AD systems such as OpenPilot and Tesla Autopilot, which steer the vehicle to keep it centered in the lane based on the detected lane line shapes. Such lane detectors predominantly use DNNs, which are naturally vulnerable to adversarial perturbations on the road surface. Thus, we simulate a scenario where the AV drives on a straight road and performs lateral control based on the lane detection outputs. To enable perturbation, we set aside an area on the simulated road for the players to place their generated adversarial patches. A successful attack requires the patch to change the detected lane line shapes in a certain number of camera frames and cause the AV to drive out of the lane boundaries. The difficulty can be raised by increasing the number of frames required for the attack as well as by adopting a black-box attack setting. As a unique feature of such lane-detection-based steering systems, the adversarial patch must consistently shift the lane line shapes in a single direction; otherwise the AV would simply drive in a zig-zag motion instead of deviating to one side of the lane.

    A video demonstrating the benign and attacked scenarios is available at
    https://drive.google.com/file/d/1-NB...ew?usp=sharing

    We plan to set up this challenge in the real-vehicle-in-the-loop setting. Specifically, the vehicle will be equipped with an open-source lane centering system, OpenPilot, and we will display the road video with the adversarial patch in front of the vehicle's windshield camera so that OpenPilot can generate real-time steering commands based on the lane lines. Since the vehicle is fixed on a rack, we will intercept the steering commands issued by OpenPilot and feed them into a vehicle motion model to calculate the lateral deviation caused by the adversarial patch.
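
    The motion model here can be as simple as a kinematic bicycle model. The sketch below shows how a sequence of intercepted steering angles could be integrated into a lateral deviation; the speed, timestep, and wheelbase values are assumptions for illustration.

        import math

        def lateral_deviation(steering_angles, v=20.0, dt=0.05, wheelbase=2.7):
            """Integrate a kinematic bicycle model over a sequence of steering
            angles (rad) and return the final lateral offset (m) from a
            straight road along the x-axis. v, dt, and wheelbase are assumed."""
            x = y = yaw = 0.0
            for steer in steering_angles:
                x += v * math.cos(yaw) * dt
                y += v * math.sin(yaw) * dt
                yaw += v / wheelbase * math.tan(steer) * dt
            return y  # metres off the lane centerline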

    The White Truck & Adversarial Patch
    This challenge is based on real-world AV incidents that happened to multiple Tesla cars in recent years. The cause of those incidents was that Tesla's Autopilot system failed to recognize obstacles, e.g., a white truck, and make the correct driving decision to stop or yield. In this challenge, we reproduce such a driving scenario in a simulation environment, but under an adversarial setting. Specifically, the AV's driving trajectory is blocked by a white truck, and its longitudinal control is decided based on the object detection outcome. During the contest, players are asked to generate adversarial images, which will be placed on the truck to make it undetectable by widely used DNN-based object detectors.

    For each submission, the challenge backend runs a simple autonomous driving system with the DNN object detector. A video of the driving simulation is produced for each submission, visually showing whether the autonomous driving car collides with the truck.
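
    On the judging side, the detection check could be as simple as the sketch below, which uses an off-the-shelf torchvision detector as a stand-in; the detector actually used in the challenge and its scoring thresholds may differ.

        import torch
        from torchvision.models.detection import fasterrcnn_resnet50_fpn

        TRUCK_CLASS = 8  # 'truck' in the COCO label map used by torchvision

        def truck_detected(image_tensor, score_threshold=0.5):
            """image_tensor: (3, H, W) float tensor in [0, 1] for one frame.
            A submission succeeds if this returns False on the attack frames."""
            model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
            model.eval()
            with torch.no_grad():
                pred = model([image_tensor])[0]
            hits = (pred["labels"] == TRUCK_CLASS) & (pred["scores"] > score_threshold)
            return bool(hits.any())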

    The following URL shows a video about this challenge.
    https://drive.google.com/file/d/1uR6...ew?usp=sharing

    AutoDriving Accident Forensics by Diagnosing LiDAR Attack Traces
    Despite the controversy among AV companies about whether AVs can be safe enough without expensive LiDAR sensors, LiDARs can significantly improve both AD perception and localization performance, as they provide environmental data in 3D form (called a point cloud) with much higher resolution and accuracy than other sensors such as cameras and RADAR. Thus, most AV companies today, especially those aiming for high-level autonomy such as Waymo, choose to rely heavily on them in production AD systems to best ensure safety. However, such high reliance also makes LiDAR data highly security/safety critical. For example, if attackers can modify LiDAR inputs at run time via LiDAR driver compromises or external LiDAR spoofing attacks (e.g., via laser shooting, as demonstrated by recent works), they can significantly influence the AD object detection and localization results and cause severe damage such as crashes.

    In this challenge, we design a forensics task in which players diagnose an incident related to LiDAR input attacks. The players are provided a trace of LiDAR point cloud frames recorded before the incident and need to identify which frame(s) in the LiDAR point cloud data were under attack. We set the difficulty levels based on the stealthiness of the LiDAR input attack. In the easiest setting, the attacker simply removes all points from a certain frame. In more difficult settings, we either shift the whole point cloud by certain offsets or overlay the existing point cloud with spoofed points. This challenge requires players to have a basic understanding of the point cloud data format and its physical properties. As a hint, we will provide related materials in the challenge description.
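
    For the easiest settings, a first-pass triage script like the numpy sketch below can flag frames whose point count collapses or whose centroid jumps between frames; the thresholds are arbitrary, and the stealthier settings require physical-consistency reasoning that such simple statistics will miss.

        import numpy as np

        def flag_suspicious_frames(frames, count_drop=0.5, centroid_jump=2.0):
            """frames: list of (N_i, 3) arrays, one per LiDAR frame.
            Flags frames whose point count collapses or whose centroid
            shifts abruptly relative to the previous frame. Thresholds
            are illustrative and would be tuned per scenario."""
            suspicious = []
            prev_count = prev_centroid = None
            for i, pts in enumerate(frames):
                count = len(pts)
                centroid = pts.mean(axis=0) if count else np.zeros(3)
                if prev_count:
                    if count < count_drop * prev_count:
                        suspicious.append((i, "point count dropped"))
                    elif np.linalg.norm(centroid - prev_centroid) > centroid_jump:
                        suspicious.append((i, "centroid jumped"))
                prev_count, prev_centroid = count, centroid
            return suspicious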

    GPS Spoofing
    For AD localization, GPS is the de facto positioning sensor. However, GPS has been widely demonstrated to be vulnerable to spoofing attacks, where the attacker sends fake satellite signals to the victim GPS receiver to make it output a falsified position. In this challenge, the AV is maneuvered by a typical control loop in high-autonomy AD systems, which constantly corrects the lateral deviation between the AD localization output and a predefined planned trajectory. To enable GPS spoofing, we expose a simulator API that receives a series of spoofed GPS positions from the player. The attack is considered successful if the spoofed GPS positions lead the AV out of the lane boundaries.

    To increase the attack difficulty, we constrain the spoofed GPS positions to be within a certain radius of the AV's real position. In addition, real-world AD systems often adopt a Multi-Sensor Fusion (MSF) based localization design, which takes not only GPS but also other positioning sensors such as LiDAR and IMU as input, to improve the robustness of localization. In such a case, GPS alone can no longer dictate the localization output. Thus, in this challenge, the highest difficulty level uses GPS spoofing alone to deviate the MSF-based localization results. This requires players to have some domain knowledge about fusion algorithms such as Kalman filters, which will be provided as hints in the challenge description.
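
    For intuition about why MSF resists naive spoofing, the sketch below implements a one-dimensional constant-velocity Kalman filter that fuses noisy GPS positions with a motion-model prediction (a stand-in for IMU propagation): an abrupt spoofed jump in GPS is largely absorbed by the filter, which is why effective attacks must introduce offsets gradually. All noise parameters are illustrative.

        import numpy as np

        def kalman_fuse(gps_positions, dt=0.1, q=0.05, r=4.0):
            """1-D constant-velocity Kalman filter: predict from the motion
            model (stand-in for IMU propagation), then correct with GPS.
            q and r are assumed process/measurement noise variances."""
            x = np.array([gps_positions[0], 0.0])   # state: [position, velocity]
            P = np.eye(2)
            F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
            Q = q * np.eye(2)
            H = np.array([[1.0, 0.0]])              # GPS observes position only
            estimates = []
            for z in gps_positions:
                x = F @ x                            # predict
                P = F @ P @ F.T + Q
                S = H @ P @ H.T + r                  # innovation covariance
                K = P @ H.T / S                      # Kalman gain (2x1)
                x = x + (K * (z - H @ x)).ravel()    # correct with GPS reading
                P = (np.eye(2) - K @ H) @ P
                estimates.append(x[0])
            return estimates

        # e.g. kalman_fuse([0.0] * 50 + [10.0] * 50) shows a sudden 10 m
        # spoofed jump being smoothed out rather than followed immediately.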

    A demo video showing the benign and spoofed localization outputs and the AV driving trajectory is available at
    https://drive.google.com/file/d/1ltr...ew?usp=sharing

  • #2
    The AutoDriving challenge has evolved; please see this post:

    https://forum.defcon.org/node/237724
