Self-driving cars (or autonomous vehicles), like those sold by Tesla, are becoming more and more popular. The goal of self-driving cars is to reduce traffic accidents and fatalities.
These cars use “artificial intelligence systems, which employ machine-learning techniques to collect, analyze and transfer data, in order to make decisions that in conventional cars are taken by humans” (ENISA). Although autonomous technology brings benefits, there are concerns over cybersecurity risks, such as whether it is possible for cybercriminals to “remotely hijack an autonomous car’s electronics with the intent to cause a crash” and thereby hack self-driving cars.
Because of their high level of connectivity, self-driving cars are tempting targets for cybercriminals, who may “attempt to steal financial data from the drivers or launch high-level terrorist attacks by turning vehicles into weapons” (IEEE Innovation at Work).
Artificial Intelligence (AI) systems are what make self-driving cars vulnerable. These AI systems work continuously to “recognize traffic signs and road markings, to detect vehicles, estimate their speed, and to plan the path ahead” (ENISA).
Besides unintentional threats like sudden malfunctions in the AI systems, there are intentional attacks that aim to specifically harm the safety-critical functions of the AI systems.
Examples include painting the road to misguide the navigation system or putting stickers on a stop sign so that it is not recognized. Such alterations can lead the AI systems to misclassify objects, which could make the self-driving car behave in a dangerous way.
Cameras and Light Detection and Ranging (LiDAR), a laser-pulse range-measurement system, form the “eyes of the self-driving vehicle, feeding information about the driving scene and environment into a CNN computer model that makes decisions such as speed adjustment and steering corrections” (The Lighthouse). Unfortunately, the CNN can be easily fooled by “adding small, pixel-level changes to the input images which can’t be seen by the naked eye,” a vulnerability that can allow bad actors to hack self-driving cars (The Lighthouse).
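To make the idea concrete, here is a minimal sketch of such a pixel-level attack against a toy linear classifier (a gradient-sign, FGSM-style perturbation). The weights and the “image” are invented for illustration; a real perception CNN has millions of parameters, but the principle, nudging each pixel slightly in the direction that flips the model’s decision, is the same.

```python
import numpy as np

# Hypothetical model weights and a "clean" 8-pixel input.
w = np.array([0.5, -1.0, 0.8, 0.3, -0.6, 0.9, -0.2, 0.4])   # toy classifier weights
x = np.array([0.2, -0.1, 0.3, 0.1, -0.2, 0.2, 0.0, 0.1])    # clean input image

def score(img):
    # Positive score => classified as "stop sign"; negative => "other".
    return float(w @ img)

# Gradient-sign perturbation: for a linear model, the gradient of the
# score with respect to the input is simply w, so shifting each pixel
# a small amount against sign(w) maximally lowers the score.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(score(x))      # ≈ 0.81, still "stop sign"
print(score(x_adv))  # ≈ -0.13, now misclassified as "other"
```

The per-pixel change is small relative to the input range, yet it is enough to flip the classification, which is exactly why such perturbations can be invisible to the naked eye while still defeating the model.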
The On-Board Diagnostics (OBD) port is one of the most vulnerable parts of a self-driving car; malware can be inserted into the Electronic Control Unit (ECU) via the OBD. The inserted malware can tune and reprogram the ECU. An infected ECU may be unable to communicate with other On-Board Unit (OBU) components such as the LiDAR, camera, and radar, which would compromise the safety of the self-driving car.
These are a few ways to gain control of self-driving cars:
Back in 2019, the then-new Tesla Model 3 was hacked within a few minutes. White-hat hackers Amat Cama and Richard Zhu “exploited a weakness of the ‘infotainment’ system to get inside one of the car’s computers” (physicsworld). Once inside, Cama and Zhu were able to run their own code.
The following are a few examples of attacks that hackers can carry out by targeting other services within the car in order to hack self-driving cars.
In 2011, a Chevy Malibu became the first vehicle attackers were able to control through a remote intrusion. The hackers “manipulated the radio of the vehicle using a Bluetooth stack weakness and inserted the malware codes by syncing their mobile phones with the radio” (Attack on Self-Driving Cars and Their Countermeasures: A Survey). Once successfully inserted, the code could send messages to the ECU of the car and lock the brakes.
Man-In-The-Middle (MiTM) Attack
In a man-in-the-middle attack, the hacker manipulates the communication between two entities and can gain control of the ECU or the infrastructure’s Road Side Unit (RSU) by eavesdropping on, replaying, and modifying the messages sent between them.
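The message-modification part of such an attack, and one common defense, can be sketched in a few lines. Everything below is hypothetical (the message format, the key, the function names); real vehicle buses and V2X protocols differ, but the pattern of an in-path attacker rewriting traffic, and a shared-key message authentication code (MAC) exposing the rewrite, is standard.

```python
import hmac
import hashlib

def attacker(message: bytes) -> bytes:
    # The man-in-the-middle silently rewrites a command in transit.
    return message.replace(b"SPEED_LIMIT=30", b"SPEED_LIMIT=90")

# --- Unauthenticated channel: the tampering goes unnoticed ---
msg = b"SPEED_LIMIT=30"
received = attacker(msg)
print(received)                  # the receiving ECU sees a forged limit

# --- Authenticated channel: an HMAC over the message exposes it ---
key = b"pre-shared-ecu-key"      # hypothetical pre-provisioned secret
tag = hmac.new(key, msg, hashlib.sha256).digest()

received = attacker(msg)
ok = hmac.compare_digest(tag, hmac.new(key, received, hashlib.sha256).digest())
print("authentic" if ok else "tampering detected")
```

Without the secret key, the attacker cannot produce a valid tag for the modified message, so the forgery is detected, though note that a MAC alone does not stop eavesdropping or replay, which need encryption and message counters or timestamps on top.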
Denial of Service (DoS) Attack
DoS is one of the most dangerous attacks that can be launched against self-driving cars; it can lead to serious accidents or death.
Attackers can use DoS attacks “to stop Camera, LiDAR, and Radar to detect objects, road, and safety signs” (Attack on Self-Driving Cars and Their Countermeasures: A Survey). DoS attacks can also affect the braking system, causing the car to stop suddenly or preventing it from stopping at all.
Ransomware Attack
This type of attack is severely dangerous for commercial vehicles. Back in 2017, the Honda Motor Company suffered a major WannaCry ransomware attack in which the attackers “demanded a large number of cryptocurrencies to provide the decryption key” (Attack on Self-Driving Cars and Their Countermeasures: A Survey).
Although this attack was not aimed at self-driving cars, it still prevented many of Honda’s self-driving cars from getting software updates while it lasted. Ransomware may be seen more often than the other attacks because of how successful hackers have been with it.
As technology is constantly evolving, so are the systems used inside of self-driving vehicles and the vulnerabilities that exist with them.
Hackers will continue to search for new ways to hack self-driving cars, which may lead to new, undiscovered vulnerabilities that are difficult to detect or protect against. However, that doesn’t mean companies shouldn’t do their best to take all necessary security precautions to protect these cars from cyber attacks.
Of course, there is no way to fully protect self-driving cars from being hacked, but there certainly are ways of making them more secure.