A team of Chinese researchers from the Keen Security Lab published a video in which they demonstrated how a Tesla Model S can be hacked remotely, even from 12 miles away. The announcement appeared at roughly the same time as the U.S. government’s guidelines for self-driving vehicles.
Self-Driving Cars’ Hacking Problem
Connected cars--and more importantly, self-driving cars or cars with self-driving capabilities, such as the Tesla Model S--are vulnerable to hacking. They often come with internet connectivity that sometimes can’t even be disabled, because the manufacturers want to ensure some permanent remote capabilities--for instance, the ability to alert the authorities when there’s been a crash.
The self-driving car systems are also highly integrated to give car owners more options for controlling their vehicles. Tesla Model S owners can remotely turn on the air conditioning from a smartphone application, for example. However, this type of connectivity and control also makes these systems more vulnerable to remote attacks.
The high integration also lets an advanced driver assistance system (such as Tesla’s Autopilot) control every part of the car on its own, which means that an attacker who compromises the system could gain that same level of control over the vehicle.
Tesla Model S Remote Hack
Researchers from the Keen Security Lab claimed that they were able to hack an unmodified Tesla Model S with the latest firmware installed. Initially, the team demonstrated in a video that they could open the sunroof, switch on the turn signals, move the seats, and open the doors without using the car’s key. They also showed that they could activate the windshield wipers, fold the side mirrors, and open the trunk while the car was moving.
All of this would be scary if it happened to you while driving on the road, and the distraction alone could cause an accident, but it’s nothing too dangerous by itself. However, at the end of the video, the team also showed that it could activate the car’s brakes--a capability a malicious hacker could use to stop a car while it’s driving on the highway at high speed.
In the demo, the brakes were activated from 12 miles away, according to the team, which means it happened over the internet, so an attacker could theoretically activate them from anywhere in the world.
The team said that it “responsibly disclosed” the bugs to Tesla and worked with the company to fix the issues.
“As far as we know, this is the first case of remote attack which compromises CAN Bus to achieve remote controls on Tesla cars. We have verified the attack vector on multiple varieties of Tesla Model S. It is reasonable to assume that other Tesla models are affected,” said the security researchers in a blog post.
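The reason a CAN Bus compromise translates into physical control is that classic CAN has no sender authentication: every node on the bus broadcasts frames identified only by an arbitration ID, and receiving ECUs act on any well-formed frame. The sketch below (Python, standard library only) packs a frame in the Linux SocketCAN wire layout to illustrate how little a frame carries; the arbitration ID and payload are made up for illustration and have nothing to do with Tesla’s actual bus messages.

```python
import struct

def build_can_frame(arbitration_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame in the Linux SocketCAN layout:
    32-bit CAN ID, 1-byte data length code, 3 padding bytes,
    then an 8-byte data field. Note what is absent: no sender
    identity, no signature--any node can emit any ID."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("<IB3x8s", arbitration_id,
                       len(data), data.ljust(8, b"\x00"))

# Hypothetical frame: ID 0x2F1 with a 2-byte payload (illustrative only).
frame = build_can_frame(0x2F1, b"\x01\x00")
```

Because nothing in the 16-byte frame proves who sent it, an attacker who gains code execution on any bus-connected component can inject frames that brake controllers, door modules, and wipers will obey--which is why isolating the infotainment system from the drive bus matters so much.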
Tom’s Hardware contacted Tesla Motors for comment, and the car company responded with the following:
“Within just 10 days of receiving this report, Tesla has already deployed an over-the-air software update (v7.1, 2.36.31) that addresses the potential security issues,” said a Tesla spokesperson. “The issue demonstrated is only triggered when the web browser is used, and also required the car to be physically near to and connected to a malicious wifi [sic] hotspot. Our realistic estimate is that the risk to our customers was very low, but this did not stop us from responding quickly. We engage with the security research community to test the security of our products so that we can fix potential vulnerabilities before they result in issues for our customers. We commend the research team behind today’s demonstration and plan to reward them under our bug bounty program, which was set up to encourage this type of research,” added the Tesla representative.
What’s not clear from this statement is whether the Keen Security Lab team used the same malicious Wi-Fi hotspot to also activate the vehicle’s brakes from 12 miles away.
U.S. Government’s Guidelines For Self-Driving Cars
The U.S. Department of Transportation released 15 guidelines for self-driving cars, covering how car makers should validate their self-driving systems, how the vehicles should record data, what they should do post-crash, how users’ privacy should be protected, and even how to protect the cars against cyberattacks.
The released document contained only general policy points, without detailed requirements for manufacturers. However, both the agency and President Obama said that although they will ensure that self-driving cars are safe, and won’t hesitate to pull unsafe cars off the road, they also don’t want to burden manufacturers with too many regulations.
For now, this seems to make sense, as too many regulations--the implications of which the government itself may not fully understand--could limit how car makers could innovate with the self-driving systems.
However, some kind of liability should probably also be imposed on self-driving car manufacturers for when accidents or hacks happen. That would encourage them to make their cars as safe and as digitally secure as possible, so that accidents and security breaches are minimized as autonomous vehicles grow in popularity.