Brad Templeton is a software architect, a former chair of the Electronic Frontier Foundation (EFF), an adviser to Google's self-driving car project, and chair for Computing at Singularity University. He has recently started warning about the cybersecurity issues self-driving cars, or "robocars," may face if automotive companies don't start to take security more seriously as they race to bring them to market.
According to Templeton, robocars should not only be "disconnected" from the internet (in complete opposition to the "connected car" trend) to drastically reduce the attack potential against them, but car makers should also strive to secure everything from the cars' self-driving software platform to their sensors, manufacturing facilities, and update servers. Nothing should be left out, and everything should be designed with security in mind.
Below you can read the whole interview in which he talks about the state of security in today's car industry, as well as his proposals for how to make a robocar that would be resilient against cyberattacks.
Tom's Hardware: We’ve asked a few companies in the past how “hackable” their “connected cars” are, and they all seem to say that they take security very seriously. However, we’ve seen a few recent news stories about connected cars getting hacked in various ways. Do you think carmakers take security more seriously when it comes to their self-driving platforms compared to their connected cars? On a scale of 1 to 10, where 10 would mean their robocars are virtually unhackable, what’s the grade you’d give most self-driving cars right now?
Brad Templeton: That’s not easy to answer. First, nobody has published their security architecture. However, most cars today have limited connectivity. They only talk back to HQ; they are not trying to talk to infrastructure or other cars. They do talk to the car’s internal systems, though, and in many cars, these are fairly vulnerable because they were not designed for security.
Most teams report that they are taking security seriously but I am not aware of any that have published what that means.
TH: How connected (or disconnected) should the robocars' critical systems and the entertainment systems be? Should they operate on completely different hardware computing platforms, or would operating in separate virtual security domains suffice?
BT: I believe having fully distinct platforms and networks is the best choice. Why take a risk you don’t have to? What do you really gain from connecting them two-way? You can connect them one-way – the driving system streams out status data which untrusted systems can display to the user.
Note as well that the robocar of the future may not have much of an “entertainment system.” That’s in your phone, which is where you want it to be. The car may offer a larger screen, speakers, microphone, and other input devices to the phone, which is fine, but the intelligence belongs in the phone, not in the car.
So if the car’s system is just some speakers for your phone to play music on, why does it need a connection to the driving system?
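That one-way architecture can be sketched in a few lines. This is purely illustrative – the field names and message format are invented for the example, not taken from any real platform – but it shows the asymmetry Templeton describes: the driving system only encodes and emits status, and the untrusted display side has no code path that sends anything back.

```python
import json

# Hypothetical status message the driving system broadcasts one-way.
# All field names are illustrative, not from any real vehicle platform.
def encode_status(speed_kph, battery_pct, eta_min):
    """Driving-system side: serialize read-only telemetry."""
    return json.dumps({
        "speed_kph": speed_kph,
        "battery_pct": battery_pct,
        "eta_min": eta_min,
    }).encode()

def render_status(datagram):
    """Untrusted display side: parse and present the stream.
    Note there is deliberately no function here that can send
    anything back toward the driving stack."""
    status = json.loads(datagram.decode())
    return f"{status['speed_kph']} km/h, {status['battery_pct']}% battery"

msg = encode_status(42, 80, 7)
print(render_status(msg))  # 42 km/h, 80% battery
```

In a real car the transport might be a unidirectional serial link or a data diode rather than a software convention, so that the isolation is enforced physically, not just by the absence of code.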
TH: In a recent article, you said that sensors should also communicate over encrypted and authenticated channels. Are you aware if any of the automotive companies working on self-driving car systems are doing that currently? How would an attacker exploit these sensors?
BT: There are different types of attacks we worry about. The scariest are attacks that can come over the air, especially over the internet, because those could compromise any car, or a large number of cars at once, and could come from a remote attacker anywhere.
The next class would be ones that come over the air locally, through things like Bluetooth, DSRC, or tire pressure sensors. Those are not quite as scary but still pretty bad, because if you can infect one car, it can drive around talking to other cars and infecting them, and soon you can reach every car.
Least scary are attacks that require physical access to the car. That’s where you’ll find most attacks that involve compromising a sensor or the car’s physical network. That’s still something to worry about, but the attacks don’t scale up to large numbers of cars. As such, I suspect this is not as high on priority lists.
Down the road there is the risk that a sensor might get so “smart” that its internal processors – and all sensors will have processors – might be compromised just with malicious sensor data. That’s not super likely but it’s not impossible, [and] it should be on the laundry list.
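One simple form of the "encrypted and authenticated channels" the question alludes to is a message authentication code on every sensor frame. The sketch below is an assumption-laden illustration – a shared HMAC key and invented frame layout, where a production system might use per-sensor keys provisioned at manufacture and hardware-backed key storage – but it shows the core idea: the driving computer rejects forged or replayed frames before parsing the measurement at all.

```python
import hmac
import hashlib
import struct

# Illustrative shared key; a real sensor would have a unique key
# provisioned at manufacture, ideally in tamper-resistant hardware.
SENSOR_KEY = b"per-sensor key provisioned at manufacture"

def pack_reading(distance_cm, seq):
    """Sensor side: frame = sequence number + reading + MAC.
    The sequence number guards against replaying old frames."""
    payload = struct.pack(">Iq", seq, distance_cm)
    tag = hmac.new(SENSOR_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_reading(frame, last_seq):
    """Driving-computer side: authenticate before interpreting."""
    payload, tag = frame[:-32], frame[-32:]
    expected = hmac.new(SENSOR_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("bad MAC: frame forged or corrupted")
    seq, distance_cm = struct.unpack(">Iq", payload)
    if seq <= last_seq:
        raise ValueError("replayed frame")
    return seq, distance_cm
```

The ordering matters: authentication happens on the raw bytes, so a malicious or compromised sensor never gets its data parsed by the driving system unless the MAC checks out.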
TH: How important is secure manufacturing for critical robocar components? Should manufacturing processes have to be regularly audited, too? Is that the case right now?
BT: If a component can talk to your driving system directly, there is a risk that if it is malicious, it might take that over. Mainly you want to be paranoid about all inputs. The makers of components you use need to be trustworthy, and their own security procedures have to be trustworthy.
For example, if your radar manufacturer is itself trustworthy, but they have a security breach or compromised insider which allows an attacker to plant malware into the firmware of the radar, and that radar is put on a trusted internal network, it could compromise you. It is difficult to fully vet all internal security procedures at all your suppliers, but you should try, and at the same time be wary that your components might be out to get you.
TH: Do you believe robocars should have to pass certain government or third-party security screening and certification programs before they are allowed on the market?
BT: This might become the case eventually. I don’t believe any regulatory body has the capability to do this well at present. There is a risk that one could make a procedure so complex that it slows the deployment of even important safety updates. Companies should certainly produce at the minimum a self-certification document on their security hygiene rules.
TH: What’s your opinion on self-driving car software being open source? Should consumers petition carmakers to open source all critical software that is used by the autonomous driving system? Would it even matter if it was open source, from a security point of view?
BT: This is a controversial issue. There is evidence that allowing all parties to scrutinize the code is the best path to finding vulnerabilities. This is particularly true when many parties are using the same code and are motivated to keep it secure and high quality. This is a trade-off against the fact that access to the source code can make it easier for attackers to discover vulnerabilities and build exploits for them.
It is complicated by the problem that manufacturers, who will be required to certify the safety of their vehicles, will try to design them so that they can’t run any code that is not signed by the manufacturer. This means that tinkerers won’t be able to easily tweak and improve the code, which is one of the key benefits of open source. While millions use slightly modified versions of operating systems like Linux on their PC, that won’t be the case with cars.
TH: Some cars have shipped with entertainment systems based on a three- or four-year-old Android OS version, which likely includes a browser that hasn’t been updated in years, either. Should car manufacturers continue to develop their own entertainment systems, or should they focus on better integrating their systems with the car owner’s or passenger’s mobile devices?
BT: From a business standpoint, users want their music and other entertainment on their phones, and don’t want to have to use a different system when they get in a car. Car makers are reluctant to give up the large fees they charge for infotainment systems. The car should primarily offer a screen and speakers to the phone. From a security standpoint, neither the infotainment system nor the phone can be trusted, so they should be air-gap isolated from the driving system.
TH: Are the current industry-specific programming languages good enough for writing self-driving platforms with them, or should the industry use safer programming languages to significantly reduce their software’s attack surface?
BT: This is another challenging question. The easy answer is yes, we should all be using better languages and tools which are designed for security. On the other hand, since these are young and not well deployed, they don’t have the same wealth of tools available and fewer programmers are highly trained in their use, so work will be much slower. I believe the whole industry has to bite this bullet, however.
TH: A few years ago, BMW, for instance, was sending software patches over unencrypted connections. Today, they are one of the main carmakers focusing on building robocars. How important is server-side security for self-driving cars?
BT: Actually, you can’t trust the network, so you should design not to trust it. Properly done, all software patches are digitally signed by multiple parties, and so it is OK to send them over untrusted networks. (On the other hand, cars should not even parse incoming messages that are not signed and verified.) This is easy, and I would presume BMW does this. Harder is the problem of worrying that an internal breach inside a company has them produce a compromised update which is then signed by their keys and trusted by the cars. You need to make it so that no update is signed unless all changes to it have been audited by well-designed and secure code review, and all tools which build the update (like compilers, linkers, etc.) are also secured.
This is one clear place for open source. The software development tools need to be secured, and updates to them must be verified as well.
TH: In your post, you also mentioned that it would be best if robocars would receive their patches at “update stations” rather than automatically over the air. Automatic updates are usually considered a good thing by security experts. Why do you think they are a bad solution for self-driving cars? What would be the best way for robocar manufacturers to update their cars?
BT: This is a radical trial balloon to make people think differently. All updates, whether delivered over the air or in a physical place, must be well audited and signed. However, different updates will have different urgencies and affect different parts of the system. You can have over-the-air updates, and you need them for urgent situations, but to keep them secure, you might require that over-the-air updates be signed by well-secured keys kept in physical vaults by senior executives. I.e., you can’t do an OTA update without having the CEO go to the vault in her office to pull out a signing key. And the CTO and team leads. Lower-priority updates, with lower risk, might not need signing at that level but could instead require the vehicle go to the depot, so that it’s impossible to do an update without people noticing it.
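The multi-executive signing idea amounts to a quorum rule: an OTA update is accepted only if enough distinct keyholders have independently signed it. Here is a minimal sketch of that policy, with invented role names and HMAC stand-ins for what would really be asymmetric keys in separate hardware vaults.

```python
import hmac
import hashlib

# Illustrative signing keys held by separate executives in separate
# physical vaults; roles and quorum size are invented for the example.
KEYS = {
    "ceo": b"ceo vault key",
    "cto": b"cto vault key",
    "team_lead": b"team lead vault key",
}
QUORUM = 3  # an OTA update needs all listed signers

def sign(role: str, update: bytes) -> bytes:
    """One keyholder signs the update with their own key."""
    return hmac.new(KEYS[role], update, hashlib.sha256).digest()

def authorize_ota(update: bytes, signatures: dict) -> bool:
    """Accept the update only with a quorum of distinct valid
    signatures; a single compromised key can no longer push code."""
    valid = {
        role for role, tag in signatures.items()
        if role in KEYS and hmac.compare_digest(tag, sign(role, update))
    }
    return len(valid) >= QUORUM
```

The design point is that no single insider breach suffices: an attacker who steals one vault key still cannot produce an update the fleet will install.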
TH: For the past few years, people have been getting excited about Vehicle to Vehicle (V2V) and Vehicle to Infrastructure (V2I) communications. Are these technologies necessary for the success of robocars and do they represent any danger to the security and safety of robocars?
BT: They are definitely not necessary for the success of the cars, and the major teams have no plans to depend on them. Since there will always be lots of vehicles (and pedestrians and deer) with no transponders, it is necessary to get to “safe enough” with just your sensors. Extra information can at best be a minor supplement. Because it will take more than a decade to get serious deployment of V2V, other plans (such as use of the 4G and 5G mobile data networks) make much more sense for such information.
In addition, it is a serious security risk, as you say, to have the driving system of the car be communicating complex messages with random cars and equipment it encounters. Since the benefits are minor and the risk is high, this is not the right approach.
TH: If self-driving cars are hacked, who should take the blame? Is it the car owner, the car manufacturer, the third-party vendor of the autonomous driving system, or should governments just focus on catching the malicious hackers and ignore everything else?
BT: Morally, of course, the blame is always on the attacker. However, makers of systems have a duty to make them robust. Liability is rare in computer security right now, but it probably will fall on whoever put the car on the road (the vendor or fleet operator). They, however, will insist on assumption of liability by the maker of the driving system, if that is another company.
TH: Do you believe that remote hacking of self-driving cars will keep people away from using them, or do you think these cyberattacks will be rare enough that most people will largely ignore them?
BT: If there is a “nightmare” attack, where somebody compromises an entire fleet of cars, and is able to cause physical harm or even make a demonstration of the ability to make them crash or run people over, that could cause people to stay away from using them. That’s why there needs to be very high attention on any attacks which could take over an entire fleet. Attacks against a single car, particularly those that require physical access to the car, will still scare many people, but the truth is cars are already vulnerable to that today, and have been since the day somebody could cut your brake line or install an ignition bomb.
Attacks that require proximity but not physical access (like a Bluetooth, V2V, or Wi-Fi attack) should scare anybody who thinks they might be an assassination target, but they also offer the risk of a “virus” where cars infect other cars as they drive by.
We should work hard to secure these vehicles, but it would be a major error to not use them because of these risks. At least for now, the death toll from human driving is much higher. We should not avoid technologies that prevent real deaths out of worry over hypothetical ones.