Robocars Should Be 'Disconnected,' Warns Former EFF Chief

Brad Templeton, former EFF Chairman

Brad Templeton is a software architect, a former Electronic Frontier Foundation (EFF) chairman, an adviser to Google's self-driving car project, and a computing chair at Singularity University. He has recently begun warning about the cybersecurity issues self-driving cars, or "robocars," may face if automotive companies don't take security more seriously as they race to bring them to market.

According to Templeton, robocars should not only be "disconnected" from the internet (in complete opposition to the "connected car" trend) to drastically reduce the attack potential against them, but car makers should also strive to secure everything from the cars' self-driving software platform to their sensors, manufacturing facilities, and update servers. Nothing should be left out, and everything should be designed with security in mind.

Below you can read the whole interview in which he talks about the state of security in today's car industry, as well as his proposals for how to make a robocar that would be resilient against cyberattacks.

Tom's Hardware: We’ve asked a few companies in the past how “hackable” their “connected cars” are, and they all seem to say that they take security very seriously. However, we’ve seen a few recent news stories about connected cars getting hacked in various ways. Do you think carmakers take security more seriously when it comes to their self-driving platforms compared to their connected cars? On a scale of 1 to 10, where 10 would mean their robocars are virtually unhackable, what’s the grade you’d give most self-driving cars right now?

Brad Templeton: That’s not easy to answer. First, nobody has published their security architecture. However, most cars today have limited connectivity: they only talk back to HQ; they are not trying to talk to infrastructure or other cars. They will talk to the car’s internal systems, though, and in many cars those are fairly vulnerable because they were not designed for security.

Most teams report that they are taking security seriously, but I am not aware of any that have published what that means.

TH: How connected (or disconnected) should the robocars' critical systems and the entertainment systems be? Should they operate on completely different hardware computing platforms, or would operating in separate virtual security domains suffice?

BT: I believe having fully distinct platforms and networks is the best choice. Why take a risk you don’t have to? What do you really gain from connecting them two-way? You can connect them one way: the driving system streams out status data, which untrusted systems can display to the user.

Note as well that the robocar of the future may not have much of an “entertainment system.” That’s in your phone, which is where you want it to be. The car may offer a larger screen, speakers, microphone, and other input devices to the phone, which is fine, but the intelligence belongs in the phone, not in the car.

So if the car’s system is just some speakers for your phone to play music on, why does it need a connection to the driving system?
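Here is a minimal sketch of what that one-way link might look like in software, assuming a hypothetical UDP status feed (the port number and message fields are invented for illustration). In a real vehicle, the one-way property would be enforced in hardware, as a data diode, rather than trusted to code:

```python
# Hypothetical sketch of a one-way status link: the driving system emits
# read-only telemetry frames over UDP, and the untrusted infotainment side
# only listens. Port and field names are illustrative assumptions.
import json
import socket
import time

STATUS_PORT = 47001  # assumed port on the isolated in-car network

def stream_status() -> None:
    """Driving-system side: send status out; never read from this socket."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(10):  # a real system would loop forever at a fixed rate
        frame = {
            "speed_kph": 42.0,       # placeholder telemetry values
            "route_eta_s": 630,
            "timestamp": time.time(),
        }
        sock.sendto(json.dumps(frame).encode(), ("127.0.0.1", STATUS_PORT))
        time.sleep(0.5)

def display_status() -> None:
    """Infotainment side: receive and render; no path back to the sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", STATUS_PORT))
    while True:
        data, _addr = sock.recvfrom(4096)
        print("status:", json.loads(data))
```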

TH: In a recent article, you said that sensors should also communicate over encrypted and authenticated channels. Are you aware if any of the automotive companies working on self-driving car systems are doing that currently? How would an attacker exploit these sensors?

BT: There are different types of attacks we worry about. The scariest are attacks that can come over the air, especially over the internet, because those could compromise any car, or a large number of cars at once, and could come from a remote attacker anywhere.

The next class would be ones that come over the air locally, through things like Bluetooth, DSRC, or tire pressure sensors. Those are not quite as scary, but still pretty bad, because if you can infect one car, it can drive around talking to other cars and infecting them, and soon you can reach every car.

Least scary are attacks that require physical access to the car.  That’s where you’ll find most attacks that involve compromising a sensor or the car’s physical network. That’s still something to worry about, but the attacks don’t scale up to large numbers of cars. As such, I suspect this is not as high on priority lists.

Down the road there is the risk that a sensor might get so “smart” that its internal processors – and all sensors will have processors – might be compromised just with malicious sensor data. That’s not super likely but it’s not impossible, [and] it should be on the laundry list.
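As an illustration of the authenticated sensor channels Templeton advocates, here is a hedged sketch in Python: each sensor shares a key with the driving computer and tags every frame with a monotonic counter and an HMAC, so spoofed or replayed frames are rejected before the driving stack parses them. The key provisioning and frame layout are assumptions, not any vendor's actual design:

```python
# Illustrative sketch (not a real vendor protocol) of authenticating sensor
# frames with a shared per-sensor key, an HMAC tag, and a replay counter.
import hashlib
import hmac
import struct

SENSOR_KEY = b"per-sensor secret provisioned at manufacture"  # assumption

def tag_frame(counter: int, payload: bytes) -> bytes:
    """Sensor side: prepend an 8-byte counter and append a SHA-256 HMAC."""
    header = struct.pack(">Q", counter)
    mac = hmac.new(SENSOR_KEY, header + payload, hashlib.sha256).digest()
    return header + payload + mac

def verify_frame(frame: bytes, last_counter: int) -> bytes:
    """Driving-computer side: reject forged or replayed frames."""
    header, payload, mac = frame[:8], frame[8:-32], frame[-32:]
    expected = hmac.new(SENSOR_KEY, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("bad MAC: frame not from the trusted sensor")
    (counter,) = struct.unpack(">Q", header)
    if counter <= last_counter:
        raise ValueError("stale counter: possible replay")
    return payload
```

Encrypting the channel on top of this would hide the traffic as well; the sketch covers only authenticity and freshness.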

TH: How important is secure manufacturing for critical robocar components? Should manufacturing processes have to be regularly audited, too? Is that the case right now?

BT: If a component can talk to your driving system directly, there is a risk that, if it is malicious, it might take that system over. Mainly, you want to be paranoid about all inputs. The makers of components you use need to be trustworthy, and their own security procedures have to be trustworthy.

For example, if your radar manufacturer is itself trustworthy, but it has a security breach or a compromised insider that allows an attacker to plant malware into the firmware of the radar, and that radar is put on a trusted internal network, it could compromise you. It is difficult to fully vet all internal security procedures at all your suppliers, but you should try, and at the same time be wary that your components might be out to get you.
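One concrete consequence of "your components might be out to get you" is that even frames from a nominally trusted radar should pass strict schema and plausibility checks before the driving stack consumes them. A toy sketch, with a hypothetical frame format and invented limits:

```python
# Paranoid input validation for a component on the trusted network.
# The frame layout and plausibility bounds are hypothetical.
import struct

RADAR_FRAME = struct.Struct(">fff")  # range_m, bearing_rad, velocity_mps

def parse_radar_frame(raw: bytes) -> tuple[float, float, float]:
    """Reject anything malformed or physically implausible."""
    if len(raw) != RADAR_FRAME.size:
        raise ValueError("rejected: unexpected frame length")
    range_m, bearing_rad, velocity_mps = RADAR_FRAME.unpack(raw)
    if not 0.0 <= range_m <= 300.0:
        raise ValueError("rejected: range out of plausible bounds")
    if not -3.2 <= bearing_rad <= 3.2:
        raise ValueError("rejected: bearing out of plausible bounds")
    if not -100.0 <= velocity_mps <= 100.0:
        raise ValueError("rejected: velocity out of plausible bounds")
    return range_m, bearing_rad, velocity_mps
```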

TH: Do you believe robocars should have to pass certain government or third-party security screening and certification programs before they are allowed on the market?

BT: This might become the case eventually. I don’t believe any regulatory body has the capability to do this well at present. There is a risk that one could make a procedure so complex that it slows the deployment of even important safety updates. Companies should certainly produce, at minimum, a self-certification document on their security hygiene rules.

TH: What’s your opinion on self-driving car software being open source? Should consumers petition carmakers to open source all critical software that is used by the autonomous driving system? Would it even matter if it was open source, from a security point of view?

BT: This is a controversial issue. There is evidence that allowing all parties to scrutinize the code is the best path to finding vulnerabilities. This is particularly true when many parties are using the same code and are motivated to keep it secure and high quality. This is a trade-off against the fact that access to the source code can make it easier for attackers to discover vulnerabilities and build exploits for them. 

It is complicated by the problem that manufacturers, who will be required to certify the safety of their vehicles, will try to design them so that they can’t run any code that is not signed by the manufacturer. This means that tinkerers won’t be able to easily tweak and improve the code, which is one of the key benefits of open source. While millions use slightly modified versions of operating systems like Linux on their PCs, that won’t be the case with cars.

TH: Some cars have shipped with entertainment systems based on a three- or four-year-old version of Android, which likely includes a browser that hasn’t been updated in years, either. Should car manufacturers continue to develop their own entertainment systems, or should they focus on better integrating their systems with the car owner’s or passenger’s mobile devices?

BT: From a business standpoint, users want their music and other entertainment on their phones, and don’t want to have to use a different system when they get in a car. Car makers are reluctant to give up the large fees they charge for infotainment systems. The car should primarily offer a screen and speakers to the phone. From a security standpoint, neither the infotainment system nor the phone can be trusted, so they should be air-gap isolated from the driving system.

TH: Are the current industry-specific programming languages good enough for writing self-driving platforms with them, or should the industry use safer programming languages to significantly reduce their software’s attack surface?

BT: This is another challenging question. The easy answer is yes, we should all be using better languages and tools that are designed for security. On the other hand, since these are young and not widely deployed, they don’t have the same wealth of tools available, and fewer programmers are highly trained in their use, so work will be much slower. I believe the whole industry has to bite this bullet, however.

TH: A few years ago, BMW, for instance, was sending software patches over unencrypted connections. Today, they are one of the main carmakers focusing on building robocars. How important is server-side security for self-driving cars?

BT: Actually, you can’t trust the network, so you should design not to trust it. Properly done, all software patches are digitally signed by multiple parties, so it is OK to send them over untrusted networks. (On the other hand, cars should not even parse incoming messages that are not signed and verified.) This is easy, and I would presume BMW does this. Harder is the problem of an internal breach inside a company producing a compromised update, which is then signed by the company’s keys and trusted by the cars. You need to make it so that no update is signed unless all changes to it have been audited through well-designed and secure code review, and all tools that build the update (compilers, linkers, etc.) are also secured.

This is one clear place for open source.  The software development tools need to be secured, and updates to them must be verified as well.
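A minimal sketch of the "signed by multiple parties" rule, using Ed25519 via the Python cryptography package: the car refuses to even parse an update image unless a threshold of distinct, pre-provisioned release keys have signed it. The key set and threshold are illustrative assumptions:

```python
# Hedged sketch: accept an update only if at least REQUIRED distinct
# trusted keys produced valid detached signatures over the image bytes.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

TRUSTED_KEYS: list[Ed25519PublicKey] = []  # provisioned in the car at build time
REQUIRED = 2  # e.g., release engineer plus independent security auditor

def verify_update(image: bytes, signatures: list[bytes]) -> None:
    """Raise unless enough distinct trusted keys vouch for this image.

    Nothing in `image` is parsed before this check passes.
    """
    valid: set[int] = set()
    for sig in signatures:
        for i, key in enumerate(TRUSTED_KEYS):
            try:
                key.verify(sig, image)  # raises InvalidSignature on mismatch
                valid.add(i)
            except InvalidSignature:
                continue
    if len(valid) < REQUIRED:
        raise RuntimeError(
            f"update rejected: {len(valid)} of {REQUIRED} required signatures"
        )
```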

TH: In your post, you also mentioned that it would be best if robocars would receive their patches at “update stations” rather than automatically over the air. Automatic updates are usually considered a good thing by security experts. Why do you think they are a bad solution for self-driving cars? What would be the best way for robocar manufacturers to update their cars?

BT: This is a radical trial balloon to make people think differently. All updates, whether over the air or done in a physical place, must be well audited and signed. However, different updates will have different urgencies and affect different parts of the system. You can have over-the-air updates, and you need them for urgent situations, but to keep them secure, you might require that over-the-air updates be signed by well-secured keys kept in physical vaults by senior executives. I.e., you can’t do an OTA update without having the CEO go to the vault in her office to pull out a signing key. And the CTO and team leads. Lower-priority updates, with lower risk, might not need signing at that level but could instead require that the vehicle go to the depot, so that it’s impossible to do an update without people noticing it.
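Reduced to code, these two tiers might amount to a simple policy check: OTA updates demand the vault-held executive keys, while lower-priority updates are accepted only at a depot. A toy sketch with hypothetical signer names:

```python
# Toy policy sketch mapping update channel to the signers it requires.
# Signer names and the rule itself are illustrative assumptions.
OTA_SIGNERS = {"ceo_vault_key", "cto_vault_key", "team_lead_key"}

def update_allowed(channel: str, signers: set[str]) -> bool:
    if channel == "ota":
        # Urgent over-the-air patch: every vault-held key must sign off.
        return OTA_SIGNERS <= signers
    if channel == "depot":
        # Lower-risk update applied in person, where tampering is visible.
        return "release_key" in signers
    return False  # unknown channels are rejected outright
```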

TH: For the past few years, people have been getting excited about Vehicle to Vehicle (V2V) and Vehicle to Infrastructure (V2I) communications. Are these technologies necessary for the success of robocars and do they represent any danger to the security and safety of robocars?

BT: They are definitely not necessary for the success of the cars, and the major teams have no plans to depend on them. Since there will always be lots of vehicles (and pedestrians and deer) with no transponders, it is necessary to get to “safe enough” with just your sensors. Extra information can at best be a minor supplement. Because it will take more than a decade to get serious deployment of V2V, other plans (such as use of the 4G and 5G mobile data networks) make much more sense for such information. 

In addition, it is a serious security risk, as you say, to have the driving system of the car communicating complex messages with random cars and equipment it encounters. Since the benefits are minor and the risk is high, this is not the right approach.

TH: If self-driving cars are hacked, who should take the blame? Is it the car owner, the car manufacturer, the third-party vendor of the autonomous driving system, or should governments just focus on catching the malicious hackers and ignore everything else?

BT: Morally, of course, the blame is always on the attacker. However, makers of systems have a duty to make them robust. Liability is rare right now in computer security, but it probably will fall on whoever put the car on the road (the vendor or fleet operator). They, however, will insist on assumption of liability by the maker of the driving system, if that is another company.

TH: Do you believe that remote hacking of self-driving cars will keep people away from using them, or do you think these cyberattacks will be rare enough that most people will largely ignore them?

BT: If there is a “nightmare” attack, where somebody compromises an entire fleet of cars, and is able to cause physical harm or even make a demonstration of the ability to make them crash or run people over, that could cause people to stay away from using them. That’s why there needs to be very high attention on any attacks which could take over an entire fleet.  Attacks against a single car, particularly those that require physical access to the car, will still scare many people, but the truth is cars are already vulnerable to that today, and have been since the day somebody could cut your brake line or install an ignition bomb.

Attacks that require proximity but not physical access (like a Bluetooth, V2V, or Wi-Fi attack) should scare anybody who thinks they might be an assassination target, but they also offer the risk of a “virus” where cars infect other cars as they drive by.

We should work hard to secure these vehicles, but it would be a major error to not use them because of these risks. At least for now, the death toll from human driving is much higher.  We should not avoid technologies that prevent real deaths out of worry over hypothetical ones.

Lucian Armasu
Lucian Armasu is a Contributing Writer for Tom's Hardware US. He covers software news and the issues surrounding privacy and security.
  • Olle P
    I think a good starting point (from a legal point of view) is to treat autonomous cars like medical devices.
    If/when a medical device misbehaves, it can cause death or injury to a patient and/or operator. Likewise, a car going haywire can "kill" its passengers and/or other road users.

    The issue of cars being online or not gets a bit more complicated once navigation comes into play. One vision is to have fleets of autonomous cars pretty much acting like taxis: arriving to pick you up when and where you want, then driving to your intended destination to drop you off before either preparing for a new trip or going to a parking lot that may be located quite some distance away.
    For the car to know when and where to pick you up, that data needs to be fed into the navigation system, with the possibility to do so from a distance and at very short notice.
    Reply
  • dark_lord69
    The only way to make it secure:
    1. The self-driving components need to be completely disconnected from the internet. This would mean 2 completely separate networks in every self-driving car: one for the user that has access to the internet, and one for the self-driving system. With ZERO crossover connections between the 2 of them.
    2. Updates and patches to the software should be done physically (at the car) by a mechanic. Not using Bluetooth, Wi-Fi, cell phone towers, or any other type of wireless.

    That would be the ONLY way to make it secure. But that will never happen, because automakers will want to update remotely and tell the self-driving system different things, like current traffic conditions. So don't expect self-driving cars to be secure, at least not up to my standards when my life is on the line.
    Reply
  • chaz_music
    Automotive technology has mostly ignored security, just as most other markets have. They focus their innovation on tech that sells, and security does not inherently make better sales. Hence the IoT bandwagon.

    Since users are going to want to continue having Bluetooth access and other devices linked to their cars, connectivity is not going to go away. When Bluetooth first came out in cars, hackers found ways to control and affect these cars quite easily. Implementing new tech without assessing its vulnerabilities is a repeating theme. I believe it was about 20 years ago that Ford came out with a pneumatic door lock system that was "super secure." Then thieves found that they could use a tennis ball with a hole in it to punch against the lock, build up pressure, and unlock all the doors. Security has to be treated seriously, and be included in FMEA studies, if it is going to be truly successful.

    All one would have to imagine is rush-hour traffic on the Long Island Expressway, with the hacked cars all accelerating at the same time, or some crossing over the median. A new 9/11. If we truly want to have this kind of tech, there needs to be a market focus on security. And maybe a secure version of the internet (not anonymous)? Then online banking would be safe, too. But I think slowing down the IoT trend is quite important. There have already been some universities and businesses hit by their IoT devices being turned into botnets. What would North Korea do with that ability?
    Reply
  • kuhndj67
    Holy crap, yes... does that mean that the self-driving systems in current autonomous vehicles are connected to their internet-connected systems!? Wow, under the definition of "REALLY stupid ideas" on Wikipedia, this would be the textbook example. I think we WILL see some level of local mesh network for multi-vehicle coordination, but all those systems should be COMPLETELY isolated from any public/general-use network.
    Reply
  • Olle P
    20433757 said:
    ... slowing down the IoT trend is quite important. ...
    I totally agree!

    Reply
  • alextheblue
    20433757 said:
    Since users are going to want to continue having Bluetooth access and other devices linked to their cars, connectivity is not going to go away. When Bluetooth first came out in cars, hackers found ways to control and affect these cars quite easily.
    Most of the wireless vulnerabilities that allow an attacker to control aspects of a vehicle are found in add-on components that are directly connected to the vehicle's CAN bus via the OBD-II port. Insecure IoT-like GPS devices handed out by insurance firms, for example. There are few models where there was a successful remote attack (with the ability to control throttle, brakes, etc.) without any such insecure add-ons; mainly this happened with Chrysler products. Even then, I can't think of any cases off the top of my head where Bluetooth was the attack vector; in the cases I read about, the connection was cellular. With that being said:

    The way they currently have their networks set up is stupid and dangerous. As stated in the article above, you need two separate networks. I believe they could be linked with one-way connections at certain points. You could potentially solve the add-on component issue by making two physical OBD connectors: one that offers only one-way data (consumer-facing), and a secure two-way one for temporary use (wired!) during updates/repair work. So even if you have an insecure GPS module or other wireless device hooked up to the one-way port, it can only receive data and can't feed anything back into the secure network.

    Ditto with BT/infotainment systems. There's no need to remove BT. Just isolate that system so that your phone can link and play media, run apps, whatever - without the ability to transmit to the driving systems. Again, the core driving systems should be isolated such that they can share data outside their network but only receive data from other secured systems including their sensors. No influence from outside.
    Reply