In its efforts to extend the functionality of web apps, Google has been developing and supporting two new HTML APIs that may end up making the web less safe and compounding the existing security issues of the Internet of Things. The two new APIs are Web Bluetooth, which is already enabled in the latest version of Chrome, and WebUSB.
Connecting Everything To The Web
As a company whose business depends on the internet, it makes sense for Google to want to connect as much as possible to it. Google owns some of the most-used internet services, both user-facing, such as Gmail, and developer-facing, such as Google Analytics. The more "things" are connected to the internet, the more data the company collects, which it can then monetize.
The two new APIs are tied to Google’s Physical Web initiative, which aims to replace native apps that control the Internet of Things with web apps. Google believes this will make it easier for users to connect to any device they want, anywhere in the world, through the web.
Making The IoT Security Problem Worse
According to one Chrome security engineer, web-connected Bluetooth devices could be subject to the following types of attacks and vulnerabilities:
- An abusive software developer, trying to do embarrassing or privacy-invasive things that don’t go outside devices’ security models.
- A malicious software developer, trying to exploit users using nearby Bluetooth devices.
- A malicious hardware manufacturer, trying to exploit users or websites who connect to their devices.
- A malicious manufacturer/developer, who can push cooperating hardware and software.
- Weakly-written device firmware, which doesn’t intend to hurt its users, but might be vulnerable to malicious connections.
- Weakly-written kernels, which might be vulnerable to either malicious userland software or malicious connections.
In other words, even if the Chrome team makes this specification “as secure as it can be,” the API will still increase the number of ways in which web-connected devices can be attacked, compared to the status quo. In terms of security, it’s a net negative.
Malicious software and hardware developers will always exist, as will weakly-written device firmware. As for weakly-written kernels, that’s already the status quo, whether we’re talking about the decades-old legacy-supporting Windows and Linux kernels, or about the mostly un-patched Android kernels out there.
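To make that attack surface concrete, here is a minimal sketch of what a page can do with Web Bluetooth once the user approves Chrome’s device-chooser prompt. The `requestDevice` call and the `heart_rate` GATT service name are real parts of the API; the `buildRequestOptions` helper and the overall flow are an illustration of our own, not code from the specification.

```javascript
// Hypothetical sketch: a web page requesting a nearby heart-rate monitor.
// buildRequestOptions is our own helper (an assumption, not part of the spec).
function buildRequestOptions(serviceName) {
  // The device chooser only lists devices advertising the named GATT service.
  return { filters: [{ services: [serviceName] }] };
}

async function connectHeartRateMonitor() {
  // Triggers Chrome's device-chooser prompt; requires a user gesture and HTTPS.
  const device = await navigator.bluetooth.requestDevice(
    buildRequestOptions('heart_rate')
  );
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService('heart_rate');
  const characteristic = await service.getCharacteristic('heart_rate_measurement');
  await characteristic.startNotifications(); // stream measurements to the page
  return device.name;
}
```

Once the user grants access, the page holds a live GATT connection to the hardware, and that connection is exactly the channel a malicious site or weakly-written firmware could abuse.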
The WebUSB API is potentially even more dangerous than Web Bluetooth. Consider all the surveillance cameras that have been hacked and had their feeds published on the web; the same could happen with USB-connected webcams that can be remotely controlled from a web page. We would also likely see many more printers hacked through the internet.
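As a rough illustration of how little stands between a page and such a device, here is a hedged sketch of the WebUSB flow. The `requestDevice`, `open`, `selectConfiguration`, `claimInterface`, and `transferOut` calls are real WebUSB methods; the vendor ID, endpoint number, and `buildUsbFilter` helper are made-up examples.

```javascript
// Hypothetical sketch: driving a USB device from a web page via WebUSB.
// buildUsbFilter is our own helper (an assumption, not part of the spec).
function buildUsbFilter(vendorId) {
  // Limit the device chooser to one (made-up) vendor's devices.
  return { filters: [{ vendorId }] };
}

async function sendToDevice(bytes) {
  // Shows Chrome's device-chooser prompt; requires a user gesture and HTTPS.
  const device = await navigator.usb.requestDevice(buildUsbFilter(0x1234));
  await device.open();
  await device.selectConfiguration(1);
  await device.claimInterface(0);
  // Write raw bytes to endpoint 1 -- from here on, the page talks to hardware.
  await device.transferOut(1, new Uint8Array(bytes));
}
```

The point of the sketch is that after a single permission prompt, a remote page is exchanging raw USB transfers with local hardware, which is precisely why buggy or malicious firmware becomes reachable from the web.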
Can We Even Secure Web-Connected Devices?
If the recent rise of massive distributed denial of service (DDoS) attacks and ransomware that targets city infrastructure has taught us anything, it's that perhaps we shouldn’t allow absolutely every single device or component with a chip in it to be remotely controlled over the web.
Doing that seems to lead to only two outcomes. The first is that securing these devices demands enormous resources, since everything newly exposed to the web must be made virtually unhackable. The second, and more likely, is that most companies simply won’t invest the effort and resources necessary to ensure their devices can’t be hacked.
Therefore, when Google or other companies support technologies that deliberately open devices to remote access, they are already accepting that those devices might be hacked. This is true regardless of the requirements they impose to minimize the damage (such as mandating TLS encryption for controlling Web Bluetooth devices) after already deciding to design such a protocol.
Perhaps connecting everything to the web is just the natural evolution of technology and nothing can stop it. However, we could at least ensure that the potential damage is reduced by designing specifications with stricter requirements from the beginning. It’s not clear if that’s what’s happening right now.
Google may aim to make a specification “secure,” but there will always be a trade-off between how secure it can be and what device manufacturers and developers are willing to spend to implement it. The right compromise is therefore not fixed in stone.
The specification can be made less secure if there is strong pushback from software developers and manufacturers, or more secure if there is similar pushback from users or from the specification’s editors (who in this case happen to be mainly from Google).
Prioritizing Local Control Of Smart Devices
We've learned over the past few years that everything connected to the internet tends to be less secure. It follows that a device can be made more secure by not connecting it to the internet. Perhaps we should strive to minimize how many devices connect directly to the internet by emphasizing localized control and asking ourselves, "Do we really need internet-controlled light bulbs?"
This may not be to Google's advantage, as it won't be able to obtain as much data from non-internet-connected devices, but it may be to the benefit of the internet at large. Some devices may actually work better and be more useful when connected to the internet, but the majority of the "Internet of Things" probably doesn't actually need an internet connection, especially if those devices can be controlled locally, either through a physical push of a button or through local networks such as Bluetooth, NFC, Thread, or other P2P mesh networking technologies. The latter could bring much of the same convenience of controlling a smart device from an app, without the downside of allowing someone from the other side of the world to connect to it as well.