Google Bakes Machine Learning Into Android O With TensorFlow Lite, New Framework

Google announced that the next version of its mobile operating system, Android O, will include a new feature called TensorFlow Lite to offer developers improved on-device AI for their applications. This could lead to better speech recognition, computer vision, and other machine learning-driven features within Android, and it highlights tech companies' rush to bring AI down from their data centers and onto all of your devices.

TensorFlow is Google's open source machine intelligence software library. Developers can use it to jumpstart their machine learning efforts, allowing them to focus on differentiating their products instead of starting from scratch. Google also uses TensorFlow in many of its own products; the project started as research by the Google Brain Team within Google's Machine Intelligence research organization.
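To see what a library like TensorFlow spares developers from, here's a deliberately from-scratch sketch in plain NumPy (not TensorFlow code): a hand-derived gradient-descent loop for a one-variable linear model, the kind of low-level plumbing such libraries automate, differentiate, and accelerate for you.

```python
import numpy as np

# Fit y = w*x + b from scratch; frameworks like TensorFlow compute
# these gradients automatically instead of requiring them by hand.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5                          # ground truth: w=3.0, b=0.5

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)   # d(MSE)/dw, derived by hand
    grad_b = 2 * np.mean(pred - y)         # d(MSE)/db
    w -= lr * grad_w                       # gradient-descent update
    b -= lr * grad_b
```

After 500 steps the parameters converge to the true values; multiply this bookkeeping by millions of parameters and you get a sense of why developers reach for a library.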

Google's vice president of Android engineering, Dave Burke, said at I/O that TensorFlow Lite is "a library for apps designed to be fast and small yet still enabling state-of-the-art techniques like convnets and LSTMs." He also said that Android O will introduce "a new framework" for hardware-accelerated neural computation, and that TensorFlow Lite will use a new neural network API to "tap into silicon-specific accelerators."

All of these additions will work together to "power a next generation of on-device speech processing, visual search, augmented reality, and more," Burke said.

"As Android continues to take advantage of machine learning to improve the user experience," Google said in a blog post, "we want our developer partners to be able to do the same." That's where TensorFlow Lite comes in. Android O itself is set to debut later this summer, but Google said TensorFlow Lite and the new framework won't arrive at launch; they'll come in "a maintenance update to O later this year."

Here's what the company said about TensorFlow Lite:

TensorFlow Lite is specifically designed to be fast and lightweight for embedded use cases. Since many on-device scenarios require real-time performance, we’re also working on a new Neural Network API that TensorFlow can take advantage of to accelerate computation.
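Google hasn't detailed how TensorFlow Lite gets "fast and lightweight" here, but a standard way to shrink a model for embedded use is quantization: storing 32-bit float weights as 8-bit integers. The NumPy sketch below illustrates the general affine-quantization idea only; it is not TensorFlow Lite code, and the function names are made up for illustration.

```python
import numpy as np

def quantize(weights, num_bits=8):
    """Map float32 weights onto signed integers (affine quantization)."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = float(weights.max() - weights.min()) / (qmax - qmin)
    zero_point = round(qmin - weights.min() / scale)
    q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax)
    return q.astype(np.int8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights for computation."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.array([[-1.0, 0.0, 0.5], [1.5, -0.25, 2.0]], dtype=np.float32)
q, scale, zp = quantize(weights)
approx = dequantize(q, scale, zp)
# The int8 tensor is 4x smaller than float32, and each weight is
# recovered to within half a quantization step (scale / 2).
```

Beyond the storage savings, integer arithmetic is where a dedicated Neural Network API earns its keep: mobile DSPs and other accelerators can run 8-bit math far more efficiently than 32-bit floating point.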

Enabling on-device machine learning has become an area of focus for some tech companies. Nvidia, for example, has tried to position its GPUs as the ideal hardware for deep neural network training and inference, and Movidius has developed "Vision Processing Units" (VPUs) specifically for on-device machine learning. You can learn more about how both companies have approached this problem in our report on client-side deep learning.

On the software side of things, Apple made a big deal of on-device deep learning when it announced iOS 10 back in June 2016. Facebook announced its Caffe2Go project, which is meant to put "real-time AI in the palm of your hand," in November 2016. Microsoft has also worked to bring AI to your devices with Story Remix, an upcoming Windows 10 app that uses AI to help you edit your home videos and tinker with mixed reality content.

Google hasn't been resting on its laurels. The company also announced a new Cloud TPU at I/O that promises to be 50% faster than Nvidia's Volta-based Tesla V100 accelerator, even though Nvidia built "Tensor Cores" right into that device. The Cloud TPU will help improve Google's cloud-based AI; TensorFlow Lite is supposed to help the company and Android developers do the same on-device.

What does this mean for you? Well, it should result in smarter apps that don't require an internet connection to offer their best features. Having to be connected is one of the most significant drawbacks of cloud-based AI, especially on mobile devices. If at least some of those features can use embedded machine learning--even if only as a backup--you would no longer have to worry about apps breaking the moment you go offline.
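That backup pattern is simple to sketch. In the hypothetical client below, every name is illustrative (this is not a real Android or TensorFlow Lite API): the app prefers the cloud model and falls back to an embedded one the moment the network disappears.

```python
class FallbackClassifier:
    """Illustrative only: prefer the cloud model, fall back on-device.

    `cloud_predict` and `local_predict` are made-up stand-ins for a
    network call and an embedded-model inference, respectively.
    """

    def __init__(self, cloud_predict, local_predict):
        self.cloud_predict = cloud_predict
        self.local_predict = local_predict

    def predict(self, features):
        try:
            return self.cloud_predict(features)   # best accuracy, needs network
        except ConnectionError:
            return self.local_predict(features)   # embedded backup, works offline

# Toy stand-ins: the "cloud" is unreachable, the local model still answers.
def unreachable_cloud(features):
    raise ConnectionError("no network")

def tiny_local_model(features):
    return "cat" if sum(features) > 0 else "dog"

model = FallbackClassifier(unreachable_cloud, tiny_local_model)
result = model.predict([0.2, 0.9])  # network is down, so this runs on-device
```

The on-device answer may be less accurate than the cloud one, but the app keeps working, which is exactly the trade-off Google is pitching.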

There's also the potential to keep more information on-device. That could in turn make it more secure--even with the rise of end-to-end encryption, you're often better off keeping personal data offline than sending it to who-knows-where for processing. Microsoft's Cortana, for instance, isn't worrisome because it can help you remember to keep your emailed promises; its data collection is worrisome because that information has to be sent off to Microsoft's servers. The continued rise of on-device AI could help reduce those privacy and security worries.

Nathaniel Mott
Freelance News & Features Writer

Nathaniel Mott is a freelance news and features writer for Tom's Hardware US, covering breaking news, security, and the silliest aspects of the tech industry.