Call it an evolution of spatial computing and positional tracking, sort of a next generation of what Google’s Project Tango hopes to accomplish. Or maybe it’s the next step in the evolution of Deep Learning. Really, it seems to be both, but in any case, Movidius and Google are teaming up to get the benefits of Deep Learning onto mobile devices.
What Movidius brings to the table is efficient visual processing with the MA2450, its flagship Myriad 2 VPU. Google is licensing both the hardware and software. Using models born of Google’s Deep Learning databases, the Myriad 2 chip will ostensibly be able to leverage that Deep Learning on-device, with no Internet connection required.
This Goes On What Now?
Details on exactly which devices will use this technology are scarce; Movidius would not divulge any information on that front when we spoke with the company, saying only that it will be used in “next-gen” Google products. However, we can infer quite a bit from what Remi El-Ouazzane, CEO of Movidius, told us in an interview.
He described this collaboration as the next chapter in the story of Google and Movidius, and he said specifically that it’s a different chapter than Project Tango, although the two are complementary.
That’s not hard to understand: Project Tango is designed to offer spatial awareness and positional tracking, and we’re seeing that technology about to come to market via Intel and Lenovo. But if you’ve seen any of the Project Tango demos, you know that there’s more work to be done.
For example, you can aim the camera of your Project Tango phone at a couch, and it will dynamically measure the couch’s dimensions in real time. With Deep Learning on board, though, the device can “look” at the couch and understand that it is a couch, what color it is, who probably makes it, and so on.
That’s an example that doesn’t really do justice to the power of Deep Learning, but you get the idea: Deep Learning has to do with teaching machines to comprehend. “Positional tracking matters, but the ability -- with high accuracy -- to recognize and detect objects matters as well,” said El-Ouazzane.
A Sort Of Reversal Of The Cloud Trend
Data is such a fascinating beast. In recent years, more and more of it has been moved to the cloud, and indeed, that has allowed for technologies like Deep Learning to emerge. We have these massive databases, and we’ve been able to teach machines to leverage that data and learn.
However, that creates a problem: unless you’re connected to the Internet (and can therefore access those databases) and also have a supercomputer to process everything, you miss out on what Deep Learning offers.
The solution to that problem is already here, though, in the form of neural networks. Basically, you can load a model derived from a database onto a device, and that device can use that neural network to understand what it “sees.” This happens locally, on the device itself, and it would appear that the Myriad 2 chip is sufficiently powerful and efficient to handle it.
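To make the idea concrete, here’s a minimal sketch of that workflow: a model trained in the cloud is shipped to the device as a set of weights, and classification then runs entirely locally, with no network connection. Everything here is illustrative, not Movidius’ or Google’s actual software stack; the tiny two-layer network and all names are assumptions for demonstration.

```python
import numpy as np

def load_model():
    """Stand-in for loading pretrained weights from on-device storage.
    A real deployment would read weights trained offline against a large
    database; here we fabricate small random weights for demonstration."""
    rng = np.random.default_rng(0)
    return {
        "W1": rng.standard_normal((64, 16)),  # input features -> hidden layer
        "W2": rng.standard_normal((16, 3)),   # hidden layer -> 3 object classes
    }

def classify(features, model, labels=("couch", "chair", "table")):
    """Run a forward pass locally -- no Internet connection involved."""
    hidden = np.maximum(features @ model["W1"], 0)  # ReLU activation
    logits = hidden @ model["W2"]
    probs = np.exp(logits - logits.max())           # numerically stable softmax
    probs /= probs.sum()
    return labels[int(np.argmax(probs))], float(probs.max())

model = load_model()
# Fake feature vector standing in for processed camera input
features = np.random.default_rng(1).standard_normal(64)
label, confidence = classify(features, model)
print(label, round(confidence, 3))
```

The point of the sketch is simply that once the weights are on the device, nothing in the inference path requires the cloud; the heavy lifting happened at training time.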
El-Ouazzane did not elaborate on how a device would send its acquired data back to Google to be added to the database, but presumably it must, somehow. Without that exchange of information, neither the neural net at large nor the model on-device could grow.
What we’re seeing here, to me, is a certain parity between the importance of the device and the importance of the cloud. They need each other. Therefore, Google and Movidius need each other.
We do not yet have any timeline for when we might see this technology on shipping devices, nor what sort of devices exactly those will be (our best guess is a dev tablet followed by a dev smartphone, just like Project Tango).
However, both key technologies -- Google’s machine learning databases and Movidius’ Myriad 2 VPU and software package -- already exist, so it’s likely that there’s already a prototype device on a workbench somewhere inside the halls of Google.