Today Google announced the release of MobileNets, a series of TensorFlow vision models built for comparatively low-power, low-speed platforms like mobile devices. In a cross-post on both the Open Source and Research blogs, Google released details about the new visual recognition software. Now even more useful machine learning tools can run natively on your phone's hardware, quickly and accurately. And future tools like Google Lens will be able to perform more functions locally, with less need for mobile data and less waiting.
It’s one thing to run a machine learning network on a system with a ton of hardware power, where you don’t have to worry about things like battery life or sharing resources with other pesky apps or services. But pulling off the same feat on a mobile device is a different thing entirely: battery life is a concern, every operation shares hardware with basics like the UI, and the goal is to maintain a smooth 60fps experience.
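Much of how MobileNets squeeze into that budget comes from depthwise separable convolutions, which split a standard convolution into a cheap per-channel step and a 1x1 channel-mixing step. As a back-of-the-envelope sketch (the layer sizes below are illustrative, not from Google's announcement), the parameter savings look like this:

```python
def standard_conv_params(k, c_in, c_out):
    # A k x k kernel applied across all input channels, for every output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1 x 1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 32 input channels, 64 output channels.
std = standard_conv_params(3, 32, 64)        # 18432 parameters
sep = depthwise_separable_params(3, 32, 64)  # 2336 parameters
print(std, sep, round(std / sep, 1))         # roughly 7.9x fewer parameters
```

That kind of reduction in parameters (and the corresponding multiply-adds) is what makes running these models alongside a live UI plausible at all.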
The new MobileNets vision models will be useful for existing tools like Google Photos, which might be able to pre-process images you take to determine their content. But the greatest use is likely to be with things like the upcoming Google Lens.
For those who might not be familiar, Google Lens is a new addition to the Assistant that was revealed during this year’s I/O keynote. Basically, it’s an image recognition tool that provides information about whatever you point it at, and then lets you act on that information. So not only can you point it at an object and have it identified, but the content of the image can also be put to use. One of the examples