If a photo is worth a thousand words, then for Google it may be worth a thousand searches. Having declared a year ago that artificial intelligence (AI) was at the forefront of its R&D, the tech giant has now merged AI with imaging in a suite of product announcements.
"The mobile camera is quickly becoming one of the most important gateways (and translators) between the real world and the superpower that is the internet," Tom Buontempo, president of New York-based ad agency Attention told Campaign.
The headline act is Google Lens, introduced by Google chief Sundar Pichai at the firm’s I/O developer conference as “a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on what you are looking at.”
It’s an app that uses image recognition to identify objects in your camera’s view in real time. It means you can point a smartphone at a bird and be told exactly what it is.
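To make the idea concrete, here is a minimal sketch of that kind of recognition using a generic pre-trained classifier, MobileNetV2, rather than anything Google has disclosed about Lens; the model choice and the "bird.jpg" file are purely illustrative.

```python
# Illustrative only: classify a single photo with an off-the-shelf ImageNet model.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")  # covers ~1,000 everyday object classes

def identify(image_path):
    # Resize the frame to the 224x224 input the model expects.
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
    # Return the top three labels with their confidence scores.
    return decode_predictions(model.predict(x), top=3)[0]

print(identify("bird.jpg"))  # e.g. [('n01530575', 'brambling', 0.82), ...]
```

Lens goes much further, pairing the label with search results and Assistant actions, but the starting point is a classification step along these lines.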
Google may have its fingers in a lot of pies, but a search-driven internet platform remains at its core, and Google Lens extends those capabilities by being integrated into Google Assistant. Instead of searching with text, users can combine voice and images, or choose between the two.
It’s worth noting that Pinterest already has a similar product on the market. Although Pinterest’s Lens doesn’t integrate with a voice-based assistant, it does link photos with related search-based suggestions.
Most of us are familiar with GPS, the global positioning system, but that technology can only get you so far. Though terrific for navigating large outdoor areas, GPS has real limitations once you are inside a building, away from the satellites.
Enter VPS, or visual positioning system. Using Google’s augmented reality platform Tango and information from Google Maps, VPS looks for recognisable objects around you to work out where you are, with an apparent accuracy of just a few centimetres.
Google’s head of virtual reality, Clay Bavor, said: “GPS can get you to the door and then VPS can get you to the exact item that you’re looking for.”
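At its simplest, the “recognisable objects” idea can be pictured as matching features in the current camera frame against reference images whose locations are already known. The sketch below uses OpenCV’s ORB features purely as a stand-in; Google has not said what VPS actually runs on, and the file names are placeholders.

```python
# Toy illustration of visual positioning: match the current camera frame
# against a stored reference image taken at a known spot in the building.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

reference = cv2.imread("shelf_A3.jpg", cv2.IMREAD_GRAYSCALE)   # known location
frame = cv2.imread("current_frame.jpg", cv2.IMREAD_GRAYSCALE)  # what the phone sees

_, ref_desc = orb.detectAndCompute(reference, None)
_, frame_desc = orb.detectAndCompute(frame, None)

matches = matcher.match(ref_desc, frame_desc)
# Lots of strong matches suggests the user is standing in front of shelf A3.
good = [m for m in matches if m.distance < 40]
print(f"{len(good)} strong matches with the reference location")
```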
Intel and Qualcomm have announced reference Project Tango devices for developers. These use a combination of motion tracking (via accelerometer, gyroscope and other sensors), depth perception (via Intel’s RealSense 3D camera) and ‘area learning’, a tool that maps and stores the area around the device.
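The motion-tracking part of that recipe typically means fusing sensors with complementary strengths. The toy example below shows one common approach, a complementary filter that blends a gyroscope’s fast but drifting readings with an accelerometer’s noisy but stable gravity estimate; the numbers are made up, not taken from a Tango device.

```python
# Minimal complementary-filter sketch: fuse gyroscope rate and accelerometer
# tilt into a single orientation estimate.
import math

def fuse_tilt(prev_angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    # Integrate the gyroscope rate (deg/s) for short-term accuracy,
    # then nudge the result towards the accelerometer's gravity-based angle.
    gyro_angle = prev_angle + gyro_rate * dt
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))
    return alpha * gyro_angle + (1 - alpha) * accel_angle

angle = 0.0
for gyro, ax, az in [(1.5, 0.02, 0.99), (1.4, 0.03, 0.99), (1.6, 0.05, 0.98)]:
    angle = fuse_tilt(angle, gyro, ax, az, dt=0.01)
print(f"estimated tilt: {angle:.3f} degrees")
```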
The first Tango-equipped smartphone, Lenovo’s £500 Phab 2 Pro, and Asus’s Tango-equipped ZenFone AR handset are already available, but it will take more than two [devices] to Tango...
VPS can be combined with audio directions from Google Assistant to guide users around nearby objects and obstacles, which could be particularly useful for the visually impaired. It will also be incorporated into Google Lens, suggesting that the company expects to support it across a large number of devices in the future.
In other ‘AI meets imaging’ announcements, new versions of the Daydream virtual reality headset that work without a smartphone are in development; Google is partnering with HTC and Lenovo on these standalone devices. The headsets will use location-tracking technology to detect when you’re walking around.
In the long term, Google views both virtual and augmented reality as part of ‘immersive computing’, where devices operate in a manner that’s closer to how we see and interact with the world.
To this end, Google has also pimped up its Photos app (which has half a billion users) with software that takes not only all the fun, but basically all the truth, out of snapping a still, whether for your holiday album or for Magnum.
Taking a picture of that gorilla through the bars at a zoo? No problem: Google Photos will recognise the obstruction and simply erase the bars, filling in the picture using more cunning AI code.
“We can remove the hard work, remove the obstruction, and have the picture of what matters to you, in front of you,” said Pichai.
Photo editors like Photoshop already offer intelligent eraser tools that remove objects by blending in nearby colours, but we rarely see the feature built into mobile applications.
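For a flavour of how such an eraser works, the sketch below leans on OpenCV’s stock inpainting algorithm, which fills a masked region from its surroundings; the file names are placeholders, and Google has not detailed its own method.

```python
# Sketch of an "intelligent eraser": mark the unwanted object (the cage bars)
# in a mask, then let inpainting fill that region from the surrounding pixels.
import cv2

photo = cv2.imread("gorilla_with_bars.jpg")
# White pixels in the mask mark what should be removed (here, the bars).
mask = cv2.imread("bars_mask.png", cv2.IMREAD_GRAYSCALE)

# Telea's method propagates nearby texture and colour into the masked area.
cleaned = cv2.inpaint(photo, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("gorilla_clean.jpg", cleaned)
```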
Naturally, Google Lens is coming to Google Photos so you can ask questions about pictures you’ve already taken.
One potentially handy example: if you’re looking at a saved screenshot of a company’s website and it contains a phone number, you can just tap the number to call.
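Under the hood that presumably boils down to running text recognition over the screenshot and spotting number-like strings. The sketch below uses the open-source Tesseract engine (via pytesseract) and a rough regex as stand-ins; it assumes the Tesseract binary is installed, and the screenshot path is hypothetical.

```python
# Illustrative only: OCR a saved screenshot and pull out phone-number-like strings.
import re
import pytesseract
from PIL import Image

text = pytesseract.image_to_string(Image.open("saved_screenshot.png"))
phone_numbers = re.findall(r"\+?\d[\d\s\-()]{7,}\d", text)
print(phone_numbers)  # e.g. ['+44 20 7946 0000'] -- ready to hand to the dialler
```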
The app will also employ facial recognition to make sharing your photos on social networks even easier (there was a problem?). When the app recognises someone in a picture whom you may know, it will automatically suggest sending the pic to that friend or group of friends in your social network (based on connections in Gmail and the Google Photos app). As other reporters have pointed out, this opens up all sorts of privacy-related issues connected with children.
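The first step of that pipeline, finding faces at all, can be sketched with OpenCV’s bundled Haar cascade, as below; matching a detected face to a named contact and to your Gmail connections is the part Google keeps to itself, and is not shown here.

```python
# Simplified sketch of the first stage of share suggestions: detect faces.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

photo = cv2.imread("group_photo.jpg")  # placeholder file name
gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"found {len(faces)} faces that could trigger a sharing suggestion")
```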
And, of course, what Google gets out of all this image sharing is more data and more information about you and your relationships to fuel its ever-evolving algorithms.
You get the picture.