Mark Preston, SEO Strategy Manager


Google Lens is a new(ish) technology announced officially at Google’s I/O developer conference in May 2017. It uses powerful artificial intelligence to analyse images viewed through a smartphone’s camera, or stored and accessed on the device, and serves up information about the subject of the images or completes related tasks based purely on visual recognition.

The science of equipping computing systems with the ability to extract, analyse and understand information from images or video has been around for a long time and is collectively known as ‘computer vision’.
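To make that idea a little more concrete, here is a minimal Python sketch of image labelling using Google’s Cloud Vision API, which exposes the same broad capability that underpins products like Lens. It is purely illustrative: Lens itself isn’t something developers call directly, the file name is a placeholder, and the exact client interface may vary between versions of the google-cloud-vision library.

```python
# Minimal sketch: labelling an image with the Google Cloud Vision API.
# Assumes the google-cloud-vision client library is installed and that
# application credentials are already configured; 'flower.jpg' is a
# placeholder file name.
from google.cloud import vision


def label_image(path: str) -> None:
    client = vision.ImageAnnotatorClient()

    # Read the raw image bytes from disk
    with open(path, "rb") as f:
        content = f.read()

    image = vision.Image(content=content)

    # Request label annotations, e.g. "flower", "petal", "rose"
    response = client.label_detection(image=image)

    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.2f}")


if __name__ == "__main__":
    label_image("flower.jpg")
```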

Rather than a standalone app, Google Lens is more of a capability, designed to integrate with existing Google products to make them more intelligent, namely Google Assistant (Google’s version of Siri or Alexa) and Google Photos.

During an address at the developer conference by Google’s CEO, Sundar Pichai, the tool was shown performing a number of intriguing tricks, including:

  • Telling a user what species of flower they are looking at just by viewing it through a smartphone camera
  • Automatically connecting to a Wi-Fi network using a picture of the network name and password information on the router
  • Providing information about a restaurant including reviews, opening times and menus by viewing the façade through a camera

On top of that, when integrated into Google Assistant, Lens was able to identify a music concert from a picture of an advertisement for the event, with the user then able to see options for purchasing tickets and adding the event to their calendar through voice commands.

Why is Google launching Lens?

I described Google Lens as a ‘new(ish)’ technology at the start of this post, and that’s because Google and other technology companies have been experimenting with similar projects for quite a while.

‘Computer vision’, as mentioned earlier, isn’t that novel, and Google itself had previously worked on the Google Goggles project, an app openly available on versions of the Android operating system as early as 2010 but discontinued in 2014 because it was regarded by Google employees as being “of no clear use to too many people”.

However, fast-forward three years and it seems that technology companies are now embracing visual input as the data source of the future, with scores of businesses looking to jump on the augmented reality and virtual reality bandwagon.

Indeed, more recent announcements have seen Samsung launch its own version of Lens, Bixby Vision, part of the suite of features for its Bixby personal assistant on Galaxy smartphones.

Pinterest has also unveiled a new feature, confusingly also called Lens, which allows users of the social platform to see things like image boards, shopping results or recipe suggestions related to whatever they’ve just snapped with their camera.

Compared to Google’s foray into visual recognition in 2010, the crucial difference now is that its image recognition technology can be plugged into other systems, particularly AI-based search and Google Assistant. That makes the power of Google Lens a whole lot more practical, and clearly now useful to a lot of people.

What does this mean for users?

The implications for all types of business are potentially significant. We’re approaching a not-so-distant future where a house-hunter strolls past a property for sale they like the look of and uses image recognition technology to find the specifications of the building, view images of the interior, find out the asking price and book a viewing, all whilst taking the dog for a walk.

Similarly, users could take pictures of any products they see whilst on the go – such as a pair of shoes they see someone wearing that they would like – and be instantly shown shopping listings for the same product, allowing them to order there and then.

A user flicking through their Google Photos album might come across a picture from a previous holiday. They could use Lens to find out more about a particular place they visited, ask Google Assistant for prices in a few weeks’ time and, in just a few steps, book a four-night city break.

Of course, I’m not suggesting that Lens is going to transform the house-buying process into a five-minute task, but as a means of letting users gather information spontaneously and allowing visual stimuli to almost instantly begin new buying journeys, the sentiment still rings true.

Moreover, because Lens is essentially a tool, rather than a discrete device or program, it can be integrated into a range of different products or applications. Since it isn’t chained to a mobile-only framework, it’s quite easy to envisage Lens working with some kind of Google Glass 2.0 project, permitting hands-free augmented reality on the go, or with any other smart device with image-capture capabilities for that matter.

The longer-term objective for Google

Google has, for quite a while now, positioned itself as a business that wants to assist people in everyday aspects of their lives, but to meet its own ambitions, this assistance must become, in a sense, omnipresent, meaning it can’t be confined to a single device, be it computer, smartwatch, smartphone or otherwise.

This longer-term goal becomes a bit clearer when we look at a couple of quotes from Google’s CEO, Sundar Pichai, both from April 2016:

“In the long run, I think we will evolve in computing from a mobile-first to an AI-first world.”

“Looking to the future, the next big step will be for the very concept of the ‘device’ to fade away. Over time, the computer itself—whatever its form factor—will be an intelligent assistant helping you through your day. We will move from mobile first to an AI first world.”

Whilst Google has certainly made a move into hardware manufacturing with the launch of a range of home and mobile tech such as Google Wi-Fi, Chromecast, Google Home and the Pixel smartphones, a lot of these devices are actually geared up to help integrate Google’s AI into our homes and our lives.

Chromecast essentially extends smartphone functionality to your TV, meaning that Google’s apps and Assistant will likely be able to exert a monopoly over the screens in our lives. With Google Wi-Fi, Google can ensure that poor Wi-Fi doesn’t come between users and its services.

Although Google may not have designs on becoming a real global force in manufacturing, as these products progress towards becoming ambient (always on and always interacting with both users and other devices), there needs to be a unifying system or language that underpins our smartphones, smart homes and smart vehicles and facilitates this communication.

The ideal situation for Google would probably be for its cloud computing platform and ‘AI-first’ data centres to become the ‘brain’ driving all of these things, and it’s this ambition that has the likes of Samsung scrambling to develop rival AI of their own.

A seemingly small but very significant step towards this goal is the recent release of Google Assistant on iPhone via the Google app, a corner of the market previously dominated by Apple’s proprietary assistant, Siri. This opens up Google’s personal assistant, and by extension many of its other products, to around 700 million potential new users.

Can I get Google Lens now?

Unfortunately not. While beta versions of the latest Google app show signs of integration, Lens won’t fully roll out until later this year.