Visual Search: A New Form of Discovery

In the first six months of 2016, search advertising generated $16.3 billion in revenue, with the majority of it going to Google. Brands, businesses and marketers are now fully invested in search as a way to reach potential customers.

However, there's another emerging form of search that deserves serious consideration: visual search.

Whilst visual search is by no means a new phenomenon, it has arguably struggled to make a noticeable impact. Since the launch of TinEye, the first visual search engine, in 2008, image identification technology has matured considerably, aided by the introduction of reverse image search on Google in 2011.

Now, with advances in machine-learning algorithms, image identification programs can recognise and contextualise images in a far more sophisticated manner. Recent features introduced by Pinterest demonstrate how this technology can analyse the individual elements of an image to provide users with content that is closer to their needs.
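To make the idea concrete, here is a minimal sketch of how "visually similar" content can be ranked: each image is reduced to a feature vector (an embedding) by a pre-trained neural network, and candidates are ordered by how close their vectors sit to the query's. This is an illustrative outline using PyTorch and a generic ResNet feature extractor, not a description of Pinterest's actual system; the catalogue file names are hypothetical.

```python
# Minimal visual-similarity sketch: embed images with a pre-trained CNN,
# then rank catalogue images by cosine similarity to a query photo.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Pre-trained ResNet-50 with the classification head removed acts as a
# generic feature extractor (an assumption for illustration only).
model = models.resnet50(pretrained=True)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Return a single L2-normalised feature vector for an image file."""
    image = Image.open(path).convert("RGB")
    with torch.no_grad():
        features = model(preprocess(image).unsqueeze(0)).squeeze(0)
    return features / features.norm()

# Hypothetical catalogue of images to search over.
catalogue = ["sofa.jpg", "lamp.jpg", "armchair.jpg"]
index = torch.stack([embed(p) for p in catalogue])

# Rank catalogue images by cosine similarity to the user's photo.
query = embed("user_photo.jpg")
scores = index @ query
for path, score in sorted(zip(catalogue, scores.tolist()), key=lambda x: -x[1]):
    print(f"{path}: {score:.3f}")
```

In a production system the embeddings would be pre-computed and stored in an approximate nearest-neighbour index rather than compared one by one, but the underlying idea is the same.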

As the technology becomes more sophisticated and embeds itself in our daily Internet usage, it could well establish itself as an important cog in brand and product discovery.

Satisfying Consumer Demand for Convenience

Over the last few years there has been a well-documented shift across the Internet from text-based to image-led content. Forums are no longer the sole space for peer-to-peer interaction; we now share photos and videos via Instagram, Snapchat and Facebook as our preferred way of telling our stories to friends, family and strangers.

In a sense, images have become the de facto universal language of the Internet, allowing us to connect across the globe without the need to share a spoken language.

Just as the act of communicating via images has become the new normal, the idea of using images to actively search for content (be it other images, videos or websites) is also now becoming more commonplace.

Pinterest have quickly looked to establish themselves as a major player in this field. Three years ago, they acquired the image recognition and visual search startup VisualGraph, an investment seen as a way for Pinterest to improve how they recommend content and deliver relevant ads.

Users can now search for similar content via a magnifying tool, which can be used to pinpoint specific objects, products, patterns and colours within an image. For desktop users, a downloadable plug-in allows 'find similar' visual searches while browsing any website.
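The "pinpoint an object" step can be thought of as cropping the user-selected region before it is matched, so that only the chosen object drives the search rather than the whole scene. Below is a minimal sketch assuming the embedding-and-ranking approach from the earlier example; the file name and box coordinates are purely illustrative and do not reflect Pinterest's implementation.

```python
# Sketch of region-based visual search: crop the user's selected box out of
# the image, then embed and rank that crop against the catalogue exactly as
# in the earlier sketch. File name and coordinates are illustrative only.
from PIL import Image

def select_region(path, box):
    """Return the region (left, upper, right, lower) the user highlighted."""
    return Image.open(path).convert("RGB").crop(box)

# E.g. the user draws a box around a lamp in a living-room photo; the crop is
# saved and then embedded and ranked like any other query image.
lamp_crop = select_region("living_room.jpg", (120, 60, 360, 320))
lamp_crop.save("lamp_query.jpg")
```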

With an estimated 2 billion searches each month, Pinterest are working to ensure they are well placed to support their users as visual search becomes the norm. These features demonstrate an understanding that we, as consumers, find active discovery a rewarding and enriching experience. Rather than merely recommending 'other content', Pinterest give users a greater ability to personalise their discovery journey.

They look to extend this further with their new mobile visual search feature, Lens. The tool (currently in beta) analyses photos taken by users, identifies the objects within them and then presents visually similar content.

The feature is comparable to CloudSight's CamFind app, which also identifies items within a photo taken on a smartphone, providing users with a range of information including related images, local shopping results, price comparisons and websites.

Both CamFind and Pinterest's Lens reflect our desire to discover on the go. Furthermore, the ability to use our smartphone cameras for search makes the process far more dynamic and rewarding. For marketers, there is now an immediate need to think about how visual search can be optimised to create a connection with potential (and active) customers through creative means.

Final Say

Visual search tools could play a key role in streamlining the transition between discovery and purchase.

Google recently announced that they'll be adding new e-commerce capabilities to Google Images. This feature will use visual recognition technology to connect consumers directly with identical or similar products contained within a photo.

John Lewis has already demonstrated how AI visual search technology can be integrated into e-commerce platforms to help people explore product ranges by specific visual similarities. Much like CamFind, John Lewis's 'Find Similar' tool allows customers to photograph a garment, after which the app presents identical and similar items from the high-end retailer's collection. Likewise, Amazon's short-lived Fire smartphone leveraged visual recognition technology in its Firefly application, which allowed users to find products on Amazon's website by taking photos or videos of commercial items.

While this technology will in one sense give individuals greater freedom to browse, it may also compress the act of searching. As a consequence, discovery, as part of the consumer journey, may become more focused and precise in delivering appropriate content.