Mobile Visual Search About to Revolutionize Shopping Thanks to Toronto’s Slyce
Taking a concert selfie and sharing it with friends and family via Facebook has become part of daily communication habits for most Americans. This is thanks in part to internet connectivity, but also to the camera embedded in our phones. Many have grown fond of using a smartphone camera to scan and send documents, as well as to get information and make purchases by scanning barcodes. But what is next in line for image technologies and so-called “machine vision”? According to a 2014 MIT Technology Review article, machines are getting frighteningly good at recognizing objects, though before 2012 that was far from the case.
The annual ImageNet Large Scale Visual Recognition Challenge is what spurred the creation of a game-changing algorithm that enabled computers to perform image recognition tasks far more rapidly and accurately. Can you imagine taking a picture of an eye-catching handbag-and-shoe combination, getting information about quality and price, buying with one click, and sharing with friends and family? This takes impulse shopping to a whole new level and will revolutionize consumerism forever.
Founded in 2012 and headquartered in Toronto, the very same city where the winning ImageNet algorithm was born, leading image recognition company Slyce is making this technology available to its customers right now. With endorsements from several major retailers such as Neiman Marcus, Slyce offers “mobile visual search” with about 95% accuracy. In other words, the application analyzes the picture you have taken with your phone and then directs you to the product’s website, almost every time. Slyce has raised millions of dollars in funding, including a recent round of just under $10 million, and plans to grow that amount over the next year.
Just as there was once life before the internet, these dark ages of living without “snap-to-buy” technology will soon be hard to imagine, thanks to Slyce’s product recognition technology. And it’s not just image-based searches that we will perform from now on, but voice and audio searches as well.