So the NSA knows whom we call and how long we talk to them. Depending on whom you believe, they might even have access to our emails and social communications. Some find this troubling and an invasion of our privacy. Others, however, are not really surprised or shocked.
Government aside, do you know how much information companies like Google and Facebook have about us? Do you know what these companies can do with this information? I thought it would be interesting to delve a little deeper into the data (especially the visual data) held by some of our favorite Internet companies, and how they could use it in the future.
With the massive number of photos and videos taken and uploaded to Facebook, Instagram, Pinterest, and other social media, we will soon get to a point where there are hundreds (or even thousands) of photos of each person online. Obviously, these images could be used to recognize your face (generally, more images result in more accurate facial recognition). Perhaps even more obviously, the images carry metadata that can identify when and where each photo was taken.
Most people are fairly familiar with this type of information extraction. A significant amount of information, however, can also be extracted from the actual visual content of the images (i.e. aside from any tags or metadata). The obvious attributes are your ethnicity, age, and sex. But did you know that it is possible to analyze your images to find out what skin products are best for you? Or what hairstyle or makeup would look best for your given face type, color, and shape?
As an example, we recently created a Facebook Foundation Finder application that recommends makeup foundation colors by analyzing your Facebook images. Try it on just one image and the results are mixed. But when analyzing hundreds of images, it is surprising how accurate the recommendations can be.
Similar things can be done for other products. For example, recommending outfits based on your personal style (as learned from your photos), or suggesting beauty routines that match the products you wear most often. For marketers, the possibilities are almost endless. For consumers, these technologies offer both significant benefits (e.g. instant product recommendations and face/skin analysis) and privacy concerns. However, it is hard to imagine a future where companies do not use information that is readily available about consumers to better market products.
Sometimes, it is not only what you communicate or the photos you take, but your specific actions related to your images. If you tag or comment on a specific luxury brand in a photo, that instantly creates an advertising opportunity and provides more information about you. In fact, comments are often a very useful source of information about images — information that cannot easily be extracted by computer vision. For instance, if you have a Louis Vuitton purse in a photo and a user comments on it, that association suddenly makes you a more interesting target for prestige brands. Likewise, if a photo is taken near a prestige store (even if that store location is identified not by you, but by someone else who has checked in on Foursquare), that again creates a lucrative advertising opportunity.
We have only begun to scratch the surface of visual information extraction. Today, the direct metadata (comments, direct tags, location information) can be used for ad optimization. Indirect relational metadata, however, has the potential to both amaze advertisers and scare privacy advocates.
Essentially, mining relational metadata would work as follows: let's say you are tagged in 100 photos, and in many of those photos you are with your best friends and family. Now, imagine some of those close friends take photos at a Hugo Boss store. Suddenly, their perceived affinity for the Hugo Boss brand increases — and so does yours, through your close relationship with them. You can extend this further, essentially reflecting what friends of friends like onto you. This will not always be accurate, but given the large volume of information, it is surprising how accurate the final extracted information becomes.
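The relational inference described above can be sketched in a few lines of Python. Everything here is hypothetical: the closeness weights (co-tag counts), the direct affinity scores, and the damping factor are made-up inputs for illustration, not a description of any real company's system.

```python
# Sketch: propagate brand affinity through a social graph.
# Closeness is approximated by how often two people are tagged in the
# same photos; a friend's direct brand affinity "leaks" onto you in
# proportion to that closeness. All numbers are illustrative.

def inferred_affinity(person, co_tag_counts, direct_affinity, damping=0.5):
    """Blend a person's own brand signal with their friends' signals.

    co_tag_counts: {(a, b): number of photos tagging both a and b}
    direct_affinity: {person: score from their own photos/check-ins}
    """
    own = direct_affinity.get(person, 0.0)

    # Collect friends and their closeness weights (co-tag counts).
    weights = {}
    for (a, b), n in co_tag_counts.items():
        if person in (a, b):
            friend = b if a == person else a
            weights[friend] = weights.get(friend, 0) + n

    total = sum(weights.values())
    if total == 0:
        return own

    # Weighted average of friends' direct affinities, damped so that
    # second-hand evidence counts for less than first-hand evidence.
    friend_signal = sum(
        (n / total) * direct_affinity.get(f, 0.0)
        for f, n in weights.items()
    )
    return own + damping * friend_signal

# You appear in 40 photos with Alice (who frequents a Hugo Boss store)
# and 10 with Bob, but have no direct signal of your own.
co_tags = {("you", "alice"): 40, ("you", "bob"): 10}
direct = {"alice": 0.8, "bob": 0.2}
print(inferred_affinity("you", co_tags, direct))  # → 0.34
```

Extending this to friends of friends is just another round of the same propagation, with stronger damping at each hop.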
This is a classic case of big data, where rough and oversimplified assumptions and noisy data result in very accurate final conclusions due to the extremely large amount of data available. As an example of how this works, a few years ago our team at the University of Toronto built a map-making application that would take a user's roughly tagged photos (i.e. a few tags in each photo identifying the buildings) and, given enough images, would build a detailed and surprisingly accurate map. In the case of visual social advertising, the map being built is not of physical objects, but of your social relationships to both individuals and brands.
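The "noisy inputs, accurate aggregate" effect is easy to demonstrate. In the toy simulation below (all numbers are made up and bear no relation to the actual map-building system), each tag is an individually unreliable estimate of a building's position, yet the average of thousands of them lands very close to the truth:

```python
import random

# Each "tag" is a noisy estimate of a building's true position
# (one-dimensional here for simplicity). Individual tags are off by
# ~25 units on average; the aggregate of 10,000 tags is not.
random.seed(42)

true_position = 100.0
tags = [true_position + random.gauss(0, 25) for _ in range(10_000)]

typical_error = sum(abs(t - true_position) for t in tags) / len(tags)
aggregate_error = abs(sum(tags) / len(tags) - true_position)

print(f"typical single-tag error: {typical_error:.1f}")
print(f"error of the aggregate:   {aggregate_error:.2f}")
```

The error of the average shrinks roughly with the square root of the sample size, which is why sheer volume can turn sloppy individual signals into a precise collective one.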
In the end, there are applications where this data extraction is useful (e.g. more relevant product recommendations). Whenever you upload a photo or video, however, keep in mind the story that is a part of that photo/video. All of these stories, along with your emails, social connections, and actions, are adding up to your social profile, which, picture by picture and word by word, is enabling the major tech companies (and their advertisers) to learn more about you than you might even know yourself.
[Image courtesy Travis Hornung]