Google’s New A.I. Can Tell Exactly Where a Photo Was Taken
A new neural network is better than humans at sussing out the location of a picture
Where was that picture taken? Computers were once at a disadvantage to humans, who can draw on context and memory to deduce an image's location. But those days may be drawing to a close. As Julie Morgenstern reports for the MIT Technology Review, a new neural network developed by Google can outguess humans more often than not, even with photos taken indoors.
It’s called PlaNet, and it uses a photo’s pixels to determine where it was taken. To train the neural network, researchers divided Earth into thousands of geographic “cells,” then fed over 100 million geotagged images into the network. Some of the images were used to teach the network to figure out where an image fell on the grid of cells; the rest were used to validate its accuracy.
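To get a feel for how that turns geolocation into a classification problem, here is a minimal Python sketch of the cell idea. It uses a made-up fixed latitude/longitude grid purely for illustration; PlaNet’s actual cells are adaptive, finer where photos are dense and coarser elsewhere.

```python
# Toy sketch: assign a geotagged photo to a cell by bucketing its
# latitude/longitude on a fixed grid. The cell size below is an assumption
# for illustration, not PlaNet's actual (adaptive) partitioning.

CELL_SIZE_DEG = 5.0  # hypothetical cell size in degrees

def cell_id(lat: float, lon: float) -> int:
    """Map a latitude/longitude pair to an integer cell label."""
    rows = int(180 / CELL_SIZE_DEG)   # latitude bands
    cols = int(360 / CELL_SIZE_DEG)   # longitude bands
    row = min(int((lat + 90) / CELL_SIZE_DEG), rows - 1)
    col = min(int((lon + 180) / CELL_SIZE_DEG), cols - 1)
    return row * cols + col

# Each training image then becomes a (pixels, cell label) pair, and the
# network learns to predict the cell label from the pixels alone.
print(cell_id(48.8584, 2.2945))    # a photo near the Eiffel Tower
print(cell_id(40.6892, -74.0445))  # a photo near the Statue of Liberty
```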
Then things got serious: The researchers fed in another 2.3 million geotagged images and put the network to the test. PlaNet figured out where 3.6 percent were taken with “street-level accuracy,” another 10.1 percent at the city level, 28.4 percent at the country level and 48 percent at the continent level. Though not perfect, that’s pretty good for a computer.
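Those accuracy levels come down to how far a guess lands from a photo’s true location. The sketch below shows one way such levels could be scored, using the great-circle distance between the predicted and actual coordinates; the distance thresholds are assumptions chosen for illustration, not necessarily the exact ones used in the PlaNet work.

```python
# Sketch: score a location guess as street/city/country/continent-level
# by its great-circle (haversine) distance from the true location.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Illustrative thresholds (km) for each accuracy level (assumed values).
LEVELS = [("street", 1), ("city", 25), ("country", 750), ("continent", 2500)]

def accuracy_level(pred, truth):
    dist = haversine_km(*pred, *truth)
    for name, limit in LEVELS:
        if dist <= limit:
            return name
    return "off by more than a continent"

# Example: guessing Lyon for a photo actually taken in Paris is roughly
# country-level accurate (a few hundred kilometers off).
print(accuracy_level((45.76, 4.84), (48.86, 2.35)))
```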
Finally, researchers put their network to the ultimate test, pitting it against people to see how it compared to their best attempts to guess a location. PlaNet guessed better than humans 56 percent of the time, and its wrong guesses were off by a median of only about 702 miles. In contrast, the human subjects’ wrong guesses were over 1,400 miles off.
If all of this reminds you of The Terminator’s evil Skynet system, which controlled military hardware before it became self-aware and turned on humanity, you’re not alone. The comparison has already been made, and given the network’s superhuman skills, it’s pretty apt. Well, except for the sentient, evil part.
Researchers think that one day, neural networks will be incorporated into things like cell phones to perform ever more complex analyses and even teach one another. But for now, the self-organizing systems seem content to figure out where photos were taken and create trippy, gallery-worthy art.