One Step Closer to a Brain
It sounds funny, but when Google created a huge computer network that learned to identify cats in YouTube videos, it was a big leap forward for artificial intelligence.
A few months ago Google shared with us another challenge it had taken on. It wasn’t as fanciful as a driverless car or as geekily sexy as augmented reality glasses, but in the end, it could be bigger than both. In fact, it will likely make both of them even more capable.
What Google did was create a synthetic brain, or at least the part of one that processes visual information. Technically, it built a machine version of a neural network: a small army of 16,000 computer processors that, working together, was actually able to learn.
At the time, most of the attention focused on what all those machines learned, which mainly was how to identify cats on YouTube. That prompted a lot of yucks and cracks about whether the computers wondered why so many of the cats were flushing toilets.
But Google was going down a path that scientists have been exploring for many years: the idea of using computers to mimic the connections and interactions of human brain cells to the point where the machines actually start learning. The difference is that the search behemoth was able to marshal resources and computing power that few other companies can match.
The face is familiar
For 10 days, nonstop, 1,000 computers (using those 16,000 processors) examined random thumbnail images taken from 10 million different YouTube videos. And because the neural network was so big (it had more than a billion connections) it was able to learn to identify features on its own, without any real human guidance. Through the massive amount of information it absorbed, the network, by recognizing relationships within the data, essentially taught itself the concept of a cat.
Impressive. But in the realm of knowledge, is this cause for great jubilation? Well, yes. Because eventually all the machines working together were able to decide which features of cats merited their attention and which patterns mattered, rather than being told by humans which particular shapes to look for. And from the knowledge gained through much repetition, the neural network was able to create its own digital image of a cat’s face.
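For readers curious about the mechanics, what the machines were doing is what researchers call unsupervised feature learning. The sketch below, in Python, is a toy illustration of the idea rather than Google’s actual system (which was far deeper and more elaborate): a tiny autoencoder learns to reconstruct its inputs, and in doing so discovers recurring features of the data without any human-supplied labels. Every size and data point here is invented for illustration.

```python
# Toy unsupervised feature learning: an autoencoder trained to
# reconstruct its own input. No labels are given; the hidden units
# end up encoding whatever structure recurs in the data. This is a
# hypothetical sketch, not Google's code.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 1,000 random 8x8 grayscale "thumbnails", flattened.
# In Google's experiment the inputs were frames sampled from YouTube.
X = rng.random((1000, 64))

n_hidden = 16                             # learned feature detectors
W1 = rng.normal(0, 0.1, (64, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))   # decoder weights
lr = 0.01                                 # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    H = sigmoid(X @ W1)     # hidden features
    X_hat = H @ W2          # attempted reconstruction of the input
    err = X_hat - X         # reconstruction error

    # Plain gradient descent on the mean squared error.
    grad_W2 = H.T @ err / len(X)
    grad_H = (err @ W2.T) * H * (1 - H)
    grad_W1 = X.T @ grad_H / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

# Each column of W1 is now one learned feature detector.
print("reconstruction error:", float(np.mean(err ** 2)))
```

As training repeats, the reconstruction error falls while the hidden units organize themselves around whatever patterns recur in the data; scale that same principle up to a billion connections and 10 million videos, and one of those units can end up responding to cat faces.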
That’s a big leap forward for artificial intelligence. It’s also likely to have nice payoffs for Google. One of its researchers who worked on the project, an engineer named Jeff Dean, recently told MIT’s Technology Review that now his group is testing computer models that understand images and text together.
“You give it ‘porpoise’ and it gives you pictures of porpoises,” Dean explained. “If you give it a picture of a porpoise, it gives you ‘porpoise’ as a word.”
So Google’s image search could become far less dependent on accompanying text to identify what’s in a photo. And it’s likely to apply the same approach to refining speech recognition by being able to gather extra clues from video.
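To make Dean’s description concrete, here is a minimal, hypothetical sketch of such a bidirectional lookup: words and images are mapped into one shared vector space, so a word retrieves the nearest image and an image retrieves the nearest word. The vectors below are fabricated stand-ins; in a real system they would come from a trained network like the one Dean’s group is testing.

```python
# Hypothetical bidirectional image/text lookup in a shared embedding
# space. The embeddings here are random stand-ins for what a trained
# model would produce.
import numpy as np

rng = np.random.default_rng(1)

# Pretend text embeddings for three words.
words = ["porpoise", "cat", "toilet"]
word_vecs = rng.random((3, 8))

# Pretend image embeddings, each placed near its matching word.
image_labels = ["cat photo", "porpoise photo"]
image_vecs = np.stack([
    word_vecs[1] + rng.normal(0, 0.01, 8),  # embeds near "cat"
    word_vecs[0] + rng.normal(0, 0.01, 8),  # embeds near "porpoise"
])

def nearest(query, vecs, names):
    """Return the name whose vector lies closest to the query."""
    dists = np.linalg.norm(vecs - query, axis=1)
    return names[int(np.argmin(dists))]

# Text -> image: the word "porpoise" retrieves the porpoise photo.
print(nearest(word_vecs[0], image_vecs, image_labels))
# Image -> text: the cat photo retrieves the word "cat".
print(nearest(image_vecs[0], word_vecs, words))
```

The design choice that matters is the shared space: because both kinds of data land in the same coordinates, retrieval in either direction is just a nearest-neighbor search.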
No question that the ability to use algorithms to absorb and weave together many streams of data, even different types of data, such as sound and images, will help make Google’s driverless car that much more autonomous. The same goes for Google’s augmented reality glasses.
But now a slice of perspective. For all its progress, Google still has a long way to go to measure up to the real thing. Its massive neural network, the one with a billion connections, is, in terms of neurons and synapses, still a million times smaller than the human brain’s visual cortex.
A matter of intelligence
Here are more recent developments in artificial intelligence:
- A bee, or not a bee: A team of British scientists is attempting to create an accurate model of a honeybee’s brain. By reproducing the key systems that make up a bee’s perception, such as vision and scent, the researchers hope eventually to be able to install the artificial bee brain in a small flying robot.
- But does it take the cover into account?: New software called Booksai uses artificial intelligence to recommend books based on the style, tone, mood and genre of things you already know you like to read.
- Do I always look this good?: Scientists at Yale have programmed a robot that can recognize itself in the mirror. In theory, that should make the robot, named Nico, better able to interact with its environment and humans.
- Lost in space no more: Astronomers in Germany have developed an artificial intelligence algorithm to help them chart and explain the structure and dynamics of the universe with amazing accuracy.
- Walk this way: Scientists at MIT have created a wearable intelligent device that creates a real-time map of where you’ve just walked. It’s designed as a tool to help first responders coordinate disaster search and rescue.
Video bonus: In France (where else?), an inventor has created a robot that not only prunes grapevines but also has the intelligence to memorize the specific needs of each plant. And now it’s learning to pick grapes.