In just minutes, an artificially intelligent machine cracked those jumbled text sequences called captchas that are used to distinguish human web users from spam-spreading robots. So much for that.
Vicarious, the AI startup that built the captcha-cracking bot, says its approach could point the way to more general, human-like artificial intelligence. (Captcha is short for "completely automated public Turing test to tell computers and humans apart.")
"This is definitely a small step. But these are the things you need to consider if you want to go in the direction of general artificial intelligence," Vicarious co-founder Dileep George told Live Science, referring to the ability of a machine to generalize and learn from very little data.
Unlike conventional machine-learning systems, which typically need large amounts of labeled training data, the smart machine built by Vicarious can be trained in a matter of minutes using just a few hundred example characters, the researchers said. It works with multiple styles of captcha and can also be repurposed to identify handwritten digits, recognize text in photos of real-world scenes and detect non-text objects in images.
That's because Vicarious designed the system to mimic the way the brain identifies objects after seeing just a few examples and still recognizes them in strange new configurations, George said.
"Nature created a scaffold over millions of years of evolution," he told Live Science. "We look at neuroscience to find out what that scaffolding is, and we put this structure in our model to make it easier for the model to learn quickly."
Vicarious announced a captcha-cracking AI back in 2013 but didn't publish the research in a journal, leading critics to call for a peer-reviewed paper before accepting its claims. Now, the company has detailed its so-called Recursive Cortical Network (RCN) in a paper published yesterday (Oct. 26) in the journal Science.
The company tested the system on text-based captchas from leading providers reCAPTCHA and Bot Detect, as well as those used by Yahoo and PayPal, achieving accuracies ranging from about 57 percent to nearly 67 percent. That's well above the 1 percent success rate at which a captcha scheme is considered ineffective at stopping bots, according to the study authors. The researchers said that optimizing the system for a specific captcha style can push accuracy up to 90 percent.
While most machine-learning approaches simply scan an entire image looking for patterns in its pixels, the human visual system is wired to build rich models of the objects that make up a scene, George said.
One of the ways it does this is by separating out the contours of an object from its surface properties. This is why people tend to sketch the outline of a shape before coloring it in, and why humans can easily imagine a banana with the texture of a strawberry, despite never having seen one, George said.
This strategy not only gives the brain a more flexible understanding of what an object could look like; it also means you don't have to see every possible combination of shape and texture to confidently identify an object in a new situation, he added.
By embedding this approach into the structure of their system, alongside other brain-inspired mechanisms that help focus attention on objects and separate them out from backgrounds or overlapping objects, the researchers were able to create an AI that could learn from fewer examples and perform well across a range of tasks.
Brenden Lake, an assistant professor at New York University whose research spans cognitive and data science, said that despite recent progress in artificial intelligence, machines have a long way to go to catch up with humans by many measures.
"People can learn a new concept from far fewer examples, and then generalize in more powerful ways than the best machine systems," Lake told Live Science in an email. "It [the Science paper] shows that incorporating principles from cognitive science and neuroscience can lead to more human-like and more powerful machine-learning algorithms."
Building human-like cognitive biases into their system does have drawbacks, George said, because such machines will struggle with the same visual tasks that frustrate humans. For example, training either the system or a person to read QR codes would be very difficult, he said.
Original article on Live Science.