Computer-based AI software can function like human intelligence

Computer-based artificial intelligence can function more like human intelligence when programmed to use a much faster technique for learning new objects, say two neuroscientists who designed a model to mirror human visual learning. In the journal Frontiers in Computational Neuroscience, Maximilian Riesenhuber, Ph.D., professor of neuroscience at Georgetown University Medical Center, and Joshua Rule, Ph.D., a postdoctoral scholar at UC Berkeley, demonstrate how the new approach greatly enhances AI software's ability to quickly learn new visual concepts.

Neuroscientists at Georgetown University Medical Center and the University of California, Berkeley, have created a model that allows artificial intelligence software to behave more like a human brain and to learn new visual concepts more rapidly. The program learns relationships between whole visual categories, whereas the traditional approach identifies objects using only low-level and intermediate visual features such as shape and color.

In short, the researchers found that tweaking AI software to function more like a human brain markedly improves the computer's ability to learn.

“Our model is a biologically realistic way for artificial neural networks to learn new visual concepts from a limited number of samples,” says Riesenhuber. “We can get machines to learn a lot more from a few experiences by using previous learning in a manner that reflects what the brain is doing.”

Humans can learn new visual concepts easily and reliably from scarce data, sometimes just a single example. Even three- to four-month-old babies can quickly learn to recognize zebras and differentiate them from cats, ponies, and giraffes. But machines typically need to “see” many examples of the same object before they can recognize it, explains Riesenhuber.

The major improvement was creating software that defines associations between whole visual categories, rather than the more traditional approach of defining an object using only low-level and intermediate features such as shape and color, Riesenhuber says. “The computational strength of the brain hierarchy lies in the ability to refine learning by using, as it were, previously learned representations from a database of object definitions,” he notes.

Image: Tweaking AI software to function like a human brain improves the computer's learning ability.

Riesenhuber and Rule showed that artificial neural networks that interpret objects in terms of previously learned concepts learned new visual concepts significantly faster. “Rather than learning high-level concepts in terms of low-level visual features, our approach explains them in terms of other high-level concepts,” says Rule. “It’s like saying that a platypus looks a bit like a duck, a beaver, and a sea otter.”

The brain architecture underlying human visual concept learning builds on the neural networks involved in object recognition. The anterior temporal lobe of the brain is believed to hold “abstract” concept representations that go beyond shape. These complex neural hierarchies for visual recognition allow humans to learn new tasks and, most importantly, to exploit prior learning. “By reusing these concepts, you can more quickly learn new concepts, new meanings, such as the idea that a zebra is actually a horse of a different kind,” says Riesenhuber.

Despite advances in AI, the human visual system remains the gold standard in its ability to generalize from just a few examples, to cope robustly with image variations, and to comprehend scenes, the scientists say.

“Not only do our results suggest approaches that can help machines learn more quickly and efficiently, they can also lead to better neuroscience experiments aimed at understanding how humans learn so quickly, which is not yet well understood,” concluded Riesenhuber.