HBP researchers have developed a large-scale model of the mouse primary visual cortex that performs visual tasks with high reliability. The model can serve as the basis for a new generation of neural network models, whose adaptability and energy-efficient processing could help advance neuromorphic computing.
Brain modeling can have a major impact on artificial intelligence (AI): because the brain processes images far more energy-efficiently than artificial networks, scientists are drawing on neuroscience to develop neural networks that behave like their biological counterparts, and thereby save substantial amounts of energy.
Neural networks modeled after the brain are therefore expected to shape future technology by serving as blueprints for visual processing in energy-efficient neuromorphic hardware. A study by Human Brain Project (HBP) scientists at the Graz University of Technology (Austria) has now shown how a massive data-based model can faithfully and flexibly reproduce several of the brain's visual processing capabilities. The findings were published in the journal Science Advances.
Using the PCP Pilot Systems at the Jülich Supercomputing Centre, created through a partnership between the HBP and the software provider Nvidia, the team examined a biologically accurate large-scale model of the mouse primary visual cortex that can handle a variety of visual processing tasks. The model represents cortical area V1, the first cortical region to receive and process visual information, and integrates the most extensive anatomical detail and neurophysiological data currently available.
Because it is built on a different architecture than the deep neural networks used in contemporary AI, the model has noteworthy advantages over models commonly used for visual processing in AI, both in learning speed and in visual processing performance.
The model solved all five of the team's visual tasks with high accuracy. These included, for example, classifying images of handwritten digits and detecting visual changes in a long sequence of images. Remarkably, the virtual model performed on par with the brain even when the researchers exposed it to noise in the network and to images it had not encountered during training.
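Models of this kind are built from spiking neurons rather than the continuous-valued units of deep learning. As a rough illustration of the sort of dynamics involved (not the study's actual model or parameters), the following sketch simulates a single leaky integrate-and-fire neuron and shows that its spiking output is fairly stable when noise is added to its input, a toy analogue of the robustness described above. All parameter values here are illustrative assumptions.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron (illustrative parameters).

    input_current: array of input drive, one value per time step.
    Returns the membrane voltage trace and the spike times (step indices).
    """
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest while driven by input.
        v += (dt / tau) * (v_rest - v + i_in)
        if v >= v_thresh:
            spikes.append(t)  # emit a spike and reset the membrane
            v = v_reset
        voltages.append(v)
    return np.array(voltages), spikes

# A constant suprathreshold drive produces regular spiking; adding input
# noise jitters spike timing but changes the overall spike count little.
rng = np.random.default_rng(0)
steps = 1000
clean = np.full(steps, 2.0)
noisy = clean + rng.normal(0.0, 0.3, steps)

_, spikes_clean = simulate_lif(clean)
_, spikes_noisy = simulate_lif(noisy)
print(len(spikes_clean), len(spikes_noisy))
```

The leaky membrane acts as a low-pass filter, which is one simple reason spiking dynamics can tolerate noisy input; the biologically detailed V1 model combines many such mechanisms at far larger scale.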
The model's greater robustness, that is, its capacity to handle errors or unexpected input such as image noise, can be attributed in part to the fact that it reproduces several of the brain's distinctive neural coding features.
Having created a unique tool for studying brain-style visual processing and neural coding, the authors say their new model offers an "unequaled insight into the dynamics of this brain area."