According to new research from the University of Birmingham, humans read by ‘pre-processing’ written words to create a pipeline of meaning. In a study published in Nature Communications, the researchers found that each pre-processing decision can be made in as little as 100ms after the eye lands on the previous word, and that the neuronal activity required to scan the next word in a sentence increases with word complexity.
The findings of the research team, which is based at the University’s Centre for Human Brain Health, could aid in the diagnosis and training of people with certain types of dyslexia in which word pre-processing is impaired. The team has also proposed a new theory for how humans perceive other, more general scenes as they observe and navigate the world around them, based on the principles highlighted in their study.
In a second, theoretical paper published in Trends in Cognitive Sciences, the team contends that humans do not always perceive objects sequentially (in series), nor do they perceive items concurrently (in parallel). Instead, they create a pipeline of observations in which meaning is established from one object while another region of the brain decides which next item is important.
To make sense of the world, our eyes collaborate with our brains. At any given time, our conscious attention is focused on one location, but we’re also speculating about what’s around it. This peripheral awareness directs the eyes to where they should jump next. These jumps, known as saccades, occur approximately three times per second.
Scientists at the University of Birmingham are attempting to understand how we guess what the next point of attention might be by combining data from brain scans and eye-tracking, and how different regions of the brain cooperate in this “pipeline of meaning” as “one object is established while another region of the brain is simultaneously deciding which next item is important.” These processes are co-ordinated by alpha waves in the brain and occur extremely quickly.
“The speed at which the participants in our experiment were able to recognize the complexity of the next word in the sequence was really remarkable and much faster than we predicted,” said Yali Pan, a co-author on both studies.
To assess this activity, the team used a combination of eye tracking technology and measurements of magnetic fields in the brain in the experimental study. While eye tracking can reveal how the eye moves and what object or word it focuses on, magnetoencephalography, or MEG, reveals what is going on “behind the scenes” as the brain processes this information.
The researchers asked 39 people between the ages of 18 and 35 to read a series of sentences on a screen. One word in each sentence was ‘tagged,’ meaning it was programmed to flicker at a high frequency (60Hz). Although the participants were not consciously aware of the flicker, it was picked up by a specific region of their brain and detected by the MEG sensors.
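The idea behind this ‘tagging’ technique is that a stimulus flickering at a known frequency produces a measurable spectral peak at that same frequency in the brain signal. The following is a minimal illustrative sketch (not the authors’ actual analysis pipeline) of how a 60Hz tagged response could be pulled out of a noisy recording with a power spectrum; the sampling rate, duration, and amplitudes are all assumed values for the demonstration.

```python
import numpy as np

# Illustrative parameters (assumed, not from the study)
fs = 1000                          # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)      # 2 seconds of data
rng = np.random.default_rng(0)

# Synthetic "recording": a weak 60 Hz tagged response buried in noise
signal = 0.5 * np.sin(2 * np.pi * 60 * t) + rng.normal(0, 1, t.size)

# Power spectrum over the positive frequencies
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# The tagged frequency appears as a clear peak (skip the DC bin)
peak = freqs[np.argmax(power[1:]) + 1]
print(f"Peak response at {peak:.1f} Hz")  # → Peak response at 60.0 Hz
```

Even though the 60Hz component here is much weaker than the noise at any single moment, averaging across two seconds of data concentrates its energy into one frequency bin, which is why the flicker can be invisible to participants yet clearly detectable in the sensors.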
By monitoring MEG signals and eye tracking together, the researchers were able to build a detailed picture of what the participants were fixating on as they moved through the words in a text, and of how different regions of the brain then processed that information.
Prof. Ole Jensen, who is also a co-author, stated: “Understanding how the brain decodes written information can aid in the development of more sophisticated diagnostic tools for conditions like dyslexia. Our research has clearly demonstrated that in order to gain meaningful insights into these processes, eye tracking and brain scanning must be studied concurrently.”
According to the scientists: “Humans do not always perceive objects sequentially (in series), and they do not always perceive items concurrently (in parallel). Instead, they create a pipeline of observations in which meaning is established from one object while another region of the brain decides which next item is important.”
When we read text, we go through a similar process. “The neuronal activity required to scan the next word in a sentence increases in proportion to the word’s complexity.”
“It’s fascinating to see how different brain regions are responsible for different types of visual processing, and how these observations can give us entirely new insights into the links between visual perception and meaning in the brain,” said co-author Dr. Steven Frisson.