Your brain is a prediction machine that is always active

Summary: The brain works as a prediction machine, constantly comparing incoming sensory information with its internal predictions.

Source: Max Planck Institute

This fits with a recent theory about how our brain works: it’s a prediction machine that constantly compares the sensory information we receive (such as images, sounds, and language) with internal predictions.

“This theoretical idea is very popular in neuroscience, but the evidence for it is often circumstantial and limited to artificial situations,” says lead author Micha Heilbron.

“I want to understand exactly how it works and test it in different situations.”

Brain research on this phenomenon is usually carried out in artificial conditions, Heilbron explains. To evoke predictions, participants are asked to stare at a pattern of moving dots for half an hour or to listen to simple sound patterns such as “beep beep, beep boop.”

“Studies like this show that our brains can indeed make predictions, but not that this always happens in the complexity of everyday life. We are trying to take it out of the lab. We are studying the same type of phenomenon, how the brain deals with unexpected information, but in natural situations that are far less predictable.”

Hemingway and Holmes

Scientists analyzed the brain activity of people listening to stories by Hemingway or about Sherlock Holmes. At the same time, they analyzed the texts of the books with computer models known as deep neural networks. That way, they could calculate, for each word, how unexpected it was in its context.

For each word or sound, the brain makes detailed statistical expectations and turns out to be highly sensitive to the degree of unpredictability: it responds more strongly when a word is unexpected in its context.

Our brain is a prediction machine that is always active. Credit: Illustration created with DALL-E, OpenAI / Micha Heilbron

“This in itself is not surprising: everyone knows that you can sometimes predict upcoming language. For example, if someone speaks very slowly, stutters, or cannot think of a word, your brain sometimes automatically ‘fills in the blank’ and mentally completes the sentence. But what we show here is that this happens continuously. Our brain is constantly guessing at words; the prediction machinery is always on.”

More than software

“In fact, our brain does something similar to speech recognition software. Speech recognizers using artificial intelligence are also constantly making predictions and letting themselves be guided by their expectations, just like the autocomplete feature on your phone.

“However, we noticed a big difference: brains make predictions at different levels, from abstract meaning and grammar to specific sounds, not just words.”

There is good reason, then, for the continued interest of tech companies that want to use such insights to build better language- and image-recognition software. But such applications are not Heilbron’s main goal.

“I would really like to understand how our prediction machinery works at a fundamental level. I’m now working with the same research setup, but for visual and auditory perception, such as music.”

About this neuroscience research news

Author: Press Office
Source: Max Planck Institute
Contact: Press Office – Max Planck Institute
Image: Illustration created with DALL-E, OpenAI / Micha Heilbron

Original research: Closed access.
“A Hierarchy of Linguistic Predictions During Natural Language Comprehension” by Micha Heilbron et al. PNAS


Abstract

A hierarchy of linguistic predictions during natural language comprehension

Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming information.

However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of these predictions.

Here, we address both issues by analyzing brain recordings of participants listening to audiobooks and using a deep neural network (GPT-2) to accurately quantify contextual predictions.

First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we dissociate the model-based predictions into distinct dimensions and reveal dissociable neural signatures of predictions across syntactic category (parts of speech), phonemes, and semantics.

Finally, we show that higher-level (word) predictions inform lower-level (phoneme) predictions, supporting hierarchical predictive processing.

Together, these results highlight the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
