Why Do People With Autism Read Facial Expressions Differently? Neural Network Model Sheds New Light

A neural network model that mimics brain developmental processes reveals how neural abnormalities affect learning, cognition, and the ability to recognise emotions.

Humans have six basic emotions – anger, disgust, fear, happiness, sadness, and surprise. But as we grow, we also come to experience an array of more complex feelings, such as embarrassment, guilt, and shame. Although the ability to understand and express these emotions is often considered intuitive, it is in fact a learned skill that begins developing at birth and can be honed continuously throughout life. By adulthood, most people can recognise even subtle emotional expressions almost immediately. However, that is not the case for people with autism spectrum disorder.

It is well known that people with autism spectrum disorder struggle to interpret facial expressions, as well as other emotional cues such as body language and tone of voice. Subtle expressions of fear, anger, and disgust are especially hard for them to identify. But why does this happen?

To answer this question, a team of researchers from Tohoku University built a neural network model to explore how neuronal abnormalities affect the ability to read facial expressions. Their artificial model, inspired by predictive processing – a well-established theory in neuroscience – was designed to reproduce brain functions on a computer and to simulate developmental processes by learning to anticipate and interpret facial expressions.

“Humans recognise different emotions, such as sadness and anger, by looking at facial expressions. Yet little is known about how we come to recognise different emotions based on the visual information of facial expressions,” said paper co-author Yuta Takahashi. “It is also not clear what changes occur in this process that lead people with autism spectrum disorder to struggle to read facial expressions.”

According to predictive processing theory, the brain constantly generates and updates a mental model of its environment, using it to predict the next sensory input and adjusting it whenever a prediction turns out to be wrong. In the case of emotion recognition, incoming sensory information – the sight of a facial expression, for instance – is compared against the model's prediction, and the model is updated to reduce the prediction error.
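To make this loop concrete, here is a minimal sketch of the predict-and-update cycle, assuming a toy linear model; the dimensions, learning rule, and variable names are illustrative choices, not the authors' implementation:

```python
# Minimal predictive-processing loop: predict the next sensory input,
# compare it with what actually arrives, and update the internal model
# in proportion to the prediction error. (Toy example, not the paper's model.)
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(scale=0.3, size=(4, 4))   # hidden dynamics generating the sensory stream
W = np.zeros((4, 4))                     # the "mental model" being learned
lr = 0.05                                # learning rate

x = rng.normal(size=4)
for t in range(2000):
    x_next = A @ x + 0.5 * rng.normal(size=4)  # the next sensory input (with noise)
    prediction = W @ x                          # the model's guess
    error = x_next - prediction                 # prediction error
    W += lr * np.outer(error, x)                # adapt the model to reduce future error
    x = x_next

# After training, W approximates A, and prediction errors shrink
# toward the unavoidable noise floor.
```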

For the study, the scientists based their neural network on the predictive processing framework and reproduced the brain's developmental process by training the model to predict how parts of the face would move in videos of facial expressions. As sensory information was fed in, the model self-organised clusters of emotions in its higher-level neuron space – even though it was never told which emotion each facial expression corresponded to. The clusters, which represent how brains categorise information during learning, were therefore segregated purely on the basis of similarities detected by the model itself.
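The sketch below illustrates the general idea of such unsupervised cluster formation, assuming toy facial-keypoint trajectories and an off-the-shelf embedding; it is an analogy for the mechanism, not the paper's architecture:

```python
# Unlabelled motion sequences are embedded and clustered; sequences that
# share a hidden "emotion" pattern end up grouped together even though
# no emotion labels are ever used. (Illustrative analogy only.)
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
patterns = rng.standard_normal((3, 10))  # one hypothetical motion pattern per hidden emotion

def make_sequence(emotion_id, n_frames=30):
    """Toy facial-keypoint trajectory: a shared temporal profile scaled
    by the hidden emotion's pattern, plus noise."""
    profile = np.sin(np.linspace(0, 2 * np.pi, n_frames))[:, None]
    seq = profile * patterns[emotion_id] + 0.1 * rng.standard_normal((n_frames, 10))
    return seq.ravel()

hidden_labels = rng.integers(0, 3, size=120)           # never shown to the model
X = np.array([make_sequence(e) for e in hidden_labels])

latent = PCA(n_components=5).fit_transform(X)          # stand-in for higher-level neurons
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(latent)
# `clusters` should largely agree with `hidden_labels` (up to relabelling),
# despite the labels playing no role in the fit.
```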

After extensive training, the neural network model succeeded in generalising to unknown facial expressions that were not included in the training data, reproducing facial part movements while minimising prediction errors. At this stage, however, the model did not yet simulate the characteristics of autism spectrum disorder. Takahashi and colleagues therefore extended their experiments, inducing abnormalities in the neurons' activities to determine how such changes affect learning and cognition. When the team reduced the heterogeneity of activity in the neural population, the model's ability to generalise expressions also decreased, which inhibited the formation of emotional clusters in the higher-level neurons. The result was a tendency to fail at identifying the emotions of unknown facial expressions – a symptom similar to that seen in autism spectrum disorder.
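A hedged toy analogue of that manipulation is sketched below: shrinking each unit's response toward the population mean (while intrinsic noise stays fixed) reduces heterogeneity, and the separability of the emotion clusters degrades with it. The data, the homogenising transform, and the silhouette measure are all assumptions made for illustration, not the authors' method:

```python
# As population activity is made more homogeneous, cluster separability
# (measured by the silhouette score) drops. (Toy analogue only.)
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)

# Toy "higher-level neuron" responses: 3 emotion clusters in 20 dimensions.
centers = 3.0 * rng.standard_normal((3, 20))
labels = rng.integers(0, 3, size=150)
acts = centers[labels] + rng.standard_normal((150, 20))

def homogenise(activity, strength):
    """Pull every response toward the population mean (strength in [0, 1]),
    then add fixed intrinsic noise that does not shrink with the signal."""
    mean = activity.mean(axis=0, keepdims=True)
    squashed = mean + (1.0 - strength) * (activity - mean)
    return squashed + 0.5 * rng.standard_normal(activity.shape)

for s in (0.0, 0.5, 0.9):
    score = silhouette_score(homogenise(acts, s), labels)
    print(f"heterogeneity reduction {s:.1f}: silhouette = {score:.2f}")
```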

Based on these findings, the study clarified that predictive processing theory, implemented in a neural network model, can explain how emotion recognition is learnt from facial expressions. Takahashi believes the study “will help advance developing appropriate intervention methods for people who find it difficult to identify emotions.” In future, the team hopes to deepen its “understanding of the process by which humans learn to recognise emotions and the cognitive characteristics of people with autism spectrum disorder.” [APBN]


Source: Takahashi et al. (2021). Neural network modelling of altered facial expression recognition in autism spectrum disorders based on predictive processing framework. Scientific Reports, 11, 14684.