Recent advances in brain-computer interfaces have made it possible to extract speech from neural signals in humans more accurately, but language is just one of the tools we use to communicate. “When my young nephew asks for ice cream before dinner and I say ‘no,’ the meaning is entirely dictated by whether the word is punctuated with a smirk or a stern frown,” says Geena Ianni, a neuroscientist at the University of Pennsylvania. That’s why, she thinks, future neural prostheses for stroke and paralysis patients will decode facial gestures from brain signals the same way they decode speech.

To lay a foundation for these future facial gesture decoders, Ianni and her colleagues designed an experiment to find out how neural circuitry responsible for making faces really works. “Although in recent years neuroscience got a good handle on how the brain perceives facial expressions, we know relatively little about how they are generated,” Ianni says. And it turned out that a surprisingly large part of what neuroscientists assumed about facial gestures was wrong.

The natural way

For a long time, neuroscientists thought facial gestures in primates stemmed from a neat division of labor in the brain. “Case reports of patients with brain lesions suggested some brain regions were responsible for certain types of emotional expressions while other regions were responsible for volitional movements like speech,” Ianni explains. Researchers have built a clearer picture of speech by tracing the movements behind it down to the level of individual neurons, but they have not done the same for facial expressions. To fill this gap, Ianni and her team designed a study using macaques, social primates that share most of their complex facial musculature with humans.
