MIT researchers develop a wearable social coach for people with Asperger's


For people living with Asperger's syndrome, every social interaction can be a battle. While "high-functioning" in some aspects, those on the autism spectrum can struggle to engage with other people and topics outside of their own spheres of interest. 

Keeping up with conversations can be especially challenging, since difficulty interpreting nonverbal communication (such as gestures, facial expressions, and modulations in others' speech patterns) is one of the hallmarks of the condition.

A pair of MIT researchers has set out to make these interactions less harrowing. Using wearable tech and deep learning, they've developed a tool that could someday act as a real-time virtual social coach.


In a paper published today, MIT graduate student Tuka Alhanai and PhD candidate Mohammad Ghassemi describe an AI system that uses specialized algorithms to analyze audio, text transcriptions, and physiological signals to help determine a conversation's overall tone in real time.

The system runs on a Samsung Simband, a modular, research-centric wrist wearable that can be tricked out with a wide variety of sensors and has the capacity to run custom algorithms on its hardware. 

After training two algorithms on data collected by the Simbands across 31 trial conversations, the research team found the system could determine the overall tone of a story with 83 percent accuracy and provide more granular "sentiment scores" for targeted five-second intervals of speech. The models predicted the mood of those intervals with accuracy 18 percent above chance, 7.5 percent better than existing approaches.
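To make the two-level design concrete, here is a minimal sketch (not the authors' code) of the general approach described above: score each five-second window of a conversation from multimodal features, then aggregate the window scores into a single positive/negative label. The feature names and weights are hypothetical placeholders, standing in for the learned models in the paper.

```python
# Illustrative sketch only: per-window sentiment scoring plus a
# conversation-level binary label. Features and weights are invented
# for illustration, not taken from the MIT paper.

from dataclasses import dataclass

@dataclass
class WindowFeatures:
    # Hypothetical per-window signals, roughly the kinds of inputs the
    # paper describes (audio, transcript text, physiology).
    pitch_variance: float    # vocal modulation in the window
    positive_words: int      # count from the text transcription
    negative_words: int
    heart_rate_delta: float  # change vs. the wearer's baseline

def window_sentiment(f: WindowFeatures) -> float:
    """Return a sentiment score in [-1, 1] for one five-second window."""
    score = 0.0
    score += 0.5 * (f.positive_words - f.negative_words)
    score += 0.3 * f.pitch_variance
    score -= 0.2 * max(f.heart_rate_delta, 0.0)  # arousal as a stress proxy
    return max(-1.0, min(1.0, score))  # squash into [-1, 1]

def overall_tone(windows: list[WindowFeatures]) -> str:
    """Binary conversation-level label, mirroring the system's output."""
    scores = [window_sentiment(w) for w in windows]
    return "positive" if sum(scores) >= 0 else "negative"

# Example: a warm opening window followed by a tense one.
conversation = [
    WindowFeatures(pitch_variance=0.2, positive_words=3,
                   negative_words=0, heart_rate_delta=1.0),
    WindowFeatures(pitch_variance=0.1, positive_words=0,
                   negative_words=2, heart_rate_delta=5.0),
]
print(overall_tone(conversation))
```

In the actual system the window scores come from trained models rather than fixed weights, but the two-stage shape (granular interval scores feeding an overall tone) is the same.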

Unlike most research in this area, the system was tested on organic, real-world conversations rather than on participants watching "happy" or "sad" videos.

"As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” said Ghassemi in MIT's release touting the report. “Our results show that it’s possible to classify the emotional tone of conversations in real-time."

The system is in the early stages, however — it's not the pocket social coach its creators envision just yet. For now, it provides only binary feedback on conversations as a whole, labeling each interaction as either positive or negative. The Simband platform is another limiting factor, since the wearable isn't commercially available.

But there's a clear path to development. The researchers hope to find a way to use the system on commercial wearables like the Apple Watch, which would massively expand the data available to the algorithms. With more data, algorithms learn and improve, which would in turn make the system more effective.  

“Our next step is to improve the algorithm’s emotional granularity so that it is more accurate at calling out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative,'” said Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”
