Heather Dial, lead author and assistant professor in the University of Houston's department of communication sciences and disorders, found that recording brain activity while a person listens to a story may help diagnose primary progressive aphasia.
Key Takeaways
- A researcher found that recording brain activity while a person listens to a story was up to 75% effective in diagnosing subtypes of primary progressive aphasia (PPA).
- The non-invasive electroencephalography-based method could lead to faster, more patient-friendly assessments for language-affecting disorders like PPA, Alzheimer鈥檚 dementia and stroke.
- The research lays groundwork for future clinical tools and is part of a larger federally funded project exploring brain responses to language.
A University of Houston researcher found that recording brain activity while a person listens to a story may help diagnose primary progressive aphasia, a rare neurodegenerative syndrome that impairs language skills.
Published Aug. 12, the findings show this method was up to 75% effective in classifying the three PPA subtypes by using brain activity data and machine-learning algorithms.

The underlying cause of PPA is often Alzheimer's disease or frontotemporal lobar degeneration. Diagnosing PPA, a type of dementia, is often challenging, as current methods require two to four hours of cognitive testing and sometimes brain scans that can be emotionally taxing for patients.
"Our thought with this project was, can we do something different that takes less time, that helps with diagnosis?" said Heather Dial, lead author and assistant professor in the university's department of communication sciences and disorders.
While still in early stages, the non-invasive approach could lead to faster, more patient-friendly assessments for PPA and other language-affecting disorders such as Alzheimer's dementia and stroke.
How it Works
Dial, along with researchers from the University of Wisconsin-Madison, The University of Texas at Austin and Rice University, used electroencephalography, or EEG, to record electrical activity in participants' brains as they listened to a story.
The EEG tracked how the brain processed different levels of language, from acoustic features (how the story sounded) to syntactic structure (how sentences were formed).
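This kind of stimulus-to-brain tracking is commonly modeled as a linear encoding regression: the EEG signal is regressed onto time-aligned stimulus features. Below is a minimal sketch of that idea, not the study's actual pipeline; the single simulated EEG channel, the two hypothetical story features (an acoustic envelope and word-onset events) and the response weights are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # time samples of one simulated EEG channel

# Hypothetical stimulus features at two "levels" of the story
envelope = rng.standard_normal(n)                    # acoustic: loudness envelope
word_onsets = (rng.random(n) < 0.05).astype(float)   # higher-level: word-onset events

# Design matrix with an intercept column
X = np.column_stack([envelope, word_onsets, np.ones(n)])

# Simulate EEG as a weighted sum of the features plus noise
true_w = np.array([0.8, 1.5, 0.1])                   # assumed, illustrative weights
eeg = X @ true_w + 0.3 * rng.standard_normal(n)

# Least-squares encoding model: how strongly the EEG tracks each feature
w_hat, *_ = np.linalg.lstsq(X, eeg, rcond=None)
```

The recovered weights `w_hat` estimate how strongly the recording tracks each linguistic level; in practice such weights (or the model's prediction accuracy) become the features summarizing a participant's neural response.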
"If this method is reliable and valid, then we can feel confident in physicians using it to assess change in patient response to treatment and for diagnosis."
– Heather Dial, lead author and assistant professor
Machine-learning models analyzed the data, with the most effective model reaching nearly 75% accuracy in classifying PPA subtypes, suggesting a promising foundation for future diagnostic tools, though it's not yet ready for clinical use.
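To make the classification step concrete, here is a toy sketch, not the paper's actual method: synthetic EEG-derived feature vectors for three simulated subtype groups, assigned by a simple nearest-centroid rule. The cluster positions, feature dimensions and group sizes are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D EEG-derived features; one cluster per simulated PPA subtype
# (the three clinical subtypes are nonfluent/agrammatic, semantic and logopenic)
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = np.vstack([c + 0.5 * rng.standard_normal((20, 2)) for c in centers])
y = np.repeat([0, 1, 2], 20)  # subtype labels, 20 simulated participants each

# Nearest-centroid classifier: assign each participant to the closest subtype mean
fitted = np.array([X[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((X[:, None, :] - fitted[None]) ** 2).sum(-1), axis=1)

# Training accuracy only; a real evaluation would cross-validate on held-out patients
accuracy = (pred == y).mean()
```

Real pipelines use many more features, stronger classifiers and cross-validation, which is where figures like the reported ~75% accuracy come from.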
"This suggests it's worth pursuing further and trying to find the optimal parameters," Dial said. "What are the best modeling approaches? What are the best features? How can we use this to improve the tools that a clinician has access to for diagnosis?"
The research team plans to refine the algorithm to boost diagnostic accuracy and reliability. Dial鈥檚 team received a $375,000 grant in 2024 from the National Institutes of Health to apply the same story-listening technique to studying stroke-induced language deterioration. That project will run through 2026.
"If this method is reliable and valid, then we can feel confident in physicians using it to assess change in patient response to treatment and for diagnosis," she said.