Neural Speech Tracking with EEG: Integrating Acoustics and Linguistics for Hearing Aid Users
This study investigated the neural encoding of speech features in hearing aid users, measured with electroencephalography (EEG) during a simulated cocktail party scenario. The aim was to characterize neural tracking of acoustic and linguistic speech features and to assess how hearing aid noise reduction influences this tracking. Four features were analyzed: the acoustic envelope, phonetic features, word onsets, and word surprisal, the last derived from GPT-2. Temporal Response Functions (TRFs) were estimated with a boosting algorithm, with the speech features as predictors and the EEG signals as responses, to quantify how the brain tracks attended (target) versus unattended (masker) speech. Neural tracking differed significantly between target and masker speech, and the acoustic envelope showed the strongest correlation with the EEG response. The features also showed distinct temporal patterns: the acoustic envelope and phonetic features were associated with early processing stages, whereas word onsets and word surprisal were linked to later stages. Noise reduction additionally modulated the tracking of these features. These findings improve our understanding of how hearing aid users process speech and inform the development of hearing aids that adapt to individual neural responses.
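For concreteness, the sketch below shows one way the word surprisal predictor can be computed with GPT-2 via the Hugging Face transformers library. The base "gpt2" checkpoint, the summing of sub-word surprisals into word-level values, and the use of log base 2 are illustrative assumptions, not the study's exact pipeline.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def word_surprisal(text):
    """Per-word surprisal in bits: -log2 p(token | left context), summed
    over the sub-word tokens of each word. The first token has no left
    context, so it is skipped."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits, dim=-1)
    # Logits at position t predict token t+1, so align by shifting one step.
    token_lp = logprobs[0, :-1].gather(1, ids[0, 1:, None]).squeeze(1)
    bits = (-token_lp / torch.log(torch.tensor(2.0))).tolist()
    words, word, acc = [], None, 0.0
    for tok_id, s in zip(ids[0, 1:].tolist(), bits):
        tok = tokenizer.convert_ids_to_tokens(tok_id)
        if tok.startswith("Ġ"):           # 'Ġ' marks a word-initial GPT-2 token
            if word is not None:
                words.append((word, acc))
            word, acc = tok[1:], s
        else:                             # continuation of the current word
            word, acc = (word or "") + tok, acc + s
    if word is not None:
        words.append((word, acc))
    return words

print(word_surprisal("The brain tracks attended speech in a cocktail party"))
```

In a TRF analysis, these values would typically be placed as impulses at the corresponding word onsets to form a continuous predictor aligned with the EEG.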
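The boosting-based TRF estimation is likewise described only at a high level above. The following toy, single-predictor, single-channel sketch illustrates the core idea, greedy coordinate descent with a fixed step size, on simulated data. The sampling rate, lag range, step size, and training-error stopping rule are assumptions made for the example; full analyses (for instance with Eelbrain's boosting implementation) operate on multichannel EEG and use cross-validated early stopping instead.

```python
import numpy as np

def lagged_matrix(stimulus, n_lags):
    """Design matrix whose column k holds the stimulus delayed by k samples."""
    X = np.column_stack([np.roll(stimulus, k) for k in range(n_lags)])
    X[:n_lags] = 0.0  # remove the wrap-around introduced by np.roll
    return X

def boosting_trf(stimulus, eeg, n_lags, delta=0.01, n_iter=10000):
    """Boosting-style TRF estimate: repeatedly add +/-delta to the single
    lag that most reduces the squared error, stopping when no step helps."""
    X = lagged_matrix(stimulus, n_lags)
    h = np.zeros(n_lags)
    resid = eeg - X @ h
    col_norm2 = np.sum(X ** 2, axis=0)
    for _ in range(n_iter):
        corr = X.T @ resid
        k = int(np.argmax(np.abs(corr)))
        step = delta * np.sign(corr[k])
        # Reduction in squared error from this step; stop once it is <= 0.
        if 2 * step * corr[k] - step ** 2 * col_norm2[k] <= 0:
            break
        h[k] += step
        resid -= step * X[:, k]
    return h

# Toy demonstration: recover a known TRF from simulated data.
rng = np.random.default_rng(0)
fs = 100                                    # assumed 100 Hz sampling rate
envelope = rng.standard_normal(60 * fs)     # stand-in for the acoustic envelope
true_trf = np.exp(-np.arange(30) / 8.0)     # 30 lags = 0-300 ms
eeg = lagged_matrix(envelope, 30) @ true_trf + rng.standard_normal(60 * fs)
h = boosting_trf(envelope, eeg, n_lags=30)
pred = lagged_matrix(envelope, 30) @ h
print("prediction accuracy r =", round(np.corrcoef(pred, eeg)[0, 1], 3))
```

The printed correlation between predicted and recorded signals is analogous to the prediction-accuracy measure used to compare how strongly target and masker speech are tracked.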