On January 31, a study published in PLoS Biology reported that American scientists have demonstrated that the human brain responds with distinct patterns of electrical activity to the individual words a person hears in conversation. Because the brain appears to process imagined speech in much the same way it processes heard sound, the researchers suggest that a "mind-reading" device could eventually be developed: implanted in a brain-injured patient, it would reveal which words the patient actually heard in a conversation and thereby help infer how well the conversation was understood.
The study was carried out by a team at the University of California at Berkeley. The researchers placed arrays of electrodes through openings in the skull onto the brains of 15 epilepsy patients, then played each patient a 5-to-10-minute recording of conversation. While the patients listened, the team monitored activity in the temporal lobe, the region responsible for language processing. The scientists then built two computational models that matched the distinctive signals produced by each patient's brain to the corresponding segments of the recording. Finally, to test the accuracy of this matching, the researchers broke the recording apart and played it back one word at a time, using the evoked brain activity to predict which word had been heard. One of the two models identified the words with up to 90% accuracy.
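The decoding step described above is, at its core, a regularized linear mapping from recorded neural activity back to a spectrogram of the stimulus, with held-out data used to score the reconstruction. The sketch below illustrates that idea with synthetic data standing in for electrocorticography recordings; the array shapes, the Ridge regularization strength, and all variable names are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_samples = 2000      # time points
n_electrodes = 64     # recording channels
n_freq_bands = 32     # spectrogram frequency bins

# Synthetic stand-ins: a stimulus spectrogram and neural responses that are
# a noisy linear mixture of it (real data would come from temporal-lobe grids).
spectrogram = rng.standard_normal((n_samples, n_freq_bands))
mixing = rng.standard_normal((n_freq_bands, n_electrodes))
neural = spectrogram @ mixing + 0.5 * rng.standard_normal((n_samples, n_electrodes))

# Fit a regularized linear map from neural activity back to the spectrogram,
# then reconstruct held-out stimulus frames.
train, test = slice(0, 1500), slice(1500, None)
model = Ridge(alpha=1.0).fit(neural[train], spectrogram[train])
reconstruction = model.predict(neural[test])

# Score the reconstruction as the mean correlation per frequency band.
corr = [np.corrcoef(reconstruction[:, f], spectrogram[test, f])[0, 1]
        for f in range(n_freq_bands)]
print(f"mean reconstruction correlation: {np.mean(corr):.2f}")
```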
Robert Knight, a professor and member of the research team, said the finding is welcome news for the many thousands of patients who have lost the ability to speak because of brain damage from stroke or Lou Gehrig's disease. (生物谷 Bioon.com)
doi:10.1371/journal.pbio.1001251
Reconstructing Speech from Human Auditory Cortex
Brian N. Pasley, Stephen V. David, Nima Mesgarani, Adeen Flinker, Shihab A. Shamma, Nathan E. Crone, Robert T. Knight, Edward F. Chang.
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
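The abstract's final claim, that individual words can be read out directly from brain activity, amounts to a template-matching step: a reconstructed spectrogram is compared against candidate word spectrograms and the best correlate wins. Below is a minimal sketch of that readout, assuming synthetic templates and a synthetic noisy reconstruction in place of real decoder output; the candidate words and noise level are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

n_frames, n_bands = 50, 32
words = ["structure", "doubt", "property"]

# Hypothetical spectrogram templates, one per candidate word.
templates = {w: rng.standard_normal((n_frames, n_bands)) for w in words}

# Pretend the decoder reconstructed a noisy version of one template.
true_word = "doubt"
reconstructed = templates[true_word] + 0.8 * rng.standard_normal((n_frames, n_bands))

def identify(recon, templates):
    """Return the candidate word whose template best correlates with recon."""
    scores = {w: np.corrcoef(recon.ravel(), t.ravel())[0, 1]
              for w, t in templates.items()}
    return max(scores, key=scores.get)

print(identify(reconstructed, templates))  # expected: "doubt"
```

In the single-trial identification test reported in the paper, the same logic applies with real reconstructions: accuracy is then simply the fraction of trials on which the correct word scores highest.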