Neuroscientists can reconstruct the words a patient has heard by analyzing the patient's brain activity.
As if it weren't enough that scientists can read memories, now they can eavesdrop on them too.
In a recent study, neuroscientists attached a grid of electrodes to the auditory centers of 15 patients' brains and recorded brain activity while the patients listened to words such as "jazz" or "Waldo".
The researchers found that each word evoked its own distinctive pattern of activity in the brain.
Building on this, the scientists developed two different computer programs that could reconstruct a word a patient had heard solely by analyzing the patient's brain activity.
The better of the two programs performed so well that it allowed the researchers to accurately decode the mystery word 80% to 90% of the time.
Because earlier evidence suggests that the words we hear and the words we recall or imagine trigger similar brain processes, the new findings, published online January 31 in the journal PLoS Biology, suggest that scientists may one day be able to listen in on the words you are thinking. That would be a potential boon for patients who cannot speak because of Lou Gehrig's disease (ALS) or other conditions. (Bioon.com)
doi:10.1371/journal.pbio.1001251
Reconstructing Speech from Human Auditory Cortex
Brian N. Pasley, Stephen V. David, Nima Mesgarani, Adeen Flinker, Shihab A. Shamma, Nathan E. Crone, Robert T. Knight, Edward F. Chang
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
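The linear reconstruction model mentioned in the abstract can be sketched as a regression that maps population neural activity back onto the stimulus spectrogram. The following is a minimal illustration using ridge regression on synthetic data; the function names and the toy data are my own, and the actual study used time-lagged neural features and, for fast fluctuations, a nonlinear modulation-energy representation not shown here.

```python
import numpy as np

def fit_linear_reconstruction(neural, spectrogram, ridge=1.0):
    """Fit a linear stimulus-reconstruction model.

    neural:      (time, n_electrodes) array of population activity
    spectrogram: (time, n_freq_bins) array of the auditory spectrogram
    Returns a weight matrix W such that neural @ W approximates the
    spectrogram (closed-form ridge regression).
    """
    X, Y = neural, spectrogram
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ Y)

def reconstruct(neural, W):
    """Predict the spectrogram from held-out neural activity."""
    return neural @ W

if __name__ == "__main__":
    # Synthetic demo: spectrogram is a noisy linear function of activity.
    rng = np.random.default_rng(0)
    neural = rng.normal(size=(500, 16))        # 16 simulated electrodes
    W_true = rng.normal(size=(16, 8))          # 8 spectrogram channels
    spec = neural @ W_true + 0.1 * rng.normal(size=(500, 8))

    W = fit_linear_reconstruction(neural[:400], spec[:400])
    pred = reconstruct(neural[400:], W)
    r = np.corrcoef(pred.ravel(), spec[400:].ravel())[0, 1]
    print(f"reconstruction correlation on held-out data: {r:.3f}")
```

In the study itself, reconstruction quality was then used to identify individual words from single-trial brain activity; in this sketch, the held-out correlation plays the role of that accuracy measure.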