Machine translation of cortical activity to text with an encoder-decoder framework

Bibliographic Details
Published in: Nature Neuroscience, Vol. 23, no. 4, pp. 575–582
Main Authors: Makin, Joseph G.; Moses, David A.; Chang, Edward F.
Format: Journal Article
Language: English
Published: United States: Nature Publishing Group, 01.04.2020
ISSN: 1097-6256, 1546-1726
Online Access:Get full text
Tags: Add Tag
No Tags, Be the first to tag this record!
Description
Summary: A decade after speech was first decoded from human brain signals, accuracy and speed remain far below that of natural speech. Here we show how to decode the electrocorticogram with high accuracy and at natural-speech rates. Taking a cue from recent advances in machine translation, we train a recurrent neural network to encode each sentence-length sequence of neural activity into an abstract representation, and then to decode this representation, word by word, into an English sentence. For each participant, data consist of several spoken repeats of a set of 30–50 sentences, along with the contemporaneous signals from ~250 electrodes distributed over peri-Sylvian cortices. Average word error rates across a held-out repeat set are as low as 3%. Finally, we show how decoding with limited data can be improved with transfer learning, by training certain layers of the network under multiple participants' data.
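The summary describes a sequence-to-sequence (encoder-decoder) recurrent network: an encoder RNN compresses a sentence-length window of ECoG features into a fixed-size state, and a decoder RNN then emits the sentence one word token at a time. The PyTorch sketch below illustrates that general idea only; the GRU cells, layer sizes, vocabulary size, and greedy word-by-word decoding loop are illustrative assumptions, not the authors' published architecture.

import torch
import torch.nn as nn

class ECoGSeq2Seq(nn.Module):
    def __init__(self, n_channels=250, hidden_size=256, vocab_size=2000, max_words=20):
        super().__init__()
        # Encoder reads the neural time series; decoder generates word tokens.
        self.encoder = nn.GRU(n_channels, hidden_size, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden_size)   # word embeddings fed to the decoder
        self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, vocab_size)    # logits over the word vocabulary
        self.max_words = max_words

    def forward(self, ecog, bos_token=0):
        # ecog: (batch, time, channels) -- e.g. high-gamma features from ~250 electrodes
        _, state = self.encoder(ecog)                         # (1, batch, hidden): summary of the sentence
        token = torch.full((ecog.size(0), 1), bos_token,
                           dtype=torch.long, device=ecog.device)
        logits = []
        for _ in range(self.max_words):                       # greedy, word-by-word decoding
            out, state = self.decoder(self.embed(token), state)
            step_logits = self.readout(out)                   # (batch, 1, vocab)
            logits.append(step_logits)
            token = step_logits.argmax(dim=-1)                # feed the predicted word back in
        return torch.cat(logits, dim=1)                       # (batch, max_words, vocab)

# Example: a batch of 4 sentence-length ECoG windows (300 time steps, 250 channels).
model = ECoGSeq2Seq()
word_logits = model(torch.randn(4, 300, 250))
print(word_logits.shape)  # torch.Size([4, 20, 2000])

For the transfer-learning result mentioned in the summary, one natural reading is that the recurrent layers are trained on data pooled across participants while each participant keeps its own input mapping; that sharing is omitted from this sketch.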
DOI: 10.1038/s41593-020-0608-8