
Scientists Create Speech From Brain Signals

A prosthetic voice decodes what the brain intends to say and generates (mostly) understandable speech, no muscle movement needed.

Video: Chang lab / UCSF Dept. of Neurosurgery; simulated vocal tract animation by Speech Graphics.

“In my head, I churn over every sentence ten times, delete a word, add an adjective, and learn my text by heart, paragraph by paragraph,” wrote Jean-Dominique Bauby in his memoir, “The Diving Bell and the Butterfly.” In the book, Mr. Bauby, a journalist and editor, recalled his life before and after a paralyzing stroke that left him virtually unable to move a muscle; he tapped out the book letter by letter, by blinking an eyelid.

Thousands of people are reduced to similarly painstaking means of communication as a result of injuries suffered in accidents or combat, of strokes, or of neurodegenerative disorders such as amyotrophic lateral sclerosis, or A.L.S., that disable the ability to speak.

Now, scientists are reporting that they have developed a virtual prosthetic voice, a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, even those in the mouth. (The physicist and author Stephen Hawking used a muscle in his cheek to type keyboard characters, which a computer synthesized into speech.)

“It’s formidable work, and it moves us up another level toward restoring speech” by decoding brain signals, said Dr. Anthony Ritaccio, a neurologist and neuroscientist at the Mayo Clinic in Jacksonville, Fla., who was not a member of the research group.


Researchers have developed other virtual speech aids, which work by decoding the brain signals responsible for recognizing letters and words, the verbal representations of speech. But those approaches lack the speed and fluidity of natural speaking.

The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence.

Experts said the new work represented a “proof of principle,” a preview of what may be possible after further experimentation and refinement. The system was tested on people who speak normally; it has not been tested in people whose neurological conditions or injuries, such as common strokes, could make the decoding difficult or impossible.

For the new trial, scientists at the University of California, San Francisco, and U.C. Berkeley recruited five people who were in the hospital being evaluated for epilepsy surgery.

Image: The ECoG electrode array, made up of intracranial electrodes that record brain activity. (University of California, San Francisco)

Image: Gopala Anumanchipalli, a neurologist at U.C.S.F., holding an array of electrodes similar to those used in the current study. (University of California, San Francisco)

Many people with epilepsy do poorly on medication and opt to undergo brain surgery. Before operating, doctors must first locate the “hot spot” in each person’s brain where the seizures originate; this is done with electrodes that are placed in the brain, or on its surface, and listen for telltale electrical storms.

Pinpointing this location can take weeks. In the interim, patients go through their days with electrodes implanted in or near brain regions that are involved in movement and auditory signaling. These patients often consent to additional experiments that piggyback on those implants.

Five such patients at U.C.S.F. agreed to test the virtual voice generator. Each had been implanted with one or two electrode arrays: stamp-size pads, containing hundreds of tiny electrodes, that were placed on the surface of the brain.

As each participant recited hundreds of sentences, the electrodes recorded the firing patterns of neurons in the motor cortex. The researchers associated those patterns with the subtle movements of the patient’s lips, tongue, larynx and jaw that occur during natural speech. The team then translated those movements into spoken sentences.
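In outline, that is a two-stage pipeline: brain activity is first decoded into estimated articulator movements, and those movements are then decoded into the acoustic features that drive a synthesizer. The sketch below is a minimal illustration of that structure only, not the authors' code: the channel and feature counts are hypothetical, and simple least-squares maps stand in for the recurrent neural networks the study used.

```python
# Illustrative sketch of a two-stage speech decoder -- not the authors' code.
# Brain recordings are mapped to articulator movements (stage 1), and those
# movements are mapped to acoustic features (stage 2). All shapes and counts
# here are assumptions for the sake of a runnable example.
import numpy as np

rng = np.random.default_rng(0)

T = 200             # time steps in one spoken sentence
N_ELECTRODES = 256  # ECoG channels (hypothetical count)
N_ARTIC = 33        # articulator features: lips, tongue, jaw, larynx
N_ACOUSTIC = 32     # acoustic features driving a synthesizer

# Synthetic stand-ins for data recorded while participants read sentences aloud.
ecog = rng.standard_normal((T, N_ELECTRODES))
articulation = rng.standard_normal((T, N_ARTIC))
acoustics = rng.standard_normal((T, N_ACOUSTIC))

def fit_linear(x, y):
    """Least-squares map from x to y; a stand-in for the study's RNNs."""
    w, *_ = np.linalg.lstsq(x, y, rcond=None)
    return w

# Stage 1: brain activity -> articulator movements.
w1 = fit_linear(ecog, articulation)
# Stage 2: articulator movements -> acoustic features.
w2 = fit_linear(articulation, acoustics)

# Decoding a new recording chains the two stages; a vocoder (not shown)
# would turn the decoded acoustic features into an audible waveform.
new_ecog = rng.standard_normal((T, N_ELECTRODES))
decoded_acoustics = (new_ecog @ w1) @ w2
print(decoded_acoustics.shape)  # (200, 32)
```

Chaining the two stages, rather than mapping brain activity straight to sound, is what the researchers credit for the more natural-sounding output.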

Audio: Two examples of a research participant reading a sentence, followed by the synthesized version generated from that participant’s brain activity.

Native English speakers were asked to listen to the synthesized sentences to gauge how understandable the virtual voices were. As much as 70 percent of what the virtual system produced was intelligible, the study found.
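One common way to turn such listening tests into a percentage is to compare what a listener heard against what the participant actually read, word by word. The snippet below computes word error rate, a standard speech metric; it is an illustration only, not the study's own scoring protocol, and the example sentences are invented.

```python
# Hypothetical scoring sketch: compares a listener's transcription against the
# sentence the participant read, using word error rate (Levenshtein distance
# over words). Not the study's actual protocol -- illustrative only.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # d[i][j] = edits needed to turn the first i reference words
    # into the first j transcribed words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

spoken = "the quick brown fox jumps over the lazy dog"
heard = "the quick brown fox jumps over a hazy dog"
print(f"{1 - word_error_rate(spoken, heard):.0%} of words intelligible")  # 78%
```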

“We showed, by decoding the brain activity guiding articulation, we could simulate speech that is more accurate and natural sounding than synthesized speech based on extracting sound representations from the brain,” said Dr. Edward Chang, a professor of neurosurgery at U.C.S.F. and an author of the new study. His colleagues were Gopala K. Anumanchipalli, also of U.C.S.F., and Josh Chartier, who is affiliated with both U.C.S.F. and Berkeley.

Image: Dr. Edward Chang studies how the brain produces and analyzes speech. He is developing a prosthesis to restore speech capabilities to patients with paralysis and other forms of neurological damage. (Steve Babuljak for U.C.S.F.)

Previous implant-based communication systems have produced about eight words a minute. The new program generates about 150 a minute, the pace of natural speech.

The researchers also found that a synthesized voice system based on one person’s brain activity could be used, and adapted, by someone else — an indication that off-the-shelf virtual systems could be available one day.

The team is planning to move to clinical trials to further test the system. The biggest clinical challenge may be finding suitable patients: strokes that disable a person’s speech often also damage or wipe out the areas of the brain that support speech articulation.

Still, the field of brain-machine interface technology, as it is known, is advancing rapidly, with teams around the world adding refinements that might be tailored to specific injuries.

“With continued progress,” wrote Chethan Pandarinath and Yahia H. Ali, biomedical engineers at Emory University and Georgia Institute of Technology, in an accompanying commentary, “we can hope that individuals with speech impairments will regain the ability to freely speak their minds and reconnect with the world around them.”

Correction: April 24, 2019

An earlier version of this article misstated the cause of Jean-Dominique Bauby’s paralysis. It was a stroke, not a spinal injury.

Benedict Carey has been a science reporter for The Times since 2004. He has also written three books: “How We Learn,” about the cognitive science of learning, and “Poison Most Vial” and “Island of the Unknowns,” science mysteries for middle schoolers.

A version of this article appears in print in Section B, Page 4 of the New York edition with the headline: Hope for the Voiceless: Scientists Decode Brain’s Vocal Signals.
