Jaimie Henderson became interested in people who lose the ability to communicate at a very young age. In a video call presenting his latest research in this field, the Stanford University (USA) researcher recalled that, when he was five years old, his father suffered a very serious traffic accident. “He kept telling jokes, and I laughed at his jokes, but I didn’t understand him because his ability to speak was so damaged,” he said. That experience led him to study how neurons encode movement and speech, and then to look for ways to restore those functions in people with neurological damage. Henderson leads one of the two studies published today in Nature that offer hope of communicating again to many people like his father.
The first of these studies, led from Stanford University, had Pat Bennett as a patient, a 68-year-old woman who was diagnosed with ALS (amyotrophic lateral sclerosis) in 2012. Of the different manifestations of the disease, Bennett had a version that has allowed her to keep moving, albeit with increasing difficulty, but that took away her speech. Although her brain’s ability to generate language is not impaired, the muscles of her lips, tongue, larynx, and jaw do not allow her to say anything.
That problem was solved, at least in part, with two sensors—smaller than a fingernail—implanted in her brain to pick up signals from individual neurons in two regions associated with language: the ventral premotor cortex and Broca’s area (the latter turned out not to be useful for the researchers’ objective). The researchers used these neural implants and software to match Bennett’s brain signals to her attempts at pronouncing words. After four months of training, the system combined this information with a computerized language model that enabled the patient to produce sentences at 62 words per minute. That figure is slightly less than half the speed of normal speech, and with a vocabulary of more than 100,000 words the system made roughly one error for every four words spoken, but the results are three times better than those of similar communication systems tried so far.
In the second study, led by Edward Chang of the University of California, San Francisco (UCSF), similar results were obtained with a somewhat different system. In this case, the brain implant (made up of 253 microelectrodes) picked up signals from more diverse regions of the brain of Ann, a woman who lost her speech more than 17 years ago due to a stroke. The team reached 78 words per minute with a base vocabulary of just over 1,000 words. The error rate was 25.5% when vocal tract movements were included to reconstruct words and 54.4% when brain signals were translated directly into speech via a synthesizer. Although still far from a practical solution for this type of ailment, it substantially improves on the results of previous experiments.
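Both teams quantify accuracy as a word error rate: the number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the length of the intended sentence. A minimal sketch of that metric (a standard edit-distance computation, not code from either study) could look like this:

```python
# Word error rate (WER): the standard accuracy metric for decoded speech.
# Illustrative sketch only; not the code used in either published study.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Edit distance between word sequences, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four, as in the Stanford result, gives a WER of 0.25:
print(word_error_rate("i want water now", "i want coffee now"))  # 0.25
```

By this measure, "one error for every four words" and "an error rate of 25.5%" describe essentially the same level of accuracy.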
The UCSF team also wanted to add an avatar to their brain-machine interface because, as researcher Sean Metzger explained, “the goal is to regain the ability to communicate and connect with loved ones, not just to help convey a few words. When speaking there is a sound, an emphasis and other subtleties that are lost when there is only text.” This personalized avatar, which would translate other communicative elements, such as facial expressions, from brain signals, would help improve the patient’s connection with her interlocutors. To recreate the voice, the team used a recording of Ann speaking at her wedding, before her stroke.
A leap towards a practical solution
In a joint online presentation, both teams said that their results are comparable and that they were interested to see how the two signal-collection methods, one more localized and the other drawing on more brain areas, show for the first time that these technologies can offer a practical solution. Videos of the tests show that the patients’ communication is still not fluid, but the authors of the two studies believe their results validate each other and that they are on the right track. Three years ago, Chang’s group demonstrated that his method could decode a few words in people with paralysis. Since then, progress has been exponential.
So far, only about fifty people have been implanted with microelectrode brain-computer interfaces to enable communication. Among the improvements the researchers propose for the future, in addition to increasing the speed of communication, is the development of wireless devices that do not require patients to be connected to a machine. It will also be necessary to find out whether these systems can restore speech in people who are completely locked in their bodies, for whom only brain signals are available to restore communication.
To achieve these objectives, it will also be necessary to work with more patients, beyond the two women who collaborated in the two studies published today in Nature. It will be necessary, for example, to find out whether what the algorithms learn during the tedious hours of training can be used to decode speech in the brain of a different person; and also to study whether the other brain signals a patient produces while interpreting what others say can introduce errors into the generation of her own speech.
Source: EL PAIS