Researchers at the University of California San Francisco have demonstrated, for the first time, that neuronal activity can be converted into recognizable words; the key has been the development of a new brain implant.
Brain-to-machine interfaces aren’t new, but they all face the same challenge: interpreting brain signals and turning them into actions, whether that means moving a mechanical arm or typing a word without touching the keyboard. Most take what could be called the easy route, registering movement intent to steer a cursor toward on-screen buttons that execute the actions; in other words, an indirect way of doing things.
This is why the California researchers’ achievement is so important: they have achieved direct interaction between user and machine, without the need for an intermediate interface. Specifically, the first participant, a paralyzed man, has managed to speak without having to vocalize the words, just by thinking about what he wants to say.
There are still many limitations; the available vocabulary is limited to 50 words. But the important thing is that these are words the patient has thought, and that the implant has been able to recognize; for example, he can now answer simple questions like “do you want water?” and explain that he is not thirsty.
Until now, the patient could only communicate by choosing individual letters with a pointer attached to a cap on his head, a long and difficult process. According to project leader Edward Chang, the end goal is for him to be able to speak like anyone else.
Typically, a person speaks 150-200 words per minute; at that speed, an intermediary system for entering words is not viable. Hence, the researchers focused on decoding the words themselves directly, as that is closer to the way we speak.
The key to the research has been the development of a new type of brain implant with high-density electrodes; it was used to record brain activity, relating specific signals to specific words.
Machine learning and AI carried great weight in this development: a custom neural network model was created and trained on the recorded brain activity. In this way, it is able to recognize the signals and identify the words in real time, as they are thought.
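The core decoding idea, mapping a pattern of neural activity to one word out of a small vocabulary, can be sketched with a toy classifier. Everything here is illustrative: the function names, the toy "recordings," and the nearest-centroid approach are assumptions for clarity, not the actual UCSF neural network model.

```python
def centroid(vectors):
    """Average a list of equal-length activity vectors element-wise."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled_recordings):
    """Build one average activity pattern (centroid) per word."""
    return {word: centroid(recs) for word, recs in labeled_recordings.items()}

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def decode(model, signal):
    """Return the vocabulary word whose centroid is closest to the signal."""
    return min(model, key=lambda word: squared_distance(model[word], signal))

# Toy data: 3-channel "recordings" for two words (entirely made up).
recordings = {
    "water":   [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
    "thirsty": [[0.0, 0.9, 1.0], [0.1, 1.0, 0.8]],
}
model = train(recordings)
print(decode(model, [0.95, 0.15, 0.05]))  # a signal resembling "water"
```

A real-time system would run something like `decode` on each new window of electrode data; the actual research replaces this simple distance comparison with a trained neural network.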
In addition, the system has “autocorrect” algorithms, like those of a smartphone, to predict the next word the user wants to say and select it automatically.
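That kind of smartphone-style prediction can be illustrated with a minimal bigram model: count how often each word follows another, then suggest the most frequent follower. The toy corpus below is an assumption for the example; the article does not describe the actual language model used.

```python
from collections import Counter, defaultdict

def build_bigrams(sentences):
    """Count how often each word follows another across the corpus."""
    following = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

# Hypothetical usage history restricted to a small vocabulary.
corpus = [
    "i want water",
    "i want food",
    "i am not thirsty",
    "i want water now",
]
model = build_bigrams(corpus)
print(predict_next(model, "want"))  # "water" follows "want" more often than "food"
```

With a 50-word vocabulary, even this simple frequency-based prediction can cut down the number of words the decoder must distinguish at each step.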
After this successful demonstration, the tests will be extended to more volunteers, and the researchers want to create a large vocabulary to speed up speech decoding.