On the 'mind-reading' advance
The UK Guardian has a fairly good report on this.
Lots of misunderstanding in the media.
First, this particular experiment is not reading thoughts or speech; it's reading a later stage of the brain's response to heard sound. It goes a couple of stages beyond what a cochlear implant does: it looks at a higher level of coding, then reverse-engineers that code to recover the sounds.
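For intuition, here's a minimal sketch of what "reverse-engineering the code" can mean in practice: fit a map from recorded neural features back to the sound's spectrogram. Everything here is invented for illustration; the dimensions, the data, and the assumption that the code is a plain linear map fitted by ridge regression are mine, not a description of the actual study.

```python
import numpy as np

# Hypothetical sketch: decoding a higher-level neural code back into sound.
# Assume each time bin gives a vector of neural features X (e.g. electrode
# band powers) and a target spectrogram slice Y. All data are synthetic.
rng = np.random.default_rng(0)

n_bins, n_electrodes, n_freqs = 1000, 64, 32
W_true = rng.normal(size=(n_electrodes, n_freqs))            # the unknown "code"
X = rng.normal(size=(n_bins, n_electrodes))                  # recorded features
Y = X @ W_true + 0.1 * rng.normal(size=(n_bins, n_freqs))    # heard spectrogram

# Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ Y)

Y_hat = X @ W_hat  # reconstructed spectrogram, frame by frame
corr = np.corrcoef(Y.ravel(), Y_hat.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.3f}")
```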
Second, there's no guarantee that the production process uses the same coding. It most likely doesn't. Speech production involves controlling the muscles of the larynx, jaw, tongue, lips, velum, pharynx, diaphragm, ribs, and abdomen to achieve a magnificently complex result. These motor patterns, as measured by EMG or EEG, have absolutely no correlation with the resulting sound waves except in the rhythm of the syllables.
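A toy demonstration of that last point: sample by sample, a muscle-command signal and the sound it produces are essentially uncorrelated, yet their slow amplitude envelopes, which carry the syllable rhythm, line up well. Both signals below are synthetic stand-ins, not real recordings.

```python
import numpy as np

# Synthetic illustration: EMG-like activity and the resulting sound share
# almost nothing sample-by-sample, but their slow envelopes (the syllable
# rhythm, a few Hz) track each other closely.
rng = np.random.default_rng(1)
fs = 1000                                                     # sample rate, Hz
t = np.arange(0, 4, 1 / fs)                                   # 4 seconds
gate = 0.5 * (1 + np.sign(np.sin(2 * np.pi * 4 * t)))         # ~4 syllables/sec

emg = gate * rng.normal(size=t.size)                          # broadband bursts
sound = gate * np.sin(2 * np.pi * 150 * t)                    # voiced tone bursts

def envelope(x, win=50):
    """Rectify and smooth: a crude amplitude envelope."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

raw_corr = np.corrcoef(emg, sound)[0, 1]
env_corr = np.corrcoef(envelope(emg), envelope(sound))[0, 1]
print(f"raw waveform correlation:      {raw_corr:+.3f}")      # near zero
print(f"envelope (rhythm) correlation: {env_corr:+.3f}")      # high
```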
However: speech is regulated and refined by feedback through the ears, which is why deaf people sound strange. It's possible that the structures monitored in this experiment are also involved in the feedback side of speech.
Basically a thought goes through four main stages:

(1) The internal structure of what you mean. This is non-phonetic and may be universal regardless of language, though I doubt it.

(2) An internally 'heard' representation of what you want to say. This is in your own language, but somewhat idealized; it doesn't necessarily sound anything like your actual speech, and it can move much faster than real speech.

(3) The neural commands mentioned above, which control the various muscles. These are formed in your own language and dialect, and move in real time.

(4) The sound that arises when those commands move the muscles, causing air from the lungs to set up shaped vibrations. This is in your own language and dialect, but varies from day to day depending on muscular strength, inflamed tissues, mucus in the nose, and so on.
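Just to keep the distinctions straight, here's the four-stage picture expressed as a little data structure. The attributes are my own shorthand for the properties listed above, nothing measurable or neurally real.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Stage:
    name: str
    phonetic: bool                   # expressed in speech sounds?
    language_bound: Optional[bool]   # tied to your own language/dialect?
    real_time: bool                  # locked to actual speaking speed?

PIPELINE = [
    # None = uncertain: the text above doubts stage (1) is truly universal.
    Stage("(1) meaning structure",  phonetic=False, language_bound=None, real_time=False),
    Stage("(2) inner 'heard' form", phonetic=True,  language_bound=True, real_time=False),
    Stage("(3) motor commands",     phonetic=True,  language_bound=True, real_time=True),
    Stage("(4) acoustic output",    phonetic=True,  language_bound=True, real_time=True),
]

for stage in PIPELINE:
    print(stage)
```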
The best hope for a speech prosthesis is to find where (2) happens in the brain. That's the idealized version of the utterance, and it may be the template you compare against when using auditory feedback to keep speech under refined control. If so, it may live in the same structures monitored by the current experiment, and it may be decodable into some understandable form.
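If (2) really does serve as the feedback template, the loop might look, in caricature, like this: compare what the ear reports against the idealized template, and nudge the motor commands toward it. The bare proportional controller below is pure illustration; nobody knows the brain's actual correction rule.

```python
import numpy as np

# Caricature of auditory feedback: the stage-(2) template is the reference,
# the ear reports what actually came out, and the mismatch drives a
# correction to the stage-(3) motor commands. All signals are synthetic.
rng = np.random.default_rng(2)

template = np.sin(np.linspace(0, 2 * np.pi, 200))   # idealized utterance (2)
command = np.zeros_like(template)                   # motor commands (3)
gain = 0.5                                          # correction strength

for step in range(40):
    heard = command + 0.05 * rng.normal(size=command.size)  # output (4) + ear noise
    error = template - heard                # compare against the template
    command += gain * error                 # refine the commands

print(f"final mismatch: {np.abs(template - command).mean():.4f}")
```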
Picking up the commands at stage (3) would be the most direct route; we know where each of those muscles is represented on the motor strip, and the outputs are analog voltages that could be fed directly into a synthesizer. But this wouldn't help many of the people who really need such a device. If the lesion is outside the brain, as in bulbar polio, spinal injury, or ALS, (3) could help. But with stroke or CP the motor commands themselves are distorted, so you'd have to go inward to (2).
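To make the stage-(3) route concrete, here's a crude sketch of the "analog voltages into a synthesizer" idea: a few slowly varying muscle-like control signals steer the pitch and two formant-ish resonances of a toy synthesizer. The muscle-to-parameter mapping is invented, and a real articulatory synthesizer would be far more involved.

```python
import numpy as np

# Toy sketch: analog muscle signals patched into synthesizer parameters.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)

# Stand-ins for smoothed motor-strip readouts, one per muscle group.
jaw    = 0.5 + 0.5 * np.sin(2 * np.pi * 2 * t)   # open/close cycle
tongue = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * t)   # front/back movement
larynx = 0.4 + 0.1 * np.sin(2 * np.pi * 1 * t)   # tension -> pitch

# Invented mapping from muscle signals to synthesizer parameters.
f0 = 80 + 60 * larynx            # fundamental frequency (Hz)
f1 = 300 + 500 * jaw             # first formant-ish resonance (Hz)
f2 = 900 + 1300 * tongue         # second formant-ish resonance (Hz)

def tone(freq_track):
    """Sine with time-varying frequency via phase accumulation."""
    return np.sin(2 * np.pi * np.cumsum(freq_track) / fs)

# Not a real source-filter model: just the glottal tone plus two
# formant-like partials, enough to show the control path.
audio = tone(f0) + 0.6 * tone(f1) + 0.4 * tone(f2)
audio /= np.abs(audio).max()

print(f"{audio.size} samples; f0 {f0.min():.0f}-{f0.max():.0f} Hz, "
      f"f1 {f1.min():.0f}-{f1.max():.0f} Hz")
```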