This is seriously impressive -- and a little scary.
The yet-to-be-peer-reviewed research could lay the groundwork for much more capable brain-computer interfaces designed to better help those who can't speak or type.
They used this data to train an algorithm that they say can associate these blood flow changes with what the subjects were currently listening to in the radio and podcast recordings. In other words, the algorithm "knows what's happening pretty accurately, but not who is doing the things," Huth explained.