Neurological study on decoding "internal voice"




Omega Vision
http://www.bbc.co.uk/news/science-environment-16811042

inimalist
Interesting. PLoS has a poor review system, so I'd take some of the conclusions with skepticism, and the BBC article doesn't really talk about the hurdles that still have to be cleared before something like this is possible.

At this point, the issue is more one of computing power and maths. Even decoding something simple like vowel sounds would have taken at least a decently powerful desktop computer, and given the number of neurons involved in more complex speech, and the fact that they want immediate translation from neuronal activity, we are going to have to wait a few years before hardware that is both small enough and powerful enough to do it exists.
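
For a sense of scale, here is a toy sketch of what a "decode the vowel from neural activity" step could look like as code: a plain linear classifier guessing a vowel label from a per-trial feature vector. Everything below is simulated and the array sizes are invented; a real study would use per-subject features such as high-gamma power from STG electrodes, and this is not the paper's actual method.

# Toy vowel-decoding sketch on simulated data (not the paper's method).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features, n_vowels = 300, 64, 5         # invented sizes
X = rng.normal(size=(n_trials, n_features))         # stand-in for electrode features
y = rng.integers(0, n_vowels, size=n_trials)        # stand-in for vowel labels
X += np.eye(n_vowels)[y] @ rng.normal(size=(n_vowels, n_features))  # inject a learnable signal

clf = LogisticRegression(max_iter=1000)             # simple linear decoder
scores = cross_val_score(clf, X, y, cv=5)           # held-out accuracy
print(f"cross-validated vowel decoding accuracy: {scores.mean():.2f}")

Even this toy version runs cheaply on a desktop; the hard part is doing something like it continuously, on far richer signals, fast enough to keep up with speech.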

The even more complex problem will be that the organization of language is unique to each individual, and it has to be, since people can learn different languages (if language were organized the same way in everyone's brain, we would all speak the same language and never argue over semantics). So we would have to develop a program that can decode any possible arrangement of millions of neurons and billions of axonal connections.

The nature of language itself is going to be difficult too. Depending on how complex we want something like this to be, it will have massive difficulty with sarcasm, slang, prefixes or suffixes attached to terms that don't normally take them (e.g. turning a noun into a verb by adding "ed" to the end, like "that guy just inimalisted"), and names, both proper names from different languages and ones made up for the internet (inimalist, for instance).

I'm really shocked they were trying this by looking at the STG, though. Language in the brain is essentially represented in two places: the superior temporal region, including the STG, is involved in language comprehension, whereas the left inferior frontal gyrus (LIFG) is involved in grammar and speech production. It seems like this might be done more easily by decoding thoughts of producing language, reading the signals from the LIFG that would be turned into motor commands for the larynx and tongue, though that might be extremely difficult... ugh, I won't rant about the basal ganglia, though I want to; it is one of the most awesome things ever. I may have just clued myself in to why they went with the comprehension area rather than the production area, though.

However, in terms of moving things on a screen with the mind, I would almost suggest the motor areas are better, just because of how we work functionally. Things in the world can be mapped egocentrically (I map where my coffee mug is relative to my hand, that is, relative to a part of myself) or allocentrically (I map where my coffee mug is relative to other items on my desk, or to other things in the world). We already map cursors in both ways when we move a mouse around a screen, though the egocentric mapping is somewhat more complex than in situations involving just the hand. I feel that, even if it's more difficult to accomplish, taking advantage of this system and using the motor system in this type of rehabilitation would be much, much more beneficial in the end, and would provide far more benefit to the people who need it.
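
To make the egocentric/allocentric distinction concrete, here is a tiny geometric sketch: the same mug position expressed relative to the desk (allocentric) and relative to my hand (egocentric). The coordinates and names are invented purely for illustration.

# Same object, two reference frames (all numbers made up).
import numpy as np

mug_world = np.array([0.60, 0.20])     # mug on the desk: allocentric coordinates
hand_world = np.array([0.45, 0.10])    # my hand, in the same desk frame
hand_heading = np.deg2rad(30)          # direction the hand/arm is pointing

# Egocentric view: translate to the hand, then rotate into the hand's frame.
c, s = np.cos(-hand_heading), np.sin(-hand_heading)
rotation = np.array([[c, -s], [s, c]])
mug_egocentric = rotation @ (mug_world - hand_world)

print("allocentric (desk frame):", mug_world)
print("egocentric (hand frame): ", mug_egocentric)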

Also, they have done similar things with EEG readings and video games. I can't find the link, but I remember that a few E3s (or something similar) ago there was a Harry Potter game where people could cast various spells just by thinking about them.
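
I don't know how that demo actually worked under the hood, but the usual trick in consumer EEG games is something like the sketch below: band-pass one channel, watch the power in a frequency band, and fire the in-game action when it crosses a threshold calibrated for that player. The signal and threshold here are fake, and this is only a guess at the general approach.

# Guess at a "think to cast" trigger: band power crossing a calibrated threshold.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256                                            # assumed sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
eeg = np.random.randn(t.size) + 3 * np.sin(2 * np.pi * 10 * t)  # synthetic signal with a 10 Hz burst

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)  # isolate the 8-12 Hz (alpha) band
alpha = filtfilt(b, a, eeg)
power = np.mean(alpha[-fs // 2:] ** 2)              # band power over the last half second

THRESHOLD = 2.0                                     # would come from a per-player calibration run
print("cast spell!" if power > THRESHOLD else "no action")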
