Just recently we posted about a story in which researchers from Carnegie Mellon were able to read people’s actual thoughts with a machine. The machine uses a technique called fMRI to noninvasively monitor a person’s brain activation patterns as they think about different objects. Today we would like to follow up on this story with new information about how it works. Science Central has a decent article, and, to our delight, we have discovered that the actual paper published by the Carnegie Mellon researchers is freely available online.
So how does fMRI work anyway?
It turns out that when we think about an object, only a small subset of the neurons in our brains are actively firing to produce the thought. These firing neurons require more energy, and they need it quickly, so they receive more blood flow and their relative oxygen level changes. This change in oxygen level can be detected magnetically, and hence regions of the brain with firing neurons give off a different magnetic signal than regions with neurons at rest. Current technology does not allow us to magnetically monitor the oxygen level of every single one of the billions of neurons in the brain. Instead the brain is logically divided into several thousand 45 mm³ cubic groups of neurons, called voxels. When you are thinking about an object, a certain subset of these thousands of voxels “lights up”, representing your thought. Machine learning algorithms are used to build a mapping from voxel activation patterns to thoughts about individual objects.
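To make that last step concrete, here is a minimal sketch of the general idea: treat each scan as a vector of per-voxel activation levels, and classify a new scan by which object’s average training pattern it most resembles. This is not the researchers’ actual method (the paper describes its own classifiers); the object names, noise levels, and voxel count below are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "thought" is a vector of activation levels
# across a few thousand voxels. We simulate two object categories
# (e.g. "hammer" vs. "house"), each with its own underlying pattern.
n_voxels = 2000
pattern = {obj: rng.normal(size=n_voxels) for obj in ("hammer", "house")}

def simulate_scan(obj, noise=1.0):
    """Return a noisy voxel-activation vector for a thought about `obj`."""
    return pattern[obj] + noise * rng.normal(size=n_voxels)

# "Training": average several scans per object to get a centroid pattern.
train = {obj: np.mean([simulate_scan(obj) for _ in range(20)], axis=0)
         for obj in pattern}

def classify(scan):
    """Label a new scan by its most correlated training centroid."""
    return max(train, key=lambda obj: np.corrcoef(scan, train[obj])[0, 1])

print(classify(simulate_scan("hammer")))  # prints "hammer"
```

The same nearest-pattern idea also hints at why cross-subject decoding works: if two people’s voxel patterns for “hammer” correlate, a classifier trained on one person’s scans can label the other’s.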
The most exciting revelation from this research is the discovery that brain activity of a person thinking about an object, such as a hammer, is very similar to the brain activity of a completely different person that is also thinking of a hammer. People grow up in different places and have completely different experiences, yet thoughts of common objects, such as hammers, seem to be held within our brains in a representation that is fairly consistent among all of us.
So how far can we go with this technology? Will we really be able to read people’s minds beyond the level of simple objects? The Carnegie Mellon researchers seem pretty optimistic that they can do much better in the coming years. At least in terms of vision (instead of thinking), it appears that they can interpret almost exactly what you are seeing with fMRI. The following quote from the Science Central article is quite telling:
The researchers excluded the vision area of the brain from the scans “because it’s almost too easy a target,” explains Just. “The visual cortex really contains a very faithful, accurate representation of a shape that you’re looking at – whatever is on your retina gets translated to your visual cortex at the back of your brain. And if you look for that pattern, that’s a lot easier, so we can be very accurate there.”