Healthcare Technology Featured Article

January 28, 2014

MEG, fMRIs and New Insights on Human Vision and How We See


The other day we took a fairly deep dive into the use of an imaging technique known as magnetoencephalography (MEG) as a highly useful and promising path to delivering "neurofeedback" training and therapy. The technology and approach look quite promising for helping patients overcome a wide variety of brain-related issues, and MEG-based neurofeedback therapy is further aided by the combined use of MRI. The approach relies in great part on what a patient "sees" (that's as far as we can retell it here; you will need to read the article for more).

Given that combination of MEG, MRI and what a patient sees, we were further intrigued to learn that MIT is also putting MEG to use, in MIT's case alongside fMRI - functional magnetic resonance imaging - to uncover which parts of the brain become active immediately after an image is seen. By "immediately after" we mean measurements beginning at roughly 50 milliseconds and extending just beyond that.

We also need to note that the techniques used and described here are entirely non-invasive, and that this is the very first time a noninvasive process has been shown to accurately map, on a millisecond timeline, the flow of information from the moment something is first seen by the eye to when it arrives and begins to be processed in the human brain.

The use of fMRI enables researchers (and doctors, clinicians and so on) to measure changes in blood flow within the brain, which in turn indicates the brain areas in use during any particular task. The one problem with fMRI is that changes in brain blood flow are relatively slow - the measurements happen too late to truly capture the brain's activity on a millisecond-by-millisecond basis. This is where MEG comes to the rescue.

In a new research paper recently published in Nature Neuroscience, Resolving Human Object Recognition in Space and Time (available for purchase and download online), the authors - Radoslaw Martin Cichy (MIT's Computer Science and Artificial Intelligence Laboratory), Dimitrios Pantazis (MIT's McGovern Institute for Brain Research) and Aude Oliva (MIT's Computer Science and Artificial Intelligence Laboratory) - outline how a team of MIT researchers created a methodology through which the combined use of fMRI and MEG data makes it possible to accurately uncover which parts of the brain are active - and in what order - once an image is seen.

The National Eye Institute, the National Science Foundation, and a Feodor Lynen Research Fellowship from the Humboldt Foundation all provided funding for the team's research.

Brain and Visual Activity in Real Time

Before we provide any additional detail, let's take note of the following image, which provides three views of ongoing (and spreading) brain activity as processing takes place after visual information arrives through the eyes. View the scans from left to right as a timeline.

The brain image on the left represents brain activity roughly 50 - 60 milliseconds after information from the eye was received. As one would expect - since the visual cortex resides there - information appears to be processed only in the back of the brain. As time progressed - again, think in milliseconds here - activity spread to brain regions involved in later visual processing (middle image) at roughly 120 milliseconds, and finally, at about 160 milliseconds, to the inferior temporal cortex (image at right), the brain region that processes complex shapes and categories of objects - that is, the point at which whatever is being viewed is finally determined and truly noted by the volunteer.

The brain scans - as represented by the three images shown above - allow researchers to accurately identify both the location and the timing of human brain activity. The MIT researchers recorded individual brain responses as volunteers looked at different images, and were then able to pinpoint, millisecond by millisecond, when the brain recognizes and categorizes an object and where these processes occur (again, note the left-to-right sequence in the image).
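The article doesn't spell out how the "when" gets pinpointed, but a common approach in MEG studies of this kind is time-resolved decoding: at each timepoint, test whether a simple classifier can tell two images apart from the sensor pattern alone; the moment accuracy rises above chance marks when the brain distinguishes them. Here is a minimal sketch with synthetic stand-in data - the array sizes and the nearest-centroid classifier are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 40, 306, 60

# Hypothetical MEG recordings for two different images:
# trials x sensors x timepoints, with a small separable offset for image B.
a = rng.standard_normal((n_trials, n_sensors, n_times))
b = rng.standard_normal((n_trials, n_sensors, n_times)) + 0.5

def accuracy_at(t):
    """Leave-one-out nearest-centroid classification at one timepoint:
    can the sensor pattern alone tell image A from image B?"""
    x = np.concatenate([a[:, :, t], b[:, :, t]])
    y = np.array([0] * n_trials + [1] * n_trials)
    correct = 0
    for i in range(len(x)):
        mask = np.arange(len(x)) != i           # hold out trial i
        c0 = x[mask & (y == 0)].mean(axis=0)    # centroid for image A
        c1 = x[mask & (y == 1)].mean(axis=0)    # centroid for image B
        pred = 0 if np.linalg.norm(x[i] - c0) < np.linalg.norm(x[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(x)

# Decoding accuracy as a function of time: the rise above 50% chance
# marks "when" the two images become distinguishable in the signal.
accuracies = [accuracy_at(t) for t in range(n_times)]
```

In real studies the same pairwise comparison is run for every pair of the 92 images, giving a full timecourse of object discriminability.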

For the study the MIT research team scanned 16 volunteers as they each looked at a series of 92 images. The scans all took place at the Athinoula A. Martinos Imaging Center at MIT's McGovern Institute. The images used included faces and animals, as well as natural and manmade objects. Each image was shown to each volunteer for half a second, and each volunteer underwent the test multiple times: twice in an fMRI scanner and twice in an MEG scanner. The result was a massive database of brain-activity timing and location information.

Aude Oliva, one of the authors noted above, says that, “This method gives you a visualization of both the when and where at the same time and in real time. It’s a window into processes happening at the millisecond and millimeter scale.”

The "both when and where" Oliva mentions is the key to the entire research effort. Researchers have been able to track the when and the where individually in the past, but without direct millisecond-by-millisecond tracing of activity. It is the novel combined use of MEG and fMRI that enables the MIT researchers to provide the types of scans shown above. As we noted in our earlier article on neurofeedback therapy, it is a similar combination of two types of scans that provides the advanced capabilities there.

Non-invasive Data Capture is Key

Capturing the scans isn't enough, however. To bring the two sets of time and location details generated by the MEG and fMRI scanners together into a meaningful set of combined images, the researchers needed to make use of representational similarity analysis - a computational technique that relies on the fact that two similar objects that provoke similar signals in fMRI will also produce similar signals in MEG. The MIT research team is the first to use the technique to link fMRI and MEG data from human subjects.
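The core move in representational similarity analysis is to reduce each measurement - an fMRI brain region or a single MEG timepoint - to a matrix of pairwise dissimilarities between the responses to the 92 images; those matrices live in a common space and can be compared directly even though voxels and sensors cannot. A minimal sketch with random stand-in data (the array sizes, the correlation-distance metric and the Spearman comparison are illustrative assumptions, not the paper's exact parameters):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix (condensed form): the
    correlation distance between the response patterns for every
    pair of conditions (here, images)."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(0)
n_images = 92  # the study showed each volunteer 92 images

# Hypothetical stand-in data: fMRI voxel patterns for one brain region
# (images x voxels) and MEG sensor patterns at 50 timepoints
# (timepoints x images x sensors).
fmri_roi = rng.standard_normal((n_images, 200))
meg = rng.standard_normal((50, n_images, 306))

fmri_rdm = rdm(fmri_roi)

# Fusion: at each MEG timepoint, compare the MEG dissimilarity structure
# with the fMRI structure. A peak in this timecourse marks when the
# representation measured in that brain region emerges in time.
fusion = [spearmanr(fmri_rdm, rdm(meg[t]))[0] for t in range(meg.shape[0])]
```

Repeating this for several fMRI regions (early visual cortex, inferior temporal cortex, and so on) yields the kind of space-plus-time map the article describes.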

Radoslaw Martin Cichy, another of the paper's authors, underscores that, “We want to measure how visual information flows through the brain. It’s just pure automatic machinery that starts every time you open your eyes, and it’s an incredibly fast and complex process. It has been a challenge to get to where we are, but we have not yet looked at higher cognitive processes that come later, such as recalling thoughts and memories when you are watching objects.”

The combination of MEG and fMRI allows researchers to test humans in ways that substitute for methods that are not appropriate for humans. Plenty of invasive techniques have been used - particularly on monkeys - to study the same things the new research delivers, and even so the new research doesn't quite match the level of spatial and temporal precision the invasive procedures provide. We'll leave aside the question of why it's OK to be invasive with monkeys.

The good news (maybe for the monkeys as well) is that the MIT team's methodology nevertheless delivers extraordinarily useful data. Over time the new techniques will be refined, which will no doubt increase the accuracy of the research as well.

The paper's third author, Dimitrios Pantazis, points out that, "This is the first time that MEG and fMRI have been connected in this way, giving us a unique perspective on how we see, visualize, process and identify. We now have the tools to precisely map brain function both in space and time, opening up tremendous possibilities to study the human brain."

There is no reason why researchers shouldn't be able to extend the techniques developed by the research team to study how the human brain analyzes other types of information - for example, motor, verbal and sensory signals. Nor is there any reason to think that, over time and given enough accurate real-time data, researchers cannot also use the fMRI/MEG-based methodology to gain a deeper understanding of the processes behind such conditions as memory disorders, paralysis, dyslexia and neurodegenerative diseases.

It's all amazing stuff. Along the way perhaps we'll also be able to spare the monkeys as well.




Edited by Cassandra Tucker