Electronic Thesis and Dissertation Repository

Degree: Doctor of Philosophy

Supervisor: Dr. Jody Culham


“You read my mind.” Although this everyday expression implies knowledge or understanding of another's thinking, true 'mind-reading' seems confined to the domains of Hollywood and science fiction. In the field of sensorimotor neuroscience, however, significant progress in this area has come from mapping the characteristic changes in brain activity that occur before an action is initiated. For instance, invasive neural recordings in non-human primates have greatly advanced our understanding of how highly cognitive and abstract processes like intentions and decisions are represented in the brain: upcoming sensorimotor behaviours (e.g., movements of the arm or eyes) can be decoded, or 'predicted', from preceding changes in the neuronal output of parieto-frontal cortex, a network of areas critical for motor planning. In the human brain, however, a comparable predictive ability and a similarly detailed understanding of intention-related signals in parieto-frontal cortex have remained largely unattainable, owing to the limitations of non-invasive brain-mapping techniques like functional magnetic resonance imaging (fMRI). Knowing how and where intentions or plans for action are coded in the human brain is important not only for understanding the neuroanatomical organization and cortical mechanisms that govern goal-directed behaviours like reaching, grasping and looking (movements critical to our interactions with the world), but also for establishing homologies between human and non-human primate brain areas, allowing neural findings to be transferred between species.

In the current thesis, I employed multi-voxel pattern analysis (MVPA), an fMRI analysis technique that makes it possible to examine the coding of neural information at a finer-grained level than was previously available. I used fMRI MVPA to examine how and where movement intentions are coded in human parieto-frontal cortex, asking specifically: what types of predictive information about a subject's upcoming movement can be decoded from preceding changes in neural activity?
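To make the logic of MVPA decoding concrete, the following is a minimal illustrative sketch, not the analysis code from this thesis: it simulates plan-epoch activity patterns for two movement conditions (synthetic data, hypothetical voxel counts) and classifies them with a simple nearest-centroid rule under leave-one-out cross-validation, standing in for the classifiers typically used in fMRI MVPA.

```python
# Hypothetical sketch of MVPA-style decoding on simulated data.
# Each "trial" is a pattern of activity across voxels recorded while a
# movement (grasp vs. reach) is being planned; the question is whether
# the upcoming movement can be predicted from that pattern alone.
import random
import math

random.seed(0)
N_VOXELS = 50   # assumed region size (illustrative)
N_TRIALS = 20   # trials per condition (illustrative)

# Each planned movement is assumed to evoke a slightly different
# spatial pattern of activity across the voxels of a region.
mean_grasp = [random.gauss(0.0, 1.0) for _ in range(N_VOXELS)]
mean_reach = [random.gauss(0.0, 1.0) for _ in range(N_VOXELS)]

def simulate_trial(mean):
    """One noisy plan-epoch activity pattern (one value per voxel)."""
    return [m + random.gauss(0.0, 1.0) for m in mean]

trials = ([(simulate_trial(mean_grasp), "grasp") for _ in range(N_TRIALS)] +
          [(simulate_trial(mean_reach), "reach") for _ in range(N_TRIALS)])

def centroid(patterns):
    """Mean pattern across trials, voxel by voxel."""
    return [sum(vals) / len(vals) for vals in zip(*patterns)]

def distance(a, b):
    """Euclidean distance between two voxel patterns."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Leave-one-out cross-validation: train on all trials but one,
# then predict the held-out trial's condition.
correct = 0
for i, (test_pattern, test_label) in enumerate(trials):
    train = [t for j, t in enumerate(trials) if j != i]
    cents = {lab: centroid([p for p, l in train if l == lab])
             for lab in ("grasp", "reach")}
    predicted = min(cents, key=lambda lab: distance(test_pattern, cents[lab]))
    correct += (predicted == test_label)

accuracy = correct / len(trials)
print(f"Decoding accuracy: {accuracy:.2f}")  # chance level is 0.50
```

Cross-validated accuracy reliably above the 50% chance level is the signature of decodable, condition-specific information in the pre-movement activity patterns; in real analyses the patterns come from measured fMRI data rather than a simulation.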

Project 1 first used fMRI MVPA to determine, largely as a proof of concept, whether specific object-directed hand actions (grasps and reaches) could be predicted from intention-related brain activity patterns. Next, Project 2 examined whether effector-specific (arm vs. eye) movement plans, along with their intended directions (left vs. right), could also be decoded prior to movement. Lastly, Project 3 examined exactly where in the human brain higher-level movement goals are represented independently of how those goals are implemented. To this end, Project 3 had subjects either grasp or reach toward an object (two different motor goals) using either their hand or a novel tool (with kinematics opposite to those of the hand). In this way, the goal of the action (grasping vs. reaching) was maintained across actions, while the way those actions were kinematically achieved changed with the effector (hand or tool). All three projects employed a similar event-related delayed-movement fMRI paradigm that separated planning and execution neural responses in time, allowing us to isolate the preparatory patterns of brain activity that form prior to movement.

Project 1 found that plan-related activity patterns in several parieto-frontal brain regions were predictive of different upcoming hand movements (grasps vs. reaches). Moreover, similar to what had previously been demonstrated only in non-human primates, several parieto-frontal regions could be characterized according to the types of movements they could decode. Project 2 revealed a variety of functional subdivisions: some parieto-frontal areas discriminated movement plans for the different reach directions, some for the different eye-movement directions, and a few areas accurately predicted upcoming directional movements for both the hand and the eye. This latter finding demonstrates, as shown previously in non-human primates, that some brain areas code the end motor goal (i.e., the target location) independent of the effector used. Project 3 identified regions that decoded upcoming hand actions only, regions that decoded upcoming tool actions only, and, most interestingly, areas that predicted actions with both effectors (hand and tool). Notably, some of these latter areas represented the higher-level goals of the movement (grasping vs. reaching) rather than the specific lower-level kinematics (hand vs. tool) required to implement those goals.

Taken together, these findings offer substantial new insights into the types of intention-related signals contained in human brain activity patterns, and they specify a hierarchical neural architecture spanning parieto-frontal cortex that guides the construction of complex object-directed behaviours.