PLoS Computational Biology
Abstract

Consideration of previous successes and failures is essential to mastering a motor skill. Much of what we know about how humans and animals learn from such reinforcement feedback comes from experiments that involve sampling from a small number of discrete actions. Yet it is less well understood how we learn through reinforcement feedback when sampling from a continuous set of possible actions. Navigating a continuous set of possible actions likely requires using gradient information to maximize success. Here we addressed how humans adapt the aim of their hand when experiencing reinforcement feedback associated with a continuous set of possible actions. Specifically, we manipulated the change in the probability of reward given a change in motor action (the reinforcement gradient) to study its influence on learning. We found that participants learned faster when exposed to a steep gradient than to a shallow gradient. Further, when initially positioned between a steep and a shallow gradient that rose in opposite directions, participants were more likely to ascend the steep gradient. We introduce a model that captures our results and several features of motor learning. Taken together, our work suggests that the sensorimotor system relies on temporally recent and spatially local gradient information to drive learning.
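To make the core idea concrete, here is a minimal sketch (not the authors' actual model) of a learner on a one-dimensional reinforcement landscape. All names (`reward_prob`, `learn`) and parameter values are hypothetical: reward is a Bernoulli outcome whose probability rises linearly with the aimed position, and the learner adopts a noisy probe only when it is rewarded, so rewarded probes, which are biased uphill more strongly when the gradient is steep, drag the aim toward higher-reward regions faster.

```python
import random

def reward_prob(aim, slope):
    """Probability of reward rises linearly with the aim; clipped to [0, 1]."""
    return max(0.0, min(1.0, 0.5 + slope * aim))

def learn(slope, trials=300, noise=0.05, seed=0):
    """Toy learner: probe near the current aim (motor noise) and keep the
    probed action only if it is rewarded. Steeper slopes bias the set of
    rewarded probes further uphill, producing faster drift toward reward."""
    rng = random.Random(seed)
    aim = 0.0
    for _ in range(trials):
        probe = aim + rng.gauss(0.0, noise)
        if rng.random() < reward_prob(probe, slope):
            aim = probe
    return aim

# Averaged over random seeds, a steep landscape yields a higher final aim
# than a shallow one after the same number of trials.
steep = sum(learn(2.0, seed=s) for s in range(30)) / 30
shallow = sum(learn(0.1, seed=s) for s in range(30)) / 30
```

Under this success-based update rule, learning speed depends only on local, recent reward outcomes, consistent with the paper's conclusion that the sensorimotor system exploits temporally recent and spatially local gradient information.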


© 2019 Cashaback et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unlimited use, distribution, and reproduction in any medium, provided the original author and source are credited.

The article was originally published as:

Cashaback JGA, Lao CK, Palidis DJ, Coltman SK, McGregor HR, et al. (2019) The gradient of the reinforcement landscape influences sensorimotor learning. PLOS Computational Biology 15(3): e1006839.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.