Electronic Thesis and Dissertation Repository


Master of Science


Computer Science


Mike Katchabaw


Believable virtual humans have important applications in various fields, including computer-based video games. A key challenge in video game programming is producing a non-player character that is autonomous and capable of action selections that appear human. In this thesis, motivations are used as the basis for reinforcement learning. With motives driving the agents' decisions, their actions appear less structured and repetitious, and more human in nature. This also allows developers to easily create game agents with specific motivations, based largely on their narrative purposes. Given minimum and maximum desirable values for each motive, the agents use reinforcement learning to maximize their rewards across all motives. Results show that an agent can learn to satisfy as many as four motives, even with significantly delayed rewards and motive changes caused by other agents. While the actions tested are simple in nature, they demonstrate the potential of a more complex motivation-driven reinforcement learning system. The game developer need only define an agent's motivations, based on the game narrative, and the agent will learn to act realistically as the game progresses.
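The scheme the abstract describes, agents rewarded for keeping each motive within a desirable range and learning via reinforcement, can be sketched as below. This is a minimal illustration only: the motive names, band values, and the use of tabular Q-learning are assumptions for demonstration, not the thesis's actual design.

```python
# Hypothetical sketch: each motive has a desirable [lo, hi] band; the
# per-step reward is +1 for every motive inside its band and -1 for every
# motive outside it, summed across all motives. All names and numbers
# here are illustrative assumptions.
MOTIVES = {"hunger": (0.2, 0.8), "rest": (0.3, 0.9)}

def motive_reward(values):
    """Total reward over all motives: +1 inside the band, -1 outside."""
    return sum(1 if lo <= values[m] <= hi else -1
               for m, (lo, hi) in MOTIVES.items())

def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step; delayed rewards propagate back
    through the discounted best next-state value."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

With both motives inside their bands the reward is 2; an agent whose actions push a motive outside its band sees the reward drop, and the Q-update gradually steers action selection back toward states that satisfy all motives at once.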