Spatio-temporal credit assignment in population learning

Friedrich, Johannes; Urbanczik, Robert; Senn, Walter (2010). Spatio-temporal credit assignment in population learning. In: Computational and Systems Neuroscience 2010, Salt Lake City, UT, USA, 25 Feb - 2 Mar 2010. doi: 10.3389/conf.fnins.2010.03.00263

Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested by classical reinforcement learning algorithms, a twofold problem arises: different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, transmitter releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal component is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms: neurotransmitter concentrations determine plasticity, and learning occurs fully online. Furthermore, it works even if the task to be learned is non-Markovian, i.e. when the reinforcement is not determined by the current state of the system alone but may also depend on past events. The performance of the model is assessed on three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, with reward delivered only at the last action, as is the case in board games. The third task is the inspection game studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields learning behavior that is consistent with behavioral data from humans and monkeys and exhibits the properties of a mixed Nash equilibrium. These examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, as well as with the learning of mixed strategies in two-player games.
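
The abstract does not spell out the update equations, but the three ingredients it names (a reward-modulated stochastic gradient, a population feedback signal for spatial credit, and synaptic eligibility traces for temporal credit) can be illustrated with a minimal sketch. The Python code below is such an illustration only, not the authors' rule: the rate-based network, the sigmoidal escape-noise spiking, the majority-vote population decision, and all parameter values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# --- hypothetical parameters; none of these values come from the abstract ---
N_NEURONS = 50    # size of the decision-making population
N_INPUTS = 20     # presynaptic inputs per neuron
TAU_E = 0.5       # eligibility-trace time constant (s)
DT = 0.01         # simulation time step (s)
ETA = 0.05        # learning rate

w = rng.normal(0.0, 0.1, size=(N_NEURONS, N_INPUTS))  # synaptic weights
elig = np.zeros_like(w)                               # eligibility traces


def run_trial(x_seq, reward_fn):
    """Run one trial over a stimulus sequence; reward arrives only at the end."""
    global w, elig
    elig = np.zeros_like(w)
    decisions = []
    for x in x_seq:                                    # x: presynaptic activity
        u = w @ x                                      # membrane potentials
        p = 1.0 / (1.0 + np.exp(-u))                   # escape-noise spike prob.
        s = (rng.random(N_NEURONS) < p).astype(float)  # stochastic spikes
        d = 1.0 if s.mean() > 0.5 else 0.0             # population decision
        decisions.append(d)
        # Population feedback gates each neuron's plasticity by whether its
        # spike agreed with the population decision (spatial credit).
        agree = np.where(s == d, 1.0, -1.0)
        # The score function (s - p) * x is the stochastic gradient of the
        # log-likelihood of the neuron's spike; low-pass filtering it into an
        # eligibility trace bridges the delay to the reward (temporal credit).
        grad = (agree * (s - p))[:, None] * x[None, :]
        elig += (DT / TAU_E) * (grad - elig)
    r = reward_fn(decisions)  # possibly delayed, stochastic reward
    w = w + ETA * r * elig    # reward-modulated gradient step
    return r


# Example: a delayed-reward task in the spirit of the first task described
# above. Ten stimulus steps; only the decision at the final step is rewarded.
x_seq = [rng.random(N_INPUTS) for _ in range(10)]
for _ in range(500):
    run_trial(x_seq, reward_fn=lambda d: 1.0 if d[-1] == 1.0 else -1.0)
```

The key design point the sketch tries to capture is that the eligibility trace decays with time constant TAU_E instead of being consumed immediately, so a reward delivered only after the last action can still reach the synaptic events that caused it; this is what makes delayed-reward, non-Markovian tasks learnable in such a scheme.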

Item Type:

Conference or Workshop Item (Abstract)

Division/Institute:

04 Faculty of Medicine > Pre-clinic Human Medicine > Institute of Physiology

UniBE Contributor:

Friedrich, Johannes; Urbanczik, Robert; Senn, Walter

Subjects:

600 Technology > 610 Medicine & health

Submitter:

Factscience Import

Date Deposited:

04 Oct 2013 14:11

Last Modified:

05 Dec 2022 14:01

Publisher DOI:

10.3389/conf.fnins.2010.03.00263

URI:

https://boris.unibe.ch/id/eprint/1912 (FactScience: 203992)
