Biologically motivated reinforcement learning in spiking neural networks
Master's Thesis
2022
Abstract
I consider the problem of Reinforcement Learning (RL) in a biologically feasible neural network model, as a proxy for investigating RL in the brain itself. Recent research has demonstrated that synaptic plasticity in higher regions of the brain (such as the cortex and striatum) depends on neuromodulatory signals which encode, amongst other things, a response to reward from the environment. I consider which forms of synaptic plasticity rule might arise under the guidance of an Evolutionary Algorithm (EA) when an agent is tasked with making decisions in response to noisy stimuli (perceptual decision making). I propose a general framework that captures many biologically feasible phenomenological synaptic plasticity rules, including classical Spike-Timing-Dependent Plasticity (STDP) and triplet STDP rules, rate-based rules such as Oja's Rule and the BCM rule, and their reward-modulated extensions (such as Reward-Modulated Spike-Timing-Dependent Plasticity, R-STDP). Within this framework, a general biologically feasible neural network is free to evolve the rules best suited to learning to solve perceptual decision-making tasks.
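
To give a concrete sense of the kind of rule the abstract refers to, the following is a minimal illustrative sketch (in Python) of a reward-modulated pair-based STDP update with an eligibility trace. It is not the thesis's actual framework or code; all parameter names and values (a_plus, a_minus, tau_e, eta, and so on) are assumptions chosen for illustration only.

import numpy as np

# Illustrative sketch of reward-modulated STDP (R-STDP) with an eligibility trace.
# Parameter names and values are assumptions, not the thesis's actual rule.

rng = np.random.default_rng(0)

n_pre, n_post = 5, 3
w = rng.normal(0.0, 0.1, size=(n_post, n_pre))   # synaptic weights
e = np.zeros_like(w)                              # eligibility traces
x_pre = np.zeros(n_pre)                           # presynaptic spike traces
x_post = np.zeros(n_post)                         # postsynaptic spike traces

dt = 1.0          # time step (ms)
tau_pre = 20.0    # pre-trace time constant (ms)
tau_post = 20.0   # post-trace time constant (ms)
tau_e = 500.0     # eligibility-trace time constant (ms)
a_plus, a_minus = 0.01, 0.012
eta = 0.1         # learning rate applied to the reward-gated update

def rstdp_step(pre_spikes, post_spikes, reward):
    """One Euler step of a reward-modulated pair-based STDP rule."""
    global w, e, x_pre, x_post
    # Decay and update the spike traces.
    x_pre += (-x_pre / tau_pre) * dt + pre_spikes
    x_post += (-x_post / tau_post) * dt + post_spikes
    # Pair-based STDP term: potentiation when post follows pre,
    # depression when pre follows post.
    stdp = (a_plus * np.outer(post_spikes, x_pre)
            - a_minus * np.outer(x_post, pre_spikes))
    # Accumulate the STDP term into a slowly decaying eligibility trace.
    e += (-e / tau_e) * dt + stdp
    # A reward signal (e.g. dopamine-like) gates the actual weight change.
    w += eta * reward * e * dt

# Example: drive the synapses with random spikes and an occasional reward.
for t in range(1000):
    pre = (rng.random(n_pre) < 0.02).astype(float)
    post = (rng.random(n_post) < 0.02).astype(float)
    r = 1.0 if t % 200 == 0 else 0.0
    rstdp_step(pre, post, r)

The key design point this sketch illustrates is that the raw STDP term does not change the weights directly: it is stored in an eligibility trace, and only the arrival of a reward signal converts that trace into a weight update.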
Reference:
Rance, D. 2022. Biologically motivated reinforcement learning in spiking neural networks. Master's thesis. Faculty of Science, Department of Mathematics and Applied Mathematics. http://hdl.handle.net/11427/37753