Metacontrol of reinforcement learning

Modern theories of reinforcement learning posit two systems competing for control of behavior: a "model-free" or "habitual" system that learns cached state-action values, and a "model-based" or "goal-directed" system that learns a world model which is then used to plan actions. I will argue that humans can adaptively invoke model-based computation when its benefits outweigh its costs. A simple meta-control learning rule can capture the dynamics of this cost-benefit analysis. Neuroimaging evidence points to the role of cognitive control regions in this computation.
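
The cost-benefit arbitration described above can be illustrated with a minimal sketch. All numbers and names below are illustrative assumptions, not the talk's actual model: the agent tracks the average payoff advantage of model-based planning with a delta rule, and invokes the planner only when that learned advantage outweighs a fixed computational cost.

```python
# Minimal sketch of a meta-control learning rule (all values illustrative):
# planning is invoked only when its learned benefit exceeds its cost.

V_MB = 1.0    # assumed payoff when the model-based system plans
V_MF = 0.6    # assumed payoff from cached model-free values
COST = 0.2    # assumed effort/time cost of planning

alpha = 0.1      # learning rate of the delta rule
advantage = 0.0  # learned estimate of (V_MB - V_MF)

history = []
for trial in range(50):
    # Cost-benefit arbitration: plan only if the estimated benefit pays off
    controller = "model-based" if advantage > COST else "model-free"
    history.append(controller)
    # Delta-rule update toward the observed benefit of planning
    advantage += alpha * ((V_MB - V_MF) - advantage)

print(history[0], history[-1])
```

Early trials default to the cheap habitual controller; once the learned advantage of planning crosses the cost threshold, control shifts to the model-based system, capturing the dynamics the abstract alludes to.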

Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets

How recurrently connected networks of spiking neurons in the
brain acquire powerful information-processing capabilities through
learning has remained a mystery. This lack of understanding is linked
to a lack of learning algorithms for recurrent networks of spiking
neurons (RSNNs) that are both functionally powerful and
implementable by known biological mechanisms. The gold standard for
learning in recurrent neural networks in machine learning is
back-propagation through time (BPTT), which implements stochastic

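To make the BPTT baseline concrete, here is a minimal sketch for a tiny vanilla recurrent network: unroll the recurrence forward in time, then propagate the error backward through every time step. The network sizes, the squared-error loss, and the use of a rate-based (non-spiking) tanh unit are my own simplifying assumptions, not the speaker's setup; a finite-difference check confirms the unrolled gradient.

```python
import numpy as np

# Tiny vanilla RNN: h_t = tanh(W_rec h_{t-1} + W_in x_t)
# (rate-based stand-in; spiking nonlinearities are not differentiable)
rng = np.random.default_rng(1)
n_in, n_rec, T = 2, 3, 4
W_in = rng.normal(scale=0.5, size=(n_rec, n_in))
W_rec = rng.normal(scale=0.5, size=(n_rec, n_rec))
x = rng.normal(size=(T, n_in))   # input sequence
target = rng.normal(size=n_rec)  # target for the final hidden state

# Forward pass: unroll the recurrence and keep every hidden state
h = [np.zeros(n_rec)]
for t in range(T):
    h.append(np.tanh(W_rec @ h[t] + W_in @ x[t]))
loss = 0.5 * np.sum((h[-1] - target) ** 2)

# Backward pass (BPTT): carry the error back through all time steps
dW_rec = np.zeros_like(W_rec)
delta = h[-1] - target                 # dL/dh_T
for t in reversed(range(T)):
    pre = delta * (1 - h[t + 1] ** 2)  # back through the tanh
    dW_rec += np.outer(pre, h[t])      # accumulate over time steps
    delta = W_rec.T @ pre              # error at the previous step

# Sanity check: central finite difference on one recurrent weight
def run(W):
    hh = np.zeros(n_rec)
    for t in range(T):
        hh = np.tanh(W @ hh + W_in @ x[t])
    return 0.5 * np.sum((hh - target) ** 2)

eps, i, j = 1e-5, 0, 1
W_p = W_rec.copy(); W_p[i, j] += eps
W_m = W_rec.copy(); W_m[i, j] -= eps
numeric = (run(W_p) - run(W_m)) / (2 * eps)
assert abs(numeric - dW_rec[i, j]) < 1e-6
```

Note what makes BPTT biologically implausible, and thus motivates the alternatives of the talk: the backward sweep requires storing the entire forward trajectory and propagating errors backward in time through the same weights.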
Moving toward a cellular-based understanding of theta generation in the hippocampus

Oscillatory activities are a ubiquitous feature of brain recordings and
likely form part of the neural code. In particular, theta rhythms
(3-12 Hz) in the hippocampus play fundamental roles in memory processing.
Can we understand how theta rhythms are generated from a cellular
perspective? Addressing this question is challenging, largely
because of the multi-scale nature of our brains. However, we need to
tackle this challenge, as it is clear that cellular specifics can dictate