Estimating Scale-Invariant Future in Continuous Time.

Journal: Neural Computation
PMID:

Abstract

Natural learners must compute an estimate of future outcomes that follow from a stimulus in continuous time. Widely used reinforcement learning algorithms discretize continuous time and estimate either transition functions from one step to the next (model-based algorithms) or a scalar value of exponentially discounted future reward using the Bellman equation (model-free algorithms). An important drawback of model-based algorithms is that computational cost grows linearly with the amount of time to be simulated. An important drawback of model-free algorithms is the need to select a timescale for exponential discounting. We present a computational mechanism, grounded in work from psychology and neuroscience, for computing a scale-invariant timeline of future outcomes. This mechanism efficiently computes an estimate of inputs as a function of future time on a logarithmically compressed scale and can be used to generate a scale-invariant, power-law-discounted estimate of expected future reward. The representation of future time retains information about what will happen when. The entire timeline can be constructed in a single parallel operation that generates concrete behavioral and neural predictions. This computational mechanism could be incorporated into future reinforcement learning algorithms.
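The abstract does not spell out the mechanism's equations, but two of its claims, a logarithmically compressed grid of future times and a power-law-discounted value read off that timeline, can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the authors' implementation: the helper `log_spaced_future_times`, the toy timeline `f_hat`, the discounting exponent `alpha`, and the trapezoidal quadrature are all hypothetical choices.

```python
# Minimal sketch, assuming a predicted-future timeline evaluated on a
# logarithmically spaced grid of future times (not the paper's actual code).
import numpy as np

def log_spaced_future_times(tau_min=0.1, tau_max=100.0, n_nodes=50):
    """Logarithmically compressed grid of future time points (assumed form)."""
    return np.geomspace(tau_min, tau_max, n_nodes)

def power_law_discounted_value(f_hat, taus, alpha=1.0):
    """Collapse a predicted-future timeline into one scalar value.

    f_hat : predicted reward rate at each future time in `taus`
    alpha : power-law discounting exponent (hypothetical parameter)
    Weights each future point by tau**(-alpha) and integrates over the
    log-spaced grid with a trapezoidal rule.
    """
    integrand = (taus ** (-alpha)) * f_hat
    d_tau = np.diff(taus)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * d_tau))

# Usage: a toy timeline predicting a single future reward near tau = 5 s.
taus = log_spaced_future_times()
f_hat = np.exp(-0.5 * ((np.log(taus) - np.log(5.0)) / 0.3) ** 2)
print(power_law_discounted_value(f_hat, taus))
```

Because the grid is log-spaced, the same number of nodes covers short and long horizons with proportional resolution, which is what makes the resulting discounted estimate scale-invariant under this assumed construction.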

Authors

  • Zoran Tiganj
    Center for Memory and Brain, Boston University, Boston, Massachusetts.
  • Samuel J Gershman
    Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA. gershman@fas.harvard.edu.
  • Per B Sederberg
    Department of Psychology, University of Virginia, Charlottesville, VA 22904, USA. pbs5u@virginia.edu.
  • Marc W Howard
    Department of Physics, Boston University, Boston, Massachusetts.