ShockLab

AUTHORS

Rowan Hodson, Bruce Bassett, Charel van Hoof, Benjamin Rosman, Mark Solms, Jonathan P. Shock, Ryan Smith

17/8/2023

Abstract

Active Inference is a recently developed framework for modeling decision processes under uncertainty. Over the last several years, empirical and theoretical work has begun to evaluate the strengths and weaknesses of this approach and how it might be extended and improved. One recent extension is the “sophisticated inference” (SI) algorithm, which improves performance on multi-step planning problems through a recursive decision tree search. However, little work to date has compared SI to other established planning algorithms in reinforcement learning (RL). In addition, SI was developed with a focus on inference as opposed to learning. The present paper therefore has two aims. First, we compare the performance of SI to Bayesian RL schemes designed to solve similar problems. Second, we present and compare an extension of SI – sophisticated learning (SL) – that more fully incorporates active learning during planning. SL maintains beliefs about how model parameters would change under the future observations expected under each policy. This allows a form of counterfactual retrospective inference in which the agent considers what could be learned from current or past observations given different future observations. To accomplish these aims, we make use of a novel, biologically inspired environment that requires an optimal balance between goal-seeking and active learning, and which was designed to highlight the problem structure for which SL offers a unique solution. This setup requires an agent to continually search an open environment for available (but changing) resources in the presence of competing affordances for information gain.
Our simulations demonstrate that SL outperforms all other algorithms in this context – most notably, Bayes-adaptive RL and upper confidence bound (UCB) algorithms, which aim to solve multi-step planning problems using similar principles (i.e., directed exploration and counterfactual reasoning about belief updates given different possible actions/observations). These results provide further support for the utility of Active Inference in solving this class of biologically relevant problems and offer new tools for testing hypotheses about human cognition.