Cross Roads #6

Learning to Predict Without Looking Ahead

David Ha, Google Brain

November 18, 2019
University of Tokyo

David Ha from Google Brain presented "Learning to Predict Without Looking Ahead" for Cross Roads #6.

David's abstract:

Much of model-based reinforcement learning involves learning a model of an agent’s world and training an agent to leverage this model to perform a task more efficiently. While these models are demonstrably useful for agents, every naturally occurring model of the world of which we are aware – e.g., a brain – arose as a byproduct of competing evolutionary pressures for survival, not minimization of a supervised forward-predictive loss via gradient descent. That useful models can arise out of the messy and slow optimization process of evolution suggests that forward-predictive modeling can arise as a side-effect of optimization under the right circumstances.

Crucially, this optimization process need not explicitly be a forward-predictive loss. In this work, we introduce a modification to traditional reinforcement learning which we call observational dropout, whereby we limit the agent's ability to observe the real environment at each timestep. In doing so, we can coerce an agent into learning a world model to fill in the observation gaps during reinforcement learning. We show that the emerged world model, while not explicitly trained to predict the future, can help the agent learn key skills required to perform well in its environment.
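The observational-dropout idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the toy environment, policy, and world model here are all hypothetical stand-ins. The key mechanism is that at each timestep the agent sees the real observation only with some small probability; otherwise it must act on its world model's prediction.

```python
import random

def rollout_with_observational_dropout(env, policy, world_model,
                                       p_observe=0.1, max_steps=100):
    """Sketch of a rollout under observational dropout: with probability
    p_observe the agent receives the real observation; otherwise it only
    receives its own world model's prediction of that observation."""
    obs = env.reset()        # episode starts from a real observation
    seen_obs = obs           # what the agent actually "sees"
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(seen_obs)                 # act on the seen observation
        next_obs, reward, done = env.step(action)
        # the world model predicts the next observation from what was seen
        predicted_obs = world_model(seen_obs, action)
        # observational dropout: occasionally let the real observation through
        if random.random() < p_observe:
            seen_obs = next_obs
        else:
            seen_obs = predicted_obs
        total_reward += reward
        if done:
            break
    return total_reward

# Hypothetical 1-D toy environment: the state drifts by the chosen action,
# and reward is higher the closer the state stays to a target of 5.0.
class ToyEnv:
    def reset(self):
        self.state, self.t = 0.0, 0
        return self.state
    def step(self, action):
        self.state += action
        self.t += 1
        reward = -abs(self.state - 5.0)
        return self.state, reward, self.t >= 20

# A simple hand-written policy and a (here, perfect) world model.
policy = lambda obs: 1.0 if obs < 5.0 else -1.0
world_model = lambda obs, action: obs + action

total = rollout_with_observational_dropout(ToyEnv(), policy, world_model)
```

Because the agent mostly acts on predicted observations, any policy that scores well under this regime implicitly requires the learned world model to be useful for the task, even though no forward-prediction loss is ever optimized directly.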

Thank you to David, everyone who came out mid-week, Cross Compass for their sponsorship, and the University of Tokyo for kindly hosting us.