BRAIN Lectures – Reinforcement Learning
The BRAIN Lecture
LINK TO SLIDES:
Arjun Chandra is returning to Gløshaugen on the 15th of November. He is a senior research scientist at Telenor Research focusing on deep reinforcement learning.
Last year he held a successful three-part lecture series on deep reinforcement learning. This year he returns with a finer-tuned two-hour lecture on the same theme. Arjun brings great examples to help get through the harder parts of this advanced lecture.
This lecture will introduce deep reinforcement learning (deep RL) and attempt to draw a comprehensive picture of the field as it stands. Work from various academic and industrial research labs will be covered from an intuitive point of view. In addition, the historical development of the field will be examined, building up to current frontiers. The lecture will also aim to evoke a sense of appreciation for the fundamental research challenges in the field.
More generally, RL is a conceptual framework for studying and synthesising autonomous agents that learn and plan to make far-sighted decisions from their experiences. These experiences result from agents attempting to solve problems interactively, learning from feedback on their solutions, and adapting those solutions in turn.
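The interaction described above can be sketched as a simple loop: the agent acts, the environment responds with a new observation and a reward, and the agent uses that feedback to do better. The toy environment and random placeholder policy below are illustrative assumptions, not part of the lecture material.

```python
import random

class ToyEnvironment:
    """A hypothetical one-dimensional world: the agent starts at
    position 0 and is rewarded only on reaching position 5."""
    def __init__(self):
        self.position = 0

    def step(self, action):
        # action is -1 (step left) or +1 (step right)
        self.position += action
        reward = 1 if self.position == 5 else 0
        done = self.position == 5
        return self.position, reward, done

# The generic RL interaction loop: act, observe feedback, adapt.
env = ToyEnvironment()
state, done = 0, False
for _ in range(1000):  # cap the episode length
    if done:
        break
    action = random.choice([-1, 1])         # a placeholder policy
    state, reward, done = env.step(action)  # feedback from environment
```

A learning agent would replace `random.choice` with a policy that is updated from the observed rewards, which is exactly what RL algorithms provide.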
The RL framework primarily forges solutions to control problems. These solutions take the form of agent behaviours: action/decision sequences or plans. Crucially, the control problems are such that actions executed by an agent have delayed consequences. Furthermore, the dynamics of such problems are hard to model analytically. A robot that can walk, a car that can drive itself, a drone that can fly itself, a gaming agent that scores high or wins, an industrial plant that controls resource use to reduce long-term energy consumption, and a customer relationship agent that interacts with customers to keep them satisfied are some examples fitting the framework. Problems in the telecommunications, health, and education sectors, amongst others, also fit the framework.
Up until recently, theoretically sound RL algorithms were hard to apply in practice. One reason was the sheer number of problem states (e.g. observations via sensor measurements), which explodes combinatorially as the problem becomes more realistic. Handling such state spaces called for additional machinery, and deep learning, together with innovations in training regimens, came to the rescue. Thus began the rise of what is now referred to as deep reinforcement learning.
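To see why the number of states matters, consider storing a value for every possible sensor reading in a table versus approximating values with a small parameterised function. The numbers and features below are made up for illustration; deep RL uses neural networks rather than the linear function sketched here, but the idea of replacing a table with parameters is the same.

```python
# Tabular value storage needs one entry per state. With 10 sensors,
# each reading one of 100 values, that is 100**10 entries: intractable.
# A function approximator instead maps state features to a value using
# a fixed, small number of parameters.

def linear_value(features, weights):
    """Approximate the value of a state as a weighted sum of its features."""
    return sum(f * w for f, w in zip(features, weights))

# Hypothetical 4-feature state: 4 weights stand in for an enormous table.
weights = [0.5, -0.2, 0.1, 0.0]
value = linear_value([1.0, 2.0, 3.0, 4.0], weights)
```

A deep network generalises this by stacking many such weighted sums with nonlinearities, letting it assign sensible values even to states it has never visited.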
This is a public lecture open to all.