Home

Deep Reinforcement Learning

Deep Reinforcement Learning is the textbook for the graduate course that we teach at Leiden University. The book was written by Aske Plaat and published by Springer Nature in 2022. You can order a copy from the bookstore and via SpringerLink. A preprint is available at arXiv (reproduced with permission of Springer Nature Singapore Pte Ltd).

Deep reinforcement learning has attracted much attention due to groundbreaking results by AlphaGo, and in poker, StarCraft, protein folding, robotics, and many other areas. Progress in the field has benefited greatly from an open research culture in which environments, benchmarks, code, and hyperparameters are shared on GitHub, and preprints of papers are shared on arXiv. In this spirit, and to facilitate teaching this wonderful topic, a full preprint of this work is reproduced with permission of Springer Nature Singapore Pte Ltd at arXiv: https://arxiv.org/abs/2201.02135. The final authenticated version is available online at: https://dx.doi.org/.

Table of Contents:
Preface
1. Introduction
2. Tabular Value-based Reinforcement Learning
3. Deep Value-based Reinforcement Learning
4. Policy-based Reinforcement Learning
5. Model-based Reinforcement Learning
6. Two-Agent Self-Play
7. Multi-Agent Reinforcement Learning
8. Hierarchical Reinforcement Learning
9. Meta-Learning
10. Further Developments
A. Mathematical Background
B. Deep Supervised Learning
C. Deep Reinforcement Learning Suites

Slides for the course Reinforcement Learning (Master Computer Science 2022 at Leiden University):
1. Introduction
1B. Deep Supervised Learning
2. Tabular Value-Based Methods
3. Deep Value-Based Methods
4. Policy-Based Methods
5. Model-Based Methods
6. Two-Agent Self-Play
7. Multi-Agent
8. Hierarchical
9. Transfer & Meta
10. Eval & Future

Exercises

Exams

2020/2021
Before we transitioned fully to deep reinforcement learning, in 2020 and 2021 the course had a different focus and also covered combinatorial search and games. We then used a different book: Learning to Play, also available from Springer and as a free pdf. (Before that, we used Sutton & Barto's classic, available from MIT Press and as a free pdf.)