What you can expect
Lectures and one-on-ones with world-class AI researchers
Connect with talented researchers from 45 countries and dozens of leading industries
Discover the latest advances in deep learning and reinforcement learning
Agenda // updates will be made as they become available
Mon Aug 3
There has recently been some interest in letting communities of neural networks evolve their own language in order to solve tasks together. The lecture will review some of this work, with an emphasis on analyzing the kind of communication code that emerges in these simulations, and what it teaches us about neural networks and languages in general.
Tue Aug 4
What is the effect of depth on learning dynamics in neural networks? What interplay of dynamics, architecture, and data make good generalization possible in overparameterized networks? How do deep networks organize their internal representations to represent rich structure in the world like hierarchies? This talk will give an overview of advances in deep learning theory that are beginning to shed light on these questions.
Space is very limited for speaker 1:1 sessions. Imagine a 10-minute private conversation with one of the DLRLSS speakers!
Random draws will determine the lucky few who can participate!
In this lecture, Will Hamilton will discuss the area of graph representation learning, introducing standard techniques for learning low-dimensional embeddings of graph data as well as the graph neural network (GNN) framework.
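The core operation behind most GNNs, a single message-passing step, can be sketched in a few lines. This is a hypothetical minimal example using mean aggregation; the function name and mixing weights are illustrative assumptions, not code from the lecture:

```python
def gnn_layer(features, adj, weight_self=0.5, weight_neigh=0.5):
    """One mean-aggregation message-passing step (illustrative sketch).

    features: dict node -> feature vector (list of floats)
    adj: dict node -> list of neighbor nodes
    Each node's new vector is a weighted mix of its own vector
    and the mean of its neighbors' vectors.
    """
    dim = len(next(iter(features.values())))
    updated = {}
    for node, h in features.items():
        neigh = adj.get(node, [])
        if neigh:
            mean = [sum(features[n][d] for n in neigh) / len(neigh)
                    for d in range(dim)]
        else:
            mean = [0.0] * dim  # isolated node: no incoming messages
        updated[node] = [weight_self * h[d] + weight_neigh * mean[d]
                         for d in range(dim)]
    return updated
```

Stacking several such layers (with learned weights and nonlinearities in place of the fixed mix above) is what lets a GNN propagate information across multi-hop neighborhoods.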
Wed Aug 5
This lecture provides an introduction to reinforcement learning and intelligence: the study and design of agents that interact with a complex, uncertain world to achieve a goal. We will emphasize agents that can make near-optimal decisions in a timely manner with incomplete information and limited computational resources. The lecture will cover Markov decision processes, reinforcement learning, and function approximation (online supervised learning).
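To make the Markov decision process piece concrete, here is a minimal value-iteration sketch on a tabular MDP. The function and data layout are illustrative assumptions, not the lecture's code:

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration on a tabular MDP (illustrative sketch).

    P[s][a]: list of (probability, next_state) pairs for action a in state s
    R[s][a]: expected immediate reward for action a in state s
    Returns the (approximate) optimal state values.
    """
    n = len(P)
    V = [0.0] * n
    while True:
        # Bellman optimality backup for every state
        new_V = [max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                     for a in range(len(P[s])))
                 for s in range(n)]
        if max(abs(new_V[s] - V[s]) for s in range(n)) < tol:
            return new_V
        V = new_V
```

For example, a state whose only action yields reward 1 and loops back to itself has value 1/(1-gamma) = 10 when gamma = 0.9, which the iteration recovers.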
Reinforcement learning (RL) is a systematic approach to learning and decision making. Developed and studied for decades, recent combinations of RL with modern deep learning have led to impressive demonstrations of the capabilities of today's RL systems, and have fueled an explosion of interest and research activity. This seminar starts from fundamentals of reinforcement learning and builds up to a better understanding of how domain structure and recent deep learning advances can push current limits in terms of flexible and sample-efficient reinforcement learning.
This session will provide an introduction to bandits, focusing on the stochastic bandit setting. We will review the most common algorithms, such as Upper Confidence Bound (UCB) and Thompson Sampling.
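For intuition, the UCB idea can be sketched in a few lines: pull the arm whose empirical mean plus an exploration bonus is largest. This is a hypothetical minimal UCB1 on Bernoulli arms; names and constants are illustrative, not the speaker's code:

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Minimal UCB1 on Bernoulli-reward arms (illustrative sketch)."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k    # number of pulls per arm
    sums = [0.0] * k    # cumulative reward per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # pull each arm once to initialize
        else:
            # empirical mean + UCB1 exploration bonus
            arm = max(range(k),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return counts, total_reward
```

Run on two arms with means 0.2 and 0.8, the pull counts concentrate on the better arm while the bonus term guarantees every arm keeps being explored at a logarithmic rate.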
Thu Aug 6
This talk presents a broad overview of the field of model-based (deep) reinforcement learning (MBRL). MBRL methods utilize a model of the environment to make decisions and present unique opportunities and challenges beyond model-free RL. I will discuss methods for learning transition and reward models, ways in which those models can effectively be used to make better decisions, and the relationship between planning and learning. I will also highlight ways that models of the world can be leveraged beyond the typical RL setting, and what insights might be drawn from human cognition when designing future MBRL systems.
Participants will receive an invitation link to Gathertown where posters will be virtually displayed. Move around the space and interact live with fellow participants.
We review basic concepts of convex duality and summarize how this duality may be applied to a variety of reinforcement learning (RL) settings, including policy evaluation or optimization, online or offline learning, and discounted or undiscounted rewards. The derivations yield a number of intriguing results, including entropy-regularized RL and the recently proposed *DICE family. Thus, through the lens of convex duality, we provide a unified treatment and perspective on these works, which we hope will enable researchers to better use and apply the tools of convex duality to make further progress in RL.
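As a reference point, the duality the abstract builds on is the classic linear-programming view of discounted policy optimization. The notation below is standard material, not the talk's own derivation: the primal optimizes over value functions, and its dual optimizes over state-action occupancy measures, which is the object the *DICE methods estimate.

```latex
% Primal LP over values V (discount \gamma, start distribution \mu):
\min_V \; (1-\gamma)\, \mathbb{E}_{s_0 \sim \mu}\!\left[V(s_0)\right]
\quad \text{s.t.} \quad
V(s) \ge r(s,a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\!\left[V(s')\right]
\;\; \forall (s,a)

% Dual LP over state-action occupancies d \ge 0:
\max_{d \ge 0} \; \sum_{s,a} d(s,a)\, r(s,a)
\quad \text{s.t.} \quad
\sum_a d(s,a) = (1-\gamma)\,\mu(s)
  + \gamma \sum_{s',a'} P(s \mid s',a')\, d(s',a')
\;\; \forall s
```

Entropy-regularizing either program yields the smoothed objectives mentioned in the abstract.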
In recent years we have seen fast progress on a number of zero-sum benchmark problems in AI, e.g., Go, Poker, and Dota. In contrast, success in the real world requires humans to collaborate and communicate with others, in settings that are, at least partially, cooperative. Recently, the card game Hanabi has been established as a new benchmark environment to fill this gap. In particular, Hanabi is interesting to humans since it is entirely focused on theory of mind, i.e., the ability to reason about the intentions, beliefs, and point of view of other agents when observing their actions. This is particularly important in applications such as communication, assistive technologies, and autonomous driving.
This talk will provide an update on recent progress in this area. It will start with novel state-of-the-art methods for the self-play setting. Next, it will introduce the Zero-Shot Coordination setting as a new frontier for multi-agent research. Finally, it will introduce Other-Play, a novel learning algorithm that allows agents to coordinate ad hoc and biases learning toward more human-compatible policies.
Fri Aug 7