Reinforcement learning (RL) methods typically rely on trial-and-error interaction with the environment, starting from scratch, to discover effective behaviors. While this paradigm can discover good strategies, it also prevents RL methods from collecting enough experience or training data in real-world problems where active interaction is expensive (e.g., in drug design) or dangerous (e.g., for robots operating around humans). My work develops approaches that alleviate this limitation: how can we learn policies that make effective decisions entirely from previously collected, static datasets, in an offline manner? In this talk, I will discuss the challenges that arise in this kind of offline reinforcement learning (offline RL) and present algorithms and techniques that address them. I will then discuss how my approaches for offline RL and decision-making have enabled progress on real-world problems such as hardware accelerator design, robotic manipulation, and computational chemistry. Finally, I will discuss how offline RL methods can benefit from the generalization capabilities offered by large, expressive models, much as supervised learning does.
Bio: Aviral Kumar is a final-year Ph.D. student at UC Berkeley. His research focuses on developing effective and reliable approaches for (sequential) decision-making. Towards this goal, he focuses on designing reinforcement learning techniques that learn from static datasets and on understanding and applying these methods in practice. Before his Ph.D., Aviral obtained his B.Tech. in Computer Science from IIT Bombay in India. He is a recipient of the C.V. & Daulat Ramamoorthy Distinguished Research Award, given to one Ph.D. student in EECS at Berkeley for outstanding contributions to a new area of research in computer science, the Facebook Ph.D. Fellowship in Machine Learning, and the Apple Scholars in AI/ML Ph.D. Fellowship.
To request accommodations for a disability, please contact Emily Lawrence (emilyl@cs.princeton.edu) at least one week prior to the event.