Natural language is built from a library of concepts and compositional operators that provide a rich source of information about how humans understand the world. Can this information help us build better machine learning models? In this talk, we'll explore three ways of integrating compositional linguistic structure and learning: using language as a source of modular reasoning operators for question answering, as a scaffold for fast and generalizable reinforcement learning, and as a tool for understanding representations in neural networks.
Bio:
Jacob Andreas is a fifth-year PhD student at UC Berkeley working in natural language processing. He holds a B.S. from Columbia and an M.Phil. from Cambridge, where he studied as a Churchill Scholar. His papers were recognized at NAACL 2016 and ICML 2017. Jacob has been an NSF graduate fellow, a Huawei–Berkeley AI fellow, and a Facebook fellow.