Grounding natural language with autonomous interaction

[[{"fid":"845","view_mode":"embedded_left","fields":{"format":"embedded_left","field_file_image_alt_text[und][0][value]":"Photo of Karthik Narasimhan","field_file_image_title_text[und][0][value]":"Karthik Narasimhan","field_file_caption_credit[und][0][value]":"","field_file_caption_credit[und][0][format]":"full_html"},"type":"media","field_deltas":{"1":{"format":"embedded_left","field_file_image_alt_text[und][0][value]":"Photo of Karthik Narasimhan","field_file_image_title_text[und][0][value]":"Karthik Narasimhan","field_file_caption_credit[und][0][value]":"","field_file_caption_credit[und][0][format]":"full_html"}},"attributes":{"alt":"Photo of Karthik Narasimhan","title":"Karthik Narasimhan","height":257,"width":250,"class":"media-element file-embedded-left","data-delta":"1"},"link_text":false}]]The resurgence of deep neural networks has resulted in impressive advances in natural language processing (NLP). However, this success is dependent on access to large amounts of structured supervision, often manually constructed and unavailable for many applications and domains. In this talk, I will present novel computational models that integrate reinforcement learning with language understanding to induce grounded representations of semantics using unstructured feedback. These techniques not only enable task-optimized representations which reduce dependence on high quality annotations, but also exploit language in adapting control policies across different environments.  First, I will describe an approach for learning to play text-based games, where all interaction is through natural language and the only source of feedback is in-game rewards. Second, I will exhibit a framework for utilizing textual descriptions to assist cross-domain policy transfer for reinforcement learning. Finally, I will demonstrate how reinforcement learning can enhance traditional NLP systems in low resource scenarios. In particular, I describe an autonomous agent that can learn to acquire and integrate external information to improve information extraction.

Karthik Narasimhan is a PhD candidate working with Prof. Regina Barzilay at CSAIL, MIT. His research interests are in natural language understanding and deep reinforcement learning. His current focus is on developing autonomous systems that can acquire language understanding through interaction with their environment while also utilizing textual knowledge to drive their decision making. His work has received a best paper award at EMNLP 2016 and an honorable mention for best paper at EMNLP 2015. Karthik received a B.Tech. in Computer Science and Engineering from IIT Madras in 2012 and an S.M. in Computer Science from MIT in 2014.

Date and Time
Friday March 10, 2017 12:30pm - 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Host
Prof. Barbara Engelhardt

Contributions to and/or sponsorship of any event do not constitute departmental or institutional endorsement of the specific program, speakers or views presented.
