Recursive Deep Learning for Modeling Compositional Meaning in Language

[[{"fid":"407","view_mode":"embedded_left","fields":{"format":"embedded_left","field_file_image_alt_text[und][0][value]":"Richard Socher","field_file_image_title_text[und][0][value]":"","field_file_caption_credit[und][0][value]":"%3Cp%3ERichard%20Socher%3C%2Fp%3E%0A","field_file_caption_credit[und][0][format]":"full_html"},"type":"media","attributes":{"alt":"Richard Socher","height":310,"width":250,"class":"media-element file-embedded-left"},"link_text":null}]]Great progress has been made in natural language processing thanks to many different algorithms, each often specific to one application. Most learning algorithms force language into simplified representations such as bag-of-words or fixed-sized windows or require human-designed features. I will introduce three models based on recursive neural networks that can learn linguistically plausible representations of language. These methods jointly learn compositional features and grammatical sentence structure for parsing or phrase level sentiment predictions. They can also be used to represent the visual meaning of a sentence which can be used to find images based on query sentences or to describe images with a more complex description than single object names.

Besides achieving state-of-the-art performance, the models capture interesting phenomena in language such as compositionality. For instance, people easily see that the "with" phrase in "eating spaghetti with a spoon" specifies a way of eating, whereas in "eating spaghetti with some pesto" it specifies the dish. I show that my model resolves such prepositional attachment ambiguities well thanks to its distributed representations. In sentiment analysis, a new tensor-based recursive model learns different types of high-level negation and how they can change the meaning of longer phrases containing many positive words. It also learns that when contrastive conjunctions such as "but" are used, the sentiment of the phrases following them usually dominates.
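
To make the tensor-based composition concrete: in published recursive neural tensor networks, the two child vectors also interact multiplicatively through a third-order tensor in addition to the usual linear map, and a softmax classifier predicts sentiment at every node of the parse tree. Below is a minimal sketch under assumed toy dimensions, random untrained weights, and an assumed 5-class sentiment scale; it illustrates the general technique rather than the speaker's exact model:

```python
import numpy as np

d = 4  # toy dimension (illustrative assumption)
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(d, 2 * d))          # linear part of the composition
V = rng.normal(scale=0.01, size=(d, 2 * d, 2 * d))  # one tensor slice per output unit
W_s = rng.normal(scale=0.1, size=(5, d))            # assumed 5-way sentiment softmax

def compose(left, right):
    """Tensor composition: p_k = tanh(x^T V[k] x + (W x)_k), with x = [left; right]."""
    x = np.concatenate([left, right])
    quadratic = np.array([x @ V[k] @ x for k in range(d)])
    return np.tanh(quadratic + W @ x)

def sentiment(node):
    """Softmax sentiment distribution predicted at a single tree node."""
    z = W_s @ node
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()
```

The multiplicative term is what lets a word like "not" or "but" transform, rather than merely shift, the meaning of the phrase it combines with; training the per-node classifier on phrase-level labels is what exposes the model to negation and contrast at many scales.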

Richard Socher is a PhD student at Stanford working with Chris Manning and Andrew Ng. His research interests are machine learning for NLP and vision. He is interested in developing new deep learning models that learn useful features, capture compositional structure in multiple modalities, and perform well across different tasks. He was awarded the 2011 Yahoo! Key Scientific Challenges Award, the Distinguished Application Paper Award at ICML 2011, a Microsoft Research PhD Fellowship in 2012, and a 2013 "Magic Grant" from the Brown Institute for Media Innovation.

Date and Time
Tuesday March 11, 2014 4:30pm - 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Host
Sebastian Seung

