Knowledge from Language via Deep Understanding

Almost all of human knowledge is now available online, but the vast majority of it is encoded principally as human language explanations. In this talk, I explore novel neural network approaches that open up opportunities for gaining a deep understanding of natural language text. First, I show how distributed representations enabled building a smaller, faster, and more accurate dependency parser for finding the structure of sentences. Then I show how related neural technologies can improve the construction of knowledge bases from text. But perhaps we don't need this intermediate step, and can instead gain knowledge and answer people's questions directly from large textbases? In the third part, I explore this possibility by reading text directly with a simple yet highly effective neural architecture for question answering.

Bio:
Danqi Chen is a PhD student in Computer Science at Stanford University, working with Christopher Manning on deep learning approaches to natural language processing. Her research centers on how computers can achieve a deep understanding of human language and the information it contains. Danqi received Outstanding Paper Awards at ACL 2016 and EMNLP 2017, a Facebook Fellowship, a Microsoft Research Women’s Fellowship, and an Outstanding Course Assistant Award from Stanford. Previously, she received her B.E. in Computer Science from Tsinghua University.

Date and Time
Monday, March 26, 2018, 12:30pm–1:30pm
Location
Computer Science Small Auditorium (Room 105)
Host
Barbara Engelhardt
