By Sharon Waters
Danqi Chen and Karthik Narasimhan, both assistant professors in computer science, have won National Science Foundation CAREER Awards to further their work in natural language processing and machine learning.
NSF’s CAREER Award supports early-career faculty who have the potential to serve as academic role models in research and education. The awards come with five years of funding and total around $600,000 each.
Both Narasimhan and Chen have broad research interests in natural language processing and machine learning. They are co-directors of the Princeton Natural Language Processing group, along with Sanjeev Arora, the Charles C. Fitzmorris Professor of Computer Science. Narasimhan and Chen are both part of the Princeton Artificial Intelligence and Machine Learning group and affiliated with the Center for Statistics and Machine Learning.
Danqi Chen
Chen’s research is driven by the goal of developing effective and fundamental methods for learning representations of language and knowledge. She also builds practical systems, including question answering, information extraction and conversational agents. Chen’s work focuses on applications of deep neural networks, a key enabling technique for natural language processing.
Chen’s project will develop an alternative to autoregressive language models, which are the current dominant paradigm and are used in tools like ChatGPT. Chen's work will focus on retrieval-based language models, with the goal of reducing training and inference costs while also providing benefits such as better interpretability, adaptability and privacy. To achieve that, she has organized her project into four components: building a general learning framework for these models; improving inference efficiency; devising methods to quickly update and adapt them to unseen and privacy-sensitive domains; and designing effective approaches to use retrieval-based language models on downstream tasks.
Before joining Princeton in 2019, Chen received her Ph.D. from Stanford University, where she worked in the Stanford Natural Language Processing Group and was a visiting scientist at Facebook AI Research. She received a Sloan Research Fellowship in 2022. Her other awards include Samsung AI Researcher of the Year, a Lawrence Keyes, Jr./Emerson Electric Co. Faculty Advancement Award, and faculty research awards from Google, Meta, Amazon, Adobe and Salesforce. In 2020 and 2022, Chen won commendations for outstanding teaching from the School of Engineering and Applied Science.
Karthik Narasimhan
Narasimhan’s research spans the areas of natural language processing and reinforcement learning, with the goal of building intelligent agents that learn to operate in the world both through their own experience (doing things) and by leveraging existing human knowledge (reading about things). His focus is on developing autonomous systems that acquire an understanding of language through interaction with their environment and use textual knowledge to drive their decision-making.
His project will develop models for language-guided machine learning that acquire knowledge from textual sources and incorporate that knowledge into a better and more flexible learning process. The project aims to produce robust AI models that require less human effort to train, open new research directions for machine learning with language guidance, and enable better real-world human-machine collaboration.
Before joining Princeton in 2018, Narasimhan received his Ph.D. from the Massachusetts Institute of Technology and spent a year as a visiting research scientist at OpenAI, where he contributed to the first GPT language model. His work has been recognized with a 2023 NAE Grainger Foundation grant, a 2022 Google Research Scholar Award, and a 2019 Amazon Research Award. From Princeton he has received the 2022 Howard B. Wentz, Jr. Junior Faculty Award, a Schmidt DataX Fund Award and a Project X Award. In 2018, he won a commendation for outstanding teaching from the School of Engineering and Applied Science.