Daphne Koller received her BSc and MSc degrees from the Hebrew University of Jerusalem, Israel, and her PhD from Stanford University in 1993. After a two-year postdoc at Berkeley, she returned to Stanford, where she is now a Professor in the Computer Science Department. Her main research interest is in developing and using machine learning and probabilistic methods to model and analyze complex domains. Her current research projects include models in computational biology and methods for extracting semantic meaning from sensor data of the physical world. Daphne Koller is the author of over 150 refereed publications, which have appeared in venues spanning Science, Nature Genetics, the journal Games and Economic Behavior, and a variety of conferences and journals in AI and Computer Science. She has received 9 best paper or best student paper awards at conferences whose areas span computer vision (ECCV), artificial intelligence (IJCAI), natural language (EMNLP), machine learning (NIPS and UAI), and computational biology (ISMB). She has given keynote talks at more than 10 major conferences spanning a variety of areas. She was the program co-chair of the NIPS 2007 and UAI 2001 conferences, and has served on numerous program committees and as associate editor of the Journal of Artificial Intelligence Research and of the Machine Learning Journal. She was awarded the Arthur Samuel Thesis Award in 1994, the Sloan Foundation Faculty Fellowship in 1996, the ONR Young Investigator Award in 1998, the Presidential Early Career Award for Scientists and Engineers (PECASE) in 1999, the IJCAI Computers and Thought Award in 2001, the Cox Medal for excellence in fostering undergraduate research at Stanford in 2003, the MacArthur Foundation Fellowship in 2004, the ACM/Infosys Award in 2008, and the Rajeev Motwani Endowed Chair in 2010.
Probabilistic Models for Holistic Scene Understanding
Over recent years, computer vision has made great strides towards annotating parts of an image with symbolic labels, such as object categories or segment types. However, we are still far from the ultimate goal of providing a semantic description of an image, such as "a man, walking a dog on a sidewalk, carrying a backpack". In this talk, I will describe our work in this direction, which uses machine learning to construct richly structured, probabilistic models of multiple scene components. We demonstrate the value of such modeling for improvements in basic tasks such as image segmentation and object detection, as well as for making more semantic distinctions regarding shape and activity. The learning of such expressive models poses new challenges, especially when available training data is limited or only weakly labeled. I will describe novel machine learning methods that can train models using distantly or weakly labeled data, thereby making use of much larger amounts of available data.
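To make the idea of jointly reasoning over multiple scene components concrete, here is a minimal, purely illustrative sketch, not the models described in the talk: two "components" of a toy scene, a segment label and an object category, are scored with made-up unary potentials and a made-up compatibility function, and the joint posterior is computed by brute-force enumeration. All labels, scores, and the compatibility function are invented for illustration only.

```python
# Illustrative toy only: a joint probabilistic model over two "scene components"
# (a segment label and an object detection), scored with hypothetical potentials
# and queried by brute-force enumeration. Not the speaker's actual model.
import itertools
import math

# Hypothetical unary log-potentials from independent per-task classifiers.
unary_segment = {"sidewalk": 0.8, "road": 0.2}           # segment type under the detection box
unary_object = {"person": 0.6, "dog": 0.5, "none": 0.1}  # object category for the box

def compatibility(segment, obj):
    # Hypothetical pairwise term: people and dogs are more plausible on sidewalks.
    if obj in ("person", "dog") and segment == "sidewalk":
        return 0.7
    return 0.0

def joint_score(segment, obj):
    # Joint log-score couples the two components instead of treating them independently.
    return unary_segment[segment] + unary_object[obj] + compatibility(segment, obj)

# Brute-force inference: normalize over all joint assignments.
assignments = list(itertools.product(unary_segment, unary_object))
weights = [math.exp(joint_score(s, o)) for s, o in assignments]
Z = sum(weights)
posterior = {a: w / Z for a, w in zip(assignments, weights)}

for (segment, obj), p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P(segment={segment}, object={obj}) = {p:.3f}")
```

Joint inference favors assignments in which the components agree with each other (for example, a person on a sidewalk), which is the kind of coupling that independent per-task classifiers cannot express.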
Date and Time
Wednesday, October 6, 2010, 4:30pm - 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Event Type
Speaker
Daphne Koller
Host
David Blei
Contributions to and/or sponsorship of any event do not constitute departmental or institutional endorsement of the specific program, speakers, or views presented.