[[{"fid":"846","view_mode":"embedded_left","fields":{"format":"embedded_left","field_file_image_alt_text[und][0][value]":"Photo of Felix Heide","field_file_image_title_text[und][0][value]":"Felix Heide","field_file_caption_credit[und][0][value]":"","field_file_caption_credit[und][0][format]":"full_html"},"type":"media","field_deltas":{"1":{"format":"embedded_left","field_file_image_alt_text[und][0][value]":"Photo of Felix Heide","field_file_image_title_text[und][0][value]":"Felix Heide","field_file_caption_credit[und][0][value]":"","field_file_caption_credit[und][0][format]":"full_html"}},"attributes":{"alt":"Photo of Felix Heide","title":"Felix Heide","height":348,"width":250,"class":"media-element file-embedded-left","data-delta":"1"},"link_text":false}]]Imaging has become an essential part of how we communicate with each other, how autonomous agents sense the world and act independently, and how we research chemical reactions and biological processes. Today's imaging and computer vision systems, however, often fail in critical scenarios, for example in low light or in fog. This is due to ambiguity in the captured images, introduced partly by imperfect capture systems, such as cellphone optics and sensors, and partly present in the signal before measuring, such as photon shot noise. This ambiguity makes imaging with conventional cameras challenging, e.g. low-light cellphone imaging, and it makes high-level computer vision tasks difficult, such as scene segmentation and understanding.
In this talk, I will present several examples of algorithms that computationally resolve this ambiguity and make sensing and vision systems robust. These methods rely on three key ingredients: accurate probabilistic forward models, learned priors, and efficient large-scale optimization methods. In particular, I will show how to achieve better low-light imaging using cellphones (beating Google's HDR+) and how to classify images at 3 lux (substantially outperforming very deep convolutional networks, such as the Inception-v4 architecture). Using a similar methodology, I will discuss ways to miniaturize existing camera systems by designing ultra-thin, focus-tunable diffractive optics. Finally, I will present exotic imaging modalities that enable new applications at the forefront of vision and imaging, such as seeing through scattering media and imaging objects outside the direct line of sight.
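In many such systems, the three ingredients above combine into an inverse problem: given a probabilistic forward model b ≈ Ax and a prior R(x), the latent image is reconstructed as argmin_x ½||Ax − b||² + R(x). As an illustration of how the pieces fit together (not code from the talk), the minimal Python sketch below runs a plug-and-play ADMM loop for deblurring: a closed-form Fourier-domain step enforces the forward model, and a simple Gaussian filter stands in for a learned denoising prior. The function names, parameters, and the choice of ADMM are all assumptions made for this sketch.

```python
# Minimal plug-and-play ADMM sketch for image deconvolution:
# recover x from b = k * x + noise by alternating a data-fit step
# (closed form in the Fourier domain for a circular blur) with a
# denoiser standing in for a learned prior. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm_deconv(b, k, rho=0.1, iters=30):
    """Deblur image b given a centered blur kernel k (same shape as b)."""
    K = np.fft.fft2(np.fft.ifftshift(k))   # kernel transfer function
    B = np.fft.fft2(b)
    x, z, u = b.copy(), b.copy(), np.zeros_like(b)
    for _ in range(iters):
        # x-update: quadratic data term, solved exactly via the FFT
        rhs = np.conj(K) * B + rho * np.fft.fft2(z - u)
        x = np.real(np.fft.ifft2(rhs / (np.abs(K) ** 2 + rho)))
        # z-update: proximal step on the prior; a learned denoiser
        # would go here (a Gaussian filter is a crude stand-in)
        z = gaussian_filter(x + u, sigma=1.0)
        # u-update: dual ascent on the splitting constraint x = z
        u = u + x - z
    return x

if __name__ == "__main__":
    # Toy example: blur a random "scene", add noise, and restore it.
    rng = np.random.default_rng(0)
    x_true = rng.random((64, 64))
    k = np.zeros((64, 64))
    k[30:35, 30:35] = 1.0 / 25.0            # centered 5x5 box blur
    b = np.real(np.fft.ifft2(np.fft.fft2(x_true) *
                             np.fft.fft2(np.fft.ifftshift(k))))
    b += 0.01 * rng.standard_normal(b.shape)
    x_hat = pnp_admm_deconv(b, k)
    print("relative error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Swapping the Gaussian filter for a trained denoising network turns this generic loop into the kind of learned-prior reconstruction the abstract alludes to; the forward model A changes per application (blur, sensor response, scattering) while the solver structure stays the same.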
Felix Heide is a postdoctoral researcher working with Professor Gordon Wetzstein in the Department of Electrical Engineering at Stanford University. He is interested in the theory and application of computational imaging and vision systems. Researching imaging systems end-to-end, Felix's work lies at the intersection of optics, machine learning, optimization, computer graphics, and computer vision. Felix has co-authored over 25 publications and filed 3 patents. He received his Ph.D. in December 2016 from the University of British Columbia under the advisement of Professor Wolfgang Heidrich. His doctoral dissertation focuses on optimization for computational imaging and won the Alain Fournier Ph.D. Dissertation Award.
Capturing the “Invisible”: Computational Imaging for Robust Sensing and Vision
Date and Time
Tuesday, April 4, 2017, 12:30pm - 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Event Type
Speaker
Host
Prof. Szymon Rusinkiewicz