The environment is full of rich sensory information. Our brains parse this input, make sense of a scene, and learn from the resulting representations. The past decade has given rise to computational models that transform sensory inputs into representations useful for complex behaviors such as speech recognition and image classification. These models can improve our understanding of biological sensory systems, and may serve as a test bed for technologies that assist people with sensory impairments, provided that model representations resemble those in the brain. In this talk, I will discuss my research program, which develops methods to compare model representations with those of biological systems and uses the resulting insights to better understand perception and cognition. I will cover experiments in both the auditory and visual domains that bridge neuroscience, cognitive science, and machine learning. By investigating the similarities and differences between computational model representations and those present in biological systems, we can improve current computational models and better explain how the brain builds robust representations for perception and cognition.
Bio: Jenelle Feather is a Flatiron Research Fellow at the Center for Computational Neuroscience (CCN), working with SueYeon Chung and Eero Simoncelli. She received her Ph.D. in 2022 from the Department of Brain and Cognitive Sciences at MIT, working in the Laboratory for Computational Audition with Josh McDermott. During that time, she was a Friends of the McGovern Institute Graduate Fellow, a DOE Computational Science Graduate Fellow, and an affiliate of the Center for Brains, Minds and Machines. Previously, she interned at Google, worked as a research assistant with Nancy Kanwisher, and received undergraduate degrees in physics and in brain and cognitive sciences from MIT.
Coffee and refreshments will be available outside A32 before the seminar.