Much recent success in machine learning has come from optimizing simple feedforward procedures, such as neural networks, using gradients. Surprisingly, many complex procedures, such as message passing, filtering, inference, and even optimization itself, can also be meaningfully differentiated through. Composing these procedures lets us build sophisticated models that generalize existing methods while retaining their good properties. We'll show applications to chemical design, gradient-based tuning of optimization procedures, and training procedures that don't require cross-validation.
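To make "differentiating through optimization" concrete, here is a minimal sketch, not taken from the talk, of the general idea: unroll a few steps of gradient descent on an inner least-squares problem and differentiate the final training loss with respect to a hyperparameter (here, the learning rate). It assumes JAX rather than whatever tooling the speaker uses; the function names and data are purely illustrative.

```python
# Sketch: a hyperparameter gradient obtained by differentiating
# through an unrolled inner optimization loop.
import jax
import jax.numpy as jnp

def inner_loss(w, x, y):
    # Simple least-squares objective for the inner problem.
    return jnp.mean((x @ w - y) ** 2)

def train(learning_rate, x, y, num_steps=20):
    # Unrolled gradient descent: every step is differentiable,
    # so the whole training procedure is differentiable too.
    w = jnp.zeros(x.shape[1])
    grad_fn = jax.grad(inner_loss)
    for _ in range(num_steps):
        w = w - learning_rate * grad_fn(w, x, y)
    return inner_loss(w, x, y)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (50, 3))
y = x @ jnp.array([1.0, -2.0, 0.5])

# Gradient of the final training loss with respect to the learning rate.
d_loss_d_lr = jax.grad(train)(0.1, x, y)
print(d_loss_d_lr)
```

The same pattern extends to other hyperparameters (regularization strength, initialization, even the training data), which is what makes gradient-based tuning of optimization procedures possible.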
David Duvenaud is a postdoc in the Harvard Intelligent Probabilistic Systems group, working with Prof. Ryan Adams on model-based optimization, synthetic chemistry, and neural networks. He did his Ph.D. at the University of Cambridge with Carl Rasmussen and Zoubin Ghahramani. Before that, he worked on machine vision with Kevin Murphy at the University of British Columbia, and later at Google Research. David also co-founded Invenia, an energy forecasting and trading firm.