In this talk, I will discuss recent work on using linear programming relaxations to perform approximate inference. By solving the LP relaxations in the dual, we obtain efficient message-passing algorithms that, when the relaxations are tight, can provably find the most likely (MAP) configuration.
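For concreteness, the relaxation in question is typically the pairwise local-polytope LP; the following is a minimal sketch, assuming a pairwise Markov random field with node potentials \(\theta_i\) and edge potentials \(\theta_{ij}\):

\[
\max_{\mu \ge 0} \;\; \sum_{i} \sum_{x_i} \theta_i(x_i)\,\mu_i(x_i) \;+\; \sum_{(i,j) \in E} \sum_{x_i, x_j} \theta_{ij}(x_i, x_j)\,\mu_{ij}(x_i, x_j)
\]
\[
\text{subject to} \quad \sum_{x_i} \mu_i(x_i) = 1, \qquad \sum_{x_j} \mu_{ij}(x_i, x_j) = \mu_i(x_i) \quad \text{for all } (i,j) \in E,\ x_i .
\]

When the optimal \(\mu\) is integral, the relaxation is tight and the corresponding assignment is the MAP configuration; the message-passing updates mentioned above operate on the dual of such an LP.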
Our algorithms succeed in finding the MAP configuration on protein side-chain placement, protein design, and stereo vision problems. More broadly, this talk will highlight emerging connections between machine learning, polyhedral combinatorics, and combinatorial optimization.