Multiagent Planning with Factored MDPs

Many real-world tasks require multiple agents to coordinate in order to achieve a common goal. Examples include multi-robot systems, network routing, and supply chain management. Unfortunately, these large-scale problems are often quite complex, involving many states and multiple decision makers. Factored Markov decision processes (MDPs) allow us to represent a complex transition model compactly using a dynamic Bayesian network, in which each state variable depends directly on only a small number of other variables. This representation often yields an exponential reduction in representation size, but the complexity of exact solution algorithms for such MDPs still grows exponentially in the number of variables describing the state of the system and in the number of agents.
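
To make the compactness claim concrete, here is a minimal sketch (ours, not from the talk) of a factored transition model in Python. The chain-structured dependencies, the probability values, and the names parents, cpts, and transition_prob are all illustrative assumptions: each binary state variable depends on at most two variables, so the model stores a handful of small tables instead of one table over all joint states.

```python
import itertools

n_vars = 10  # binary state variables; a flat model needs 2**10 rows per action

# Hypothetical DBN structure: variable i depends only on variables i-1 and i.
parents = {i: [j for j in (i - 1, i) if j >= 0] for i in range(n_vars)}

# One small table P(X_i' = 1 | parents of X_i) per variable (made-up values),
# instead of a single table over the entire joint state space.
cpts = {
    i: {assign: 0.9 if sum(assign) else 0.1
        for assign in itertools.product([0, 1], repeat=len(parents[i]))}
    for i in range(n_vars)
}

def transition_prob(state, next_state):
    """P(next_state | state) factors into a product of per-variable terms."""
    p = 1.0
    for i in range(n_vars):
        pa = tuple(state[j] for j in parents[i])
        p_on = cpts[i][pa]
        p *= p_on if next_state[i] else 1.0 - p_on
    return p

flat_size = 2 ** n_vars * 2 ** n_vars              # explicit matrix entries
factored_size = sum(2 ** len(parents[i]) for i in range(n_vars))
print(flat_size, "entries flat vs.", factored_size, "factored")
```

For ten binary variables, the explicit transition matrix has over a million entries while this factored form stores 38, and the gap widens exponentially as variables are added.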

This talk presents a framework for approximate planning that can exploit structure in a factored MDP to solve problems with many trillions of states and actions. The talk will focus on three key elements:

  • Factored Linear Programs -- A novel LP decomposition technique, using ideas from inference in Bayesian networks, which can yield exponential reductions in planning time.
  • Distributed Coordination -- A distributed multiagent decision-making algorithm in which the coordination structure arises naturally from the system dynamics (see the sketch after this list).
  • Generalization in Relational MDPs -- A method for learning general solutions from solved tasks, allowing us to act in new scenarios without replanning.
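
The distributed coordination idea can be illustrated with a small sketch (ours, not the talk's implementation; the agent graph and payoff tables are made up): each local payoff function involves only a few agents, and a jointly optimal action is found by max-sum variable elimination, removing one agent at a time, so no agent ever enumerates the exponentially large joint action space.

```python
import itertools

ACTIONS = [0, 1]  # binary action per agent, for simplicity

# Hypothetical local payoffs: (scope of agent ids, table over their joint actions).
factors = [
    ((0, 1), {a: 1.0 * (a[0] == a[1]) for a in itertools.product(ACTIONS, repeat=2)}),
    ((1, 2), {a: 1.0 * (a[0] != a[1]) for a in itertools.product(ACTIONS, repeat=2)}),
    ((2, 3), {a: 2.0 * (a[0] == a[1]) for a in itertools.product(ACTIONS, repeat=2)}),
]

def coordinate(factors, order):
    """Pick a joint action maximizing the sum of local payoffs
    by eliminating agents one at a time (max-sum variable elimination)."""
    recipes = {}  # agent -> (scope of remaining agents, best action per context)
    for agent in order:
        touching = [f for f in factors if agent in f[0]]
        factors = [f for f in factors if agent not in f[0]]
        scope = tuple(sorted({v for s, _ in touching for v in s if v != agent}))
        table, argmax = {}, {}
        for ctx in itertools.product(ACTIONS, repeat=len(scope)):
            assign = dict(zip(scope, ctx))
            def local_sum(a):
                assign[agent] = a
                return sum(t[tuple(assign[v] for v in s)] for s, t in touching)
            best = max(ACTIONS, key=local_sum)
            argmax[ctx], table[ctx] = best, local_sum(best)
        recipes[agent] = (scope, argmax)
        factors.append((scope, table))  # new factor no longer mentions `agent`
    # Second pass: walk back in reverse elimination order to recover actions.
    joint = {}
    for agent in reversed(order):
        scope, argmax = recipes[agent]
        joint[agent] = argmax[tuple(joint[v] for v in scope)]
    return joint

print(coordinate(factors, order=[0, 1, 2, 3]))  # -> {3: 0, 2: 0, 1: 1, 0: 1}
```

Each elimination step touches only the factors mentioning that agent, so the cost depends on the connectivity of the coordination structure rather than on the total number of agents.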

We demonstrate our approach on the task of multiagent coordination in a real strategic computer war game.

Date and Time
Wednesday, March 12, 2003, 4:00pm - 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Speaker
Carlos Guestrin, Stanford University
Host
Robert Schapire

