Efficient Matching for Recognition and Retrieval

Local image features have emerged as a powerful way to describe images of objects and scenes. Their stability under variable image conditions is critical for success in a wide range of recognition and retrieval applications. However, comparing images represented by their collections of local features is challenging, since each set may vary in cardinality and its elements lack a meaningful ordering. Existing methods compare feature sets by searching for explicit correspondences between their elements, which is too computationally expensive in many realistic settings.

I will present the pyramid match, which efficiently forms an implicit partial matching between two sets of feature vectors. The matching has linear time complexity, naturally forms a Mercer kernel, and is robust to clutter or outlier features, a critical advantage for handling images with variable backgrounds, occlusions, and viewpoint changes. I will show how this dramatic gain in efficiency enables accurate and flexible image comparisons on large-scale data sets and removes the need to artificially limit the size of images' local descriptions. As a result, we can now access a new class of applications that rely on the analysis of rich visual data, such as place or object recognition and meta-data labeling. I will provide results on several important vision tasks, including our algorithm's state-of-the-art recognition performance on a challenging data set of object categories.
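
For readers less familiar with the idea, the following is a rough, illustrative sketch (not the speaker's implementation) of a pyramid-match-style score in Python: both feature sets are binned into multi-resolution histograms, histogram intersections are computed at each grid resolution, and matches that first appear at coarser levels receive geometrically decreasing weight. The function name pyramid_match, the parameters num_levels and feature_range, and the assumption of low-dimensional features rescaled to [0, 1] are all illustrative choices for this sketch, not details from the talk.

from collections import Counter
import numpy as np

def pyramid_match(X, Y, num_levels=5, feature_range=(0.0, 1.0)):
    """Illustrative pyramid-match-style partial matching score.

    X, Y: (n, d) and (m, d) arrays of local features, assumed to be
    low-dimensional and rescaled to lie within `feature_range`.
    Bins both sets into grids of increasing coarseness, counts the
    histogram intersection at each level, and credits matches that
    first appear at a level with a weight that halves as bins coarsen.
    """
    lo, hi = feature_range
    score, prev_intersection = 0.0, 0.0
    for level in range(num_levels):                      # level 0 = finest grid
        bins_per_dim = 2 ** (num_levels - 1 - level)
        cell = (hi - lo) / bins_per_dim

        def histogram(Z):
            # Quantize each feature to its grid cell and count occupancy.
            idx = np.clip(np.floor((Z - lo) / cell).astype(int),
                          0, bins_per_dim - 1)
            return Counter(map(tuple, idx))

        hx, hy = histogram(X), histogram(Y)
        # Matches implied by co-occupied cells at this resolution.
        intersection = float(sum(min(hx[k], hy[k]) for k in hx.keys() & hy.keys()))
        new_matches = intersection - prev_intersection   # not matched at any finer level
        score += new_matches / (2 ** level)              # coarser matches count for less
        prev_intersection = intersection
    return score

# Example: two small sets of 2-D features with partial overlap.
rng = np.random.default_rng(0)
A = rng.random((40, 2))
B = np.vstack([A[:25] + 0.01, rng.random((30, 2))])      # shared points plus clutter
print(pyramid_match(A, B))

Because each feature is only hashed into a fixed number of grids, the cost grows linearly with the number of features rather than with the number of candidate correspondences, which is the efficiency property the abstract refers to.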

Date and Time
Tuesday March 28, 2006 4:00pm - 5:30pm
Location
Fine Hall 101
Event Type
Speaker
Kristen Grauman, MIT
Host
Thomas Funkhouser

