Highlight

Better Object Recognition by Using Contextual Knowledge

Achievement/Results

Carolina Galleguillos, Brian McFee, Serge Belongie and Gert Lanckriet at the University of California, San Diego have developed a new computer vision algorithm that improves the recognition of objects in real-world images by efficiently integrating different sources of contextual cues.

Their novel model for object recognition supplements appearance features (derived from the object being recognized) with contextual information (derived from surrounding regions or objects) in order to improve recognition accuracy. Contextual cues are incorporated from three levels of interaction: pixel, region, and object. Pixel interactions capture low-level feature interactions between spatially adjacent objects. Region interactions capture higher-level information from the region surrounding an object. Finally, object interactions capture high-level information from objects in the scene, which may be separated by large distances. The first figure shows an example of the different contextual levels used in the system. Pixel interactions are captured by the area immediately surrounding the bird. Region interactions are captured by expanding the window to include surrounding objects, such as water and road. Object interactions are captured by the co-occurrence of the road and water objects in the scene.
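To make the three levels of interaction concrete, below is a minimal sketch (in Python) of how per-label appearance scores might be fused with pixel-, region-, and object-level contextual scores. This is an illustration under assumed conventions, not the authors' implementation: the label set, scores, weights, and the combine_scores function are all hypothetical.

```python
import numpy as np

# Hypothetical illustration: fusing appearance scores with three levels of
# contextual cues (pixel, region, object) via a weighted linear combination
# followed by a softmax. All numbers and names below are illustrative.

LABELS = ["bird", "water", "road", "boat", "chair"]

def combine_scores(appearance, pixel_ctx, region_ctx, object_ctx,
                   weights=(1.0, 0.5, 0.5, 0.5)):
    """Fuse per-label scores from appearance and three contextual levels.

    Each argument is a dict mapping label -> score (higher = more likely).
    Returns a dict of normalized confidences over LABELS.
    """
    w_app, w_pix, w_reg, w_obj = weights
    fused = {}
    for label in LABELS:
        fused[label] = (w_app * appearance.get(label, 0.0)
                        + w_pix * pixel_ctx.get(label, 0.0)
                        + w_reg * region_ctx.get(label, 0.0)
                        + w_obj * object_ctx.get(label, 0.0))
    # Softmax normalization so the confidences sum to 1.
    scores = np.array([fused[l] for l in LABELS])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return dict(zip(LABELS, probs))

if __name__ == "__main__":
    # Toy numbers: appearance alone mistakes the segment for "road";
    # each added level of context shifts confidence toward the true label.
    appearance = {"road": 2.0, "chair": 1.2, "boat": 1.0}
    pixel_ctx  = {"boat": 1.5, "chair": 0.8}   # adjacent-pixel cues
    region_ctx = {"boat": 1.0, "chair": 1.0}   # surrounding-region cues
    object_ctx = {"chair": 2.5}                # co-occurring objects in scene

    no_context   = combine_scores(appearance, {}, {}, {})
    with_context = combine_scores(appearance, pixel_ctx, region_ctx, object_ctx)
    print("appearance only :", max(no_context, key=no_context.get))
    print("with all context:", max(with_context, key=with_context.get))
```

Run as a script, the toy example prints "road" when only appearance is used and "chair" once all three contextual levels are added, mirroring the qualitative behavior described above.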

The researchers evaluated the relative contribution of local contextual interactions for the task of multi-object localization over different data sets and object classes. The second figure shows an example where a chair was initially labeled "road" when only appearance features were used. By incorporating pixel and region interactions, the system changes the object label to "boat" and adjusts the confidences of the remaining labels. Finally, by adding object interactions the system arrives at the correct label, "chair".

Carolina Galleguillos is an NSF IGERT (Integrative Graduate Education and Research Traineeship) fellow in the Vision and Learning in Humans and Machines Traineeship program at UCSD, run by Professors Virginia de Sa and Garrison Cottrell. This work is an excellent example of using real-world contextual knowledge and sophisticated machine learning methods to improve computer vision algorithms.

Address Goals

Creating better computer vision systems by giving them more human-like contextual knowledge, and the ability to use it, has broad impact in a variety of areas. Carolina is a promising Latina computer scientist.