
CLASS Demonstrators developed by INRIA

This page collects the demonstrators developed by the LEAR Team at INRIA within Workpackage 5 of the CLASS Project. They showcase work done in Workpackages 2, 3, and 4. Each demonstrator is briefly described below; the titles link directly to the demonstrators.

A complete overview of all CLASS demonstrators can be found here.

Celebrity Spotter

The Celebrity Spotter demonstrator presents automatically determined links between faces detected in news images and names detected in the accompanying text. When the mouse is positioned over a face, the associated name in the text is highlighted, and vice versa. Results are shown on the Yahoo! News database collected by INRIA, which contains news images together with the (short) corresponding story texts.
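
As a rough illustration of this kind of face-name association (not the demonstrator's actual method), the following Python sketch assigns each face detected in an image to at most one name detected in its caption, maximizing cosine similarity against an assumed per-name appearance model:

import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_faces_to_names(face_descs, name_models):
    # face_descs: (F, D) array, one descriptor per face detected in the image.
    # name_models: dict mapping each name detected in the caption to an
    # assumed (D,) mean appearance descriptor for that person.
    names = list(name_models)
    models = np.stack([name_models[n] for n in names])  # (N, D)
    # Cosine similarity between every face and every name model.
    sims = (face_descs @ models.T) / (
        np.linalg.norm(face_descs, axis=1, keepdims=True)
        * np.linalg.norm(models, axis=1))
    # Hungarian algorithm: maximize total similarity, one name per face.
    rows, cols = linear_sum_assignment(-sims)
    return [(f, names[n]) for f, n in zip(rows, cols)]

The resulting (face, name) pairs are all the interface needs to drive the mouse-over highlighting in both directions.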

Face Finder

A demonstration of a sorting mechanism applied to faces found in news images. The goal is to return all faces of a given person from a database of captioned news images, using nothing but the captions to infer which faces belong to the queried person.
First, all images whose caption mentions the queried person are selected from the database. Then a classifier is learned to separate the faces found in these images from the other faces in the database. The classifier's confidence on the faces returned by the text query is then used to rank them.
This new version demonstrates the improved ranking mechanism presented in our ECCV 2008 paper.
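
The two-stage procedure above can be sketched in a few lines of Python, assuming precomputed face descriptors and a simple data layout (a list of records with a 'caption' string and per-face descriptor arrays); a logistic-regression ranker stands in here for the demonstrator's actual classifier:

import numpy as np
from sklearn.linear_model import LogisticRegression

def rank_faces(query_name, database):
    # database: list of dicts with keys 'caption' (str) and
    # 'faces' (list of (D,) face descriptor arrays) -- assumed layout.
    # Stage 1: text query -- faces from images whose caption mentions the name.
    pos = [f for rec in database if query_name in rec['caption']
           for f in rec['faces']]
    neg = [f for rec in database if query_name not in rec['caption']
           for f in rec['faces']]
    # Stage 2: learn to separate the query faces from all other faces.
    X = np.array(pos + neg)
    y = np.array([1] * len(pos) + [0] * len(neg))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # Rank the text-query faces by the classifier's confidence.
    scores = clf.predict_proba(np.array(pos))[:, 1]
    order = np.argsort(-scores)
    return [(pos[i], scores[i]) for i in order]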

Object & Scene Classes

A demonstration of a generic image-sorting mechanism applied to the results of a keyword-based search in a database of images downloaded from Flickr.com.
First, images annotated with a selected keyword, or combination of keywords, are retrieved from Flickr. As in the Face Finder demo, a classifier is then learned to separate the images found via their annotation from a general "background" set of images. The classifier confidence is then used to rank the images found by their annotation. An unsorted selection of the images annotated with the keyword(s) can also be displayed.
This is an improved version of a similar demonstrator in Deliverable D5.2, and is based on our BMVC 2009 paper.
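
The reranking stage can be sketched similarly, again with assumed inputs (precomputed image descriptors and a generic background set); a linear SVM stands in for the classifier described in the BMVC 2009 paper:

import numpy as np
from sklearn.svm import LinearSVC

def rerank_flickr_results(query_descs, background_descs):
    # query_descs: (Q, D) descriptors of the images annotated with the
    # keyword(s); background_descs: (B, D) descriptors of a generic
    # "background" image set -- both assumed to be precomputed.
    X = np.vstack([query_descs, background_descs])
    y = np.array([1] * len(query_descs) + [0] * len(background_descs))
    clf = LinearSVC(C=1.0).fit(X, y)
    # Rank the keyword-annotated images by their signed distance to the
    # decision boundary: the most confidently classified come first.
    scores = clf.decision_function(query_descs)
    return np.argsort(-scores)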

Image Annotator

A demonstration of automatic image annotation: given a new image, the system assigns relevant keywords to it based on a database of annotated images. In this way a human annotator can quickly select the most relevant keywords from a list proposed by the system. Using the predicted relevance of keywords for images, keyword-based queries can also directly retrieve images that have not been manually annotated.
To compute the relevance of keywords for a new image, the system compares the image to all database images and combines the annotations of the most similar images to predict keyword relevance for the new image.
This demonstrator is based on our ICCV 2009 paper.
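
A minimal nearest-neighbour sketch of this annotation step is given below; the distance, the exponential weighting, and the neighbourhood size K are illustrative assumptions, whereas the ICCV 2009 paper learns such weights from data:

import numpy as np

def predict_keywords(new_desc, db_descs, db_tags, vocab, k=20):
    # new_desc: (D,) descriptor of the new image.
    # db_descs: (N, D) descriptors of the annotated database images.
    # db_tags: list of N keyword sets; vocab: list of all keywords.
    dists = np.linalg.norm(db_descs - new_desc, axis=1)
    nn = np.argsort(dists)[:k]           # the K most similar images
    weights = np.exp(-dists[nn])         # closer neighbours count more
    weights /= weights.sum()
    # Keyword relevance = total weight of the neighbours carrying it.
    rel = {w: sum(wt for i, wt in zip(nn, weights) if w in db_tags[i])
           for w in vocab}
    return sorted(rel.items(), key=lambda kv: -kv[1])

The top of the returned list is what the annotator sees as proposed keywords; the same scores support keyword-based retrieval of images that carry no manual annotation.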


Contact: Bill Triggs (coordinator), Bill.Triggs@imag.fr, phone +33 4 7651 4553
Laboratoire Jean Kuntzmann, 51 rue des Mathématiques, 38402 Saint Martin d'Hères, Grenoble, France