Show & Tell: Multimedia Image Interpretation


This task combines image understanding, natural language processing, and speech recognition. Currently, an image analyst manually prepares natural language collateral material related to an image, and annotations on the image are also prepared manually. The goal of this image annotation project is to use advances in speech technology and in natural language (NL) and image understanding (IU) research to prepare image-related collateral more efficiently.

We have been developing a prototype system consisting of two phases. Figure 1 illustrates the system's functionality. Phase 1 consists of automatically interpreting and indexing images. In this phase, an analyst views an image and describes it in spoken language, pointing from time to time to indicate objects or regions in the image. A state-of-the-art speech recognition system transcribes the input and synchronizes the speech with the mouse input. The text, along with the mouse input, is used to identify all entities that the analyst has labelled. The system output is an NL collateral description and an annotated image. The annotated image consists of (i) a semantic representation of the text and (ii) the locations of the objects and regions identified in the image. Information is represented so that spatial, temporal, and other contextual reasoning is enabled.
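
The sketch below is a minimal illustration, in Python, of how time-stamped transcript words could be paired with logged mouse events to label the entities the analyst names. The data structures (Word, MouseEvent, Annotation) and the time-window heuristic are assumptions made for this example, not the actual system's design.

    # Illustrative only: Word, MouseEvent, and Annotation are hypothetical
    # stand-ins for the recognizer's time-stamped output and the interface's
    # mouse log; they are not the actual system's data structures.
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Word:
        text: str
        time: float                    # seconds from the start of the utterance

    @dataclass
    class MouseEvent:
        x: int
        y: int
        time: float

    @dataclass
    class Annotation:
        label: str                     # entity name assigned by the analyst, e.g. "RD-1"
        location: Optional[Tuple[int, int]]   # image coordinates, if a click was found

    def align(words: List[Word], clicks: List[MouseEvent],
              entity_labels: List[str], window: float = 2.0) -> List[Annotation]:
        """Pair each labelled entity with the mouse event closest in time
        to the word that names it, within a tolerance window (seconds)."""
        annotations = []
        for w in words:
            if w.text not in entity_labels:
                continue
            nearest = min(clicks, key=lambda c: abs(c.time - w.time), default=None)
            location = None
            if nearest is not None and abs(nearest.time - w.time) <= window:
                location = (nearest.x, nearest.y)
            annotations.append(Annotation(label=w.text, location=location))
        return annotations

    # "We will call this RD-1", accompanied by a click on the road.
    words = [Word("call", 4.1), Word("this", 4.3), Word("RD-1", 4.6)]
    clicks = [MouseEvent(312, 208, 4.5)]
    print(align(words, clicks, entity_labels=["RD-1"]))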

In the second phase, we provide point-and-click querying synchronized with speech on the annotated images. For example, the query "show all man-made structures to the west of this <click> forest" would cause the appropriate areas in the image to be highlighted; each of these could be further queried for corresponding textual information.
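
As a rough illustration of this kind of query, the Python sketch below filters hypothetical annotated objects with a simple "west of" test on image coordinates. The object names, categories, and coordinate convention are assumptions for the example, not part of the deployed system.

    # Illustrative only: object names, categories, and the coordinate
    # convention (x grows eastward, so "west of" means a smaller x) are
    # assumptions made for this example.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ImageObject:
        label: str
        category: str                  # e.g. "building", "road", "forest"
        centroid: Tuple[float, float]  # approximate location in image coordinates

    MAN_MADE = {"building", "shed", "tower", "road"}

    def west_of(objects: List[ImageObject], reference: ImageObject) -> List[ImageObject]:
        """Return the man-made objects whose centroids lie west of the clicked reference."""
        return [o for o in objects
                if o.category in MAN_MADE and o.centroid[0] < reference.centroid[0]]

    forest = ImageObject("F-1", "forest", (420.0, 300.0))
    scene = [ImageObject("VSS-3", "shed", (150.0, 310.0)),
             ImageObject("Baldy Tower", "tower", (180.0, 280.0)),
             ImageObject("RD-6", "road", (500.0, 250.0)),
             forest]
    for hit in west_of(scene, forest):
        print(hit.label)               # prints VSS-3 and Baldy Tower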

Figure 1: Various sources of collateral information: natural language and deictic input (the cross on the building).

Figures 2 and 3 show a sample aerial image and its corresponding speech input. The task is to label all objects explicitly named by the analyst in the speech input.

Figure 2: Synthetically produced aerial image (from the RADIUS model-board images)

    This image depicts the Sun-yab Military Base and 
    Intelligence headquarters which is located in the 
    southeastern portion of the country approximately 
    25 miles from Nickel City. The size of the facility 
    indicates that it can house several battalions.
    The base is divided up into several functional areas.

There is an S-shaped road (series of mouse clicks) running through the entire facility. We will call this RD-1. The main gate to the facility is located on the westernmost point of this road. In this image, there appears to be a convoy proceeding along this road.

Area A (use mouse to outline area) is in the center of the facility. It looks like an athletic complex. It consists of an oval track and various other buildings surrounded by foliage areas. It is bounded by roads on the western and northern perimeters. These will be designated as RD-2 and RD-3 respectively.

Area B (click) is the vehicle storage area. It is situated to the southwest of the track and is bounded by RD-1 on the northern border and by roads RD-4 and RD-5 on the western and eastern borders, respectively. It consists of ten wooden vehicle storage sheds which we will designate as VSS 1 through 10. The western part of this area contains additional buildings. The multi-story building must be Baldy Tower (click).

Figure 3: Speech input accompanying the aerial image shown in previous figure

We make several assumptions in specifying the task and our proposed solution. The first is that, from the perspective of an image analyst, this approach constitutes a healthy compromise between (i) tedious manual annotation, even when tools such as snakes are provided, and (ii) completely automated (image-based) interpretation. Since the task involves co-referencing image areas with textual descriptions, our system uses the text for dual purposes: for co-referencing as well as for assisting image interpretation.

The second assumption concerns the use of preconstructed geometric site models. These have been used effectively in the RADIUS community for registering new images of a known site and subsequently for change detection. Figure 4 illustrates the manner in which text accompanied by mouse clicks is interpreted both in the presence and in the absence of 3D site models.

Figure 4: The role of site models in semi-automated image interpretation.
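
The Python sketch below is only a rough illustration of this distinction: it assumes, hypothetically, that a registered site model can be reduced to a mapping from object names to projected bounding boxes, so a click either resolves to a known model object or falls back to raw image coordinates. The real RADIUS site models are richer 3D descriptions.

    # Illustrative only: here a "site model" is simplified to a mapping from
    # object names to projected bounding boxes in the registered image.
    from typing import Dict, Optional, Tuple

    Box = Tuple[int, int, int, int]    # (x0, y0, x1, y1) in image coordinates

    def resolve_click(click: Tuple[int, int],
                      site_model: Optional[Dict[str, Box]]) -> str:
        """Attach a deictic click either to a site-model object (if a registered
        model is available) or to bare image coordinates."""
        x, y = click
        if site_model:
            for name, (x0, y0, x1, y1) in site_model.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    return f"site-model object '{name}'"
            return f"unmodelled location at image coordinates ({x}, {y})"
        return f"image coordinates ({x}, {y}) only"

    model = {"Baldy Tower": (170, 260, 200, 300)}
    print(resolve_click((180, 280), model))   # resolves to the known model object
    print(resolve_click((180, 280), None))    # falls back to image coordinates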

Finally, at this point we assume that an approximate shape representation for objects is sufficient for reasoning purposes. We represent objects such as buildings or areas by simple bounding boxes with orientation; polylines are used for roads and rivers. Our system does allow more exact representations for display purposes.
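
A minimal Python sketch of these approximate shape representations follows; the field names and example coordinates are illustrative only and do not reflect the system's internal schema.

    # Illustrative only: the field names and coordinates below are examples.
    from dataclasses import dataclass
    from typing import List, Tuple
    import math

    @dataclass
    class OrientedBox:                 # buildings, storage areas, etc.
        center: Tuple[float, float]
        width: float
        height: float
        angle_deg: float               # orientation relative to the image x-axis

    @dataclass
    class Polyline:                    # roads, rivers
        points: List[Tuple[float, float]]

        def length(self) -> float:
            return sum(math.dist(a, b) for a, b in zip(self.points, self.points[1:]))

    baldy_tower = OrientedBox(center=(185.0, 280.0), width=20.0, height=35.0, angle_deg=12.0)
    rd_1 = Polyline([(40.0, 310.0), (180.0, 290.0), (330.0, 330.0), (480.0, 300.0)])
    print(f"RD-1 approximate length: {rd_1.length():.1f} pixels")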