Show & Tell: Multimedia Image Interpretation

Processing Speech Input

Several language issues must be dealt with when processing the speech input accompanying the images. In general, the language here makes the visual semantics more explicit than in PICTION, where much of the information is implicit and requires very deep semantic understanding to recover. Caption writers tend to go out of their way to provide novel and descriptive methods of identification, so as to avoid repetitive captions. Image analysts, on the other hand, are not concerned with writing (or speaking) style and are interested in conveying the required information in the most direct and unambiguous manner. Several domain-specific issues must also be addressed.
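To make the contrast concrete, the direct, formulaic identifications that analysts favor can often be captured with a handful of surface patterns. The following is a minimal sketch of this idea; the pattern set and function name are illustrative assumptions, not the system's actual grammar:

    import re

    # A small set of surface patterns for direct annotation utterances.
    # These patterns are assumptions for illustration, not the real grammar.
    PATTERNS = [
        re.compile(r"^this (?:is|must be) (?:the )?(?P<label>.+)$", re.IGNORECASE),
    ]

    def extract_label(utterance):
        """Return the object label named in a direct identification, or None."""
        text = utterance.strip().rstrip(".")
        for pattern in PATTERNS:
            match = pattern.match(text)
            if match:
                return match.group("label")
        return None

    print(extract_label("This is the Baldy Tower"))           # Baldy Tower
    print(extract_label("This must be the Ellicott Forest"))  # Ellicott Forest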

Image Interpretation

This domain differs from the PICTION domain in several respects. First, there are several object classes (buildings, roads, foliage areas) that must be dealt with. Second, additional collateral is provided via the mouse clicks. However, the interpretation task itself is more challenging, since there are numerous objects to be labeled. At this point we are using a combination of tools to interpret the image.

The interpretation of mouse clicks during the annotation phase also poses a new problem. When the analyst clicks on a point in the image and says "This is the Baldy Tower" or "This must be the Ellicott Forest", the referent of the mouse click is the image area corresponding to the object of interest. Since the mouse click may be a single point, it is necessary to determine how best to use this information. If a texture segmentation has already been performed, the click can be used to isolate the object to a particular segment.
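As a rough illustration of this use of segmentation, the sketch below resolves a single click to the full segment containing it, given a precomputed per-pixel label map. The label map and click coordinates are assumptions for illustration:

    import numpy as np

    def segment_at_click(label_map, x, y):
        """Boolean mask of the segment containing the clicked pixel (x, y)."""
        return label_map == label_map[y, x]   # row index is y, column is x

    # Toy 4x4 label map with two texture segments (ids 0 and 1).
    labels = np.array([[0, 0, 1, 1],
                       [0, 0, 1, 1],
                       [0, 0, 0, 1],
                       [0, 0, 0, 0]])
    mask = segment_at_click(labels, x=3, y=0)   # click in the upper-right region
    print(mask.sum(), "pixels in the clicked segment")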

Query and Retrieval

The processing of spatial queries is similar to the processing of collateral text. A query such as "Highlight buildings along the river" involves the same semantic interpretation of spatial terms that was discussed earlier. Once the query has been interpreted (with respect to any mouse input accompanying it), retrieval is accomplished either by (i) directly querying the knowledge base or (ii) consulting an efficient data structure which stores all binary 2D spatial and topological relations between entities. The latter is computed after all image interpretation has taken place. We are currently employing a 2D string for this purpose but are investigating alternatives due to the limitations of this method. The last step is the effective presentation of this information (both text and image) to the user.
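For concreteness, the following is a minimal sketch of the 2D-string idea: objects, here reduced to centroids, are projected onto the x and y axes, yielding two ordered symbol strings from which binary relations such as left-of can be read off. The object names and coordinates are hypothetical:

    # Hypothetical objects reduced to (x, y) centroids; y grows downward.
    objects = {
        "river":    (40, 120),
        "building": (45, 100),
        "forest":   (200, 30),
    }

    def projection(axis):
        """Object names sorted along one axis (0 = x, 1 = y)."""
        return sorted(objects, key=lambda name: objects[name][axis])

    u_string = projection(0)   # left-to-right order of symbols
    v_string = projection(1)   # top-to-bottom order of symbols

    def left_of(a, b):
        return u_string.index(a) < u_string.index(b)

    print(u_string)                       # ['river', 'building', 'forest']
    print(left_of("building", "forest"))  # True

A single pair of strings supports many such relational tests cheaply, which is what makes the representation attractive, but it coarsens overlapping or elongated regions; this is one of the limitations motivating the search for alternatives.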