Word Candidate Selection Using Linguistic Context


Word candidate selection is a problem in text recognition. Given word images from a text page, a word recognizer may generate multiple word candidates for each image as its recognition result. Postprocessing techniques can then be applied to select, for each word image, the candidate that best fits its linguistic context as the decision word.

The word recognition result for a sequence of word images can be represented as a lattice W, a two-dimensional array in which each element is a word candidate. w[i,j] denotes the j-th word candidate for the word image at position i. Each word candidate w[i,j] has a confidence score, p[i,j], which indicates how likely the candidate is to be correct. For example, a lattice provided by a word recognizer on a 10-word sentence (the final period occupies position 11) is

position              1          2          3          4          5          6          7
top1           Children       came         to       lire     garden         tv     admits
                 1.0000     0.3300     0.5000     0.5000     0.6000     0.6000     0.5500
top2                          coma         tv        the     golden         to     admire
                            0.3300     0.5000     0.2500     0.4000     0.4000     0.4500
top3                          come                  tire
                            0.3300                0.2500

position              8          9         10         11
top1                the     poetry       swam          .
                 0.6000     0.5500     0.5000     1.0000
top2               tire     pretty       swan
                 0.2000     0.4500     0.5000
top3                toe
                 0.2000
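
In code, such a lattice is simply a list of candidate lists with confidence scores. The following Python sketch shows one way to represent the example above; the Candidate class, field names, and decision_words helper are illustrative, not part of any particular recognizer's output format.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        word: str     # candidate word string w[i,j]
        score: float  # recognizer confidence p[i,j]

    # lattice[i] is the list of candidates for the word image at position i
    lattice = [
        [Candidate("Children", 1.00)],
        [Candidate("came", 0.33), Candidate("coma", 0.33), Candidate("come", 0.33)],
        [Candidate("to", 0.50), Candidate("tv", 0.50)],
        [Candidate("lire", 0.50), Candidate("the", 0.25), Candidate("tire", 0.25)],
        [Candidate("garden", 0.60), Candidate("golden", 0.40)],
        [Candidate("tv", 0.60), Candidate("to", 0.40)],
        [Candidate("admits", 0.55), Candidate("admire", 0.45)],
        [Candidate("the", 0.60), Candidate("tire", 0.20), Candidate("toe", 0.20)],
        [Candidate("poetry", 0.55), Candidate("pretty", 0.45)],
        [Candidate("swam", 0.50), Candidate("swan", 0.50)],
        [Candidate(".", 1.00)],
    ]

    def decision_words(lattice):
        """Select the highest-scoring candidate at each position."""
        return [max(cands, key=lambda c: c.score).word for cands in lattice]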

To solve the word candidate selection problem, linguistic contextual constraints can be applied to the lattice by re-evaluating the confidence score of each word candidate. The candidate with the highest re-evaluated confidence score at each position is then selected as the decision word.

Statistical language models such as word collocation are useful for word candidate selection. A word-collocation-based relaxation algorithm has been designed. The relaxation algorithm runs iteratively. In each iteration, the confidence score of each candidate is adjusted based on its current confidence and its collocation scores with the currently most preferred candidates of its neighboring word images. Relaxation ends when all candidates reach a stable state. For each word image, candidates whose confidence score falls below a threshold are then removed from the candidate set. For the example lattice above, the evolution of the lattice during the relaxation process is shown here.
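
The following Python sketch illustrates one possible relaxation procedure over the lattice representation above. The update rule, the constants ALPHA, PRUNE_THRESHOLD, and EPSILON, and the collocation(w1, w2) score function are assumptions made for illustration; they are not the exact formulation used in the system.

    ALPHA = 0.7            # weight of the current confidence vs. neighbor support (assumed)
    PRUNE_THRESHOLD = 0.1  # candidates below this score are removed after relaxation (assumed)
    EPSILON = 1e-4         # stop when no score changes by more than this (assumed)

    def best(cands):
        """Currently most preferred candidate at a position."""
        return max(cands, key=lambda c: c.score)

    def relax(lattice, collocation, max_iters=20):
        for _ in range(max_iters):
            updates = []
            for i, cands in enumerate(lattice):
                left = best(lattice[i - 1]) if i > 0 else None
                right = best(lattice[i + 1]) if i + 1 < len(lattice) else None
                for c in cands:
                    support, n = 0.0, 0
                    if left is not None:
                        support += collocation(left.word, c.word)
                        n += 1
                    if right is not None:
                        support += collocation(c.word, right.word)
                        n += 1
                    support = support / n if n else 0.0
                    updates.append((c, ALPHA * c.score + (1 - ALPHA) * support))
            max_change = max(abs(s - c.score) for c, s in updates)
            for c, s in updates:
                c.score = s
            if max_change < EPSILON:   # all candidates have reached a stable state
                break
        # prune weak candidates, always keeping at least the best one at each position
        return [[c for c in cands if c.score >= PRUNE_THRESHOLD] or [best(cands)]
                for cands in lattice]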

A word-collocation-based election algorithm has also been designed. In the algorithm, the confidence score of each candidate is adjusted based on its current confidence and its highest collocation scores with candidates of its neighboring word images. For the example lattice above, the result of election is shown here.
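
A single election pass can be sketched in the same setting; again, the combination rule and the weight alpha are illustrative assumptions, and collocation(w1, w2) is the same assumed score function used in the relaxation sketch.

    def elect(lattice, collocation, alpha=0.7):
        """Re-score each candidate by its best collocation with any neighboring candidate."""
        result = []
        for i, cands in enumerate(lattice):
            left = lattice[i - 1] if i > 0 else []
            right = lattice[i + 1] if i + 1 < len(lattice) else []
            rescored = []
            for c in cands:
                left_best = max((collocation(l.word, c.word) for l in left), default=0.0)
                right_best = max((collocation(c.word, r.word) for r in right), default=0.0)
                support = max(left_best, right_best)
                rescored.append(Candidate(c.word, alpha * c.score + (1 - alpha) * support))
            result.append(rescored)
        return result

Unlike the relaxation sketch, this pass considers every candidate of the neighboring positions rather than only the currently preferred one, and it runs once instead of iterating to convergence.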

Both the relaxation and election algorithms depend on word collocation data, which are collected from large text corpora such as the Brown Corpus and the Penn Treebank. Given a word pair (w1, w2), its collocation score can be measured by the mutual information score or by the frequency of its co-occurrence inside a window (window size = 2 in the simplest case). Because word collocation is very local, and the data collected from corpora are usually incomplete and biased, the relaxation and election algorithms cannot select the correct candidate at some word positions.
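
The following Python sketch shows one way to collect such statistics from a tokenized corpus and to score a pair by pointwise mutual information. The tiny inline corpus is a stand-in for the Brown Corpus or the Penn Treebank, and the smoothing-free formulation is a simplification.

    import math
    from collections import Counter

    def collect_counts(sentences, window=2):
        """Count unigrams and co-occurrences of word pairs inside the window."""
        unigrams, pairs = Counter(), Counter()
        total = 0
        for sent in sentences:
            total += len(sent)
            unigrams.update(sent)
            for i, w1 in enumerate(sent):
                for w2 in sent[i + 1:i + window]:
                    pairs[(w1, w2)] += 1
        return unigrams, pairs, total

    def mutual_information(w1, w2, unigrams, pairs, total):
        """Pointwise mutual information of (w1, w2); zero if never seen together."""
        if pairs[(w1, w2)] == 0:
            return 0.0
        p_pair = pairs[(w1, w2)] / total
        p1, p2 = unigrams[w1] / total, unigrams[w2] / total
        return math.log2(p_pair / (p1 * p2))

    corpus = [["children", "came", "to", "the", "garden"],
              ["they", "came", "to", "admire", "the", "pretty", "swan"]]
    unigrams, pairs, total = collect_counts(corpus)
    print(mutual_information("came", "to", unigrams, pairs, total))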

Language syntax is also useful for word candidate selection. A probabilistic lattice chart parser has been designed for the task. Given a word lattice, the parser can generate all possible parse trees. The parse trees are ranked by their confidence scores, and the word candidates involved in the highest-scoring parse trees are selected as the decision words. The lattice parser has three important components: a large-scale English lexicon, a broad-coverage English grammar, and a chart parsing algorithm. It has to be pointed out that when the average number of candidates per word image is large (e.g., more than 2), or the number of words in a sentence is large (e.g., more than 20), the lattice parser becomes very inefficient and unreliable. Moreover, syntactic constraints sometimes cannot provide enough information to distinguish two word candidates that have the same syntactic tag.
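
The core idea can be illustrated with a much smaller sketch: a Viterbi-style CKY parser whose lexical chart cells are seeded with every candidate at each position, weighted by its confidence score, so that candidates compete inside a single chart rather than in separate parses. The toy grammar, toy lexicon, and the way scores are multiplied down the tree are assumptions for illustration only; the actual parser uses a large-scale lexicon and a broad-coverage grammar.

    # Binary rules in Chomsky normal form: (left, right) -> list of (parent, probability)
    GRAMMAR = {
        ("NP", "VP"): [("S", 1.0)],
        ("V", "PP"): [("VP", 1.0)],
        ("P", "NP"): [("PP", 1.0)],
        ("DT", "N"): [("NP", 0.5)],
    }

    # Toy lexicon: word -> list of (tag, probability)
    LEXICON = {
        "children": [("NP", 0.5)],
        "came": [("V", 1.0)],
        "coma": [("N", 1.0)],
        "to": [("P", 1.0)],
        "lire": [("N", 1.0)],
        "the": [("DT", 1.0)],
        "tire": [("N", 0.5), ("V", 0.5)],
        "garden": [("N", 1.0)],
        "golden": [("ADJ", 1.0)],
    }

    def parse_lattice(lattice):
        """Best score of an S covering the whole lattice (Viterbi CKY over candidates)."""
        n = len(lattice)
        # chart[i][j] maps a nonterminal to the best score of a parse over positions i..j-1
        chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
        for i, cands in enumerate(lattice):
            for cand in cands:                       # every candidate is seeded into the chart
                for tag, p in LEXICON.get(cand.word.lower(), []):
                    score = cand.score * p
                    if score > chart[i][i + 1].get(tag, 0.0):
                        chart[i][i + 1][tag] = score
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for ltag, lscore in chart[i][k].items():
                        for rtag, rscore in chart[k][j].items():
                            for parent, p in GRAMMAR.get((ltag, rtag), []):
                                score = lscore * rscore * p
                                if score > chart[i][j].get(parent, 0.0):
                                    chart[i][j][parent] = score
        return chart[0][n].get("S", 0.0)

    print(parse_lattice(lattice[:5]))  # score of the best S over "Children came to the garden"

A real lattice parser would also keep back pointers so that the candidates used in the best parse can be read off as the decision words; this sketch only computes the score of the best S.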

Statistical language models and syntactic information can be integrated to achieve better performance in candidate selection. We integrated the relaxation/election algorithm with the lattice parser. The overall design is shown below.

For the example lattice above, the result of relaxation+parsing is shown here.

Recently, we also developed methods to integrate linguistic and visual context for word candidate selection. More details of the research described here can be found in our recent publications.


Tao Hong (taohong@cedar.buffalo.edu)