June 18, 1993

Recognizing Handwriting in Context

Computerized handwriting recognition, once a sleepy field populated by a few software engineers, has been jolted awake by an explosion of interest in hand-held computers that work with a pen. It is now a hotbed of activity. But even the best commercial systems today can hardly recognize half of the inconsistently scrawled words they tackle. So most systems restrict the vocabulary a user can employ and outlaw script, forcing the user to print, slowly and carefully, instead. Now a research team at the University of Buffalo has given some old grade-school rules a new spin, built them into a handwriting recognition system, and nearly doubled its accuracy. The system, presented last month at a University of Buffalo handwriting recognition conference, identifies words by their context, relying on a series of probabilities known as a "statistical grammar."

Conventional recognition systems identify a written word by guessing at each letter based on an analysis of its lines and curves, then picking the dictionary word that most closely matches the string of guessed letters, along with a few alternative guesses. The new system, developed by Rohini Srihari, Stayvis Ng, Charlotte Baltus, and Jackie Kud of the University of Buffalo, evaluates the grammar of these guessed words as they appear in a sentence and determines which ones make the most sense.
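To make the division of labor concrete, here is a toy Python sketch of that conventional step. The dictionary, the per-letter guesses, and the scoring rule are all invented for illustration; real recognizers analyze stroke shapes in far more sophisticated ways.

    # Illustrative sketch of a conventional recognizer: each written letter
    # yields a few shape-based letter guesses with confidences, and
    # dictionary words are ranked by how well they match those guesses.
    # All data below is made up.

    DICTIONARY = ["cat", "cot", "car", "oat", "ear"]

    # Hypothetical per-position letter guesses from shape analysis.
    letter_guesses = [
        {"c": 0.7, "e": 0.2, "o": 0.1},   # first letter
        {"a": 0.6, "o": 0.4},             # second letter
        {"t": 0.8, "r": 0.2},             # third letter
    ]

    def word_score(word):
        """Multiply per-letter confidences; zero if a letter was never guessed."""
        if len(word) != len(letter_guesses):
            return 0.0
        score = 1.0
        for ch, guesses in zip(word, letter_guesses):
            score *= guesses.get(ch, 0.0)
        return score

    # The top-ranked dictionary words become the candidate list that is
    # handed on, alternatives and all, to the next stage.
    candidates = sorted(DICTIONARY, key=word_score, reverse=True)
    print(candidates[:3])   # ['cat', 'cot', 'car']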

To do that, the researchers had to overcome the basic challenge of "context-dependent" recognition: Standard rules of grammar permit words to be put together in many different ways. If the system gave each word candidate one of 60 different grammar "tags," based on categories like "noun" as well as on individual pronouns, a computer would still come up with thousands of grammatically correct ways in which these tags could be strung together. "There are just too many ways to make a sentence," explains Srihari. "It leads to a combinatorial explosion."
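The scale of that explosion is easy to check: with 60 possible tags per word, the number of conceivable tag sequences grows as 60 raised to the sentence length. A quick Python illustration:

    # With 60 grammar tags (the count cited above), an n-word sentence
    # admits 60**n conceivable tag sequences.
    for n in (5, 10, 15):
        print(f"{n} words: {60 ** n:.2e} possible tag sequences")
    # 5 words: 7.78e+08 possible tag sequences
    # 10 words: 6.05e+17 possible tag sequences
    # 15 words: 4.70e+26 possible tag sequences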

To muffle the explosion, Srihari dumped formal grammar in favor of a "statistical grammar": She had a computer pore over thousands of electronic-mail messages, noting the probability of each grammar tag being followed by another particular tag. (E-mail messages tend to employ the same informal language typical of handwritten messages, Srihari notes.) Other researchers have already applied this technique to character recognition, so that, for example, a system might know that the letter after a "q" is likely to be a "u" even if it looks more like an "n." Similarly, the Buffalo system can figure out that a "determiner" such as "the" or "a" is likely to be followed by a noun.
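In code, building such a statistical grammar amounts to counting tag-to-tag transitions in a tagged corpus and turning the counts into probabilities. A minimal Python sketch, with an invented three-sentence corpus and made-up tag names standing in for Srihari's e-mail data:

    from collections import defaultdict

    # Toy tagged corpus; a real system would train on thousands of
    # e-mail messages tagged with the full 60-tag set.
    tagged_corpus = [
        [("the", "DET"), ("cat", "NOUN"), ("sat", "VERB")],
        [("a", "DET"), ("dog", "NOUN"), ("ran", "VERB")],
        [("she", "PRON"), ("saw", "VERB"), ("a", "DET"), ("bird", "NOUN")],
    ]

    # Count how often each tag follows each other tag.
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in tagged_corpus:
        tags = [tag for _, tag in sentence]
        for prev, curr in zip(tags, tags[1:]):
            counts[prev][curr] += 1

    # Convert counts to P(next tag | previous tag).
    prob = {
        prev: {curr: c / sum(nxt.values()) for curr, c in nxt.items()}
        for prev, nxt in counts.items()
    }

    print(prob["DET"])   # {'NOUN': 1.0}: in this toy corpus a
                         # determiner is always followed by a noun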

The context-based system works in tandem with a conventional recognition system. Once the conventional system makes a pass at a sentence and generates a list of candidate words, Srihari's system can, on a first pass, eliminate an average of 60% of the guesses because they represent unlikely word transitions. To narrow the choice down to a single best-guess sentence, the system takes a second pass looking for the proper order of "hypertags": phrases such as the noun phrase "the cat" or the prepositional phrase "under the table."
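A rough Python sketch of that first pass appears below. The candidate lists, tag assignments, and transition probabilities are all hypothetical, the brute-force search stands in for the dynamic programming a real system would need, and the hypertag second pass is omitted:

    import itertools

    # Hypothetical tag assignments and transition probabilities of the
    # kind learned from the e-mail corpus.
    TAG = {"the": "DET", "cat": "NOUN", "cot": "NOUN", "eat": "VERB",
           "sat": "VERB", "sad": "ADJ"}
    P = {("DET", "NOUN"): 0.6, ("DET", "ADJ"): 0.3,
         ("NOUN", "VERB"): 0.7, ("ADJ", "NOUN"): 0.5}

    # Candidate lists from the conventional recognizer, one per word.
    candidates = [["the"], ["cat", "cot", "eat", "sad"], ["sat"]]

    def sentence_prob(words):
        """Product of tag-transition probabilities along the sentence."""
        p = 1.0
        for a, b in zip(words, words[1:]):
            p *= P.get((TAG[a], TAG[b]), 0.0)   # unseen transition -> 0
        return p

    # Keep the most probable word sequence.
    best = max(itertools.product(*candidates), key=sentence_prob)
    print(best)   # ('the', 'cat', 'sat'): "eat" scores zero because
                  # the DET -> VERB transition never occurs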

One limitation of the context-based system is that the conventional system underlying it must at least list the correct word, though it doesn't have to identify it as such; otherwise the context-based system has nothing to pick from. But when the right word is in the list, Srihari has found that her system improves accuracy from the conventional system's paltry 58% to an impressive 95%.

The work of Srihari's group is "quite good," says Kai-Fu Lee, manager of speech and language technologies at Apple, who adds that "the use of context will be one of the key techniques that will make handwritten computer communications possible."

--David H. Freedman