Abstract for niesler_tr265

Cambridge University Engineering Department Technical Report CUED/F-INFENG/TR265


Thomas Niesler and Phil Woodland

July 1996

Conventional n-gram language models employ the occurrence counts of word n-tuples to calculate probabilities for word sequences. It has been demonstrated, however, that language models using n-tuples of word categories rather than words exhibit certain advantages, such as the intrinsic ability to generalise to unseen word sequences, and attractive size-versus-performance tradeoffs. This document compares the behaviour of word- and category-based language models in detail. Among the significant findings are that the category-based model is less likely to deliver very small probability estimates, that it performs better in situations where the word model backs off, and that the category-based model is less sensitive to changes in the character of the test text.
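The contrast described above can be illustrated with a minimal sketch (not the authors' code; the toy corpus and word-to-category map are invented for illustration): a word-bigram model assigns zero probability to unseen word pairs, while a category-bigram model pools counts over categories and so generalises to word sequences it has never seen.

```python
# Toy comparison of word-bigram vs category-bigram probability estimates.
# The corpus and category map below are hypothetical examples.
from collections import Counter

corpus = ["the cat sat", "the dog sat", "a cat ran"]
category = {"the": "DET", "a": "DET", "cat": "N", "dog": "N",
            "sat": "V", "ran": "V"}

word_bigrams, word_unigrams = Counter(), Counter()
cat_bigrams, cat_unigrams = Counter(), Counter()
cat_emissions = Counter()  # counts of (category, word) pairs

for sentence in corpus:
    words = sentence.split()
    for w in words:
        word_unigrams[w] += 1
        cat_unigrams[category[w]] += 1
        cat_emissions[(category[w], w)] += 1
    for w1, w2 in zip(words, words[1:]):
        word_bigrams[(w1, w2)] += 1
        cat_bigrams[(category[w1], category[w2])] += 1

def p_word(w1, w2):
    """Maximum-likelihood word-bigram probability P(w2 | w1)."""
    if not word_unigrams[w1]:
        return 0.0
    return word_bigrams[(w1, w2)] / word_unigrams[w1]

def p_cat(w1, w2):
    """Category model: P(w2 | w1) = P(c2 | c1) * P(w2 | c2)."""
    c1, c2 = category[w1], category[w2]
    if not cat_unigrams[c1] or not cat_unigrams[c2]:
        return 0.0
    return (cat_bigrams[(c1, c2)] / cat_unigrams[c1]) * \
           (cat_emissions[(c2, w2)] / cat_unigrams[c2])

# The word pair ("dog", "ran") never occurs in the corpus, so the
# word model gives zero probability; the category model does not,
# because noun-followed-by-verb is well attested.
print(p_word("dog", "ran"))       # 0.0
print(p_cat("dog", "ran") > 0.0)  # True
```

This is the sense in which the category model "generalises to unseen word sequences": an unseen word pair can still receive a non-zero probability whenever its category pair has been observed.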

(ftp:) niesler_tr265.ps.Z (http:) niesler_tr265.ps.Z
PDF (automatically generated from original PostScript document - may be badly aliased on screen):
  (ftp:) niesler_tr265.pdf | (http:) niesler_tr265.pdf


If you have difficulty viewing files that are in PostScript (ending '.ps' or '.ps.gz'), then you may be able to find tools to view them at the gsview web site.
