Abstract for mrva_icslp06

Proc ICSLP, September 2006, Pittsburgh, PA, USA.

UNSUPERVISED LANGUAGE MODEL ADAPTATION FOR MANDARIN BROADCAST CONVERSATION TRANSCRIPTION

D. Mrva and P.C. Woodland

September 2006

This paper investigates unsupervised language model adaptation on a new task: Mandarin broadcast conversation transcription. N-gram adaptation was found to yield a 1.1% absolute reduction in character error rate, while continuous space language model adaptation with PLSA and LDA gave a 1.3% absolute reduction. Moreover, a broadcast news language model trained on a large amount of data alone under-performs, by 1.8% absolute character error rate, a model that also includes a small amount of broadcast conversation data. Although the broadcast news and broadcast conversation tasks are related, this result shows a large mismatch between them. In addition, it was found that broadcast news and broadcast conversation data can be reliably distinguished using the N-gram adaptation framework.
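
For readers unfamiliar with the general idea, the sketch below illustrates one common form of unsupervised n-gram language model adaptation: interpolating a background model with counts estimated from first-pass (errorful) recognition hypotheses. This is only a toy illustration under assumed details (bigram models, add-one smoothing, a grid search over the interpolation weight); it is not the recipe used in the paper, and all names in it are illustrative.

    # Sketch of unsupervised n-gram LM adaptation by linear interpolation.
    # A background LM (standing in for a broadcast-news-trained model) is
    # mixed with an LM estimated from first-pass ASR hypotheses; the mixture
    # weight is chosen to maximise the likelihood of those hypotheses.
    from collections import Counter
    import math

    def bigram_counts(sentences):
        """Collect unigram and bigram counts from tokenised sentences."""
        uni, bi = Counter(), Counter()
        for sent in sentences:
            toks = ["<s>"] + sent + ["</s>"]
            uni.update(toks)
            bi.update(zip(toks, toks[1:]))
        return uni, bi

    def bigram_prob(uni, bi, w1, w2, alpha=0.5):
        """Bigram probability with simple unigram back-off smoothing."""
        vocab = max(len(uni), 1)
        p_uni = (uni[w2] + 1) / (sum(uni.values()) + vocab)  # add-one unigram
        if uni[w1] == 0:
            return p_uni
        p_bi = bi[(w1, w2)] / uni[w1]
        return alpha * p_bi + (1 - alpha) * p_uni

    def interpolated_logprob(models, weights, sent):
        """Log-probability of a sentence under a weighted mixture of bigram LMs."""
        toks = ["<s>"] + sent + ["</s>"]
        logp = 0.0
        for w1, w2 in zip(toks, toks[1:]):
            p = sum(lam * bigram_prob(uni, bi, w1, w2)
                    for lam, (uni, bi) in zip(weights, models))
            logp += math.log(max(p, 1e-12))
        return logp

    # Toy background data standing in for broadcast news text, and toy
    # first-pass hypotheses standing in for decoded broadcast conversations.
    background = bigram_counts([["the", "news", "at", "nine"],
                                ["the", "report", "said", "growth", "rose"]])
    hypotheses = [["well", "i", "think", "the", "growth", "is", "slow"],
                  ["yeah", "that", "is", "the", "news"]]
    adapt = bigram_counts(hypotheses)

    # Grid search over the adaptation weight; in practice this would be done
    # with EM or on held-out first-pass output rather than the same data.
    best = max((i / 10 for i in range(11)),
               key=lambda lam: sum(interpolated_logprob([background, adapt],
                                                        [1 - lam, lam], s)
                                   for s in hypotheses))
    print("chosen adaptation weight:", best)

The same interpolation view also suggests why genre detection falls out of the adaptation framework: the weight (or likelihood) assigned to the in-domain component differs systematically between broadcast news and broadcast conversation material.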


| (ftp:) mrva_icslp06.pdf | (http:) mrva_icslp06.pdf | (ftp:) mrva_icslp06.ps.gz | (http:) mrva_icslp06.ps.gz |
