VARIANCE COMPENSATION WITHIN THE MLLR FRAMEWORK
Mark Gales and Phil Woodland
Speaker adaptation techniques try to obtain near speaker dependent (SD) performance with only small amounts of speaker-specific data, and are often based on initial speaker independent (SI) recognition systems. One of the key issues in speaker adaptation is adapting a large number of parameters with only a small amount of data. This report examines the Maximum Likelihood Linear Regression (MLLR) technique for speaker adaptation and presents a number of enhancements to the basic MLLR approach. MLLR estimates linear transformations of the model parameters to maximise the likelihood of the adaptation data. Previously, MLLR has been applied only to the mean parameters in mixture Gaussian HMM systems. In this report MLLR is extended to also adapt the Gaussian variances, and re-estimation formulae are derived for the variance transforms. A second approach, called Normalised Domain MLLR, is also introduced; it can be used efficiently for HMMs with both diagonal and full covariance matrices. A number of issues concerning the use of regression class trees in MLLR are also discussed. MLLR with variance compensation is evaluated on several large vocabulary recognition tasks. The use of mean and variance MLLR adaptation was found to give an additional 1% to 8% decrease in word error rate over mean-only MLLR adaptation.
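To make the adaptation step concrete, the following is a minimal sketch of how MLLR-style transforms are applied to the parameters of a diagonal-covariance Gaussian. The mean is adapted as mu_hat = W * [1, mu]^T (a linear transform plus bias acting on an extended mean vector), and the diagonal variances are scaled element-wise. The matrices W and H below are illustrative placeholder values, not transforms estimated from adaptation data, and the re-estimation formulae themselves are not shown here.

```python
# Sketch: applying MLLR-style transforms to one Gaussian component.
# Pure Python; a 2-dimensional diagonal-covariance Gaussian is assumed.

def apply_mean_mllr(W, mu):
    """Adapt a mean vector: mu_hat = W * xi, with xi = [1, mu]^T.

    W is a d x (d+1) matrix (list of rows); the first column is the bias.
    """
    xi = [1.0] + list(mu)  # extended mean vector
    return [sum(w * x for w, x in zip(row, xi)) for row in W]

def apply_var_mllr(H, var):
    """Adapt diagonal variances by element-wise scaling (diagonal H)."""
    return [h * v for h, v in zip(H, var)]

mu = [1.0, 2.0]
var = [0.5, 0.25]
W = [[0.1, 1.0, 0.0],   # illustrative transform: identity plus a bias
     [0.2, 0.0, 1.0]]
H = [1.2, 0.9]          # illustrative diagonal variance scaling

mu_hat = apply_mean_mllr(W, mu)    # approximately [1.1, 2.2]
var_hat = apply_var_mllr(H, var)   # approximately [0.6, 0.225]
```

In a real system the transforms are shared across regression classes of Gaussians and estimated to maximise the likelihood of the adaptation data, rather than set by hand as above.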