
Parameter estimation

Given $N$ samples of speech, we would like to compute estimates of the predictor coefficients $a_k$ that give the best fit. One reasonable way to define ``best fit'' is in terms of mean squared error. The resulting parameters can also be regarded as the ``most probable'' parameters if it is assumed that the distribution of errors is Gaussian and that a priori there were no restrictions on the values of $a_k$.

The error at any time $n$ is:

\[ e[n] = s[n] - \sum_{k=1}^{p} a_k\, s[n-k] \]

Hence the summed squared error, $E$, over a finite window of length $N$ is:

\[ E = \sum_{n=0}^{N-1} e[n]^2 = \sum_{n=0}^{N-1} \left( s[n] - \sum_{k=1}^{p} a_k\, s[n-k] \right)^2 \qquad (67) \]

The minimum of $E$ occurs when the derivative with respect to each of the parameters, $a_k$, is zero. As can be seen from equation 67, $E$ is quadratic in each of the $a_k$, so there is a single stationary point; since very large positive or negative values of $a_k$ must lead to poor prediction, the solution of $\partial E / \partial a_k = 0$ must be a minimum.
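The quadratic-in-each-coefficient claim is easy to check numerically. Below is a minimal sketch for the single-coefficient case $e[n] = s[n] - a\,s[n-1]$, on a made-up toy signal (the signal values and function names here are my own illustration, not from the text): the second differences of $E(a)$ over a grid are constant, as they must be for a quadratic, and the closed-form stationary point is the global minimum.

```python
# Toy illustration (assumed signal): E(a) = sum_n (s[n] - a*s[n-1])^2
# is quadratic in a, so it has exactly one stationary point, a minimum.

def summed_squared_error(s, a, start=1):
    """E(a) summed over the analysis window starting at `start`."""
    return sum((s[n] - a * s[n - 1]) ** 2 for n in range(start, len(s)))

s = [1.0, 0.9, 0.7, 0.4, 0.1, -0.2, -0.5]  # hypothetical speech-like samples

# Sample E on a grid of candidate coefficients a in [-2, 2]
grid = [i * 0.01 for i in range(-200, 201)]
E = [summed_squared_error(s, a) for a in grid]

# A quadratic has constant second differences on a uniform grid
second_diffs = [E[i + 1] - 2 * E[i] + E[i - 1] for i in range(1, len(E) - 1)]

# Closed-form stationary point: a* = sum(s[n]*s[n-1]) / sum(s[n-1]^2)
num = sum(s[n] * s[n - 1] for n in range(1, len(s)))
den = sum(s[n - 1] ** 2 for n in range(1, len(s)))
a_star = num / den  # E(a_star) is below every grid value of E
```

Because $E$ is a parabola in $a$, the stationary point `a_star` necessarily beats every grid sample, which is the single-minimum argument in miniature.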

Figure 38: Schematic showing single minimum of a quadratic

Hence differentiating equation 67 with respect to $a_j$ and setting the result equal to zero gives the set of $p$ equations:

\[ \sum_{n=0}^{N-1} s[n-j] \left( s[n] - \sum_{k=1}^{p} a_k\, s[n-k] \right) = 0, \qquad 1 \le j \le p \qquad (69) \]

Rearranging equation 69 gives:

\[ \sum_{n=0}^{N-1} s[n-j]\, s[n] = \sum_{k=1}^{p} a_k \sum_{n=0}^{N-1} s[n-j]\, s[n-k], \qquad 1 \le j \le p \qquad (70) \]
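The set of $p$ equations obtained by differentiation says exactly that, at the least-squares solution, the error is orthogonal to each of the past samples used as predictors. A small numerical check of this orthogonality property, on random toy data with variable names of my own choosing:

```python
# Sketch (toy data, assumed names): at the least-squares solution the
# residual e[n] is orthogonal to each regressor s[n-j], which is the set
# of p equations obtained by setting the derivatives of E to zero.
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(64)
p, start, N = 3, 8, 48

n = np.arange(start, start + N)
X = np.column_stack([s[n - j] for j in range(1, p + 1)])  # columns: s[n-j]
y = s[n]                                                  # targets: s[n]

a, *_ = np.linalg.lstsq(X, y, rcond=None)  # coefficients minimising E
e = y - X @ a                              # prediction error e[n]
orthogonality = X.T @ e  # each entry is sum_n s[n-j]*e[n]; all ~0
```

Each entry of `orthogonality` is $\sum_n s[n-j]\,e[n]$ for one value of $j$, and all of them vanish (to floating-point precision) at the minimum.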

Define the covariance matrix, $\Phi$, with elements $\phi(j,k)$:

\[ \phi(j,k) = \sum_{n=0}^{N-1} s[n-j]\, s[n-k] \]

Now we can write equation 70 as:

\[ \phi(j,0) = \sum_{k=1}^{p} a_k\, \phi(j,k), \qquad 1 \le j \le p \]

or in matrix form, with $\psi_j = \phi(j,0)$:

\[ \begin{pmatrix} \phi(1,0) \\ \phi(2,0) \\ \vdots \\ \phi(p,0) \end{pmatrix} = \begin{pmatrix} \phi(1,1) & \phi(1,2) & \cdots & \phi(1,p) \\ \phi(2,1) & \phi(2,2) & \cdots & \phi(2,p) \\ \vdots & \vdots & \ddots & \vdots \\ \phi(p,1) & \phi(p,2) & \cdots & \phi(p,p) \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{pmatrix} \]

or simply:

\[ \boldsymbol{\psi} = \Phi \mathbf{a} \]

Hence the Covariance method solution is obtained by matrix inverse:

\[ \mathbf{a} = \Phi^{-1} \boldsymbol{\psi} \]
Note that $\Phi$ is symmetric, i.e. $\phi(j,k) = \phi(k,j)$, and that this symmetry can be exploited in inverting $\Phi$ (see [9]).
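The whole Covariance method can be sketched in a few lines. This is a minimal illustration under my own naming conventions (function name, test signal, and indexing choices are assumptions, not from the text): build the table of $\phi(j,k)$ values, filling only one triangle and mirroring it by symmetry, then solve the resulting $p \times p$ linear system.

```python
# Minimal sketch of the Covariance method (assumed names/signal):
# phi(j,k) = sum_n s[n-j]*s[n-k] over the window, then solve Phi a = psi.
import numpy as np

def covariance_lpc(s, p, start, N):
    """Estimate p predictor coefficients from the window s[start:start+N].
    Needs p samples of history before `start`, since the sums reference
    s[n-j] down to n - j = start - p."""
    n = np.arange(start, start + N)
    phi = np.empty((p + 1, p + 1))
    for j in range(p + 1):
        for k in range(j, p + 1):
            # fill one triangle, mirror the other: phi(j,k) = phi(k,j)
            phi[j, k] = phi[k, j] = np.dot(s[n - j], s[n - k])
    # Phi is phi[1:, 1:] (p x p); psi_j = phi(j, 0) is phi[1:, 0]
    return np.linalg.solve(phi[1:, 1:], phi[1:, 0])

# Usage: on a synthetic 2nd-order autoregressive signal the true
# coefficients (1.3, -0.6) should be recovered closely.
rng = np.random.default_rng(0)
s = np.zeros(400)
for t in range(2, 400):
    s[t] = 1.3 * s[t - 1] - 0.6 * s[t - 2] + 0.01 * rng.standard_normal()
a = covariance_lpc(s, p=2, start=100, N=300)
```

A general-purpose solver is used here for brevity; since $\Phi$ is symmetric (and in practice positive definite), a Cholesky-based solver could exploit the symmetry as the text notes.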

These equations reference the samples $s[n]$ for $-p \le n \le N-1$; that is, the Covariance method requires $p$ samples of history before the start of the analysis window.

Speech Vision Robotics group/Tony Robinson