


Title:
CONTROLLERS FOR MULTICHANNEL FEEDFORWARD CONTROL OF STOCHASTIC DISTURBANCES
Document Type and Number:
WIPO Patent Application WO/2001/035175
Kind Code:
A1
Abstract:
A time domain formulation for the multichannel feedforward control problem (Figure 1) is used to derive an optimality condition for the least squares controller. It is also used to motivate an instantaneous steepest descent algorithm for the adaptation of the controller, which is the (known) filtered error LMS algorithm (Figure 2). The convergence rate of this algorithm is limited by the correlation properties of each reference signal, their cross-correlation properties and the dynamics and coupling within the plant response. An expression is then derived for the transfer function of the optimum least squares controller. A new architecture (Figure 3) for the adaptive controller whose convergence rate is not limited by the factors mentioned above is described. A set of white and uncorrelated reference signals are generated to drive a modified matrix C(z) of control filters, whose outputs are multiplied by the inverse of the minimum phase part Gmin-1(z) of the plant response matrix G(z) before being fed to the physical plant. The error signals e(n) from the plant are then used to update this controller after being fed through the time-reverse transpose z-J GallT(z-1) of the all-pass part of the plant matrix.

Inventors:
ELLIOTT STEPHEN JOHN (GB)
COOK JONATHAN GORDON (GB)
Application Number:
PCT/GB2000/004273
Publication Date:
May 17, 2001
Filing Date:
November 09, 2000
Assignee:
ADAPTIVE CONTROL LTD (GB)
ELLIOTT STEPHEN JOHN (GB)
COOK JONATHAN GORDON (GB)
International Classes:
G05B5/01; G05B13/04; G05B21/02; (IPC1-7): G05B11/32; G05B13/04
Foreign References:
EP0465174A2, 1992-01-08
US5325437A, 1994-06-28
US5018202A, 1991-05-21
Other References:
S.ELLIOTT: "FILTERED REFERENCE AND FILTERED ERROR LMS ALGORITHMS FOR ADAPTIVE FEEDFORWARD CONTROL", MECHANICAL SYSTEMS AND SIGNAL PROCESSING, vol. 12, no. 6, November 1998 (1998-11-01), UK, pages 769 - 781, XP000980485
I.PROUDLER ET AL: "INCREASING THE PERFORMANCE OF THE LMS ALGORITHM USING AN ADAPTIVE PRECONDITIONER", PROCEEDINGS OF EUSIPCO-96, EIGHTH EUROPEAN SIGNAL PROCESSING CONFERENCE, vol. 2, 10 September 1996 (1996-09-10), ITALY, pages 1385 - 1388, XP000979686
Attorney, Agent or Firm:
Barker, Brettell (Medina Chambers Town Quay Southampton SO14 2AQ, GB)
Claims:
CLAIMS
1. A method of determining the filter coefficients of a filter matrix C(z) in a least squares controller suitable for use in a multichannel feedforward control system, the controller comprising a least squares computation means CM utilising an LMS algorithm to determine said filter coefficients, the method comprising providing the controller with reference signals x(n) from the system to be controlled, feeding the outputs u(n) of the controller to a plant G(z), measuring the error signals e(n), which are the combination of the output of the plant response with any disturbance signal d(n), and feeding said error signals e(n) to the computation means CM of the controller, characterised by the provision of a preconditioning means Gmin-1(z) associated with the outputs u(n) of said filter matrix C(z), said preconditioning means being so chosen as to provide a relatively faster convergence rate of the LMS algorithm.
2. The method of claim 1 comprising the provision of a preconditioning means F-1(z) to the reference inputs to an FIR filter matrix C(z), the reference input preconditioning means being arranged to reduce the correlation of the reference signals.
3. The method of claim 2 in which the reference input preconditioning means is the inverse F-1(z) of an estimated spectral factor F(z) of the spectral density matrix Sxx(z), where F(z) is the matrix of filters which could have been used to generate the reference signals from an equal number of white uncorrelated innovation signals, so that Sxx(z) = F(z)FT(z-1).
4. The method of any one of claims 1 to 3 in which the preconditioning means applied to the output of the filter matrix of the controller comprises a matrix Gmin-1(z) for conditioning the input to the plant response G(z), said matrix Gmin-1(z) being the inverse of the minimum phase part of the plant response, where G(z) = Gall(z)Gmin(z), Gmin(z) is the matrix of the minimum phase components of the plant response, and Gall(z) is the matrix of the all-pass components of the plant response.
5. The method of claim 4 in which the error signal e(n) is filtered by the time-reversed all-pass part z-J GallT(z-1) of the plant response G(z) to produce a filtered error signal a(n-J) for feeding to the computation means CM.
6. The method of claim 4 in which the reference signals are passed through the all-pass part Gall(z) of the plant response G(z) to produce a filtered reference signal f(n) for feeding to the computational means CM.
7. The method of claim 5 or 6 in which the matrix of all-pass parts of the plant response Gall(z) is approximated by a series of delay functions.
8. An adaptive multichannel feedforward controller in which adaptation of the filter coefficients Gmin-1(z) or F-1(z) of the preconditioning FIR filters is arranged to be performed by repeated execution of the method of any of the preceding claims.
9. A method of determining the filter coefficients of a filter matrix C(z) in a least squares controller suitable for use in a multichannel feedforward control system, the controller comprising a least squares computation means CM utilising an LMS algorithm to determine said filter coefficients, the method comprising providing the controller with reference signals x(n) from the plant to be controlled, feeding the output u(n) of the controller to a plant G(z), measuring the error signal e(n), which is a combination of the output of the plant response model with any disturbance signal d(n), and feeding said error signal e(n) to the computation means CM of the controller, characterised by the provision of a preconditioning means F-1(z) associated with an input to the filter matrix C(z), said preconditioning means being in the form of the inverse F-1(z) of an estimated spectral factor F(z) of the spectral density matrix Sxx(z), where F(z) is the matrix of filters which could have been used to generate the reference signals from an equal number of white uncorrelated innovation signals, so that Sxx(z) = F(z)FT(z-1).
10. The method of claim 2 or claim 9 comprising adapting the preconditioning filter F-1(z) so that it behaves as a multichannel prediction error filter and so both reduces the cross-correlation between the reference signals and reduces the autocorrelation of the individual reference signals.
11. The method of any one of claims 1 to 7, or claim 9 or claim 10 in which the algorithm is implemented in the frequency domain.
Description:
CONTROLLERS FOR MULTICHANNEL FEEDFORWARD CONTROL OF STOCHASTIC DISTURBANCES The present invention relates to controllers for multichannel feedforward control of stochastic disturbances. The invention relates particularly, but not exclusively, to optimal controllers and adaptive controllers.

In the following description references will be placed in square brackets, such as [10], and the list of references is to be found before the claims.

Background To The Invention The control of stochastic disturbances at multiple sensors using multiple actuators has been the subject of a number of studies in recent years, partially because of its application to the active control of sound and vibration [1, 2, 3]. Although the control system may have either a feedforward or a feedback structure in different applications, the design of a feedback controller can still be cast as a feedforward problem with a suitable parameterisation of the controller [4, 5]. Such controllers are often made adaptive to maintain good performance in the face of nonstationarity in the disturbance being controlled.

Current gradient descent adaptation algorithms have problems associated with the potentially slow modes of convergence, which in the multichannel feedforward control case may be due to two distinct effects. First, the reference signals are not white and may also be partially correlated, which can cause slow modes even for electrical cancellation problems using the LMS algorithm. Second, the multiple input multiple output response of the system under control, which will be referred to as the plant, will generally have dynamic behaviour and cross coupling which will also slow the convergence rate of a gradient descent algorithm.

Summaries Of The Invention According to one aspect of the invention we provide a method of determining the filter coefficients of a filter matrix C(z) in a least squares controller suitable for use in a multichannel feedforward control system, the controller comprising a least squares computation means CM utilising an LMS algorithm to determine said filter coefficients, the method comprising providing the controller with reference signals x(n) from the system to be controlled, feeding the outputs u(n) of the controller to a plant G(z), measuring the error signals e(n), which are the combination of the output of the plant response with any disturbance signal d(n), and feeding said error signals e(n) to the computation means CM of the controller, characterised by the provision of a preconditioning means Gmin-1(z) associated with the outputs u(n) of said filter matrix C(z), said preconditioning means being so chosen as to provide a relatively faster convergence rate of the LMS algorithm.

The filter matrix will usually be an FIR filter matrix.

In addition to such a preconditioning means for the plant response a preconditioning means to the reference inputs to the FIR filter matrix is preferably provided, the reference input preconditioning means being arranged to reduce the correlation of the reference signals.

Preferably the reference input preconditioning means is the inverse F-1(z) of an estimated spectral factor F(z) of the spectral density matrix Sxx(z), where F(z) is the matrix of filters which could have been used to generate the reference signals from an equal number of white uncorrelated innovation signals, so that Sxx(z) = F(z)FT(z-1). The preconditioning means applied to the output of the FIR filter matrix of the controller preferably comprises a matrix Gmin-1(z) for conditioning the input to the plant response G(z), said matrix Gmin-1(z) being the inverse of the minimum phase part of the plant response, where G(z) = Gall(z)Gmin(z), Gmin(z) is the matrix of the minimum phase components of the plant response, and Gall(z) is the matrix of the all-pass components of the plant response.

The error signal e(n) is desirably filtered by the time-reversed all-pass part z-J GallT(z-1) of the plant response G(z) to produce a filtered error signal a(n-J) for feeding to the computation means CM.

In one embodiment the reference signals x(n) are passed through the all-pass part Gall(z) of the plant response to produce a filtered reference signal f(n) for feeding to the computational means CM.

The matrix of all-pass parts of the plant response Gall(z) may conveniently be approximated by a series of delay functions in some cases.

Adaptation of the filter coefficients Gmin-1(z) or F-1(z) of the preconditioning FIR filters may be arranged to be performed by repeated execution of the aforementioned methods.

According to a second aspect of the invention we provide a method of determining the filter coefficients of a filter matrix C (z) in a least squares controller suitable for use in a multichannel feedforward control system, the controller comprising a least squares computation means CM utilising an LMS algorithm to determine said filter coefficients, the method comprising providing the controller with reference signals x (n) from the plant to be controlled, feeding the output u (n) of the controller to a plant G (z), measuring the error signal e (n), which is a combination of the output of the plant response with any disturbance signal d (n), and feeding said error signal e (n) to the computation means CM of the controller,

characterised by the provision of a preconditioning means F-1(z) associated with an input to the filter matrix C(z), said preconditioning means being in the form of the inverse F-1(z) of an estimated spectral factor F(z) of the spectral density matrix Sxx(z), where F(z) is the matrix of filters which could have been used to generate the reference signals from an equal number of white uncorrelated innovation signals, so that Sxx(z) = F(z)FT(z-1).

In one advantageous embodiment the method comprises adapting the preconditioning filter F-1(z) so that said filter behaves as a multichannel prediction error filter and so both reduces the cross-correlation between the reference signals and reduces the autocorrelation of the individual reference signals.

By way of example only, the various aspects of the invention will now be explained further with reference to the accompanying drawings.

Brief Description of the Drawings Figure 1 is a block diagram of a (prior art) general multichannel feedforward control system with K reference signals, M secondary actuators and L error sensors.

Figure 2 is a block diagram of a (prior art) filtered error LMS adaptive algorithm for a feedforward controller which uses the outer product of the vector of delayed reference signals, x(n-J), and the vector of delayed filtered error signals, f(n-J), which is denoted by the symbol ⊗.

Figure 3 is a block diagram of the adaptive algorithm of a controller in accordance with the invention, the algorithm determining the transformed matrix of controllers, C(z), which uses the outer product of the vector of delayed innovations for the reference signals, v(n-J), and the vector of delayed filtered error signals, a(n-J), which is denoted by the symbol ⊗.

Figure 4 is a block diagram of a simulation of a system in accordance with the invention. The block diagram is of the simulation performed in which two uncorrelated white noise signals, n1 and n2, are used to generate both the disturbance signals at the error microphones via band-pass filters, and the observed reference signals, x1 and x2, via filters with a slope of 3 dB/octave and a mixing matrix M. The acoustic plant is a symmetric arrangement of two secondary loudspeakers and two error microphones in free space.

Figure 5 shows time histories of the reduction in the sum of squared error signals for the simulation whose block diagram is shown in Figure 4 for three adaptation algorithms. Upper graph: LMS algorithm using reference signals x1 and x2 and error signals e1 and e2; middle graph: LMS algorithm using uncorrelated reference signals n1 and n2 obtained by filtering x1 and x2 with F-1(z); and lower graph: the new algorithm which uses F-1(z) to generate uncorrelated reference signals and Gmin-1(z) to diagonalise the overall plant response.

Figure 6 is a block diagram for the prior art adaptive SVD controller for the attenuation of tonal disturbances. Figure 7 is a block diagram of a further system in accordance with the invention in which the plant response G(z) is preconditioned by Gmin-1(z), the inverse of the minimum phase part of the plant response, and the reference signals x(n) are passed through Gall(z), the matrix of the all-pass part of the plant response. Figure 8 is a block diagram of a modified system in accordance with the invention in which the plant response G(z) is preconditioned by the matrix Gmin-1(z) but the FIR filter C(z) does not operate on preconditioned reference signals.

Time Domain Formulation And The Filtered Error LMS Algorithm The block diagram of a general multichannel feedforward control problem is shown in Figure 1. The K sampled-time random reference signals, which are assumed to be stationary but in general will be partially correlated, are described by the vector x(n) = [x1(n) ... xK(n)]T. (1) These reference signals are used to drive an MxK matrix of digital FIR control filters, W(z), which generate the M control signals contained in the vector u(n) = [u1(n) ... uM(n)]T. (2) The L sampled-time error signals are contained in the vector e(n) = [e1(n) ... eL(n)]T, (3) which are equal to the superposition of the L disturbance signals, which are also assumed to be stationary, defined by the vector d(n) = [d1(n) ... dL(n)]T, (4) and the L outputs of the physical system under control, whose response is defined by the LxM plant matrix G(z). The matrix of plant responses, G(z), from the sampled-time control inputs to the sampled-time outputs contains the responses of the data converters, antialiasing and reconstruction filters, actuators, sensors and the physical system under control, which is assumed to be stable. Any feedback from the control signals back to the reference signals is assumed to be perfectly cancelled by a feedback cancellation, or neutralisation, filter [6, 5].

The vector of time histories of the L error signals in Figure 1 can thus be expressed as e(n) = d(n) + Σj=0...J-1 Gj u(n-j), (5) where Gj is the LxM matrix of the j-th elements of the plant's impulse response functions and J is the number of coefficients in this impulse response, which can be arbitrarily large. The vector of control signals can also be written as u(n) = Σi=0...I-1 Wi x(n-i), (6) where Wi is the MxK matrix of the i-th coefficients of the FIR controller matrix and each filter is assumed to have I coefficients. The vector of error signals is thus equal to e(n) = d(n) + Σj=0...J-1 Gj Σi=0...I-1 Wi x(n-j-i). (7) The cost function to be minimised is assumed to be equal to the expectation of the sum of the squared error signals, which can be written as J = trace[E(e(n)eT(n))], (8) where E denotes the expectation operator and trace denotes the trace of the matrix in square brackets. Note that the trace of the outer product of the vector of error signals is equal to their inner product, as used to define the cost function in most previous analyses [1], but the outer product has the advantage of keeping products of terms involving random signals adjacent to one another in the subsequent analysis, so that their expectation can be conveniently defined. More complete cost functions could also be used, in which the outer product of the error signals is multiplied by a weighting matrix and an effort-weighting term is included which is proportional to the outer product of the control signals in equation (6). Although these modifications to the cost function can be important in practice they do not change the form of the optimal solution or adaptive algorithms derived below and so will be omitted here in order not to complicate the analysis unnecessarily.
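The convolution sums in equations (5) to (8) can be sketched numerically. In the following Python sketch the channel counts, filter lengths and coefficients are all invented for illustration, and the cost of equation (8) is estimated by a sample average over time:

```python
import numpy as np

rng = np.random.default_rng(0)

K, M, L = 2, 2, 2    # references, actuators, sensors (illustrative sizes)
I, J = 4, 6          # controller and plant impulse-response lengths
N = 5000             # number of samples

x = rng.standard_normal((N, K))           # reference signals x(n)
d = rng.standard_normal((N, L))           # disturbance signals d(n)
W = rng.standard_normal((I, M, K)) * 0.1  # controller matrices W_i, i = 0..I-1
G = rng.standard_normal((J, L, M)) * 0.1  # plant matrices G_j, j = 0..J-1

# u(n) = sum_i W_i x(n - i)  -- equation (6)
u = np.zeros((N, M))
for i in range(I):
    u[i:] += x[: N - i] @ W[i].T

# e(n) = d(n) + sum_j G_j u(n - j)  -- equation (5)
e = d.copy()
for j in range(J):
    e[j:] += u[: N - j] @ G[j].T

# cost = trace E[e(n) e(n)^T]  -- equation (8), estimated by a sample average
cost = np.trace(np.einsum('nl,nk->lk', e, e) / N)
print(cost)
```

Because the trace of the outer product equals the inner product, `cost` agrees with the mean sum of squared errors, which the code exploits only implicitly.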

Using equation (7) the expectation of the outer product in equation (8) can be written as E[e(n)eT(n)] = Rdd(0) + Σj Σi Gj Wi RxdT(j+i) + Σj Σi Rxd(j+i) WiT GjT + Σj Σi Σj' Σi' Gj Wi Rxx(j'+i'-j-i) Wi'T Gj'T, (9) where the LxK cross-correlation matrix between the reference signals and disturbances is defined as Rxd(m) = E[d(n + m)xT(n)], (10) the KxK matrix of auto- and cross-correlations between the reference signals is defined as Rxx(m) = E[x(n + m)xT(n)], (11) and the LxL matrix Rdd(m) is similarly defined.

Using the rules for differentiating the trace of a matrix function with respect to the elements of one of the matrices [7], the derivative of the cost function with respect to the elements of the i-th matrix of controller coefficients, Wi, can be written as ∂J/∂Wi = 2 Σj GjT [Rxd(j+i) + Σj' Σi' Gj' Wi' Rxx(j+i-j'-i')]. (12)

The condition which must be satisfied by the set of optimal control filters is given by setting equation (12) to zero for i = 0 to I-1, although there appears to be no simple analytic solution to this equation.

An important result is obtained by defining the matrix of cross-correlation functions between each error signal and each reference signal to be Rxe(m) = E[e(n + m)xT(n)], (13) which can be written, using equation (7), as Rxe(m) = Rxd(m) + Σj Σi Gj Wi Rxx(m-j-i). (14) The matrix of derivatives of the cost function with respect to the elements of the i-th matrix of controller coefficients, equation (12), can then be written as ∂J/∂Wi = 2 Σj GjT Rxe(j+i). (15) Using the definition of Rxe(m) and the assumed stationarity of the signals, equation (15) can then be written in the form ∂J/∂Wi = 2 Σj GjT E[e(n+j)xT(n-i)], (16) or ∂J/∂Wi = 2 E[f(n)xT(n-i)], (17) where f(n) = Σj=0...J-1 GjT e(n+j) (18)

is the vector of M filtered error signals.

If the controller coefficients in Wi are adapted at each sample using the instantaneous version of equation (17) the filtered-error LMS algorithm [8, 9, 10] is obtained, which can be written as Wi(n+1) = Wi(n) - α f(n)xT(n-i), (19) where α is a convergence coefficient. Notice, however, that equation (18) cannot be implemented with a causal filter since it requires future values of the error signals. In order to implement a causal algorithm both f(n) and xT(n) in equation (19) may be delayed by J samples to give Wi(n+1) = Wi(n) - α f(n-J)xT(n-J-i), (20) where f(n-J) = Σj'=1...J G(J-j')T e(n-j'), (21) with j' = J - j, which emphasises the fact that e(n) is causally filtered by a time-reversed version of the transposed impulse response matrix of the plant to generate these filtered error signals. It is assumed that a perfect model of the plant response is available to generate the filtered error signals. A block diagram for this adaptation algorithm is shown in Figure 2.
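A single-channel sketch of the delayed filtered-error LMS update of equation (20) is given below. The plant and primary-path impulse responses are invented for illustration, and the delay used is J-1 samples, the smallest that makes the error filter of equation (21) causal for this plant:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 40000
g = np.array([0.0, 1.0, 0.5])   # plant impulse response g_j (illustrative)
J = len(g)
D = J - 1                        # delay making the time-reversed error filter causal
x = rng.standard_normal(N)       # white reference signal
# hypothetical primary path, chosen so an 8-tap controller can cancel it exactly
p = np.convolve(g, [0.4, 0.3, 0.2])
d = np.convolve(x, p)[:N]

I, alpha = 8, 0.005
w = np.zeros(I)
xbuf = np.zeros(I + D)           # xbuf[k] = x(n - k)
ubuf = np.zeros(J)               # ubuf[k] = u(n - k)
ebuf = np.zeros(J)               # ebuf[k] = e(n - k)
err = np.zeros(N)
for n in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    u = w @ xbuf[:I]                        # equation (6)
    ubuf = np.roll(ubuf, 1); ubuf[0] = u
    e = d[n] + g @ ubuf                     # e(n) = d(n) + sum_j g_j u(n-j)
    ebuf = np.roll(ebuf, 1); ebuf[0] = e
    err[n] = e
    # f(n-D) = sum_j g_j e(n-D+j): time-reversed plant acting on recent errors
    f = g @ ebuf[::-1]
    # equation (20): w_i(n+1) = w_i(n) - alpha f(n-D) x(n-D-i)
    w -= alpha * f * xbuf[D:D + I]
print(np.mean(err[:2000]**2), np.mean(err[-2000:]**2))
```

With these invented responses the exact cancelling controller lies within the 8 taps, so the sum of squared errors should fall by several orders of magnitude over the run.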

The convergence and robustness properties of the filtered error LMS algorithm are similar to those of the more widely-used filtered reference LMS algorithm [9, 10].

The filtered error algorithm can, however, be more efficient to implement if there are many reference signals. The additional delay involved in ensuring the causality of equation (21) can slow down the convergence rate of the filtered error algorithm if there are relatively few channels in the control system, but for control systems with many channels other effects limit the convergence rate, which are similar for both the filtered error and filtered reference LMS algorithms [9, 10].

These algorithms are both based on the method of steepest descent and their convergence speed is not only limited by the eigenvalue spread of the autocorrelation matrix of the individual reference signals, as for the normal LMS algorithm [11], and by the correlation between the individual reference signals, but also by the frequency response and coupling properties of the plant [12]. This can result in very slow convergence for ill-conditioned systems with many channels.
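The effect of eigenvalue spread on plain LMS can be illustrated with a small system-identification sketch (the unknown response, step size and AR(1) colouring below are all invented): with the same step size and the same input power, a strongly autocorrelated input leaves a far larger residual misalignment than a white input after the same number of iterations.

```python
import numpy as np

rng = np.random.default_rng(4)
N, I, alpha = 5000, 16, 0.02
h = rng.standard_normal(I)        # unknown FIR response to identify (illustrative)

def misalignment(x):
    # plain LMS identification of h driven by the input x, no measurement noise
    w, xbuf = np.zeros(I), np.zeros(I)
    for n in range(N):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        e = (h - w) @ xbuf
        w += alpha * e * xbuf
    return np.linalg.norm(w - h) / np.linalg.norm(h)

white = rng.standard_normal(N)    # white input: unit eigenvalue spread
ar = np.zeros(N)                  # AR(1) input: large eigenvalue spread
for n in range(1, N):
    ar[n] = 0.95 * ar[n - 1] + rng.standard_normal()
ar *= np.sqrt(1.0 - 0.95**2)      # rescale to unit variance, same input power

print(misalignment(white), misalignment(ar))
```

The slow modes of the coloured run correspond to the small eigenvalues of its autocorrelation matrix, which is the single-channel version of the limitation described above.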

Transform Domain Formulation Of The Optimal Controller If the number of coefficients of the optimum causal controller which satisfies equation (15) is allowed to go to infinity, then the condition for the optimal solution can be written as Σj GjT Rxe(j+i) = 0 for all i ≥ 0. (22) Taking the z-transform of this equation we obtain {GT(z-1)Sxe(z)}+ = 0, (23) where the notation { }+ indicates that the z-transform of the causal part of the time-domain version of the function inside the brackets has been taken [13, 2], GT(z-1) is said to be the adjoint of the matrix of plant responses G(z) [9, 14], and Sxe(z) is the z-transform of Rxe(m), which can also be written as Sxe(z) = E[e(z)xT(z-1)], (24) where it is understood that e(z) and x(z) are z-transforms of finite sections of e(n) and x(n), whose lengths are allowed to become infinite, and the expectation is taken over these sections since the signals are assumed to be stationary, as explained more fully in [14] for example.

Since e(z) = d(z) + G(z)W(z)x(z), (25) then Sxe(z) = Sxd(z) + G(z)W(z)Sxx(z), (26) where the z-transform of Rxd(m) is Sxd(z) = E[d(z)xT(z-1)], (27) and the z-transform of Rxx(m) is Sxx(z) = E[x(z)xT(z-1)]. (28) The condition for the optimal causal filter may thus be written as {GT(z-1)Sxd(z) + GT(z-1)G(z)Wopt(z)Sxx(z)}+ = 0. (29) We now introduce the spectral factorisation of the spectral density matrix Sxx(z) [15, 16, 17] Sxx(z) = F(z)FT(z-1), (30) in which F(z) is causal and minimum phase so that its inverse, F-1(z), is also stable and causal. The conditions under which such a spectral factorisation exists are discussed for the continuous-time case in [18], for example, and for the sampled-time case considered here these conditions may be summarised as i) Sxx(z) = SxxT(z-1), which is true from the definition of Sxx(z), ii) Sxx(ejωT) must be analytic for all ωT, and iii) Sxx(ejωT) must be positive definite for all ωT.

The spectral factor may be physically interpreted as the transfer function of a matrix of filters which is used to generate the K random reference signals in x(n) from an equal number of uncorrelated white random innovation signals of unit variance, v(n), so that x(z) = F(z)v(z), (31) where Svv(z) = I, and so the definition of Sxx(z) in equation (28) leads directly to equation (30). These innovation signals could also be reconstructed from the observed reference signals by using the inverse of F(z), which by definition is stable and causal, so that v(z) = F-1(z)x(z). (32) By filtering the observed reference signals with the inverse of the causal spectral factor matrix we thus obtain a set of uncorrelated, white innovation signals which contain the same information.
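In the single-channel case the spectral factorisation reduces to scalar minimum-phase factorisation, and equations (31) and (32) can be checked directly; the first-order spectral factor below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100000
v = rng.standard_normal(N)       # white innovation signal of unit variance

# single-channel spectral factor F(z) = 1/(1 - 0.9 z^-1): generates a strongly
# autocorrelated reference signal from white noise, as in equation (31)
x = np.zeros(N)
x[0] = v[0]
for n in range(1, N):
    x[n] = v[n] + 0.9 * x[n - 1]

# its inverse F^-1(z) = 1 - 0.9 z^-1 is a stable causal FIR filter because
# F(z) is minimum phase; it recovers the innovations, as in equation (32)
v_hat = np.empty(N)
v_hat[0] = x[0]
v_hat[1:] = x[1:] - 0.9 * x[:-1]

def autocorr(s, m):
    # normalised autocorrelation estimate at lag m
    return np.mean(s[m:] * s[:-m]) / np.mean(s * s)

print(autocorr(x, 1), autocorr(v_hat, 1))   # x is correlated, v_hat is white
```

The recovered innovations equal the originals sample for sample, which is the scalar analogue of the statement that v(n) and x(n) contain the same information.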

We also separate the LxM matrix of plant responses, G(z), for which there are assumed to be at least as many sensors as actuators so that L ≥ M, into its all-pass and minimum phase components, so that G(z) = Gall(z)Gmin(z), (33) where the LxM matrix of all-pass components is assumed to have the property that GallT(z-1)Gall(z) = I, (34) so that GT(z-1)G(z) = GminT(z-1)Gmin(z), (35) where Gmin(z) is the MxM matrix which corresponds to the minimum phase component of the plant response and thus has a stable causal inverse. Assuming the matrix GT(z-1)G(z) satisfies the conditions under which a spectral factorisation exists, as described above, then the decomposition in equation (35) has a similar form to the factorisation in equation (30). If the cost function originally minimised by the controller includes a component equal to ρ times the sum of squared control signals, as well as the sum of squared errors, then the right-hand side of equation (35) becomes GT(z-1)G(z) + ρI, which can improve the conditioning of the spectral factorisation. The all-pass component can be calculated from G(z) and Gmin(z) as Gall(z) = G(z)Gmin-1(z), (36) so that GallT(z-1) = Gmin-T(z-1)GT(z-1), (37) where [Gmin-1(z-1)]T has been written as Gmin-T(z-1) to keep the notation compact.
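For a scalar plant the decomposition of equations (33) to (36) amounts to reflecting any zero of G(z) from outside the unit circle to its reciprocal position inside it; the two-tap plant below is an invented example:

```python
import numpy as np

# scalar illustration: the non-minimum-phase FIR plant g(z) = 1 + 2 z^-1
# has a zero at z = -2, outside the unit circle
g = np.array([1.0, 2.0])

# reflecting the zero to z = -1/2 gives gmin(z) = 2 + z^-1, which has the
# same magnitude response but a stable causal inverse
g_min = np.array([2.0, 1.0])

# check the magnitude responses match on the unit circle
w = np.linspace(0, np.pi, 512)
G = g[0] + g[1] * np.exp(-1j * w)
Gmin = g_min[0] + g_min[1] * np.exp(-1j * w)

# all-pass part gall(z) = g(z) / gmin(z): unit magnitude at every frequency
G_all = G / Gmin
print(np.max(np.abs(np.abs(G_all) - 1.0)))
```

In the matrix case of equation (33) the same role is played by an inner-outer (all-pass/minimum-phase) factorisation rather than simple root reflection, but the unit-magnitude property of equation (34) is checked in the same way.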

The condition for the optimal causal filter, equation (29), can now be written, using similar notation, as {GminT(z-1)[Gmin-T(z-1)GT(z-1)Sxd(z)F-T(z-1) + Gmin(z)Wopt(z)F(z)]FT(z-1)}+ = 0, (38) where GminT(z-1) and FT(z-1) correspond to matrices of entirely non-causal sequences and, since they are both minimum phase, the optimality condition reduces to [16] {GallT(z-1)Sxd(z)F-T(z-1) + Gmin(z)Wopt(z)F(z)}+ = 0, (39) where equation (37) has been used to simplify the term Gmin-T(z-1)GT(z-1) in equation (38).

Because Wopt(z), as well as Gmin(z) and F(z), corresponds to a matrix of entirely causal sequences, the causality brackets can be removed from the second term in equation (39), which can then be solved to give Wopt(z) = -Gmin-1(z){GallT(z-1)Sxd(z)F-T(z-1)}+F-1(z). (40) Equation (40) provides an explicit method of calculating the matrix of optimal causal filters which minimise the sum of squared errors in terms of the cross-spectral matrix between the reference and desired signals, the minimum phase and all-pass components of the plant response and the spectral factors of the spectral density matrix for the reference signals. If the plant has as many outputs as inputs, so that G(z) is a square matrix, then GallT(z-1) is equal to Gall-1(z) and equation (40) becomes equivalent to the corresponding result in [4].

Alternative Structure for an Adaptive Controller Instead of implementing the MxK matrix of control filters W(z) in Figure 1 directly, this matrix could also be implemented as W(z) = Gmin-1(z)C(z)F-1(z), (41) where Gmin-1(z) is the MxM inverse matrix of the minimum phase plant response, F-1(z) is the inverse of the spectral factor of the reference signals' spectral density matrix, and C(z) is an MxK matrix of transformed controllers. The optimal response of the transformed controller C(z) can be seen from equation (40) to take on the particularly simple form Copt(z) = -{GallT(z-1)Sxd(z)F-T(z-1)}+. (42)

The transformed controller matrix C(z) could now be made adaptive, and adjusted at the m-th iteration according to Newton's algorithm [11], Cm+1(z) = Cm(z) + α ΔCm(z), (43) where α is a convergence coefficient and ΔCm(z) is the difference between the optimum matrix of controller responses and the current controller at the m-th iteration. This is equal to ΔCm(z) = -{GallT(z-1)Sxe,m(z)F-T(z-1)}+, (44) where Sxe,m(z) = E[em(z)xT(z-1)] (45) and em(z) is the vector of errors at the m-th iteration. Equation (44) can also be written as ΔCm(z) = -{E[am(z)vT(z-1)]}+, (46) where am(z) = GallT(z-1)em(z), (47) which is a vector of M filtered error signals, and v(z) = F-1(z)x(z) (48) is the vector of innovation signals assumed to generate the reference signals, as in equation (32).

Taking the inverse z-transform of the spectral density matrix in equation (46) gives the required adjustment for the i-th element of the causal impulse response of C(z) at the n-th sample time to be ΔCi(n) = -E[a(n)vT(n-i)], (49) where a(n) and v(n) are the time-domain versions of the signals defined by equations (47) and (48). In moving from the z domain to the time domain we can describe control filters with a finite number of coefficients, although clearly a matrix of control filters with a limited number of coefficients cannot converge to the optimal solution in the z domain, given by equation (42), which assumes a matrix of control filters which are causal but of infinite duration.

If an instantaneous version of this matrix of cross-correlation functions is used to adapt the matrix of i-th coefficients of an FIR transformed controller matrix at every sample time, a new algorithm which is similar in form to the filtered error LMS algorithm is derived, which can be written as Ci(n+1) = Ci(n) - α a(n)vT(n-i). (50) The generation of the signals a(n) and v(n) is shown in the block diagram in Figure 3.

Comparing this block diagram to that shown in Figure 2, we notice that W(z) is now implemented according to equation (41), with the reference signals, x(n), being pre-processed by F-1(z) to give v(n), which drives the transformed controller matrix C(z), and the output of C(z) is then multiplied by Gmin-1(z) to generate the control signals, u(n), for the plant. The vector of innovation signals v(n) is also required in the adaptation equation (50), together with the vector of filtered error signals, which are generated in this case by passing e(n) through the transposed and time-reversed version of the all-pass component of the plant response, GallT(z-1).

In order to implement equation (50) with a causal filter, however, delays must be introduced into v(n) and a(n), as for the filtered error LMS algorithm described above, which are also shown in Figure 3. Alternatively the algorithm could be implemented in the discrete frequency domain, provided precautions were taken to avoid circular convolution and correlation effects [19]. An algorithm which is analogous to the filtered reference LMS could also be implemented, as in Figure 7, in which the reference signals are filtered by Gall(z) instead of filtering the error signals by GallT(z-1), which would avoid the need for additional delays since Gall(z) is causal.
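A single-channel sketch of the complete structure of Figure 3 is given below. The plant, primary path and first-order spectral factor are all invented for illustration, and the plant is chosen so that its all-pass part is a pure delay (as contemplated in claim 7), in which case z-J GallT(z-1) reduces to unity and equation (50) becomes a delayed LMS update on the innovation signals:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 40000
# reference signal generated from unit-variance white innovations through the
# spectral factor F(z) = 1/(1 - 0.9 z^-1)  (single-channel illustration)
v_true = rng.standard_normal(N)
x = np.zeros(N)
x[0] = v_true[0]
for n in range(1, N):
    x[n] = v_true[n] + 0.9 * x[n - 1]

# plant g(z) = z^-1 (2 + z^-1): all-pass part Gall(z) = z^-1 (so J = 1),
# minimum phase part Gmin(z) = 2 + z^-1; here z^-J Gall(z^-1) = 1, so a(n-J) = e(n)
d = np.zeros(N)                  # disturbance via a hypothetical primary path
d[1:] += 0.3 * x[:-1]
d[2:] += 0.2 * x[:-2]

I, alpha = 32, 0.01
c = np.zeros(I)                  # transformed controller C(z)
vbuf = np.zeros(I + 1)           # vbuf[k] = v(n - k)
u_prev = u_prev2 = 0.0           # plant input history u(n-1), u(n-2)
err = np.zeros(N)
for n in range(N):
    vbuf = np.roll(vbuf, 1)
    vbuf[0] = x[n] - 0.9 * (x[n - 1] if n else 0.0)   # v(n) = F^-1(z) x(n)
    b = c @ vbuf[:I]                                   # output of C(z)
    u = (b - u_prev) / 2.0                             # u = Gmin^-1(z) b
    y = 2.0 * u_prev + u_prev2                         # plant output, g = [0, 2, 1]
    u_prev2, u_prev = u_prev, u
    e = d[n] + y
    err[n] = e
    # equation (50) with the J-sample delay, a(n-J) = e(n):
    # c_i(n+1) = c_i(n) - alpha e(n) v(n - J - i)
    c -= alpha * e * vbuf[1:I + 1]
print(np.mean(err[:1000]**2), np.mean(err[-1000:]**2))
```

Because the whitened innovations drive an effective loop that is simply a one-sample delay, this single-channel sketch adapts at the full LMS rate regardless of the colouration of x(n), which is the behaviour the preconditioned architecture is designed to give in the multichannel case.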

The adaptive algorithm implemented with the block diagram shown in Figure 3 avoids many of the convergence problems of the filtered error LMS algorithm.

The reference signals are whitened and decorrelated by the inverse spectral factor F-1(z). Also the transfer function from the output of the transformed controller, b(n), to the filtered error signals used to update this controller, a(n), can be deduced by setting the disturbance to zero in Figure 3, to give a(z) = GallT(z-1)G(z)Gmin-1(z)b(z). (51) Using the definition and properties of Gall(z) and Gmin(z) in equations (33) and (34) we find that a(z) is equal to b(z), without any cross-coupling, and the adaptation loop consists only of the delay required to make GallT(z-1) causal in a practical system. The algorithm defined by equation (50) thus overcomes many of the causes of slow convergence in traditional filtered error and filtered reference LMS algorithms.

Simulation Example

A simulation is presented to illustrate the properties of the algorithm described by equation (50) for a control system with two reference signals, two secondary sources and two error sensors (K = L = M = 2). The physical arrangement is illustrated in Figure 4. Two independent Gaussian white noise signals, n1 and n2, are used to generate the disturbance signals via band pass filters, with a bandwidth between normalised frequencies of 0.1 and 0.4, and also to generate the reference signals available to the control algorithm, x1 and x2, via filters which give the

reference signals a pink noise spectrum (3 dB/octave slope), and a mixing matrix of real numbers M, which in this case was equal to

M = [0.75  0.25]
    [0.25  0.75]. (52)

In the arrangement for the normal LMS algorithm illustrated in Figure 4 the reference signals are fed to a matrix of FIR control filters, W11, W12, W21, W22, which drive the two secondary loudspeakers. In the simulations the sample rate is 2.5 kHz and each filter in W has 128 coefficients. The secondary loudspeakers are assumed to operate under free-field conditions, are spaced 0.5 m apart and are 1 m away from the two error microphones, which are symmetrically positioned 1 m apart. The disturbance signals at the two microphones are assumed to be generated by two primary loudspeakers symmetrically positioned 3.6 m apart in the plane of the error sensors. Because the secondary loudspeakers are further away from the error microphones than the primary loudspeakers are, perfect cancellation at the error sensors is not possible, and the exact least squares solution for the control filters gives a reduction in the sum of squared errors in this case of about 18 dB.
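The cross-correlation introduced by the mixing matrix can be illustrated with a short numerical sketch. It uses hypothetical white reference signals rather than the band-limited pink-noise signals of the simulation: two independent noises are mixed with the M of equation (52), their resulting correlation is measured, and the memoryless part of F^-1(z), namely M^-1, recovers the uncorrelated originals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.standard_normal((2, 10000))          # independent white noises n1, n2
M = np.array([[0.75, 0.25],
              [0.25, 0.75]])                 # mixing matrix of equation (52)
x = M @ n                                    # partially correlated references x1, x2

corr = np.corrcoef(x)[0, 1]                  # theoretical correlation is 0.6
n_hat = np.linalg.solve(M, x)                # undo the mixing with M^-1
print(corr, np.allclose(n_hat, n))
```

The measured correlation is close to the theoretical value of 0.6 for this M, which is exactly the kind of cross-correlation between references that slows the ordinary filtered error LMS algorithm.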

Because the secondary loudspeakers and error microphones are symmetrically arranged in free space, the continuous-time plant response can be written in this case as [20]

G(jω) = A [e^-jkl1/l1  e^-jkl2/l2]
          [e^-jkl2/l2  e^-jkl1/l1], (53)

where A is an arbitrary constant, l1 is the distance from the upper secondary loudspeaker to the upper error sensor (1.03 m in this simulation), l2 is the distance from the upper secondary loudspeaker to the lower error microphone (1.25 m in this simulation) and k is the acoustic wavenumber, which is equal to ω/c0, where c0 is the speed of sound.

If the signals are sampled at a rate of fs, the normalised plant response matrix can be written in the z domain as

G(z) = A [z^-N1/l1  z^-N2/l2]
          [z^-N2/l2  z^-N1/l1], (54)

where N1 is the integer value of l1 fs/c0, N2 is the integer value of l2 fs/c0 and l1/l2 < 1 by definition. For this plant matrix the all-pass and minimum phase decomposition can be derived by inspection to give

Gall(z) = z^-N1 I,   Gmin(z) = A [1/l1       z^-ΔN/l2]
                                 [z^-ΔN/l2   1/l1], (55)

where ΔN = N2 - N1, which is positive.

The all-pass component of the plant matrix in this case is just the identity matrix multiplied by a delay of N1 samples. It is thus not necessary in this simulation to use the transpose of the all-pass component of the plant response to filter the error signals, as shown in Figure 4, and the delay required to obtain a causal overall impulse response, J, can be set equal to N1. The minimum phase component of the plant matrix has a stable causal inverse, which is equal in this case to

Gmin^-1(z) = (l1 l2 / (A (l2^2 - l1^2 z^-2ΔN))) [l2          -l1 z^-ΔN]
                                                [-l1 z^-ΔN   l2]. (56)

It is also possible in this simple example to analytically calculate the inverse of the spectral factors of the reference signals' spectral density matrix, F^-1(z), which will
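The all-pass and minimum phase decomposition described above can be checked numerically by evaluating the matrices on the unit circle and verifying that the product of the plant matrix with the inverse of its minimum phase part reduces to a pure delay of N1 samples times the identity. In the sketch below the speed of sound is taken as c0 = 343 m/s, an assumed value not stated in the text.

```python
import numpy as np

A, l1, l2 = 1.0, 1.03, 1.25       # arbitrary gain and the distances of the simulation
fs, c0 = 2500.0, 343.0            # sample rate; c0 = 343 m/s is an assumed value
N1, N2 = int(l1 * fs / c0), int(l2 * fs / c0)
dN = N2 - N1                      # positive, since l1 < l2

ok = True
for w in np.linspace(0.1, 3.0, 64):
    zi = np.exp(1j * w)
    # Plant matrix of delayed impulses scaled by the propagation distances
    G = A * np.array([[zi**-N1 / l1, zi**-N2 / l2],
                      [zi**-N2 / l2, zi**-N1 / l1]])
    # Minimum phase part obtained by extracting the common delay z^-N1
    Gmin = A * np.array([[1 / l1, zi**-dN / l2],
                         [zi**-dN / l2, 1 / l1]])
    # Its explicit stable inverse (stable because l1/l2 < 1)
    Gmin_inv = (l1 * l2 / (A * (l2**2 - l1**2 * zi**(-2 * dN)))) * \
        np.array([[l2, -l1 * zi**-dN],
                  [-l1 * zi**-dN, l2]])
    ok &= np.allclose(Gmin_inv, np.linalg.inv(Gmin))
    # The remaining all-pass part is a pure delay of N1 samples
    ok &= np.allclose(G @ Gmin_inv, zi**-N1 * np.eye(2))
print(N1, N2, dN, ok)
```

With the assumed c0 this gives N1 = 7 and N2 = 9, so the adaptation loop delay in the simulation would be 7 samples.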

convert the measured reference signals back into the noise signals n1 and n2, using equation (32). In this case F^-1(z) consists of the inverse of the mixing matrix M in equation (52), and a pair of minimum phase filters which will whiten the resulting pink noise signals. A method of performing this spectral factorisation in the general case, using only the discrete frequency version of the power spectral matrix, is presented in [21], and the use of similar techniques to perform the all-pass and minimum phase decomposition of a matrix of discrete-frequency plant responses is currently being explored.
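The whitening role of the minimum phase filters within F^-1(z) can be sketched in a simplified form. Below, a first-order minimum phase colouring filter stands in for the pink-noise shaping of the simulation (an illustrative substitution, not the 3 dB/octave filter of the text), and its stable causal inverse recovers the white innovation sequence exactly.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
w = rng.standard_normal(50000)               # white innovation sequence
# Colour it with a minimum phase filter 1/(1 - 0.9 z^-1); a first-order
# stand-in for the pink-noise shaping filters used in the simulation
x = lfilter([1.0], [1.0, -0.9], w)
# The whitening filter is the stable causal inverse (1 - 0.9 z^-1)
y = lfilter([1.0, -0.9], [1.0], x)
print(np.allclose(y, w))
```

Because the colouring filter is minimum phase, its inverse is both stable and causal, which is what allows F^-1(z) to be applied to the measured reference signals in real time.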

The upper curve in Figure 5 shows the averaged time history of the sum of squared errors when the filtered error LMS algorithm is used to adapt the coefficients of the control filters in the arrangement shown in Figure 4. In all the simulations shown in Figure 5 the convergence coefficient was set to half the smallest value which resulted in instability. The sum of squared errors has been normalised by its value before control and is plotted as a reduction in Figure 5.

The multiple time constants associated with the LMS algorithm are clearly seen, and the reduction has only reached about 17 dB after 10,000 samples. If the inverse spectral factors are used to reconstruct the original noise signals, n1 and n2, and these are used as the reference signals in the normal filtered error LMS algorithm, the filter converges in about 8,000 samples, as also shown in Figure 5 as LMS with F^-1(z). Keeping the original reference signals but using the inverse of the minimum phase part of the plant response, so as to reduce the overall plant response to its all-pass component, results in a similar convergence time to that of the LMS with F^-1(z), and is not shown in Figure 5. The lowest curve in Figure 5 corresponds to the case in which both the inverse spectral factors and the inverse minimum phase plant response are used to implement the algorithm given by equation (50), which is denoted LMS with F^-1(z) and Gmin^-1(z). This results in the fastest convergence rate, for which an 18 dB reduction is achieved in about 3,000 samples.

SVD Controller for Tonal Disturbances

When the disturbance signals are tonal, only a single reference signal at the same frequency is required to implement an adaptive feedforward controller. The vector of complex error signals can then be expressed as

e(e^jωT) = d(e^jωT) + G(e^jωT) u(e^jωT), (57)

where u(e^jωT) is the vector of complex control signals. Under these conditions the problems associated with a number of partially correlated reference signals, discussed above, do not arise, but the cross coupling within the plant matrix is still a limiting factor in the convergence rate of adaptive gradient descent control algorithms [12]. A number of authors have recently suggested a modified form of the adaptive controller for tonal disturbances [22, 23, 24], which uses the Singular Value Decomposition (SVD) of the L×M plant matrix at the excitation frequency.

This can be written as

G(e^jωT) = U Σ V^H, (58)

where U is the L×M matrix of selected eigenvectors of G(e^jωT) G^H(e^jωT), Σ is the M×M diagonal matrix of singular values and V is the M×M matrix of eigenvectors of G^H(e^jωT) G(e^jωT).

If the transformed vector of error signals at the m-th iteration is defined as

y_m(e^jωT) = U^H e_m(e^jωT), (59)

the transformed vector of disturbance signals is similarly defined as

p(e^jωT) = U^H d(e^jωT), (60)

and the transformed vector of plant inputs is expressed as

t_m(e^jωT) = V^H u_m(e^jωT), (61)

then the transformed error signals at the m-th iteration can be written as

y_m(e^jωT) = p(e^jωT) + Σ t_m(e^jωT). (62)

Because Σ is diagonal, each of the equations described by this expression is uncoupled. The optimum set of transformed control signals, which sets each of the transformed error signals to zero, is given by

t_opt(e^jωT) = -Σ^-1 p(e^jωT), (63)

which suggests that if the transformed control signals are implemented as

t_m(e^jωT) = -Σ^-1 c_m(e^jωT), (64)

then

y_m(e^jωT) = p(e^jωT) - c_m(e^jωT), (65)

and c_m could be simply adapted using the gradient descent method, given by the expression

c_m+1(e^jωT) = c_m(e^jωT) + α y_m(e^jωT). (66)

The block diagram for the adaptation of c_m(e^jωT) using this expression is shown in Figure 6. In practice individual convergence coefficients could be used for each element of c_m(e^jωT) to ensure robust stability in the face of plant uncertainty, which will affect the smaller singular values in Σ more than the larger ones.

Notice, however, that the form of Figure 6 is very similar to that of Figure 3, except that the inverse of the spectral factor F^-1(z) is no longer required, as noted above. In this case, however, the role of the minimum phase component of the plant at the normalised frequency ωT is played by the M×M matrix Σ V^H, and that of the all-pass component by the L×M matrix U. The SVD controller for tonal disturbances can thus be viewed as a special case of the general controller shown in Figure 3, where the plant response only has to be considered at a single frequency, so that the SVD can be used to perform the factorisation of the plant response into minimum phase and all-pass components. The SVD controller of Figure 6 does not, therefore, deal with stochastic disturbances.

Figure 8 shows a modified system which is similar to that of Figure 3, except that the FIR filter C(z) does not operate on filtered reference signals but on the unfiltered reference signals x(n).

References

[1] S. J. Elliott and P. A. Nelson, "Active noise control", IEEE Signal Processing Magazine, 12-35, 1993.

[2] B. Widrow and E. Walach, Adaptive Inverse Control. Upper Saddle River, NJ : Prentice-Hall, 1996.

[3] J. Minkoff, "The operation of multichannel feedforward adaptive systems", IEEE Transactions on Signal Processing, 45(12), 2993-3005, 1997.

[4] M. Morari and E. Zafiriou, Robust Process Control. Englewood Cliffs, NJ : Prentice-Hall, 1989.

[5] S. J. Elliott and T. J. Sutton,"Performance of feedforward and feedback systems for active control", IEEE Transactions on Speech and Audio Processing, SAP-4, 214-223, 1996.

[6] L. J. Eriksson, "Recursive algorithms for active noise control", Proc. Int. Symposium on Active Control of Sound & Vibration, Tokyo, Japan, 137-144, 1991.

[7] R. E. Skelton, Dynamic Systems Control, John Wiley & Sons, 1988.

[8] S. R. Popovich,"A simplified parameter update for identification of multiple input multiple output systems", Proceedings of Inter-Noise 94, 1129-1232, 1994.

[9] E. A. Wan,"Adjoint LMS : an efficient alternative to the filtered-x LMS and multiple error LMS algorithms", Proc. ICASSP96, 1996.

[10] S. J. Elliott,"Filtered reference and filtered error LMS algorithms for adaptive feedforward control", Mechanical Systems & Signal Processing, 12 (6), 769-781, 1998.

[11] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ : Prentice-Hall, 1985.

[12] S. J. Elliott, C. C. Boucher and P. A. Nelson,"The behaviour of a multichannel active control system", IEEE Trans. Signal Processing, Vol. 40, 1042-1052, 1992.

[13] T. Kailath, Lectures on Wiener and Kalman Filtering, Springer Verlag, 1982.

[14] M. J. Grimble and M. E. Johnson, Optimal Control and Stochastic Estimation, Volume 2. Chichester: John Wiley and Sons, 1988.

[15] D. C. Youla,"On the factorisation of rational matrices", IRE Transactions on Information Theory, Vol. 7, 172-189, 1961.

[16] M. C. Davis,"Factoring the spectral matrix", IEEE Transactions on Automatic Control, AC-8, 296-305, 1963.

[17] G. T. Wilson, "The factorization of matricial spectral densities", SIAM J. Appl. Math., Vol. 23, 420-426, 1972.

[18] J. J. Bongiorno, "Minimum sensitivity design of linear multivariable feedback control systems by matrix spectral factorisation", IEEE Trans. on Automatic Control, Vol. 14, 665-673, 1969.

[19] J. J. Shynk,"Frequency domain and multirate adaptive filtering", IEEE Signal Processing Magazine, 14-37, January 1992.

[20] S. J. Elliott and C. C. Boucher,"Interaction between multiple feedforward active control systems", IEEE Trans. on Speech and Audio Processing, 2, 521-530, 1994.

[21] J. G. Cook and S. J. Elliott,"Connection between the multichannel prediction error filter and spectral factorisation", Electronics Letters, 35, 1218-1220, 1999.

[22] S. R. Popovich,"An efficient adaptation structure for high speed tracking in tonal cancellation systems", Proceedings of Inter-Noise 96, 2825-2828, 1996.

[23] M. Maurer, "An orthogonal algorithm for the active control of sound and the effects of realistic changes in the plant", MSc Thesis, University of Southampton, UK, 1996.

[24] R. H. Cabell, "A principal component algorithm for feedforward active noise and vibration control", PhD Thesis, Virginia Tech, Blacksburg, VA, 1998.