The processing of the results is based on representing the recorded signal as a sum of several (here, two) deterministic signals with unknown parameters [13].
Let us consider the following model of a measured time signal:
$${\mathbf{x}} = {\mathbf{\mu }}({\mathbf{\theta }}) + {\mathbf{\xi }},$$
(1)
where
\({\mathbf{x}}\),
\({\mathbf{\mu }}({\mathbf{\theta }})\), and
\({\mathbf{\xi }}\) are column vectors of dimension
J × 1,
\({\mathbf{\mu }}({\mathbf{\theta }})\) is the useful deterministic signal,
\({\mathbf{\theta }}\) is the vector of unknown parameters to be estimated,
\({\mathbf{\xi }}\) is Gaussian white noise with zero mean, and
J is the number of time samples. The deterministic signal can be modeled as the sum of
K complex sinusoids:
$${{\mu }_{j}}({{{\mathbf{\theta }}}_{0}},{{{\mathbf{\theta }}}_{1}}) = \sum\limits_{k = 1}^K {{{\theta }_{{0,k}}}{{e}^{{2\pi i{{\theta }_{{1,k}}}{{t}_{j}}}}}} ,$$
(2)
where
\({{{\mathbf{\theta }}}_{0}},{{{\mathbf{\theta }}}_{1}}\) are column vectors of dimension
K × 1,
\({{{\mathbf{\theta }}}_{0}}\) contains the unknown complex amplitudes,
\({{{\mathbf{\theta }}}_{1}}\) contains the unknown frequencies, and
\({{t}_{j}}\) is the time corresponding to sample number
j. Expression (2) can be rewritten in compact matrix form:
$${\mathbf{\mu }}({\mathbf{\theta }}) = {\mathbf{A}}({{{\mathbf{\theta }}}_{1}}){{{\mathbf{\theta }}}_{0}},$$
(3)
where
\({\mathbf{A}}\) is a matrix of dimension
\(J \times K\) formed from
\(K\) \(J \times 1\) column vectors
\({{{\mathbf{a}}}_{k}}\):
\({\mathbf{A}}({{{\mathbf{\theta }}}_{1}}) = \) \(\left[ {{{{\mathbf{a}}}_{1}}({{\theta }_{{1,1}}}),...,{{{\mathbf{a}}}_{K}}({{\theta }_{{1,K}}})} \right]\). Here
\({{{\mathbf{a}}}_{k}} = {{\left( {{{e}^{{2\pi i{{\theta }_{{1,k}}}{{t}_{1}}}}},...,{{e}^{{2\pi i{{\theta }_{{1,k}}}{{t}_{J}}}}}} \right)}^{T}}\). Further, to find the unknown parameters in accordance with the LSM, we minimize the function
$$S({\mathbf{\mu }},{\mathbf{x}}) = {{({\mathbf{\mu }} - {\mathbf{x}})}^{H}}({\mathbf{\mu }} - {\mathbf{x}})$$
(4)
over the unknown parameters. To find the minimum over the unknown complex amplitudes (over the vector
\({{{\mathbf{\theta }}}_{0}}\)), we write the extremum condition
\(\frac{{\partial S}}{{\partial {{{\mathbf{\theta }}}_{0}}}} = 0\). Differentiating with respect to the vector
\({{{\mathbf{\theta }}}_{0}}\) yields the equation
\(({{{\mathbf{A}}}^{H}}{\mathbf{A}}){{{\mathbf{\theta }}}_{0}} - {{{\mathbf{A}}}^{H}}{\mathbf{x}} = 0\). Its solution is the estimate of the unknown complex amplitudes:
$${{{\mathbf{\hat {\theta }}}}_{0}} = {{({{{\mathbf{A}}}^{H}}{\mathbf{A}})}^{{ - 1}}}{{{\mathbf{A}}}^{H}}{\mathbf{x}}$$
(5)
as a function of the unknown frequencies
\({{{\mathbf{\theta }}}_{1}}\). Since this is the unique stationary point and \(S\) is unbounded above, it can only correspond to a minimum. Substituting (5) into (4) gives \(S = {{{\mathbf{x}}}^{H}}{\mathbf{x}} - {{{\mathbf{x}}}^{H}}{\mathbf{A}}{{({{{\mathbf{A}}}^{H}}{\mathbf{A}})}^{{ - 1}}}{{{\mathbf{A}}}^{H}}{\mathbf{x}}\); hence, minimizing (4) is equivalent to maximizing the objective function, which depends only on
\({{{\mathbf{\theta }}}_{1}}\):
$$F({{{\mathbf{\theta }}}_{1}}) = {{{\mathbf{x}}}^{H}}{\mathbf{A}}({{{\mathbf{\theta }}}_{1}}){{({{{\mathbf{A}}}^{H}}{\mathbf{A}})}^{{ - 1}}}{{{\mathbf{A}}}^{H}}{\mathbf{x}}.$$
(6)
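As a concrete sketch, the model (1)–(3), the amplitude estimate (5), and the concentrated objective \({{{\mathbf{x}}}^{H}}{\mathbf{A}}{{({{{\mathbf{A}}}^{H}}{\mathbf{A}})}^{{ - 1}}}{{{\mathbf{A}}}^{H}}{\mathbf{x}}\) can be evaluated with NumPy; the sampling times, frequencies, amplitudes, and noise level below are illustrative assumptions, not values from the experiment:

```python
import numpy as np

def steering_matrix(freqs, t):
    """A(theta_1): column k is a_k = exp(2*pi*i*theta_{1,k}*t_j); shape J x K (Eq. (3))."""
    return np.exp(2j * np.pi * np.outer(np.asarray(t, float), np.asarray(freqs, float)))

def amplitude_estimate(A, x):
    """LSM amplitude estimate (5): theta_0_hat = (A^H A)^{-1} A^H x."""
    return np.linalg.solve(A.conj().T @ A, A.conj().T @ x)

def objective(freqs, t, x):
    """Concentrated objective over theta_1: x^H A (A^H A)^{-1} A^H x (real-valued)."""
    A = steering_matrix(freqs, t)
    return np.real(x.conj() @ A @ amplitude_estimate(A, x))

# Synthetic example with K = 2 (all numbers are assumed for illustration)
J = 64
t = np.arange(J) / J                     # sampling times t_j
theta1 = np.array([5.0, 12.0])           # true frequencies
theta0 = np.array([1.0 + 0.5j, 0.8j])    # true complex amplitudes
rng = np.random.default_rng(0)
noise = (rng.standard_normal(J) + 1j * rng.standard_normal(J)) / np.sqrt(2)
x = steering_matrix(theta1, t) @ theta0 + 0.05 * noise   # measured signal, Eq. (1)

theta0_hat = amplitude_estimate(steering_matrix(theta1, t), x)
```

At low noise the estimate `theta0_hat` is close to the true amplitudes, and the objective evaluated at the true frequencies captures almost all of the signal energy \({{{\mathbf{x}}}^{H}}{\mathbf{x}}\).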
For the case
K = 2, function (6) depends on two unknown frequencies. Their maximum cannot be found from the necessary extremum condition, since the resulting equation has a complicated nonlinear form; moreover, it has many solutions corresponding to the many local maxima of (6). The only reliable way to find the global maximum of (6) is an exhaustive search. Once the frequencies of the sinusoids
\({{{\mathbf{\theta }}}_{1}}\) have been determined, their complex amplitudes can also be estimated by substituting
\({{{\mathbf{\theta }}}_{1}}\) into
formula (5).
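A minimal sketch of this exhaustive search for K = 2, assuming the true frequencies lie on the search grid (the grid, signal parameters, and function names are illustrative, not from the original experiment):

```python
import numpy as np

def grid_search_two(t, x, grid):
    """Exhaustively maximize x^H A (A^H A)^{-1} A^H x over all grid pairs f1 < f2."""
    best_val, best_pair = -np.inf, None
    for i, f1 in enumerate(grid):
        for f2 in grid[i + 1:]:
            A = np.exp(2j * np.pi * np.outer(t, [f1, f2]))
            th0 = np.linalg.solve(A.conj().T @ A, A.conj().T @ x)  # estimate (5)
            val = np.real(x.conj() @ A @ th0)                      # objective (6)
            if val > best_val:
                best_val, best_pair = val, (f1, f2)
    return best_pair

# Noiseless two-sinusoid test signal (assumed parameters)
J = 64
t = np.arange(J) / J
x = 1.0 * np.exp(2j * np.pi * 5.0 * t) + 0.7 * np.exp(2j * np.pi * 12.0 * t)

f1_hat, f2_hat = grid_search_two(t, x, np.arange(1.0, 20.0))
# With the frequencies found, the amplitudes follow from formula (5):
A_hat = np.exp(2j * np.pi * np.outer(t, [f1_hat, f2_hat]))
theta0_hat = np.linalg.solve(A_hat.conj().T @ A_hat, A_hat.conj().T @ x)
```

The double loop makes the cost quadratic in the grid size; in practice a coarse grid pass is often followed by a local refinement around the best pair, but the plain search above already finds the global maximum when the grid covers the true frequencies.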