
A De-noising 2-D DOA Estimation Method for Random EMVS Arrays

  • Open Access
  • 10-05-2025

Abstract

The article explores the critical issue of Direction-of-Arrival (DOA) estimation in the presence of non-uniform noise, a common challenge in wireless communication, radar, and sonar applications. It introduces a sophisticated de-noising method specifically designed for random Electromagnetic Vector Sensor (EMVS) arrays, which capture both electric and magnetic field components of electromagnetic waves. The proposed method addresses the limitations of traditional techniques such as MUSIC, Maximum Likelihood, and ESPRIT, which often struggle with high computational costs and sensitivity to noise. By constructing a covariance tensor model and employing tensor completion techniques, the article presents a robust framework for suppressing non-uniform noise without compromising the array aperture. The PARAFAC algorithm is utilized to decompose the noise-free covariance tensor, enabling precise joint estimation of the target's Direction of Departure (DOD) and DOA. Simulation results demonstrate the superior estimation accuracy and robustness of the proposed method, making it a valuable contribution to the field of signal processing and tensor analysis. The article also discusses the practical implications of this technology for advanced communication systems, multitarget tracking, and polarization-sensitive applications, paving the way for future research in dynamic noise adaptation and near-field localization.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

The estimation of Direction-of-Arrival (DOA) is a fundamental topic in fields such as wireless communication, radar, and sonar. Over the past few decades, DOA estimation has attracted considerable attention from researchers, leading to the development of various techniques [18]. These include methods such as Multiple Signal Classification (MUSIC) [13, 22], Maximum Likelihood (ML) method [39], matrix pencil method [21, 37], Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) [23], cumulant-based estimation methods [27, 35], and tensor-based algorithms [32].
As the need for precise signal processing increases, traditional scalar sensors are becoming insufficient for many practical applications. In recent years, Electromagnetic Vector Sensors (EMVSs) have attracted increasing attention due to their ability to capture both the electric and magnetic field components of the electromagnetic wave. An EMVS can sense the three-dimensional characteristics of an incoming signal, producing six-dimensional complex vector data that include spatial, temporal, and polarization information. Its advantages, such as super-resolution, strong anti-interference capability, reliable repeatability, stability, and polarization multiple access, make it a promising solution for secure wireless communications.
While existing algorithms based on EMVS arrays have demonstrated success in their respective domains, each has its limitations. For instance, the MUSIC algorithm is highly regarded for its super-resolution capabilities [20, 24], but its reliance on complex eigenvalue decomposition and exhaustive spectral peak searching results in high computational cost [14, 26]. The ML method is favored for its flexibility and robustness, particularly in low Signal-to-Noise Ratio (SNR) scenarios; however, it requires optimization over a multi-dimensional objective function, resulting in significant computational overhead [26, 33]. The matrix pencil method enhances computational efficiency, but its accuracy depends on the careful construction of data matrices, often necessitating long data sequences for stable estimation. ESPRIT provides a closed-form solution for DOA estimation but does so at the cost of reduced array aperture, resulting in performance that is often inferior to spectral search methods. Tensor-based methods can achieve superior estimation accuracy by leveraging the multi-dimensional structure of array measurements [25]. The inherent multi-dimensionality of tensors, often termed "tensor gain", enhances the noise robustness of tensor-based algorithms. Generally, there are two types of tensor decomposition: Tucker decomposition and Parallel Factor (PARAFAC) decomposition [16]. Tucker decomposition, similar to Singular Value Decomposition (SVD), factors a tensor into a core tensor and unitary factor matrices, while PARAFAC is a special case of Tucker decomposition in which the core tensor is diagonal, so the tensor is expressed as a sum of rank-1 terms. Typically, Tucker decomposition can be obtained directly via Higher-Order Singular Value Decomposition (HOSVD) [1], while PARAFAC is computed through Alternating Least Squares (ALS) [17]. Although ALS has higher computational complexity, it generally provides better estimation performance than Tucker decomposition.
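To make the ALS procedure concrete, the following minimal NumPy sketch (not the paper's implementation; the function names and the C-order unfolding convention are our own) fits a rank-R PARAFAC model to a third-order tensor by cycling linear least-squares updates over the factor matrices:

```python
import numpy as np

def unfold(T, mode):
    # mode-n unfolding: mode-n fibers become rows (C-order column indexing)
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # column-wise Kronecker (Khatri-Rao) product
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def cp_als(T, rank, n_iter=500, seed=0):
    # Alternating Least Squares: update each factor with the other two fixed
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for m in range(3):
            others = [factors[k] for k in range(3) if k != m]
            kr = khatri_rao(others[0], others[1])  # ordering matches unfold()
            factors[m] = np.linalg.lstsq(kr, unfold(T, m).T, rcond=None)[0].T
    return factors

def cp_reconstruct(factors):
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

On a noiseless exact-rank tensor this iteration typically drives the fitting error essentially to machine precision; a Tucker/HOSVD fit would instead truncate SVDs of the three unfoldings.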
Most existing DOA estimation algorithms assume the presence of ideal white Gaussian noise, under which these methods perform well. However, in real-world applications, sensor arrays are often subject to complex environments and varied mission requirements, where noise is rarely ideal. Nonuniform noise, a common type of real-world interference, alters the structure of the covariance matrix, making it deviate from the ideal identity matrix. This deviation can significantly degrade the performance of conventional matrix and tensor decomposition algorithms, and in extreme cases, lead to complete failure. To address the challenges posed by nonuniform noise, various noise suppression techniques have been proposed. These include spatial cross-covariance methods [6], temporal cross-covariance methods [34], covariance differencing methods [2, 3], cumulant-based techniques [35], and matrix completion methods [5]. Spatial cross-covariance methods, for instance, mitigate noise by partitioning the array into subarrays, making the cross-covariance of the subarray outputs independent, although this reduces the virtual aperture and can degrade parameter estimation accuracy at high SNR. Temporal cross-covariance methods partition the array’s matched filter output in the time domain, assuming that noise between different pulses is uncorrelated. While this approach avoids aperture loss, it imposes strict requirements on the noise and source matrix. High-order cumulant methods eliminate noise by utilizing the higher-order cumulants (e.g., fourth-order) of nonuniform noise, but these methods are highly sensitive to the non-Gaussian distribution of the target’s source matrix and are computationally demanding. Covariance differencing techniques suppress noise by leveraging the Toeplitz structure of stationary nonuniform noise covariance matrices, but these methods may degrade signal distinguishability and often require additional operations to resolve ambiguity in angle estimation.
Matrix completion methods [5, 29], which exploit the sparsity of the noise covariance matrix, offer an effective framework for suppressing nonuniform noise. These methods do not require specific noise characteristics and do not cause aperture loss. By removing the noise-corrupted elements from the signal covariance matrix, the recovery of the noise-free covariance matrix is posed as a matrix completion problem, after which the parameters are estimated using algorithms such as ESPRIT. While convex optimization toolboxes like CVX (a MATLAB-based modeling system for disciplined convex programming [7]) can solve the matrix completion problem, they are computationally inefficient because interior-point optimization methods rely on iterative procedures with high computational complexity. Alternatively, the Singular Value Thresholding (SVT) algorithm offers faster computation but suffers from sensitivity to parameter settings and limited robustness. Moreover, existing methods often fail to fully exploit the multi-dimensional structure of the data, leaving room for further improvement in estimation accuracy.
Notations: Matrices are denoted by boldface capital letters, such as \(\varvec{Y}\), while vectors are represented by bold lowercase letters, such as \(\varvec{y}\). For tensors, we adopt boldface Euler script letters \({\mathcal {Y}}\). Scalar values are expressed using lowercase letters y. The identity matrix and all-ones matrix of size \(M\times M\) are designated as \({{{\varvec{I}}}_{M}}\) and \({{{\varvec{1}}}_{M\times M}}\), respectively. Regarding matrix operations, the transpose, conjugate, Hermitian transpose, inverse and pseudo-inverse are marked with \((\varvec{Y})^{T}\), \((\varvec{Y})^{*}\), \((\varvec{Y})^{H}\), \((\varvec{Y})^{-1}\), and \((\varvec{Y})^{\dagger }\), respectively. In terms of matrix products, the Kronecker product, Khatri-Rao product, Hadamard product, vector cross-product and outer product are represented by \(\otimes \), \(\odot \), \(\oplus \), \(\circledast \) and \(\circ \), respectively. Additional transformations include the formation of a diagonal matrix from the m-th row of \(\varvec{Y}\), denoted by \({{D}_{m}}(\varvec{Y})\), and vectorization, which stacks a matrix into a vector, denoted by \(\text {vec}(\varvec{Y})\). Besides, \(E\{\cdot \}\) denotes the mathematical expectation, and \(\left| \cdot \right| \) denotes the absolute value.
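The matrix products listed above can be checked numerically; a small NumPy sketch (illustrative only, with arbitrary toy matrices) builds the Khatri-Rao product as a column-wise Kronecker product:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)        # 2 x 3
B = np.arange(6.0, 12.0).reshape(2, 3)  # 2 x 3

kron = np.kron(A, B)                    # Kronecker product: 4 x 9
khatri_rao = np.column_stack(           # Khatri-Rao: column-wise Kronecker, 4 x 3
    [np.kron(A[:, r], B[:, r]) for r in range(A.shape[1])])
hadamard = A * B                        # Hadamard product: element-wise, 2 x 3
outer = np.outer(A[:, 0], B[:, 0])      # outer product of two vectors, 2 x 2
```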

2 Problem Formulation

2.1 Signal Model

Assume that in a space filled with a homogeneous isotropic medium, there exist far-field narrowband planar transverse electromagnetic (TEM) waves, which are received by an array consisting of M EMVS units. The geometry of the array is random, as shown in Fig. 1. Denote the position of the m-th EMVS unit as \({\varvec{\gamma }_{m}}={{\left[ {{x}_{m}},{{y}_{m}},{{z}_{m}} \right] }^{T}}\), with the first element, fixed at position \({\varvec{\gamma }_{1}}={{\left[ 0,0,0 \right] }^{T}}\), serving as the reference. According to the classical EMVS array model [19], the signal received at time t can be expressed as:
$$\begin{aligned} \begin{aligned} \varvec{z}(t)&\triangleq \left[ \varvec{A} \odot \varvec{B} \right] \varvec{s}(t) + \varvec{n}(t) \\&= \varvec{C} \varvec{s}(t) + \varvec{n}(t) \end{aligned}, \end{aligned}$$
(1)
where \(\varvec{C}=\varvec{A}\odot \varvec{B}\); \(\varvec{A}=[{{\varvec{a}}_{1}},{{\varvec{a}}_{2}},\cdots ,{{\varvec{a}}_{K}}]\in {{\mathbb {C}}^{M\times K}}\) is the spatial response matrix, whose k-th column vector is given as: \({\varvec{a}_{k}}\triangleq {{[1,{{e}^{-j2\pi {{\tau }_{2,k}}}},\cdots ,{{e}^{-j2\pi {{\tau }_{M,k}}}}]}^{T}}\), with \({{\tau }_{m,k}}\triangleq \varvec{\gamma } _{m}^{T}{\varvec{p}_{k}}/\lambda \), where \(\lambda \) represents the carrier wavelength. \(\varvec{s}(t)\in {{\mathbb {C}}^{K\times 1}}\) represents the vector of the source signals at time t, and \(\varvec{n}(t)\in {{\mathbb {C}}^{6M\times 1}}\) is the noise received by the array. Collecting L snapshots, with \(\varvec{S}={{[\varvec{s}(1),\varvec{s}(2),\cdots ,\varvec{s}(L)]}^{T}}\) and \(\varvec{N}=[\varvec{n}(1),\varvec{n}(2),\cdots ,\varvec{n}(L)]\), Eq. (1) can be rewritten as:
$$\begin{aligned} \begin{aligned} \varvec{Z}&\triangleq [\varvec{A}\odot \varvec{B}]{\varvec{S}^{T}}+\varvec{N} \\&=\varvec{C}{\varvec{S}^{T}}+\varvec{N} \\ \end{aligned}, \end{aligned}$$
(2)
where \(\varvec{B}=[{{\varvec{b}}_{1}},{{\varvec{b}}_{2}},\cdots ,{{\varvec{b}}_{K}}]\in {{\mathbb {C}}^{6\times K}}\) denotes the polarization steering matrix, \({\varvec{b}_{k}}\) is defined as follow:
$$\begin{aligned} {\varvec{b}_{k}} \triangleq \left[ \begin{matrix} \cos ({{\varphi }_{k}})\cos ({{\vartheta }_{k}})\sin ({{\zeta }_{k}}){{e}^{j\eta k}} - \sin ({{\varphi }_{k}})\cos ({{\zeta }_{k}}) \\ \sin ({{\varphi }_{k}})\cos ({{\vartheta }_{k}})\sin ({{\zeta }_{k}}){{e}^{j\eta k}} + \cos ({{\varphi }_{k}})\cos ({{\zeta }_{k}}) \\ -\sin ({{\vartheta }_{k}})\sin ({{\zeta }_{k}}){{e}^{j\eta k}} \\ -\sin ({{\varphi }_{k}})\sin ({{\zeta }_{k}}){{e}^{j\eta k}} - \cos ({{\varphi }_{k}})\cos ({{\vartheta }_{k}})\cos ({{\zeta }_{k}}) \\ \cos ({{\varphi }_{k}})\sin ({{\zeta }_{k}}){{e}^{j\eta k}} - \sin ({{\varphi }_{k}})\cos ({{\vartheta }_{k}})\cos ({{\zeta }_{k}}) \\ \sin ({{\vartheta }_{k}})\cos ({{\zeta }_{k}}) \\ \end{matrix} \right] , \end{aligned}$$
(3)
where the first three elements of \( {\varvec{b}}_k \) form the vector \( {\varvec{e}}_k \), representing the response of the electric field, while the last three elements form the vector \( {\varvec{m}}_k \), representing the response of the magnetic field. Importantly, their normalized vector cross product (VCP) yields the Poynting vector \({\varvec{p}}_k\) corresponding to the k-th target.
Fig. 1
Arbitrarily placed EMVS array
Full size image
The Poynting vector represents the instantaneous direction of energy flow of electromagnetic waves, characterizing both the direction and the magnitude of energy propagation in the far field. Its physical significance lies in the fact that it is directly related to the radiation pattern and energy distribution of the target signal [28]. Specifically, for a TEM wave, the Poynting vector points in the direction of wave propagation, and its magnitude is proportional to the square of the field strength. This fundamental relationship enables precise estimation of the target's DOA through the analysis of the cross-product structure:
$$\begin{aligned} \frac{{{\varvec{e}}}_k}{{\left\| {{\varvec{e}}_k} \right\| }_F}\circledast \frac{{{\varvec{m}}_k^ * }}{{\left\| {{\varvec{m}}}_k \right\| }_F} = {\varvec{p}}_k. \end{aligned}$$
(4)
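To make Eq. (4) concrete, the sketch below builds \(\varvec{e}_k\) and \(\varvec{m}_k\) from Eq. (3) using the equivalent spherical-basis form \(\varvec{e}_k=\sin \zeta _k e^{j\eta _k}\hat{\varvec{\vartheta }}+\cos \zeta _k\hat{\varvec{\varphi }}\), \(\varvec{m}_k=\sin \zeta _k e^{j\eta _k}\hat{\varvec{\varphi }}-\cos \zeta _k\hat{\varvec{\vartheta }}\), then recovers the Poynting vector by the normalized cross product. Taking \(\vartheta \) as the polar angle measured from the z-axis is an assumption of this sketch:

```python
import numpy as np

def steering_em(phi, theta, zeta, eta):
    # Electric/magnetic responses of Eq. (3), written in the spherical basis
    theta_hat = np.array([np.cos(phi) * np.cos(theta),
                          np.sin(phi) * np.cos(theta),
                          -np.sin(theta)])
    phi_hat = np.array([-np.sin(phi), np.cos(phi), 0.0])
    pol = np.sin(zeta) * np.exp(1j * eta)        # polarization phase term
    e = pol * theta_hat + np.cos(zeta) * phi_hat  # first three rows of b_k
    m = pol * phi_hat - np.cos(zeta) * theta_hat  # last three rows of b_k
    return e, m

def poynting(e, m):
    # Eq. (4): normalized vector cross product e x conj(m)
    return np.real(np.cross(e / np.linalg.norm(e),
                            np.conj(m) / np.linalg.norm(m)))
```

Since \(\hat{\varvec{\vartheta }}\times \hat{\varvec{\varphi }}=\hat{\varvec{r}}\), the polarization terms cancel and the result is the real unit propagation vector, independent of \(\zeta \) and \(\eta \).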
Assume that the noise follows a colored Gaussian distribution with zero mean and covariance \(\varvec{H}\):
$$\begin{aligned} E\{\varvec{n}({{t}_{1}},\tau ){\varvec{n}^{H}}({{t}_{2}},\tau )\}=\varvec{H}\delta \text {(}{{t}_{1}}-{{t}_{2}}\text {)}, \end{aligned}$$
(5)
where \(\delta (\centerdot )\) denotes the unit impulse function. The noise considered in this paper is additive and uncorrelated with the source signal, then the covariance matrix with respect to \(\varvec{z}(t)\) is given by:
$$\begin{aligned} \begin{aligned} {{\varvec{R}}_{z}}&= E\{\varvec{z}(t){{\varvec{z}}^{H}}(t)\} \\&= \varvec{C }{{\varvec{R}}_{s}}{{\varvec{C}}^{H}} + {{\varvec{R}}_{n}} \\&= \varvec{{\tilde{R}}} + {{\varvec{R}}_{n}} \end{aligned} \end{aligned}$$
(6)
where \({{\varvec{R}}_{s}}=E\{\varvec{s}(t){{\varvec{s}}^{H}}(t)\}\) and \({{\varvec{R}}_{n}}=E\{\varvec{n}(t){{\varvec{n}}^{H}}(t)\}\) represent the covariance matrices of the source signal and of the noise, respectively. Since the source signals are uncorrelated with each other, \({{\varvec{R}}_{s}}=\text {diag}( [{{\lambda }_{1}},{{\lambda }_{2}},\cdots ,{{\lambda }_{K}}])\), where \(\text {diag}\{\cdot \}\) denotes the construction of a diagonal matrix and \({{\lambda }_{k}}\) is the power of the k-th source signal. In practice, when L snapshots are available, \({{\varvec{R}}_{z}}\) can be estimated as follows:
$$\begin{aligned} {{\varvec{{\hat{R}}}}_{z}}=\frac{1}{L}\sum \limits _{t=1}^{L}{\varvec{z}(t)}{{\varvec{z}}^{H}}(t). \end{aligned}$$
(7)
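Eq. (7) is the standard sample covariance; a one-line NumPy version (storing the snapshots as the columns of \(\varvec{Z}\) is a convention of this sketch):

```python
import numpy as np

def sample_covariance(Z):
    # Eq. (7): R_hat = (1/L) * sum_t z(t) z(t)^H, snapshots as columns of Z
    L = Z.shape[1]
    return (Z @ Z.conj().T) / L
```

By construction the estimate is Hermitian and positive semidefinite, which is what the completion steps later in the paper rely on.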
From Eq. (5), it is easy to see that \({\varvec{R}_{n}}\) is a diagonal matrix:
$$\begin{aligned} {\varvec{R}_{n}}=\text {diag}\{\sigma _{1}^{2},\sigma _{2}^{2},\cdots ,\sigma _{6M}^{2}\}, \end{aligned}$$
(8)
where \(\sigma _{6(m-1)+q}^{2}\) denotes the noise power of the q-th component of the m-th EMVS unit. If the noise is instead white Gaussian, its power satisfies:
$$\begin{aligned} {\varvec{R}_{n}}=\sigma _{ }^{2}{\varvec{I}_{6M}}. \end{aligned}$$
(9)
Since a scaled identity matrix does not affect the eigenstructure of the signal, traditional methods are effective under white Gaussian noise. However, under non-uniform noise, the noise covariance matrix is no longer proportional to the identity matrix, and many traditional algorithms therefore degrade to varying degrees or even fail.

2.2 Preliminaries of Tensors

A tensor can be considered a multi-dimensional generalization of vectors and matrices. Understanding tensor operations is essential for constructing the PARAFAC model. Below, we outline the tensor fundamentals used in this paper, with further details available in reference [10].
Definition 1
(Mode-n unfolding): The mode-m unfolding of an M-order tensor \(\varvec{{\mathcal {X}}}\in {{\mathbb {C}}^{{{I}_{1}}\times {{I}_{2}}\times \cdot \cdot \cdot \times {{I}_{M}}}}\) is denoted \({{\left[ \varvec{{\mathcal {X}}} \right] }_{(m)}}\), a matrix of size \({{I}_{m}}\times ({{I}_{1}}\cdots {{I}_{m-1}}{{I}_{m+1}}\cdots {{I}_{M}})\). The relationship between the element indices \(\left( {{i}_{1}},{{i}_{2}},\cdots ,{{i}_{M}} \right) \) of the tensor \(\varvec{{\mathcal {X}}}\) and their corresponding positions \(({{i}_{m}},{L})\) in \({{\left[ \varvec{{\mathcal {X}}} \right] }_{(m)}}\) can be described as follows: \({L}=1+\sum \nolimits _{k=1,k\ne m}^{M}{({{i}_{k}}-1){{J}_{k}}},\ {{J}_{k}}=\prod \nolimits _{n=1,n\ne m}^{k-1}{{{I}_{n}}}\).
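The index map in Definition 1 corresponds to an unfolding in which the earlier modes vary fastest; a NumPy sketch (our own helper, using a Fortran-order reshape) realizes it directly:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding per Definition 1: move mode n to the front, then
    # flatten the remaining modes with the earlier ones varying fastest
    return np.reshape(np.moveaxis(T, mode, 0), (T.shape[mode], -1), order='F')
```

With 0-based indices the column position becomes \(\sum _{k\ne m} i_k J_k\) with \(J_k=\prod _{n<k,\,n\ne m} I_n\), which the test below checks exhaustively on a small tensor.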
Definition 2
(PARAFAC model): The PARAFAC of an M-th order tensor of rank R can be represented as:
$$\begin{aligned} \varvec{{\mathcal {X}}}=\varvec{{\mathcal {I}}}\times _{1}\varvec{A}^{(1)}\times _{2}\varvec{A}^{(2)}\times _{3}\cdots \times _{M}\varvec{A}^{(M)}, \end{aligned}$$
(10)
where \(\varvec{{\mathcal {I}}}\in {{\mathbb {C}}^{R\times R\times \cdot \cdot \cdot \times R}}\) is the M-th order identity tensor (with elements equal to 1 where all indices coincide and 0 elsewhere), and \(\varvec{a}_{r}^{(m)}\in {{\mathbb {C}}^{{{I}_{m}}\times 1}}\) (m=1, 2, \(\cdots \), M) is the r-th column of the factor matrix \({\varvec{A}^{(m)}}\in {{\mathbb {C}}^{{{I}_{m}}\times R}}\). Here, \(\times _n\) denotes the n-mode product, which multiplies the mode-n fibers of the tensor by the corresponding factor matrix. Specifically, the mode-m unfolding of \(\varvec{{\mathcal {X}}}\) can be written as:
$$\begin{aligned} {{\left[ \varvec{{\mathcal {X}}} \right] }_{(m)}}={\varvec{A}_{m}}{{[{\varvec{A}_{m+1}}\odot {\varvec{A}_{m+2}}\odot \cdots \odot {\varvec{A}_{M}}\odot {\varvec{A}_{1}}\odot {\varvec{A}_{2}}\odot \cdots \odot {\varvec{A}_{m-1}}]}^{T}}. \end{aligned}$$
(11)
Definition 3
(Generalized Tensorization of the PARAFAC Model): For the PARAFAC model in Eq. (10), assume the ordering of the factor matrices is given by the set \({\mathrm O}=\{1,2,\cdots ,M\}\). Let \({{{\mathrm O}}_{j}}=\{{{o}_{j,1}},{{o}_{j,2}},\cdots ,{{o}_{j,{{M}_{j}}}}\}\), \(j=1,2,\cdots ,J\), where each \({{{\mathrm O}}_{j}}\) is composed of a subset of elements from \({\mathrm O}\). The generalized tensorization of the tensor \(\varvec{{\mathcal {X}}}\) can be represented by a new tensor \({\varvec{{\mathcal {X}}}_{{{{\mathrm O}}_{1}},{{{\mathrm O}}_{2}},\cdots ,{{{\mathrm O}}_{J}}}}\in {{\mathbb {C}}^{{{G}_{1}}\times {{G}_{2}}\times \cdots \times {{G}_{J}}}}\) as follows:
$$\begin{aligned} {\varvec{{\mathcal {X}}}_{{{{\mathrm O}}_{1}},{{{\mathrm O}}_{2}},\cdots ,{{{\mathrm O}}_{J}}}}=\sum \limits _{k=1}^{K}{\varvec{b}_{k}^{(1)}\circ \varvec{b}_{k}^{(2)}\circ \cdots \circ \varvec{b}_{k}^{(J)}}, \end{aligned}$$
(12)
where \({{G}_{j}}=\prod _{m=1}^{{{M}_{j}}}{\varvec{I}_{{{o}_{j,m}}}}\), and the term \(\varvec{b}_{k}^{(j)}\) is given by \(\varvec{b}_{k}^{(j)}=\varvec{a}_{k}^{({{o}_{j,{{M}_{j}}}})}\otimes \varvec{a}_{k}^{({{o}_{j,{{M}_{j}}-1}})}\otimes \cdots \otimes \varvec{a}_{k}^{({{o}_{j,1}})}\).
Definition 4
(Identifiability): As noted in [4], the PARAFAC model is unique if the following condition is satisfied:
$$\begin{aligned} \text {KR}[{\varvec{A}_{1}}]+\text {KR}[{\varvec{A}_{2}}]+\cdots +\text {KR}[{\varvec{A}_{M}}]\ge 2R+M-1, \end{aligned}$$
(13)
where \(\text {KR}[{\varvec{A}_{m}}]\) (m=1,2,...,M) represents the Kruskal rank of the matrix \({\varvec{A}_{m}}\), i.e., the largest number k such that every set of k columns of the matrix is linearly independent.
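For small matrices the Kruskal rank can be computed by brute force over column subsets; the sketch below (exponential in the number of columns, illustration only) follows the definition directly:

```python
import numpy as np
from itertools import combinations

def kruskal_rank(A, tol=1e-10):
    # Largest k such that EVERY set of k columns of A is linearly independent
    n_cols = A.shape[1]
    k = 0
    for r in range(1, n_cols + 1):
        if all(np.linalg.matrix_rank(A[:, list(c)], tol=tol) == r
               for c in combinations(range(n_cols), r)):
            k = r
        else:
            break
    return k
```

Note that the Kruskal rank is never larger than the ordinary rank: one dependent pair of columns already caps it at 1.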

3 The Proposed Method

3.1 PARAFAC Model

Looking at the noiseless part of \(\varvec{Z}\) in Eq. (2), it coincides with the tensor unfolding model in Eq. (11). Then \(\varvec{Z}\) can be regarded as an unfolding of a third-order PARAFAC tensor \(\varvec{{\mathcal {Z}}}\in {{\mathbb {C}}^{M\times 6\times L}}\), which is given by
$$\begin{aligned} \varvec{{\mathcal {Z}}}\triangleq \varvec{{\mathcal {I}}}_{K}\times _{1}\varvec{A}\times _{2}\varvec{B}\times _{3}\varvec{S}+\varvec{{\mathcal {N}}}, \end{aligned}$$
(14)
where \(\varvec{{\mathcal {N}}}\in {{\mathbb {C}}^{M\times 6\times L}}\) denotes the noise tensor. Clearly, \(\varvec{Z}\) is the transposed mode-3 unfolding of \(\varvec{{\mathcal {Z}}}\), so:
$$\begin{aligned} \begin{aligned} \varvec{Z}&\triangleq [\varvec{{\mathcal {Z}}}]_{(3)}^{T} \\&= [\varvec{A} \odot \varvec{B}]\varvec{S}^{T} + \varvec{N}, \end{aligned} \end{aligned}$$
(15)
where \(\varvec{N} = [\varvec{{\mathcal {N}}}]_{(3)}^{T}\). Similarly, the mode-1 and mode-2 unfolding of \(\varvec{{\mathcal {Z}}}\) can be expressed as follows:
$$\begin{aligned} \begin{aligned} \varvec{X}&\triangleq [\varvec{{\mathcal {Z}}}]_{(1)}^{T} \\&= [\varvec{B} \odot \varvec{S}]\varvec{A}^{T} + \varvec{N}', \end{aligned} \end{aligned}$$
(16)
$$\begin{aligned} \begin{aligned} \varvec{Y}&\triangleq [\varvec{{\mathcal {Z}}}]_{(2)}^{T} \\&= [\varvec{S} \odot \varvec{A}]\varvec{B}^{T} + \varvec{N}'', \end{aligned} \end{aligned}$$
(17)
where \( \varvec{N}'=[\varvec{{\mathcal {N}}}]_{(1)}^{T}\in {{\mathbb {C}}^{6\,L\times M}}\), and \( \varvec{N}''=[\varvec{{\mathcal {N}}}]_{(2)}^{T}\in {{\mathbb {C}}^{ML\times 6}}\).

3.2 Noise Suppression

Define \(\varOmega \) as the set of indices of the non-zero entries of \({\varvec{R}_{n}}\):
$$\begin{aligned} \varOmega =\{(n,n)|n=1,2,\cdots ,6M\}. \end{aligned}$$
(18)
In addition, define a sampling operator \({{S}_{\varOmega }}\{\cdot \}\) that extracts the matrix elements whose indices lie in \(\varOmega \):
$$\begin{aligned} {{S}_{\varOmega }}\{\varvec{R}\}=\bar{\varvec{R}}\in {{\mathbb {C}}^{6M\times 6M}}, \end{aligned}$$
(19)
among them:
$$\begin{aligned} \varvec{{\bar{R}}}(m,n) = {\left\{ \begin{array}{ll} \varvec{R}(m,n), & (m,n) \in \varOmega \\ 0, & (m,n) \notin \varOmega \end{array}\right. }, \end{aligned}$$
(20)
where \(\varvec{R}(m,n)\) denotes the (m, n)-th entry of \(\varvec{R}\). From Eq. (8), \({\varvec{R}_{n}}\) is a diagonal matrix, so we have \({\varvec{R}_{n}}={{S}_{\varOmega }}\{{\varvec{R}_{n}}\}\). The non-uniform noise can thus be eliminated by subtracting the sampled diagonal from the covariance matrix in Eq. (6), yielding a noise-reduced data matrix:
$$\begin{aligned} \begin{aligned} {\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z&= \varvec{R}_z - S_{\varOmega } \{ \varvec{R}_z \} \\&= \varvec{{\tilde{R}}} - S_{\varOmega } \{ \varvec{{\tilde{R}}} \} \end{aligned}. \end{aligned}$$
(21)
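A toy NumPy check of Eq. (21) (the sizes and values below are arbitrary) confirms that removing the diagonal eliminates the nonuniform noise exactly, at the cost of the signal covariance diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
M6, K = 12, 2                                # 6M = 12 channels, K = 2 sources (toy)
C = rng.standard_normal((M6, K)) + 1j * rng.standard_normal((M6, K))
Rs = np.diag(rng.uniform(1.0, 2.0, K))       # uncorrelated source powers
Rn = np.diag(rng.uniform(0.1, 3.0, M6))      # nonuniform (diagonal) noise covariance
R_clean = C @ Rs @ C.conj().T                # noise-free covariance R_tilde
Rz = R_clean + Rn                            # Eq. (6)

# Eq. (21): subtract the diagonal sampling S_Omega{Rz}
R_denoised = Rz - np.diag(np.diag(Rz))
```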
However, the diagonal entries of \(\varvec{{\tilde{R}}}\), which carry the source powers in \({\varvec{R}_{s}}\), are corrupted during this noise reduction process. Considering the low-rank property of the signal covariance matrix \(\varvec{{\tilde{R}}}\) and the sparsity of the affected index positions, matrix completion can be used to recover \(\varvec{{\tilde{R}}}\) from \({{\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z}\) via the optimization problem:
$$\begin{aligned} \hbox {min rank}\ \{\varvec{R}\}\ \hbox {s.t.}\ {{S}_{\varOmega }}\ \{\varvec{R}\}={{\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z}, \end{aligned}$$
(22)
where \(\text {rank}\{\varvec{R}\}\) denotes the rank of \(\varvec{R}\). The above optimization is NP-hard since the rank function is nonconvex. An effective convex relaxation is to replace the rank constraint with the nuclear norm of the matrix, turning the problem into:
$$\begin{aligned} \text {min }{{\left\| \varvec{R} \right\| }_{*}}\quad \text {s.t. }{{S}_{\varOmega }}(\varvec{R})={{\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z}, \end{aligned}$$
(23)
where \({{\left\| \varvec{R} \right\| }_{*}}\) is the nuclear norm of \(\varvec{R}\), i.e., the sum of its singular values (equal to the trace for a positive semidefinite matrix). Since only the estimate \({\varvec{{\hat{R}}}_{z}}\) of \({\varvec{R}_{z}}\) is available in practice, the corresponding estimate of \({\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z\) obtained via Eq. (21) is denoted \(\hat{\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z\). There is a fitting error between \(\hat{\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z\) and \({\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z\); defining an error tolerance \(\varepsilon \), the optimization problem in Eq. (23) can be converted into one with an inequality constraint:
$$\begin{aligned} \text {min }{{\left\| \varvec{R} \right\| }_{*}}\quad \text {s.t. }\left\| {{S}_{\varOmega }}(\varvec{R})-\hat{\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z \right\| \le \varepsilon . \end{aligned}$$
(24)
This optimization problem can be solved by a convex optimization toolbox such as CVX, or by a matrix-completion algorithm such as SVT. However, most of the convex optimization toolboxes are based on the interior point method, which has a very high computational complexity. The SVT algorithm, although computationally efficient, is sensitive to parameter settings. In addition, since this optimization process ignores the multidimensional characteristics of the array data structure, the data recovery results in poor performance under low SNR conditions.
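As a lightweight stand-in for the CVX/SVT solvers discussed above (not the paper's method), the zeroed diagonal of a low-rank covariance can often be recovered by simple alternating projections between the rank-K set and the observed off-diagonal entries:

```python
import numpy as np

def complete_diagonal(R_obs, rank, n_iter=2000):
    # Alternating projections: (1) restore the observed off-diagonal entries,
    # (2) project onto the set of rank-`rank` matrices via truncated SVD.
    off_diag = ~np.eye(R_obs.shape[0], dtype=bool)
    X = R_obs.astype(complex).copy()
    for _ in range(n_iter):
        X[off_diag] = R_obs[off_diag]
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vh[:rank]
    return X
```

Unlike nuclear-norm minimization in Eq. (23), this sketch needs the rank K as an input and carries no global guarantee, but it illustrates why the low-rank structure makes the removed diagonal recoverable from the off-diagonal entries alone.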

3.3 Tensor Completion

Define a set \(\varDelta \) to record the indices of the non-zero elements of the noise covariance tensor, analogously to Eq. (18). Imitating Eq. (21), a noise-reduced covariance tensor can be constructed as:
$$\begin{aligned} {{\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z}={{\varvec{R}}_{z}}-{{S}_{\varDelta }}\{{{\varvec{R}}_{z}}\}. \end{aligned}$$
(25)
The method of recovering the noiseless covariance tensor is similar to the optimization process of (24):
$$\begin{aligned} \text {min} \, \Vert \varvec{R}\Vert _{*} \quad \text {s.t. } S_{\varDelta }(\varvec{R}) = {\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z. \end{aligned}$$
(26)
The nuclear norm of a tensor is defined as the weighted sum of the nuclear norms of its mode-n unfoldings:
$$\begin{aligned} {{\left\| \varvec{R} \right\| }_{*}}=\sum \limits _{n=1}^{4}{{{\alpha }_{n}}{{\left\| {{[\varvec{R}]}_{(n)}} \right\| }_{*}}}, \end{aligned}$$
(27)
where \({{\alpha }_{n}}\) is a weighting factor, a constant greater than 0 satisfying \(\sum \nolimits _{n=1}^{4}{{\alpha }_{n}}=1\). Similarly, by letting \({\varvec{{\tilde{R}}}_{z}}\) be the estimate of \({\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z\) and introducing fitting-error coefficients \({{\beta }_{n}}\), the optimization in Eq. (26) can be transformed into an optimization problem with constraints:
$$\begin{aligned} \begin{array}{c} \min \sum \limits _{n=1}^{4} \left( \alpha _n \left\| [\varvec{R}]_{(n)} \right\| _* + \frac{\beta _n}{2} \left\| [\varvec{{\tilde{R}}}_{z} - \varvec{R}]_{(n)} \right\| _F \right) , \\ \text {s.t. } S_{\varDelta }(\varvec{R}) = \varvec{{\tilde{R}}}_{z}. \end{array} \end{aligned}$$
(28)
This is a non-differentiable convex optimization problem, which can be solved with the help of a variable-splitting technique. Reference [15] proposes to smooth it, yielding an optimization objective function of the form:
$$\begin{aligned} \underset{\varvec{R}\in {\varvec{\Xi }}}{\mathop {\text {min}}}\,{{f}_{0}}(\varvec{R})+\sum \limits _{n=1}^{4}{{{\alpha }_{n}}{{\left\| {{[\varvec{R}]}_{(n)}} \right\| }_{*}}}, \end{aligned}$$
(29)
where \({\varvec{\Xi }} \) is a convex set and \({{f}_{0}}(\varvec{R})\) is a smooth convex function. Since the nuclear norm \({{\left\| {\varvec{[R]}_{(n)}} \right\| }_{*}}\) is non-smooth, it needs to be smoothed. The nuclear norm of a matrix \(\varvec{X}\) admits the dual form:
$$\begin{aligned} g(\varvec{X})=\underset{\left\| \varvec{Y} \right\| \le 1}{\mathop {\max }}\,\left\langle \varvec{X},\varvec{Y} \right\rangle , \end{aligned}$$
(30)
where \(\left\| \cdot \right\| \) denotes the spectral norm of the matrix (its largest singular value), \(\left\langle \cdot ,\cdot \right\rangle \) is the inner product, and \(g(\varvec{X})\) has the smoothed form:
$$\begin{aligned} g(\varvec{X})=\underset{\left\| \varvec{Y} \right\| \le 1}{\mathop {\max }}\,\left\langle \varvec{X},\varvec{Y} \right\rangle -{{d}_{\mu }}(\varvec{Y}), \end{aligned}$$
(31)
where \({{d}_{\mu }}(\varvec{Y})=\frac{\mu }{2}\left\| \varvec{Y} \right\| _{F}^{2}\) is a convex function, \(\mu \) is a positive constant, and \({{\left\| \cdot \right\| }_{F}}\) denotes the Frobenius norm. The smooth problem in Eq. (29) and the above smoothing process can be extended to the tensor domain by introducing four dual variables \(\{{\varvec{Y}_{n}}\}_{n=1}^{4}\) and four constants \(\{{{\mu }_{n}}\}_{n=1}^{4}\):
$$\begin{aligned} {{f}_{\mu }}(\varvec{R})={{f}_{0}}(\varvec{R})+\sum \limits _{n=1}^{4}{\underset{\left\| {{[{\varvec{Y}_{n}}]}_{(n)}} \right\| \le 1}{\mathop {\max }}\,}\left\langle \varvec{R},{\varvec{Y}_{n}} \right\rangle -\frac{\mu }{2}\left\| {\varvec{Y}_{n}} \right\| _{F}^{2}, \end{aligned}$$
(32)
where \({{f}_{\mu }}(\varvec{R})\) is smooth and hence differentiable. Letting \({{f}_{0}}(\varvec{R})=0\), the tensor completion problem in this paper can be addressed with the optimization strategy:
$$\begin{aligned} \begin{array}{c} \text {min }\sum \limits _{n=1}^{4}{\underset{\left\| {{[{{\varvec{Y}}_{n}}]}_{(n)}} \right\| \le 1}{{\max }}\,}\left\langle \varvec{R},{{\varvec{Y}}_{n}} \right\rangle -\frac{\mu }{2}\left\| {{\varvec{Y}}_{n}} \right\| _{F}^{2} \\ \text {s}. \text {t}. {{S}_{\varDelta }}(\varvec{R})={{\overset{\scriptscriptstyle \smile }{\varvec{R}}}_z} \\ \end{array}. \end{aligned}$$
(33)
The above optimization problem can be solved quickly via alternating iterations, after which an estimate \({\varvec{{\hat{R}}}_{s}}\) of the noise-free covariance tensor \({\varvec{R}_{s}}\) is obtained.

3.4 Tensor Model

According to references [8, 36], \({\varvec{R}_{z}}\) can be rearranged into a fourth-order tensor \(\varvec{{\mathcal {R}}}\in {{\mathbb {C}}^{M\times 6\times M\times 6}}\), with the (m, p, n, q)-th element denoted as:
$$\begin{aligned} {\varvec{{\mathcal {R}}}_{m,p,n,q}}=\sum \limits _{t=1}^{L}{{{\varvec{z}}_{m,p}}(t)}\varvec{z}_{n,q}^{*}(t), \end{aligned}$$
(34)
where \(m,n\in \{1,2,\cdots ,M\}\) and \(p,q\in \{1,2,\cdots ,6\}\); \({\varvec{z}_{m,p}}(t)\) denotes the \([6(m-1)+p]\)-th element of \(\varvec{z}(t)\), and \({\varvec{z}_{n,q}}(t)\) is defined similarly.
Reference [8] points out that \({\varvec{R}_{z}}\) can be viewed as a symmetric Hermitian expansion of \(\varvec{{\mathcal {R}}}\), which can be obtained by stacking \(\varvec{{\mathcal {R}}}\) along the columns, transforming the subscript indices of the first two dimensions, successively contracting the first and second subscript indices, and then stacking the last two indices along the rows:
$$\begin{aligned} {\varvec{R}_{z}} = \left[ \begin{array}{cccccc} {\varvec{{\mathcal {R}}}_{1,1,1,1}} & \cdots & {{\varvec{{\mathcal {R}}}}_{1,1,M,1}} & {\varvec{{\mathcal {R}}}_{1,1,1,2}} & \cdots & {\varvec{{\mathcal {R}}}_{1,1,M,6}} \\ {\varvec{{\mathcal {R}}}_{2,1,1,1}} & \cdots & {\varvec{{\mathcal {R}}}_{2,1,M,1}} & {\varvec{{\mathcal {R}}}_{2,1,1,2}} & \cdots & {\varvec{{\mathcal {R}}}_{2,1,M,6}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ {\varvec{{\mathcal {R}}}_{M,1,1,1}} & \cdots & {\varvec{{\mathcal {R}}}_{M,1,M,1}} & {\varvec{{\mathcal {R}}}_{M,1,1,2}} & \cdots & {\varvec{{\mathcal {R}}}_{M,1,M,6}} \\ {\varvec{{\mathcal {R}}}_{1,2,1,1}} & \cdots & {\varvec{{\mathcal {R}}}_{1,2,M,1}} & {\varvec{{\mathcal {R}}}_{1,2,1,2}} & \cdots & {\varvec{{\mathcal {R}}}_{1,2,M,6}} \\ {\varvec{{\mathcal {R}}}_{2,2,1,1}} & \cdots & {\varvec{{\mathcal {R}}}_{2,2,M,1}} & {\varvec{{\mathcal {R}}}_{2,2,1,2}} & \cdots & {\varvec{{\mathcal {R}}}_{2,2,M,6}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ {\varvec{{\mathcal {R}}}_{M,6,1,1}} & \cdots & {\varvec{{\mathcal {R}}}_{M,6,M,1}} & {\varvec{{\mathcal {R}}}_{M,6,1,2}} & \cdots & {\varvec{{\mathcal {R}}}_{M,6,M,6}} \\ \end{array} \right] . \end{aligned}$$
(35)
It can be observed that \(\varvec{{\mathcal {R}}}\) is a fourth-order PARAFAC tensor, which can be expressed in accordance with Eq. (14):
$$\begin{aligned} \varvec{{\mathcal {R}}}=\varvec{{\mathcal {I}}}{\times _{1}}\varvec{A}{\times _{2}}\varvec{B}{\times _{3}}{\varvec{A}^{*}}{\times _{4}}(\varvec{B}^{*}{\varvec{R}_{s}})+{\varvec{{\mathcal {N}}}}. \end{aligned}$$
(36)
After noise reduction, the effect of \({\varvec{{\mathcal {R}}}_{n}}\) is eliminated and \(\varvec{{\mathcal {R}}}\) becomes the noise-free tensor \(\varvec{\tilde{{\mathcal {R}}}}\). According to Definition 3, with \({{{\mathrm O}}_{1}}=\{1\}\), \({{{\mathrm O}}_{2}}=\{2\}\), and \({{{\mathrm O}}_{3}}=\{3,4\}\), \(\varvec{\tilde{{\mathcal {R}}}}\) can be rearranged into a third-order PARAFAC tensor:
$$\begin{aligned} \varvec{\tilde{{\mathcal {R}}}}=\varvec{{\mathcal {I}}}{\times _{1}}\varvec{A}{\times _{2}}\varvec{B}{\times _{3}}\varvec{V}, \end{aligned}$$
(37)
where \(\varvec{V}=({{\varvec{A}}^{*}}\odot {{\varvec{B}}^{*}}){{\varvec{R}}_{s}}\). According to Definition 2, the multi-mode expansions of \(\varvec{\tilde{{\mathcal {R}}}}\) are formulated as:
$$\begin{aligned} \varvec{X}&\triangleq [\varvec{\tilde{{\mathcal {R}}}}]_{(1)}^{T} \nonumber \\&= [\varvec{B} \odot \varvec{V}] \, \varvec{A}^{T}, \quad \text {(mode-1 unfolding)} \end{aligned}$$
(38a)
$$\begin{aligned} \varvec{Y}&\triangleq [\varvec{\tilde{{\mathcal {R}}}}]_{(2)}^{T} \nonumber \\&= [\varvec{V} \odot \varvec{A}] \, \varvec{B}^{T}, \quad \text {(mode-2 unfolding)} \end{aligned}$$
(38b)
$$\begin{aligned} \varvec{Z}&\triangleq [\varvec{\tilde{{\mathcal {R}}}}]_{(3)}^{T} \nonumber \\&= [\varvec{A} \odot \varvec{B}] \, \varvec{V}^{T}. \quad \text {(mode-3 unfolding)} \end{aligned}$$
(38c)
This tensor model provides a low-rank structural foundation for the subsequent decomposition of PARAFAC, the details of which are presented in the following section.
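To make the unfolding conventions of Eq. (38) concrete, the sketch below builds a small rank-\(R\) PARAFAC tensor from random factor matrices (the names A, B, V mirror the text; all dimensions are illustrative) and forms each mode unfolding from the corresponding Khatri-Rao product:

```python
import numpy as np

def khatri_rao(P, Q):
    # Column-wise Kronecker (Khatri-Rao) product: (I x R), (J x R) -> (IJ x R)
    R = P.shape[1]
    return np.einsum('ir,jr->ijr', P, Q).reshape(-1, R)

rng = np.random.default_rng(0)
M, Kp, L, R = 4, 6, 5, 2                 # sensors, field components, slices, rank
A = rng.standard_normal((M, R)) + 1j * rng.standard_normal((M, R))
B = rng.standard_normal((Kp, R)) + 1j * rng.standard_normal((Kp, R))
V = rng.standard_normal((L, R)) + 1j * rng.standard_normal((L, R))

# Rank-R PARAFAC tensor: T[m, p, l] = sum_r A[m, r] * B[p, r] * V[l, r]
T = np.einsum('mr,pr,lr->mpl', A, B, V)

# Mode unfoldings in the form of Eq. (38)
X = khatri_rao(B, V) @ A.T               # mode-1: (B kr V) A^T
Y = khatri_rao(V, A) @ B.T               # mode-2: (V kr A) B^T
Z = khatri_rao(A, B) @ V.T               # mode-3: (A kr B) V^T
```

Each unfolding is just a permuted reshape of the same tensor, which is what makes the alternating least-squares updates of the next section simple matrix problems.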

3.5 PARAFAC Algorithm

By replacing the theoretical values \(\varvec{X},\varvec{Y},\varvec{Z}\) with the estimates \(\varvec{{\hat{X}}},\varvec{{\hat{Y}}},\varvec{{\hat{Z}}}\), the PARAFAC decomposition of \(\varvec{\tilde{{\mathcal {R}}}}\) can be computed through the joint optimization:
$$\begin{aligned} \begin{aligned}&\underset{\varvec{A,B,V}}{\mathop {\text {min}}}\,\left\| \varvec{{\hat{X}}} - [\varvec{B} \odot \varvec{V}] \varvec{A}^{T} \right\| _{F}^{2} \\&\underset{\varvec{A,B,V}}{\mathop {\text {min}}}\,\left\| \varvec{{\hat{Y}}} - [\varvec{V} \odot \varvec{A}] \varvec{B}^{T} \right\| _{F}^{2} \\&\underset{\varvec{A,B,V}}{\mathop {\text {min}}}\,\left\| \varvec{{\hat{Z}}} - [\varvec{A} \odot \varvec{B}] \varvec{V}^{T} \right\| _{F}^{2} \\ \end{aligned}. \end{aligned}$$
(39)
The optimization problem in Eq. (39) is generally solved with the trilinear alternating least squares (TALS) method, the standard approach for decomposing a third-order PARAFAC model. TALS fixes two of the factor matrices \(\varvec{A}\), \(\varvec{B}\) and \(\varvec{V}\) and estimates the remaining one by least squares (LS); the three LS problems are fitted alternately until the algorithm converges. For example, in the t-th iteration (t = 1, 2, ..., T), with \(\varvec{B}\) and \(\varvec{V}\) fixed, \(\varvec{{\hat{A}}}\) is obtained as:
$$\begin{aligned} \varvec{{\hat{A}}}_{(t)}^{T}={{[{{\varvec{{\hat{B}}}}_{(t-1)}}\odot {{\varvec{{\hat{V}}}}_{(t-1)}}]}^{\dagger }}\varvec{X}, \end{aligned}$$
(40a)
where \({\varvec{{\hat{B}}}_{(t-1)}}\) represents the estimate of \(\varvec{B}\) after the \((t-1)\)-th iteration. Similarly:
$$\begin{aligned} & \varvec{{\hat{B}}}_{(t)}^{T}={{[{{\varvec{{\hat{V}}}}_{(t-1)}}\odot {{\varvec{{\hat{A}}}}_{(t)}}]}^{\dagger }}\varvec{Y}, \end{aligned}$$
(40b)
$$\begin{aligned} & \varvec{{\hat{V}}}_{(t)}^{T}={{[{{\varvec{{\hat{A}}}}_{(t)}}\odot {{\varvec{{\hat{B}}}}_{(t)}}]}^{\dagger }}\varvec{Z}. \end{aligned}$$
(40c)
Because the TALS algorithm is sensitive to its initialization, which can significantly affect the convergence rate, this paper adopts the COMFAC algorithm. COMFAC compresses the third-order PARAFAC model into a lower-dimensional tensor, iterates on the compressed tensor, and finally recovers the solution in the original tensor space.
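A bare-bones TALS loop for Eqs. (39)-(40) can be sketched as follows, using plain random initialization rather than COMFAC compression (all dimensions are illustrative). On a noise-free synthetic tensor it fits the model essentially exactly:

```python
import numpy as np

def khatri_rao(P, Q):
    # Column-wise Kronecker (Khatri-Rao) product
    R = P.shape[1]
    return np.einsum('ir,jr->ijr', P, Q).reshape(-1, R)

def tals(T, R, n_iter=500, seed=1):
    # Trilinear alternating LS for a third-order tensor T (M x P x L),
    # cycling through the three LS updates of Eq. (40).
    M, P, L = T.shape
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((P, R)) + 1j * rng.standard_normal((P, R))
    V = rng.standard_normal((L, R)) + 1j * rng.standard_normal((L, R))
    X = T.transpose(1, 2, 0).reshape(P * L, M)         # mode-1 unfolding
    Y = T.transpose(2, 0, 1).reshape(L * M, P)         # mode-2 unfolding
    Z = T.reshape(M * P, L)                            # mode-3 unfolding
    for _ in range(n_iter):
        A = (np.linalg.pinv(khatri_rao(B, V)) @ X).T   # Eq. (40a)
        B = (np.linalg.pinv(khatri_rao(V, A)) @ Y).T   # Eq. (40b)
        V = (np.linalg.pinv(khatri_rao(A, B)) @ Z).T   # Eq. (40c)
    return A, B, V

# Noise-free synthetic check: TALS should fit the tensor essentially exactly
rng = np.random.default_rng(0)
M, P, L, R = 5, 6, 7, 2
A0 = rng.standard_normal((M, R)) + 1j * rng.standard_normal((M, R))
B0 = rng.standard_normal((P, R)) + 1j * rng.standard_normal((P, R))
V0 = rng.standard_normal((L, R)) + 1j * rng.standard_normal((L, R))
T = np.einsum('mr,pr,lr->mpl', A0, B0, V0)
A, B, V = tals(T, R)
rel_err = np.linalg.norm(np.einsum('mr,pr,lr->mpl', A, B, V) - T) / np.linalg.norm(T)
```

The recovered factors match A0, B0, V0 only up to the column permutation and scaling of Eq. (42), which is why the check is on the reconstructed tensor rather than on the factors themselves.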

3.6 2D-DOA Estimation

When the PARAFAC decomposition is complete, \(\varvec{{\hat{A}}},\varvec{{\hat{B}}}\) and \(\varvec{{\hat{V}}}\) are available. The PARAFAC decomposition of a tensor is essentially unique under mild conditions. According to Definition 4, if the Kruskal ranks of the factor matrices satisfy:
$$\begin{aligned} \hbox {KR}\ [\varvec{A}]+\hbox {KR}\ [\varvec{B}]+\hbox {KR}\ [\varvec{V}]\ge 2R+2, \end{aligned}$$
(41)
then the above PARAFAC decomposition is unique. However, the recovered factor matrices suffer from column permutation and scale ambiguities, and can be expressed as follows:
$$\begin{aligned} \left\{ \begin{aligned}&\varvec{{\hat{A}}}=\varvec{A\varPi }{{\varvec{\varOmega }}_{1}}+{{\varvec{F}}_{1}} \\&\varvec{{\hat{B}}}=\varvec{B\varPi }{{\varvec{\varOmega }}_{2}}+{{\varvec{F}}_{2}} \\&\varvec{{\hat{V}}}=\varvec{V\varPi }{{\varvec{\varOmega }}_{3}}+{{\varvec{F}}_{3}} \\ \end{aligned} \right. , \end{aligned}$$
(42)
where \(\varvec{\varPi }\) is a permutation matrix, \({{\varvec{\varOmega }}_{n}}\) (n=1,2,3) are the corresponding scale ambiguity matrices, which are all diagonal and satisfy \({{\varvec{\varOmega }}_{1}}{{\varvec{\varOmega }}_{2}}{{\varvec{\varOmega }}_{3}}=\varvec{I}\), and \({\varvec{F}_{n}}\) (n=1,2,3) are the error matrices. Let \({\varvec{{\hat{b}}}_{k}}\) represent the k-th column of \(\varvec{{\hat{B}}}\); then, from the second equation in Eq. 42:
$$\begin{aligned} {{\varvec{{\hat{b}}}}_{k}}={{\bar{\omega }}_{2,k}}{\varvec{{\tilde{b}}}_{k}}+{{\varvec{f}}_{2,k}}, \end{aligned}$$
(43)
where \({{\bar{\omega }}}_{2,k}\) represents the k-th diagonal element of \({{\varvec{\varOmega }}_{2}}\), \({\varvec{f}_{2,k}}\) represents the k-th column of \({\varvec{F}_{2}}\), and \(\varvec{{\tilde{B}}}=\varvec{B\varPi }\). Due to the permutation ambiguity, let \({\varvec{{\tilde{b}}}_{k}}\) denote the \(k'\)-th column of \(\varvec{B}\) (\({\varvec{{\tilde{b}}}_{k}}={\varvec{b}_{k'}}\)). Then \({\varvec{{\hat{b}}}_{k}}\) can be rewritten as:
$$\begin{aligned} {\varvec{{\hat{b}}}_{k}}\approx {{\bar{\omega }}_{2,k}}{\varvec{b}_{k'}}={{{\bar{\omega }}}_{2,k}}\left[ \begin{matrix} {{\varvec{e}}_{k'}} \\ {{\varvec{m}}_{k'}} \\ \end{matrix} \right] . \end{aligned}$$
(44)
As shown in Eq. 3, the direction cosines can be estimated from the normalized cross product of the electric and magnetic field responses. Therefore, the estimate of \({\varvec{p}_{k'}}\) can be obtained from:
$$\begin{aligned} \varvec{{\hat{p}}}_{k}^{(u)}\triangleq \left[ \begin{aligned}&\varvec{{\hat{u}}}_{k}^{(u)} \\&\varvec{{\hat{v}}}_{k}^{(u)} \\&\varvec{{\hat{w}}}_{k}^{(u)} \\ \end{aligned} \right] =\frac{{{{\varvec{{\hat{e}}}}}_{k}}}{\left\| {{{\varvec{{\hat{e}}}}}_{k}} \right\| }\circledast \frac{\varvec{{\hat{m}}}_{k}^{*}}{\left\| {{{\varvec{{\hat{m}}}}}_{k}} \right\| }, \end{aligned}$$
(45)
where \({\varvec{{\hat{e}}}_{k}}\) and \({\varvec{{\hat{m}}}_{k}}\) denote the electric and magnetic response vectors of \({\varvec{{\hat{b}}}_{k}}\). The estimates \(\varvec{{\hat{u}}}_{k}^{(u)}\), \(\varvec{{\hat{v}}}_{k}^{(u)}\) and \(\varvec{{\hat{w}}}_{k}^{(u)}\) obtained from the above equation are unambiguous. However, they do not exploit the spatial structure of the array captured in \(\varvec{{\hat{A}}}\), so their resolution is low; this can be improved as follows. Focusing on the spatial response vector \({\varvec{a}_{k}}\), let \({\varvec{h}_{k}}\) denote the phase vector derived from \({\varvec{a}_{k}}\), defined as:
$$\begin{aligned} \begin{aligned} \varvec{h}_{k}&\triangleq \text {phase}\{{\varvec{a}_{k}}\} \\&\triangleq {{[0,-2\pi {{\tau }_{2,k}},\cdots ,-2\pi {{\tau }_{M,k}}]}^{T}} \,\, \end{aligned} \,, \end{aligned}$$
(46)
where \(\text {phase}\{\cdot \}\) denotes the operation of extracting the phase of a vector, with output range \((-\pi ,\pi ]\). The relationship between \({\varvec{h}_{k}}\) and \({\varvec{p}_{k}}\) can be expressed as:
$$\begin{aligned} {\varvec{h}_{k}}=\varvec{\varGamma } {\varvec{p}_{k}}, \end{aligned}$$
(47)
where \(\varvec{\varGamma } \triangleq {{[{{\gamma }_{1}},{{\gamma }_{2}},\cdots ,{{\gamma }_{M}}]}^{T}}\in {{\mathbb {R}}^{M\times 3}}\). From Eq.47, \(\varvec{p}_{k}\) can be recovered by least squares:
$$\begin{aligned} {\varvec{p}_{k}}={\varvec{\varGamma }^{\dagger }}{\varvec{h}_{k}}. \end{aligned}$$
(48)
In practice, \(\varvec{h}_{k}\) can be obtained via:
$$\begin{aligned} {\varvec{{\tilde{h}}}_{k}}\triangleq \text {phase}\{{\varvec{a}_{k}}\}. \end{aligned}$$
(49)
However, in a sparse array structure, the phase difference between two neighboring array elements may exceed \(2\pi \); when the phase falls outside this principal range, the phase function cannot recover it correctly, leading to phase ambiguity.
The periodicity of the array phase can be described as:
$$\begin{aligned} {{\varvec{{\tilde{h}}}}_{k}}-{{\varvec{h}}_{k}}=2\pi {{\varvec{\upsilon }}_{k}}, \end{aligned}$$
(50)
where \({{\varvec{\upsilon }}_{k}}\in {{\mathbb {Z}}^{M\times 1}}\) is an integer vector; the true phase \({{\varvec{h}}_{k}}\) can be recovered from \({{\varvec{{\tilde{h}}}}_{k}}\) by estimating \({{\varvec{\upsilon }}_{k}}\). Although the estimates \(\varvec{{\hat{u}}}_{k}^{(u)}\), \(\varvec{{\hat{v}}}_{k}^{(u)}\) and \(\varvec{{\hat{w}}}_{k}^{(u)}\) are of low resolution, they are unambiguous and close to the true direction cosines. Therefore, the result of Eq. 45 can be utilized to estimate \({{\varvec{\upsilon }}_{k}}\):
$$\begin{aligned} \varvec{h}_{k}^{(u)}\triangleq \varvec{\varGamma } \varvec{{\hat{p}}}_{k}^{(u)}, \end{aligned}$$
(51)
Meanwhile, an estimate of the phase vector can be obtained as:
$$\begin{aligned} \varvec{{\hat{h}}}_{k}^{ }\triangleq \text {phase}\{{{\varvec{{\hat{a}}}}_{k}}\}, \end{aligned}$$
(52)
where \({\varvec{{\hat{a}}}_{k}}\) represents the k-th column of \(\varvec{{\hat{A}}}\). Then, \({{\varvec{\upsilon }}_{k}}\) can be estimated as follows:
$$\begin{aligned} {{\varvec{{\hat{\upsilon }}}}_{k}}\triangleq \text {round}\left\{ \frac{\varvec{h}_{k}^{(u)}-{{{\varvec{{\hat{h}}}}}_{k}}}{2\pi } \right\} , \end{aligned}$$
(53)
where \(\text {round}\{\cdot \}\) denotes rounding to the nearest integer. Subsequently, phase compensation is performed:
$$\begin{aligned} {{\varvec{{{\hat{h}}}'}}_{k}}\triangleq {{\varvec{{\hat{h}}}}_{k}}+2\pi {{\varvec{{\hat{\upsilon }}}}_{k}}. \end{aligned}$$
(54)
The refined direction cosine estimate is:
$$\begin{aligned} {{\varvec{{\hat{p}}}}_{k}}\triangleq {\varvec{\varGamma }^{\dagger }}{{\varvec{{{\hat{h}}}'}}_{k}}, \end{aligned}$$
(55)
Finally, the 2D-DOA is estimated by computing:
$$\begin{aligned} & {{{\hat{\varphi }}}_{k}}\triangleq \text {arctan}\frac{{{{\varvec{{\hat{p}}}}}_{k}}(2)}{{{{\varvec{{\hat{p}}}}}_{k}}(1)}, \end{aligned}$$
(56a)
$$\begin{aligned} & {{{\hat{\vartheta }}}_{k}}\triangleq \text {arcsin}\sqrt{\varvec{{\hat{p}}}_{k}^{2}(1)+\varvec{{\hat{p}}}_{k}^{2}(2)}, \end{aligned}$$
(56b)
where \({{\varvec{{\hat{p}}}}_{k}}(1)\) and \({{\varvec{{\hat{p}}}}_{k}}(2)\) denote the first and second elements of the vector \({{\varvec{{\hat{p}}}}_{k}}\).
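The coarse-then-refine flow of Eqs. (45)-(56) can be sketched for a single source as follows. The sensor positions, the source angles, and the small perturbation standing in for the coarse cross-product estimate of Eq. (45) are illustrative assumptions, and `arctan2` replaces the plain arctangent to keep the correct quadrant:

```python
import numpy as np

# Toy refinement of the 2D-DOA of one source on a sparse L-array,
# following Eqs. (47)-(56). Positions (in wavelengths), angles, and the
# coarse-estimate perturbation are illustrative assumptions.
wavelength = 1.0
pos = np.array([[0.0, 0.0, 0.0], [2.5, 0.0, 0.0], [5.0, 0.0, 0.0],
                [0.0, 2.5, 0.0], [0.0, 5.0, 0.0]]) * wavelength
theta, phi = np.deg2rad(30.0), np.deg2rad(25.0)      # elevation, azimuth
p_true = np.array([np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)])                   # direction cosines

Gamma = -2.0 * np.pi * pos / wavelength              # Eq. (47): h = Gamma p
h_true = Gamma @ p_true
a_true = np.exp(1j * h_true)                         # ideal spatial response

# Coarse but unambiguous estimate: a small perturbation of p_true stands in
# for the normalized cross product of Eq. (45).
p_coarse = p_true + np.array([0.02, -0.015, 0.01])

h_u = Gamma @ p_coarse                               # Eq. (51)
h_wrapped = np.angle(a_true)                         # Eq. (52), in (-pi, pi]
nu = np.round((h_u - h_wrapped) / (2.0 * np.pi))     # Eq. (53)
h_comp = h_wrapped + 2.0 * np.pi * nu                # Eq. (54)
p_fine = np.linalg.pinv(Gamma) @ h_comp              # Eq. (55)

# Eq. (56): azimuth and elevation from the refined direction cosines
phi_hat = np.arctan2(p_fine[1], p_fine[0])
theta_hat = np.arcsin(np.sqrt(p_fine[0] ** 2 + p_fine[1] ** 2))
```

Because the rounding in Eq. (53) succeeds as long as the coarse phase error stays below \(\pi\) on every sensor, the refined estimate inherits the unambiguity of the coarse one while gaining the resolution of the full sparse aperture.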

3.7 Polarization Parameters Estimation

According to reference [31], the steering vector \({\varvec{b}_{k}}\) can be factored as the product of a direction-dependent matrix \({\varvec{D}_{k}}\in {{\mathbb {C}}^{6\times 2}}\) and a polarization-dependent vector \({\varvec{g}_{k}}\in {{\mathbb {C}}^{2\times 1}}\):
$$\begin{aligned} {\varvec{b}_{k}}={{\varvec{D}}_{k}}{{\varvec{g}}_{k}}, \end{aligned}$$
(57)
where:
$$\begin{aligned} {\varvec{D}_{k}} \triangleq \left[ \begin{array}{cc} \cos ({{\varphi }_{k}})\cos ({{\vartheta }_{k}}) & -\sin ({{\varphi }_{k}}) \\ \sin ({{\varphi }_{k}})\cos ({{\vartheta }_{k}}) & \cos ({{\varphi }_{k}}) \\ -\sin ({{\vartheta }_{k}}) & 0 \\ -\sin ({{\varphi }_{k}}) & -\cos ({{\varphi }_{k}})\cos ({{\vartheta }_{k}}) \\ \cos ({{\varphi }_{k}}) & -\sin ({{\varphi }_{k}})\cos ({{\vartheta }_{k}}) \\ 0 & \sin ({{\vartheta }_{k}}) \\ \end{array} \right] , \end{aligned}$$
(58)
and
$$\begin{aligned} {\varvec{g}_{k}}\triangleq \left[ \begin{matrix} \sin ({{\zeta }_{k}}){{e}^{j{{\eta }_{k}}}} \\ \cos ({{\zeta }_{k}}) \\ \end{matrix} \right] . \end{aligned}$$
(59)
Since the steering vector thus separates into a matrix depending only on direction and a vector depending only on the polarization parameters, Eq. 44 becomes:
$$\begin{aligned} {{\varvec{{\hat{b}}}}_{k}}\approx {{\bar{\omega }}_{2,k}}{{\varvec{b}}_{k'}}={{\bar{\omega }}_{2,k}}{{\varvec{D}}_{k'}}{{\varvec{g}}_{k'}}. \end{aligned}$$
(60)
After estimating the 2D-DOA pair \(({{\hat{\varphi }}_{k}},{{{\hat{\vartheta }}}_{k}})\), the direction-dependent matrix can be reconstructed (denoted \({\varvec{{\hat{D}}}_{k}}\)), and the polarization vector is then recovered by least squares:
$$\begin{aligned} {{\varvec{{\hat{g}}}}_{k}}\triangleq \varvec{{\hat{D}}}_{k}^{\dagger }{{\varvec{{\hat{b}}}}_{k}}, \end{aligned}$$
(61)
where \({\varvec{{\hat{g}}}_{k}}\) represents the estimate of \({{{\bar{\omega }}}_{2,k}}{\varvec{g}_{k'}}\); since \({{\bar{\omega }}_{2,k}}\) is a scalar, it cancels in the ratio of the elements of \({\varvec{{\hat{g}}}_{k}}\). The estimates of \({{\eta }_{{{k}'}}}\) and \({{\zeta }_{{{k}'}}}\) can therefore be obtained as follows:
$$\begin{aligned} & {{{\hat{\eta }}}_{k}}\triangleq \text {phase}\left\{ {{{\varvec{{\hat{g}}}}}_{k}}(1)/{{{\varvec{{\hat{g}}}}}_{k}}(2) \right\} , \end{aligned}$$
(62a)
$$\begin{aligned} & {{{\hat{\zeta }}}_{k}}\triangleq \text {arctan}\left\{ \left| {{{\varvec{{\hat{g}}}}}_{k}}(1)/{{{\varvec{{\hat{g}}}}}_{k}}(2) \right| \right\} , \end{aligned}$$
(62b)

4 Algorithm Analysis

Remark 1
As stated in Eq. 42, the factor matrices \(\varvec{{\hat{A}}}\) and \(\varvec{{\hat{B}}}\) estimated by the algorithm exhibit the same column ambiguity, which establishes a one-to-one correspondence between the column vectors of \(\varvec{{\hat{A}}}\) and \(\varvec{{\hat{B}}}\). Since the coarse and fine direction cosine estimates depend on the k-th columns of \(\varvec{{\hat{B}}}\) and \(\varvec{{\hat{A}}}\), respectively, the coarse and fine estimates are paired. In addition, the parameter pair \(({{{\hat{\eta }}}_{k}},{{{\hat{\zeta }}}_{k}})\) can be directly mapped to the 2D angle pair \(({{{\hat{\varphi }}}_{k}},{{\hat{\vartheta }}_{k}})\), which means that all 2D-DOA and polarization parameters are automatically paired.
Remark 2
The fine direction cosine estimation in the proposed algorithm is not restricted by the array geometry. Therefore, the algorithm is applicable to arbitrary EMVS array geometries. However, it is worth noting that compared to 1D array geometries, 2D or 3D array geometries are more advantageous in achieving accurate 2D-DOA estimation. In 1D array geometries arranged along the coordinate axes, fine processing can only update one direction cosine.
Remark 3
Usually, a larger array element spacing results in a larger array aperture, which improves the angular resolution. Therefore, from the perspective of estimation accuracy, sparse array geometries offer more advantages in the proposed algorithm compared to half-wavelength spaced array geometries.

4.2 Identifiability

Identifiability, i.e., the maximum number of targets the algorithm can estimate, is an important indicator of algorithm performance. Assuming the targets have distinct DODs and DOAs, the Vandermonde matrices \(\varvec{A}\) and \(\varvec{B}\) are of full Kruskal rank. To analyze the limit on the number of targets, let \(\text {KR}\ [\varvec{A}]=M\), \(\text {KR}\ [\varvec{B}]=6\), \(\text {KR}\ [\varvec{V}]=L\); then Eq. 41 becomes:
$$\begin{aligned} \frac{M+L}{2}+2\ge R. \end{aligned}$$
(63)
Obviously, the PARAFAC-based algorithm can discriminate up to \((M+L)/2+2\) sources. For subspace-based methods, identifiability is instead constrained by rotational invariance and eigendecomposition. A comparison of algorithm identifiability is shown in Table 1.
Table 1
Analysis of Identifiability and Computational Burden

| Algorithm   | Geometry            | Aperture (\(\cdot \lambda /2\)) x/y | Identifiability                    |
| ESPRIT      | ULA                 | \(M_1M_2\) / 1                      | \(\text {min}\{M_1M_2-1, L\}\)     |
| IESPRIT     | Sparse planar array | \(M_1\) / \(M_2\)                   | \(\text {min}\{M_1-1, M_2-1, L\}\) |
| ESPRIT-Like | Arbitrary           | 1 / 1                               | \(\text {min}\{M, L\}\)            |
| MPARAFAC    | Coprime array       | \(\text {min}\{M_1, M_2\}\) / 1     | \(\frac{M+L}{2} + 2\)              |
| Proposed    | Arbitrary           | \(M_1\) / \(M_2\)                   | \(\frac{M+L}{2} + 2\)              |

4.3 CRB

The derivation of the Cramér-Rao Bound (CRB) is based on the model in Eq. 6, where \({\varvec{R}_{n}}=\varvec{Q}(\varvec{q})\) and \(\varvec{q}={{[{{q}_{1}},{{q}_{2}},\cdots ,{{q}_{P}}]}^{T}}\) is the real-valued vector of unknown parameters of \({\varvec{R}_{n}}\). Following reference [30], the CRB on the 2D-DOA and polarization parameters is:
$$\begin{aligned} CRB\left( \vartheta ,\varphi \right) =\frac{1}{L}{{\left[ \varvec{H}-\varvec{M}{{\varvec{T}}^{-1}}{{\varvec{M}}^{T}} \right] }^{-1}}, \end{aligned}$$
(64)
where:
[The block matrices \(\varvec{H}\), \(\varvec{M}\) and \(\varvec{T}\) are built from the quantities defined below; see reference [30] for their explicit expressions.]
(65)
Here \(\text {Re}\{\cdot \}\) denotes the real part of a complex matrix, \(\varvec{{\tilde{C}}}=\varvec{R}_{n}^{-1/2}\varvec{C}\), \(\varvec{C}=\varvec{A}\odot \varvec{B}\), \(\varvec{\varPi }_{{{\tilde{C}}}}^{\bot }=\varvec{I}-{\varvec{\varPi }_{{{\tilde{C}}}}}\), \({\varvec{\varPi }_{{{\tilde{C}}}}}=\varvec{{\tilde{C}}}{{\varvec{{\tilde{C}}}}^{\dagger }}\), \(\varvec{{\tilde{D}}}=[{{\varvec{{\tilde{D}}}}_{d}},{{\varvec{{\tilde{D}}}}_{p}}]\), \({{\varvec{{\tilde{D}}}}_{d}}=\varvec{R}_{n}^{-1/2}{{\varvec{D}}_{d}}\), \({{\varvec{{\tilde{D}}}}_{p}}=\varvec{R}_{n}^{-1/2}{{\varvec{D}}_{p}}\), and \({{\varvec{1}}_{2\times 2}}\in {{\mathbb {R}}^{2\times 2}}\) is the all-ones matrix. Let \({\varvec{c}_{k}}\) denote the k-th column of \(\varvec{C}\); then:
$$\begin{aligned} \begin{aligned}&\varvec{D}_{d} = \left[ \frac{\partial \varvec{c}_{1}}{\partial \vartheta _{1}}, \frac{\partial \varvec{c}_{2}}{\partial \vartheta _{2}}, \cdots , \frac{\partial \varvec{c}_{K}}{\partial \vartheta _{K}}, \frac{\partial \varvec{c}_{1}}{\partial \varphi _{1}}, \frac{\partial \varvec{c}_{2}}{\partial \varphi _{2}}, \cdots , \frac{\partial \varvec{c}_{K}}{\partial \varphi _{K}} \right] \\&\varvec{D}_{p} = \left[ \frac{\partial \varvec{c}_{1}}{\partial \eta _{1}}, \frac{\partial \varvec{c}_{2}}{\partial \eta _{2}}, \cdots , \frac{\partial \varvec{c}_{K}}{\partial \eta _{K}}, \frac{\partial \varvec{c}_{1}}{\partial \zeta _{1}}, \frac{\partial \varvec{c}_{2}}{\partial \zeta _{2}}, \cdots , \frac{\partial \varvec{c}_{K}}{\partial \zeta _{K}} \right] \end{aligned}, \end{aligned}$$
(66)
Moreover, \({{\varvec{{\tilde{R}}}}_{z}}={{\varvec{Q}}^{-1/2}}{{\varvec{R}}_{z}}{{\varvec{Q}}^{-1/2}}\) and \(\varvec{J}=[vec\{{{\varvec{e}}_{1}}\varvec{e}_{1}^{T}\},vec\{{{\varvec{e}}_{2}}\varvec{e}_{2}^{T}\},\cdots ,vec\{{{\varvec{e}}_{K}}\varvec{e}_{K}^{T}\}]\), where \({\varvec{e}_{k}}\) denotes the k-th column of \({\varvec{I}_{K}}\) and \(vec\{\cdot \}\) denotes vectorization. Finally, \(\varvec{{\tilde{Q}}}=[vec\{{{\varvec{{{\tilde{Q}}}'}}_{1}}\},vec\{{{\varvec{{{\tilde{Q}}}'}}_{2}}\},\cdots ,vec\{{{\varvec{{{\tilde{Q}}}'}}_{P}}\}]\), where \({{\varvec{{{\tilde{Q}}}'}}_{p}}={{\varvec{Q}}^{-1/2}}{{\varvec{{Q}'}}_{p}}{{\varvec{Q}}^{-1/2}}\) and \({{\varvec{{Q}'}}_{p}}=\partial \varvec{Q}/\partial {\varvec{q}_{p}}\).

5 Simulation Results

To evaluate the performance of the proposed algorithm, we employ the Root Mean Square Error (RMSE) as the primary metric. RMSE is a widely used statistical measure of estimation accuracy: it quantifies the deviation between estimated and true values, providing a comprehensive evaluation of performance [9, 11]. The RMSE for the 2D-DOA estimation is defined as:
$$\begin{aligned} \text {RMSE} = \sqrt{\frac{1}{N_{\text {MC}}K} \sum _{n=1}^{N_{\text {MC}}} \sum _{k=1}^K \left[ ({\hat{\theta }}_k^{(n)} - \theta _k)^2 + ({\hat{\phi }}_k^{(n)} - \phi _k)^2 \right] }, \end{aligned}$$
(67)
where \(N_{\text {MC}} = 500\) represents the number of Monte Carlo trials, ensuring statistical reliability, while \(K\) denotes the number of targets. Here, \({\hat{\theta }}_k^{(n)}\) and \({\hat{\phi }}_k^{(n)}\) are the estimated azimuth and elevation angles for the \(k\)-th target in the \(n\)-th trial.
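As a sanity check of Eq. (67), the sketch below computes the RMSE for synthetic estimates whose errors are i.i.d. Gaussian with an assumed standard deviation of 0.1° in each angle (the estimator itself is mocked, not the paper's algorithm):

```python
import numpy as np

# Monte Carlo evaluation of the RMSE in Eq. (67). The estimator is mocked
# by adding i.i.d. Gaussian errors (assumed std 0.1 deg) to the true angles.
rng = np.random.default_rng(0)
n_mc, K = 500, 3
theta_true = np.array([20.0, 30.0, 40.0])            # elevation (deg)
phi_true = np.array([15.0, 25.0, 35.0])              # azimuth (deg)
sigma = 0.1                                          # assumed error std (deg)
theta_hat = theta_true + sigma * rng.standard_normal((n_mc, K))
phi_hat = phi_true + sigma * rng.standard_normal((n_mc, K))

# Eq. (67): average the two squared angle errors over trials and targets
rmse = np.sqrt(np.mean((theta_hat - theta_true) ** 2
               + (phi_hat - phi_true) ** 2))
```

With independent errors of standard deviation \(\sigma\) in both angles, the expected RMSE is \(\sigma \sqrt{2}\), which the sample value approaches as \(N_{\text {MC}}\) grows.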
In our experiments, we verify the effectiveness of the proposed algorithm using Monte Carlo methods. Consider an EMVS receiving array consisting of M elements with L available snapshots, and assume that \(K=3\) far-field TEM signals impinge on the array in a non-uniform noise background. The parameters are set to \(\vartheta ={{[{{20}^{\circ }},{{30}^{\circ }},{{40}^{\circ }}]}^{T}}\), \(\varphi ={{[{{15}^{\circ }},{{25}^{\circ }},{{35}^{\circ }}]}^{T}}\), \(\zeta ={{[{{12}^{\circ }},{{39}^{\circ }},{{63}^{\circ }}]}^{T}}\), \(\eta ={{[{{13}^{\circ }},{{47}^{\circ }},-{{21}^{\circ }}]}^{T}}\), and each estimate is based on 500 independent experiments. To ensure a fair comparison with existing methods, we compare the proposed algorithm with representative matrix-decomposition algorithms: the ESPRIT algorithm [12], the ESPRIT-Like method [31], and the IESPRIT method [38]. Additionally, we include the CRB corresponding to an L-shaped array (labeled "CRB-L") for comparison of estimation accuracy. ESPRIT-Like and IESPRIT are tested under a sparse L-array geometry with \({{M}_{1}}=2\) and \({{M}_{2}}=3\), while ESPRIT uses an EMVS uniform linear array (ULA) with half-wavelength spacing and \(M={{M}_{1}}\times {{M}_{2}}\) elements. The scatter plot of the simulation results for an arbitrary EMVS array is shown in Fig. 2.
Fig. 2
Scatter results of the proposed estimator with arbitrary EMVS array
Full size image
Experiment 1: Setting \(M=4\) and \(L=200\), we plot the root mean square error (RMSE) curves of the 2D-DOA estimates as a function of SNR. Figure 3 shows that the RMSE performance of all algorithms improves as the SNR increases. Additionally, owing to their larger array apertures, both IESPRIT and the proposed method achieve better accuracy than ESPRIT and ESPRIT-Like. Furthermore, by exploiting the tensor structure, the proposed method outperforms IESPRIT when \(\text {SNR}\ge -10\text { dB}\).
Fig. 3
Comparison RMSE performance versus SNR
Full size image
Experiment 2: Fixing the SNR at 10 dB and keeping other conditions the same as in Experiment 1, we plot the average RMSE performance as a function of L. Figure 4 shows that the RMSE of all algorithms improves as L increases. Notably, the proposed method consistently exhibits the best RMSE performance across the entire range of L. The improvement is especially significant when \(L \ge 60\): while a substantial gap remains between the other methods and the CRB, the proposed method approaches the CRB, highlighting its superior estimation accuracy.
Fig. 4
Comparison RMSE performance versus L
Full size image
Experiment 3: Fixing the SNR at 10 dB and \({{M}_{2}}=3\), and keeping the other conditions the same as in Experiment 1, the average RMSE performance is plotted as a function of \({{M}_{1}}\). Figure 5 shows that the CRB decreases as \({{M}_{1}}\) increases, mainly due to the increased aperture of the L-shaped array. Consistent with previous results, both IESPRIT and the proposed method significantly outperform the other methods over the entire range of \({{M}_{1}}\). Moreover, the RMSE of the proposed method is lower than that of IESPRIT in all cases, especially when \({{M}_{1}}<5\).
Fig. 5
Comparison RMSE performance versus \({M}_{1}\)
Full size image
Experiment 4: Fixing the SNR at 10 dB, setting the inter-element spacing \(d_2 = 2.5\lambda \), and keeping the other conditions the same as in Experiment 1, the average RMSE performance is plotted as a function of \(d_1\), as shown in Fig. 6. In this experiment, we focus solely on the variation of the element spacing in the L-shaped array, as the sensor positions in the coprime array are fixed. The results show that the RMSE of the proposed method improves rapidly as \(d_1\) increases. However, when \(d_1\) reaches \(10\lambda \) or more, the RMSE stabilizes with minimal changes. When \(d_1 > 15\lambda \), the array aperture exceeds the Rayleigh distance, and the resulting wavefront curvature violates the plane-wave assumption; this explains the saturation of the RMSE improvement beyond \(d_1 = 10\lambda \). It should be noted that IESPRIT occasionally fails at certain values of \(d_1\), whereas the proposed method remains robust in all scenarios. In addition, the RMSE gap between the proposed method and ESPRIT and ESPRIT-Like remains approximately 10 dB, further validating the stability and superiority of the proposed method.
Fig. 6
Comparison RMSE performance versus \({d}_{1}\)
Full size image
Fig. 7
Comparison of RMSE performance versus K
Full size image
Experiment 5: We also plot the average RMSE performance versus the number of sources K; the results are shown in Fig. 7. The parameters are set to \({{M}_{1}}=4\), \({{M}_{2}}=5\), \({{d}_{1}}=2.5\lambda \), \({{d}_{2}}=5\lambda \), and the SNR is fixed at 20 dB. In this experiment, the K source parameters are generated at random. The results show that the RMSE performance of all algorithms degrades significantly as K increases. When \(K\le 7\), the proposed method performs significantly better than all other algorithms. The RMSE gap between the proposed method and the other algorithms narrows when \(8\le K<10\), and for \(K\ge 10\) the proposed method is gradually surpassed by IESPRIT and ESPRIT, though it still outperforms the ESPRIT-Like method.

6 Conclusion

In this paper, we revisit the problem of non-uniform noise in 2D angle estimation with EMVS arrays and propose a signal processing framework based on tensor analysis. First, a covariance tensor model suitable for angle estimation is constructed. Then, the non-uniform noise suppression problem is transformed into a tensor completion problem, and the noise-free covariance tensor is recovered using existing efficient algorithms. The covariance tensor is then decomposed with the PARAFAC algorithm. Finally, the least-squares method is used to realize the joint estimation of the target DOD and DOA. The algorithm suppresses non-uniform noise in the spatial domain without loss of array aperture. Simulation results show that the proposed method achieves excellent estimation accuracy and strong robustness for random EMVS arrays. This technology holds significant engineering value for 5G/6G massive MIMO systems, multitarget tracking in low-SNR environments (e.g., urban canyon radar detection), and polarization-sensitive applications such as weather radar and automotive sensing. Future extensions could explore deep learning-driven dynamic noise adaptation and near-field localization scenarios.

Declarations

Conflict of Interest

The authors declare that there is no conflict of interest with respect to the publication of this document.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Title
A De-noising 2-D DOA Estimation Method for Random EMVS Arrays
Authors
Yunzhe Ruan
Shanshan Li
Publication date
10-05-2025
Publisher
Springer US
Published in
Circuits, Systems, and Signal Processing / Issue 9/2025
Print ISSN: 0278-081X
Electronic ISSN: 1531-5878
DOI
https://doi.org/10.1007/s00034-025-03132-7
1.
go back to reference S. Ahmadi-Asl, S. Abukhovich, M.G. Asante-Mensah, A. Cichocki, A.H. Phan, T. Tanaka, I. Oseledets, Randomized algorithms for computation of tucker decomposition and higher order svd (hosvd). IEEE Access. 9, 28684–28706 (2021)CrossRef
2.
go back to reference E. Baidoo, J. Hu, B. Zeng, B.D. Kwakye, Joint dod and doa estimation using tensor reconstruction based sparse representation approach for bistatic mimo radar with unknown noise effect. Signal Process. 182, 107912 (2021)CrossRef
3.
go back to reference Y. Ding, J. Shi, Z. Yang, Z. Zhang, Y. Liu, X. Li, D2cn: Distributed deep convolutional network. IEEE Trans. Signal Process. 73, 1309–1322 (2025)MathSciNetCrossRef
4.
go back to reference M.-Y. Dong, W. Hai-Long, T. Wang, K. Huang, H. Ren, Yu. Ru-Qin, Parafacm: A second-order calibration algorithm for handling data with missing values. Chemom. Intell. Lab. Syst. 244, 105030 (2024)CrossRef
5.
go back to reference A. Ertug Zorkun, M.A. Salas-Natera, R.M. Rodríguez-Osorio, S. Chatzinotas, Energy efficient low-complexity ris-aided 3-d doa estimation and target tracking algorithm via matrix completion. IEEE Access 12, 197929–197941 (2024)CrossRef
6.
go back to reference Y. Fang, S. Zhu, Y. Gao, C. Zeng, Doa estimation for coherent signals with improved sparse representation in the presence of unknown spatially correlated gaussian noise. IEEE Trans. Veh. Technol. 69(9), 10059–10069 (2020)CrossRef
7.
go back to reference M. Grant, S. Boyd. Cvx: Matlab software for disciplined convex programming. http://cvxr.com/cvx (2014)
8.
go back to reference M. Haardt, F. Roemer, G. Del Galdo, Higher-order svd-based subspace estimation to improve the parameter estimation accuracy in multidimensional harmonic retrieval problems. IEEE Trans. Signal Process. 56(7), 3198–3213 (2008)MathSciNetCrossRef
9.
go back to reference T.O. Hodson, Root mean square error (rmse) or mean absolute error (mae): When to use them or not. Geosci. Model Develop. Discuss. 2022, 1–10 (2022)
10. J. Huang, L. Cui, Tensor singular spectrum decomposition: multisensor denoising algorithm and application. IEEE Trans. Instrum. Meas. 72, 1–15 (2023)
11. M.T. Hussain, M.R. Hussan, M. Tariq, A. Sarwar, S. Ahmad, M. Poshtan, H.A. Mahmoud, Archimedes optimization algorithm based parameter extraction of photovoltaic models on a decent basis for novel accurate RMSE calculation. Front. Energy Res. 11, 1326313 (2024)
12. J. Li, R.T. Compton, Angle and polarization estimation using ESPRIT with a polarization sensitive array. IEEE Trans. Antennas Propag. 39(9), 1376–1383 (1991)
13. J. Li, D. Li, D. Jiang, X. Zhang, Extended-aperture unitary root MUSIC-based DOA estimation for coprime array. IEEE Commun. Lett. 22(4), 752–755 (2018)
14. Z. Li, W. Wang, R. Jiang, S. Ren, X. Wang, C. Xue, Hardware acceleration of MUSIC algorithm for sparse arrays and uniform linear arrays. IEEE Trans. Circuits Syst. I Regul. Pap. 69(7), 2941–2954 (2022)
15. J. Liu, P. Musialski, P. Wonka, J. Ye, Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 208–220 (2012)
16. L. Ma, E. Solomonik, Fast and accurate randomized algorithms for low-rank tensor decompositions. Adv. Neural Inf. Process. Syst. 34, 24299–24312 (2021)
17. L. Ma, E. Solomonik, Accelerating alternating least squares for tensor decomposition by pairwise perturbation. Numer. Linear Algebra Appl. 29(4), e2431 (2022)
18. A.M. Molaei, B. Zakeri, S. Andargoli, M. Abbasi, V. Fusco, O. Yurduseven, A comprehensive review of direction-of-arrival estimation and localization approaches in mixed-field sources scenario. IEEE Access 12, 65883–65918 (2024)
19. A. Nehorai, E. Paldi, Vector-sensor array processing for electromagnetic source localization. IEEE Trans. Signal Process. 42(2), 376–398 (1994)
20. W. Nie, D.-Z. Feng, J.-L. Xie, P.-F. Xu, Improved MUSIC algorithm for high resolution angle estimation. Signal Process. 122, 87–92 (2016)
21. A. Nilay, A. Erkan, İ. Nihat, Direction of arrival estimation in time modulated linear arrays using matrix pencil method with single snapshot and optimized time steps. AEU Int. J. Electron. Commun. 150 (2022)
22. Y. Pan, L. Zhang, L. Xu, F. Duan, DOA estimation on one-bit quantization observations through noise-boosted multiple signal classification. Sensors 24(14), 4719 (2024)
23. S. Qiu, X. Ma, R. Zhang, Y. Han, W. Sheng, A dual-resolution unitary ESPRIT method for DOA estimation based on sparse co-prime MIMO radar. Signal Process. 202, 108753 (2023)
24. R. Schmidt, Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 34(3), 276–280 (1986)
25. J. Shi, F. Wen, T. Liu, Nested MIMO radar: coarrays, tensor modeling, and angle estimation. IEEE Trans. Aerosp. Electron. Syst. 57(1), 573–585 (2020)
26. P. Stoica, A. Nehorai, MUSIC, maximum likelihood, and Cramér-Rao bound. IEEE Trans. Acoust. Speech Signal Process. 37(5), 720–741 (1989)
27. Y. Tian, X. He, Calibration nested arrays for underdetermined DOA estimation using fourth-order cumulant. IEEE Commun. Lett. 24(9), 1949–1952 (2020)
28. S. Wang, Z. Zhou, Z. Zheng, J. Sun, H. Cao, S. Song, Z.-L. Deng, F. Qin, Y. Cao, X. Li, Topological structures of energy flow: Poynting vector skyrmions. Phys. Rev. Lett. 133(7), 073802 (2024)
29. F. Wen, H. Wang, G. Gui, H. Sari, F. Adachi, Polarized intelligent reflecting surface aided 2D-DOA estimation for NLOS sources. IEEE Trans. Wireless Commun. 23(7), 8085–8098 (2024)
30. F. Wen, X. Zhang, Z. Zhang, CRBs for direction-of-departure and direction-of-arrival estimation in collocated MIMO radar in the presence of unknown spatially coloured noise. IET Radar Sonar Navig. 13(4), 530–537 (2019)
31. K.T. Wong, M.D. Zoltowski, Closed-form direction finding and polarization estimation with arbitrarily spaced electromagnetic vector-sensors at unknown locations. IEEE Trans. Antennas Propag. 48(5), 671–681 (2000)
32. F. Xu, M.W. Morency, S.A. Vorobyov, DOA estimation for transmit beamspace MIMO radar via tensor decomposition with Vandermonde factor matrix. IEEE Trans. Signal Process. 70, 2901–2917 (2022)
33. Z. Yang, X. Chen, X. Wu, A robust and statistically efficient maximum-likelihood method for DOA estimation using sparse linear arrays. IEEE Trans. Aerosp. Electron. Syst. 59(5), 6798–6812 (2023)
34. Z. Yu, W. Liu, H. Chen, L. Jin, G. Xu, J. Liu, 2-D DOA estimation algorithm for three-parallel co-prime arrays via spatial–temporal processing. Circuits Syst. Signal Process. 1–14 (2024)
35. J. Yuan, Y. Gong, G. Zhang, Y. Zhang, H. Leung, A gridless fourth-order cumulant-based DOA estimation method under unknown colored noise. IEEE Wireless Commun. Lett. 11(5), 1037–1041 (2022)
36. N. Yuen, B. Friedlander, DOA estimation in multipath: an approach using fourth-order cumulants. IEEE Trans. Signal Process. 45(5), 1253–1263 (1997)
37. X. Zhang, F. Zhang, J. He, W. Gao, An enhanced matrix pencil method for parameter identification of sub-/super-synchronous oscillations using synchrophasors. IEEE Trans. Ind. Inform. (2024)
38. M.D. Zoltowski, K.T. Wong, ESPRIT-based 2-D direction finding with a sparse uniform array of electromagnetic vector sensors. IEEE Trans. Signal Process. 48(8), 2195–2204 (2000)
39. T. Zong, J. Li, G. Lu, Maximum likelihood LM identification based on particle filtering for scarce measurement-data MIMO Hammerstein Box-Jenkins systems. Math. Comput. Simul. 230, 241–255 (2025)