
Open Access 01.12.2016 | Research

Least-squares Hermitian problem of complex matrix equation \((AXB,CXD)=(E,F)\)

Authors: Peng Wang, Shifang Yuan, Xiangyun Xie

Published in: Journal of Inequalities and Applications | Issue 1/2016


Abstract

In this paper, we present a direct method for solving the least-squares Hermitian problem of the complex matrix equation \((AXB,CXD)=(E,F)\) with arbitrary complex coefficient matrices A, B, C, D and right-hand sides E, F. The method determines the least-squares Hermitian solution with the minimum norm; it relies on a matrix-vector product and the Moore-Penrose generalized inverse. Numerical experiments demonstrating the efficiency of the proposed method are presented.

1 Introduction

Let A, B, C, D, E, and F be given matrices of suitable sizes over the complex number field. We are interested in the analysis of the linear matrix equation
$$ (AXB,CXD)=(E,F) $$
(1)
to be solved for \(X\in \mathbf{C}^{n\times n}\). Here and in the following \(\mathbf{C}^{m\times n}\) denotes the set of all \(m\times n\) complex matrices, while the set of all \(m\times n\) real matrices is denoted by \(\mathbf{R}^{m\times n}\). In particular, we focus on the least-squares Hermitian solution with the least norm of (1), which can be described as follows.
Problem 1
Given \(A\in \mathbf{C}^{m\times n}\), \(B\in \mathbf{C}^{n\times s}\), \(C\in \mathbf{C}^{m\times n}\), \(D\in \mathbf{C}^{n\times t}\), \(E\in \mathbf{C}^{m\times s}\), and \(F\in \mathbf{C}^{m\times t}\), let
$$\begin{aligned} H_{L} =&\Bigl\{ X|X\in {\mathbf{HC}}^{n\times n}, \\ &\Vert AXB-E\Vert^{2}+\Vert CXD-F\Vert^{2}=\min_{X_{0}\in {\mathbf{HC}}^{n\times n}} \bigl(\Vert AX_{0}B-E\Vert^{2}+\Vert CX_{0}D-F \Vert^{2} \bigr)\Bigr\} . \end{aligned}$$
Find \(X_{H}\in H_{L}\) such that
$$\begin{aligned} \|X_{H}\|=\min_{X\in H_{L}}\|X\|, \end{aligned}$$
(2)
where \(\mathbf{HC}^{n\times n}\) denotes the set of all \(n\times n\) Hermitian matrices.
Correspondingly, the set of all \(n\times n\) real symmetric matrices and the set of all \(n\times n\) real anti-symmetric matrices are denoted by \(\mathbf{SR}^{n\times n}\) and \(\mathbf{ASR}^{n\times n}\), respectively.
Matrix equations are very useful in numerous applications such as control theory [1, 2], vibration theory [3], image processing [4, 5], and so on; solving different matrix equations is therefore an active area of research [2, 6–15]. For the real matrix equation (1), least-squares solutions with the minimum norm were obtained in [16] by using the generalized singular value decomposition and the canonical correlation decomposition (CCD) of matrices. In [17], the quaternion matrix equation (1) was considered, and the least-squares solution with the least norm was given through the use of the Kronecker product and the Moore-Penrose generalized inverse. Although matrix equations of the form (1) have been studied in the literature, little attention has been paid to the least-squares Hermitian solution of (1) over the complex field, which is the subject of this paper. A special vectorization, defined in [18], is applied to the matrix equation (1), and Problem 1 is thereby turned into an unconstrained least-squares problem for a system of real linear equations.
The notation used in this paper is summarized as follows. For \(A\in \mathbf{C}^{m\times n}\), the symbols \(A^{T}\), \(A^{H}\), and \(A^{+}\) denote the transpose, the conjugate transpose, and the Moore-Penrose generalized inverse of the matrix A, respectively. The identity matrix is denoted by I. For \(A=(a_{ij})\in\mathbf{C}^{m\times n}\) and \(B=(b_{ij})\in \mathbf{C}^{p\times q}\), the Kronecker product of A and B is defined by \(A\otimes B=(a_{ij}B)\in \mathbf{C}^{mp\times nq}\). For a matrix \(A\in \mathbf{C}^{m\times n}\), the stretching operator \(\operatorname{vec}(A)\) is defined by \(\operatorname{vec}(A)=(a_{1}^{T},a_{2}^{T},\ldots,a_{n}^{T})^{T}\), where \(a_{i}\) is the ith column of A.
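As a concrete illustration (our sketch, not part of the paper), the following MATLAB lines check the standard identity \(\operatorname{vec}(AXB)=(B^{T}\otimes A)\operatorname{vec}(X)\), which underlies the vectorization arguments used below; all data are random and the variable names are illustrative.

% check vec(AXB) = (B^T kron A) vec(X) on random complex data
A = rand(3,4) + 1i*rand(3,4);
X = rand(4,4) + 1i*rand(4,4);
B = rand(4,5) + 1i*rand(4,5);
lhs = reshape(A*X*B, [], 1);   % vec(AXB): stack the columns
rhs = kron(B.', A) * X(:);     % note .' (transpose), not ' (conjugate transpose)
disp(norm(lhs - rhs))          % of the order of machine precision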
For all \(A,B\in \mathbf{C}^{m\times n}\), we define the inner product \(\langle A, B\rangle=\operatorname{tr}(A^{H}B)\). Then \(\mathbf{C}^{m\times n}\) is a Hilbert inner product space, and the norm generated by this inner product is the Frobenius matrix norm \(\|\cdot\|\). Further, denote the linear space \(\mathbf{C}^{m\times n}\times \mathbf{C}^{m \times t}= \{[A,B]\mid A\in \mathbf{C}^{m\times n} , B \in \mathbf{C}^{m \times t}\}\); for matrix pairs \([A_{i},B_{i}]\in \mathbf{C}^{m\times n}\times \mathbf{C}^{m \times t}\) (\(i=1,2\)), we define their inner product by \(\langle[A_{1},B_{1}],[A_{2},B_{2}]\rangle=\operatorname{tr}(A_{2}^{H}A_{1})+\operatorname{tr}(B_{2}^{H}B_{1})\). Then \(\mathbf{C}^{m\times n}\times \mathbf{C}^{m \times t}\) is also a Hilbert inner product space. The Frobenius norm of a matrix pair \([A,B]\in \mathbf{C}^{m\times n}\times \mathbf{C}^{m \times t}\) can be derived:
$$\begin{aligned} \bigl\| [A,B]\bigr\| & =\sqrt{\bigl\langle [A,B],[A,B]\bigr\rangle } \\ &=\sqrt{\operatorname{tr}\bigl(A^{H}A\bigr)+\operatorname{tr}\bigl(B^{H}B \bigr)} \\ &=\sqrt{\|A\|^{2}+\|B\|^{2}}. \end{aligned}$$
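In MATLAB terms (a one-line helper of ours, for illustration only), this pair norm reads:

normPair = @(A,B) sqrt(norm(A,'fro')^2 + norm(B,'fro')^2);   % ||[A,B]||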
The structure of the paper is the following. In Section 2, we deduce some results in the Hilbert inner product space \(\mathbf{C}^{m\times n}\times \mathbf{C}^{m \times t}\) which are important for our main results. In Section 3, we introduce a matrix-vector product for the matrices and vectors of \(\mathbf{C}^{m\times n}\), based on which we consider the structure of \((AXB, CXD)\) over \(X\in \mathbf{HC}^{n\times n}\). In Section 4, we derive the explicit expression for the solution of Problem 1. Finally, we state the algorithm for Problem 1 and perform some numerical experiments.

2 Some preliminaries

In this section, we prove some theorems that are important for the proof of our main result. Given \(A\in \mathbf{R}^{m\times n}\) and \(b\in \mathbf{R}^{m}\), the solution of the linear equations \(Ax=b\) involves two cases: inconsistent and consistent. The former leads to the solution in the least-squares sense, which can be expressed as \(\operatorname{argmin}_{x\in \mathbf{R}^{n}} \|Ax-b\|\). This problem can be solved via the corresponding normal equations:
$$\begin{aligned} A^{T}Ax=A^{T}b, \end{aligned}$$
(3)
moreover, we have
$$ \bigl\{ x|x\in \mathbf{R}^{n}, \|Ax-b\|=\min\bigr\} =\bigl\{ x|x\in \mathbf{R}^{n}, A^{T}Ax=A^{T}b\bigr\} . $$
As for the latter, (3) still holds; moreover, the solution sets of \(Ax=b\) and (3) coincide, that is,
$$\begin{aligned} \bigl\{ x|x\in \mathbf{R}^{n}, Ax=b\bigr\} =\bigl\{ x|x\in \mathbf{R}^{n}, A^{T}Ax=A^{T}b\bigr\} . \end{aligned}$$
It should be noted that (3) is always consistent. Therefore, solving \(Ax=b\) is usually reduced to solving the corresponding consistent equations (3). In the following, we extend this conclusion to a more general case. To do this, we first state the following problem.
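The following MATLAB fragment (our sketch, with random data) illustrates the two routes just described: for a generically inconsistent overdetermined system, the solution of the normal equations (3) coincides with the Moore-Penrose least-squares solution whenever A has full column rank.

A = rand(8,3); b = rand(8,1);      % overdetermined, generically inconsistent
x_ne   = (A'*A) \ (A'*b);          % normal equations (3)
x_pinv = pinv(A)*b;                % minimum-norm least-squares solution
disp(norm(x_ne - x_pinv))          % agree when rank(A) = 3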
Problem 2
Given complex matrices \(A_{1},A_{2},\ldots,A_{l}\in \mathbf{C}^{m\times n}\), \(B_{1},B_{2},\ldots,B_{l}\in \mathbf{C}^{m\times n}\), \(C\in \mathbf{C}^{m\times n}\), and \(D\in \mathbf{C}^{m\times n}\), find \(k=(k_{1},k_{2},\ldots, k_{l})^{T} \in \mathbf{R}^{l}\) such that
$$\begin{aligned} \sum_{i=1}^{l}k_{i}[A_{i},B_{i}]= \Biggl[\sum_{i=1}^{l} k_{i}A_{i} , \sum_{i=1}^{l} k_{i}B_{i} \Biggr]=[C, D]. \end{aligned}$$
(4)
Theorem 1
Assume the matrix equation (4) in Problem 2 is consistent. Let
$$\begin{aligned} E_{i}=\left [\textstyle\begin{array}{c} \operatorname{Re}(A_{i})\\ \operatorname{Im}(A_{i})\\ \operatorname{Re}(B_{i})\\ \operatorname{Im}(B_{i}) \end{array}\displaystyle \right ],\qquad F=\left [\textstyle\begin{array}{c} \operatorname{Re}(C)\\ \operatorname{Im}(C)\\ \operatorname{Re}(D)\\ \operatorname{Im}(D) \end{array}\displaystyle \right ]. \end{aligned}$$
Then the set of vectors k that satisfies (4) is exactly the set that solves the following consistent system:
$$\begin{aligned} \left [\textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \langle E_{1},E_{1}\rangle &\langle E_{1},E_{2}\rangle&\cdots&\langle E_{1},E_{l}\rangle\\ \langle E_{2},E_{1}\rangle&\langle E_{2},E_{2}\rangle&\cdots&\langle E_{2},E_{l}\rangle\\ \vdots&\vdots& &\vdots\\ \langle E_{l},E_{1}\rangle&\langle E_{l},E_{2}\rangle&\cdots&\langle E_{l},E_{l}\rangle \end{array}\displaystyle \right ]\left [\textstyle\begin{array}{c} k_{1}\\ k_{2}\\ \vdots\\ k_{l}\end{array}\displaystyle \right ]= \left [\textstyle\begin{array}{c} \langle E_{1},F\rangle\\ \langle E_{2},F\rangle\\ \vdots\\ \langle E_{l},F\rangle\end{array}\displaystyle \right ]. \end{aligned}$$
(5)
Proof
By the equivalence between a consistent linear system and its normal equations (3), we have
$$\begin{aligned} & \Biggl[\sum_{i=1}^{l} k_{i}A_{i} , \sum_{i=1}^{l} k_{i}B_{i} \Biggr]= [C, D] \\ &\quad\Longleftrightarrow\quad\sum_{i=1}^{l}k_{i}[A_{i},B_{i}]= [C, D] \\ &\quad\Longleftrightarrow\quad\sum_{i=1}^{l}k_{i} \left [\textstyle\begin{array}{c} \operatorname{Re}(A_{i})\\ \operatorname{Im}(A_{i})\\ \operatorname{Re}(B_{i})\\ \operatorname{Im}(B_{i}) \end{array}\displaystyle \right ]=\left [\textstyle\begin{array}{c} \operatorname{Re}(C)\\ \operatorname{Im}(C)\\ \operatorname{Re}(D)\\ \operatorname{Im}(D) \end{array}\displaystyle \right ] \\ &\quad\Longleftrightarrow \quad \left [\textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \operatorname{vec} (\operatorname{Re} A_{1})^{T}&\operatorname{vec} (\operatorname{Im} A_{1})^{T}&\operatorname{vec} (\operatorname{Re} B_{1})^{T}&\operatorname{vec} (\operatorname{Im} B_{1})^{T}\\ \operatorname{vec} (\operatorname{Re} A_{2})^{T}&\operatorname{vec} (\operatorname{Im} A_{2})^{T}&\operatorname{vec} (\operatorname{Re} B_{2})^{T}&\operatorname{vec} (\operatorname{Im} B_{2})^{T}\\ \vdots&\vdots&\vdots&\vdots\\ \operatorname{vec} (\operatorname{Re} A_{l})^{T}&\operatorname{vec} (\operatorname{Im} A_{l})^{T}&\operatorname{vec} (\operatorname{Re} B_{l})^{T}&\operatorname{vec} (\operatorname{Im} B_{l})^{T}\end{array}\displaystyle \right ] \\ &\quad\quad\qquad\qquad{}\times \left [\textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \operatorname{vec} (\operatorname{Re} A_{1})&\operatorname{vec} (\operatorname{Re} A_{2})&\cdots&\operatorname{vec} (\operatorname{Re} A_{l})\\ \operatorname{vec} (\operatorname{Im} A_{1})&\operatorname{vec} (\operatorname{Im} A_{2})&\cdots&\operatorname{vec} (\operatorname{Im} A_{l})\\ \operatorname{vec} (\operatorname{Re} B_{1})&\operatorname{vec} (\operatorname{Re} B_{2})&\cdots&\operatorname{vec} (\operatorname{Re} B_{l})\\ \operatorname{vec} (\operatorname{Im} B_{1})&\operatorname{vec} (\operatorname{Im} B_{2})&\cdots&\operatorname{vec} (\operatorname{Im} B_{l})\\ \end{array}\displaystyle \right ]k \\ &\quad\qquad\qquad=\left [\textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \operatorname{vec} (\operatorname{Re} A_{1})^{T}&\operatorname{vec} (\operatorname{Im} A_{1})^{T}&\operatorname{vec} (\operatorname{Re} B_{1})^{T}&\operatorname{vec} (\operatorname{Im} B_{1})^{T}\\ \operatorname{vec} (\operatorname{Re} A_{2})^{T}&\operatorname{vec} (\operatorname{Im} A_{2})^{T}&\operatorname{vec} (\operatorname{Re} B_{2})^{T}&\operatorname{vec} (\operatorname{Im} B_{2})^{T}\\ \vdots&\vdots&\vdots&\vdots\\ \operatorname{vec} (\operatorname{Re} A_{l})^{T}&\operatorname{vec} (\operatorname{Im} A_{l})^{T}&\operatorname{vec} (\operatorname{Re} B_{l})^{T}&\operatorname{vec} (\operatorname{Im} B_{l})^{T}\end{array}\displaystyle \right ]\\ &\quad\qquad\qquad\quad{}\times \left [\textstyle\begin{array}{c} \operatorname{vec}(\operatorname{Re}C)\\ \operatorname{vec}(\operatorname{Im}C)\\ \operatorname{vec}(\operatorname{Re}D)\\ \operatorname{vec}(\operatorname{Im}D) \end{array}\displaystyle \right ] \\ &\quad\Longleftrightarrow \quad \left [\textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \langle E_{1},E_{1}\rangle &\langle E_{1},E_{2}\rangle&\cdots&\langle E_{1},E_{l}\rangle\\ \langle E_{2},E_{1}\rangle&\langle E_{2},E_{2}\rangle&\cdots&\langle E_{2},E_{l}\rangle\\ \vdots&\vdots& &\vdots\\ \langle E_{l},E_{1}\rangle&\langle E_{l},E_{2}\rangle&\cdots&\langle E_{l},E_{l}\rangle \end{array}\displaystyle \right ]\left [\textstyle\begin{array}{c} k_{1}\\ k_{2}\\ \vdots\\ k_{l}\end{array}\displaystyle \right ]= \left 
[\textstyle\begin{array}{c} \langle E_{1},F\rangle\\ \langle E_{2},F\rangle\\ \vdots\\ \langle E_{l},F\rangle\end{array}\displaystyle \right ]. \end{aligned}$$
Thus we have (5). □
We now turn to the case where the matrix equation (4) in Problem 2 is inconsistent; as with (3), the associated least-squares problem should then be considered.
Problem 3
Given matrices \(A_{1},A_{2},\ldots,A_{l}\in \mathbf{C}^{m\times n}\), \(B_{1},B_{2},\ldots,B_{l}\in \mathbf{C}^{s\times t}\), \(C\in \mathbf{C}^{m\times n}\), and \(D\in \mathbf{C}^{s\times t}\), find
$$k=(k_{1},k_{2},\ldots,k_{l})^{T}\in \mathbf{R}^{l} $$
such that
$$ \|k_{1}A_{1}+k_{2}A_{2}+ \cdots+k_{l}A_{l}-C\|^{2}+\|k_{1}B_{1}+k_{2}B_{2}+ \cdots+k_{l}B_{l}-D\|^{2}=\min. $$
(6)
Based on the results above, we state the following theorem, which shows that solving Problem 3 is equivalent to solving the consistent system (5).
Theorem 2
Suppose that the notations and conditions are the same as in Theorem 1. Then the solution set of Problem 3 is the solution set of system (5).
Proof
By (4), we have
$$\begin{aligned} & \Biggl\| \Biggl[\sum_{i=1}^{l} k_{i}A_{i}-C , \sum_{i=1}^{l} k_{i}B_{i}-D \Biggr] \Biggr\| ^{2} \\ &\quad= \Biggl\| \Biggl(\sum_{i=1}^{l} k_{i}(\operatorname{Re}A_{i})-(\operatorname{Re}C)\Biggr)+\sqrt{-1}\Biggl(\sum _{i=1}^{l} k_{i}(\operatorname{Im}A_{i})-(\operatorname{Im}C) \Biggr) \Biggr\| ^{2} \\ &\qquad{} +\Biggl\| \Biggl(\sum_{i=1}^{l} k_{i}(\operatorname{Re}B_{i})-(\operatorname{Re}D)\Biggr)+\sqrt{-1}\Biggl(\sum _{i=1}^{l} k_{i}(\operatorname{Im}B_{i})-(\operatorname{Im}D) \Biggr) \Biggr\| ^{2} \\ &\quad=\left\| \sum_{i=1}^{l} k_{i}\left [\textstyle\begin{array}{c} \operatorname{Re}(A_{i})\\ \operatorname{Im}(A_{i})\\ \operatorname{Re}(B_{i})\\ \operatorname{Im}(B_{i}) \end{array}\displaystyle \right ]-\left [\textstyle\begin{array}{c} \operatorname{Re}(C)\\ \operatorname{Im}(C)\\ \operatorname{Re}(D)\\ \operatorname{Im}(D) \end{array}\displaystyle \right ] \right\| ^{2} \\ &\quad= \Biggl\| \sum_{i=1}^{l} k_{i}E_{i}-F \Biggr\| ^{2}. \end{aligned}$$
Hence minimizing (6) amounts to the least-squares problem for the real linear system \(\sum_{i=1}^{l} k_{i}E_{i}=F\), whose normal equations are exactly (5). This implies the conclusion. □
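As a concrete illustration of Theorems 1 and 2 (our sketch; stk and M are hypothetical helper names), the Gram system (5) for random data is \(M^{T}Mk=M^{T}f\), where the columns of M are the stacked \(E_{i}\) and f is the stacked F:

l = 4; m = 3; n = 2; s = 2; t = 3;
stk = @(P,Q) [real(P(:)); imag(P(:)); real(Q(:)); imag(Q(:))];
M = zeros(2*m*n + 2*s*t, l);
for j = 1:l
    Aj = rand(m,n) + 1i*rand(m,n);  Bj = rand(s,t) + 1i*rand(s,t);
    M(:,j) = stk(Aj, Bj);                      % the stacked E_j of Theorem 1
end
f = stk(rand(m,n)+1i*rand(m,n), rand(s,t)+1i*rand(s,t));   % stacked [C,D]
k = pinv(M.'*M) * (M.'*f);                     % solves the Gram system (5)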

3 The structure of \((AXB, CXD)\) over \(X\in \mathbf{HC}^{n\times n}\)

Based on the discussion in [18], we first recall a matrix-vector product for vectors and matrices in \(\mathbf{C}^{m\times n}\).
Definition 1
Let \(x=(x_{1},x_{2},\ldots, x_{k})^{T}\in \mathbf{C}^{k}\), \(y=(y_{1},y_{2},\ldots, y_{k})^{T}\in \mathbf{C}^{k}\) and \(A=(A_{1},A_{2},\ldots, A_{k})\), \(A_{i}\in \mathbf{C}^{m\times n}\) (\(i=1,2,\ldots, k\)). Define
(i)
\(A\circ x=x_{1}A_{1}+x_{2}A_{2}+\cdots+x_{k} A_{k}\in \mathbf{C}^{m\times n}\);
 
(ii)
\(A\circ (x,y)=(A\circ x, A\circ y)\).
 
From Definition 1, we list the following facts, which are useful for solving Problem 1. Let \(y=(y_{1},y_{2},\ldots, y_{k})^{T}\in \mathbf{C}^{k}\), \(P=(P_{1},P_{2},\ldots,P_{k})\), \(P_{i}\in \mathbf{C}^{m\times n}\) (\(i=1,2,\ldots, k\)), and \(a,b\in \mathbf{C}\).
(i)
\(x^{H}\circ y=x^{H}y=(x,y)\);
 
(ii)
\(A\circ x+P\circ x=(A+P)\circ x\);
 
(iii)
\(A\circ(ax+by)=a(A\circ x)+b(A\circ y)\);
 
(iv)
\((aA+bP)\circ x=a(A\circ x)+b(P\circ x)\);
 
(v)
\((A,P)\circ[{\scriptsize\begin{matrix}{}x\cr y\end{matrix}} ]=A\circ x+P\circ y\);
 
(vi)
\([{\scriptsize\begin{matrix}{}A\cr P\end{matrix}} ]\circ x=[{\scriptsize\begin{matrix}{}A\circ x\cr P\circ x\end{matrix}} ]\);
 
(vii)
\(\operatorname{vec}(A\circ x)=(\operatorname{vec}(A_{1}),\operatorname{vec}(A_{2}),\ldots,\operatorname{vec}(A_{k}))x\);
 
(viii)
\(\operatorname{vec}((aA+bP)\circ x)=a\operatorname{vec}(A\circ x)+b\operatorname{vec}(P\circ x)\).
 
Suppose \(B=(B_{1},B_{2},\ldots,B_{s})\in \mathbf{C}^{k\times s}\), \(B_{i}\in \mathbf{C}^{k}\) (\(i=1,2,\ldots, s\)), \(C=(C_{1},C_{2},\ldots,C_{t})\in \mathbf{C}^{k\times t}\), \(C_{i}\in \mathbf{C}^{k}\) (\(i=1,2,\ldots, t\)), \(D\in \mathbf{C}^{l\times m}\), \(H\in \mathbf{C}^{n\times q}\). Then
(ix)
\(D(A\circ x)=(DA)\circ x\);
 
(x)
\((A\circ x)H=(A_{1}H, A_{2}H,\ldots,A_{k}H)\circ x\).
 
For \(X=\operatorname{Re}X+\sqrt{-1}\operatorname{Im}X\in\mathbf{HC}^{n\times n}\), the condition \(X^{H}=X\) gives \((\operatorname{Re} X+\sqrt{-1}\operatorname{Im} X)^{H}=\operatorname{Re} X+\sqrt{-1} \operatorname{Im}X\). Thus \(\operatorname{Re}X^{T}=\operatorname{Re}X\) and \(\operatorname{Im} X^{T}=-\operatorname{Im}X\).
Definition 2
For a matrix \(A\in \mathbf{R}^{n\times n}\), let \(a_{1}=(a_{11},a_{21},\ldots,a_{n1})\), \(a_{2}=(a_{22},a_{32}, \ldots, a_{n2})\), …, \(a_{n-1}=(a_{(n-1)(n-1)},a_{n(n-1)})\), \(a_{n}=a_{nn}\); the operator \(\operatorname{vec}_{S}(A)\) is defined by
$$\begin{aligned} \operatorname{vec}_{S}(A)=(a_{1},a_{2}, \ldots,a_{n-1},a_{n})^{T}\in \mathbf{R}^{\frac{n(n+1)}{2}}. \end{aligned}$$
(7)
Definition 3
For a matrix \(B\in \mathbf{R}^{n\times n}\), let \(b_{1}=(b_{21},b_{31},\ldots,b_{n1})\), \(b_{2}=(b_{32},b_{42}, \ldots,b_{n2})\), …, \(b_{n-2}=(b_{(n-1)(n-2)},b_{n(n-2)})\), \(b_{n-1}=b_{n(n-1)}\); the operator \(\operatorname{vec}_{A}(B)\) is defined by
$$\begin{aligned} \operatorname{vec}_{A}(B)=(b_{1},b_{2}, \ldots,b_{n-2},b_{n-1})^{T}\in \mathbf{R}^{\frac{n(n-1)}{2}}. \end{aligned}$$
(8)
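Definitions 2 and 3 stack the lower triangle of the matrix column by column, with and without the diagonal, respectively. A minimal MATLAB realization (our helper names vecS, vecA):

vecS = @(A) cell2mat(arrayfun(@(j) A(j:end,j), (1:size(A,1))', 'UniformOutput', false));
vecA = @(B) cell2mat(arrayfun(@(j) B(j+1:end,j), (1:size(B,1)-1)', 'UniformOutput', false));
X = magic(4);
disp(vecS(X)')   % n(n+1)/2 = 10 entries, ordered as in (7)
disp(vecA(X)')   % n(n-1)/2 = 6 entries, ordered as in (8)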
Let
$$\begin{aligned} E_{ij}=(e_{st})\in \mathbf{R}^{n\times n}, \end{aligned}$$
(9)
where
$$\begin{aligned} e_{st}=\left \{\textstyle\begin{array}{l@{\quad}l} 1 &(s,t)=(i,j),\\ 0 &\text{otherwise}. \end{array}\displaystyle \right . \end{aligned}$$
Let
$$\begin{aligned} K_{S} =&(E_{11},E_{21}+E_{12},\ldots,E_{n1}+E_{1n}, \\ &E_{22},E_{32}+E_{23}, \ldots,E_{n2}+E_{2n},\ldots, E_{(n-1)(n-1)},E_{n(n-1)}+E_{(n-1)n},E_{nn}). \end{aligned}$$
(10)
Note that \(K_{S}\in \mathbf{R}^{n\times\frac{n^{2}(n+1)}{2}}\).
Let
$$\begin{aligned} K_{A}=(E_{21}-E_{12},\ldots,E_{n1}-E_{1n},E_{32}-E_{23},\ldots,E_{n2}-E_{2n}, \ldots, E_{n(n-1)}-E_{(n-1)n}). \end{aligned}$$
(11)
Note that \(K_{A}\in \mathbf{R}^{n\times\frac{n^{2}(n-1)}{2}}\).
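The matrices \(K_{S}\) and \(K_{A}\) and the product ∘ of Definition 1 admit a direct MATLAB realization (our sketch; circ is our name for ∘). For a column vector x with k entries and \(A=(A_{1},\ldots,A_{k})\), the identity \(A\circ x=A(x\otimes I_{n})\) gives a one-line implementation.

n = 4;
Eij = @(i,j) full(sparse(i, j, 1, n, n));    % the unit matrix E_ij of (9)
KS = []; KA = [];
for j = 1:n                                  % ordering of (10) and (11)
    KS = [KS, Eij(j,j)];                     % E_jj
    for i = j+1:n
        KS = [KS, Eij(i,j) + Eij(j,i)];      % E_ij + E_ji
        KA = [KA, Eij(i,j) - Eij(j,i)];      % E_ij - E_ji
    end
end
circ = @(K, x) K * kron(x, eye(n));          % A o x = x_1*A_1 + ... + x_k*A_k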
Based on Definition 2, Definition 3, (10), and (11), we obtain the following lemmas, which are necessary for our main results.
Lemma 1
Suppose \(X\in \mathbf{R}^{n\times n}\), then
(i)
$$\begin{aligned} X\in {\mathbf{SR}}^{n\times n}\quad \Longleftrightarrow\quad X=K_{S}\circ\operatorname{vec}_{S}(X), \end{aligned}$$
(12)
 
(ii)
$$\begin{aligned} X\in {\mathbf{ASR}}^{n\times n} \quad\Longleftrightarrow\quad X=K_{A}\circ\operatorname{vec}_{A}(X). \end{aligned}$$
(13)
 
Lemma 2
Suppose \(X=\operatorname{Re} X+\sqrt{-1}\operatorname{Im} X\in \mathbf{C}^{n\times n}\), then
$$\begin{aligned} X\in {\mathbf{HC}}^{n\times n}\quad \Longleftrightarrow\quad X=K_{S}\circ\operatorname{vec}_{S}(\operatorname{Re} X)+ \sqrt{-1}K_{A}\circ\operatorname{vec}_{A}(\operatorname{Im} X). \end{aligned}$$
(14)
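Continuing the sketch above (n, KS, KA, and circ in scope; vecS and vecA as after Definition 3), Lemma 2 can be checked numerically for a random Hermitian matrix:

X0 = rand(n) + 1i*rand(n);  X = X0 + X0';    % random Hermitian matrix
Xrec = circ(KS, vecS(real(X))) + 1i*circ(KA, vecA(imag(X)));
disp(norm(X - Xrec))                         % checks (14): ~0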
Lemma 3
Given \(A\in \mathbf{C}^{m\times n}\), \(B\in \mathbf{C}^{n\times s}\), \(C\in \mathbf{C}^{p\times n}\), \(D\in \mathbf{C}^{n\times q}\), and \(X=\operatorname{Re} X+\sqrt{-1}\operatorname{Im} X\in \mathbf{HC}^{n\times n}\), the complex matrices \(F_{ij}\in \mathbf{C}^{m\times s}\), \(G_{ij}\in \mathbf{C}^{m\times s}\), \(H_{ij}\in \mathbf{C}^{p\times q}\), and \(K_{ij}\in \mathbf{C}^{p\times q}\) (\(i,j=1,2,\ldots, n\); \(i\geq j\)) are defined by
$$\begin{aligned} &F_{ij}=\left \{\textstyle\begin{array}{l@{\quad}l} A_{i} B_{j},&i=j,\\ A_{i} B_{j}+A_{j} B_{i},& i>j, \end{array}\displaystyle \right .\qquad G_{ij}=\left \{\textstyle\begin{array}{l@{\quad}l} 0,&i=j,\\ \sqrt{-1}(A_{i} B_{j}-A_{j} B_{i}),& i>j, \end{array}\displaystyle \right .\\ &H_{ij}=\left \{\textstyle\begin{array}{l@{\quad}l} C_{i} D_{j},&i=j,\\ C_{i} D_{j}+C_{j} D_{i},& i>j, \end{array}\displaystyle \right .\qquad K_{ij}=\left \{\textstyle\begin{array}{l@{\quad}l} 0,&i=j,\\ \sqrt{-1}(C_{i} D_{j}-C_{j} D_{i}),& i>j, \end{array}\displaystyle \right . \end{aligned}$$
where \(A_{i}\in \mathbf{C}^{m}\) and \(C_{i}\in \mathbf{C}^{p}\) are the ith column vectors of the matrices A and C, respectively, while \(B_{j}\in \mathbf{C}^{s}\) and \(D_{j}\in \mathbf{C}^{q}\) are the jth row vectors of B and D, respectively. Then
$$\begin{aligned}{} [AXB,CXD] =& \left[\vphantom{\left [\textstyle\begin{array}{c} \operatorname{vec}_{S}(\operatorname{Re} X)\\ \operatorname{vec}_{A}(\operatorname{Im} X)\end{array}\displaystyle \right ]}(F_{11},F_{21},\ldots,F_{n1},F_{22},F_{32}, \ldots,F_{n2},\ldots,F_{(n-1)(n-1)},F_{n(n-1)},F_{nn},\right. \\ &G_{21},G_{31},\ldots,G_{n1},G_{32}, \ldots,G_{n2},\ldots,G_{n(n-1)})\circ \left [\textstyle\begin{array}{c} \operatorname{vec}_{S}(\operatorname{Re} X)\\ \operatorname{vec}_{A}(\operatorname{Im} X)\end{array}\displaystyle \right ], \\ &(H_{11},H_{21},\ldots,H_{n1},H_{22},H_{32}, \ldots,H_{n2},\ldots,H_{(n-1)(n-1)},H_{n(n-1)},H_{nn}, \\ &\left.K_{21},K_{31},\ldots,K_{n1},K_{32}, \ldots,K_{n2},\ldots,K_{n(n-1)})\circ \left [\textstyle\begin{array}{c} \operatorname{vec}_{S}(\operatorname{Re} X)\\ \operatorname{vec}_{A}(\operatorname{Im} X)\end{array}\displaystyle \right ] \right]. \end{aligned}$$
Proof
By Lemma 2, we can get
$$\begin{aligned} AXB =& A\bigl[K_{S}\circ\operatorname{vec}_{S}(\operatorname{Re}X)+\sqrt{-1}\bigl(K_{A}\circ\operatorname{vec}_{A}(\operatorname{Im}X)\bigr)\bigr]B \\ =&\bigl[(A K_{S})\circ\operatorname{vec}_{S}(\operatorname{Re}X)+\sqrt{-1}(A K_{A})\circ\operatorname{vec}_{A}(\operatorname{Im}X)\bigr]B \\ =&\bigl[(A K_{S})\circ\operatorname{vec}_{S}(\operatorname{Re}X)\bigr]B+ \sqrt{-1}\bigl[(A K_{A})\circ\operatorname{vec}_{A}(\operatorname{Im}X)\bigr]B \\ =&\bigl[\bigl(A(E_{11},E_{21}+E_{12}, \ldots,E_{n(n-1)}+E_{(n-1)n},E_{nn})\bigr)\circ \operatorname{vec}_{S}(\operatorname{Re}X)\bigr]B \\ &{}+\sqrt{-1}\bigl[A (E_{21}-E_{12},\ldots,E_{n1}-E_{1n}, \ldots, E_{n(n-1)}-E_{(n-1)n})\circ\operatorname{vec}_{A}(\operatorname{Im}X) \bigr]B \\ =&\bigl(AE_{11}B, A(E_{21}+E_{12})B, \ldots,A(E_{n(n-1)}+E_{(n-1)n})B, AE_{nn}B\bigr)\circ \operatorname{vec}_{S}(\operatorname{Re}X) \\ &{}+\sqrt{-1}\bigl(A(E_{21}-E_{12})B, \ldots, A(E_{n1}-E_{1n})B, \ldots, A(E_{n(n-1)}-E_{(n-1)n})B\bigr)\circ\operatorname{vec}_{A}(\operatorname{Im}X) \\ =&(A_{1}B_{1},A_{2}B_{1}+A_{1}B_{2}, \ldots,A_{n}B_{n-1}+A_{n-1}B_{n} ,A_{n}B_{n})\circ\operatorname{vec}_{S}(\operatorname{Re}X) \\ &{}+\sqrt{-1}(A_{2}B_{1}-A_{1}B_{2}, \ldots,A_{n}B_{1}-A_{1}B_{n},\ldots ,A_{n}B_{n-1}-A_{n-1}B_{n})\circ \operatorname{vec}_{A}(\operatorname{Im}X) \\ =&(F_{11},F_{21},\ldots,F_{n(n-1)},F_{nn}) \circ\operatorname{vec}_{S}(\operatorname{Re}X)+(G_{21},G_{31}, \ldots,G_{n(n-1)})\circ\operatorname{vec}_{A}(\operatorname{Im}X) \\ =&(F_{11},F_{21},\ldots,F_{(n-1)(n-1)},F_{n(n-1)},F_{nn},G_{21},G_{31}, \ldots,G_{n(n-1)})\circ \left [\textstyle\begin{array}{c} \operatorname{vec}_{S}(\operatorname{Re} X)\\ \operatorname{vec}_{A}(\operatorname{Im} X)\end{array}\displaystyle \right ]. \end{aligned}$$
Similarly, we have
$$\begin{aligned} CXD =&\textstyle\begin{array}{c} (H_{11},H_{21},\ldots,H_{(n-1)(n-1)},H_{n(n-1)},H_{nn},K_{21},K_{31},\ldots,K_{n(n-1)})\circ \left [\textstyle\begin{array}{c} \operatorname{vec}_{S}(\operatorname{Re} X)\\ \operatorname{vec}_{A}(\operatorname{Im} X)\end{array}\displaystyle \right ]\end{array}\displaystyle . \end{aligned}$$
Thus we can get the structure of \([AXB,CXD]\) and complete the proof. □
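The next MATLAB sketch (ours; vecS and vecA as in the earlier sketches) builds the blocks \(F_{ij}\) and \(G_{ij}\) of Lemma 3 for random A, B and verifies the displayed expression for AXB; the blocks \(H_{ij}\), \(K_{ij}\) and CXD are handled in exactly the same way.

m = 3; n = 4; s = 5;
A = rand(m,n) + 1i*rand(m,n);  B = rand(n,s) + 1i*rand(n,s);
Fcols = []; Gcols = [];
for j = 1:n                                          % vec_S / vec_A ordering
    Fcols = [Fcols, A(:,j)*B(j,:)];                  % F_jj = A_j B_j
    for i = j+1:n
        Fcols = [Fcols, A(:,i)*B(j,:) + A(:,j)*B(i,:)];        % F_ij, i > j
        Gcols = [Gcols, 1i*(A(:,i)*B(j,:) - A(:,j)*B(i,:))];   % G_ij, i > j
    end
end
X0 = rand(n) + 1i*rand(n);  X = X0 + X0';            % Hermitian X
xs = [vecS(real(X)); vecA(imag(X))];
disp(norm(A*X*B - [Fcols, Gcols]*kron(xs, eye(s))))  % Lemma 3: ~0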

4 The solution of Problem 1

Based on the above results, we deduce in this section the solution of Problem 1. From [17], the least-squares problem
$$\|AXB-E\|^{2}+\|CXD-F\|^{2}=\min $$
with respect to the Hermitian matrix X is equivalent to
$$\begin{aligned} \left \Vert \left [\textstyle\begin{array}{c} P_{1}\\ P_{2}\end{array}\displaystyle \right ]\left [\textstyle\begin{array}{c} \operatorname{vec}_{S}(\operatorname{Re} X)\\ \operatorname{vec}_{A}(\operatorname{Im} X)\end{array}\displaystyle \right ]-\left [\textstyle\begin{array}{c} \operatorname{vec}(\operatorname{Re} E)\\ \operatorname{vec}(\operatorname{Re} F)\\ \operatorname{vec}(\operatorname{Im} E)\\ \operatorname{vec}(\operatorname{Im} F)\end{array}\displaystyle \right ]\right \Vert ^{2}=\min, \end{aligned}$$
(15)
where
$$\begin{aligned} P= \left [\textstyle\begin{array}{c@{\quad}c} (B^{T}\otimes A)L_{S}&\sqrt{-1}(B^{T}\otimes A)L_{A}\\ (D^{T}\otimes C)L_{S}&\sqrt{-1}(D^{T}\otimes C)L_{A} \end{array}\displaystyle \right ],\qquad P_{1}=\operatorname{Re}(P),\qquad P_{2}= \operatorname{Im}(P), \end{aligned}$$
and \(L_{S}\) and \(L_{A}\) denote the real matrices (see [17, 18]) satisfying \(\operatorname{vec}(Y)=L_{S}\operatorname{vec}_{S}(Y)\) for \(Y\in \mathbf{SR}^{n\times n}\) and \(\operatorname{vec}(Y)=L_{A}\operatorname{vec}_{A}(Y)\) for \(Y\in \mathbf{ASR}^{n\times n}\).
It should be noted that (15) is an unconstrained problem over the real field and can easily be solved by existing methods; the original constrained complex Problem 1 is thus translated into the equivalent real unconstrained problem (15). Since this process has been described in [17], we omit it here. Based on Theorems 1 and 2 and Lemma 3, we now turn to the least-squares Hermitian problem for the matrix equation (1). The following lemma is necessary for our main results.
Lemma 4
[19]
Given \(A\in \mathbf{R}^{m\times n}\) and \(b\in \mathbf{R}^{m}\), the solution of the equation \(Ax=b\) involves two cases:
(i)
The equation has a solution \(x\in \mathbf{R}^{n}\) and the general solution can be formulated as
$$ x=A^{+}b+\bigl(I-A^{+}A\bigr)y $$
(16)
if and only if \(AA^{+}b=b\), where \(y\in \mathbf{R}^{n}\) is an arbitrary vector.
 
(ii)
The least-squares solutions of the equation have the same formulation as (16), and the least-squares solution with the minimum norm is \(x=A^{+}b\); see the sketch after this lemma.
 
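A short MATLAB illustration of Lemma 4 (our sketch, with random rank-deficient data): all least-squares solutions (16) share the same residual, and the choice \(y=0\) picks out the minimum-norm one.

A = rand(6,4)*rand(4,5);  b = rand(6,1);       % rank-deficient 6x5 system
x_min = pinv(A)*b;                             % minimum-norm LS solution (y = 0)
y = rand(5,1);
x_gen = x_min + (eye(5) - pinv(A)*A)*y;        % general LS solution (16)
disp([norm(A*x_min - b), norm(A*x_gen - b)])   % equal residuals
disp([norm(x_min), norm(x_gen)])               % x_min has the smaller norm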
For convenience, we introduce the following notations and lemmas.
Given \(A\in \mathbf{C}^{m\times n}\), \(B\in \mathbf{C}^{n\times s}\), \(C\in \mathbf{C}^{p\times n}\), \(D\in \mathbf{C}^{n\times q}\), \(E\in \mathbf{C}^{m\times s}\), and \(F\in \mathbf{C}^{p\times q}\), let
$$\begin{aligned} &\widehat{\Gamma}_{ij}=\left [\textstyle\begin{array}{c} \operatorname{Re}(F_{ij})\\ \operatorname{Im}(F_{ij})\\ \operatorname{Re}(H_{ij})\\ \operatorname{Im}(H_{ij})\end{array}\displaystyle \right ],\qquad \widehat{ \Upsilon}_{ij}=\left [\textstyle\begin{array}{c} \operatorname{Re}(G_{ij})\\ \operatorname{Im}(G_{ij})\\ \operatorname{Re}(K_{ij})\\ \operatorname{Im}(K_{ij})\end{array}\displaystyle \right ],\qquad \Omega_{0}= \left [\textstyle\begin{array}{c} \operatorname{Re}(E)\\ \operatorname{Im}(E)\\ \operatorname{Re}(F)\\ \operatorname{Im}(F) \end{array}\displaystyle \right ], \\ &W=\left [\textstyle\begin{array}{c@{\quad}c} P&U\\ U^{T}&V \end{array}\displaystyle \right ],\qquad e=\left [\textstyle\begin{array}{c} e_{1}\\ e_{2} \end{array}\displaystyle \right ], \end{aligned}$$
(17)
where \(n\geq i\geq j\geq 1\),
$$\begin{aligned} &P=\left [\textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \langle \widehat{\Gamma}_{11},\widehat{\Gamma}_{11}\rangle &\langle \widehat{\Gamma}_{11},\widehat{\Gamma}_{21}\rangle &\cdots&\langle \widehat{\Gamma}_{11},\widehat{\Gamma}_{nn}\rangle \\ \langle \widehat{\Gamma}_{21},\widehat{\Gamma}_{11}\rangle &\langle \widehat{\Gamma}_{21},\widehat{\Gamma}_{21}\rangle &\cdots&\langle \widehat{\Gamma}_{21},\widehat{\Gamma}_{nn}\rangle \\ \vdots&\vdots& &\vdots\\ \langle \widehat{\Gamma}_{nn},\widehat{\Gamma}_{11}\rangle &\langle \widehat{\Gamma}_{nn},\widehat{\Gamma}_{21}\rangle &\cdots&\langle \widehat{\Gamma}_{nn},\widehat{\Gamma}_{nn}\rangle \end{array}\displaystyle \right ],\\ &U=\left [\textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \langle \widehat{\Gamma}_{11},\widehat{\Upsilon}_{21}\rangle &\langle \widehat{\Gamma}_{11},\widehat{\Upsilon}_{31}\rangle &\cdots&\langle \widehat{\Gamma}_{11},\widehat{\Upsilon}_{n(n-1)}\rangle \\ \langle \widehat{\Gamma}_{21},\widehat{\Upsilon}_{21}\rangle &\langle \widehat{\Gamma}_{21},\widehat{\Upsilon}_{31}\rangle &\cdots&\langle \widehat{\Gamma}_{21},\widehat{\Upsilon}_{n(n-1)}\rangle \\ \vdots&\vdots& &\vdots\\ \langle \widehat{\Gamma}_{nn},\widehat{\Upsilon}_{21}\rangle &\langle \widehat{\Gamma}_{nn},\widehat{\Upsilon}_{31}\rangle &\cdots&\langle \widehat{\Gamma}_{nn},\widehat{\Upsilon}_{n(n-1)}\rangle \end{array}\displaystyle \right ],\\ &V=\left [\textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \langle \widehat{\Upsilon}_{21},\widehat{\Upsilon}_{21}\rangle &\langle \widehat{\Upsilon}_{21},\widehat{\Upsilon}_{31}\rangle &\cdots&\langle \widehat{\Upsilon}_{21},\widehat{\Upsilon}_{n(n-1)}\rangle \\ \langle \widehat{\Upsilon}_{31},\widehat{\Upsilon}_{21}\rangle &\langle \widehat{\Upsilon}_{31},\widehat{\Upsilon}_{31}\rangle &\cdots&\langle \widehat{\Upsilon}_{31},\widehat{\Upsilon}_{n(n-1)}\rangle \\ \vdots&\vdots& &\vdots\\ \langle \widehat{\Upsilon}_{n(n-1)},\widehat{\Upsilon}_{21}\rangle &\langle \widehat{\Upsilon}_{n(n-1)},\widehat{\Upsilon}_{31}\rangle &\cdots&\langle \widehat{\Upsilon}_{n(n-1)},\widehat{\Upsilon}_{n(n-1)}\rangle \end{array}\displaystyle \right ],\\ &e_{1}=\left [\textstyle\begin{array}{c} \langle \widehat{\Gamma}_{11},\Omega_{0}\rangle \\ \langle \widehat{\Gamma}_{21},\Omega_{0}\rangle \\ \vdots\\ \langle \widehat{\Gamma}_{nn},\Omega_{0}\rangle \end{array}\displaystyle \right ],\qquad e_{2}= \left [\textstyle\begin{array}{c} \langle \widehat{\Upsilon}_{21},\Omega_{0}\rangle \\ \langle \widehat{\Upsilon}_{31},\Omega_{0}\rangle \\ \vdots\\ \langle \widehat{\Upsilon}_{n(n-1)},\Omega_{0}\rangle \end{array}\displaystyle \right ]. \end{aligned}$$
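In implementation terms (our observation, consistent with the proofs of Theorems 1 and 2), if every \(\widehat{\Gamma}_{ij}\) and \(\widehat{\Upsilon}_{ij}\), in the order above, is stored as one long real column of a matrix M, then the entries of W are exactly the Euclidean inner products of those columns, so W and e come from two matrix products. A sketch, assuming Fcols and Gcols have been built for (A,B) as after Lemma 3, Hcols and Kcols analogously for (C,D) with block widths s and t, and E, F are the given right-hand sides:

nb = n*(n+1)/2;  na = n*(n-1)/2;               % numbers of vec_S / vec_A blocks
M = zeros(2*m*s + 2*m*t, nb + na);
for k = 1:nb                                   % columns Gamma_11, ..., Gamma_nn
    Fk = Fcols(:, (k-1)*s+1:k*s);  Hk = Hcols(:, (k-1)*t+1:k*t);
    M(:,k) = [real(Fk(:)); imag(Fk(:)); real(Hk(:)); imag(Hk(:))];
end
for k = 1:na                                   % columns Upsilon_21, ..., Upsilon_n(n-1)
    Gk = Gcols(:, (k-1)*s+1:k*s);  Kk = Kcols(:, (k-1)*t+1:k*t);
    M(:,nb+k) = [real(Gk(:)); imag(Gk(:)); real(Kk(:)); imag(Kk(:))];
end
Omega0 = [real(E(:)); imag(E(:)); real(F(:)); imag(F(:))];
W = M.'*M;                                     % the blocks P, U, U^T, V of (17)
e = M.'*Omega0;                                % e_1 stacked over e_2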
Theorem 3
Given \(A\in {\mathbf{C}}^{m\times n}\), \(B\in {\mathbf{C}}^{n\times s}\), \(C\in {\mathbf{C}}^{m\times n}\), \(D\in {\mathbf{C}}^{n\times t}\), \(E\in {\mathbf{C}}^{m\times s}\), and \(F\in {\mathbf{C}}^{m\times t}\), let W, e be as in (17). Then
$$\begin{aligned} H_{L}= \bigl\{ X\vert X=(K_{S}, \sqrt{-1}K_{A})\circ \bigl[W^{+}e+\bigl(I-W^{+}W\bigr)y\bigr] \bigr\} , \end{aligned}$$
(18)
where \(y\in \mathbf{R}^{n^{2}}\) is an arbitrary vector. Problem 1 has a unique solution \(X_{H}\in H_{L}\). This solution satisfies
$$\begin{aligned} X_{H}=(K_{S}, \sqrt{-1}K_{A}) \circ \bigl(W^{+}e\bigr). \end{aligned}$$
(19)
Proof
By Lemma 3 and Theorem 2, the least-squares problem
$$\|AXB-E\|^{2}+\|CXD-F\|^{2}=\min $$
with respect to the Hermitian matrix X can be translated into the equivalent consistent system of linear equations over the real field
$$\begin{aligned} W\left [\textstyle\begin{array}{c} \operatorname{vec}_{S}(\operatorname{Re} X)\\ \operatorname{vec}_{A}(\operatorname{Im} X) \end{array}\displaystyle \right ]=e. \end{aligned}$$
It follows by Lemma 4 that
$$\left [\textstyle\begin{array}{c} \operatorname{vec}_{S}(\operatorname{Re} X)\\ \operatorname{vec}_{A}(\operatorname{Im} X) \end{array}\displaystyle \right ]= W^{+}e+\bigl(I-W^{+}W\bigr)y. $$
Thus
$$X=(K_{S}, \sqrt{-1}K_{A})\circ \bigl(W^{+}e+\bigl(I-W^{+}W\bigr)y\bigr). $$
The proof is completed. □
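Continuing the sketches above (W, e, KS, KA, circ, and nb = n(n+1)/2 in scope), the minimum-norm solution (19) takes two lines of MATLAB:

xs  = pinv(W)*e;                                        % [vec_S(Re X_H); vec_A(Im X_H)]
X_H = circ(KS, xs(1:nb)) + 1i*circ(KA, xs(nb+1:end));   % the solution (19)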
We now turn to the consistency of the matrix equation (1). Denote
$$\begin{aligned} N=\left [\textstyle\begin{array}{c} N_{1}\\ N_{2}\\ N_{3}\\ N_{4}\end{array}\displaystyle \right ],\qquad \hat{e}=\left [\textstyle\begin{array}{c} \operatorname{vec}(\operatorname{Re} E)\\ \operatorname{vec}(\operatorname{Im} E)\\ \operatorname{vec}(\operatorname{Re} F)\\ \operatorname{vec}(\operatorname{Im} F)\end{array}\displaystyle \right ], \end{aligned}$$
(20)
where
$$\begin{aligned} &{N_{1}=\bigl[\operatorname{vec}\bigl(\operatorname{Re}(F_{11})\bigr), \operatorname{vec}\bigl(\operatorname{Re}(F_{21})\bigr),\ldots,\operatorname{vec}\bigl( \operatorname{Re}(F_{n1})\bigr), \operatorname{vec}\bigl(\operatorname{Re}(F_{22}) \bigr),\ldots,} \\ &{\hphantom{N_{1}=}\operatorname{vec}\bigl(\operatorname{Re}(F_{n2})\bigr),\ldots,\operatorname{vec}\bigl(\operatorname{Re}(F_{(n-1)(n-1)})\bigr), \operatorname{vec}\bigl( \operatorname{Re}(F_{n(n-1)})\bigr),\operatorname{vec}\bigl(\operatorname{Re}(F_{nn}) \bigr),} \\ &{\hphantom{N_{1}=}\operatorname{vec}\bigl(\operatorname{Re}(G_{21})\bigr),\ldots,\operatorname{vec} \bigl(\operatorname{Re}(G_{n1})\bigr),\operatorname{vec}\bigl(\operatorname{Re}(G_{32})\bigr),\ldots,}\\ &{\hphantom{N_{1}=}\operatorname{vec}\bigl( \operatorname{Re}(G_{n2})\bigr),\ldots,\operatorname{vec}\bigl(\operatorname{Re}(G_{n(n-1)}) \bigr)\bigr],}\\ &{N_{2}=\bigl[\operatorname{vec}\bigl(\operatorname{Im}(F_{11})\bigr), \operatorname{vec}\bigl(\operatorname{Im}(F_{21})\bigr),\ldots,\operatorname{vec}\bigl( \operatorname{Im}(F_{n1})\bigr), \operatorname{vec}\bigl(\operatorname{Im}(F_{22}) \bigr),\ldots,} \\ &{\hphantom{N_{2}=}\operatorname{vec}\bigl(\operatorname{Im}(F_{n2})\bigr),\ldots,\operatorname{vec}\bigl(\operatorname{Im}(F_{(n-1)(n-1)})\bigr), \operatorname{vec}\bigl( \operatorname{Im}(F_{n(n-1)})\bigr),\operatorname{vec}\bigl(\operatorname{Im}(F_{nn}) \bigr),} \\ &{\hphantom{N_{2}=}\operatorname{vec}\bigl(\operatorname{Im}(G_{21})\bigr),\ldots,\operatorname{vec} \bigl(\operatorname{Im}(G_{n1})\bigr),\operatorname{vec}\bigl(\operatorname{Im}(G_{32})\bigr),\ldots,}\\ &{\hphantom{N_{2}=}\operatorname{vec}\bigl( \operatorname{Im}(G_{n2})\bigr),\ldots,\operatorname{vec}\bigl(\operatorname{Im}(G_{n(n-1)}) \bigr)\bigr],}\\ &{N_{3}=\bigl[\operatorname{vec}\bigl(\operatorname{Re}(H_{11})\bigr), \operatorname{vec}\bigl(\operatorname{Re}(H_{21})\bigr),\ldots,\operatorname{vec}\bigl( \operatorname{Re}(H_{n1})\bigr), \operatorname{vec}\bigl(\operatorname{Re}(H_{22}) \bigr),\ldots,} \\ &{\hphantom{N_{3}=}\operatorname{vec}\bigl(\operatorname{Re}(H_{n2})\bigr),\ldots,\operatorname{vec}\bigl(\operatorname{Re}(H_{(n-1)(n-1)})\bigr), \operatorname{vec}\bigl( \operatorname{Re}(H_{n(n-1)})\bigr),\operatorname{vec}\bigl(\operatorname{Re}(H_{nn}) \bigr),} \\ &{\hphantom{N_{3}=}\operatorname{vec}\bigl(\operatorname{Re}(K_{21})\bigr),\ldots,\operatorname{vec} \bigl(\operatorname{Re}(K_{n1})\bigr),\operatorname{vec}\bigl(\operatorname{Re}(K_{32})\bigr),\ldots,}\\ &{\hphantom{N_{3}=}\operatorname{vec}\bigl( \operatorname{Re}(K_{n2})\bigr),\ldots,\operatorname{vec}\bigl(\operatorname{Re}(K_{n(n-1)}) \bigr)\bigr],}\\ &{N_{4}=\bigl[\operatorname{vec}\bigl(\operatorname{Im}(H_{11})\bigr), \operatorname{vec}\bigl(\operatorname{Im}(H_{21})\bigr),\ldots,\operatorname{vec}\bigl( \operatorname{Im}(H_{n1})\bigr), \operatorname{vec}\bigl(\operatorname{Im}(H_{22}) \bigr),\ldots,} \\ &{\hphantom{N_{4}=}\operatorname{vec}\bigl(\operatorname{Im}(H_{n2})\bigr),\ldots,\operatorname{vec}\bigl(\operatorname{Im}(H_{(n-1)(n-1)})\bigr), \operatorname{vec}\bigl( \operatorname{Im}(H_{n(n-1)})\bigr),\operatorname{vec}\bigl(\operatorname{Im}(H_{nn}) \bigr),} \\ &{\hphantom{N_{4}=}\operatorname{vec}\bigl(\operatorname{Im}(K_{21})\bigr),\ldots,\operatorname{vec} \bigl(\operatorname{Im}(K_{n1})\bigr),\operatorname{vec}\bigl(\operatorname{Im}(K_{32})\bigr),\ldots,}\\ &{\hphantom{N_{4}=}\operatorname{vec}\bigl( 
\operatorname{Im}(K_{n2})\bigr),\ldots,\operatorname{vec}\bigl(\operatorname{Im}(K_{n(n-1)}) \bigr)\bigr].} \end{aligned}$$
By Lemma 3, we have
$$\begin{aligned} (AXB,CXD)=(E,F)\quad\Longleftrightarrow\quad N\left [\textstyle\begin{array}{c} \operatorname{vec}_{S}(\operatorname{Re} X)\\ \operatorname{vec}_{A}(\operatorname{Im} X)\end{array}\displaystyle \right ]= \hat{e}. \end{aligned}$$
(21)
Thus we can get the following conclusions by Lemma 4 and Theorem 3.
Corollary 1
The matrix equation (1) has a solution \(X\in \mathbf{HC}^{n\times n}\) if and only if
$$\begin{aligned} NN^{+}\hat{e}=\hat{e}. \end{aligned}$$
(22)
In this case, denote by \(H_{E}\) the solution set of (1). Then
$$\begin{aligned} H_{E}= \bigl\{ X\vert X=(K_{S}, \sqrt{-1}K_{A})\circ \bigl[W^{+}e+\bigl(I-W^{+}W\bigr)y\bigr]\bigr\} , \end{aligned}$$
(23)
where \(y\in \mathbf{R}^{n^{2}}\) is an arbitrary vector.
Furthermore, if (22) holds, then the matrix equation (1) has a unique solution \(X\in H_{E}\) if and only if
$$\begin{aligned} \operatorname{rank}(N)=n^{2}. \end{aligned}$$
(24)
In this case,
$$\begin{aligned} H_{E}= \bigl\{ X\vert X=(K_{S}, \sqrt{-1}K_{A})\circ \bigl(W^{+}e\bigr) \bigr\} . \end{aligned}$$
(25)
The least norm problem
$$\begin{aligned} \|X_{H}\|=\min_{X\in H_{E}}\|X\| \end{aligned}$$
has a unique solution \(X_{H}\in H_{E}\) and \(X_{H}\) can be expressed as (19).
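The tests (22) and (24) translate directly into MATLAB (our sketch; note that the matrix M built above coincides with N of (20), since its columns are the vectorized stacked real parts in the same order, whence \(W=N^{T}N\) and \(e=N^{T}\hat{e}\); the tolerance is our choice):

N = M;  ehat = Omega0;                          % N and ehat of (20)
consistent = norm(N*pinv(N)*ehat - ehat) <= 1e-8*max(1, norm(ehat));   % test (22)
uniqueSol  = (rank(N) == n^2);                  % test (24)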

5 Numerical experiments

In this section, based on the results above, we first give the numerical algorithm for finding the solution of Problem 1; numerical experiments are then presented to demonstrate its efficiency. The following algorithm provides the main steps for finding the solutions of Problem 1; an end-to-end MATLAB sketch is given after the algorithm.
Algorithm 4
(1)
Input \(A\in \mathbf{C}^{m\times n}\), \(B\in \mathbf{C}^{n\times s}\), \(C\in \mathbf{C}^{m\times n}\), \(D\in \mathbf{C}^{n\times t}\), \(E\in \mathbf{C}^{m\times s}\), and \(F\in \mathbf{C}^{m\times t}\).
 
(2)
Compute \(F_{ij}\), \(G_{ij}\), \(H_{ij}\), and \(K_{ij}\) (\(i,j=1,2, \ldots, n\), \(i\geq j\)) by Lemma 3.
 
(3)
Compute P, U, V and e according to (17).
 
(4)
If (22) and (24) hold, then calculate \(X_{H}\) (\(X_{H} \in H_{E}\)) according to (25) and stop; otherwise go to the next step.
 
(5)
If (22) holds, then calculate \(X_{H}\) (\(X_{H} \in H_{E}\)) according to (19) and stop; otherwise go to the next step.
 
(6)
Calculate \(X_{H}\) (\(X_{H} \in H_{L}\)) according to (19).
 
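The end-to-end MATLAB sketch below (ours; all helper names and tolerances are illustrative, not the authors' code) runs Algorithm 4 on a randomly generated consistent instance and recovers the Hermitian matrix used to build E and F.

m = 4; n = 5; s = 6; t = 3;
A = rand(m,n)+1i*rand(m,n);  B = rand(n,s)+1i*rand(n,s);
C = rand(m,n)+1i*rand(m,n);  D = rand(n,t)+1i*rand(n,t);
X0 = rand(n)+1i*rand(n);  Xtrue = X0 + X0';             % Hermitian test matrix
E = A*Xtrue*B;  F = C*Xtrue*D;                          % consistent right-hand sides
stk = @(P,Q) [real(P(:)); imag(P(:)); real(Q(:)); imag(Q(:))];
Ns = []; Na = [];                                       % step (2): blocks of Lemma 3
for j = 1:n
    Ns = [Ns, stk(A(:,j)*B(j,:), C(:,j)*D(j,:))];       % F_jj and H_jj
    for i = j+1:n
        Ns = [Ns, stk(A(:,i)*B(j,:)+A(:,j)*B(i,:), C(:,i)*D(j,:)+C(:,j)*D(i,:))];
        Na = [Na, stk(1i*(A(:,i)*B(j,:)-A(:,j)*B(i,:)), 1i*(C(:,i)*D(j,:)-C(:,j)*D(i,:)))];
    end
end
N = [Ns, Na];  ehat = stk(E, F);                        % N and ehat of (20)
W = N.'*N;  e = N.'*ehat;                               % step (3): W and e of (17)
fprintf('(22) holds: %d, (24) holds: %d\n', ...
        norm(N*pinv(N)*ehat - ehat) < 1e-8, rank(N) == n^2);
xs = pinv(W)*e;                                         % steps (4)-(6): solution (19)
X_H = zeros(n);  p = 0;  q = n*(n+1)/2;                 % unfold via K_S and K_A
for j = 1:n
    p = p + 1;  X_H(j,j) = xs(p);
    for i = j+1:n
        p = p + 1;  q = q + 1;
        X_H(i,j) = xs(p) + 1i*xs(q);  X_H(j,i) = xs(p) - 1i*xs(q);
    end
end
disp(norm(X_H - Xtrue))                                 % small in the consistent case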
For convenience, in the following examples, the random matrix, the Hilbert matrix, the Toeplitz matrix, the all-ones matrix, and the magic matrix are denoted by the corresponding Matlab functions.
Example 1
Let \(A_{r}=10\operatorname{rand}(8,10)\), \(B_{r}=\operatorname{rand}(10,12)\), \(C_{r}=10\operatorname{rand}(8,10)\), \(D_{r}=\operatorname{rand}(10,12)\), \(A_{i}=\operatorname{rand}(8,10)\), \(B_{i}=10\operatorname{rand}(10,12)\), \(C_{i}=\operatorname{rand}(8,10)\), \(D_{i}=10\operatorname{rand}(10,12)\). Let \(\tilde{X}= \operatorname{rand}(10,10)\), \(X_{r}=\tilde{X}+\tilde{X}^{T}\); \(\hat{X}=\operatorname{hilb}(10)\), \(X_{i}=\hat{X}-\hat{X}^{T}\); \(A=A_{r}+\sqrt{-1}A_{i}\), \(B=B_{r}+\sqrt{-1}B_{i}\), \(C=C_{r}+\sqrt{-1}C_{i}\), \(D=D_{r}+\sqrt{-1}D_{i}\), \(X=X_{r}+\sqrt{-1}X_{i}\), \(E=AXB\), \(F=CXD\). By using Matlab 7 and Algorithm 4, we obtain \(\operatorname{rank}(W)=100\), \(\|W\|=2.1069e{+}07\), \(\|WW^{+}e-e\|=6.1091e{-}06\), \(\operatorname{rank}(N)=100\), \(\|N\|=5.9155e{+}03\), and \(\|NN^{+}\hat{e}-\hat{e}\|=4.0346e{-}11\). From step (4) of Algorithm 4, it can be concluded that the complex matrix equation \([AXB, CXD]=[E,F]\) is consistent and has a unique solution \(X_{H}\in H_{E}\); moreover, one easily checks that \(\|X_{H}-X\|=1.4536e{-}11\).
Example 2
Let \(m=8\), \(n=10\), \(s=12\). Take \(A_{r}=[\operatorname{toeplitz}(1:8),0_{m\times 2}]\), \(B_{r}=\operatorname{ones}(10,12)\), \(C_{r}=[\operatorname{hilb}(8),0_{8\times 2}]\), \(D_{r}=\operatorname{ones}(10,12)\), \(A_{i}=0_{8\times 10}\), \(B_{i}=\operatorname{ones}(10,12)\), \(C_{i}=0_{8\times 10}\), \(D_{i}=0_{10\times 12}\). Let \(\tilde{X}=\operatorname{rand}(10,10)\), \(X_{r}=\tilde{X}+\tilde{X}^{T}\); \(\hat{X}=\operatorname{hilb}(n)\), \(X_{i}=\hat{X}-\hat{X}^{T}\); \(A=A_{r}+\sqrt{-1}A_{i}\), \(B=B_{r}+\sqrt{-1}B_{i}\), \(C=C_{r}+\sqrt{-1}C_{i}\), \(D=D_{r}+\sqrt{-1}D_{i}\), \(X=X_{r}+\sqrt{-1}X_{i}\), \(E=AXB\), \(F=CXD\). Using Matlab 7 and Algorithm 4, we obtain \(\operatorname{rank}(W)=16\), \(\|W\|=3.6293e{+}05\), \(\|WW^{+}e-e\|=5.6271e{-}08\), \(\operatorname{rank}(N)=16\), \(\|N\|=699.6486\), and \(\|NN^{+}\hat{e}-\hat{e}\|=7.1024e{-}12\). It follows that the complex matrix equation \([AXB, CXD]=[E,F]\) is consistent; since \(\operatorname{rank}(N)=16<100\), its solution in \(H_{E}\) is not unique, and step (5) of Algorithm 4 yields the unique least-norm solution \(X_{H}\in H_{E}\), for which \(\|X_{H}-X\|=5.7501\).
Example 3
Let \(m=8\), \(n=10\), \(s=12\). Take \(A_{r}=0_{8 \times 10}\), \(B_{r}=\operatorname{rand}(10,12)\), \(C_{r}=0_{8\times 10}\), \(D_{r}=\operatorname{rand}(10,12)\), \(A_{i}=\operatorname{rand}(8,10)\), \(B_{i}=0_{10 \times 12}\), \(C_{i}=\operatorname{rand}(8,10)\), \(D_{i}=0_{10\times 12}\). Let \(\tilde{X}=\operatorname{rand}(10,10)\), \(X_{r}=\tilde{X}+\tilde{X}^{T}\); \(\hat{X}=\operatorname{rand}(10,10)\), \(X_{i}=\hat{X}-\hat{X}^{T}\); \(A=A_{r}+\sqrt{-1}A_{i}\), \(B=B_{r}+\sqrt{-1}B_{i}\), \(C=C_{r}+\sqrt{-1}C_{i}\), \(D=D_{r}+\sqrt{-1}D_{i}\), \(X=X_{r}+\sqrt{-1}X_{i}\), \(E=AXB+10\operatorname{ones}(m,s)\), \(F=CXD+10\operatorname{ones}(m,s)\). By using Matlab 7 and Algorithm 4, we obtain \(\operatorname{rank}(W)=100\), \(\|W\|= 2.6274e{+}03\), \(\|WW^{+}e-e\|=2.4857e{-}09\), \(\operatorname{rank}(N)=100\), \(\|N\|=64.7511\), and \(\|NN^{+}\hat{e}-\hat{e}\|=109.1680\). According to step (6) of Algorithm 4, the complex matrix equation \([AXB, CXD]=[E,F]\) is inconsistent; the least-squares problem then has a unique minimum-norm solution \(X_{H}\in H_{L}\), and we get \(\|X_{H}-X\|= 41.9521\).

6 Conclusions

In this paper, we derived an explicit expression for the least-squares Hermitian solution with the minimum norm of the complex matrix equation \((AXB,CXD)=(E,F)\). A numerical algorithm and examples show the effectiveness of our method.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Nos. 11271040, 11361027), the Key Project of the Department of Education of Guangdong Province (No. 2014KZDXM055), the Training Plan for Outstanding Young Teachers in Higher Education of Guangdong (No. SYq2014002), the Guangdong Natural Science Fund of China (Nos. 2014A030313625, 2015A030313646, 2016A030313005), the Opening Project of the Guangdong Province Key Laboratory of Computational Science at Sun Yat-sen University (No. 2016007), the Science Foundation for Youth Teachers of Wuyi University (No. 2015zk09), and a science and technology project of Jiangmen City, China.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.
References
1.
Datta, BN: Numerical Methods for Linear Control Systems: Design and Analysis. Academic Press, San Diego (2004)
2.
Dehghan, M, Hajarian, M: An efficient algorithm for solving general coupled matrix equations and its application. Math. Comput. Model. 51, 1118-1134 (2010)
3.
Dai, H, Lancaster, P: Linear matrix equations from an inverse problem of vibration theory. Linear Algebra Appl. 246, 31-47 (1996)
4.
Sanches, JM, Nascimento, JC, Marques, JS: Medical image noise reduction using the Sylvester-Lyapunov equation. IEEE Trans. Image Process. 17(9), 1522-1539 (2008)
5.
Calvetti, D, Reichel, L: Application of ADI iterative methods to the restoration of noisy images. SIAM J. Matrix Anal. Appl. 17, 165-186 (1996)
6.
Dehghan, M, Hajarian, M: The generalized Sylvester matrix equations over the generalized bisymmetric and skew-symmetric matrices. Int. J. Syst. Sci. 43, 1580-1590 (2012)
7.
Liu, Y-H: On the best approximation problem of quaternion matrices. J. Math. Study 37(2), 129-134 (2004)
8.
Krishnaswamy, D: The skew-symmetric ortho-symmetric solutions of the matrix equations \(A^{*}XA=D\). Int. J. Algebra 5, 1489-1504 (2011)
9.
Wang, Q-W, He, Z-H: Solvability conditions and general solution for mixed Sylvester equations. Automatica 49, 2713-2719 (2013)
10.
Xiao, QF: The Hermitian R-symmetric solutions of the matrix equation \(AXA^{*}=B\). Int. J. Algebra 6, 903-911 (2012)
11.
Yuan, Y-X: The optimal solution of linear matrix equation by matrix decompositions. Math. Numer. Sin. 24, 165-176 (2002)
12.
Wang, QW, Wu, ZC: On the Hermitian solutions to a system of adjointable operator equations. Linear Algebra Appl. 437, 1854-1891 (2012)
13.
Kyrchei, I: Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations. Linear Algebra Appl. 438, 136-152 (2013)
14.
Kyrchei, I: Analogs of Cramer’s rule for the minimum norm least squares solutions of some matrix equations. Appl. Math. Comput. 218, 6375-6384 (2012)
15.
Dehghani-Madiseh, M, Dehghan, M: Generalized solution sets of the interval generalized Sylvester matrix equation and some approaches for inner and outer estimations. Comput. Math. Appl. 68(12), 1758-1774 (2014)
16.
Liao, A-P, Lei, Y: Least squares solution with the minimum-norm for the matrix equation \((AXB, GXH)=(C, D)\). Comput. Math. Appl. 50, 539-549 (2005)
17.
Yuan, SF, Liao, AP, Lei, Y: Least squares Hermitian solution of the matrix equation \((AXB, CXD)=(E,F)\) with the least norm over the skew field of quaternions. Math. Comput. Model. 48, 91-100 (2008)
18.
Yuan, SF, Liao, AP: Least squares Hermitian solution of the complex matrix equation \(AXB+CXD=E\) with the least norm. J. Franklin Inst. 351, 4978-4997 (2014)
19.
Ben-Israel, A, Greville, TNE: Generalized Inverses: Theory and Applications. Wiley, New York (1974)
Publisher: Springer International Publishing
Electronic ISSN: 1029-242X
DOI: https://doi.org/10.1186/s13660-016-1231-9