17.04.2020 | Original Paper | Issue 3/2021 | Open Access
Two algorithms for the exchange lemma
Journal: Numerical Algorithms, Issue 3/2021
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1 Introduction
Let V be a finite dimensional vector space over a field K. Denote by span \(\mathcal {S}\) the subspace of V generated by a system \(\mathcal {S}\) of vectors in V. Let \(\mathcal {A}=\left (a_{1}, \ldots , a_{r} \right )\) and \({\mathcal{B}}=\left (b_{1}, \ldots , b_{s} \right )\) be two systems of vectors in V such that \(\mathcal {A}\) is linearly independent and \(\mathrm {span } \mathcal {A} \subset \mathrm {span } {\mathcal{B}}\). The latter condition can be written in terms of matrix multiplication as
$$ \mathcal{A}^{T} = M \mathcal{B}^{T}, $$
(1)
where the superscript T denotes the matrix transpose operation and M is an appropriate matrix in \(M_{r,s}(K)\). The Exchange Lemma, attributed by van der Waerden [8, p. 96] to Steinitz [7, p. 133], but already discovered by Grassmann many decades earlier [3, p. 30], says that r ≤ s and that there exists a system \(\mathcal {C}\) consisting of the vectors of \(\mathcal {A}\) and of (s − r) vectors from \({\mathcal{B}}\) such that \(\mathrm {span } \mathcal {C} = \mathrm {span } {\mathcal{B}}\). The proofs and the resulting algorithms for constructing \(\mathcal {C}\) presented in the cited works, as well as in contemporary textbooks, use somewhat tedious procedures by induction on r, called the exchange method or pivoting (cf. Section 3.3 in [5]).
The main aim of this note is to present two new concise algorithms for constructing \(\mathcal {C}\) (Section 2) and, in consequence, two new alternative proofs of the Exchange Lemma. The second algorithm, using basic minors, is in a sense more general than the first, which uses row echelon forms; the relation between the two algorithms is described in Remark 3. As to the proofs, we comment on them in Remark 4 (for the computational complexities, see Remarks 1 and 6).
The organization of the note is as follows. In Section 2, we present the algorithms and, in an example, juxtapose them with one by pivoting. In Section 3, we prove correctness of the algorithm via row echelon forms. Prior to proving correctness of the algorithm via basic minors in Section 6, we recall some properties of basic minors in Section 4, and in Section 5 we obtain a certain spanning criterion. In Section 7, we deal with this criterion in more detail. We establish two additional algorithms in Section 8.
2 The algorithms
Input: Two systems of vectors \(\mathcal {A}=\left (a_{1}, \ldots , a_{r} \right )\) and \({\mathcal{B}}=\left (b_{1}, \ldots , b_{s} \right )\), and a matrix M such that \(\mathcal {A}\) is linearly independent and \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\).
Output: A system \(\mathcal {C}\) consisting of the vectors of \(\mathcal {A}\) and of s − r vectors of \({\mathcal{B}}\) such that \(\text {span } \mathcal {C} = \text {span } {\mathcal{B}}\).
Algorithm via row echelon forms.
1. Transform M into a matrix G in row echelon form.
2. Let R be the set of the labels of those columns of G which do not contain the leading coefficients.
3. Set \(\mathcal {C}\) to be the concatenation of the systems \(\mathcal {A}\) and \((b_{l})_{l\in R}\).
Algorithm via basic minors.
1. Find a basic minor \({\Delta }=\det (\tilde {M})\) of M (see Section 4).
2. Let R be the set of the labels of those columns of M which do not intersect with \(\tilde {M}\).
3. Set \(\mathcal {C}\) to be the concatenation of the systems \(\mathcal {A}\) and \((b_{l})_{l\in R}\).
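To make the first procedure concrete, here is a minimal Python sketch of the algorithm via row echelon forms. It is not part of the paper: the names `row_echelon` and `exchange_via_echelon` are illustrative, labels are 0-based rather than 1-based as in the text, and exact rational arithmetic (`fractions.Fraction`) stands in for an arbitrary field K.

```python
from fractions import Fraction

def row_echelon(M):
    """Bring a copy of M to row echelon form by Gaussian elimination;
    return the echelon matrix G and the labels of its pivot columns."""
    A = [[Fraction(x) for x in row] for row in M]
    r, s = len(A), len(A[0])
    pivots, i = [], 0
    for j in range(s):
        if i == r:
            break
        # find a row with a nonzero entry in column j, at or below row i
        k = next((k for k in range(i, r) if A[k][j] != 0), None)
        if k is None:
            continue  # column j will carry no leading coefficient
        A[i], A[k] = A[k], A[i]
        for k in range(i + 1, r):
            f = A[k][j] / A[i][j]
            A[k] = [a - f * b for a, b in zip(A[k], A[i])]
        pivots.append(j)
        i += 1
    return A, pivots

def exchange_via_echelon(A_vectors, B_vectors, M):
    """Algorithm via row echelon forms: C is A followed by the vectors b_l
    whose column labels l carry no leading coefficient of G."""
    _, pivots = row_echelon(M)
    R = [l for l in range(len(B_vectors)) if l not in pivots]
    return list(A_vectors) + [B_vectors[l] for l in R]
```

For instance, with \({\mathcal{B}}\) the standard basis of \(K^{3}\) and M = [[1, 0, 1], [0, 1, 1]], the pivot columns are 0 and 1, so the function returns \((a_{1}, a_{2}, b_{3})\).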
Example 1
Let \((e_{1}, e_{2}, e_{3}, e_{4})\) be the standard basis in \({\mathbb {R}}^{4}\). Denote \(\mathcal {A}=(a_{1}, a_{2}, a_{3})\) and \({\mathcal{B}}=(b_{1},\ldots , b_{5})=(e_{1}, e_{2}, e_{3}, e_{4}, e_{4})\). For
$$M=[m_{ij}]= \left[ \begin{array}{rrrrr} -1 & 1 & -1 & -1 & 1\\ 1 & 1 & 1 & -1 & 1\\ 1 & -1 & 1 & 1 & 1 \end{array}\right]$$
we have \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\), that is,
$$\begin{array}{ccc} a_{1} & = & [-1, 1, -1, 0],\\ a_{2} & = & [1, 1, 1, 0],\\ a_{3} & = & [1, -1, 1, 2]. \end{array} $$
One can check that \(\mathcal {A}\) is linearly independent.
First, we find \(\mathcal {C}\) using the algorithm via row echelon forms.
1. Add row 1 to rows 2 and 3 to get
$$G= \left[ \begin{array}{rrrrr} -1 & 1 & -1 & -1 & 1\\ 0 & 2 & 0 & -2 & 2\\ 0 & 0 & 0 & 0 & 2 \end{array}\right].$$
2. \(R=\left \{3,4\right \}\).
3. \(\mathcal {C}=(a_{1}, a_{2}, a_{3}, b_{3}, b_{4})=(a_{1}, a_{2}, a_{3}, e_{3}, e_{4})\).
Second, we find \(\mathcal {C}\) using the algorithm via basic minors.
1. Apply the method described in Remark 2. Select \({\Delta }(1)=m_{35} = 1 \ne 0\). The minor \({\Delta }(2)=\det \left [ \begin {array}{cc} m_{2 4} & m_{2 5}\\ m_{3 4} & m_{3 5} \end {array}\right ]= \det \left [ \begin {array}{rr} -1 & 1\\ 1 & 1 \end {array}\right ]=-2\) contains Δ(1) and is nonzero. The minor \({\Delta }(3)=\det (\tilde {M})=\det ([m_{ij}]_{1\le i\le 3,\, 3 \le j \le 5})=4\) contains Δ(2) and is nonzero, thus is a basic minor.
2. \(R=\left \{1,2\right \}\).
3. \(\mathcal {C}=(a_{1}, a_{2}, a_{3}, b_{1}, b_{2})=(a_{1}, a_{2}, a_{3}, e_{1}, e_{2})\).
Now we find \(\mathcal {C}\) using pivoting.
1. Since \(m_{11} = -1 \ne 0\), we can exchange the vectors \(a_{1}\) and \(b_{1}\), and get \([b_{1}\, a_{2}\, a_{3}]^{T}=M^{\prime } [a_{1}\, b_{2}\, b_{3}\, b_{4}\, b_{5}]^{T}\) with
$$M^{\prime}=[m^{\prime}_{ij}]= \left[ \begin{array}{rrrrr} -1 & 1 & -1 & -1 & 1\\ -1 & 2 & 0 & -2 & 2\\ -1 & 0 & 0 & 0 & 2 \end{array}\right].$$
2. Since \(m^{\prime }_{2 2}=2\ne 0\), we can exchange the vectors \(a_{2}\) and \(b_{2}\), and get \([b_{1}\, b_{2}\, a_{3}]^{T}=M^{\prime \prime } [a_{1}\, a_{2}\, b_{3}\, b_{4}\, b_{5}]^{T}\) with
$$M^{\prime\prime}=[m^{\prime\prime}_{ij}]= \left[ \begin{array}{rrrrr} -1/2 & 1/2 & -1 & 0 & 0\\ 1/2 & 1/2 & 0 & 1 & -1\\ -1 & 0 & 0 & 0 & 2 \end{array}\right].$$
3. Since \(m^{\prime \prime }_{3 5}=2\ne 0\), we can exchange the vectors \(a_{3}\) and \(b_{5}\), and get \([b_{1}\, b_{2}\, b_{5}]^{T}=M^{\prime \prime \prime } [a_{1}\, a_{2}\, b_{3}\, b_{4}\, a_{3}]^{T}\) with some matrix \(M^{\prime \prime \prime }\).
4. \(\mathcal {C}=(a_{1}, a_{2}, a_{3}, b_{3}, b_{4})=(a_{1}, a_{2}, a_{3}, e_{3}, e_{4})\).
Remark 1
The number of arithmetic operations (additions, subtractions, multiplications, and divisions) in the algorithms we consider is as follows (recall that s ≥ r = rank(M)).
(i) Algorithm via row echelon forms, provided M is transformed using Gaussian elimination: \(r^{2}(s-\frac {r}{3}) + O(r(s+r))\).
(ii) Algorithm via basic minors, provided Δ is indicated using the method described in Remark 2 with the intermediate minors found by elementary operations: the same as in (i).
(iii) Algorithm via the exchange of vectors: \(2r^{2}s + O(r(s+r))\).
3 Correctness of the algorithm via row echelon forms
We can assume that the matrix G is in the reduced row echelon form. Indeed, if it is not the case, we transform G further to the reduced row echelon form using Gauss-Jordan elimination. This does not affect R.
M is transformed to G by means of a finite sequence of elementary operations. Apply these operations simultaneously to both sides of (1), namely at the left-hand side to the column matrix \(\mathcal {A}^{T}\) and at the right-hand side to the matrix M. We get
$$ \left[ \begin{array}{c} a_{1}^{\prime}\\ a_{2}^{\prime}\\ \vdots\\ a_{r}^{\prime} \end{array}\right] =G \left[ \begin{array}{c} b_{1}\\ b_{2}\\ \vdots\\ b_{s} \end{array}\right]. $$
(2)
Denote \(\mathcal {A}^{\prime }=(a_{1}^{\prime }, \ldots , a_{r}^{\prime })\). Since the elementary operations on a system of vectors do not affect either its span or its linear independence, we conclude that
$$ \text{span } \mathcal{A}^{\prime} = \text{span } \mathcal{A} $$
(3)
and \(\mathcal {A}^{\prime }\) is linearly independent. In particular, this means that the system \(\mathcal {A}^{\prime }\) consists of nonzero vectors. Thus, all the rows of the matrix G are nonzero. The number of nonzero rows of any matrix in (the reduced) row echelon form is less than or equal to the number of its columns. The inequality r ≤ s follows.
Put \(G = [g_{ij}]\). For i = 1,…,r denote by \(j_{i}\) the label of the column of G which contains the leading coefficient of the i-th row. Let E be the set of these labels, i.e., \(E=\left \lbrace j_{1}, \ldots , j_{r}\right \rbrace \). So \(R=\left \lbrace l_{1}, \ldots , l_{s-r} \right \rbrace =\left \lbrace 1, \ldots , s\right \rbrace \setminus E\) and
$$\mathcal{C}=(a_{1}, \ldots, a_{r}, b_{l_{1}}, \ldots, b_{l_{s-r}}).$$
Since \( \text {span } \mathcal {A} \subset \text {span } {\mathcal{B}}\), we have \(\text {span } \mathcal {C} \subset \text {span } {\mathcal{B}}\). In order to prove the converse inclusion \(\text {span } {\mathcal{B}} \subset \text {span } \mathcal {C}\), it suffices to check that \(b_{j} \in \text {span } \mathcal {C}\) for each j ∈ E. By the definition of the reduced row echelon form, the submatrix of G consisting of the columns labeled by j ∈ E is the identity matrix. Thus, by (2), we have for i = 1,…,r
$$ a_{i}^{\prime} = b_{j_{i}}+\sum\limits_{l \in R} g_{i l} b_{l}. $$
Hence, for each j ∈ E,
$$b_{j} \in \text{span } \mathcal{A}^{\prime} + \text{span } (b_{l})_{l \in R}.$$
Thus, \(\text {span } {\mathcal{B}} \subset \text {span } \mathcal {C}\) by (3) and the definition of \(\mathcal {C}\).
4 Basic minors
From now on, we use the following notation. For a matrix \(A \in M_{k,l}(K)\), we denote by \(A_{i}\) (resp. \(A^{j}\)) the i-th row (resp. the j-th column) of A. If \(S \subset \left \lbrace 1, \ldots , l\right \rbrace \), then \(A_{S}\) denotes the submatrix of A consisting of the columns \(A^{j}\), where j ∈ S.
Let \(D = [d_{ij}] \in M_{k,l}(K)\) and let Δ be a nonzero minor of D of order p. We call Δ a basic minor of D if either \( p = {\min \limits } (k,l)\) or all of the minors of order p + 1 that contain Δ, i.e., from which Δ is obtained by crossing out a row and a column, vanish.
Lemma 1
Let \(\tilde {D}\) be a square submatrix of D. If \({\Delta } = \det (\tilde {D})\) is a basic minor of D, then the system consisting of the rows of D intersecting \(\tilde {D}\) is a basis of the row space \(\text{span}(D_{1}, D_{2}, \ldots , D_{k})\) of D.
For a proof using the Laplace expansion, see [6, pp. 9–10]. Although in [6] the definition of a basic minor differs from the one we use by demanding the maximality of the order of a nonzero minor, the proof referred to only uses the conditions we impose. The equivalence of these two definitions is irrelevant to our note; actually, it is implied by the equinumerosity of bases.
Remark 2
There is a concise inductive procedure of indicating a basic minor of a matrix, called the
bordering minors method. Since this method is presented in very few textbooks available in English (e.g., [
4, pp. 70–73]), let us sketch it here. Given a nonzero matrix in
M
_{k, l}(
K), select any nonzero first order minor Δ(1) of it. In the inductive
n th step, the input is the nonzero minor Δ(
n). If
\(n=\min \limits (k,l)\), then Δ(
n) is a basic minor and the procedure terminates. Else, check only those minors of order
n + 1 that contain Δ(
n), i.e., from which Δ(
n) is obtained by crossing out a row and column. If all of these minors are zero, then Δ(
n) is a basic minor and the procedure terminates. Else, choose the first encountered nonzero one as Δ(
n + 1) and proceed to the next step.
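The bordering minors method can be implemented directly. The following Python sketch is illustrative only (not from the paper): labels are 0-based, the determinant is a naive Laplace expansion that is adequate for the small minors examined here, and exact rationals stand in for K.

```python
from fractions import Fraction

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        sub = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(sub)
    return total

def bordering_minors(M):
    """Return (rows, cols) labeling a basic minor of the nonzero matrix M,
    found by growing a nonzero minor one bordering row and column at a time."""
    M = [[Fraction(x) for x in row] for row in M]
    k, l = len(M), len(M[0])
    # step 1: pick any nonzero entry as the first-order minor Delta(1)
    start = next(((i, j) for i in range(k) for j in range(l) if M[i][j] != 0),
                 None)
    if start is None:
        raise ValueError("the zero matrix has no basic minor")
    rows, cols = [start[0]], [start[1]]
    while len(rows) < min(k, l):
        found = next(
            ((i, j) for i in range(k) if i not in rows
                    for j in range(l) if j not in cols
             if det([[M[p][q] for q in sorted(cols + [j])]
                     for p in sorted(rows + [i])]) != 0),
            None)
        if found is None:  # all bordering minors vanish: Delta(n) is basic
            break
        rows, cols = sorted(rows + [found[0]]), sorted(cols + [found[1]])
    return rows, cols
```

Only the minors that border the current one are ever evaluated, which is exactly what keeps the method concise compared with searching all minors of maximal order.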
5 Spanning criterion
Let \(\mathcal {A}=\left (a_{1}, \ldots , a_{r}\right )\) and \({\mathcal{B}}=\left (b_{1}, \ldots , b_{s}\right )\) be two systems of vectors, where r ≤ s. Note well that we do not assume that the system \(\mathcal {A}\) is linearly independent. Suppose that \(\text {span } \mathcal {A} \subset \text {span } {\mathcal{B}}\). This means that there exists a matrix \(M \in M_{r,s}(K)\) such that \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\). For a given set \(R \subset \left \lbrace 1, \ldots , s\right \rbrace \) of cardinality (s − r), define \(\mathcal {C}\) to be the concatenation of the systems \(\mathcal {A}\) and \((b_{l})_{l \in R}\). Define \({\Delta }=\det (M_{E})\), where \(E= \left \lbrace 1, \ldots , s\right \rbrace \setminus R\). We have the following spanning criterion.
Lemma 2
If Δ ≠ 0, then \(\text {span } \mathcal {C} = \text {span } {\mathcal{B}}\).
Proof
It simplifies the exposition of our argument, and causes no loss of generality, to assume that \(E=\left \{1, \ldots , r\right \}\). Indeed, it is enough to permute appropriately the vectors of \({\mathcal{B}}\) and the columns of M. Then \(R=\left \{r+1, \ldots , s\right \}\) and the new M equals the block matrix \(\left [ \begin {array}{cc} M_{E} & M_{R} \end {array}\right ]\).
The condition \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\) reads
$$ \left[ \begin{array}{c} a_{1}\\ \vdots\\ a_{r} \end{array}\right]=M_{E} \left[ \begin{array}{c} b_{1}\\ \vdots\\ b_{r} \end{array}\right] +M_{R}\left[ \begin{array}{c} b_{r+1}\\ \vdots\\ b_{s} \end{array}\right]. $$
(4)
Since \({\Delta }=\det (M_{E})\ne 0 \), the matrix \(M_{E}\) is invertible. On multiplying (4) by \(M_{E}^{-1}\) we get
$$ \left[ \begin{array}{c} b_{1}\\ \vdots\\ b_{r} \end{array}\right]= M_{E}^{-1} \left[ \begin{array}{c} a_{1}\\ \vdots\\ a_{r} \end{array}\right] - M_{E}^{-1} M_{R}\left[ \begin{array}{c} b_{r+1}\\ \vdots\\ b_{s} \end{array}\right], $$
so \(\text {span } {\mathcal{B}} \subset \text {span } \mathcal {C}\). The converse inclusion is clear. □
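The criterion of Lemma 2 is easy to test mechanically. The sketch below is illustrative (not from the paper): `spanning_criterion` is a hypothetical name, labels are 0-based, and the determinant is a naive Laplace expansion over exact rationals.

```python
from fractions import Fraction

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M[0])))

def spanning_criterion(M, R):
    """Compute Delta = det(M_E) with E the complement of R; by Lemma 2,
    Delta != 0 guarantees that C = A followed by (b_l)_{l in R} spans
    span(B). Labels in R are 0-based."""
    r, s = len(M), len(M[0])
    E = [j for j in range(s) if j not in R]
    if len(E) != r:
        raise ValueError("R must have cardinality s - r so that M_E is square")
    delta = det([[Fraction(M[i][j]) for j in E] for i in range(r)])
    return delta != 0
```

Keep in mind that the criterion is only sufficient in general; Section 7 shows that it becomes necessary precisely when \({\mathcal{B}}\) is linearly independent.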
6 Correctness of the algorithm via basic minors
Let I be the set of the labels of those rows of M which intersect with \(\tilde {M}\). Suppose that \(I\ne \left \{1, \ldots, r\right \}\). Take any \(i_{0} \in \left \{1, \ldots, r\right \} \setminus I\). By Lemma 1 on basic minors, the row \(M_{i_{0}}\) is a linear combination of the system \((M_{i})_{i\in I}\). Hence, by (1), the vector \(a_{i_{0}}\) is the corresponding linear combination of the system \((a_{i})_{i\in I}\). This contradicts the assumption that \(\mathcal {A}\) is linearly independent. Thus, \(I= \left \{1, \ldots, r\right \}\), so the order of Δ equals r. The inequality r ≤ s follows.
From Lemma 2, we get that \(\text {span } \mathcal {C} = \text {span } {\mathcal{B}}\).
Remark 3
Let \(E= \left \lbrace 1, \ldots , s\right \rbrace \setminus R\), where R is as in the first algorithm. Then \(\det (M_{E})\) is one of the basic minors of M since \(\det (G_{E}) \ne 0\). If one chooses \(\tilde {M}=M_{E}\) in the second algorithm, then the corresponding system \(\mathcal {C}\) equals the one from the first.
Remark 4
The part of the above proof concerning the inequality r ≤ s is essentially the same as in [2, pp. 30–31]. This inequality can also be deduced, alternatively to our direct argument from Section 3 and to the inductive procedure, from the theorem on the existence of nontrivial solutions to systems of homogeneous linear equations in which the number of variables exceeds the number of equations (see, e.g., [1], Chapter 3, Proposition 3.16 and Chapter 1, Corollary 2.17). This theorem is a corollary of the fact that every matrix can be brought to the reduced row echelon form by Gauss-Jordan elimination.
7 More on the spanning criterion
The condition in Lemma 2 is in general not necessary, even when \(\mathcal {A}\) is linearly independent. Indeed, we have the following example.
Example 2
Let \((e_{1}, e_{2}, e_{3})\) be the standard basis in \(\mathbb {R}^{3}\). Denote \(a_{1} = [1,-1,0]\) and \(a_{2} = [1,1,0]\). Define \(\mathcal {A}=(a_{1}, a_{2})\) and \({\mathcal{B}}=(e_{1}, e_{2}, e_{3}, e_{3})\). Thus, for \(M= \left [ \begin {array}{rrrr} 1 & -1 & 0 & 0\\ 1 & 1 & 0 & 0 \end {array}\right ]\), we have \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\). Set \(R=\left \{2,4\right \}\). Then \(\mathcal {C}=(a_{1}, a_{2}, e_{2}, e_{3})\), so \(\text {span } \mathcal {C}=\text {span } {\mathcal{B}}\) but Δ = 0.
However, under the assumption that \({\mathcal{B}}\) is linearly independent, the condition turns out to be necessary.
Proposition 3
Δ ≠ 0 if and only if \(\text {span } \mathcal {C}=\text {span } {\mathcal{B}}\), provided \({\mathcal{B}}\) is linearly independent.
Proof
Since we have Lemma 2, we only need to prove that the assumption \(\text {span } \mathcal {C} = \text {span } {\mathcal{B}}\) implies Δ ≠ 0. As in the proof of Lemma 2, we assume that \(E=\left \{1, \ldots , r\right \}\). Write \( \text {span } {\mathcal{B}} \subset \text {span } \mathcal {C}\) and \(\text {span } \mathcal {C} \subset \text {span } {\mathcal{B}}\) as
$$ \mathcal{B}^{T} = X \mathcal{C}^{T} \quad\text{and}\quad \mathcal{C}^{T} = Y \mathcal{B}^{T}, $$
(5)
with \(X \in M_{s,s}(K)\) and
$$ Y=\left[ \begin{array}{cc} M_{E} & M_{R}\\ 0 & I \end{array}\right], $$
(6)
where \(0 \in M_{s-r,r}(K)\) is the zero matrix and \(I \in M_{s-r,s-r}(K)\) is the identity matrix. By (5),
$$ \mathcal{B}^{T} = X Y \mathcal{B}^{T}. $$
Since \({\mathcal{B}}\) is linearly independent, it follows that XY is the identity matrix, so \(\det (X Y) = 1\). Thus \({\Delta }=\det (M_{E})= \det (Y) \ne 0\) by (6) and the multiplicativity of the determinant. □
Remark 5
If \({\mathcal{B}}\) is linearly independent and Δ ≠ 0, then both \(\mathcal {A}\) and \(\mathcal {C}\) are linearly independent. This is clear by the equinumerosity of bases.
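Proposition 3 lends itself to a brute-force spot check: when \({\mathcal{B}}\) is linearly independent, Δ ≠ 0 should hold exactly when \(\mathcal {C}\) spans \(\text {span } {\mathcal{B}}\), for every admissible R. The sketch below is illustrative only (0-based labels, hypothetical names, exact rationals); it compares ranks to decide the spanning condition.

```python
from fractions import Fraction
from itertools import combinations

def rank(rows):
    """Rank of a list of vectors via Gaussian elimination over Q."""
    A = [[Fraction(x) for x in row] for row in rows]
    rk = 0
    for j in range(len(A[0])):
        piv = next((k for k in range(rk, len(A)) if A[k][j] != 0), None)
        if piv is None:
            continue
        A[rk], A[piv] = A[piv], A[rk]
        for k in range(rk + 1, len(A)):
            f = A[k][j] / A[rk][j]
            A[k] = [a - f * b for a, b in zip(A[k], A[rk])]
        rk += 1
    return rk

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M[0])))

def check_proposition3(M, B):
    """For linearly independent B and every R of size s - r, verify that
    det(M_E) != 0 iff C = A + (b_l)_{l in R} spans span(B)."""
    r, s = len(M), len(M[0])
    # recover A from A^T = M B^T
    A = [[sum(Fraction(M[i][j]) * B[j][t] for j in range(s))
          for t in range(len(B[0]))] for i in range(r)]
    for R in combinations(range(s), s - r):
        E = [j for j in range(s) if j not in R]
        delta = det([[Fraction(M[i][j]) for j in E] for i in range(r)])
        C = A + [list(B[l]) for l in R]
        assert (delta != 0) == (rank(C) == rank(list(B)))
    return True
```

Running this on Example 2's matrix with the dependent system \({\mathcal{B}}=(e_{1}, e_{2}, e_{3}, e_{3})\) replaced by an independent one exercises exactly the hypothesis under which the equivalence holds.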
8 Two additional algorithms
Assuming the linear independence of \({\mathcal{B}}\) rather than of \(\mathcal {A}\), we have two algorithms analogous to those in Section 2. Their correctness can easily be established by combining our arguments in Sections 3 and 6 with Remark 5.
Input: Two systems of vectors \(\mathcal {A}=\left (a_{1}, \ldots , a_{r} \right )\) and \({\mathcal{B}}=\left (b_{1}, \ldots , b_{s} \right )\), and a matrix M such that \({\mathcal{B}}\) is linearly independent and \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\).
Algorithm I.
Output: A system \(\mathcal {A}^{*}\) which is a basis of \(\text {span } \mathcal {A}\), and a system \(\mathcal {C}\) which is the completion of \(\mathcal {A}^{*}\) by some vectors of \({\mathcal{B}}\) to a basis of \(\text {span } {\mathcal{B}}\).
1. Transform M into a matrix G in row echelon form.
2. Let R be the set of the labels of those columns of G which do not contain the leading coefficients.
3. Set \(\mathcal {A}^{*}=(G_{i}{\mathcal{B}}^{T})_{i \in \left \{1, \ldots , \text {rank} (G)\right \}}\).
4. Set \(\mathcal {C}\) to be the concatenation of the systems \(\mathcal {A}^{*}\) and \((b_{l})_{l \in R}\).
Algorithm II.
Output: A subsystem \(\tilde {\mathcal {A}}\) of \(\mathcal {A}\) which is a basis of \(\text {span } \mathcal {A}\), and a system \(\mathcal {C}\) which is the completion of \(\tilde {\mathcal {A}}\) by some vectors of \({\mathcal{B}}\) to a basis of \(\text {span } {\mathcal{B}}\).
1. Find a basic minor \({\Delta }= \det (\tilde {M})\) of M.
2. Let I be the set of the labels of those rows of M which intersect with \(\tilde {M}\), and R the set of the labels of those columns of M which do not intersect with \(\tilde {M}\).
3. Set \(\tilde {\mathcal {A}}=(a_{i})_{i \in I}\).
4. Set \(\mathcal {C}\) to be the concatenation of the systems \(\tilde {\mathcal {A}}\) and \((b_{l})_{l \in R}\).
Example 3
Let \({\mathcal{B}}=(e_{1}, e_{2}, e_{3}, e_{4})\) and \(\mathcal {A}^{T} = M {\mathcal{B}}^{T}\), where
$$M=[m_{ij}]= \left[ \begin{array}{rrrr} 1 & -1 & 1 & -1\\ -1 & -1 & -1 & -1\\ 0 & 1 & 0 & 1 \\ -1 & 0 & -1 & 0 \end{array}\right].$$
First, we find \(\mathcal {A}^{*}\) and \(\mathcal {C}\) using Algorithm I.
1. Add row 1 to rows 2 and 4; add row 2 multiplied by 1/2 to row 3; subtract row 2 multiplied by 1/2 from row 4. The result is
$$G= \left[ \begin{array}{rrrr} 1 & -1 & 1 & -1 \\ 0 & -2 & 0 & -2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right].$$
2. \(R=\left \{3,4\right \}\).
3. \(\mathcal {A}^{*}=(G_{1}, G_{2})\).
4. \(\mathcal {C}=(G_{1}, G_{2}, e_{3}, e_{4})\).
Second, we find \(\tilde {\mathcal {A}}\) and \(\mathcal {C}\) using Algorithm II.
1. Select \({\Delta }(1)=m_{43} = -1 \ne 0\). The minor \({\Delta }(2)=\det (\tilde {M})= \det \left [ \begin {array}{cc} m_{3 3} & m_{3 4}\\ m_{4 3} & m_{4 4} \end {array}\right ]= \det \left [ \begin {array}{rr} 0 & 1\\ -1 & 0 \end {array}\right ]=1\) contains Δ(1) and is nonzero. All four minors of order 3 that contain Δ(2) are zero. Thus Δ(2) is a basic minor.
2. \(I=\left \{3,4\right \}\), \(R=\left \{1,2\right \}\).
3. \(\tilde {\mathcal {A}}=(a_{3},a_{4})=(M_{3}, M_{4})\).
4. \(\mathcal {C}=(a_{3}, a_{4}, e_{1}, e_{2})\).
Remark 6
Given a vector space and a finite system of generating vectors, one can find a basis by successively examining the vectors in the system and retaining only those that are not linear combinations of their predecessors. This common idea, presented in standard (linear) algebra textbooks (see, e.g., [1], Chapter 3, Proposition 3.13), can be turned into the following algorithm with the same input and output as in Algorithm II.
Standard algorithm.
1. Form the block matrix \(\left [\begin {array}{cc} M^{T} & I \end {array}\right ]\), where \(I \in M_{s,s}(K)\) is the identity matrix, and transform it into a matrix H in row echelon form.
2. Let E be the set of the labels of those columns of H which contain the leading coefficients. Put \(K= \left \{j \in E: j \leq r\right \}\) and \(R = \left \{j - r : j \in E \setminus K\right \}\).
3. Set \(\tilde {\mathcal {A}}=(a_{i})_{i \in K}\).
4. Set \(\mathcal {C}\) to be the concatenation of the systems \(\tilde {\mathcal {A}}\) and \((b_{l})_{l \in R}\).
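The standard algorithm above amounts to one row reduction of the block matrix \([M^{T}\; I]\). Here is an illustrative Python sketch (not from the paper; 0-based labels, exact rationals standing in for K):

```python
from fractions import Fraction

def standard_algorithm(A_vectors, B_vectors, M):
    """Row-reduce [M^T | I]; pivot columns with label < r select the kept
    rows of A, the remaining pivot columns select the completing b_l."""
    r, s = len(M), len(M[0])
    # s x (r + s) block matrix [M^T | I] over the rationals
    H = [[Fraction(M[i][j]) for i in range(r)]
         + [Fraction(1 if j == n else 0) for n in range(s)]
         for j in range(s)]
    pivots, i = [], 0
    for j in range(r + s):
        if i == s:
            break
        k = next((k for k in range(i, s) if H[k][j] != 0), None)
        if k is None:
            continue
        H[i], H[k] = H[k], H[i]
        for k in range(i + 1, s):
            f = H[k][j] / H[i][j]
            H[k] = [a - f * b for a, b in zip(H[k], H[i])]
        pivots.append(j)
        i += 1
    K = [j for j in pivots if j < r]           # labels of kept a_i
    R = [j - r for j in pivots if j >= r]      # labels of completing b_l
    A_tilde = [A_vectors[i] for i in K]
    return A_tilde, A_tilde + [B_vectors[l] for l in R]
```

The columns of \(M^{T}\) represent the vectors of \(\mathcal {A}\) and the appended identity columns represent \({\mathcal{B}}\), so a pivot in a column means the corresponding vector is not a linear combination of its predecessors, which is exactly the retention rule of the common idea above.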
The number of arithmetic operations in the standard algorithm is
$$2 \sum\limits_{n=1}^{s} n(r+n) + O(s(s+r))= s^{2}(r+\frac{2}{3}s) + O(s(s+r)). $$
For large r = s, the number of arithmetic operations in the algorithms we consider in this section is approximately as follows.
(i) Algorithm I: \(\frac {2}{3}s^{3}\).
(ii) Algorithm II: the same as in (i).
(iii) Standard algorithm: \(\frac {5}{3}s^{3}\).
(iv) Algorithm via the exchange of vectors: \(2s^{3}\).
Acknowledgments
We are grateful to the referees for valuable suggestions.