Published in: Quantum Information Processing 5/2021

Open Access 01.05.2021

Understanding mathematics of Grover’s algorithm

Author: Paweł J. Szabłowski


Abstract

We analyze the mathematical structure of the classical Grover’s algorithm and put it within the framework of linear algebra over the complex numbers. We also generalize it in the sense that we are seeking not one ‘chosen’ element (sometimes called a ‘solution’) of the dataset, but a set of m such ‘chosen’ elements (out of \(n>m\)). Besides, we do not assume that the so-called initial superposition is uniform. We assume also that we have at our disposal an oracle that ‘marks,’ by a suitable phase change \(\varphi \), all these ‘chosen’ elements. In the first part of the paper, we construct a unique unitary operator that selects all ‘chosen’ elements in one step. The constructed operator is uniquely defined by the numbers \(\varphi \) and \(\alpha \), which is a certain function of the coefficients of the initial superposition. Moreover, it has the form of a composition of two so-called reflections. The result is purely theoretical, since the phase change required to reach it depends heavily on \(\alpha \). In the second part, we construct unitary operators having the form of a composition of two or more reflections (generalizing the constructed operator), given a set of orthogonal versors. We find properties of these operators, in particular, of their compositions. Further, by considering a fixed, ‘convenient’ phase change \(\varphi \) and by sequentially applying the so-constructed operator, we find the number of steps needed to find these ‘chosen’ elements with high probability. We apply this knowledge to study generalizations of Grover’s algorithm (\(m=1\), \(\varphi =\pi \)), which have the form of the previously found unitary operators.
Notes
The author is very grateful to an unknown referee for pointing out important items of the literature overlooked in the first version of the paper.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Let us note at the beginning that we will use a notation very common in quantum physics, i.e., the Dirac notation. We will deal with finite- or infinite-dimensional Hilbert spaces \(\mathbb {H}\) over the complex numbers. \(\left| x\right\rangle \) will denote an element of this space (column vector in the case of the finite-dimensional linear space over the complex numbers), \(\left\langle x\right| \) will denote the dual conjugate of \(\left| x\right\rangle \), i.e., row vector in the finite-dimensional case, with entries that are conjugates of the entries of the vector \(\left| x\right\rangle \). \(\left\langle a\right. \left| b\right\rangle \) will denote the inner (scalar) product in the space \(\mathbb {H}\), while \(\left| a\right\rangle \left\langle b\right| \) will denote the so-called outer product that is explained below.
Namely, the outer product of two vectors \(\left| x\right\rangle \), \(\left| y\right\rangle \), denoted by \(\left| x\right\rangle \left\langle y\right| \), is the rank-one operator mapping \(\mathbb {H}\) into the one-dimensional space \({\text {span}}(\left| x\right\rangle )\). More precisely
$$\begin{aligned} \left| x\right\rangle \left\langle y\right| :\mathbb {H\ni }\left| z\right\rangle {\rightarrow }\left| x\right\rangle \left\langle y\right. \left| z\right\rangle \in {\mathbb {H}}\text {.} \end{aligned}$$
Let \(\varvec{A}:{{{\mathbb {H}\rightarrow \mathbb {H}}}}\) be a unitary operator (matrix in the case of finite-dimensional \(\mathbb {H}\)) of the form \(\varvec{A}=\varvec{I}-2\left| \varvec{1}\right\rangle \left\langle \varvec{1}\right| ,\) with \(\varvec{I}\) denoting the identity operator (matrix) and \(\left| \varvec{1}\right\rangle \) being the vector of unit length which, in the case of n-dimensional \(\mathbb {H}\), has all entries equal to \(1/\sqrt{n}\).
By \(\left| b_{j}\right\rangle \) let us denote, here, versors having all entries but the j-th equal to 0 and the j-th equal to 1. In terms of quantum mechanics, the versors \(\left| b_{j}\right\rangle \) are the so-called pure states and the vector \(\left| \varvec{1}\right\rangle \) is, in fact, a certain superposition of the pure states \(\left\{ \left| b_{j}\right\rangle \right\} _{j=1}^{n}\).
The basis of the famous Grover’s algorithm (see, e.g., [1, 4] or [6]) is the following remarkable property of the operator \(\varvec{A}\) for \(n=4.\) Namely, if we consider versors \(\left| j\right\rangle ,\) \(j=1,\ldots ,4\) having all entries except for the j-th equal to 1/2 and the j-th equal to \(-1/2,\) then \(\varvec{A}\left| j\right\rangle =-\left| b_{j}\right\rangle \) for all \(j=1,\ldots ,4\).
To appreciate this property more, let us imagine that one has a database consisting of 4 elements. We are looking for a particular, ‘chosen’ element of this base. Suppose that a certain function (an oracle) can be evaluated at every element of this base. In the considered situation, the oracle assigns the value 1 (in fact, multiplies by 1, since the action of the oracle must be a unitary operation) to all elements except for the ‘chosen’ one, and the value \(-1\) to the ‘chosen’ one. Then, using the operator \(\varvec{A}\), it is possible to regain this ‘chosen’ element in just one step.
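The following short NumPy sketch (ours, not part of the original presentation) verifies this property numerically for \(n=4\).

```python
# Numerical check of the n = 4 property: A = I - 2|1><1| maps each |j>
# (all entries 1/2 except the j-th, which equals -1/2) to -|b_j>.
import numpy as np

n = 4
one = np.full(n, 1 / np.sqrt(n))            # the versor |1>
A = np.eye(n) - 2 * np.outer(one, one)      # A = I - 2|1><1|

for j in range(n):
    ket_j = np.full(n, 0.5)
    ket_j[j] = -0.5                         # the versor |j> from the text
    b_j = np.zeros(n)
    b_j[j] = 1.0                            # the pure state |b_j>
    assert np.allclose(A @ ket_j, -b_j)     # A|j> = -|b_j>
```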
We will generalize the problem discussed above and construct an algorithm that finds m ‘chosen’ (or ‘special’) elements of the database of the size \(n>m.\) The generalization is also in the sense that the marking of ‘selected’ elements by an oracle is done by the so-called phase change \(\varphi \). In the example discussed above, \(\varphi \) was equal to \(\pi \).
It has to be clearly stated that the problem of finding all m chosen elements of an n-element database in one step by a suitable choice of the phase-changing oracle is by no means new. One has to mention a sequence of papers of Long and his associates such as [7–9, 12–14], or [10, 11], or the paper [15], which summarizes these results and puts them in a linear-algebraic context.
In the sequel, we are repeating most of the results of these papers, but from a broader, purely linear-algebraic perspective. Moreover, we show that the structure of Grover’s algorithm, which has the form of a composition of two so-called reflection operators (for the definition and properties of such operators see Sect. 3), is uniquely defined. We also slightly generalize the original statement of the algorithm, assuming that the initial superposition is not uniform. In the sequel, we develop a small theory of analysis of generalized reflection operators.
We will construct a unitary operator that finds all chosen m elements of the database of size n in just one step by properly choosing the phase \(\varphi =2\arcsin \sqrt{\frac{n}{4m}}\), with which the oracle marks chosen elements, provided \(n/m\le 4\). We will also show that so-constructed unitary operator must be of the form:
$$\begin{aligned} ie^{-i\varphi /2}(\varvec{I}-(1-e^{i\varphi })\left| \varvec{1}\right\rangle \left\langle \varvec{1}\right| ), \end{aligned}$$
with \(\left| \varvec{1}\right\rangle \) as described above. Hence, the form of the operator is unique. A more precise development of this thought will be presented in Sect. 2.1; however, we can already state that one cannot find any other finite-dimensional unitary operator in an n-dimensional Hilbert space over the complex numbers that would find all ‘marked elements in the database’ in one step. Thus, Grover’s algorithm is unique and the ‘only one’ that does it. The result is purely theoretical (see also Remark 2); however, in the author’s belief, it adds to a better understanding of Grover’s algorithm.
In particular, it will turn out that if \(n/m=4\) then in just one step, one can find all ‘chosen’ elements when the phase change is \(\varphi =\pi \) like in the classical Grover’s algorithm. To complete the list of simple cases, we find also unitary operators that ‘find the “chosen” element in one step’ when \(n/m=3,\) then \(\varphi =2\pi /3\) and when \(n/m =2\) then \(\varphi =\pi /2\).
In fact, we generalize Grover’s algorithm even further. Namely, we assume that a ‘database’ is given in the form of the vector \(\left| \varvec{y}\right\rangle =\sum _{k=1}^{n}\sigma _{k}\left| b_{k}\right\rangle ,\) with \(\sum _{k=1}^{n}\left| \sigma _{k}\right| ^{2}=1\) and \(\sigma _{k}\ne 0\) for all \(k=1,\ldots ,n,\) and we are interested in getting all the elements of this database that are ‘marked’ by an oracle, i.e., all the elements of the set \(\left\{ \left| b_{k}\right\rangle \right\} _{k\in S}\) for some subset S of the set \(\left\{ 1,\ldots ,n\right\} \). In quantum mechanical terms, the state \(\left| \varvec{y}\right\rangle \) is a superposition of the states \(\left\{ \left| b_{k}\right\rangle \right\} _{k=1}^{n}\) with weights \(\left\{ \sigma _{k}\right\} _{k=1}^{n}\). The results similar to the ones described above are presented in Theorem 1.
One has to remark that, obviously, if one makes a measurement (the so-called collapse of the superposition) after applying our unitary operator \(\varvec{A}\), one would get only one of those ‘selected’ elements. A similar situation would happen if we would have made a measurement on the database itself.
Generally, we can use the outcome of operator \(\varvec{A}\) in at least two ways. Firstly, if the phase change \(\varphi \) is exactly equal to \(2\arcsin \sqrt{\frac{n}{4m}}\), we can treat the outcome as a new, smaller database of size m. That is, the outcome is now the superposition of the m chosen elements.
Secondly, if the phase change \(\varphi \) is not exactly equal to \(2\arcsin \sqrt{\frac{n}{4m}}\), then one can apply compositions of the operator A with itself. That is, consider \(A^{k}\) and find k such that the probability of observing elements from the set of ‘marked’ elements is greater than, say, .99999. How to find such k is given below, e.g., by Formula (3.15). Notice that estimating the probability of the outcome is particularly important in the case of quantum systems because of their random nature. Let us remark that the classical Grover’s algorithm works the same way with \(m=1\) and \(\varphi =\pi .\)
How to find such k and, more generally, how to analyze and deal with compositions of reflection operators (the operator \(\varvec{A}\) itself and \(\varvec{A}^{k}\) are examples of such compositions) is shown in Sect. 3, where the mathematical foundations of our analysis of the generalized Grover’s algorithm are presented in a general setting. More specifically, we will define a certain class of unitary operators, generated, so to say, by a system of orthonormal versors. Then, we prove certain properties of this class of operators; in particular, we show that the operators of this class constitute a non-commutative, multiplicative ring over the complex numbers. As stated above, the operator considered in the literature and known under the name of Grover’s algorithm belongs to this class. Finally, we will make use of the properties of this class of unitary operators to analyze Grover’s algorithm and to prove known properties of this algorithm in a different way; in particular, we will estimate the number of its iterations needed to get the probability of obtaining the ‘chosen’ element very close to one. See Subsection 3.1.
Returning to the exact choice of the phase change mentioned above, the outcome of the operator \(\varvec{A}\), treated as the new, smaller database, can be used in various ways. The simplest way seems to be speeding up Grover’s algorithm, at least in some special cases.
To see what we mean and how this idea of speeding up Grover’s algorithm would work, let us imagine that we have a database of the size 64 and that we are interested in finding a chosen element. Imagine that the elements of this base are assigned labels \(^{\prime }i_{1}i_{2}i_{3}^{\prime }\) where \(i_{j} \in \{1,2,3,4\},\) \(j=1,2,3.\) Suppose that the ‘chosen’ element has label \(^{\prime }111^{\prime }.\) So first we apply an oracle that marks all elements with \(i_{1}=1.\) Hence, by applying operator \(\varvec{A}\) (with \(\varphi =\pi \) as in the Grover’s algorithm case) we will get the ‘new’ database (of the size 16) with elements labeled \(^{\prime }1i_{2}i_{3}^{\prime }.\) We apply a new oracle that marks all elements with \(i_{2}=1,\) and then the operator \(\varvec{A}\) (again with \(\varphi =\pi \)), getting the new database of size 4 with elements labeled \(^{\prime }11i_{3}^{\prime }.\) And finally we apply an oracle and the operator \(\varvec{A}\) (again with \(\varphi =\pi \)) one last time, getting our element with label \(^{\prime }111^{\prime }.\) Notice that we obtained our ‘chosen element’ in 3 steps, not in \(\left\lfloor 8\pi /4+1\right\rfloor =7\) as it would be in the case of the classical Grover’s algorithm! This idea can be easily generalized, and we could see that in the case of \(n=4^{k}\) we could get the chosen element in k steps, not in \(\left\lfloor \pi 2^{k-2}+1\right\rfloor \) as the results of Grover’s algorithm would suggest.
Another possible way of ‘speeding up’ Grover’s algorithm using the results of the paper, although less spectacular than the example above, could be reducing the number of iterations needed to get the ‘chosen’ element of the base with very high probability. Namely, suppose the size of the base is \(n = n_{1}n_{2}.\) Then, direct application of Grover’s algorithm would require \(\left\lfloor \pi \sqrt{n_{1}n_{2}}/4+1\right\rfloor \) iterates, while a two-stage application of Grover’s algorithm, firstly with \(m= n_{1}\) and then with \(m=1\), with the vector \(\left| \varvec{1}\right\rangle \) in the original Grover’s algorithm substituted in the second stage by the outcome of the first stage, would require only \(\left\lfloor \pi \sqrt{n_{2}}/4+1\right\rfloor +\left\lfloor \pi \sqrt{n_{1}}/4+1\right\rfloor \) iterations. That is why we needed the generalization of Grover’s algorithm presented in Theorem 1. Of course, such a reduction is possible under the assumption that we have some partial knowledge about the ‘chosen’ item, i.e., that this item is an element of a known subset of size \(n_{1}.\)
The reduction of the number of iterates was possible, since, according to Remark 8, the necessary number of iterates to find the set of chosen elements with probability close to 1 is \(\left\lfloor \frac{\pi }{4}\sqrt{\frac{n}{m}}+1\right\rfloor \).
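A minimal sketch (ours; it only evaluates the iteration count quoted above, \(\left\lfloor \pi \sqrt{n/m}/4+1\right\rfloor \)) comparing the one-stage and the staged searches:

```python
# Compare the iteration counts quoted in the text for one-stage and staged searches.
import math

def grover_iterations(n, m=1):
    # floor(pi/4 * sqrt(n/m) + 1), the count used in the text
    return math.floor(math.pi / 4 * math.sqrt(n / m) + 1)

k = 3
print(grover_iterations(4 ** k))   # 7 iterations for the one-stage search of n = 64
print(k)                           # 3 steps for the staged search (n/m = 4 at every stage)

# n = n1 * n2 with partial knowledge that the item lies in a known subset of size n1
n1, n2 = 16, 64
print(grover_iterations(n1 * n2))                      # 26 iterations, direct application
print(grover_iterations(n2) + grover_iterations(n1))   # 7 + 4 = 11, two-stage application
```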
Another purpose of this note is to show that this feature (finding all ‘chosen’ elements in one step, as in the classical Grover’s algorithm) is possible only when the ratio \(n/m \le 4\) and \(\varphi \in [0,\pi ]\). Grover’s algorithm, apart from being a source of other important algorithms (to mention only [3]), was one of the algorithms pointing out the great difference between classical (so to say, sequential) computers and quantum (at least partially parallel) ones.
The rest of the paper consists of three sections. The next one (number 2, below) solves the problem of finding a unitary operator that finds all m ‘chosen’ elements of the database of size n in one step.
The second one, Sect. 3, presents, as mentioned above, a generalization of the operator \(\varvec{A}\), in the sense that we consider \(k\ge 1\) mutually orthogonal versors \(\left| x_{j}\right\rangle ,\) \(j=1,\ldots ,k,\) that are used in the construction. We do not confine ourselves here to the finite-dimensional situation. The selected versors \(\left| x_{j}\right\rangle \) can be ‘chosen’ from any Hilbert space. We show how to briefly describe an operator of this kind and, what is more, we show how to compose such operators. The detailed analysis of the generalization of Grover’s algorithm is performed, as stated above, in Subsection 3.1.
The last one, Sect. 4, contains less important properties of the unitary operator analyzed in Sect. 2, as well as some less interesting or longer proofs.

2 The choice of all marked elements

In all formulae in the sequel by i, we will denote imaginary unit, i.e., \(i=\sqrt{-1}.\)
Let us consider a state space \(\mathbb {H}\) of dimension n. Let \(\left\{ \left| b_{k}\right\rangle \right\} _{k=1}^{n}\) constitute the n elements of its orthonormal basis. That is \(\left\langle b_{j}\right. \left| b_{k}\right\rangle =1\) if \(k=j\) and 0 otherwise.
We start with the following auxiliary problem. Let us imagine that a system is in the state \(\left| \varvec{y}\right\rangle = \sum _{k=1}^{n}\sigma _{k}\left| b_{k}\right\rangle \), where \(\sum _{k=1} ^{n}\left| \sigma _{k}\right| ^{2}=1\). Let us also assume that \(\sigma _{k} \ne 0\) for all \(k=1,\ldots ,n\), in order to avoid unnecessary confusion. Let us assume that some of the (pure) states of \(\left\{ \left| b_{k}\right\rangle \right\} _{k=1}^{n}\), say \(\{\left| b_{k}\right\rangle \}_{k\in S} \), where S is a subset of \(\{1,\ldots ,n\}\), are ‘chosen,’ meaning that there exists a unitary operation \(J_{S}\) (an oracle) that maps all \(\left| b_{j}\right\rangle \), \(j\in \{1,\ldots ,n\}\backslash S\), into themselves but \(\left| b_{k}\right\rangle \) into \(e^{i\varphi }\left| b_{k} \right\rangle \), for \(k\in S\), for some \(\varphi \in (0,2\pi )\). Assume that the cardinality of S is m. Let us further denote
$$\begin{aligned} \alpha= & {} \sqrt{\sum _{k\in S}\left| \sigma _{k}\right| ^{2}},~~\left| \varvec{y}_{S}\right\rangle =\frac{1}{\alpha }\sum _{k\in S}\sigma _{k}\left| b_{k}\right\rangle \\&\text { and }\left| \varvec{z}_{S}\right\rangle = \frac{1}{\sqrt{1-\alpha ^{2}}}\sum _{k\notin S}\sigma _{k}\left| b_{k}\right\rangle . \end{aligned}$$
Hence, \(\left| \varvec{y}_{S}\right\rangle \) and \(\left| \varvec{z} _{S}\right\rangle \) are mutually orthogonal and we have
$$\begin{aligned} \left| \varvec{y}\right\rangle =\alpha \left| \varvec{y}_{S}\right\rangle +\sqrt{1-\alpha ^{2} }\left| \varvec{z}_{S}\right\rangle . \end{aligned}$$
Let us also define the following versor:
$$\begin{aligned} \left| \varvec{v}_{S}\right\rangle&=\sum _{k\notin S}\sigma _{k}\left| b_{k}\right\rangle +e^{i\varphi }\sum _{k\in S}\sigma _{k}\left| b_{k}\right\rangle \\&=\sqrt{1-\alpha ^{2}}\left| \varvec{z}_{S}\right\rangle +e^{i\varphi }\alpha \left| \varvec{y}_{S}\right\rangle . \end{aligned}$$
Now, notice that the action of the oracle can be described in terms of unitary operations as an operation defined by an operator
$$\begin{aligned} J_{S}=\varvec{I}-(1-e^{i\varphi })\left| \varvec{y}_{S}\right\rangle \left\langle \varvec{y}_{S}\right. , \end{aligned}$$
(2.1)
that has the property that \(J_{S}\left| \varvec{y}\right\rangle =\left| \varvec{v}_{S}\right\rangle .\)
We want to find a unitary operator \(\mathcal {A}\) that maps every vector \(\left| \varvec{v}_{S}\right\rangle \) into vector \(\left| \varvec{y}_{_{S}}\right\rangle \) for every \(S\subset \left\{ 1,\ldots ,n\right\} \) of cardinality m. Or, in other words, using operator \(J_{S}\), that for every subset \(S\subset \left\{ 1,\ldots ,n\right\} \) of cardinality m
$$\begin{aligned} \mathcal {A}J_{S}\left| \varvec{y}\right\rangle = \left| \varvec{y}_{S}\right\rangle . \end{aligned}$$
(2.2)
Now let us set \(\varphi =2\arcsin \frac{1}{2\alpha }.\) By construction, we have \(\alpha \in (0,1).\)
Theorem 1
A unitary operator \(\mathcal {A}\) satisfying relationship (2.2) for every \(S\subset \left\{ 1,\ldots ,n\right\} \) exists only when \(\frac{1}{2}\le \alpha <1\), or equivalently \(\pi /3<\varphi \le \pi \); it is then unique and has the following form:
$$\begin{aligned} \mathcal {A=-}ie^{-i\varphi /2}(\varvec{I}-(1-e^{i\varphi })\left| \varvec{y}\right\rangle \left\langle \varvec{y}\right. ). \end{aligned}$$
(2.3)
In particular, we have:
if \(\alpha =1/2,\) then \(\varphi = \pi \) and
$$\begin{aligned} \mathcal {A}=-\varvec{I}+2\left| \varvec{y}\right\rangle \left\langle \varvec{y}\right| , \end{aligned}$$
if \(\alpha =1/\sqrt{3},\) then \(\varphi =2\pi /3\) and
$$\begin{aligned} \mathcal {A}= e^{-i5\pi /6}\left( \varvec{I}-\left( 1-e^{2i\pi /3}\right) \left| \varvec{y}\right\rangle \left\langle \varvec{y}\right. \right) , \end{aligned}$$
if \(\alpha =1/\sqrt{2},\) then \(\varphi =\pi /2\) and
$$\begin{aligned} \mathcal {A}= e^{-3i\pi /4}\left( \varvec{I}-\left( 1-e^{i\pi /2}\right) \left| \varvec{y}\right\rangle \left\langle \varvec{y}\right. \right) . \end{aligned}$$
Proof Is deferred to Sect. 4.

2.1 The miracle can happen only for \(n/m\le 4\)

Now, let us apply the above-mentioned result to the following setting that is, in fact, a generalization of the situation met in the famous Grover’s algorithm.
Let \(\varvec{\left| \varvec{1}\right\rangle }\) denote the vector having all entries equal to \(1/\sqrt{n}\). Hence, formally, \(\left| \varvec{1} \right\rangle =\frac{1}{\sqrt{n}}\sum _{j=1} ^{n}\left| b_{j}\right\rangle \). \(\varvec{\left| \varvec{1} \right\rangle }\) is a state vector (versor) of some quantum system.
Let us define vectors \(\left| V_{S}\right\rangle \), by the formula:
$$\begin{aligned} \left| V_{S}\right\rangle =\frac{1}{\sqrt{n}} \sum _{j\notin S}^{n}\left| b_{j}\right\rangle + \frac{e^{i\varphi }}{\sqrt{n}}\sum _{j\in S}\left| b_{j}\right\rangle . \end{aligned}$$
Let us denote, as before, \(\left| U_{S}\right\rangle =\frac{1}{\sqrt{n-m}}\sum _{j\notin S}^{n}\left| b_{j} \right\rangle \) and \(\left| E_{S}\right\rangle =\frac{1}{\sqrt{m}}\sum _{j\in S}\left| b_{j}\right\rangle \). Thus, utilizing notation from above we have \(\sigma _{k}=1/\sqrt{n},\) \(k=1,\ldots ,n\), \(\alpha =\sqrt{m/n}\) and so on.
Hence, we have
$$\begin{aligned} \left| V_{S}\right\rangle =\sqrt{\frac{n-m}{n}}\left| U_{S} \right\rangle +e^{i\varphi }\sqrt{\frac{m}{n}}\left| E_{S}\right\rangle . \end{aligned}$$
(2.4)
We have the following result.
Theorem 2
A unitary operator \(\mathcal {A}\) satisfying relationship (2.2) for every \(S\subset \left\{ 1,\ldots ,n\right\} \) of cardinality m exists only when \(\frac{n}{m}\le 4\) (equivalently, \(\frac{1}{2}\le \alpha <1\) or \(\pi /3<\varphi \le \pi \)); it is then unique and has the following form:
$$\begin{aligned} \mathcal {A=-}ie^{-i\varphi /2}\left( \varvec{I}-\left( 1-e^{i\varphi }\right) \left| \varvec{1}\right\rangle \left\langle \varvec{1}\right| \right) . \end{aligned}$$
(2.5)
In particular, we have:
if \(n/m=4,\) then \(\varphi =\pi \) and
$$\begin{aligned} \mathcal {A}=-\varvec{I}+2\left| \varvec{1}\right\rangle \left\langle \varvec{1}\right| , \end{aligned}$$
if \(n/m=3,\) then \(\varphi =2\pi /3\) and
$$\begin{aligned} \mathcal {A}= e^{-i5\pi /6}\left( \varvec{I}-\left( 1-e^{2i\pi /3}\right) \left| \varvec{1}\right\rangle \left\langle \varvec{1}\right| \right) , \end{aligned}$$
if \(n/m=2,\) then \(\varphi =\pi /2\) and
$$\begin{aligned} \mathcal {A}= e^{-3i\pi /4}\left( \varvec{I}-\left( 1-e^{i\pi /2}\right) \left| \varvec{1}\right\rangle \left\langle \varvec{1}\right| \right) . \end{aligned}$$
Proof
For the proof, let us notice that in the setting of Theorem 1 we take \(\sigma _{k}=1/\sqrt{n},\) \(k =1,\ldots ,n\). Then, consequently we have \(\alpha =\sqrt{m/n}\), \(\left| \varvec{y}\right\rangle \varvec{=\left| \varvec{1}\right\rangle ,}\) \(\left| \varvec{y}_{S}\right\rangle =\left| E_{S}\right\rangle ,\) \(\left| \varvec{z}_{S}\right\rangle =\left| U_{S}\right\rangle ,\) \(\left| \varvec{v} _{S}\right\rangle =\left| V_{S}\right\rangle .\) Then, the result follows immediately. \(\square \)
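A hedged numerical illustration of Theorem 2 (our sketch; the oracle is taken here as multiplying the chosen basis states by \(e^{i\varphi }\)): for \(n/m=4,3,2\) and \(\varphi =2\arcsin \sqrt{n/(4m)}\), the operator (2.5) recovers \(\left| E_{S}\right\rangle \) from \(J_{S}\left| \varvec{1}\right\rangle \) in one step.

```python
# One-step recovery of |E_S> for n/m = 4, 3, 2 with phi = 2*arcsin(sqrt(n/(4m))).
import numpy as np

def one_step_check(n, m):
    phi = 2 * np.arcsin(np.sqrt(n / (4 * m)))
    one = np.full(n, 1 / np.sqrt(n), dtype=complex)            # |1>
    S = set(range(m))                                          # any m chosen indices
    J = np.diag([np.exp(1j * phi) if k in S else 1.0 for k in range(n)])   # oracle
    A = -1j * np.exp(-1j * phi / 2) * (
        np.eye(n) - (1 - np.exp(1j * phi)) * np.outer(one, one.conj()))    # Formula (2.5)
    E_S = np.zeros(n, dtype=complex)
    E_S[list(S)] = 1 / np.sqrt(m)                              # |E_S>
    return np.allclose(A @ (J @ one), E_S)

print(one_step_check(4, 1), one_step_check(8, 2),              # n/m = 4
      one_step_check(3, 1), one_step_check(2, 1))              # n/m = 3 and n/m = 2
# True True True True
```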
Remark 1
Let us note that the assertion of the theorem states that any proportion \(\frac{m}{n}\) of chosen elements of the database of the size n can be found in just one step provided \(m/n\ge 1/4\). Moreover, the operator \(\mathcal {A}\) must have the form of a generalized reflection ‘in the direction \(\left| \varvec{1}\right\rangle \),’ applied after the oracle. Since the action of the oracle can also be described as a generalized reflection (compare Formula (2.1)), we see that any algorithm for finding a set of ‘chosen’ elements has to have the form of a composition of two reflections. Hence, we need a new definition.
Definition 1
Let \(\left| x\right\rangle \) be any versor in \(\mathbb {H}\) and \(\varphi \) any real number from the interval \([0,2\pi ).\) Then, the operator
$$\begin{aligned} R(\left| x\right\rangle ,\varphi )= & {} \varvec{I-}(1-e^{i\varphi })\left| x\right\rangle \left\langle x\right| \\= & {} \varvec{I+}2ie^{i\varphi /2}\sin (\varphi /2)\left| x\right\rangle \left\langle x\right| , \end{aligned}$$
will be called a generalized reflection (in brief reflection).
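A minimal numerical sketch (ours) of Definition 1: \(R(\left| x\right\rangle ,\varphi )\) is unitary, multiplies \(\left| x\right\rangle \) by \(e^{i\varphi }\), and leaves the orthogonal complement of \(\left| x\right\rangle \) untouched.

```python
# Generalized reflection R(|x>, phi) = I - (1 - e^{i phi}) |x><x| and its basic properties.
import numpy as np

def reflection(x, phi):
    x = np.asarray(x, dtype=complex)
    x = x / np.linalg.norm(x)                                   # |x> must be a versor
    return np.eye(len(x)) - (1 - np.exp(1j * phi)) * np.outer(x, x.conj())

rng = np.random.default_rng(0)
x = rng.normal(size=5) + 1j * rng.normal(size=5)
x = x / np.linalg.norm(x)
phi = 2 * np.pi / 3
R = reflection(x, phi)

assert np.allclose(R.conj().T @ R, np.eye(5))                   # R is unitary
assert np.allclose(R @ x, np.exp(1j * phi) * x)                 # eigenvalue e^{i phi} on |x>
```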
Remark 2
Notice also that, if we could subtract (in the sense of set theory but, of course, with the help of some unitary operator) from the whole database the set of ‘chosen’ elements, then theoretically, by the proper choice of the phase \(\varphi \), we could identify just one chosen element in two steps regardless of the size of the base. Namely, first finding the number \(1/\alpha ^{2}=\frac{n}{n-1}\) and then applying an oracle with the phase change \(\varphi =2\arcsin (\frac{1}{2\alpha }).\) Now, following the argument preceding the theorem, we can find in just one step all elements that are not ‘chosen.’ Then, subtracting from the whole base the whole set of ‘unchosen’ elements, we can find the ‘chosen’ one. Of course, this is only theory, since the function \(2\arcsin (\frac{1}{2\alpha })\) is highly nonlinear, so one cannot precisely set the phase \(\varphi \); another problem is with the oracle marking all not-chosen elements by the precise phase change \(\varphi .\)

3 General unitary operator generated by the ‘chosen’ elements of the base

Keeping in mind that the versor \(\left| \varvec{1}\right\rangle \) is a linear combination of two orthogonal versors \(\left| U_{S}\right\rangle \) and \(\left| E_{S}\right\rangle \), notice that, in fact, we deal with the composition of operators
$$\begin{aligned} \mathcal {G}&\mathcal {=}\mathcal {A}J_{S}=\mathcal {-}ie^{-i\varphi /2}\left( \varvec{I}-\left( 1-e^{i\varphi }\right) \left( \sqrt{1-\alpha ^{2}}\left| U_{S} \right\rangle +\alpha \left| E_{S}\right\rangle \right) \right. \nonumber \\&\quad \left. \times \left( \sqrt{1-\alpha ^{2}}\left\langle U_{S}\right. +\alpha \left\langle E_{S}\right. \right) \right) \left( \varvec{I}-\left( 1-e^{i\varphi }\right) \left| E_{S}\right\rangle \left\langle E_{S}\right. \right) , \end{aligned}$$
(3.1)
which, after some algebra, turns out to be of the general form (3.2) considered below. So first let us analyze unitary operators of the form (3.2).
Let k be a natural number less than or equal to the dimension of \(\mathbb {H}.\) Finally, let \(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k}\) be a collection of orthonormal vectors (versors) of \(\mathbb {H}\). Let us define operator \(\varvec{A}_{k}:{\mathbb {H} }\rightarrow {\mathbb {H}}\) in the following way:
$$\begin{aligned} \varvec{A}_{k}=\varvec{I}-\sum _{j=1}^{k}\beta _{j}\left| x_{j}\right\rangle \left\langle x_{j}\right| -\sum _{j\ne l}^{k}\alpha _{jl}\left| x_{j}\right\rangle \left\langle x_{l}\right| , \end{aligned}$$
(3.2)
where \(\beta _{j},\alpha _{jl}\in \mathbb {C}\), \(j,l= 1,\ldots ,k\) and \(\varvec{I}\) denotes an identity operator on \(\mathbb {H}\).
We will develop, now, tools to deal with the operators given by (3.2).
Let us denote by \(\mathbb {H}_{k}\) the subspace of \(\mathbb {H}\) that is spanned by the elements \(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k}\). Further, let \(\mathbb {U}_{k}\) denote the orthogonal complement of \(\mathbb {H}_{k}\) in \(\mathbb {H}\). Now we have \(\mathbb {H} =\mathbb {H}_{k}\oplus \mathbb {U}_{k}\). Notice that the operator \(\sum _{j=1}^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right| \) is the projection on \(\mathbb {H}_{k}\). Hence, \(\varvec{I}-\sum _{j=1} ^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right| \) is the projection on \(\mathbb {U}_{k}\). Now, let us also consider a finite-dimensional, involutive algebra over the complex numbers generated by the column vectors \(\varvec{y}_{j}\) having the element \(\left\langle x_{j}\right| \) as their j-th entry and 0 as all the others. Let the involution \(^{*}\) be defined by the rule that \(\varvec{y}_{j}^{*}\) is a row vector having all entries equal to zero and the j-th equal to \(\left| x_{j}\right\rangle .\) Let \(\mathbb {K}_{k}\) denote the space spanned by the elements \(\varvec{y}_{j},\) \(j=1,\ldots ,k.\) Obviously and naturally, \(\mathbb {K}_{k}\) is isomorphic with the subspace \(\mathbb {H}_{k}\) of the space \(\mathbb {H}\). Let us denote \(\varvec{Y=} \sum _{j=1}^{k}\varvec{y}_{j}.\) Notice that \(\varvec{Y}^{*} \varvec{Y=}\sum _{j=1}^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right| \) and \(\varvec{YY}^{*} =\varvec{I}_{k}\), the identity matrix in \(\mathbb {K}_{k}\).
Remark 3
Further, let us define the matrix \(\varvec{G}_{k}\) such that its (j, l)-th entry is given by
$$\begin{aligned} \left( \varvec{G}_{k}\right) _{jl}=\left\{ \begin{array} [c]{ccc} \left( 1-\beta _{j}\right) &{} if &{} j=l\\ -\alpha _{jl} &{} if &{} j\ne l \end{array} \right. . \end{aligned}$$
(3.3)
Notice that in our notation \(\varvec{Y}^{*}=\left[ \begin{array} [c]{ccccc} \left| x_{1}\right\rangle&\ldots&\left| x_{j}\right\rangle&\ldots&\left| x_{k}\right\rangle \end{array} \right] \) and that we have the following equality:
$$\begin{aligned} \varvec{Y}^{*}\varvec{G}_{k}\varvec{Y=}\sum _{j=1} ^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right| -\sum _{j=1}^{k}\beta _{j}\left| x_{j}\right\rangle \left\langle x_{j}\right| -\sum _{j\ne l}^{k}\alpha _{jl}\left| x_{j}\right\rangle \left\langle x_{l}\right| . \end{aligned}$$
Hence, operator \(\varvec{A}_{k}\) can be decomposed as follows:
$$\begin{aligned} \varvec{A}_{k}=\varvec{I}-\sum _{j=1}^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right| +\varvec{Y}^{*} \varvec{G}_{k}\varvec{Y.} \end{aligned}$$
(3.4)
As noticed above, the operator \(\varvec{I}-\sum _{j=1}^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right| \) is the projection on the space \(\mathbb {U}_{k}\). Notice also that the operator \(\varvec{A}_{k}\) is completely defined by the set \(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k}\) and the matrix \(\varvec{G}_{k}.\) Matrix \(\varvec{G}_{k}\) will be called an associated matrix of an operator \(\varvec{A}_{k}\). We will use simplified notation \(O(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k})\) to denote the operator of the form (3.2) defined by the set \(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k} \) and the matrix \(\varvec{G}_{k}.\)
We have the following result.
Theorem 3
Let us consider two operators \(O_{1}(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k}^{(1)})\) and \(O_{2}(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k}^{(2)})\). Then their composition, i.e., the operator \(O_{1}(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k}^{(1)})O_{2}(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k}^{(2)})\), is of the same class and is equal to \(O_{3}(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k}^{(1)}\varvec{G}_{k}^{(2)})\). That is, the composition has its associated matrix equal to the product of the matrices \(\varvec{G}_{k}^{(1)}\) and \(\varvec{G}_{k}^{(2)}\).
Proof
Is deferred to Sect. 4. \(\square \)
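A numerical check of Theorem 3 (our sketch, with illustrative names): composing two operators of the form (3.4) built on the same versors multiplies their associated matrices.

```python
# Composition of operators of the form (3.4) corresponds to the product of associated matrices.
import numpy as np

rng = np.random.default_rng(1)
dim, k = 6, 3
X, _ = np.linalg.qr(rng.normal(size=(dim, k)) + 1j * rng.normal(size=(dim, k)))
P = X @ X.conj().T                              # projection onto H_k = span{|x_j>}

def operator_from(G):
    # A = I - sum_j |x_j><x_j| + Y* G Y, with the versors |x_j> as the columns of X
    return np.eye(dim) - P + X @ G @ X.conj().T

G1 = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
G2 = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))

assert np.allclose(operator_from(G1) @ operator_from(G2), operator_from(G1 @ G2))
```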
Theorem 4
Let us consider an operator \(O(\left\{ \left| x_{j} \right\rangle \right\} _{j=1}^{k},\varvec{G}_{k}).\) The following statements are equivalent:
a) The constants \(\beta _{j},\alpha _{jl}\in \mathbb {C}\), \(l,j =1,\ldots ,k\) are selected in such a way that
$$\begin{aligned} -2{\text {Re}}\beta _{j}+\left| \beta _{j}\right| ^{2}+\sum _{j\ne l}\left| \alpha _{jl}\right| ^{2}&=0, \end{aligned}$$
(3.5)
$$\begin{aligned} -\left( \alpha _{jl}+\alpha _{lj}^{*}\right) +\beta _{j}^{*}\alpha _{jl}+\alpha _{lj}^{*}\beta _{l}+\sum _{m\ne j,m\ne l}\alpha _{jm}^{*}\alpha _{ml}&=0, \end{aligned}$$
(3.6)
for \(l,j=1,\ldots ,k.\)
b) Matrix \(\varvec{G}_{k}\) associated with O (defined by Formula (3.3)) is unitary.
If one of these conditions is satisfied, then the operator \(\varvec{A}_{k}\) is unitary.
Proof
Is deferred to Sect. 4. \(\square \)
Remark 4
It can be useful to view \(\varvec{A}_{k}\left| y\right\rangle \) in the following way
$$\begin{aligned} \varvec{A}_{k}\left| y\right\rangle =\left| y\right\rangle -\sum _{j=1}^{k}\chi _{j}(y)\left| x_{j}\right\rangle , \end{aligned}$$
where
$$\begin{aligned} \chi _{j}(y)=\left( \beta _{j}\left\langle x_{j}\right. \left| y\right\rangle +\sum _{l=1,l\ne j}^{k}\alpha _{jl}\left\langle x_{l}\right. \left| y\right\rangle \right) . \end{aligned}$$
(3.7)
Proposition 1
k eigenvalues of the operator \(\varvec{A}_{k} = O(\left\{ \left| x_{j}\right\rangle \right\} _{j=1} ^{k},\varvec{G}_{k})\) are equal to the eigenvalues of the \(k\times k\) unitary matrix \(\varvec{G}_{k}\) associated with \(\varvec{A}_{k}.\) Moreover, the first k eigenvectors of the operator \(\varvec{A}_{k}\) are linear combinations of the vectors \(\left| x_{j}\right\rangle .\) More precisely, suppose that the vector \(\varvec{\gamma }^{T}\varvec{=(}\gamma _{1} ,\ldots ,\gamma _{k})\) is the eigenvector of the matrix \(\varvec{G}_{k}\) related to the eigenvalue \(\lambda .\) Then, \(\varvec{Y}^{*}\varvec{\gamma =}\) \(\sum _{j=1}^{k}\gamma _{j}\left| x_{j} \right\rangle \) is the eigenvector of the operator \(\varvec{A}_{k}\) related to the eigenvalue \(\lambda \).
Proof
We have
$$\begin{aligned} \varvec{A}_{k}\sum _{j=1}^{k}\gamma _{j}\left| x_{j}\right\rangle&=(\varvec{I}-\sum _{j=1}^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right| +\varvec{Y}^{*}\varvec{G}_{k}\varvec{Y)Y}^{*}\varvec{\gamma }\\&=\varvec{Y}^{*}\varvec{G}_{k}\varvec{YY}^{*}\varvec{\gamma =Y}^{*}\varvec{G}_{k}\varvec{\gamma =}\lambda \varvec{Y}^{*}\varvec{\gamma .} \end{aligned}$$
\(\square \)
In particular, for any \(n\ge 1\) and sets of orthonormal vectors \(\{\left| x_{j}\right\rangle \}_{j=1}^{n}\) and n reals \(\left\{ \alpha _{j}\right\} _{j=1}^{n}\) such that \(\alpha _{j}\in (-\pi ,\pi ]\) the operator
$$\begin{aligned} \varvec{A}_{n}=\varvec{I}-\sum _{j=1}^{n}\left( 1-e^{i\alpha _{j}}\right) \left| x_{j}\right\rangle \left\langle x_{j}\right| , \end{aligned}$$
is unitary. Moreover, its first n eigenvectors are vectors \(\left| x_{j}\right\rangle ,\) \(j=1,\ldots ,n\) and eigenvalues are numbers \(e^{i\alpha _{j}}\).
Corollary 1
Let \(\varvec{A}_{k}\) be the unitary operator defined by vectors \(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k}\) given by Formula (3.2) with associated matrix \(\varvec{G}_{k}\) according to (3.3). For any natural m, we have
$$\begin{aligned} \varvec{A}_{k}^{m}=\varvec{I}-\sum _{j=1}^{k}\beta _{j}^{(m)}\left| x_{j}\right\rangle \left\langle x_{j}\right| -\sum _{l\ne j}^{k} \alpha _{jl}^{(m)}\left| x_{j}\right\rangle \left\langle x_{l}\right| , \end{aligned}$$
where the matrix \(\varvec{G}_{k}^{(m)}\) defined in the following way
$$\begin{aligned} \left( \varvec{G}_{k}^{(m)}\right) _{jl}=\left\{ \begin{array} [c]{ccc} \left( 1-\beta _{j}^{(m)}\right) &{} if &{} j=l\\ -\alpha _{jl}^{(m)} &{} if &{} j\ne l \end{array} \right. \end{aligned}$$
is the m-th power of the matrix \(\varvec{G}_{k}\) defined by (3.3) or equivalently
$$\begin{aligned} \varvec{A}_{k}^{m}=\varvec{I}-\sum _{j=1}^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right. +\varvec{Y}^{*}\varvec{G} _{k}^{m}\varvec{Y.} \end{aligned}$$
Proof
Follows directly from Theorem 3 and Formula (3.4). \(\square \)
Corollary 2
Let \(\varvec{A}_{k}= O(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k})\) be given by Formula (3.2) with the associated matrix \(\varvec{G}_{k}\) according to (3.3), and let us consider the vector \(\left| \varvec{t}\right\rangle =\sum _{j=1}^{k}\chi _{j}\left| \varvec{x} _{j}\right\rangle =\varvec{Y}^{*}\varvec{\chi }\), where \(\varvec{\chi =}\left[ \begin{array} [c]{c} \chi _{1}\\ \ldots \\ \chi _{k} \end{array} \right] \) and \(\chi _{j}\in \mathbb {C}\), then
$$\begin{aligned} \varvec{A}_{k}\left| t\right\rangle =\varvec{Y} ^{*}\varvec{G}_{k}\varvec{\chi .} \end{aligned}$$
In particular,
$$\begin{aligned} \varvec{A}_{k}^{m}\left| t\right\rangle =\varvec{Y}^{*}\varvec{B} _{k}\varvec{\Lambda }_{k}^{m}\varvec{B}_{k}^{-1}\varvec{\chi ,} \end{aligned}$$
where \(\varvec{B}_{k}\) is a \(k\times k\) matrix composed of eigenvectors of the matrix \(\varvec{G}_{k}\) (i.e., for which we have \(\varvec{B}_{k} ^{-1}\varvec{G}_{k}\varvec{B}_{k}=\varvec{\Lambda }_{k})\) and \(\varvec{\Lambda }_{k}\) is a diagonal matrix with eigenvalues of \(\varvec{G}_{k}\) on its diagonal.
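The following sketch (ours) illustrates Corollary 2: for a vector \(\left| \varvec{t}\right\rangle \) lying in \(\mathbb {H}_{k}\), iterating \(\varvec{A}_{k}\) only requires powers of the small \(k\times k\) matrix \(\varvec{G}_{k}\), computed here through its eigendecomposition.

```python
# A_k^m |t> computed via the eigendecomposition of the associated matrix G_k.
import numpy as np

rng = np.random.default_rng(2)
dim, k, m = 8, 2, 5
X, _ = np.linalg.qr(rng.normal(size=(dim, k)) + 1j * rng.normal(size=(dim, k)))
Gk, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))   # a unitary G_k

A = np.eye(dim) - X @ X.conj().T + X @ Gk @ X.conj().T     # A_k, Formula (3.4)
chi = rng.normal(size=k) + 1j * rng.normal(size=k)
t = X @ chi                                                # |t> = Y* chi

lam, B = np.linalg.eig(Gk)                                 # G_k = B diag(lam) B^{-1}
via_eig = X @ (B @ np.diag(lam ** m) @ np.linalg.inv(B) @ chi)
direct = np.linalg.matrix_power(A, m) @ t
assert np.allclose(via_eig, direct)
```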

3.1 Analysis of the generalization of the Grover’s algorithm

Let us return to the generalization of the Grover’s algorithm presented by Formula (3.1). As remarked above, it is a composition of two reflections. So to analyze it, we will make use of Theorem 3 in order to find the associated matrix of the operator \(\mathcal {A}J_{S}.\) Let us select the vector \(\varvec{Y}^{*}\) to be \((\left| U_{S}\right\rangle ,\left| E_{S}\right\rangle ).\) So the operator
$$\begin{aligned} \varvec{I}-\left( 1-e^{i\varphi }\right) \left( \sqrt{1-\alpha ^{2}}\left| U_{S}\right\rangle +\alpha \left| E_{S}\right\rangle \right) \left( \sqrt{1-\alpha ^{2}}\langle U_{S}|+\alpha \langle E_{S}|\right) \end{aligned}$$
has associated matrix equal to (after some elementary algebra)
$$\begin{aligned} \varvec{G}_{1}=\left[ \begin{array} [c]{cc} e^{i\varphi }+\alpha ^{2}\left( 1-e^{i\varphi }\right) &{} -\left( 1-e^{i\varphi }\right) \alpha \sqrt{1-\alpha ^{2}}\\ -\left( 1-e^{i\varphi }\right) \alpha \sqrt{1-\alpha ^{2}} &{} 1-\left( 1-e^{i\varphi }\right) \alpha ^{2} \end{array} \right] , \end{aligned}$$
while the operator
$$\begin{aligned} \left( \varvec{I}-\left( 1-e^{i\varphi }\right) \left| E_{S}\right\rangle \langle E_{S}|\right) , \end{aligned}$$
has associated matrix given by (also after some elementary algebra)
$$\begin{aligned} \varvec{G}_{2}=\left[ \begin{array} [c]{cc} 1 &{} 0\\ 0 &{} e^{i\varphi } \end{array} \right] . \end{aligned}$$
Theorem 5
Operator \(\mathcal {G\varvec{=}A}J_{S}\) has the following associated matrix:
$$\begin{aligned} \varvec{G}_{S}=\left[ \begin{array} [c]{cc} -ie^{i\varphi /2}-2\alpha ^{2}\sin (\varphi /2) &{} 2\alpha \sqrt{1-\alpha ^{2} }e^{i\varphi }\sin (\varphi /2)\\ 2\alpha \sqrt{1-\alpha ^{2}}\sin (\varphi /2) &{} -ie^{i\varphi /2}+2\alpha ^{2}e^{i\varphi }\sin (\varphi /2) \end{array} \right] . \end{aligned}$$
(a) \(\det (\varvec{G}_{S})=-e^{i\varphi }\), \({\text {tr}}(\varvec{G}_{S})=-2ie^{i\varphi /2}(1-2\alpha ^{2}\sin ^{2}(\varphi /2))\) and the characteristic polynomial of \(\varvec{G}_{S}\) is given by
$$\begin{aligned} \lambda ^{2}+2\lambda ie^{i\varphi /2}\left( 1-2\alpha ^{2}\sin ^{2}\left( \varphi /2\right) \right) -e^{i\varphi }=0. \end{aligned}$$
(b) Consequently, we get two eigenvalues of the matrix \(\varvec{G}_{S}\)
$$\begin{aligned} \lambda _{1}=-ie^{i\varphi /2}e^{i\theta }, \lambda _{2}=-ie^{i\varphi /2}e^{-i\theta }. \end{aligned}$$
(3.8)
where we denoted by \(\theta (\alpha ,\varphi )\in [0,\pi /2]\) (or simply \(\theta \) if it is not confusing) the positive angle such that
$$\begin{aligned} e^{i\theta }=(1-2\alpha ^{2}\sin ^{2}(\varphi /2))+2i\alpha \sin (\varphi /2)\sqrt{1-\alpha ^{2}\sin ^{2}(\varphi /2)}. \end{aligned}$$
(3.9)
(c) Eigenversors of \(\varvec{G}_{S}\) are the columns of the following matrix
$$\begin{aligned} \mathcal {B=}\left[ \begin{array} [c]{cc} e^{i\varphi /2}\sin \gamma &{} -e^{i\varphi /2}\cos \gamma \\ \cos \gamma &{} \sin \gamma \end{array} \right] . \end{aligned}$$
(3.10)
where angle \(\gamma \) is defined by
$$\begin{aligned} e^{i\gamma }=\frac{1}{\sqrt{b^{2}+1}}+ i\frac{b}{\sqrt{b^{2}+1}}, \end{aligned}$$
and where we denoted
$$\begin{aligned} b=\frac{\sqrt{1-\alpha ^{2}\sin ^{2}(\varphi /2)} -\alpha \cos (\varphi /2)}{\sqrt{1-\alpha ^{2}}}. \end{aligned}$$
(3.11)
Proof
Is deferred to Sect. 4. \(\square \)
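A small numerical cross-check of Theorem 5 (our sketch, with \(\alpha \) and \(\varphi \) chosen arbitrarily): we build the associated matrix of \(\mathcal {G}=\mathcal {A}J_{S}\) directly from the two reflections and compare its eigenvalues with Formulae (3.8)-(3.9).

```python
# Eigenvalues of the associated matrix of G = A J_S versus Formulae (3.8)-(3.9).
import numpy as np

alpha, phi = 0.35, 2.1
s = np.sin(phi / 2)
w = np.array([np.sqrt(1 - alpha**2), alpha], dtype=complex)         # |1> in the (U_S, E_S) basis
G1 = np.eye(2) - (1 - np.exp(1j * phi)) * np.outer(w, w.conj())     # reflection about |1>
G2 = np.diag([1.0, np.exp(1j * phi)])                               # the oracle
GS = -1j * np.exp(-1j * phi / 2) * (G1 @ G2)                        # associated matrix of G

theta = np.angle((1 - 2 * alpha**2 * s**2)
                 + 2j * alpha * s * np.sqrt(1 - alpha**2 * s**2))   # Formula (3.9)
predicted = -1j * np.exp(1j * phi / 2) * np.exp(np.array([1j, -1j]) * theta)   # Formula (3.8)
computed = np.linalg.eigvals(GS)
assert np.allclose(np.sort_complex(predicted), np.sort_complex(computed))
```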
Remark 5
Notice that both extreme values of \(\theta \) are reached, since for \(\varphi =0\) we have \(\theta =0,\) while for \(\alpha \) and \(\varphi \) satisfying \(\sin (\varphi /2)=\frac{\sqrt{2}}{2\alpha }\) we have \(\theta =\pi /2.\)
As a corollary, we get the following expression for the result of k iterations of the operator \(\mathcal {G}\) acting on the versor \(\left| \varvec{1}\right\rangle .\)
Corollary 3
$$\begin{aligned}&(\mathcal {G})^{k}\left| \varvec{1}\right\rangle =\left( \left| U_{S}\right\rangle ,\left| E_{S}\right\rangle \right) \end{aligned}$$
(3.12)
$$\begin{aligned}&\quad \times \left[ \begin{array} [c]{c} \left( -ie^{i\varphi /2}\right) ^{k}\left( \cos \delta \left( -i\cos (2\gamma )\sin (k\theta )+\cos (k\theta )\right) +ie^{i\varphi /2}\sin (k\theta )\sin (2\gamma )\sin \delta \right) \\ \left( -ie^{i\varphi /2}\right) ^{k}\left( ie^{-i\varphi /2}\sin (k\theta )\sin (2\gamma )\cos \delta +\sin \delta \left( i\cos (2\gamma )\sin (k\theta )+\cos (k\theta )\right) \right) \end{array} \right] , \end{aligned}$$
(3.13)
where we denoted for simplicity by \(\delta \) the angle defined by the relationship
$$\begin{aligned} e^{i\delta }=\sqrt{1-\alpha ^{2}}+i\alpha . \end{aligned}$$
(3.14)
Consequently,
$$\begin{aligned} \Pr \left( (\mathcal {G})^{k}\left| \varvec{1}\right\rangle =E_{S}\right) =\left| ie^{-i\varphi /2}\sin (k\theta )\sin 2\gamma \cos \delta +\sin \delta \left( i\cos (2\gamma )\sin (k\theta )+\cos (k\theta )\right) \right| ^{2}. \end{aligned}$$
(3.15)
Proof
Is deferred to Sect. 4. \(\square \)
One can simplify this expression further by noticing that
$$\begin{aligned} \sin (2\gamma )&=\frac{2b}{b^{2}+1}=\sqrt{\frac{1-\alpha ^{2}}{1-\alpha ^{2}\sin ^{2}(\varphi /2)}},\\ \cos (2\gamma )&=\frac{1-b^{2}}{b^{2}+1}=\frac{\alpha \cos (\varphi /2)}{\sqrt{1-\alpha ^{2}\sin ^{2}(\varphi /2)}}, \end{aligned}$$
and recalling definitions of the Chebyshev polynomials of the first and second kind, namely that:
$$\begin{aligned} T_{m}(\cos x)=\cos (mx)~\text {and~}U_{m}(\cos x)=\frac{\sin ((m+1)x)}{\sin x}. \end{aligned}$$
Thus, we could express \(\Pr ((\mathcal {G})^{k}\left| \varvec{1}\right\rangle =E_{S})\) as a function of \(\varphi ,\) \(\alpha \) and k only, and study what is the smallest number of iterates k that ensures, for given \(\alpha \) and \(\varphi ,\) that this probability is, say, greater than .999. We could also construct tables of \(\Pr ((\mathcal {G})^{k}\left| \varvec{1}\right\rangle =E_{S})\) for given \(\varphi ,\) \(\alpha \) and k.
Since the intention of this paper is to give a theoretical outlook on Grover’s algorithm, we will skip those calculations. We will make use of Chebyshev polynomials in the following subsection, where we apply the obtained formulae to the classical Grover’s algorithm.
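Although we skip these calculations here, a minimal sketch of how they could be carried out (ours; it just evaluates Formula (3.15) and scans k) is given below. For \(\varphi =\pi \) and \(\alpha =1/\sqrt{n}\) it reproduces the classical iteration count of order \(\frac{\pi }{4}\sqrt{n}\).

```python
# Success probability of Formula (3.15) and the smallest k exceeding a given threshold.
import numpy as np

def success_probability(alpha, phi, k):
    s = np.sin(phi / 2)
    r = np.sqrt(1 - alpha**2 * s**2)
    theta = np.angle((1 - 2 * alpha**2 * s**2) + 2j * alpha * s * r)   # Formula (3.9)
    sin2g = np.sqrt(1 - alpha**2) / r                                  # sin(2 gamma)
    cos2g = alpha * np.cos(phi / 2) / r                                # cos(2 gamma)
    delta = np.arcsin(alpha)                                           # Formula (3.14)
    amp = (1j * np.exp(-1j * phi / 2) * np.sin(k * theta) * sin2g * np.cos(delta)
           + np.sin(delta) * (1j * cos2g * np.sin(k * theta) + np.cos(k * theta)))
    return abs(amp) ** 2                                               # Formula (3.15)

alpha, phi, goal = 0.1, np.pi, 0.99                  # e.g. m/n = alpha**2 = 1/100, classical phase
k = next(k for k in range(1, 10_000) if success_probability(alpha, phi, k) >= goal)
print(k, success_probability(alpha, phi, k))         # 7 0.995...
```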
Remark 6
Notice that the properties of the matrix \(\varvec{G}_{S}\), essential in finding the desired properties of the output versor, depend only on parameters \(\alpha \) and \(\varphi \). Hence, we can treat the output of, say, k iterates of the operator \(\varvec{A}_{n}\) as the input of the similar operation with the other oracle to refine the search. Just as described in the introductory section.

3.1.1 Analysis of the classical Grover’s algorithm

Returning to the first section and the definition of the Grover’s algorithm (see, e.g., [4] or [6]), we see that we deal with the classical case when \(\varphi =\pi .\) Then, notice that parameter b (defined by (3.11)) is equal to 1, which means that the angle \(\gamma =\pi /4.\) More precisely, in the classical Grover’s algorithm \(\alpha \) was equal to \(1/\sqrt{n},\) where n is the size of the database. We will extend the definition of the Grover’s algorithm by assuming that \(\alpha \in (0,\frac{1}{2}].\) In this subsection, we are going to derive known properties of the Grover’s algorithm from the general considerations and formulae presented above.
Following the considerations above, we see that the associated matrix \(\varvec{G}_{S}\) of the operator \(\mathcal {G}\) has the form:
$$\begin{aligned} \varvec{G}_{S}=\left[ \begin{array} [c]{cc} 1-2\alpha ^{2} &{} -2\alpha \sqrt{1-\alpha ^{2}}\\ 2\alpha \sqrt{1-\alpha ^{2}} &{} 1-2\alpha ^{2} \end{array} \right] . \end{aligned}$$
The eigenvalues of the matrix \(\varvec{G}_{S}\) are (following (3.8))
$$\begin{aligned} \xi _{1}&= i\frac{n-2m+2i\sqrt{m(n-m)}}{n} \overset{df}{=}ie^{i\theta _{m,n}}\text {, }\\&\text {and }\xi _{2} = i\frac{n-2m-2i\sqrt{m(n-m)}}{n}=ie^{-i\theta _{m,n}} \end{aligned}$$
and the related eigenvectors compose the following matrix \(\mathcal {B}\) is equal to (by (3.10))
$$\begin{aligned} \mathcal {B=}\left[ \begin{array} [c]{cc} \frac{i}{\sqrt{2}} &{} -\frac{i}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} &{} \frac{1}{\sqrt{2}} \end{array} \right] . \end{aligned}$$
Although almost all the results of the following assertion can be derived directly from the results of the previous section, for the sake of clarity and importance we will present them in the form of the theorem.
Theorem 6
$$\begin{aligned} \mathcal {G}^{k}\left| \varvec{1}\right\rangle= & {} \left( i\right) ^{k}\left( \left| U_{S}\right\rangle \cos \left( k\theta _{m,n}+\delta \right) +\left| E_{S}\right\rangle \sin \left( k\theta _{m,n}+\delta \right) \right) ,\\ \Pr \left( \Gamma _{S}^{k}\left| Y_{S}\right\rangle = E_{S}\right)= & {} \left( \sin \left( k\theta _{m,n}+\delta \right) \right) ^{2}. \end{aligned}$$
where \(\delta \) is defined by (3.14). We have also
$$\begin{aligned} \Pr \left( \Gamma _{S}^{k}\left| Y_{S}\right\rangle = E_{S}\right) \overset{df}{=}F_{k}(\alpha )=(\alpha T_{2k}(\alpha )-\left( 1-\alpha ^{2}\right) U_{2k-1}(\alpha ))^{2}, \end{aligned}$$
(3.16)
where \(T_{n}\) and \(U_{n}\) denote Chebyshev polynomials of the first and second kind, respectively. Besides, for small values of \(\alpha \), for \(l\ge 0\) and \(k\cong \frac{\pi (2l+1)}{4\alpha }-1\), we get \(\Pr (\Gamma _{S}^{k}\left| Y_{S}\right\rangle = E_{S})\cong 1.\)
In the simplification of Formula (3.16), we have used the following well-known properties of Chebyshev polynomials: for \(n\ge 0\) and \(x\in \mathbb {R}:\)
$$\begin{aligned} T_{n}(-x)&=(-1)^{n}T_{n}(x),~~T_{n}\left( 2x^{2}-1\right) =T_{2n}(x),\\ 2xU_{n}\left( 1-2x^{2}\right)&=(-1)^{n}U_{2n+1}(x). \end{aligned}$$
Corollary 4
Now notice that to have an element of the set S ‘chosen’ for sure, we have to have \(\left| \sin (k\theta _{m,n}+\delta )\right| =1,\) that is
$$\begin{aligned} k\theta _{m,n}+\delta = \frac{\pi }{2}+l\pi , \end{aligned}$$
(3.17)
for some integers k and l. The simplest case is when \(k=1\) and \( l=0.\) Then, this equation reduces to \(\theta _{m,n}+\delta =\pi /2\), i.e., \(\cos \theta _{m,n}=\sin \delta \), which leads to the equation
$$\begin{aligned} 1-2\alpha ^{2}=\alpha . \end{aligned}$$
The only positive solution of this equation is \(\alpha =1/2,\) which refers to the well-known property of the Grover’s algorithm that if \(n/m=4\) then all ‘chosen’ elements can be chosen in one step. Thus, quantum computers can find all elements of the set consisting of the quarter of the population in one step! Of course the state collapse will produce only one state of the set S. However, the system is in \(E_{S}\) after just one iteration.
Remark 7
We have \(\forall k\ge 1:F_{k}\left( \frac{\sqrt{2}}{2}\right) =1/2\). In other words, the set of the size of half of the population can be identified by algorithm \(\Gamma _{S}\) only with probability 1/2. However, as it follows from the first part of the paper, we can obtain all ‘chosen’ elements in just one step, provided that the oracle marked the chosen elements by the phase change \(\pi /2\), not \(\pi \) as in the case of the classical Grover’s algorithm.
Further, using Mathematica, we get \(F_{1}(\frac{\sqrt{3}}{3}) =.92,\) \(F_{6}(\frac{\sqrt{3}}{3})=.978\) and \(F_{11}(\frac{\sqrt{3}}{3})=.99982.\) That is, by iterating \(\Gamma _{S}\) only 11 times, we can identify a subset of ‘chosen’ elements of the size of 1/3 of the population with very high probability. As shown above, we have \(F_{1}(1/2)=1.\) Again, as it follows from the considerations presented in the first part of the paper, this selection of 1/3 of the population can be done in just one step provided the oracle marks all chosen elements by changing their phase by \(2\pi /3.\)
Again using Mathematica, we have \(F_{8}(\frac{\sqrt{5}}{5}) =.999214,\) \(F_{20}(\frac{\sqrt{6}}{6})= .99864,\) \(F_{38}(\frac{\sqrt{7}}{7})=.999999965.\)
Remark 8
Notice also that if m is fixed and n is very large, then the optimal number of iterations to get the probability of selecting the set S close to 1 is approximately \(\frac{\pi }{4}\sqrt{\frac{n}{m}}.\) This result is known for \(m=1\) and was first noted in [2].

4 Proofs and auxiliary remarks

Proof of Theorem 1
First of all, notice that \(\left| \varvec{v}_{S}\right\rangle =\varvec{\left| \varvec{y}\right\rangle +}\alpha (e^{i\varphi }-1)\left| \varvec{y}_{S}\right\rangle .\) Hence, we are looking for a unitary operator that satisfies the equations
$$\begin{aligned} \mathcal {A}\left( \varvec{\left| \varvec{y}\right\rangle +}\alpha \left( e^{i\varphi }-1\right) \left| \varvec{y}_{S}\right\rangle \right) =\left| \varvec{y}_{S}\right\rangle \end{aligned}$$
for every \(S\subset \{1,\ldots ,n\}\) of cardinality m. Multiplying both sides by \(\mathcal {A}^{*}\), we get
$$\begin{aligned} \varvec{\left| \varvec{y}\right\rangle }=\left( \mathcal {A}^{*} -\alpha \left( e^{i\varphi }-1\right) \varvec{I}\right) \left| \varvec{y}_{S}\right\rangle , \end{aligned}$$
(4.1)
for every \(S\subset \{1,\ldots ,n\}\) of cardinality m. The above-mentioned equality implies that the matrix \((A^{*}-\alpha (e^{i\varphi }-1)\varvec{I})\) is of rank 1. Thus, we must have
$$\begin{aligned} \left( \mathcal {A}^{*}-\alpha \left( e^{i\varphi }-1\right) \varvec{I}\right) =\left| \varvec{x}_{1}\right\rangle \langle \varvec{x}_{2}|, \end{aligned}$$
for some vectors \(\left| \varvec{x}_{1}\right\rangle \) and \(\left| \varvec{x}_{2}\right\rangle .\) To get to know vectors \(\left| \varvec{x}_{1}\right\rangle \) and \(\left| \varvec{x}_{2}\right\rangle \) better, let us write
$$\begin{aligned} \mathcal {A}^{*}&=\alpha \left( e^{i\varphi }-1\right) \varvec{I} +\left| \varvec{x}_{1}\right\rangle \langle \varvec{x}_{2}|,\\ \mathcal {A}&=\alpha \left( e^{-i\varphi }-1\right) \varvec{I} +\left| \varvec{x}_{2}\right\rangle \langle \varvec{x}_{1}|, \end{aligned}$$
and multiply these equations side by side getting:
$$\begin{aligned} \varvec{I}&=\mathcal {A}^{*}\mathcal {A=}2\alpha ^{2}(1-\cos \varphi )\varvec{I}+\langle \varvec{x}_{2}\left| \varvec{x} _{2}\right\rangle \left| \varvec{x}_{1}\right\rangle \langle \varvec{x}_{1}|\\&\quad +\alpha \left( e^{i\varphi }-1\right) \left| \varvec{x}_{2}\right\rangle \langle \varvec{x}_{1}|+\alpha \left( e^{-i\varphi }-1\right) \left| \varvec{x}_{1}\right\rangle \langle \varvec{x}_{2}|\\&=\mathcal {AA}^{*}=2\alpha ^{2}(1-\cos \varphi )\varvec{I}+\langle \varvec{x}_{1}\left| \varvec{x}_{1}\right\rangle \left| \varvec{x}_{2}\right\rangle \langle \varvec{x}_{2}|\\&\quad +\alpha \left( e^{i\varphi }-1\right) \left| \varvec{x}_{2}\right\rangle \langle \varvec{x}_{1}|+\alpha \left( e^{-i\varphi }-1\right) \left| \varvec{x}_{1}\right\rangle \langle \varvec{x}_{2}|. \end{aligned}$$
Hence, we see that
$$\begin{aligned} \frac{\left| \varvec{x}_{1}\right\rangle \left\langle \varvec{x} _{1}\right. }{\left\langle \varvec{x}_{1}\right. \left| \varvec{x} _{1}\right\rangle }=\frac{\left| \varvec{x}_{2}\right\rangle \left\langle \varvec{x}_{2}\right. }{\left\langle \varvec{x}_{2}\right. \left| \varvec{x}_{2}\right\rangle }. \end{aligned}$$
Thus, we deduce that \(\left| \varvec{x}_{1}\right\rangle \) and \(\left| \varvec{x}_{2}\right\rangle \) are parallel. Consequently, let \(\left| \varvec{x}_{1}\right\rangle \langle \varvec{x}_{2}|= \gamma \left| \varvec{x}_{1}\right\rangle \langle \varvec{x}_{1}|.\) Besides, let us select \(\left| \varvec{x}_{1}\right\rangle \) to be normalized, i.e., \(\langle \varvec{x}_{1}\left| \varvec{x}_{1}\right\rangle =1.\) Relationship (4.1) implies that \(\left| \varvec{x}_{1}\right\rangle \) must be parallel to \(\left| \varvec{y} \right\rangle \) so since it is assumed to be a versor we must have \(\left| \varvec{x}_{1}\right\rangle =\left| \varvec{y} \right\rangle .\) Now notice that \(\langle \varvec{y}\left| \varvec{y} _{S}\right\rangle =\alpha \). Consequently, parameter \(\gamma \) must satisfy
$$\begin{aligned} 1=\gamma \alpha , \end{aligned}$$
(4.2)
or \(\gamma =1/\alpha \). Further, we have
$$\begin{aligned} \varvec{I}=2\alpha ^{2}(1-\cos \varphi )\varvec{I}+\left| \gamma \right| ^{2}\left| \varvec{x}_{1}\right\rangle \langle \varvec{x}_{1} |+2\alpha {\text {Re}}\left( \gamma ^{*}\left( e^{i\varphi }-1\right) \right) \left| \varvec{x}_{1}\right\rangle \langle \varvec{x}_{1}|. \end{aligned}$$
Consequently, the following system of two equations must be satisfied:
$$\begin{aligned}&2\alpha ^{2}(1-\cos \varphi )=1, \end{aligned}$$
(4.3)
$$\begin{aligned}&\frac{1}{\alpha ^{2}}+2\alpha \frac{1}{\alpha }{\text {Re}}\left( e^{i\varphi }-1\right) =\frac{1}{\alpha ^{2}}+2(\cos \varphi -1)=0. \end{aligned}$$
(4.4)
From the first one, we get
$$\begin{aligned} 1-\cos \varphi =2\sin ^{2}(\varphi /2)=\frac{1}{2\alpha ^{2} }, \end{aligned}$$
from which it follows that the smallest \(\alpha ^{2},\) for which we can get equality (4.1) is 1/4 and that it can happen only when \(\varphi =\pi .\) Notice also that if \(\varphi =2\arcsin \frac{1}{2\alpha },\) then not only the first (i.e., (4.3)) but also the second equation (i.e., (4.4)) is satisfied. So the operator in question is given by the following formula:
$$\begin{aligned} \mathcal {A}=\alpha \left( e^{-i\varphi }-1\right) \varvec{I}+\frac{1}{\alpha }\left| \varvec{y}\right\rangle \left\langle \varvec{y}\right. . \end{aligned}$$
Now notice that
$$\begin{aligned} \alpha \left( e^{-i\varphi }-1\right)&\varvec{=}\frac{\cos \varphi -1-i\sin \varphi }{2\sin (\varphi /2)}=\frac{-2\sin ^{2}(\varphi /2)-2i\sin (\varphi /2)\cos (\varphi /2)}{2\sin (\varphi /2)}\\&=-\sin (\varphi /2)-i\cos (\varphi /2)=-ie^{-i\varphi /2}. \end{aligned}$$
Secondly, notice that
$$\begin{aligned} \frac{1}{\alpha }e^{i\varphi /2}&=2\sin (\varphi /2)(\cos (\varphi /2)+i\sin (\varphi /2))\\&=\sin \varphi +i(1-\cos \varphi )=i(1-\cos \varphi -i\sin \varphi )\\&=i(1-e^{i\varphi }). \end{aligned}$$
Hence, the formula for \(\mathcal {A}\) can be nicely transformed to
$$\begin{aligned} \mathcal {A=-}ie^{-i\varphi /2}\left( \varvec{I}-\left( 1-e^{i\varphi }\right) \left| \varvec{y}\right\rangle \langle \varvec{y|}\right) . \end{aligned}$$
Now if \(\alpha =1/2\), then \(\varphi =\pi \) and we have
$$\begin{aligned} \mathcal {A}=-\varvec{I}+2\left| \varvec{y}\right\rangle \left\langle \varvec{y}\right| , \end{aligned}$$
if \(\alpha =1/\sqrt{3}\), then \(\varphi =2\pi /3\) and we have
$$\begin{aligned} \mathcal {A}=e^{-i5\pi /6}\left( \varvec{I}-\left( 1-e^{2i\pi /3}\right) \left| \varvec{y} \right\rangle \langle \varvec{y|}\right) , \end{aligned}$$
if \(\alpha =1/\sqrt{2}\), then \(\varphi =\pi /2\) and we have:
$$\begin{aligned} \mathcal {A}=e^{-3i\pi /4}\left( \varvec{I}-\left( 1-e^{i\pi /2}\right) \left| \varvec{y} \right\rangle \langle \varvec{y|}\right) . \end{aligned}$$
\(\square \)
Proof of Theorem 3
We will use the form (3.4). We have
$$\begin{aligned}&O_{1}(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k} ,\varvec{G}_{k}^{(1)})O_{2}(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k}^{(2)})\\&\quad =\left( \varvec{I}-\sum _{j=1}^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right. +\varvec{Y}^{*}\varvec{G}_{k}^{(1)}\varvec{Y)(I}-\sum _{j=1}^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right. +\varvec{Y}^{*}\varvec{G}_{k}^{(2)}\varvec{Y}\right) \\&\quad =\varvec{I}-\sum _{j=1}^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right. +\varvec{Y}^{*}G_{k}^{(1)}\varvec{YY}^{*}\varvec{G} _{k}^{(2)}\varvec{Y=I}-\sum _{j=1}^{k}\left| x_{j}\right\rangle \left\langle x_{j}\right. +\varvec{Y}^{*}\varvec{G}_{k}^{(1)} \varvec{G}_{k}^{(2)}\varvec{Y.} \end{aligned}$$
\(\square \)
Proof of Theorem 4
By Theorem 3, we know that given two operators
\(O_{1}(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k}^{(1)})\) and \(O_{2}(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k}^{(2)})\), whose associated matrices satisfy
$$\begin{aligned} \varvec{G}_{k}^{(1)}\varvec{G}_{k}^{(2)}=\varvec{I}_{k}, \end{aligned}$$
we deduce that the composition of these two operators is equal to
$$\begin{aligned} \varvec{I}-\sum _{j=1}^{k}\left| x_{j}\right\rangle \langle x_{j}|+\sum _{j=1}^{k}\left| x_{j} \right\rangle \langle x_{j}|=\varvec{I}. \end{aligned}$$
In other words, the operator \(O(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k}^{-1})\) is the inverse of the operator \(O(\left\{ \left| x_{j}\right\rangle \right\} _{j=1}^{k},\varvec{G}_{k})\). Now notice that the entries of \(\varvec{G}_{k}^{*}\) are
$$\begin{aligned} \left( \varvec{G}_{k}^{*}\right) _{jl}=\left\{ \begin{array} [c]{ccc} \left( 1-\beta _{j}^{*}\right) &{} if &{} j=l\\ -\alpha _{lj}^{*} &{} if &{} j\ne l \end{array} \right. . \end{aligned}$$
Consequently, we have
$$\begin{aligned} \left( \varvec{G}_{k}^{*}\varvec{G}_{k}\right) _{jl}=\left\{ \begin{array} [c]{ccc} |1-\beta _{j}|^{2}+\sum _{m\ne j}|\alpha _{jm}|^{2} &{} if &{} j=l\\ \left( 1-\beta _{j}\right) \alpha _{jl}^{*}+\left( 1-\beta _{l}^{*}\right) \alpha _{lj}+\sum _{m\ne j,m\ne l}\alpha _{jm}\alpha _{ml}^{*} &{} if &{} j\ne l \end{array} \right. , \end{aligned}$$
which, together with the requirement that \(\varvec{G}_{k}^{*}\varvec{G}_{k}\) is the identity matrix, leads to equations (3.5) and (3.6). \(\square \)
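In the same spirit as the previous sketch (and with the same hypothetical construction of \(\varvec{Y}\) from the \(\left\langle x_{j}\right| \)), one can check numerically that the operator of the form (3.4) is unitary precisely when \(\varvec{G}_{k}^{*}\varvec{G}_{k}=\varvec{I}_{k}\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
Q, _ = np.linalg.qr(rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k)))
Y = Q.conj().T                                  # rows are <x_j|
P = Y.conj().T @ Y                              # sum_j |x_j><x_j|

def O(G):
    return np.eye(n) - P + Y.conj().T @ G @ Y   # formula (3.4)

def is_unitary(M):
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]))

G_unitary, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))
G_generic = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))

print(is_unitary(O(G_unitary)))   # expected: True, since G* G = I_k
print(is_unitary(O(G_generic)))   # expected: False (generically)
```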
Proof of Theorem 5
We have \(\varvec{G}_{S} =\varvec{G}_{2}\varvec{G}_{1}\). By direct calculation, we obtain the determinant and the trace of the matrix \(\varvec{G}_{S}\). Using the well-known formula expressing the characteristic polynomial of a \(2\times 2\) matrix in terms of its determinant and trace, namely \(\lambda ^{2}-{\text {tr}}(\varvec{G}_{S})\lambda +\det (\varvec{G}_{S})\), we get the characteristic polynomial of \(\varvec{G}_{S}\). Solving it directly, we obtain:
$$\begin{aligned} \lambda _{1}&=-ie^{i\varphi /2}\left( \left( 1-2\alpha ^{2}\sin ^{2}(\varphi /2)\right) +2i\alpha \sin (\varphi /2)\sqrt{1-\alpha ^{2}\sin ^{2}(\varphi /2)}\right) ,\\ \lambda _{2}&=-ie^{i\varphi /2}\left( \left( 1-2\alpha ^{2}\sin ^{2}(\varphi /2)\right) -2i\alpha \sin (\varphi /2)\sqrt{1-\alpha ^{2}\sin ^{2}(\varphi /2)}\right) , \end{aligned}$$
for the eigenvalues. Further, we get
$$\begin{aligned} v_{1}^{T}&=\left( e^{i\varphi /2}\frac{-\alpha \cos (\varphi /2)+\sqrt{1-\alpha ^{2}\sin ^{2}(\varphi /2)}}{\sqrt{1-\alpha ^{2}}},1\right) =\left( e^{i\varphi /2}b,1\right) \\ v_{2}^{T}&=\left( -e^{i\varphi /2}\frac{\alpha \cos (\varphi /2)+\sqrt{1-\alpha ^{2}\sin ^{2}(\varphi /2)}}{\sqrt{1-\alpha ^{2}}},1\right) =\left( -e^{i\varphi /2}\frac{1}{b},1\right) , \end{aligned}$$
for the eigenvectors. Now, it is a matter of simple algebra to normalize them and to form the matrix \(\mathcal {B}\) out of the coefficients of the versors obtained in this way. \(\square \)
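As a quick sanity check (again, outside the formal proof), the following sketch verifies numerically that the two eigenvalues written above lie on the unit circle, as eigenvalues of a unitary operator must, for arbitrary admissible \(\alpha \) and \(\varphi \):

```python
import numpy as np

def eigenvalues(alpha, phi):
    """Eigenvalues lambda_1, lambda_2 as given in the proof of Theorem 5."""
    s = np.sin(phi / 2)
    root = np.sqrt(1 - alpha**2 * s**2)
    base = 1 - 2 * alpha**2 * s**2
    lam1 = -1j * np.exp(1j * phi / 2) * (base + 2j * alpha * s * root)
    lam2 = -1j * np.exp(1j * phi / 2) * (base - 2j * alpha * s * root)
    return lam1, lam2

rng = np.random.default_rng(2)
for _ in range(5):
    alpha = rng.uniform(0, 1)        # |alpha| <= 1 keeps the square root real
    phi = rng.uniform(0, 2 * np.pi)
    l1, l2 = eigenvalues(alpha, phi)
    print(np.isclose(abs(l1), 1.0), np.isclose(abs(l2), 1.0))
```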
Proof of Corollary 3
By Proposition 1, we deduce that the eigenvectors of the operator \(\mathcal {G}\) are given by the relationship
$$\begin{aligned} \left[ \begin{array} [c]{c} \left| g_{1}\right\rangle \\ \left| g_{2}\right\rangle \end{array} \right] =\mathcal {B}\left[ \begin{array} [c]{c} \left| U_{S}\right\rangle \\ \left| E_{S}\right\rangle \end{array} \right] . \end{aligned}$$
Consequently, we have
$$\begin{aligned} \mathcal {B}^{-1}\left[ \begin{array} [c]{c} \left| g_{1}\right\rangle \\ \left| g_{2}\right\rangle \end{array} \right] =\left[ \begin{array} [c]{c} \left| U_{S}\right\rangle \\ \left| E_{S}\right\rangle \end{array} \right] . \end{aligned}$$
Now recall formula (2.4). We see that the versor \(\left| \varvec{1} \right\rangle \) is given by
$$\begin{aligned} \left| \varvec{1}\right\rangle =(\left| U_{S}\right\rangle ,\left| E_{S}\right\rangle )\left[ \begin{array} [c]{c} \sqrt{1-\alpha ^{2}}\\ \alpha \end{array} \right] . \end{aligned}$$
We have
$$\begin{aligned}&(\mathcal {G})^{k}\left| \varvec{1}\right\rangle =\mathcal {B}\mathcal {B}^{-1}(\mathcal {G} )^{k}\mathcal {BB}^{-1}\left| \varvec{1}\right\rangle \\&\quad =(-i)^{k}e^{ik\varphi /2}\left[ \begin{array} [c]{cc} e^{i\varphi /2}\sin \gamma &{} -e^{i\varphi /2}\cos \gamma \\ \cos \gamma &{} \sin \gamma \end{array} \right] \left[ \begin{array} [c]{cc} e^{ik\theta } &{} 0\\ 0 &{} e^{-ik\theta } \end{array} \right] \\&\qquad \times \left[ \begin{array} [c]{cc} e^{-i\varphi /2}\sin \gamma &{} \cos \gamma \\ -e^{-i\varphi /2}\cos \gamma &{} \sin \gamma \end{array} \right] \left[ \begin{array} [c]{c} \cos \delta \\ \sin \delta \end{array} \right] \\&\quad =(-i)^{k}e^{ik\varphi /2}\left[ \begin{array} [c]{cc} e^{i\varphi /2}\sin \gamma &{} -e^{i\varphi /2}\cos \gamma \\ \cos \gamma &{} \sin \gamma \end{array} \right] \\&\qquad \times \left[ \begin{array} [c]{c} e^{ik\theta }\left( e^{-i\varphi /2}\sin \gamma \cos \delta +\cos \gamma \sin \delta \right) \\ e^{-ik\theta }\left( -e^{-i\varphi /2}\cos \gamma \cos \delta +\sin \gamma \sin \delta \right) \end{array} \right] \\&\quad =(-i)^{k}\left[ \begin{array} [c]{c} e^{ik\theta }\left( e^{ik\varphi /2}\sin ^{2}\gamma \cos \delta +e^{i(k+1)\varphi /2} \cos \gamma \sin \gamma \sin \delta \right) \\ +e^{-ik\theta }\left( e^{ik\varphi /2}\cos ^{2}\gamma \cos \delta -e^{i(k+1)\varphi /2} \cos \gamma \sin \gamma \sin \delta \right) \\ e^{ik\theta }\left( e^{i(k-1)\varphi /2}\sin \gamma \cos \gamma \cos \delta +e^{ik\varphi /2}\cos ^{2}\gamma \sin \delta \right) \\ +e^{-ik\theta }\left( -e^{i(k-1)\varphi /2}\cos \gamma \sin \gamma \cos \delta +e^{ik\varphi /2}\sin ^{2}\gamma \sin \delta \right) \end{array} \right] . \end{aligned}$$
\(\square \)
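The diagonalization step used in this proof is easy to reproduce numerically. In the sketch below the parameter values are placeholders (in the paper \(\gamma ,\theta ,\delta \) are determined by \(\alpha \) and \(\varphi \) as in Theorem 5 and Corollary 3); the code merely mirrors the matrix product written above and checks that the matrix denoted \(\mathcal {B}^{-1}\) really inverts \(\mathcal {B}\).

```python
import numpy as np

# Placeholder parameter values; in the paper gamma, theta, delta are
# functions of alpha and phi (see Theorem 5 and Corollary 3).
phi, gamma, theta, delta, k = 0.7, 0.4, 0.3, 0.25, 5

B = np.array([[np.exp(1j * phi / 2) * np.sin(gamma), -np.exp(1j * phi / 2) * np.cos(gamma)],
              [np.cos(gamma),                          np.sin(gamma)]])
B_inv = np.array([[np.exp(-1j * phi / 2) * np.sin(gamma),  np.cos(gamma)],
                  [-np.exp(-1j * phi / 2) * np.cos(gamma), np.sin(gamma)]])
one = np.array([np.cos(delta), np.sin(delta)])   # coordinates of |1> in (|U_S>, |E_S>)

# (G)^k |1> via the eigen-decomposition used in the proof:
phase = (-1j) ** k * np.exp(1j * k * phi / 2)
D_k = np.diag([np.exp(1j * k * theta), np.exp(-1j * k * theta)])
result = phase * B @ D_k @ B_inv @ one

print(np.allclose(B @ B_inv, np.eye(2)))   # B_inv really inverts B
print(result)                              # coordinates of (G)^k |1> in (|U_S>, |E_S>)
```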
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.