In this section, we survey a number of quantum cryptographic protocols (see Sect.
1.1 for a brief overview of these topics). Many of these protocols share the remarkable feature of being based on a very simple pattern of quantum information called
conjugate coding. Because of its paramount importance in quantum cryptography, we first present this notion in Sect.
3.1. We then show how conjugate coding is the crucial ingredient in the quantum-cryptographic constructions for quantum money (Sect.
3.1), QKD (Sect.
3.2), a quantum reduction from oblivious transfer to bit commitment (Sect.
3.3), the limited-quantum-storage model (Sect.
3.4) and delegated quantum computation (Sect.
3.5). Further topics covered in this section are quantum coin-flipping (Sect.
3.6) and device-independent cryptography (Sect.
3.7).
3.1 Conjugate coding
Conjugate coding [
233] is based on the principle that we can encode
classical information into
conjugate quantum bases. This primitive is extremely important in quantum cryptography—in fact, the vast majority of quantum cryptographic protocols exploit conjugate coding in one way or another. Conjugate coding is also called
quantum coding [
30] and
quantum multiplexing [
26].
The principle of conjugate coding is simple: for clarity of presentation and consistency with commonly used terminology, we associate a qubit with a photon (a particle of light), and use photon polarization as a quantum degree of freedom. Among others, photons can be polarized horizontally
\((\vert \leftrightarrow \rangle )\), vertically \((\vert \updownarrow \rangle )\), diagonally to the left \((\vert \nearrow \rangle )\), or diagonally to the right \((\vert \searrow \rangle )\). Photon polarization is a quantum property, and by associating
\( \vert \leftrightarrow \rangle = \vert 0\rangle \),
\(\vert \updownarrow \rangle = \vert 1\rangle \),
\(\vert \nearrow \rangle = (\vert 0\rangle + \vert 1\rangle )/\sqrt{2}\) and
\(\vert \searrow \rangle = (\vert 0\rangle - \vert 1\rangle )/\sqrt{2}\), we can apply quantum operations to these states, as in Sect.
2.
Each set
\(R=\{\vert \leftrightarrow \rangle ,\vert \updownarrow \rangle \}\) and
\(D=\{\vert \nearrow \rangle ,\vert \searrow \rangle \}\) forms a basis (called the
rectilinear and
diagonal bases, respectively), and can thus be used to encode a classical bit (see Table
1).
R and
D are
conjugate bases.
Table 1
Example of conjugate coding

Bit  Basis  Quantum encoding
0    R      \(\vert \leftrightarrow \rangle \)
1    R      \(\vert \updownarrow \rangle \)
0    D      \(\vert \nearrow \rangle \)
1    D      \(\vert \searrow \rangle \)
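As a concrete illustration, the behaviour of conjugate coding under measurement can be simulated classically by tracking the Born-rule statistics (a minimal sketch; the helper names `encode` and `measure` are ours, not from the literature):

```python
import random

# A prepared qubit in conjugate coding is fully described by its bit and basis.
def encode(bit, basis):
    return (bit, basis)

def measure(qubit, basis):
    bit, enc_basis = qubit
    if basis == enc_basis:
        return bit                  # same basis: the outcome is deterministic
    return random.randint(0, 1)     # conjugate basis: the outcome is uniformly random

random.seed(0)
# Measuring in the encoding basis always recovers the encoded bit.
assert all(measure(encode(b, base), base) == b
           for b in (0, 1) for base in "RD")

# Measuring in the conjugate basis gives ~50/50 outcomes, destroying the encoding.
wrong = sum(measure(encode(0, "R"), "D") for _ in range(10_000))
print(wrong / 10_000)               # close to 0.5
```

The second check illustrates property 1 below: once the state is measured in the wrong basis, no trace of the original bit remains.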
The relevance of conjugate coding to cryptography is summarized by two key features that were, remarkably, already mentioned and exploited in Wiesner’s work [
233]:
1.
Measuring in one basis irrevocably destroys any information about the encoding in its conjugate basis.
2.
The originator of the quantum encoding can verify its authenticity; however, without knowledge of the encoding basis, and given access to a single encoded state, no third party can create two quantum states that pass this verification procedure with high probability.
In order to explain the first property, recall the well-known
Heisenberg uncertainty relation [
135], which forbids learning both the position and momentum of a quantum particle precisely and simultaneously. In terms of photon polarization, and for a single photon, let us denote by
\(P_X\) the distribution of outcomes when measuring the photon in the rectilinear basis and by
\(Q_X\) the distribution when measuring in the diagonal basis. Following Heisenberg, Maassen and Uffink [
162] showed an
uncertainty relation:
\(H(P_X) + H(Q_X) \ge 1\) (where
H is the
Shannon entropy, an information-theoretic measure of uncertainty given by
\({H(P_X)} = -\sum _x p_x \log _2p_x\)). Intuitively, such a relation quantifies the fact that one can know the outcome exactly in one basis, but consequently has complete uncertainty in the other basis. Looking ahead, we will see that such uncertainty relations play a key role in proving security of quantum cryptographic protocols, e.g. in the limited-quantum-storage setting (Sect.
3.4). The second property above is explained by noting that a quantum encoding can be verified by measuring each qubit in its encoding basis and checking that the measurement result corresponds to the correct encoded bit. Intuitively, the
no-cloning theorem (Sect.
2.4) prevents a third party from
forging a state that would pass this verification procedure; however, formalizing this concept requires more work (see Sect.
3.1).
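The Maassen–Uffink bound \(H(P_X) + H(Q_X) \ge 1\) can be checked numerically for the four conjugate-coding states (a small sketch; we use real amplitudes in the computational basis, which suffices for these states):

```python
import math

def shannon(p):
    """Shannon entropy H(p) = -sum p_x log2 p_x of a distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

s = 1 / math.sqrt(2)
# Amplitudes of the four conjugate-coding states in the rectilinear basis.
states = {"|0>": (1, 0), "|1>": (0, 1), "|+>": (s, s), "|->": (s, -s)}

for name, (a0, a1) in states.items():
    P = (a0**2, a1**2)                              # rectilinear-basis outcome distribution
    Q = (((a0 + a1) * s)**2, ((a0 - a1) * s)**2)    # diagonal-basis outcome distribution
    total = shannon(P) + shannon(Q)
    print(name, round(total, 3))                    # always >= 1; equality for these states
    assert total >= 1 - 1e-9
```

For each of the four states, one distribution is deterministic (entropy 0) and the other is uniform (entropy 1), so the bound is attained with equality.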
What is more, the technological requirements of conjugate coding are very basic: the single-qubit “prepare-and-measure” paradigm of conjugate coding is feasible with today’s technology—thus, many protocols derived from conjugate coding inherit this desirable property (which is, in fact, considered the gold standard for “feasible” quantum protocols).
In the late 1960s, Wiesner [
233] had the visionary idea that quantum information could be used to create unforgeable bank notes. His ideas were in fact so much ahead of their time that it took years to publish them! (According to [
30], Wiesner’s original manuscript was written in 1968.)
In a nutshell, Wiesner’s proposal consists in
quantum banknotes created by encoding quantum particles using conjugate coding (Sect.
3.1), with both the classical information and basis choice being chosen as random bitstrings. Thus, a banknote consists of a sequence of single qubits, chosen randomly from the states \(\{\vert \leftrightarrow \rangle ,\vert \updownarrow \rangle ,\vert \nearrow \rangle ,\vert \searrow \rangle \}\). As discussed in Sect.
3.1, the originator of the quantum banknote (typically called “the bank”) can verify that a quantum banknote is genuine, yet quantum mechanics prevents essentially any possibility of counterfeiting. Clearly such a functionality is beyond what classical physics can offer: since any digital record can be copied, classical information simply cannot be used for uncloneability (not even computational assumptions will help).
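The mint-and-verify cycle, and the failure of a naive measure-and-resend counterfeiter, can be sketched in simulation (our illustrative code, not Wiesner's formalism; we check only the forged copy, modelling that the attacker's measurement consumes the genuine note):

```python
import random
rng = random.Random(42)

def mint(n):
    """The bank's secret record: random bits encoded in random conjugate bases."""
    return [(rng.randint(0, 1), rng.choice("RD")) for _ in range(n)]

def measure(state, basis):
    bit, enc = state
    return bit if basis == enc else rng.randint(0, 1)

def verify(note, secret):
    # The bank measures each qubit in its true encoding basis and checks the bit.
    return all(measure(q, enc) == bit for q, (bit, enc) in zip(note, secret))

def counterfeit(note):
    # Measure-and-resend attack: guess a basis per qubit, re-prepare the outcome.
    return [(measure(q, g), g) for q in note for g in [rng.choice("RD")]]

n, trials = 16, 2000
passed = 0
for _ in range(trials):
    secret = mint(n)
    assert verify(list(secret), secret)       # a genuine note always verifies
    if verify(counterfeit(list(secret)), secret):
        passed += 1
print(passed / trials)                        # close to (3/4)^16, i.e. about 0.01
```

Each qubit independently passes verification with probability 3/4 under this attack (guess the basis correctly with probability 1/2, otherwise pass a fresh random outcome with probability 1/2), so the forgery succeeds with probability \((3/4)^n\).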
Wiesner’s work was improved and extended in many ways: early work of Bennett, Brassard, Breidbart and Wiesner [
26] showed how to combine computational assumptions with conjugate coding in order to achieve a type of
public verifiability for the encoded states (they coined their invention
unforgeable subway tokens). Further work on publicly-verifiable (also called
public-key quantum money) includes schemes based on the computational difficulty of some knot-theory related problems [
107] (see also [
3]), verification “oracles” [
1] and hidden subspaces [
2].
Returning to Wiesner’s scheme (which is often called
private-key quantum money in order to distinguish it from the public-key quantum money schemes), we note that the first proof of security in the case of multiple qubits is based on semi-definite programming, and appeared only recently [
181] (this result is tight, since it also gives an explicit
optimal attack). We also note work on variants of Wiesner’s scheme in which quantum encodings are returned after validation: in all cases (whether the post-verification state is always returned [
108], or the post-verification state is returned
only for encodings that are deemed valid [
55]), the resulting protocol has been found to be insecure.
We also note that further work has studied the possibility of private-key quantum money that can be verified using only
classical interaction with the bank [
116,
181], quantum
coins [
185] (which provide a perfect level of anonymity), as well as noise-tolerant versions of Wiesner’s scheme [
191].
3.2 Quantum key distribution
QKD is by far the most successful application of quantum information to cryptography. By now, QKD is the main topic of a large number of surveys (see, for instance, [
23,
45,
56,
109,
120]). Due to the abundance of very good references on this topic, we survey it only briefly here.
The “BB84” protocol [
24,
25] was the first to show how conjugate coding could be used for an information-theoretically secure key agreement protocol. In a nutshell, the protocol consists in Alice sending a sequence of single qubits, chosen randomly from the states \(\{\vert \leftrightarrow \rangle ,\vert \updownarrow \rangle ,\vert \nearrow \rangle ,\vert \searrow \rangle \}\). Bob chooses to measure them according to his own random choice of measurement bases. They communicate their basis choice for each encoded qubit;
eavesdropper detection is performed by comparing the measurement results on a fraction of the bases on which their choices coincide—if successful, this procedure gives a bound on the secrecy and similarity of the remaining shared string, which can be used to distill an almost-perfect shared secret between Alice and Bob. In order to prevent man-in-the-middle attacks, this procedure requires
authenticated classical channels. Usually, authentication is achieved by an initial shared classical secret between Alice and Bob. Thus, QKD is more accurately described as a key-expansion primitive. We note that, as a theoretical or experimental tool, it is often useful to consider a protocol equivalent to BB84, where the random choice of encoding basis (rectilinear or diagonal) is
delayed; thus a quantum source would produce a sequence of maximally entangled states
\((\vert 00\rangle +\vert 11\rangle )/\sqrt{2}\), with both Alice and Bob then measuring in their random choice of bases. That an entangled system could be used in lieu of single qubits was suggested by Ekert [
104], but note that Ekert’s idea was to base security on the observation of a Bell-inequality violation—which implies a set of different measurements than in the rectilinear/diagonal bases. The entanglement-based (“purified”) BB84 protocol was introduced by Bennett, Brassard and Mermin in [
28].
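The prepare-measure-sift phase of BB84 described above can be sketched in a few lines (our simulation, without channel noise or an eavesdropper; real protocols continue with error estimation and privacy amplification):

```python
import random
rng = random.Random(7)

n = 1000
alice_bits  = [rng.randint(0, 1) for _ in range(n)]
alice_bases = [rng.choice("RD") for _ in range(n)]
bob_bases   = [rng.choice("RD") for _ in range(n)]

# Bob's measurement: deterministic in the matching basis, uniformly random otherwise.
bob_bits = [b if ab == bb else rng.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: both announce their bases and keep only the coinciding positions.
sifted = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
key_a = [alice_bits[i] for i in sifted]
key_b = [bob_bits[i] for i in sifted]

assert key_a == key_b        # no noise, no eavesdropper: the sifted keys agree
print(len(sifted) / n)       # about half the positions survive sifting
```

An intercept-resend eavesdropper would measure in her own random bases, introducing a detectable 25% error rate on the sifted positions, which the comparison step described above would reveal.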
We briefly mention that the formal security of QKD was originally left open, and that a long sequence of works (e.g. [
158,
171]) culminated in a relatively accessible proof by Shor and Preskill, based on the use of quantum error correction [
207]. Further work by Renner [
198] showed a very different approach for proving the security of QKD based on exploiting the symmetries of the protocol (and applying a
de Finetti style representation theorem), and splitting the security analysis into the information-theoretic steps of error-correction and privacy amplification. Other proofs of QKD are more directly based on the complementarity of the measurements [
148]. It is a sign of the complexity of QKD security proofs that most articles on this topic focus only on subparts of the security analysis; only very recently did a first comprehensive analysis of security appear [
213].
The huge success of QKD is due in part to the fact that it is readily realizable in the laboratory (the first demonstration appeared in 1992 [
27]). In light of practical implementations, security proofs for QKD need to be revisited in order to obtain concrete security parameters—this is the realm of
finite-key security [
133,
204,
213,
214]. Furthermore, we note that when it comes to real-world implementations, QKD is vulnerable to
side-channel attacks, which are due to the fact that physical implementations deviate from the idealized models used for security proofs (this is often referred to as
quantum hacking [
161]).
We further note that the assumption of an initial short shared secret (for authenticating the classical channel) in the implementation of QKD can be replaced with a computational assumption or an assumption about the storage capabilities of the eavesdropper (see Sect.
3.4). The result is
everlasting [
219] or
long-term [
211] security: information-theoretic security is guaranteed
except during the short period of time during which we assume a computational (or memory) assumption holds.
3.3 Bit commitment implies oblivious transfer
Oblivious transfer (OT) and bit commitment (BC) are two basic and important primitives for cryptography. In the classical case, it is easy to show that OT implies BC (in the information-theoretic setting), but the implication in the other direction does not hold.
7 In stark contrast, OT and BC are known to be
equivalent in the quantum world. In the following sections, we introduce these primitives (Sect.
3.3.1) and describe a quantum reduction from oblivious transfer to bit commitment (Sect.
3.3.2).
3.3.1 Oblivious transfer (OT) and bit commitment (BC)
Wiesner’s paper about quantum cryptography [
233] introduced “a means for transmitting two messages either but not both of which may be received”. This classical cryptographic primitive was later rediscovered (under a slightly different form) by Rabin [
195], and was given in the form of
1-out-of-2 Oblivious Transfer (OT) by Even, Goldreich and Lempel [
106]. In OT, Alice sends two messages
\(m_0,m_1\) to Bob who receives only one of the messages
\(m_c\) according to his choice bit
c. Security for Alice (against dishonest Bob) guarantees that Bob receives only one of the two messages, whereas security for Bob (against dishonest Alice) ensures that Alice does not learn anything about Bob’s choice bit.
8 In the version by Rabin [
195], this primitive is essentially a secure erasure channel where Alice sends a single bit to Bob. This bit gets erased with probability 1/2 (in this case Bob receives
\(\bot \)), but Alice does not learn whether the bit was erased. In fact, it is known that Rabin OT is
equivalent to 1-out-of-2 OT [
81].
The importance of OT is embodied by the fact that it is
universal for secure two-party computation [
145] (i.e. using several instances of 1-out-of-2 OT, any function can be securely evaluated among two parties such that no dishonest player can learn any information about the other player’s input—beyond what can already be inferred from the output of the computed function).
9 Due to this universality, the innocent-looking OT primitive gives an excellent indicator for the cryptographic power of a model.
Bit Commitment (BC) is a cryptographic primitive that captures the following two-party functionality: Alice has a bit b that she wants to commit to Bob, but she wants to prevent Bob from reading b until she chooses to reveal it (concealing or hiding). Although Bob should not be able to determine b before Alice reveals it, Alice should be unable to change the bit after it is committed (binding). A physical-world implementation of bit commitment would be for Alice to write b on a piece of paper, lock it in a safe, and send the safe to Bob. Since Bob cannot open the safe, he cannot determine b (concealing), and since Alice has physically given the safe to Bob, she cannot change b after the commitment phase (binding). When Alice wishes to reveal the bit, she sends the key to Bob.
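The safe analogy is commonly realized computationally with a salted hash commitment (a sketch of the standard pattern, not a construction from the surveyed papers; hiding and binding here rest on assumptions about the hash function):

```python
import hashlib
import secrets

def commit(bit):
    """Commit phase: Alice sends the digest, keeping (bit, salt) to open later."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + bytes([bit])).hexdigest()
    return digest, (bit, salt)

def open_commitment(digest, bit, salt):
    """Reveal phase: Bob recomputes the hash and checks it against the digest."""
    return hashlib.sha256(salt + bytes([bit])).hexdigest() == digest

digest, (bit, salt) = commit(1)
assert open_commitment(digest, 1, salt)        # honest opening is accepted
assert not open_commitment(digest, 0, salt)    # Alice cannot flip the committed bit
```

The random salt provides hiding (Bob cannot brute-force the one-bit message), while collision resistance of the hash provides binding.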
3.3.2 Quantum protocol for oblivious transfer
In [
29], Bennett, Brassard, Crépeau, and Skubiszewska suggested a very natural quantum protocol for OT (assuming BC): suppose Alice would like to obliviously send
\(m_0\) and
\(m_1\) so that Bob receives the message
\(m_c\) according to his choice bit
c. She uses conjugate coding to send
n quantum states, each chosen randomly from the states \(\{\vert \leftrightarrow \rangle ,\vert \updownarrow \rangle ,\vert \nearrow \rangle ,\vert \searrow \rangle \}\), to Bob. Let us denote by
\(x \in \{0,1\}^n\) the string of encoded bits and by
\(\theta \in \{R,D\}^n\) the string of basis choices. Bob measures the received qubits in a random basis
\(\theta ' \in \{R,D\}^n\) of his choice, resulting in outcomes
\(x' \in \{0,1\}^n\). After Alice tells Bob the bases
\(\theta \in \{R,D\}^n\) she was using, Bob can partition the set of indices into two disjoint sets
\(I_0\) and \(I_1\): depending on his OT choice bit
c, he puts all the indices where he measured correctly in
\(I_c\) and the rest in
\(I_{1-c}\). Bob then informs Alice about
\(I_0,I_1\) (in this fixed order, independent of
c). Alice picks two independent hash functions
\(f_0,f_1\) (mapping from
n / 2 bits to 1 bit) and sends
\(s_i = f_i(x|_{I_i}) \oplus m_i\) for
\(i=0,1\) to Bob. Here,
\(x|_I\) denotes the substring of
x with bit indices in
I. Bob will be able to recover
\(m_c\) by computing
\(f_c(x'|_{I_c}) \oplus s_c\).
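The steps above can be sketched end to end in simulation (our illustrative code: XOR parity stands in for the randomly chosen one-bit hash functions \(f_0,f_1\), and we do not enforce the \(n/2\) set sizes or model a dishonest Bob):

```python
import random
rng = random.Random(1)

n = 64
# Alice's conjugate coding: random bits x encoded in random bases theta.
x     = [rng.randint(0, 1) for _ in range(n)]
theta = [rng.choice("RD") for _ in range(n)]

# Honest Bob measures each qubit in a random basis theta'.
theta_p = [rng.choice("RD") for _ in range(n)]
x_p = [xi if t == tp else rng.randint(0, 1)
       for xi, t, tp in zip(x, theta, theta_p)]

# Alice announces theta; Bob partitions the indices by whether he measured correctly,
# placing the correctly measured positions into I_c for his choice bit c.
c = 1
I = {c:     [i for i in range(n) if theta[i] == theta_p[i]],
     1 - c: [i for i in range(n) if theta[i] != theta_p[i]]}

def parity(bits):
    """One-bit stand-in for the hash functions f_0, f_1."""
    p = 0
    for b in bits:
        p ^= b
    return p

m = [0, 1]                                   # Alice's two messages
s = [m[i] ^ parity([x[j] for j in I[i]]) for i in (0, 1)]

# Bob recovers m_c from his own outcomes, which match x on I_c.
recovered = s[c] ^ parity([x_p[j] for j in I[c]])
assert recovered == m[c]
print("Bob received m_c =", recovered)
```

On \(I_{1-c}\), Bob measured in the wrong basis, so his outcomes carry no information about \(x|_{I_{1-c}}\) and the mask on \(m_{1-c}\) remains (information-theoretically) unknown to him.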
While it is easy to show that the above protocol is correct and secure against dishonest Alice (i.e. Alice does not learn anything about Bob’s choice bit
c), it is clearly insecure against a dishonest Bob who is able to store all quantum states until Alice tells Bob the basis string
\(\theta \). Such a Bob can then measure all positions in the correct basis and hence recover both
\(m_0\) and
\(m_1\). The idea of [
29] was to
force Bob to perform the measurement by requiring him to commit to the bases
\(\theta '\) and outcomes
\(x'\). Alice then checks a fraction of these commitments
before Alice announces the basis string
\(\theta \).
A long line of research [
29,
82,
168,
172,
238] has worked towards proving the security of this protocol. However, the crucial tools for an actual proof were eventually developed by Damgård, Fehr, Lunemann, Salvail, and Schaffner [
91] nearly two decades after the original protocol was proposed; Unruh subsequently used these techniques to formally establish the equivalence of BC and OT in the quantum UC model [
216].
3.4 Limited-quantum-storage models
As we will see in Sect.
4.1, bit commitment is impossible to construct in the quantum world. More generally, it has been shown (see Sect.
4.2) that secure two-party computation is impossible in the plain quantum model, without any additional restrictions on the adversaries. One option in order to obtain security is to make computational assumptions. However, as we discuss below, it is also possible to obtain information-theoretic security, while making instead some reasonable assumptions about the storage capabilities of the adversary.
One of the challenges in building quantum devices is the difficulty of storing quantum information in a physical system (such as atomic or photonic systems) under stable conditions over a long period of time—building a reliable quantum memory is a major research goal in experimental quantum physics (see e.g. [
208] for a review produced by the European integrated project
Qubit Applications (QAP)). Despite continuous progress over the last years, large-scale quantum memories that can reliably store quantum information are currently out of reach. As we discuss in this section, ingenious quantum cryptographers have turned this technological challenge into an advantage for quantum cryptography!
The
bounded-quantum-storage model, introduced by Damgård, Fehr, Salvail and Schaffner in [
86], is a model which assumes that an adversary can only store a limited number of qubits. Generally, protocols in this model require no quantum storage for the honest players, and are secure against adversaries that are unable to store a constant fraction of the qubits sent in the protocol.
This model is inspired by the classical bounded-storage model, as introduced by Maurer [
62,
167]. In this model, honest parties are required to store
\(\Theta (n)\) bits, but protocols (for OT and key agreement) are insecure against attackers with storage capabilities of
\(\Omega (n^2)\) bits. Unfortunately, this gap between storage requirements for honest and dishonest players can never be bigger than quadratic [
101,
102]. Combined with the fact that classical storage is constantly getting smaller and cheaper, this quadratic gap puts the classical-bounded-storage assumption on a rather weak footing. In sharp contrast, the
quantum bounded-storage model gives an unbounded gap between the quantum-storage requirements of the honest and dishonest players, making this model robust to technological improvements.
In the bounded-quantum storage model, a protocol for OT was proposed [
86]. Again, it is based on conjugate coding and is essentially identical to the protocol outlined in the previous Sect.
3.3.2, except that there is a waiting time
\(\Delta t\) (say, 1 s) right after the quantum phase, before Alice sends her basis string
\(\theta \) to Bob. In this time, a dishonest receiver Bob is forced to use his (imperfect) quantum memory and therefore loses some information about Alice’s string
x which intuitively leads to the security of the oblivious transfer. In a subsequent series of works [
88‐
90], protocols for BC, OT and password-based identification (i.e. the secure evaluation of the equality function) were presented. For an overview of these results, see [
109,
205].
The
noisy-quantum-storage model, as introduced by Wehner, Schaffner and Terhal [
231] captures the difficulty of storing quantum information more realistically. Whereas in the bounded-quantum-storage model, the
physical number of qubits an attacker can store is limited, dishonest players are allowed arbitrary (but imperfect) quantum storage in the noisy-quantum-storage model.
Beyond limited quantum storage Continuing the idea of assuming realistic technological restrictions on the adversary, researchers have developed protocols that are secure under the assumption that certain classes of quantum operations are hard to perform. A natural class to study consists of adversaries who can store perfectly all qubits they receive, but who cannot perform any quantum operations, except for single-qubit measurements (adaptively in arbitrary bases) at the end of the protocol. Such a model was first studied by Salvail [
201], and later by Bouman, Fehr, Gonzáles-Guillén and Schaffner [
39] and Liu [
138,
154,
155] under the name of “isolated-qubit model”.
Cryptographic proof techniques In Sect.
3.1, we mentioned
uncertainty relations (and in particular
entropic uncertainty relations—which quantify uncertainty in information-theoretic terms). These relations play a key role in the security proofs for protocols in the limited-quantum-storage model. We refer to [
229] for a survey by Wehner and Winter on this topic. In fact, one can argue that the areas of limited-quantum storage models and entropic uncertainty relations have benefited a lot from each other, as research questions in one area have led to results in the other and vice versa. This fruitful co-existence is witnessed by a series of publications: [
34,
35,
39,
100,
187].
Composability It is natural to ask whether limited-quantum-storage protocols for basic tasks such as OT can be composed to yield more involved two- or multi-party secure computations. This question was answered in the positive in a number of works, including: Fehr and Schaffner [
111], Wehner and Wullschleger [
230] (for sequential composition) and Unruh [
217] (for bounded concurrent composition).
Implementations The technological requirements to implement limited-quantum-storage protocols in practice are modest and rather similar to already available QKD technology (often, the actual quantum phase is the same as in QKD). A small but significant difference is that it makes sense to run secure computations among players located within a few meters of each other, whereas the task of distributing keys demands large separations between players. This difference allows experimenters to optimize some parameters (such as the rate) differently for secure-computation protocols. The experimental feasibility of these protocols was analyzed theoretically in [
232] and demonstrated practically in [
105,
188].
3.5 Delegated quantum computation
Quantum computers are known to enable extraordinary computational feats unachievable by today’s devices [
126,
184,
206]. However, technologies to build quantum computers are currently in their infancy; the current state of the art suggests that, when quantum computers become a reality, these devices are likely to be available at only a few locations. In this context, we envisage the outsourcing of quantum computations from quantum computationally weak clients to universal quantum computers (a type of
quantum cloud architecture). This scenario has appealing cryptographic applications, such as the delegated execution of Shor’s algorithm [
206] for factoring, and thus breaking RSA public keys [
200]. From the cryptographic point of view, this scenario raises many questions in terms of the possibility of privacy in delegated quantum computation.
Pioneering work of Childs [
70] and Arrighi and Salvail [
12] studied this problem for the first time. The first practical and universal protocol for private delegated quantum computation, called “universal blind quantum computation” (uBQC) was given by Broadbent, Fitzsimons and Kashefi [
53]. In uBQC, the client only needs to be able to prepare random single-qubit auxiliary states (the client requires no quantum memory or quantum processor). Via a classical interaction phase, the client remotely drives a quantum computation of her choice, such that the quantum server cannot learn any information about the computation that is performed—with only the client learning the output. The uBQC protocol has been demonstrated experimentally [
17].
It is remarkable that uBQC is
also based on conjugate coding! It is the first application in which the states derived from conjugate coding are used to directly achieve
computational cryptographic tasks (in contrast with other applications of conjugate coding, which essentially measure these states directly in order to extract classical information). This relationship with conjugate coding is more clearly apparent in a related protocol called “quantum computing on encrypted data” (QCED) [
51,
114]. Here, the computation (as given by a quantum circuit) is public, but is executed remotely on an
encrypted version of the data (reminiscent of the work on classical
fully homomorphic encryption [
117,
199]). In this situation, QCED shows that it is possible to achieve delegated quantum computation where the client only needs to send random conjugate-coding states
(hiding of the computation itself can be achieved via a universal circuit construction).
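A core mechanism behind such schemes is the Pauli one-time pad: the client encrypts a qubit with \(X^a Z^b\) for random key bits \(a,b\), and after the server applies a gate, the client only updates its keys classically. The following check (our illustrative code, not the QCED protocol itself) verifies the standard key-update rule for the Hadamard gate, \(H X^a Z^b = X^b Z^a H\) up to a global phase:

```python
# 2x2 complex matrices as nested tuples suffice for single-qubit identities.
def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

I = ((1, 0), (0, 1))
X = ((0, 1), (1, 0))      # Pauli X (bit flip)
Z = ((1, 0), (0, -1))     # Pauli Z (phase flip)
s = 1 / 2 ** 0.5
H = ((s, s), (s, -s))     # Hadamard gate

def pauli(a, b):
    """The one-time-pad operator X^a Z^b."""
    return matmul(X if a else I, Z if b else I)

def equal_up_to_phase(A, B, tol=1e-9):
    # Global phases are physically irrelevant, so compare modulo one.
    ph = next(A[i][j] / B[i][j] for i in range(2) for j in range(2)
              if abs(B[i][j]) > tol)
    return all(abs(A[i][j] - ph * B[i][j]) < tol
               for i in range(2) for j in range(2))

# The server applies H to the padded qubit; the client merely swaps its keys.
for a in (0, 1):
    for b in (0, 1):
        assert equal_up_to_phase(matmul(H, pauli(a, b)), matmul(pauli(b, a), H))
print("H key update (a, b) -> (b, a) verified")
```

Because the pad keys are uniformly random and unknown to the server, the encrypted qubit looks maximally mixed, which is the source of the data-hiding guarantee.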
We mention further that the
verifiability of delegated quantum computations has been addressed in [
5,
54], and has been the object of an experiment [
18], and that security has been analyzed in terms of a strong notion of
composability [
97]. Furthermore, work of Reichardt, Unger and Vazirani shows that delegated quantum computation is achievable for a purely classical client, if we are willing to make the assumption of
two universal quantum computers that cannot communicate [
197] (see also Sect.
3.7).
3.6 Quantum protocols for coin flipping and cheat-sensitive primitives
In a classic cryptography paper [
36], Blum describes how to “flip a coin over the telephone” with the help of bit commitment: Alice commits to a random bit
a, Bob tells Alice another random bit
b, and Alice opens the commitment to
a. The outcome of the coin is
\(a \oplus b\) which cannot be biased by any of the two players (intuitively, because at least one random bit of an honest player was involved in determining the outcome). A coin flip with this property is called a
strong coin flip. In contrast, for a weak coin flip, Alice and Bob have a desired outcome, i.e. Alice “wins” if the outcome is 0, and Bob “wins” if the outcome is 1. A weak-coin-flipping protocol with bias
\(\varepsilon \) guarantees that no dishonest player can bias the coin towards his or her desired outcome with probability greater than
\(\varepsilon \). In the classical world, coin-flipping can be achieved under computational assumptions. However, in the information-theoretic setting, it was shown [
130,
136] that one of the players can always achieve his desired outcome with probability 1.
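Blum's coin flip can be sketched with a salted-hash commitment as a computational stand-in for the commitment primitive (our illustrative code; the helper names are not from Blum's paper):

```python
import hashlib
import secrets

def commit(bit):
    """Alice's commitment to her bit: digest is sent, salt is kept for opening."""
    salt = secrets.token_bytes(16)
    return hashlib.sha256(salt + bytes([bit])).hexdigest(), salt

a = secrets.randbelow(2)
digest, salt = commit(a)          # 1. Alice -> Bob: commitment to a
b = secrets.randbelow(2)          # 2. Bob -> Alice: his random bit b
# 3. Alice -> Bob: opens (a, salt); Bob checks the commitment before accepting.
assert hashlib.sha256(salt + bytes([a])).hexdigest() == digest
coin = a ^ b
print("coin:", coin)
```

Because Alice is bound to a before seeing b, and b is announced only after the commitment, neither party can unilaterally bias \(a \oplus b\) (under the binding and hiding assumptions on the hash).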
In the quantum world, we note that the general impossibility results for quantum two-party computation (Sect.
4.2) are not applicable to coin flipping, since the participants in a coin flipping protocol have no inputs, and instead aim to implement a
randomized functionality. Nevertheless, Kitaev showed [
146] (see also [
10]) that any quantum protocol for strong coin-flipping is insecure since it can be biased by a dishonest player. Formally, the bias of any strong coin-flipping protocol is bounded from below by
\(\frac{1}{\sqrt{2}}-\frac{1}{2}\). Interestingly, Mochon [
180] managed to expand Kitaev’s formalism of
point games to prove the existence of a
weak coin-flipping protocol with arbitrarily small bias
\(\varepsilon >0\). Unfortunately, Mochon’s 80-page proof has never been peer-reviewed and is rather difficult to follow. Aharonov, Chailloux, Ganz, Kerenidis and Magnin [
6] have managed to simplify this proof considerably.
Based on this result, Chailloux and Kerenidis [
66] derived an optimal strong-coin-flipping protocol with the best possible bias
\(\frac{1}{\sqrt{2}} - \frac{1}{2}\), matching Kitaev’s lower bound. Also based on a weak-coin flip, Chailloux and Kerenidis [
67] gave the best possible imperfect quantum bit commitment. To show optimality, they prove that in any quantum bit commitment protocol, one of the players can cheat with significant probability.
10 Such a result shows that an imperfect bit commitment cannot be amplified to a perfect one—which severely limits the applicability of the scheme to the cryptographic setting.
Cheat sensitivity Quantum mechanics offers the possibility to construct imperfect cryptographic primitives in the sense that they are correct (as long as the players are honest), but they are insecure, i.e. they do allow one of the players (say Alice) to cheat. However, the other player Bob has the possibility to check if Alice has been cheating (possibly by sacrificing the protocol outcome he would have obtained if he followed the protocol without checking). Hence, a cheating Alice has non-zero probability to be detected. These protocols are called
cheat sensitive [
4,
57,
118,
119,
132,
137]. In this context, it is argued that one could set up a game-theoretic environment: a player caught cheating has to pay a huge fine (or undergo another punishment) and is therefore deterred from actually doing it.
We note, however, that the applications of cheat sensitive protocols to the cryptographic setting are limited: while quantum protocols for imperfect and cheat-sensitive primitives can provide nice examples of separations between the classical and quantum worlds, they fulfill their purpose as long as they are considered as “final products”, for instance in case of private information retrieval. However, it is difficult to argue that a strong coin flip with a constant bias, an imperfect bit commitment, or imperfect OT are cryptographically useful primitives, because they do not inherit the cryptographic importance of their perfect counterparts which can be used as building blocks for more advanced cryptographic primitives. In the case of cheat sensitivity, it is often unclear how such primitives behave under composition. In fact, it is a challenging open question to come up with a composability framework for cheat-sensitive quantum primitives.
3.7 Device-independent cryptography
An exciting feature of quantum cryptography is that it allows the possibility of
device-independent cryptography in the sense that protocols can be run on untrusted devices which have possibly been constructed by the adversary. The crucial insight is that the “quantumness” of two (or more) devices can be tested and guaranteed by using the devices to violate a Bell inequality, i.e. to produce correlations that are stronger than allowed by classical physics. As outlined in Sect.
2.5, the most well-known example of such an inequality is the CHSH game [
76]. The key observation of device-independent cryptography is that in order to violate the CHSH inequality, a certain amount of intrinsic quantum randomness has to be present in the players’ outputs. That we could exploit this relationship for cryptography was originally pointed out by Ekert [
104], and further studied by Mayers and Yao [
173] and Barrett, Hardy and Kent [
16]. In fact, this latter work shows not only how to accomplish cryptography with untrusted devices, but also how to do away completely with assumptions on the validity of quantum mechanics: instead, it shows how to accomplish QKD solely based on the non-signaling principle [
131,
166]!
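The classical-quantum gap in the CHSH game can be computed directly (a small sketch; the quantum strategy uses the standard optimal measurement angles on an EPR pair, for which \(P(a = b) = \cos ^2\) of the angle difference):

```python
import math
from itertools import product

# Classical strategies: deterministic assignments a(x), b(y); win iff a XOR b = x AND y.
best_classical = 0
for a0, a1, b0, b1 in product((0, 1), repeat=4):
    wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
               for x in (0, 1) for y in (0, 1))
    best_classical = max(best_classical, wins / 4)

# Optimal quantum strategy on an EPR pair: Alice measures at angles 0 or pi/4,
# Bob at pi/8 or -pi/8; then P(a = b) = cos^2(angle difference).
alice = (0, math.pi / 4)
bob = (math.pi / 8, -math.pi / 8)
win = 0
for x in (0, 1):
    for y in (0, 1):
        p_equal = math.cos(alice[x] - bob[y]) ** 2
        win += p_equal if x & y == 0 else 1 - p_equal
quantum = win / 4

print(round(best_classical, 4), round(quantum, 4))  # 0.75 vs cos^2(pi/8) ≈ 0.8536
```

Any device pair winning with probability above 0.75 therefore cannot be behaving classically, which is exactly the certifiable randomness that device-independent protocols exploit.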
The relation between the CHSH violation and the amount of entropy in the outcomes of the measurements can be quantified exactly [
194]. In fact, on the topic of
self-testing quantum devices [
163,
174,
175,
178], Reichardt, Unger and Vazirani have shown a strong robustness result [
197] in the sense that being close to winning the CHSH-game with optimal probability implies that the players must essentially be in possession of a state which is close to an EPR pair. This is an extremely powerful result which has various applications.
The two qubits of an EPR state are maximally entangled. Quantum mechanics forbids any third party from being entangled with such a state (a phenomenon called
monogamy of entanglement). Hence, measurements on an EPR state result in shared randomness which is guaranteed to be unknown to any eavesdropper.
11 In a similar vein, one can argue that the measurement outcomes of Alice and Bob while successfully playing the CHSH game cannot be known to any adversary
even if this adversary has built the devices herself and is possibly still entangled with them.
This effect leads to the interesting cryptographic applications of
device-independent randomness amplification and expansion and
device-independent QKD. In
randomness amplification, the task at hand is to obtain near-perfect randomness from a weak random source using untrusted quantum devices (without using any additional randomness); this idea was originally proposed by Colbeck and Renner [
80]. In
randomness expansion, one wants to expand a few truly random bits into more random bits, again using untrusted quantum devices; this idea was originally proposed by Colbeck and Kent [
67,
78]. Providing formal security proofs has turned out to be rather challenging and was first established against classical adversaries [
112,
193,
194], and later also against quantum adversaries [
179,
224]. A combination of the latest protocols makes it possible to amplify arbitrarily weak sources of randomness in a device-independent fashion. Experimental realizations of device-independent randomness include [
75,
194].
In device-independent QKD, we make the additional assumption that there is no communication between the adversary and the quantum devices. The first formal proof for a device-independent QKD scheme was given by Vazirani and Vidick [
225]. Current research in this area aims to propose more practical device-independent QKD schemes that retain their functionality at realistic levels of noise.