Published in: Quantum Information Processing 1/2019

Open Access 01.01.2019

Quantum Penny Flip game with unawareness

By: Piotr Frąckiewicz



Abstract

Games with unawareness model strategic situations in which players’ perceptions of the game are limited. They take into account the fact that a player may be unaware of some of the strategies available to her or to her opponents, and may have a restricted view of the number of players participating in the game. The aim of this research is to introduce this notion into the theory of quantum games. We shall focus on the PQ Penny Flip game introduced by D. Meyer. We shall formalize the previous results and consider further cases of unawareness in the game.
Notes
This work was supported by the National Science Centre, Poland under the research Project 2016/23/D/ST1/01557.

1 Introduction

Game theory, launched in 1928 by von Neumann in [1] and developed in 1944 by von Neumann and Morgenstern in [2], is one of the youngest branches of mathematics. The aim of this theory is the mathematical modeling of the behavior of rational participants in conflict situations. Each participant is supposed to maximize their own gain while taking into account all possible ways in which the remaining participants may behave. Within this young theory, new ideas that improve the models of conflict situations already in use are still being proposed. One of the latest trends is the study of games with unawareness, i.e., games that describe situations in which a player acts according to his own view of the game and considers how all the remaining players view the game. This way of describing a conflict situation goes beyond the most frequently used paradigm, according to which all participants in a game are assumed to have full knowledge of the situation.
Another young field, developed on the border of game theory and quantum information theory, is quantum game theory. This is an interdisciplinary area of research in which the games under consideration are played with the aid of objects that behave according to the laws of quantum mechanics, and in which non-classical features of these objects are relevant to the way the game is played and to its rational results.
Games with unawareness are a relatively new notion. The first attempts at formalizing the concept can be found in papers [3, 4], published in the twenty-first century. Paper [5] summarizes the results obtained in this area up to 2012. Quantum counterparts of games with unawareness have not been studied yet. Papers on quantum games with incomplete information have concerned only Bayesian games [6–9] and games with imperfect recall [10, 11]. Our project is the first attempt to use the notion of games with unawareness in the theory of quantum games. The main motivation for our interest in developing this branch of quantum game theory was our observation that already the first paper on quantum games, by Meyer [12], unknowingly considered a game with unawareness. We may conclude from the famous PQ Penny Flip game described in [12] that Captain Picard (player 2) agrees to join the game because he believes his chance of winning is 1/2. In other words, the game he perceives is the classical one. Q (player 1) views the game in a different way: he is aware of unitary strategies. In addition, player 1 knows that player 2 is only aware of the classical strategies. This knowledge is crucial to the way he chooses his strategy. Choosing, for example, the Hadamard matrix always leads player 1 to the best possible outcome. It is optimal for player 1 to play that strategy since he is aware that player 2 has no strategy to counteract the Hadamard matrix. Once we recognize that the quantum PQ Penny Flip game is a game with unawareness, the description of the game, say in normal form, requires a family of games rather than a single normal-form game. This has important consequences for the solution concepts supposed to predict rational results of the game. In particular, the Nash equilibrium concept is not sufficient to fully describe the players’ rational choices.
In the case of PQ Penny Flip game in which quantum strategies are available only for player 1, each player 2’s mixed strategy is an equilibrium strategy. However, taking into account player 2’s view about the game (he finds the game to be the classical one), we should predict that he chooses his pure strategies with equal probability.

2 PQ Penny Flip game

Formally, the classically played PQ Penny Flip game [12] is an example of a two-person extensive-form game whose game tree is depicted at the top of Fig. 1. Player 1 initiates the game by choosing one of the two available actions \(I_{1}\) and \(X_{1}\). Next, player 2 chooses a possible action in \(\{I_{2}, X_{2}\}\). The dashed line connecting vertices 2.1a and 2.1b in Fig. 1 indicates player 2’s two-element information set. This means that player 2 does not know whether player 1 has chosen \(I_{1}\) or \(X_{1}\). Similarly, player 1 does not know the move made by his predecessor at the time he chooses his second action. As a result, player 1 has two two-element information sets, \(\{1.2a, 1.2b\}\) and \(\{1.3a, 1.3b\}\), and the one-element information set \(\{1.1\}\).
Every extensive-form game can be associated with a strategic-form game. The latter form is particularly convenient when the two-person extensive game is to be examined with respect to Nash equilibria. A strategic-form game is derived from an extensive-form game by determining the set of strategies \(S_{i}\) of each player i, and the payoffs induced by all strategy profiles in the extensive-form game. A strategy of player i is a function mapping each of her information sets to an element in the set of actions at that information set (see for example [13]). In the case of the top game in Fig. 1, a strategy of player 1 is an element of \(\{I_{1}, X_{1}\} \times \{I_{3}, X_{3}\} \times \{I_{4}, X_{4}\}\). Hence, the strategic form of the extensive game in Fig. 1 and its reduced form are as follows:
$$\begin{aligned} \begin{array}{c|cc} & I_{2} & X_{2} \\ \hline I_{1}I_{3}I_{4} & O_{1} & O_{3} \\ I_{1}I_{3}X_{4} & O_{1} & O_{3} \\ I_{1}X_{3}I_{4} & O_{2} & O_{4} \\ I_{1}X_{3}X_{4} & O_{2} & O_{4} \\ X_{1}I_{3}I_{4} & O_{5} & O_{7} \\ X_{1}I_{3}X_{4} & O_{6} & O_{8} \\ X_{1}X_{3}I_{4} & O_{5} & O_{7} \\ X_{1}X_{3}X_{4} & O_{6} & O_{8} \end{array} \qquad \begin{array}{c|cc} & I_{2} & X_{2} \\ \hline I_{1}I_{3} & O_{1} & O_{3} \\ I_{1}X_{3} & O_{2} & O_{4} \\ X_{1}I_{4} & O_{5} & O_{7} \\ X_{1}X_{4} & O_{6} & O_{8} \end{array} \end{aligned}$$
(1)
We see at once that player 1 has four strategies that are equivalent to the other four (they generate the same outcomes). A strategic-form game in which every set of equivalent strategies is replaced by a single strategy from that set is called a game in reduced strategic form. Hence, the extensive game at the top of Fig. 1 can be associated with a \(4\times 2\) reduced strategic form. In this case, we can identify a strategy of player 1 with a map that specifies one action in \(\{1.1\}\) and one action in the union of the information sets \(\{1.2a, 1.2b\}\) and \(\{1.3a, 1.3b\}\). In other words, the meaningful strategies of player 1 may be written as \((a_{1}, a_{3})\), where \(a_{1}\) and \(a_{3}\) are the actions taken at the first and the third stage of the game, respectively. It is worth noting that this identification still holds if the cardinality of the sets of players’ actions is greater than 2. This property will be used throughout the paper; it follows from one of the four transformations preserving the reduced strategic form, called Inflation–Deflation (see [14, 15]).
Inflation–Deflation The extensive games \(\varGamma \) and \(\varGamma '\) share the same reduced strategic-form game if \(\varGamma '\) differs from \(\varGamma \) only in that an information set of some player i in \(\varGamma \) is a union of information sets of player i in \(\varGamma '\) (\(\{1.2a, 1.2b\}\) and \(\{1.3a, 1.3b\}\) in Fig. 1) with the following property: any two sequences of actions h and \(h'\) leading from the root of the game tree to different members of the union (for example, the sequences \((I_{1}, X_{2})\) and \((X_{1}, X_{2})\)) have subsequences that lead to the same information set of player i (the empty sequence \(\emptyset \) in our case), and player i’s action at this information set is different in h and \(h'\).
As it was mentioned at the beginning of this section, the classical Penny Flip game [12] is a special case of the game in Fig. 1. It is obtained by setting
$$\begin{aligned} O_{1} = O_{4} = O_{6} = O_{7} = (1,-1), \quad O_{2} = O_{3} = O_{5} = O_{8} = (-1,1). \end{aligned}$$
(2)
On account of the inflation–deflation principle, we may write the strategic-form game as
$$\begin{aligned} \begin{array}{c|cc} & I_{2} & X_{2} \\ \hline I_{1}I_{3} & (1,-1) & (-1,1) \\ I_{1}X_{3} & (-1,1) & (1,-1) \\ X_{1}I_{3} & (-1,1) & (1,-1) \\ X_{1}X_{3} & (1,-1) & (-1,1) \end{array} \end{aligned}$$
(3)
One can check that mixed strategies defined by probability distributions
$$\begin{aligned} \left( \frac{1}{2}, \frac{1}{2}, 0,0\right) , \left( \frac{1}{2}, 0, \frac{1}{2},0\right) , \left( 0,0,\frac{1}{2}, \frac{1}{2}\right) , \left( 0,\frac{1}{2}, 0, \frac{1}{2}\right) \end{aligned}$$
(4)
over the set \(\{I_{1}I_{3}, I_{1}X_{3}, X_{1}I_{3}, X_{1}X_{3}\}\) are optimal strategies for player 1 in game (3), and thus so is every probability distribution over the strategies in (4). The optimal strategy for player 2 is, in turn, determined by the unique probability distribution \((\frac{1}{2}, \frac{1}{2})\) over \(\{I_{2}, X_{2}\}\). Hence, the value of game (3) is equal to zero.
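This value computation can be verified numerically. The following sketch (Python with NumPy; the matrix encoding and variable names are ours) checks that each distribution in (4) guarantees player 1 an expected payoff of 0 against both of player 2’s pure strategies, and that player 2’s strategy \((\frac{1}{2}, \frac{1}{2})\) does the same for him:

```python
import numpy as np

# Player 1's payoffs in game (3): rows I1I3, I1X3, X1I3, X1X3; columns I2, X2.
# The game is zero-sum, so player 2's payoffs are the negatives of these.
A = np.array([[ 1, -1],
              [-1,  1],
              [-1,  1],
              [ 1, -1]])

# The four optimal mixed strategies of player 1 listed in (4).
optimal = [(0.5, 0.5, 0, 0), (0.5, 0, 0.5, 0),
           (0, 0, 0.5, 0.5), (0, 0.5, 0, 0.5)]

# Each guarantees player 1 an expected payoff of 0 against both columns,
# hence so does any mixture of them; the value of the game is 0.
for p in optimal:
    print(np.asarray(p) @ A)        # -> [0. 0.] in every case

# Player 2's optimal strategy (1/2, 1/2) holds player 1 to 0 in every row.
print(A @ np.array([0.5, 0.5]))     # -> [0. 0. 0. 0.]
```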
Meyer [12] generalized the PQ Penny Flip game by using quantum computing formalism. The general strategic form of the game \((N, (S_{i})_{i\in N}, (u_{i})_{i\in N})\) in which both players have access to unitary strategies can be written formally as
$$\begin{aligned} \varGamma _{QQ} = \left( \{1,2\}, \{(U_{1}, U_{3})\}, \{U_{2}\}, \{\mathrm {tr}(\rho _{\mathrm {f}}P), -\mathrm {tr}(\rho _{\mathrm {f}}P)\}\right) , \end{aligned}$$
(5)
where
  • \(\{1,2\}\) is a set of players,
  • \((U_{1}, U_{3})\) and \(U_{2}\) are strategies of player 1 and 2, respectively, and \(U_{j}\) is a \(2\times 2\) unitary matrix for each j,
  • \(\rho _{\mathrm {f}}\) is a density matrix defined as follows
    $$\begin{aligned} \rho _{\mathrm {f}} = U_{3}U_{2}U_{1}|0\rangle \langle 0| U^{\dagger }_{1}U^{\dagger }_{2}U^{\dagger }_{3}, \end{aligned}$$
    (6)
  • P is a Hermitian operator in the form
    $$\begin{aligned} P = |0\rangle \langle 0| - |1\rangle \langle 1|. \end{aligned}$$
    (7)
Let us denote by \(\mathbb {1}\) the identity matrix of size 2, and by \(\sigma _{i}\), \(i=x,y,z\), the Pauli matrix i. It follows easily that game (3) is a special case of (5), if the set of unitary actions \(U_{j}\) is restricted to the set \(\{\mathbb {1}, \sigma _{x}\}\).
We shall use the following notation for the PQ Penny Flip game with unitary actions restricted to \(\{\mathbb {1}, \sigma _{x}\}\):
$$\begin{aligned} \varGamma _{CC}&= \left( \{1,2\}, \{\mathbb {1}\mathbb {1}, \mathbb {1}\sigma _{x}, \sigma _{x} \mathbb {1}, \sigma _{x} \sigma _{x}\}, \{\mathbb {1}, \sigma _{x}\}, \{\mathrm {tr}(\rho _{\mathrm {f}}P), -\mathrm {tr}(\rho _{\mathrm {f}}P)\}\right) ,\nonumber \\ \varGamma _{QC}&= \left( \{1,2\}, \{(U_{1}, U_{3})\}, \{\mathbb {1}, \sigma _{x}\}, \{\mathrm {tr}(\rho _{\mathrm {f}}P), -\mathrm {tr}(\rho _{\mathrm {f}}P)\}\right) ,\nonumber \\ \varGamma _{CQ}&= \left( \{1,2\}, \{\mathbb {1}\mathbb {1}, \mathbb {1}\sigma _{x}, \sigma _{x}\mathbb {1}, \sigma _{x}\sigma _{x}\}, \{U_{2}\}, \{\mathrm {tr}(\rho _{\mathrm {f}}P), -\mathrm {tr}(\rho _{\mathrm {f}}P)\}\right) . \end{aligned}$$
(8)
For example, in the game \(\varGamma _{CQ}\) player 1 is restricted to use only classical actions whereas player 2’s set of actions is the set of all \(2\times 2\) unitary matrices.
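Under the trace payoff rule of (5), the games in (8) can be evaluated mechanically. Below is a minimal sketch (the function name and encoding are ours) that recovers the classical payoff pattern of game (3) from \(\mathrm {tr}(\rho _{\mathrm {f}}P)\) by restricting every \(U_{j}\) to \(\{\mathbb {1}, \sigma _{x}\}\):

```python
import numpy as np

I2x2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # sigma_x
P = np.diag([1.0, -1.0])                 # |0><0| - |1><1|, cf. (7)
ket0 = np.array([[1.0], [0.0]])

def payoff(U1, U2, U3):
    """Player 1's payoff tr(rho_f P), with rho_f as in (6)."""
    psi = U3 @ U2 @ U1 @ ket0
    rho_f = psi @ psi.conj().T
    return np.trace(rho_f @ P).real

# Rows are player 1's reduced strategies (U1, U3), columns Bob's action U2.
# Prints: II [1, -1], IX [-1, 1], XI [-1, 1], XX [1, -1], as in (3).
for name, (U1, U3) in {'II': (I2x2, I2x2), 'IX': (I2x2, X),
                       'XI': (X, I2x2), 'XX': (X, X)}.items():
    print(name, [round(payoff(U1, U2, U3)) for U2 in (I2x2, X)])
```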
One of the main ideas behind the PQ Penny Flip game was to show that Alice can win the game every time she plays against Bob. This is possible if Alice has access to unitary strategies that Bob is not aware of. That is, Alice is fully aware of the unitary operators available in the quantum PQ Penny Flip game, whereas Bob is only aware of the unitary operations identified with his strategies in the classical PQ Penny Flip game (for example, \(\mathbb {1}\) and \(\sigma _{x}\)).
The common example of Alice’s winning strategy is playing the Hadamard matrix H twice. This follows from the reasoning below:
$$\begin{aligned} |0\rangle \xrightarrow [\text {Alice}]{H} \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle ) \xrightarrow [\text {Bob}]{\mathbb {1}~\text {or}~\sigma _{x}} \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle ) \xrightarrow [\text {Alice}]{H} |0\rangle . \end{aligned}$$
(9)
Starting from the state \(|0\rangle \) (which is identified with the coin heads up), Alice creates the equal superposition state \((|0\rangle + |1\rangle )/\sqrt{2}\) by using the operator H. Bob, having only \(\mathbb {1}\) and \(\sigma _{x}\), cannot affect that superposition state. For that reason, Alice again chooses the Hadamard matrix, and then she gets the state \(|0\rangle \) back. From (9), it follows that for any of Bob’s mixed strategies \((p,1-p)\) over \(\{\mathbb {1}, \sigma _{x}\}\) Alice wins the game by playing HH, i.e.,
$$\begin{aligned} {\text {tr}}\left( \left( pH\mathbb {1}H|0\rangle \langle 0|H\mathbb {1}H + (1-p)H\sigma _{x}H|0\rangle \langle 0|H\sigma _{x}H\right) P\right) = {\text {tr}}(|0\rangle \langle 0|P)= 1. \end{aligned}$$
(10)
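A short numerical check of (10) (variable names are ours): whatever mixture \((p, 1-p)\) Bob uses, Alice’s expected payoff from HH is 1.

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard matrix
I2x2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])                 # sigma_x
P = np.diag([1.0, -1.0])
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])              # |0><0|

def alice_payoff(p):
    """Left-hand side of (10): Alice plays HH, Bob mixes (p, 1-p) over {1, sigma_x}."""
    rho = (p * H @ I2x2 @ H @ rho0 @ H @ I2x2 @ H
           + (1 - p) * H @ X @ H @ rho0 @ H @ X @ H)
    return np.trace(rho @ P).real

for p in (0.0, 0.25, 0.5, 1.0):
    print(p, round(alice_payoff(p), 12))               # -> 1.0 for every p
```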

2.1 Technical difficulties in describing PQ Penny Flip problem

We know from (10) that Alice can win the PQ Penny Flip game if she has access to the Hadamard matrix, and Bob is not aware of unitary matrices except \(\mathbb {1}\) and \(\sigma _{x}\). A natural question arises as to how this problem can be described from a game theory point of view.
At first glance, the following strategic-form game seems to express that problem:
$$\begin{aligned} F_{1}:\quad \begin{array}{c|cc} & \mathbb {1} & \sigma _{x} \\ \hline \mathbb {1}\mathbb {1} & (1,-1) & (-1,1) \\ \mathbb {1}\sigma _{x} & (-1,1) & (1,-1) \\ \mathbb {1}H & (0,0) & (0,0) \\ \sigma _{x}\mathbb {1} & (-1,1) & (1,-1) \\ \sigma _{x}\sigma _{x} & (1,-1) & (-1,1) \\ \sigma _{x}H & (0,0) & (0,0) \\ H\mathbb {1} & (0,0) & (0,0) \\ H\sigma _{x} & (0,0) & (0,0) \\ HH & (1,-1) & (1,-1) \end{array} \end{aligned}$$
(11)
We infer from bimatrix game (11) that Alice has an additional move H compared with her actions in the classical PQ Penny Flip game, and now she has nine strategies. Bob has two strategies equivalent to the ones in the classical game. Moreover, looking at (11) we see that HH is Alice’s winning strategy. So, up to this point, (11) appears to agree with the PQ Penny Flip problem. However, (11) turns out to provide Bob with a much richer description of the game than he actually has. Making his strategic decision based on (11), Bob finds that Alice has the additional action H and, consequently, the winning strategy. Perhaps Bob does not know that H is the Hadamard matrix, or he does not even realize that he is to play a quantum game. However, Bob knows that he loses the game. According to [12], Bob agrees to play the PQ Penny Flip game because he is confident that the odds of winning the game are even, and his optimal strategy is to play \(\mathbb {1}\) and \(\sigma _{x}\) with equal probability. In the case of (11), Bob gets the payoff of \(-1\), no matter which strategy he chooses. Therefore, Bob’s optimal strategy in (11) is any probability distribution over his set of strategies.
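The payoff entries of (11) follow from the same trace rule \(\mathrm {tr}(\rho _{\mathrm {f}}P)\). The sketch below (strategy labels and helper names are ours) tabulates player 1’s payoff for every pair \((U_{1}, U_{3})\) over \(\{\mathbb {1}, \sigma _{x}, H\}\) against Bob’s two actions:

```python
import numpy as np

acts = {'I': np.eye(2),
        'X': np.array([[0.0, 1.0], [1.0, 0.0]]),
        'H': np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)}
P = np.diag([1.0, -1.0])
ket0 = np.array([[1.0], [0.0]])

def u1(a1, b, a3):
    """Player 1's payoff tr(rho_f P) for the profile (a1 a3, b)."""
    psi = acts[a3] @ acts[b] @ acts[a1] @ ket0
    return np.trace(psi @ psi.conj().T @ P).real

# Tabulate player 1's payoff for each pair (a1, a3) against Bob's I and X.
for a1 in 'IXH':
    for a3 in 'IXH':
        print(a1 + a3, [round(u1(a1, b, a3), 12) for b in 'IX'])

# The HH row reads [1.0, 1.0]: Alice wins regardless of Bob's choice.
```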
The solution is to consider a family of games—the core of the definition of games with unawareness. The formal definition can take into account a player’s view about his strategy set or strategies of the other players, a player’s view about other players’ views, and even a player’s view about the number of players taking part in the game.

3 Preliminaries on games with unawareness

For the convenience of the reader, we review the relevant material from [5]. Before we begin the formal presentation, we will look at an example that illustrates that concept and the ideas behind it. The reader who is not familiar with this topic is encouraged to see a similar introductory example in [5].
Example 1
Let us consider the following bimatrix game
$$\begin{aligned} \varGamma _{1}:~\text {[payoff bimatrix not preserved in this copy]} \end{aligned}$$
(12)
We assume that Alice (player 1) and Bob (player 2) are both aware of all the strategies available in game (12). However, we consider the situation where Bob finds that Alice views the game in the following form:
$$\begin{aligned} \varGamma _{2}:~\text {[payoff bimatrix not preserved in this copy]} \end{aligned}$$
(13)
In words, Bob perceives Alice’s strategy set to be \(\{a_{1}, a_{2}, a_{3}\}\), but for some reason, he thinks that Alice views \(\{a_{1}, a_{2}\}\). Since Bob finds that Alice views the game being played as depicted in (13), Bob thinks that Alice finds that he also considers (13), and so on for higher-order views.
Let us consider the case where Alice is fully aware of Bob’s reasoning. Not only does she perceive her whole strategy set \(\{a_{1}, a_{2}, a_{3}\}\), Alice also finds that Bob does not realize that she is considering \(\{a_{1}, a_{2}, a_{3}\}\) but rather \(\{a_{1}, a_{2}\}\). Moreover, Alice finds that Bob views the game as in (12).
The problem just presented is an example of a strategic-form game with unawareness that can be formally described by a family of games \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\), where \(\mathcal {V}_{0} = \{\emptyset , 1, 2, 12, 121, \dots \}\), and
$$\begin{aligned} G_{v} = {\left\{ \begin{array}{ll} \varGamma _{1} &{}\quad \text {if}~v\in \{\emptyset , 1, 2, 12\}, \\ \varGamma _{2} &{}\quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
(14)
The set \(\mathcal {V}_{0}\) (with typical element v) consists of the relevant views. The view \(v=\emptyset \) corresponds to the modeler’s game—the actual game played by the players. In our example, this is game (12). That game is also viewed by player 1 (\(v=1\)) and player 2 (\(v=2\)). Furthermore, according to the description of the game, player 1 (Alice) finds that player 2 (Bob) is considering \(\varGamma _{1}\). It is taken into account in (14) by associating \(\varGamma _{1}\) with the view \(v =12\) (the view that player 1 finds that player 2 is considering). In our example, player 2 finds that player 1 views the game as in (13). For this reason, the game \(\varGamma _{2}\) corresponds to \(v=21\). All higher-order iterations of awareness of Alice and Bob are also assumed to be associated with \(\varGamma _{2}\).
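The family (14) can be encoded directly by treating views as strings of player labels; a minimal sketch (the representation and names are ours):

```python
# Views are encoded as strings of player labels: '' is the modeler's view,
# '12' is "player 1 finds that player 2 is considering", and so on.
def game(view: str) -> str:
    """The family (14): map a relevant view to the game perceived from it."""
    return 'Gamma_1' if view in ('', '1', '2', '12') else 'Gamma_2'

# The modeler, both players, and player 1's view of player 2 see Gamma_1 ...
print([game(v) for v in ('', '1', '2', '12')])
# ... while player 2's view of player 1 and all deeper views see Gamma_2.
print([game(v) for v in ('21', '121', '212', '1212')])
```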
We see at once that game (12) has the unique pure Nash equilibrium \((a_{1}, b_{2})\) with outcome (2, 2), and we could check that, in general, the set of all (mixed) Nash equilibria in (12) is
$$\begin{aligned} \left\{ (a_{1}, (q,1-q)):q \in \left[ 0, \frac{1}{3}\right] \right\} , \end{aligned}$$
(15)
where \((q,1-q)\) denotes player 2’s mixed strategy under which he chooses \(b_{1}\) and \(b_{2}\) with probability q and \(1-q\), respectively. Each of the strategy profiles from (15) yields the payoff profile (2, 2).
Although both players are aware of playing (12), it is not evident that the game ends with (2, 2). According to (14), Bob finds that Alice perceives game (13). Hence, he may deduce that Alice plays according to strategy profile \((a_{2}, b_{1})\) that is the most profitable Nash equilibrium in (13). Bob’s choice would be then \(b_{1}\). Alice, however, is aware of Bob’s thinking. She finds that Bob is considering (12) and also finds that Bob finds that she is considering (13). Alice can therefore deduce that Bob chooses strategy \(b_{1}\) that weakly dominates \(b_{2}\) in (13), i.e., it gives Bob a payoff at least as high as \(b_{2}\), and at the same time, is an element of the most beneficial Nash equilibrium in (13). Since Alice is aware of playing \(\varGamma _{1}\), it is not optimal for her to play according to \((a_{2}, b_{1})\) but to choose \(a_{3}\). As a result, the game described by (14) ends with outcome (4, 0) corresponding to the strategy profile \((a_{3}, b_{1})\).
The game result \((a_{3}, b_{1})\) can be directly determined by the extended Nash equilibrium [5]—a solution concept being a counterpart of Nash equilibrium in games with unawareness. The formal definition is presented in Sect. 3.3. Here we simply provide the result of applying the extended Nash equilibrium to (14). One of the equilibrium solutions is a family of strategy profiles \(((\sigma )_{v})_{v\in \mathcal {V}_{0}}\) defined as follows:
$$\begin{aligned} \sigma _{v} = {\left\{ \begin{array}{ll} (a_{3}, b_{1}) &{}\quad \text {if}~v\in \{\emptyset , 1\}, \\ (a_{2}, b_{1}) &{}\quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
(16)
The strategy profiles (16) coincide with the reasoning we already used to determine the outcome \((a_{3}, b_{1})\). The result of the game corresponds to the modeler’s view \((v=\emptyset )\). It also coincides with Alice’s view \((v=1)\) as Alice is fully aware of the games played by her and Bob. The strategy profile \((a_{2}, b_{1})\) is seen from Bob’s point of view \((v=2)\). Since Alice is aware of Bob’s thinking, she finds that Bob is considering \((a_{2}, b_{1})\) \((v=12)\).

3.1 The role of the notion of games with unawareness in quantum game theory

The notion of games with unawareness is designed to model game theory problems in which players’ perceptions of the game are restricted. It was shown in [5] that this novel structure extends the existing forms of games. Although it is possible to represent games with unawareness by means of games with incomplete information (by assigning probability 0 to the games that a player is not aware of), the extended Nash equilibrium does not map to any known solution concept of incomplete-information games. In particular, the set of extended Nash equilibria forms a strict subset of the Bayesian Nash equilibria.
Once we know that the notion of games with unawareness presents a new game form, it is natural to study this type of game in the quantum domain. Given a quantum game scheme that maps a classical game G to a quantum one Q(G), and given a family of games \(\{G_v\}\), a family of quantum games \(\{Q(G_v)\}\) can be constructed in a natural way. Then we can study whether, and to what extent, quantum strategies compensate for the restricted perception of the players.
Besides \(\{Q(G_{v})\}\), the notion of games with unawareness allows one to expand the theory of quantum games by defining a family \(\{Q(G)_{v}\}\), where each quantum game \(Q(G)_{v}\) corresponds to a specific perception of the players. In this case, the players may have a restricted perception of how a quantum game is defined. A good example of such a quantum game theory problem is the quantum PQ Penny Flip game [12]: one of the players is aware of having all the quantum strategies, whereas the other player perceives two unitary strategies identified with the strategies in the classical Penny Flip game. We provide a detailed exposition of that problem in Sect. 4.
Another example of applying the notion of games with unawareness concerns the case when playing a quantum game is not common knowledge among the players. The quantum game is to be played with the aid of objects that behave according to the laws of quantum mechanics; in particular, the players may share an entangled two-qubit state on which they apply unitary strategies. Under this scenario (see Fig. 2), Alice and Bob can be far apart, and a third party, say a modeler, is to prepare the game. After the modeler prepares the quantum game based on its classical counterpart, he sends a message to Alice and Bob so that they know they are to play the quantum game rather than the classical one. When the players receive the message, they each perceive the game as being quantum, i.e., \(G_{i} = \varGamma _{Q}\) for each player i. But this fact is not common knowledge among Alice and Bob. Recall that a fact is common knowledge among the players of a game if for any finite sequence of players \(i_{1}, i_{2}, \dots , i_{k}\), player \(i_{1}\) knows that player \(i_{2}\) knows ...that player \(i_{k}\) knows the fact. In our case, neither player can be certain that the other player finds the game quantum (receives the message from the modeler) until he or she receives a confirmation from that player. According to the scheme in Fig. 2, Alice sends Bob a message about her current state of knowledge. In this way, Bob, on receiving the message, finds that Alice is considering the quantum game, i.e., \(G_{21} = \varGamma _{Q}\). Now, Bob sends a feedback message including his state of knowledge. Owing to this message, Alice finds that Bob is considering the quantum game, \(G_{12} = \varGamma _{Q}\). Moreover, Alice finds that Bob finds that Alice is considering the quantum game, \(G_{121} = \varGamma _{Q}\). At this point, the quantum game is still not common knowledge.
Bob is not certain that Alice finds that Bob is considering the quantum game until he receives the second message from Alice. Since the game starts before the message arrives at Bob, at the time of the play, either the classical game \(\varGamma _{C}\) or the quantum game \(\varGamma _{Q}\) may be associated with \(G_{212}\), and the same conclusion can be drawn for the higher levels of views. As a result, the players face a game with unawareness described by a family of games \(\{G_{v}\}\) rather than the single game \(\varGamma _{Q}\). An example of the game being in line with the scheme in Fig. 2 is a family \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\), where
$$\begin{aligned} G_{v} = {\left\{ \begin{array}{ll} \varGamma _{Q} &{}\quad \text {if}~v\in \{\emptyset , 1, 2, 12, 21, 121\}, \\ \varGamma _{C} &{}\quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
(17)
We will later see that the result of the game differs significantly depending on how the players perceive the game.
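Family (17) admits the same string-of-labels encoding of views used for (14); a small sketch (representation and names are ours) showing which views see the quantum game after the two messages have been exchanged:

```python
def perceived_game(view: str) -> str:
    """The family (17): only the views confirmed by the two exchanged
    messages are associated with the quantum game Gamma_Q; every
    higher-order view falls back to the classical game Gamma_C."""
    return 'Gamma_Q' if view in ('', '1', '2', '12', '21', '121') else 'Gamma_C'

# Alice finds that Bob finds that she is considering the quantum game ...
print(perceived_game('121'))   # -> Gamma_Q
# ... but Bob cannot be certain of that before Alice's second message.
print(perceived_game('212'))   # -> Gamma_C
```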
In what follows we provide a formal definition of game with unawareness and extended Nash equilibrium adapted from [5].

3.2 Strategic-form games with unawareness

Let \(G = \left( N, \prod _{i\in N}S_{i}, (u_{i})_{i\in N}\right) \) be a strategic-form game. This is the game considered by the modeler. Each player may not be aware of the full description of G. Hence \(G_{\mathrm {v}} = \left( N_{\mathrm {v}}, \prod _{i\in N_{\mathrm {v}}}(S_{i})_{\mathrm {v}}, ((u_{i})_{\mathrm {v}})_{i\in N_{\mathrm {v}}}\right) \) denotes player \(\mathrm {v}\)’s view of the game for \(\mathrm {v} \in N\). In general, each player also considers how each of the other players views the game. Formally, with a finite sequence of players \(v = (i_{1}, \dots , i_{n})\) there is associated a game \(G_{v} = \left( N_{v}, \prod _{i\in N_{v}}(S_{i})_{v}, ((u_{i})_{v})_{i\in N_{v}}\right) \). This is the game that player \(i_{1}\) considers that player \(i_{2}\) considers that ...player \(i_{n}\) is considering. A sequence v is called a view. The empty sequence \(v = \emptyset \) is assumed to be the modeler’s view (\(G_{\emptyset } = G\)). We denote a strategy profile in \(G_{v}\) by \((s)_{v}\). The concatenation of two views \(\bar{v} = (i_{1}, \dots , i_{n})\) followed by \(\tilde{v} = (j_{1}, \dots , j_{m})\) is defined to be \(v = \bar{v}\hat{}{\tilde{v}} = (i_{1}, \dots , i_{n}, j_{1}, \dots , j_{m})\). The set of all potential views is \(V = \bigcup ^{\infty }_{n=0}N^{(n)}\) where \(N^{(n)} = \prod ^{n}_{j=1}N\) and \(N^{(0)} = \emptyset \).
Definition 1
A collection \(\{G_{v}\}_{v\in \mathcal {V}}\) where \(\mathcal {V} \subset V\) is a collection of finite sequences of players is called a strategic-form game with unawareness and the collection of views \(\mathcal {V}\) is called its set of relevant views if the following properties are satisfied:
1.
For every \(v \in \mathcal {V}\),
$$\begin{aligned} v\hat{}{\mathrm {v}} \in \mathcal {V} ~\hbox {if and only if}~\mathrm {v} \in N_{v}. \end{aligned}$$
(18)
 
2.
For every \(v\hat{}{\tilde{v}} \in \mathcal {V}\),
$$\begin{aligned} v \in \mathcal {V}, \quad \emptyset \ne N_{v\hat{}{\tilde{v}}} \subset N_{v}, \quad \emptyset \ne (S_{i})_{v\hat{}{\tilde{v}}} \subset (S_{i})_{v} ~\hbox {for all}~i \in N_{v\hat{}{\tilde{v}}} \end{aligned}$$
(19)
 
3.
If \(v\hat{}{\mathrm {v}}\hat{}{\bar{v}} \in \mathcal {V}\), then
$$\begin{aligned} {v\hat{}{\mathrm {v}}\hat{}{\mathrm {v}}}\hat{}{\bar{v}} \in \mathcal {V} ~\hbox {and}~ G_{v\hat{}{\mathrm {v}}\hat{}{\bar{v}}} = G_{v\hat{}{\mathrm {v}}\hat{}{\mathrm {v}}\hat{}{\bar{v}}}. \end{aligned}$$
(20)
 
4.
For every strategy profile \((s)_{v\hat{}{\tilde{v}}} = \{s_{j}\}_{j\in N_{v\hat{}{\tilde{v}}}}\), there exists a completion to a strategy profile \((s)_{v} = \{s_{j}, s_{k}\}_{j\in N_{v\hat{}{\tilde{v}}}, k\in N_{v}\setminus N_{v\hat{}{\tilde{v}}}}\) such that
$$\begin{aligned} (u_{i})_{{v\hat{}{\tilde{v}}}}((s)_{v\hat{}{\tilde{v}}}) = (u_{i})_{v}((s)_{v}). \end{aligned}$$
(21)
 

3.3 Extended Nash equilibrium in strategic-form games with unawareness

In order to define extended Nash equilibrium, we first need to redefine the notion of a strategy profile.
Definition 2
Let \(\{G_{v}\}_{v\in \mathcal {V}}\) be a strategic-form game with unawareness. An extended strategy profile (ESP) in this game is a collection of (pure or mixed) strategy profiles \(\{(\sigma )_{v}\}_{v\in \mathcal {V}}\), where \((\sigma )_{v}\) is a strategy profile in the game \(G_{v}\), such that for every \(v\hat{}{\mathrm {v}}\hat{}{\bar{v}} \in \mathcal {V}\) the following holds:
$$\begin{aligned} (\sigma _{\mathrm {v}})_{v} = (\sigma _{\mathrm {v}})_{v\hat{}{\mathrm {v}}} ~\hbox {as well as}~ (\sigma )_{v\hat{}{\mathrm {v}}\hat{}{\bar{v}}} = (\sigma )_{v\hat{}{\mathrm {v}}\hat{}{\mathrm {v}} \hat{} \bar{v}}. \end{aligned}$$
(22)
To illustrate (22) let us take the game \(G_{12}\)—the game that player 1 thinks that player 2 is considering. If player 1 assumes that player 2 plays strategy \((\sigma _{2})_{12}\) in the game \(G_{12}\), she must assume the same strategy in the game \(G_{1}\) that she considers, i.e., \((\sigma _{2})_{1} = (\sigma _{2})_{12}\). In other words, player 1 finds that player 2 is considering strategy \((\sigma _{2})_{12}\). Thus, player 1 considers that strategy in her game \(G_{1}\).
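Condition (22) is easy to check mechanically on a finite family of profiles. Below is a toy sketch, using the profiles (16) restricted to views up to depth 3, with views encoded as strings; the encoding and function names are ours:

```python
# The profiles (16) on the views in V_0 up to depth 3; a view is a string
# of player labels and a profile is (player 1's, player 2's) strategy.
profiles = {
    '':    ('a3', 'b1'),
    '1':   ('a3', 'b1'),
    '2':   ('a2', 'b1'),
    '12':  ('a2', 'b1'),
    '21':  ('a2', 'b1'),
    '121': ('a2', 'b1'),
}

def satisfies_22(profiles) -> bool:
    """First part of (22): player i's strategy seen at view v must equal
    their strategy at the extended view v^i, whenever both are relevant."""
    for v, prof in profiles.items():
        for i in (1, 2):
            ext = v + str(i)
            if ext in profiles and prof[i - 1] != profiles[ext][i - 1]:
                return False
    return True

print(satisfies_22(profiles))   # -> True
```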
The next step is to extend rationalizability from strategic-form games to games with unawareness.
Definition 3
An ESP \(\{(\sigma )_{v}\}_{v\in \mathcal {V}}\) in a game with unawareness is called extended rationalizable if for every \(v\hat{}{\mathrm {v}} \in \mathcal {V}\) strategy \((\sigma _{\mathrm {v}})_{v}\) is a best reply to \((\sigma _{-\mathrm {v}})_{v\hat{}{\mathrm {v}}}\) in the game \(G_{v\hat{}{\mathrm {v}}}\).
Consider a strategic-form game with unawareness \(\{G_{v}\}_{v\in \mathcal {V}}\). For every relevant view \(v \in \mathcal {V}\) the relevant views as seen from v are defined to be \(\mathcal {V}^{v} = \{\tilde{v} \in \mathcal {V}:v\hat{}{\tilde{v}} \in \mathcal {V}\}\). For \(\tilde{v} \in \mathcal {V}^{v}\) define the game \(G^{v}_{\tilde{v}} = G_{v\hat{}{\tilde{v}}}\). Then the game with unawareness as seen from v is defined as \(\{G^{v}_{\tilde{v}}\}_{\tilde{v} \in \mathcal {V}^{v}}\).
We are now in a position to define the counterpart of Nash equilibrium in games with unawareness.
Definition 4
An ESP \(\{(\sigma )_{v}\}_{v\in \mathcal {V}}\) in a game with unawareness is called an extended Nash equilibrium (ENE) if it is rationalizable and for all \(v, \bar{v} \in \mathcal {V}\) such that \(\{G^{v}_{\tilde{v}}\}_{\tilde{v} \in \mathcal {V}^{v}} = \{G^{\bar{v}}_{\tilde{v}}\}_{\tilde{v} \in \mathcal {V}^{\bar{v}}}\) we have that \((\sigma )_{v} = (\sigma )_{\bar{v}}\).
The first part of the definition (rationalizability) is similar to the standard Nash equilibrium where it is required that each strategy in the equilibrium is a best reply to the other strategies of that profile. According to Definition 3, player 2’s strategy \((\sigma _{2})_{1}\) in the game of player 1 has to be a best reply to player 1’s strategy \((\sigma _{1})_{12}\) in the game \(G_{12}\). On the other hand, in contrast to the concept of Nash equilibrium, \((\sigma _{1})_{12}\) does not have to be a best reply to \((\sigma _{2})_{1}\) but to the strategy \((\sigma _{2})_{121}\).
We saw in (14) of Example 1 that for \(v \in \{21, 121, 212, 1212, \dots \}\) we have \(G_{v} = \varGamma _{2}\). It follows that \(\{G_{21\hat{}v}\}_{v\in \mathcal {V}_{0}} = \{G_{121\hat{}v}\}_{v\in \mathcal {V}_{0}} = \{\varGamma _{2}\}\). According to the second part of ENE, \((\sigma )_{21} = (\sigma )_{121}\).
The following proposition is useful to determine the extended Nash equilibria.
Proposition 1
Let G be a strategic-form game and \(\{G_{v}\}_{v \in \mathcal {V}}\) a strategic-form game with unawareness such that for some \(v\in \mathcal {V}\) we have \(G_{v\hat{}{\bar{v}}} = G\) for every \(\bar{v}\) such that \(v\hat{}{\bar{v}} \in \mathcal {V}\). Let \(\sigma \) be a strategy profile in G. Then
1.
\(\sigma \) is rationalizable for G if and only if \((\sigma )_{v} = \sigma \) is part of an extended rationalizable profile in \(\{G_{v}\}_{v\in \mathcal {V}}\).
 
2.
\(\sigma \) is a Nash equilibrium for G if and only if \((\sigma )_{v} = \sigma \) is part of an ENE for \(\{G_{v}\}_{v\in \mathcal {V}}\) and this ENE also satisfies \((\sigma )_{v} = (\sigma )_{v\hat{}{\bar{v}}}\).
 
Remark 1
We see from (20) and (22) that for every \(v\hat{}{\mathrm {v}}\hat{}{\bar{v}} \in \mathcal {V}\) a normal-form game \(G_{v\hat{}{\mathrm {v}}\hat{}{\bar{v}}}\) and a strategy profile \((\sigma )_{v\hat{}{\mathrm {v}}\hat{}{\bar{v}}}\) determine the games and profiles in the form \(G_{v\hat{}{\mathrm {v}}\hat{}{\dots } \hat{}{\mathrm {v}}\hat{}{\bar{v}}}\) and \((\sigma )_{v\hat{}{\mathrm {v}}\hat{}{\dots } \hat{}{\mathrm {v}}\hat{}{\bar{v}}}\), respectively. Hence, in general, a game with unawareness \(\{G_{v}\}_{v\in \mathcal {V}}\) and an extended strategy profile \(\{(\sigma )_{v}\}_{v\in \mathcal {V}}\) are defined by \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\) and \(\{(\sigma )_{v}\}_{v\in \mathcal {V}_{0}}\), where
$$\begin{aligned} \mathcal {V}_{0} = \{v\in \mathcal {V} \mid v=(i_{1}, \dots , i_{n}) ~\hbox {with}~i_{k} \ne i_{k+1} ~\hbox {for all}~k\}. \end{aligned}$$
(23)
Then, we get \(\{G_{v}\}_{v\in \mathcal {V}}\) from \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\) by setting \(G_{\tilde{v}} = G_{v}\) for \(v=(i_{1},\dots , i_{n})\in \mathcal {V}_{0}\) and \(\tilde{v} = (i_{1}, \dots , i_{k}, i_{k}, i_{k+1}, \dots , i_{n}) \in \mathcal {V}\). For this reason, we restrict ourselves to \(\mathcal {V}_{0}\) throughout the paper.
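As a quick illustration (a Python sketch, not part of the original formalism; the helper name `views` is ours), the reduced view set \(\mathcal {V}_{0}\) of (23) for two players can be enumerated directly:

```python
from itertools import product

def views(max_len):
    """Enumerate the reduced view set V_0 up to a given length:
    sequences over {1, 2} with no two consecutive equal entries,
    together with the empty view of the modeler."""
    out = [()]
    for n in range(1, max_len + 1):
        for v in product((1, 2), repeat=n):
            if all(v[k] != v[k + 1] for k in range(n - 1)):
                out.append(v)
    return out

# For each length n >= 1 there are exactly two alternating views,
# e.g. (1, 2, 1) and (2, 1, 2) for n = 3.
vs = views(4)
```

Repeated-index views such as (1, 1) are excluded, since by (20) and (22) they carry the same game and profile as their reduced counterparts.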

4 Quantum PQ Penny Flip game with unawareness

We noted in Sect. 2.1 that a single bimatrix game does not properly reflect the variant of the PQ Penny Flip game where Alice can win every time. We now show that the problem may be regarded as a game with unawareness.
Example 2
Following the description of the game given in Sect. 2 and in [12], we may assume that the modeler’s game (the game that is actually played by the players) is defined by the bimatrix \(F_{1}\) [see (11)]. Alice, being aware of the Hadamard matrix, views the game \(F_{1}\). Hence \(G_{\emptyset } = G_{1} = F_{1}\). Bob thinks he plays the classical PQ Penny Flip game, so he perceives the game \(G_{2}\) in the following form
$$\begin{aligned} F_{2}:~\text {[bimatrix displayed as an image in the original]} \end{aligned}$$
(24)
We then assume that Alice finds that Bob is considering (24), i.e., \(G_{12} = F_{2}\), and higher-order views \(v\in \{21,121, 212, \dots \}\) are associated with (24). We thus obtain a game with unawareness \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\) defined as follows:
$$\begin{aligned} G_{v} = {\left\{ \begin{array}{ll} F_{1} &{}\quad \text {if}~v\in \{\emptyset , 1\},\\ F_{2} &{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
(25)
In what follows, we specify the players’ optimal strategies by applying the notion of extended Nash equilibrium. We let \((\sigma ^c_{1}, \sigma ^c_{2})\) stand for a pair of optimal strategies in (24), i.e., the optimal strategies of game (3) with \(I_{j}\) and \(X_{k}\) replaced by \(\mathbb {1}\) and \(\sigma _{x}\), respectively. By Proposition 1, the strategy profile \((\sigma ^c_{1}, \sigma ^c_{2})\) is part of an ENE for \(v\in \mathcal {V}_{0} \setminus \{\emptyset , 1\}\). In order to determine a strategy \((\sigma )_{1} = (\sigma _{1}, \sigma _{2})_{1}\) seen from Alice’s view, first note that by the definition of extended strategy profile,
$$\begin{aligned} (\sigma _{2})_{1} = (\sigma _{2})_{12} = \sigma ^c_{2}. \end{aligned}$$
(26)
According to Definition 3, Alice’s strategy \((\sigma _{1})_{1}\) is a best reply to \((\sigma _{2})_{1} = \sigma ^c_{2}\) in the game \(G_{1} = F_{1}\). Since Alice has the strategy HH in \(F_{1}\) guaranteeing a payoff of 1, \((\sigma _{1})_{1} = HH\). In a similar way, we determine a strategy profile seen from Bob’s point of view, which is \((\sigma )_{2} = (\sigma ^c_{1}, \sigma ^c_{2})\). By Definition 2, \((\sigma _{1})_{2} = (\sigma _{1})_{21} = \sigma ^c_{1}\). Bob’s best reply to \((\sigma _{1})_{2} = \sigma ^c_{1}\) is \((\sigma _{2})_{2} = \sigma ^c_{2}\) in the game \(F_{2}\). Therefore \((\sigma )_{2} = (\sigma ^c_{1}, \sigma ^c_{2})\). Finally, (22) implies that
$$\begin{aligned} (\sigma _{1})_{\emptyset } = (\sigma _{1})_{1} \quad \text {and} \quad (\sigma _{2})_{\emptyset } = (\sigma _{2})_{2}. \end{aligned}$$
(27)
In summary, the unique form of extended Nash equilibrium in the game defined by (25) is
$$\begin{aligned} (\sigma )_{v} = {\left\{ \begin{array}{ll} \left( HH, \sigma ^c_{2}\right) &{}\quad \text {if}~v\in \{\emptyset , 1\}, \\ \left( \sigma ^c_{1}, \sigma ^c_{2}\right) &{}\quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
(28)
In words, Bob is only aware of \(\mathbb {1}\) and \(\sigma _{x}\), and therefore, he perceives \((\sigma )_{2} = (\sigma ^c_{1}, \sigma ^c_{2})\) as a rational result of the game. Alice, in turn, is fully aware of (11), in particular, of her access to the Hadamard matrix. Thus, Alice’s prediction about a rational strategy profile is \((\sigma )_{1} = (HH, \sigma ^c_{2})\), and it coincides with the actual final result \((\sigma )_{\emptyset }\).
It is worth emphasizing that an ordinary Nash equilibrium applied to (11) leads to an incorrect prediction about Bob’s optimal strategy. Since Alice has a winning strategy in \(F_{1}\), every probability distribution over \(\{\mathbb {1}, \sigma _{x}\}\), not just the single strategy \(\sigma ^c_{2}\), is an optimal strategy for Bob in \(F_{1}\).
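These claims are easy to verify numerically. The snippet below (a NumPy sketch; the helper `payoff` is ours) checks that Alice's strategy HH yields \(\mathrm {tr}(\rho P) = 1\) against each of Bob's classical moves, and hence against every probability mixture of them:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])               # sigma_x
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)  # Hadamard matrix
P = np.diag([1., -1.])                            # |0><0| - |1><1|
ket0 = np.array([1., 0.])

def payoff(U1, U2, U3):
    """Player 1's payoff tr(rho_f P) after the moves U1, U2, U3
    are applied in turn to |0>."""
    psi = U3 @ U2 @ U1 @ ket0
    return float(np.real(np.vdot(psi, P @ psi)))

# HH wins against both of Bob's classical moves, hence against
# every probability distribution over {1, sigma_x}:
results = [payoff(H, bob, H) for bob in (I2, X)]
```

Both entries of `results` equal 1, which is exactly why any mixture of Bob's classical moves is a best reply in \(F_{1}\) and the ordinary Nash equilibrium loses its predictive power there.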
Our next example shows how a limited perception of a player affects the result of PQ Penny Flip game.
Example 3
Consider strategic-form game (5) with the following strategy sets:
$$\begin{aligned} S_{1} = \{\mathbb {1}, \sigma _{x}, H\}\times \{\mathbb {1}, \sigma _{x}, H\}, \quad S_{2} = \{\mathbb {1}, \sigma _{x}, \sigma _{z}\}. \end{aligned}$$
(29)
Then, according to scheme (5)–(7), the matrix representation of the game and its reduced form take on the form
[Matrix representation of game (30) and its reduced form, displayed as an image in the original.]
(30)
An easy computation shows that the value of the game (30) is 0. Player 1’s optimal strategies in the reduced form are (1/2, 1/2, 0, 0), (0, 0, 1, 0) and (1/2, 0, 0, 1/2) (and any probability distribution over these strategies). Player 2’s optimal strategy is (0, 1/2, 1/2). The result so obtained is valid because we tacitly assume that the form of the game is common knowledge among the players.
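This computation can be reproduced without constructing the reduced form explicitly (a NumPy sketch; the payoff routine and the particular optimal mixture of player 1 chosen below are ours):

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])               # sigma_x
Z = np.diag([1., -1.])                            # sigma_z
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)  # Hadamard matrix
P = np.diag([1., -1.])
ket0 = np.array([1., 0.])

def payoff(U1, U2, U3):
    """Player 1's payoff tr(rho_f P)."""
    psi = U3 @ U2 @ U1 @ ket0
    return float(np.real(np.vdot(psi, P @ psi)))

rows = list(product([I2, X, H], repeat=2))        # player 1: pairs (U1, U3)
cols = [I2, X, Z]                                 # player 2: U2
A = np.array([[payoff(U1, U2, U3) for U2 in cols] for U1, U3 in rows])

# Player 2's mixture (0, 1/2, 1/2) holds every row to payoff 0 ...
q = np.array([0., .5, .5])
guarantee2 = np.max(A @ q)
# ... and player 1 secures 0 by, e.g., mixing the pure strategies
# (1,1) and (H,H) equally, so the value of the game is 0.
p = np.zeros(len(rows)); p[0] = p[8] = .5         # indices of (1,1), (H,H)
guarantee1 = np.min(p @ A)
```

Both guarantees equal 0 (up to floating-point error), confirming the stated value of the game.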
Let us now modify the game defined by (30) with respect to how player 1 perceives player 2’s perception of the game. Suppose that player 1 is unaware that player 2 is aware of the actions H and \(\sigma _{z}\). In other words, player 1 finds that player 2 is considering (24). On the other hand, we assume that player 2 considers game (30). Furthermore, he knows how player 1 perceives player 2’s perception of the game. We can describe this problem formally as a strategic-form game with unawareness \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\), where the strategy sets of the players in each \(G_{v}\),
$$\begin{aligned} G_{v} = \left( \{1,2\}, \{S_{1}, S_{2}\}_{v}, \{\mathrm {tr}(\rho _{\mathrm {f}}P), -\mathrm {tr}(\rho _{\mathrm {f}}P)\} \right) \end{aligned}$$
(31)
are as follows:
$$\begin{aligned} \{S_{1}, S_{2}\}_{v} = {\left\{ \begin{array}{ll}\{\{\mathbb {1}, \sigma _{x}, H\}\times \{\mathbb {1}, \sigma _{x}, H\}, \{\mathbb {1}, \sigma _{x}, \sigma _{z}\}\}, &{} \hbox {if}~v\in \{\emptyset , 1, 2, 21\}\\ \{\{\mathbb {11}, \mathbb {1}\sigma _{x}, \sigma _{x}\mathbb {1}, \sigma _{x}\sigma _{x}\}, \{\mathbb {1}, \sigma _{x}\}\} &{} \hbox {otherwise}. \end{array}\right. } \end{aligned}$$
(32)
Let us determine an ENE in the above game. Note first that the game
$$\begin{aligned} \varGamma _{CC} = \left( \{1,2\}, \{\{\mathbb {11}, \mathbb {1}\sigma _{x}, \sigma _{x}\mathbb {1}, \sigma _{x}\sigma _{x}\}, \{\mathbb {1}, \sigma _{x}\}\}, \{\mathrm {tr}(\rho _{\mathrm {f}}P), -\mathrm {tr}(\rho _{\mathrm {f}}P)\}\right) \end{aligned}$$
(33)
satisfies the assumption of Proposition 1 for \(v=12\) and \(v=212\), i.e.,
$$\begin{aligned} G_{12\hat{}{\bar{v}}} = G_{212\hat{}{\bar{v}}} = \varGamma _{CC} \end{aligned}$$
(34)
for every \(\bar{v}\) such that \(v\hat{}{\bar{v}} \in \mathcal {V}_{0}\). As a result, Nash equilibria in \(\varGamma _{CC}\) are part of an ENE in the game \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\) given by (31) and (32). The matrix forms of \(\varGamma _{CC}\) and (3) coincide and so do the optimal strategies. Recall that \((\sigma ^{c}_{1}, \sigma ^{c}_{2})\) denotes a pair of optimal strategies in \(\varGamma _{CC}\). It follows that \((\sigma )_{v} = (\sigma ^{c}_{1}, \sigma ^{c}_{2})\) is part of the ENE for \(v \in \{12,121,212,\dots \}\). We will now use the notion of extended rationalizability (see Definition 3) to determine the other profiles of \((\sigma )_{v}\). First, it must be the case that \((\sigma _{2})_{21}\) is a best reply to \((\sigma _{1})_{212} = \sigma ^{c}_{1}\) in \(G_{212} = \varGamma _{CC}\). Hence \((\sigma _{2})_{21} = \sigma ^{c}_{2}\). Next, \((\sigma _{1})_{21}\) is a best reply to \((\sigma _{2})_{211}\) in \(G_{211}\). But \((\sigma _{2})_{211} = (\sigma _{2})_{21}\) and \(G_{211} = G_{21}\) by Eqs. (20) and (22). Therefore, \((\sigma _{1})_{21}\) is part of the ENE if \((\sigma _{1})_{21}\) is a best reply to \((\sigma _{2})_{21} = \sigma ^{c}_{2}\) in \(G_{21}\) given by (30). We thus get \((\sigma _{1})_{21} = HH\). As a result, \((\sigma )_{21} = (HH, \sigma ^{c}_{2})\). Let us now find \((\sigma )_{2}\). In this case, \((\sigma _{1})_{2} = HH\) is a best reply to \((\sigma _{2})_{21} = \sigma ^{c}_{2}\) in \(G_{21}\). On the other hand, \((\sigma _{2})_{2}\) is a best reply to \((\sigma _{1})_{2} = (\sigma _{1})_{21} = HH\) in \(G_{2}\). We thus get \((\sigma _{2})_{2} = \sigma _{z}\). This gives \((\sigma )_{2} = (HH, \sigma _{z})\). We conclude similarly that \((\sigma )_{1} = (HH, \sigma ^{c}_{2})\) and \((\sigma )_{\emptyset } = ((\sigma _{1})_{1}, (\sigma _{2})_{2}) = (HH, \sigma _{z})\). To sum up, the ENE is of the form
$$\begin{aligned} (\sigma )_{v} = {\left\{ \begin{array}{ll}\left( \sigma ^{c}_{1}, \sigma ^{c}_{2}\right) &{}\quad \hbox {if}~v\in \{12,121, 212, \dots \}, \\ \left( HH, \sigma ^{c}_{2}\right) &{}\quad \hbox {if}~v\in \{1,21\}, \\ \left( HH, \sigma _{z}\right) &{}\quad \hbox {if}~v\in \{\emptyset , 2\}. \end{array}\right. } \end{aligned}$$
(35)
The ENE predicts that the game with unawareness ends with the payoff result of \(-\,1\) determined by \((\sigma )_{\emptyset } = (HH, \sigma _{z})\).
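A direct numerical check of this prediction (NumPy sketch; helper names are ours): under Alice's limited view, HH is a winning reply to any mixture of \(\mathbb {1}\) and \(\sigma _{x}\), yet against the actually played \(\sigma _{z}\) it loses:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])               # sigma_x
Z = np.diag([1., -1.])                            # sigma_z
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)  # Hadamard matrix
P = np.diag([1., -1.])
ket0 = np.array([1., 0.])

def payoff(U1, U2, U3):
    """Player 1's payoff tr(rho_f P)."""
    psi = U3 @ U2 @ U1 @ ket0
    return float(np.real(np.vdot(psi, P @ psi)))

# HH beats the moves player 1 believes player 2 is restricted to ...
assert np.isclose(payoff(H, I2, H), 1.0)
assert np.isclose(payoff(H, X, H), 1.0)
# ... but loses to sigma_z, which player 2 actually plays:
# H sigma_z H = sigma_x, so the final state is |1> and the payoff is -1.
assert np.isclose(payoff(H, Z, H), -1.0)
```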
The above examples show that incomplete awareness may dramatically affect the result of the game. In what follows, we shall show that this is also true in a general setting, where the set of available actions for the players is the set of \(2\times 2\) unitary matrices U(2).

4.1 Relevant best replies in PQ Penny Flip-type games

Recall that the unitary matrix \(R_{\mathbf {n}}(\theta )\) corresponding to counterclockwise rotation through an angle \(\theta \) about the axis directed along the unit vector \(\mathbf {n} = (n_{x}, n_{y}, n_{z})\) is given by
$$\begin{aligned} R_{\mathbf {n}}(\theta ) = \cos \frac{\theta }{2}\mathbb {1} - \mathrm{i}\sin \frac{\theta }{2}(n_{x}\sigma _{x} + n_{y}\sigma _{y} + n_{z}\sigma _{z}). \end{aligned}$$
(36)
In particular, the rotation matrices about the x, y, and z axes are
$$\begin{aligned}&R_{x}(\theta ) = \begin{pmatrix} \cos \frac{\theta }{2} &{} -\mathrm{i}\sin \frac{\theta }{2} \\ -\mathrm{i}\sin \frac{\theta }{2} &{} \cos \frac{\theta }{2}\end{pmatrix},\nonumber \\&R_{y}(\theta ) = \begin{pmatrix} \cos \frac{\theta }{2} &{} -\sin \frac{\theta }{2} \\ \sin \frac{\theta }{2} &{} \cos \frac{\theta }{2}\end{pmatrix}, \nonumber \\&R_{z}(\theta ) = \begin{pmatrix} e^{-\mathrm{i}\theta /2} &{} 0 \\ 0 &{} e^{\mathrm{i}\theta /2}\end{pmatrix}. \end{aligned}$$
(37)
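Definitions (36)–(37) are straightforward to encode. The NumPy sketch below (function names ours) also verifies one concrete instance of the decomposition promised by Proposition 2 below, namely that \(H = \mathrm {e}^{\mathrm {i}\pi /2}R_{z}(\pi /2)R_{x}(\pi /2)R_{z}(\pi /2)\):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1. + 0j, -1.])

def R(n, theta):
    """Rotation (36) about the unit axis n = (nx, ny, nz) by theta."""
    nx, ny, nz = n
    return np.cos(theta/2)*I2 - 1j*np.sin(theta/2)*(nx*sx + ny*sy + nz*sz)

Rx = lambda t: R((1, 0, 0), t)
Ry = lambda t: R((0, 1, 0), t)
Rz = lambda t: R((0, 0, 1), t)

# Sanity check: up to a global phase, the Hadamard matrix factors
# into rotations about the z- and x-axes, as in Proposition 2.
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
assert np.allclose(np.exp(1j*np.pi/2) * Rz(np.pi/2) @ Rx(np.pi/2) @ Rz(np.pi/2), H)
```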
In order to state our results, we need to apply the following proposition [16].
Proposition 2
Let \(\mathbf {m}, \mathbf {n} \in \mathbb {R}^3\) be unit vectors, \(\mathbf {m} \bot \mathbf {n}\), and \(U\in {\textsf {SU}}(2)\). Then one can find real numbers \(\beta \), \(\gamma \), and \(\delta \) such that
$$\begin{aligned} U = R_{\mathbf {n}}(\beta )R_{\mathbf {m}}(\gamma )R_{\mathbf {n}}(\delta ). \end{aligned}$$
(38)
We are now in a position to prove the lemmas that determine players’ best replies to specific strategies. Let \(|\pm \rangle = (|0\rangle \pm |1\rangle )/\sqrt{2}\). The following lemma exhibits all winning strategies of player 1 in Meyer’s problem. It is a reformulation of the results that appeared in [17, 18].
Lemma 1
The optimal strategy for player 1 in game \(\varGamma _{QC}\) is a pair of unitary matrices \((V_{1},V_{3})\) such that
$$\begin{aligned} V_{1}|0\rangle \langle 0|V^{\dagger }_{1} \in \{|+\rangle \langle +|, |-\rangle \langle -|\}, \quad V_{3} = R_{z}(\alpha )V^{\dagger }_{1} \end{aligned}$$
(39)
for \(\alpha \in \mathbb {R}\). The matrix representation of \(V_{1}\) (up to the global phase factor) is
$$\begin{aligned} V_{1} = \frac{R_{z}(a)}{\sqrt{2}}\left( \begin{array}{rr} \mathrm {e}^{-\mathrm {i}\gamma /2} &{} -\mathrm {i}\mathrm {e}^{\mathrm {i}\gamma /2}\\ \mathrm {e}^{-\mathrm {i}\gamma /2} &{} \mathrm {i}\mathrm {e}^{\mathrm {i}\gamma /2}\end{array}\right) \end{aligned}$$
(40)
for \(a \in \{-\pi ,0\}\) and \(\gamma \in \mathbb {R}\).
Proof
Note first that according to the definition of \(\varGamma _{QC}\), a mixed strategy of player 2 is represented by a probability distribution \((p,1-p)\) over \(\{\mathbb {1}, \sigma _{x}\}\). Let \(\rho \) be a state corresponding to a result of playing a mixed strategy \((p,1-p)\) by player 2 against a strategy \((U_{1}, U_{3})\) chosen by player 1. Then \(\rho \) may be written as
$$\begin{aligned} \rho = pU_{3}U_{1}|0\rangle \langle 0|U^{\dagger }_{1}U^{\dagger }_{3} + (1-p)U_{3}\sigma _{x}U_{1}|0\rangle \langle 0|U^{\dagger }_{1}\sigma _{x}U^{\dagger }_{3}. \end{aligned}$$
(41)
Let \((V_{1}, V_{3})\) be a strategy of player 1 such that \(\mathrm {tr}(\rho P) = 1\) for every mixed strategy of player 2. This clearly forces
$$\begin{aligned} {\left\{ \begin{array}{ll} V_{3}V_{1}|0\rangle \langle 0|V^{\dagger }_{1}V^{\dagger }_{3} = |0\rangle \langle 0|,\\ V_{3}\sigma _{x}V_{1}|0\rangle \langle 0|V^{\dagger }_{1}\sigma _{x}V^{\dagger }_{3} = |0\rangle \langle 0|. \end{array}\right. } \end{aligned}$$
(42)
Combining the equations in (42), we obtain
$$\begin{aligned} \sigma _{x}V_{1}|0\rangle \langle 0|V^{\dagger }_{1}\sigma _{x} = V_{1}|0\rangle \langle 0|V^{\dagger }_{1}. \end{aligned}$$
(43)
Since the eigenvectors of \(\sigma _{x}\) are \(|+\rangle \) and \(|-\rangle \), and \(V_{1}|0\rangle \langle 0|V^{\dagger }_{1} =|\varPsi \rangle \langle \varPsi |\), where \(|\varPsi \rangle \) is a unit vector, player 1’s optimal action \(V_{1}\) in game \(\varGamma _{QC}\) satisfies either \(V_{1}|0\rangle \langle 0|V^{\dagger }_{1} = |+\rangle \langle +|\) or \(V_{1}|0\rangle \langle 0|V^{\dagger }_{1} =|-\rangle \langle -|\).
In what follows, we derive the matrix representation of \(V_{1}\). Let us first consider the case
$$\begin{aligned} V_{1}|0\rangle \langle 0| V^{\dagger }_{1} = |+\rangle \langle +|. \end{aligned}$$
(44)
By Proposition 2, the matrix \(V_{1}\) may be written (up to the global phase factor) as
$$\begin{aligned} V_{1} = \mathrm {e}^{\mathrm {i}\pi /4}R_{z}(\beta )R_{x}(\gamma )R_{z}(\delta ). \end{aligned}$$
(45)
Let us determine \(\beta \), \(\gamma \) and \(\delta \) so that Eq. (44) is satisfied. First, note that \(R_{z}(\delta )\) has no effect on \(|0\rangle \langle 0|\), i.e., \(R_{z}(\delta )|0\rangle \langle 0|R^{\dagger }_{z}(\delta ) = |0\rangle \langle 0|\). This is because \(R_{z}(\delta )\) corresponds to a counterclockwise rotation through an angle \(\delta \) about the z-axis, and the state \(|0\rangle \) is represented by a point on that axis (see Fig. 3). It follows that \(\delta \) may be an arbitrary real number. Thus, we are left with the task of determining \(\beta \) and \(\gamma \). We conclude from the equation
$$\begin{aligned} R_{z}(\beta )R_{x}(\gamma )|0\rangle \langle 0|R^{\dagger }_{x}(\gamma )R^{\dagger }_{z}(\beta ) = |+\rangle \langle +| \end{aligned}$$
(46)
that
$$\begin{aligned} (\beta , \gamma ) \in \{(\pi /2, \pi /2), (-\pi /2, -\pi /2)\}. \end{aligned}$$
(47)
Indeed, starting at point A on the Bloch sphere (Fig. 3), we have to apply \(R_{x}(\pi /2)\) or \(R_{x}(-\pi /2)\) in order to reach point E by a subsequent rotation about the z-axis. As a result, we obtain
$$\begin{aligned} \mathrm {e}^{\mathrm {i}\pi /4}R_{z}(\pi /2)R_{x}(\pi /2)R_{z}(\gamma )&= \frac{1}{\sqrt{2}}\left( \begin{array}{rr} \mathrm {e}^{-\mathrm {i}\gamma /2} &{} -\mathrm {i}\mathrm {e}^{\mathrm {i}\gamma /2}\\ \mathrm {e}^{-\mathrm {i}\gamma /2} &{} \mathrm {i}\mathrm {e}^{\mathrm {i}\gamma /2}\end{array}\right) \nonumber \\&=\mathrm {e}^{\mathrm {i}\pi /4}R_{z}(-\pi /2)R_{x}(-\pi /2)R_{z}(\gamma +\pi ). \end{aligned}$$
(48)
Applying similar reasoning to the case
$$\begin{aligned} V_{1}|0\rangle \langle 0| V^{\dagger }_{1} = |-\rangle \langle -| \end{aligned}$$
(49)
leads to
$$\begin{aligned} V_{1}= \mathrm {e}^{\mathrm {i}\pi /4}R_{z}(\pi /2)R_{x}(-\pi /2)R_{z}(\gamma )~\hbox {or}~V_{1}= \mathrm {e}^{\mathrm {i}\pi /4}R_{z}(-\pi /2)R_{x}(\pi /2)R_{z}(\gamma ). \end{aligned}$$
(50)
One can check that both forms of \(V_{1}\) coincide up to the choice of \(\gamma \in \mathbb {R}\). Therefore, Eq. (49) implies
$$\begin{aligned} V_{1}&= \mathrm {e}^{\mathrm {i}\pi /4}R_{z}(-\pi /2)R_{x}(\pi /2)R_{z}(\gamma )\end{aligned}$$
(51)
$$\begin{aligned}&=\frac{1}{\sqrt{2}}\left( \begin{array}{rr} \mathrm {i}\mathrm {e}^{-\mathrm {i}\gamma /2} &{} \mathrm {e}^{\mathrm {i}\gamma /2}\\ -\mathrm {i}\mathrm {e}^{-\mathrm {i}\gamma /2} &{} \mathrm {e}^{\mathrm {i}\gamma /2}\end{array}\right) = \frac{1}{\sqrt{2}}R_{z}(-\pi )\left( \begin{array}{rr} \mathrm {e}^{-\mathrm {i}\gamma /2} &{} -\mathrm {i}\mathrm {e}^{\mathrm {i}\gamma /2}\\ \mathrm {e}^{-\mathrm {i}\gamma /2} &{} \mathrm {i}\mathrm {e}^{\mathrm {i}\gamma /2}\end{array}\right) . \end{aligned}$$
(52)
We next turn to determining \(V_{3}\). Without loss of generality we can assume that \(V_{1}\) satisfies Eq. (44). We deduce from the system of equations (42) that
$$\begin{aligned} V^{\dagger }_{3}|0\rangle \langle 0|V_{3} = |+\rangle \langle +| = V_{1}|0\rangle \langle 0|V^{\dagger }_{1}. \end{aligned}$$
(53)
Hence the optimal action \(V_{3}\) has the form of \(V^{\dagger }_{1}\) up to the composition with rotation about the z axis. Thus the general form of \(V_{3}\) may be written as \(R_{z}(\alpha )V^{\dagger }_{1}\), where \(\alpha \in \mathbb {R}\). \(\square \)
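The lemma admits a numerical sanity check (a NumPy sketch, not part of the original proof; helper names are ours). We sample strategies of the form (39)–(40) and confirm that each wins with certainty against both classical moves of player 2:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)    # sigma_x
P = np.diag([1., -1.])
ket0 = np.array([1, 0], dtype=complex)
Rz = lambda t: np.diag([np.exp(-1j*t/2), np.exp(1j*t/2)])

def V1(a, g):
    """Player 1's first move, Eq. (40)."""
    M = np.array([[np.exp(-1j*g/2), -1j*np.exp(1j*g/2)],
                  [np.exp(-1j*g/2),  1j*np.exp(1j*g/2)]]) / np.sqrt(2)
    return Rz(a) @ M

def payoff(U1, U2, U3):
    """Player 1's payoff tr(rho_f P)."""
    psi = U3 @ U2 @ U1 @ ket0
    return float(np.real(np.vdot(psi, P @ psi)))

# Every pair (V1, V3 = Rz(alpha) V1^dagger) from Lemma 1 wins with
# certainty against both classical moves of player 2:
rng = np.random.default_rng(0)
for a in (-np.pi, 0.0):
    for g, alpha in rng.uniform(-np.pi, np.pi, size=(5, 2)):
        v1 = V1(a, g)
        v3 = Rz(alpha) @ v1.conj().T
        assert np.isclose(payoff(v1, I2, v3), 1.0)
        assert np.isclose(payoff(v1, X, v3), 1.0)
```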
As mentioned in [12], player 2, being aware of his unitary strategies, has a counterstrategy \(V_{2}\) to player 1’s optimal strategy played in \(\varGamma _{QC}\). The following lemma provides us with the general form of \(V_{2}\).
Lemma 2
Player 2’s best reply to strategy \((V_{1}, V_{3})\) given by (39) in game \(\varGamma _{QQ}\) is a unitary matrix \(V_{2}\) such that
$$\begin{aligned} V_{2}|+\rangle \langle +|V^{\dagger }_{2} = |-\rangle \langle -|. \end{aligned}$$
(54)
Its possible matrix representation is
$$\begin{aligned} V_{2} = \mathrm {e}^{\mathrm {i}\alpha }\left( \begin{array}{ll}\mathrm {i}\cos \frac{\gamma }{2} &{} -\sin \frac{\gamma }{2} \\ \sin \frac{\gamma }{2} &{} -\mathrm {i}\cos \frac{\gamma }{2} \end{array}\right) , \gamma \in \mathbb {R}. \end{aligned}$$
(55)
Proof
Let us assume that \((V_{1}, V_{3})\) satisfies \(V_{1}|0\rangle \langle 0|V^{\dagger }_{1} = |+\rangle \langle +|\). It follows easily that \(V^{\dagger }_{3}|1\rangle \langle 1|V_{3} = |-\rangle \langle -|\). It was shown in Theorem 2 [12] that there exists \(V_{2} \in U(2)\) such that
$$\begin{aligned} \mathrm {tr}(\rho _{\mathrm {f}}P) = -1. \end{aligned}$$
(56)
Then \(V_{2}\) is player 2’s best reply to \((V_{1}, V_{3})\). From (56) we obtain
$$\begin{aligned} \rho _{\mathrm {f}} = V_{3}V_{2}V_{1}|0\rangle \langle 0|V^{\dagger }_{1}V^{\dagger }_{2}V^{\dagger }_{3} = |1\rangle \langle 1|. \end{aligned}$$
(57)
This clearly forces
$$\begin{aligned} V_{2}|+\rangle \langle +|V^{\dagger }_{2} = V^{\dagger }_{3}|1\rangle \langle 1|V_{3} = |-\rangle \langle -|. \end{aligned}$$
(58)
We can apply similar arguments again, with condition \(V_{1}|0\rangle \langle 0|V^{\dagger }_{1} = |+\rangle \langle +|\) replaced by \(V_{1}|0\rangle \langle 0|V^{\dagger }_{1} = |-\rangle \langle -|\) to obtain
$$\begin{aligned} V_{2}|-\rangle \langle -|V^{\dagger }_{2} = |+\rangle \langle +|. \end{aligned}$$
(59)
Equation (59) is clearly equivalent to (58).
We will now derive the matrix representation of \(V_{2}\). The method is similar to that in the proof of Lemma 1. By Proposition 2, we may write \(V_{2}\) (up to the global phase factor) in the following form
$$\begin{aligned} V_{2} = R_{x}(\beta )R_{z}(\gamma )R_{x}(\delta ). \end{aligned}$$
(60)
Now Eq. (54) becomes
$$\begin{aligned} R_{z}(\gamma )R_{x}(\delta )|+\rangle \langle +|R^{\dagger }_{x}(\delta )R^{\dagger }_{z}(\gamma ) = R^{\dagger }_{x}(\beta )|-\rangle \langle -| R_{x}(\beta ). \end{aligned}$$
(61)
Since \(R_{x}(\delta )\) and \(R^{\dagger }_{x}(\beta )\) only affect the global phase factor of \(|+\rangle \) and \(|-\rangle \), we find that \(\delta \) and \(\beta \) may be arbitrary real numbers, and Eq. (61) reduces to
$$\begin{aligned} R_{z}(\gamma )|+\rangle \langle +|R^{\dagger }_{z}(\gamma ) = |-\rangle \langle -|. \end{aligned}$$
(62)
It follows that \(\gamma \in \{-\pi , \pi \}\). We thus obtain
$$\begin{aligned} V_{2} = \mathrm {e}^{\mathrm {i}\alpha }R_{x}(\beta )R_{z}(-\pi )R_{x}(\delta ) = \mathrm {e}^{\mathrm {i}\alpha }\left( \begin{array}{ll} \mathrm {i}\cos \frac{\beta - \delta }{2} &{} -\sin \frac{\beta -\delta }{2} \\ \sin \frac{\beta -\delta }{2} &{} -\mathrm {i}\cos \frac{\beta -\delta }{2}\end{array}\right) . \end{aligned}$$
(63)
Since \(\beta \) and \(\delta \) are arbitrary real numbers, setting \(\gamma = \beta - \delta \) shows that (63) is equivalent to (55). \(\square \)
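Again this is easy to confirm numerically. The sketch below (NumPy; helper names are ours) combines Lemmas 1 and 2 and checks that \(V_{2}\) of (55) maps \(|+\rangle \langle +|\) onto \(|-\rangle \langle -|\) and turns player 1's winning strategy into a certain loss:

```python
import numpy as np

P = np.diag([1., -1.])
ket0 = np.array([1, 0], dtype=complex)
Rz = lambda t: np.diag([np.exp(-1j*t/2), np.exp(1j*t/2)])

def V1(a, g):
    """Player 1's first move, Eq. (40)."""
    M = np.array([[np.exp(-1j*g/2), -1j*np.exp(1j*g/2)],
                  [np.exp(-1j*g/2),  1j*np.exp(1j*g/2)]]) / np.sqrt(2)
    return Rz(a) @ M

def V2(alpha, g):
    """Player 2's counter-move, Eq. (55)."""
    return np.exp(1j*alpha) * np.array(
        [[1j*np.cos(g/2), -np.sin(g/2)],
         [np.sin(g/2), -1j*np.cos(g/2)]])

def payoff(U1, U2, U3):
    """Player 1's payoff tr(rho_f P)."""
    psi = U3 @ U2 @ U1 @ ket0
    return float(np.real(np.vdot(psi, P @ psi)))

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

v2 = V2(0.4, 1.9)
# (54): V2 maps |+><+| onto |-><-| ...
assert np.allclose(np.outer(v2 @ plus, (v2 @ plus).conj()),
                   np.outer(minus, minus.conj()))
# ... so inserted between V1 and V3 it forces the final state |1>:
v1 = V1(0.0, 0.7)
v3 = Rz(1.3) @ v1.conj().T
assert np.isclose(payoff(v1, v2, v3), -1.0)
```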
The next lemma characterizes player 2’s optimal unitary strategy in the game against player 1 equipped with the classical strategies.
Lemma 3
Player 2’s optimal strategy \(W_{2} \in U(2)\) in \(\varGamma _{CQ}\) is of the form
$$\begin{aligned} W_{2} = \frac{\mathrm {e}^{\mathrm {i}\alpha }}{\sqrt{2}}\left( \begin{array}{ll}\mathrm {e}^{\mathrm {i}(-\beta /2 - \delta /2)} &{} -\mathrm {e}^{\mathrm {i}(-\beta /2 + \delta /2)} \\ \mathrm {e}^{\mathrm {i}(\beta /2 - \delta /2)} &{} \mathrm {e}^{\mathrm {i}(\beta /2 + \delta /2)}\end{array} \right) . \end{aligned}$$
(64)
Proof
We first determine the general final state \(\rho _{\mathrm {f}}\) of \(\varGamma _{CQ}\) resulting from playing player 1’s mixed strategy (probability distribution \((p_{1}, p_{2}, p_{3},1-p_{1} - p_{2} - p_{3})\) over \(\{\mathbb {11}, \mathbb {1}\sigma _{x}, \sigma _{x}\mathbb {1}, \sigma _{x}\sigma _{x}\}\)) and player 2’s unitary strategy \(W_{2}\) written in the form
$$\begin{aligned} W_{2} = \mathrm {e}^{\mathrm {i}\alpha }R_{z}(\beta )R_{y}(\gamma )R_{z}(\delta ). \end{aligned}$$
(65)
We obtain
$$\begin{aligned} \rho _{\mathrm {f}}&= p_{1}\mathbb {1}W_{2}\mathbb {1}|0\rangle \langle 0|\mathbb {1}W^{\dagger }_{2}\mathbb {1} + p_{2}\sigma _{x}W_{2}\mathbb {1}|0\rangle \langle 0|\mathbb {1}W^{\dagger }_{2}\sigma _{x} \nonumber \\&\quad + p_{3}\mathbb {1}W_{2}\sigma _{x}|0\rangle \langle 0|\sigma _{x}W^{\dagger }_{2}\mathbb {1} + (1-p_{1} - p_{2} - p_{3})\sigma _{x}W_{2}\sigma _{x}|0\rangle \langle 0|\sigma _{x}W^{\dagger }_{2}\sigma _{x}\nonumber \\&=\left( \begin{array}{cc} \frac{1}{2}(1+ (1-2p_{2}-2p_{3})\cos \gamma ) &{} \dots \\ \dots &{} \frac{1}{2} + \left( -\frac{1}{2} + p_{2} + p_{3}\right) \cos \gamma \end{array}\right) . \end{aligned}$$
(66)
Therefore, the payoff corresponding to (66) depends on \(W_{2}\) only through \(\gamma \), and it is equal to
$$\begin{aligned} \mathrm {tr}(\rho _{\mathrm {f}}P) = \left( 2\cos ^2\frac{\gamma }{2} - 1\right) (1-2p_{2}-2p_{3}). \end{aligned}$$
(67)
One can check that (67) coincides with the outcome in \(\varGamma _{CC}\) when player 1 uses her mixed strategy \((p_{1}, p_{2}, p_{3}, 1-p_{1} - p_{2} - p_{3})\), and player 2 plays \(\mathbb {1}\) and \(\sigma _{x}\) according to the probability distribution \((\cos ^2(\gamma /2), 1- \cos ^2(\gamma /2))\). Since player 2’s optimal strategy in \(\varGamma _{CC}\) is \((q, 1-q) = (1/2, 1/2)\), the value of \(\cos (\gamma /2)\) is either \(-1/\sqrt{2}\) or \(1/\sqrt{2}\). We thus get
$$\begin{aligned} W_{2} = \frac{\mathrm {e}^{\mathrm {i}\alpha }}{\sqrt{2}}\left( \begin{array}{ll}\pm \mathrm {e}^{\mathrm {i}(-\beta /2 - \delta /2)} &{} -\mathrm {e}^{\mathrm {i}(-\beta /2 + \delta /2)} \\ \mathrm {e}^{\mathrm {i}(\beta /2 - \delta /2)} &{} \pm \mathrm {e}^{\mathrm {i}(\beta /2 + \delta /2)}\end{array} \right) . \end{aligned}$$
(68)
Note that the signs associated with the diagonal entries depend on whether we set \(\beta \) and \(\delta \) or \(\beta + \pi \) and \(\delta + \pi \). For this reason, the form of \(W_{2}\) is (64). \(\square \)
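The proof admits a direct numerical check (NumPy sketch; the function `W2` encodes (64), the remaining helper names are ours): \(W_{2}\) holds each of player 1's four classical pure strategies, and hence every mixture of them, to the payoff 0:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)    # sigma_x
P = np.diag([1., -1.])
ket0 = np.array([1, 0], dtype=complex)

def W2(alpha, beta, delta):
    """Player 2's optimal strategy in Gamma_CQ, Eq. (64)."""
    return (np.exp(1j*alpha) / np.sqrt(2)) * np.array(
        [[np.exp(1j*(-beta - delta)/2), -np.exp(1j*(-beta + delta)/2)],
         [np.exp(1j*(beta - delta)/2),   np.exp(1j*(beta + delta)/2)]])

def payoff(U1, U2, U3):
    """Player 1's payoff tr(rho_f P)."""
    psi = U3 @ U2 @ U1 @ ket0
    return float(np.real(np.vdot(psi, P @ psi)))

# Every classical pure strategy (S1, S3) of player 1 earns 0 against W2,
# mirroring player 2's optimal mixture (1/2, 1/2) in Gamma_CC:
w = W2(0.3, 1.1, -0.7)
pays = [payoff(S1, w, S3) for S1 in (I2, X) for S3 in (I2, X)]
```

All four entries of `pays` vanish, in agreement with (67) for \(\cos ^2(\gamma /2) = 1/2\).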

4.2 Extended Nash equilibria in generalized PQ Penny Flip-type games

Having determined the relevant best replies in \(\varGamma _{QQ}\), \(\varGamma _{QC}\) and \(\varGamma _{CQ}\), we are now in a position to study the quantum PQ Penny Flip game with unawareness in a more general setting.
Let us look at a family of games \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\) whose elements are
$$\begin{aligned} G_{v} = {\left\{ \begin{array}{ll} \varGamma _{QQ} &{}\quad \text {if}~v\in \{\emptyset , 1\},\\ \varGamma _{CC} &{}\quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
(69)
Equation (69) clearly generalizes (25) to the case in which the game \(\varGamma _{QQ}\) is played instead of (11), and player 1 is now aware of all \(2\times 2\) unitary matrices. The solution of Example 2 together with Lemma 1 enables us to state the following proposition:
Proposition 3
Let \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\) be a game with unawareness defined by (69). Then every extended Nash equilibrium of \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\) is of the form
$$\begin{aligned} (\sigma )_{v} = {\left\{ \begin{array}{ll} \left( (V_{1}, V_{3}), \sigma ^c_{2}\right) &{}\quad \mathrm{if}~v\in \{\emptyset , 1\},\\ (\sigma ^c_{1}, \sigma ^c_{2}) &{}\quad \mathrm{otherwise}. \end{array}\right. } \end{aligned}$$
(70)
Proof
The fact that \((\sigma )_{v} = (\sigma ^c_{1}, \sigma ^c_{2})\) for \(v\in \mathcal {V}_{0}\setminus \{\emptyset , 1\}\) is obtained by the same line of reasoning as in Example 2.
Let us examine \((\sigma )_{1} = (\sigma _{1}, \sigma _{2})_{1}\). From (22) we deduce that \((\sigma _{2})_{1} = (\sigma _{2})_{12} = \sigma ^c_{2}\). By Definition 3, \((\sigma _{1})_{1}\) is a best reply to \((\sigma _{2})_{1} = (\sigma _{2})_{12} = \sigma ^c_{2}\). Lemma 1 leads to \((\sigma _{1})_{1} = (V_{1}, V_{3})\), which establishes formula (70). \(\square \)
Consider now a family of games \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\) where
$$\begin{aligned} G_{v} = {\left\{ \begin{array}{ll} \varGamma _{QQ} &{}\hbox {if}~v\in \{\emptyset , 1, 2, 21\}, \\ \varGamma _{CC} &{}\hbox {otherwise}. \end{array}\right. } \end{aligned}$$
(71)
Game (71) generalizes the game defined by (32). The set of actions available to the players is now the set of all \(2\times 2\) unitary matrices. The game given by (71) has the same structure of unawareness as the game in Example 3. This fact implies that both games have the same structure of ENE.
Proposition 4
Let \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\) be a game with unawareness defined by (71). The set of extended Nash equilibria is given by the following formula
$$\begin{aligned} (\sigma )_{v} = {\left\{ \begin{array}{ll} \left( \sigma ^c_{1}, \sigma ^c_{2}\right) &{}\quad \mathrm{if}~v\in \{12, 121, 212, \dots \},\\ \left( (V_{1}, V_{3}), \sigma ^c_{2}\right) &{}\quad \mathrm{if}~v\in \{1, 21\},\\ \left( (V_{1}, V_{3}), V_{2}\right) &{}\quad \mathrm{if}~v \in \{\emptyset , 2\}. \end{array}\right. } \end{aligned}$$
(72)
Proof
As in Example 3, the game \(G = \varGamma _{CC}\) meets condition (34). This fact justifies the first piece of (72). Let us justify \((\sigma )_{21}\). From (22) we obtain \((\sigma _{2})_{21} = (\sigma _{2})_{212} = \sigma ^c_{2}\). Turning to \((\sigma _{1})_{21}\), by Definition 3, we need to determine player 1’s best reply to \((\sigma _{2})_{21} = \sigma ^c_{2}\) in \(G_{21} = \varGamma _{QQ}\). Lemma 1 shows that player 1’s optimal strategy against any probability mixture over \(\mathbb {1}\) and \(\sigma _{x}\) is \((V_{1}, V_{3})\). Hence \((\sigma _{1})_{21} = (V_{1}, V_{3})\). Let us examine the strategy profile \((\sigma )_{2}\). Again, we see from (22) that \((\sigma _{1})_{2} = (\sigma _{1})_{21} = (V_{1}, V_{3})\). On the other hand, it follows from Lemma 2 that \(V_{2}\) defined by (54) is player 2’s best reply to \((V_{1}, V_{3})\) in \(G_{2} = \varGamma _{QQ}\). This gives \((\sigma _{2})_{2} = V_{2}\). Similar reasoning applies to the other profiles of (72). \(\square \)
We conclude from Proposition 4 that \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\) given by (71) favors player 2. The ENE generates the best possible payoff for player 2,
$$\begin{aligned} \mathrm {tr}\left( \left( V_{3}V_{2}V_{1}|0\rangle \langle 0| V^{\dagger }_{1}V^{\dagger }_{2}V^{\dagger }_{3}\right) P\right) = -1. \end{aligned}$$
(73)
We now investigate the case where each player perceives the quantum game \(\varGamma _{QQ}\) but finds that the other player considers \(\varGamma _{CC}\). Although a two-component strategy set may seem to be to player 2’s advantage in any zero-sum game (see Theorem 5.44 in [13]), the corresponding ENE does not prejudge the outcome; each player has the chance of getting her most-preferred outcome. To be specific, consider \(\{G_{v}\}\) defined as follows:
$$\begin{aligned} G_{v} = {\left\{ \begin{array}{ll} \varGamma _{QQ} &{}\quad \hbox {if}~v\in \{\emptyset , 1, 2\}, \\ \varGamma _{CC} &{}\quad \hbox {otherwise}. \end{array}\right. } \end{aligned}$$
(74)
We can formulate the following proposition:
Proposition 5
Let \(\{G_{v}\}_{v\in \mathcal {V}_{0}}\) be a game with unawareness defined by (74). The set of extended Nash equilibria is given by the following formula
$$\begin{aligned} (\sigma )_{v} = {\left\{ \begin{array}{ll} ((V_{1}, V_{3}), W_{2}) &{}\quad \mathrm{if}~ v = \emptyset , \\ \left( (V_{1}, V_{3}), \sigma ^c_{2}\right) &{}\quad \mathrm{if}~ v = 1,\\ \left( \sigma ^c_{1}, W_{2}\right) &{}\quad \mathrm{if}~ v = 2, \\ \left( \sigma ^{c}_{1}, \sigma ^{c}_{2}\right) &{}\quad \mathrm{otherwise}.\end{array}\right. } \end{aligned}$$
(75)
Proof
Analysis similar to that in the proof of Proposition 4 shows that \((\sigma )_{v} = (\sigma ^c_{1}, \sigma ^c_{2})\) for \(v\in \{12, 21, 121, 212, \dots \}\) and \((\sigma _{1})_{2} = \sigma ^c_{1}\). By Lemma 3, player 2’s best reply to \(\sigma ^c_{1}\) is given by (64). We thus obtain \((\sigma )_{2} = (\sigma ^c_{1}, W_{2})\). We leave it to the reader to verify the other profiles of (75). \(\square \)
The ENE predicts \((\sigma )_{\emptyset } = ((V_{1}, V_{3}), W_{2})\) in game \(\{G_{v}\}\) given by (74). An easy computation shows that
$$\begin{aligned} \mathrm {tr}\left( \left( V_{3}W_{2}V_{1}|0\rangle \langle 0|V^{\dagger }_{1}W^{\dagger }_{2}V^{\dagger }_{3}\right) P\right) = -\sin \beta \sin \delta . \end{aligned}$$
(76)
According to (75), player 2 predicts that the result of the game is \((\sigma )_{2} = (\sigma ^{c}_{1}, W_{2})\), and so player 2 has no most-preferred values of the parameters \(\beta \) and \(\delta \) in \(W_{2}\). If we assume that \((\beta , \delta )\) is uniformly distributed over \([0, 2\pi ] \times [0, 2\pi ]\), then the expected value of (76) is equal to 0.
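Equation (76) can be verified numerically for randomly drawn parameters (NumPy sketch; `V1` and `W2` encode (40) and (64), the remaining helper names are ours):

```python
import numpy as np

P = np.diag([1., -1.])
ket0 = np.array([1, 0], dtype=complex)
Rz = lambda t: np.diag([np.exp(-1j*t/2), np.exp(1j*t/2)])

def V1(a, g):
    """Player 1's first move, Eq. (40)."""
    M = np.array([[np.exp(-1j*g/2), -1j*np.exp(1j*g/2)],
                  [np.exp(-1j*g/2),  1j*np.exp(1j*g/2)]]) / np.sqrt(2)
    return Rz(a) @ M

def W2(alpha, beta, delta):
    """Player 2's strategy, Eq. (64)."""
    return (np.exp(1j*alpha) / np.sqrt(2)) * np.array(
        [[np.exp(1j*(-beta - delta)/2), -np.exp(1j*(-beta + delta)/2)],
         [np.exp(1j*(beta - delta)/2),   np.exp(1j*(beta + delta)/2)]])

def payoff(U1, U2, U3):
    """Player 1's payoff tr(rho_f P)."""
    psi = U3 @ U2 @ U1 @ ket0
    return float(np.real(np.vdot(psi, P @ psi)))

rng = np.random.default_rng(1)
for _ in range(20):
    a = rng.choice([-np.pi, 0.0])
    g, alpha, beta, delta, alpha3 = rng.uniform(-np.pi, np.pi, 5)
    v1 = V1(a, g)
    v3 = Rz(alpha3) @ v1.conj().T
    # The ENE outcome ((V1, V3), W2) yields -sin(beta) sin(delta):
    assert np.isclose(payoff(v1, W2(alpha, beta, delta), v3),
                      -np.sin(beta) * np.sin(delta))
```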

5 Conclusions

We have shown that the notion of a game with unawareness is a useful tool in studying the quantum PQ Penny Flip game. Differing perceptions of the strategies available in the game call for more sophisticated methods of describing the game and its possible rational outcomes than an ordinary matrix game together with the concept of Nash equilibrium. The examples used in the paper indicate that not only the possibility of using quantum strategies but also the players' incomplete awareness may lead to unpredictable outcomes. This fact undoubtedly sheds new light on quantum game theory.
Our work provides new tools that might be utilized in allied sciences. The results obtained here can be generalized to more complex games and then applied to numerous economic problems formulated in terms of games with unawareness, using the mathematical methods of quantum information. At the same time, such problems will enrich the theory of quantum information with new examples showing the superiority of quantum methods over those of classical information theory.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
2. von Neumann, J., Morgenstern, O.: Theory of Games and Economic Behavior. Princeton University Press, Princeton (1944)
4. Feinberg, Y.: Subjective reasoning—games with unawareness. Research Paper No. 1875, Stanford Graduate School of Business (2004)
5. Feinberg, Y.: Games with Unawareness. Working Paper No. 2122, Stanford Graduate School of Business (2012)
6. Han, Y.J., Zhang, Y.S., Guo, G.C.: Quantum game with incomplete information. Fluct. Noise Lett. 02, L263 (2002)
7. Iqbal, A., Chappell, J.M., Li, Q., Pearce, C.E.M., Abbott, D.: A probabilistic approach to quantum Bayesian games of incomplete information. Quantum Inf. Process. 13, 2783 (2014)
8.
9.
10. Cabello, A., Calsamiglia, J.: Quantum entanglement, indistinguishability, and the absent-minded driver's problem. Phys. Lett. A 336, 441 (2005)
11. Frąckiewicz, P.: Application of the Eisert–Wilkens–Lewenstein quantum game scheme to decision problems with imperfect recall. J. Phys. A Math. Theor. 44, 325304 (2011)
13. Maschler, M., Solan, E., Zamir, S.: Game Theory. Cambridge University Press, Cambridge (2013)
14. Thompson, F.B.: Equivalence of Games in Extensive Form. Research Memorandum RM-759, U.S. Air Force Project Rand, Rand Corporation, Santa Monica, California (1952). Reprinted on pp. 36–45 of Classics in Game Theory, Kuhn, H.W. (ed.), Princeton University Press, Princeton (1997)
15. Osborne, M.J., Rubinstein, A.: A Course in Game Theory. The MIT Press, Cambridge (1994)
16. Shende, V.V., Markov, I.L., Bullock, S.S.: Minimal universal two-qubit controlled-NOT-based circuits. Phys. Rev. A 69, 062321 (2004)
17. Chappell, J.M., Iqbal, A., Lohe, M.A., von Smekal, L.: An analysis of the quantum penny flip game using geometric algebra. J. Phys. Soc. Jpn. 78, 054801 (2009)
18. Balakrishnan, S., Sankaranarayanan, R.: Classical rules and quantum strategies in penny flip game. Quantum Inf. Process. 12, 1261 (2013)
Metadata
Title: Quantum Penny Flip game with unawareness
Author: Piotr Frąckiewicz
Publication date: 01.01.2019
Publisher: Springer US
Published in: Quantum Information Processing, Issue 1/2019
Print ISSN: 1570-0755
Electronic ISSN: 1573-1332
DOI: https://doi.org/10.1007/s11128-018-2111-7
