1 Introduction

In classical logic, the proposition expressed by a sentence is construed as a set of possible worlds, embodying the informative content of the sentence. However, sentences in natural language are not only used to provide information, but also to request information. Thus, natural language semantics requires a logical framework whose notion of meaning embodies both informative and inquisitive content.

A natural starting point–rooted in the seminal work of Hamblin (1973) and Karttunen (1977) on the semantics of questions, and further pursued in recent work on inquisitive semantics (Groenendijk and Roelofsen 2009; Ciardelli 2009; Ciardelli and Roelofsen 2011, among others)–is to construe the proposition expressed by a sentence \(\varphi \), \([\varphi ]\), as a set of possibilities, where each possibility in turn is a set of possible worlds. In uttering \(\varphi \), a speaker can then be taken to provide the information that the actual world is located in at least one of the possibilities in \([\varphi ]\), i.e., in \(\bigcup [\varphi ]\), and at the same time she can be taken to request information from other conversational participants in order to locate the actual world inside a specific possibility in \([\varphi ]\).

For instance, if \([\varphi ] = \{\{w_1,w_2\},\{w_1,w_3\}\}\), as depicted in Fig. 1, then in uttering \(\varphi \), a speaker can be taken to provide the information that the actual world lies in \(\bigcup [\varphi ] = \{w_1,w_2,w_3\}\), and at the same time she can be taken to request enough information to establish that the actual world lies in \(\{w_1,w_2\}\) or to establish that it lies in \(\{w_1,w_3\}\). Thus, propositions defined as sets of possibilities are able to capture both informative and inquisitive content.

Fig. 1 Capturing informative and inquisitive content using sets of possibilities

As soon as we move from the classical notion of propositions as sets of possible worlds to the richer notion of propositions as sets of possibilities, two crucial questions arise. The first question is whether propositions should really be defined as arbitrary sets of possibilities, or whether we should adopt certain constraints on which sets of possibilities form suitable propositions and which don’t. The above discussion indicates that sets of possibilities are sufficient for the purpose at hand, which is to capture informative and inquisitive content. But we do not only want a notion of propositions that is sufficient for the given purpose, we want a notion that is just right. In particular, it should be the case that any two non-identical propositions really differ in informative and/or inquisitive content. Otherwise, we would have two representations for exactly the same content, which means that our notion of propositions would be too fine-grained. We will show that, in order to meet this criterion, propositions should not be defined as arbitrary sets of possibilities, but, instead, as sets of possibilities that are non-empty and downward closed (i.e., if \(\alpha \in [\varphi ]\) and \(\beta {\,\subseteq \,}\alpha \), then \(\beta \in [\varphi ]\) as well). This result is relevant for any Hamblin/Karttunen-style semantic account of questions, no matter whether such an account is cast within the framework of inquisitive semantics or not.

The second question that arises is how the propositions expressed by complex sentences should be defined in a compositional way. In particular, if we limit ourselves to a first-order language, what is the role of connectives and quantifiers in this richer setting? How do we define \([\lnot \varphi ]\), \([\varphi \wedge \psi ]\), \([\varphi \vee \psi ]\), etcetera, in terms of \([\varphi ]\) and \([\psi ]\)?

This issue has been addressed quite extensively in recent work on inquisitive semantics. It has also been addressed in a different setting, namely in work on so-called alternative semantics for disjunction and existentials (Kratzer and Shimoyama 2002; Simons 2005a, b; Alonso-Ovalle 2006, 2008, 2009; Aloni 2007a, b; Menéndez-Benito 2005, 2010, among others). In this framework, sets of possibilities–also known as alternatives–are not primarily used to capture inquisitive content, but rather to characterize the semantic contribution of disjunction and existentials in the process of meaning composition. Even though inquisitive and alternative semantics were motivated by rather different concerns, they essentially coincide in their treatment of disjunction and existentials.

It has been shown that the treatment of the logical constants in inquisitive and alternative semantics makes suitable predictions about the semantic behavior of the corresponding connectives and quantifiers in a variety of typologically unrelated natural languages. However, even though we have thus obtained a much more accurate characterization of the meaning of the relevant connectives and quantifiers in natural language, inquisitive and alternative semantics do not yet provide an explanation for why these constructions have the particular meanings that they have, and why constructions with these particular meanings are so wide-spread across languages.

After all, to justify their treatment of the logical constants, both frameworks directly rely on observations concerning the semantic behavior of the corresponding connectives and quantifiers in natural language. For instance, the treatment of \(\vee \) is justified by observations concerning the word or in English and similar words in other languages. The advantage of this approach is that it provides a very direct link between the formal treatment of the logical constants on the one hand, and intuitions about the natural language expressions that these logical constants are usually associated with on the other hand. Thereby, it immediately brings out the linguistic significance of the two frameworks. However, in order to gain explanatory power, the given treatment of the logical constants should be justified by considerations independent of the linguistic data themselves.

To this end, the present paper develops an inquisitive semantics whose treatment of the logical constants is motivated exclusively by algebraic considerations. Just as classical propositions can be shown to form a complete Boolean algebra, and classical logic can be obtained by associating the basic operations in this algebra with the logical constants, we will show that inquisitive propositions form a complete Heyting algebra, and we will obtain an inquisitive semantics for the language of first-order logic by associating the basic operations in this algebra with the logical constants. Crucially, the justification for the resulting system does not rely in any way on intuitions concerning specific linguistic constructions.

Still, the results of our algebraic enterprise will be highly relevant for natural language semantics, since it is to be expected that natural languages generally have constructions that are used to perform the basic algebraic operations on propositions. For instance, it is natural to expect that languages generally have a word that is used (possibly among other things) to construct the join of two propositions, and another word to construct the meet of two propositions. In English, the words or and and are usually taken to fulfill this purpose. If this general expectation is borne out, then our algebraic semantics does not only provide a precise characterization of the meaning of these words; it also provides an explanation for the ubiquity of words with these particular meanings across languages. Footnote 1

Our algebraic semantics will essentially coincide with the simplest and most well-understood existing implementation of inquisitive semantics, and it will also concur with the treatment of disjunction and existentials in alternative semantics. Thus, our algebraic considerations will indeed converge with the linguistic intuitions that previously played a central role in justifying the treatment of the logical constants, and the main result of our work will not be a wholly new semantics, but rather a more solid foundation for some of the existing proposals.

The paper is structured as follows. Section 2 briefly reviews the algebraic foundations of classical logic; Sect. 3 develops an algebraically motivated inquisitive semantics, discussing its logical properties and significance for natural language semantics; and Sect. 4 concludes.

2 Algebraic foundations of classical logic

To illustrate our approach, let us briefly review the algebraic foundations of classical logic. Footnote 2 Throughout the paper we will assume a set \(W\) of possible worlds as our logical space. In classical logic, the proposition expressed by a sentence \(\varphi \) is a set of possible worlds, embodying the informative content of the sentence. We will denote this set of worlds as \([\varphi ]_c\), where the subscript \(c\) stands for classical. In asserting \(\varphi \), a speaker is taken to provide the information that the actual world is located in \([\varphi ]_c\). Given this way of thinking about propositions, there is a natural entailment order between them: \(A\models _c B\) iff \(A\) is at least as informative as \(B\), i.e., iff \(A\subseteq B\).

This entailment order in turn gives rise to certain algebraic operations on propositions. For instance, for any set of propositions \(\Sigma \), there is a unique proposition that (i) entails all the propositions in \(\Sigma \), and (ii) is entailed by all other propositions that entail all propositions in \(\Sigma \). This proposition is called the greatest lower bound of \(\Sigma \) w.r.t. \(\models _c\), or in algebraic jargon, its meet. It amounts to \(\bigcap \Sigma \) (given the stipulation that \(\bigcap \emptyset = W\)). Similarly, every set of propositions \(\Sigma \) also has a unique least upper bound w.r.t. \(\models _c\), which is called its join, and amounts to \(\bigcup \Sigma \). The existence of meets and joins for arbitrary sets of classical propositions implies that the set of all classical propositions, \(\Pi _c\), together with the entailment order \(\models _c\), forms a complete lattice.

This lattice is bounded. That is, it has a bottom element, \(\bot :=\emptyset \), and a top element, \(\top := W\), such that for every proposition \(A\), we have that \(\bot \models _c A\) and \(A\models _c\top \). Moreover, for every two propositions \(A\) and \(B\), there is a unique weakest proposition \(C\) such that \(A\cap C\models _c B\). This proposition is called the pseudo-complement of \(A\) relative to \(B\). It is denoted as \(A{\,\Rightarrow \,}B\) and amounts to \((W-A)\cup B\). Intuitively, the pseudo-complement of \(A\) relative to \(B\) is the weakest proposition such that if we ‘add’ it to \(A\), we get a proposition that is at least as strong as \(B\). The existence of relative pseudo-complements implies that \(\langle \Pi _c,\models _c\rangle \) forms a Heyting algebra.

If \(A\) is an element of a Heyting algebra, it is customary to refer to \(A^*:= (A{\,\Rightarrow \,}\bot )\) simply as the pseudo-complement of \(A\) (rather than the pseudo-complement of \(A\) relative to \(\bot \)). In the case of \(\langle \Pi _c,\models _c\rangle \), \(A^*\) amounts to \(W-A\). By definition of \({\,\Rightarrow \,}\), we always have that \(A \cap A^* = \bot \). In the specific case of \(\langle \Pi _c,\models _c\rangle \), we also always have that \(A\cup A^*=\top \). This means that \(A^*\) is in fact the Boolean complement of \(A\), and that \(\langle \Pi _c,\models _c\rangle \) forms a Boolean algebra, a special kind of Heyting algebra.

Now, classical propositional logic is obtained by associating the basic algebraic operators, meet, join, and (relative) pseudo-complementation, with the logical constants:

  1. \([\lnot \varphi ] := [\varphi ]^*\)

  2. \([\varphi \wedge \psi ] := [\varphi ]\cap [\psi ]\)

  3. \([\varphi \vee \psi ] := [\varphi ]\cup [\psi ]\)

  4. \([\varphi \rightarrow \psi ] := [\varphi ]{\,\Rightarrow \,}[\psi ]\)

Notice that everything starts with a certain notion of propositions and a natural entailment order on these propositions. This entailment order, then, gives rise to certain basic operations on propositions–meet, join, and relative pseudo-complementation–and classical propositional logic is obtained by associating these basic semantic operations with the logical constants.
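To make these operations concrete, here is a small computational sketch (ours, not part of the original exposition) of the classical algebra over a hypothetical four-world logical space; function names such as relative_pseudo_complement are merely illustrative labels for the operations just defined.

# Classical propositions over a toy logical space: sets of worlds, with
# entailment as inclusion and the basic algebraic operations described above.

W = frozenset({'w1', 'w2', 'w3', 'w4'})

def meet(props):
    """Greatest lower bound: intersection (the meet of the empty set is W)."""
    result = set(W)
    for A in props:
        result &= A
    return frozenset(result)

def join(props):
    """Least upper bound: union (the join of the empty set is the empty set)."""
    result = set()
    for A in props:
        result |= A
    return frozenset(result)

def relative_pseudo_complement(A, B):
    """The weakest C such that A ∩ C entails B, i.e. (W - A) ∪ B."""
    return (W - A) | B

A = frozenset({'w1', 'w2'})
B = frozenset({'w1', 'w3'})
print(sorted(meet([A, B])))                               # ['w1']
print(sorted(join([A, B])))                               # ['w1', 'w2', 'w3']
print(sorted(relative_pseudo_complement(A, B)))           # ['w1', 'w3', 'w4']
print(sorted(relative_pseudo_complement(A, frozenset()))) # ['w3', 'w4']: the Boolean complement of A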

3 Algebraic foundations for inquisitive semantics

Exactly the same strategy can be applied in the inquisitive setting. Only now we will have a richer notion of propositions, and a different entailment order on them, sensitive to both informative and inquisitive content.

3.1 Propositions and entailment

Let us first determine how propositions and entailment should be defined precisely. We will start with the following notion of propositions; this will be refined below, but it forms a natural point of departure.

Definition 1

(Possibilities and propositions).

  • A set of possible worlds \(\alpha {\,\subseteq \,}W\) is called a possibility.

  • A proposition is a non-empty set of possibilities. (to be refined)

Propositions of this kind can be taken to embody informative and inquisitive content in the following way. First, in uttering a sentence that expresses a proposition \(A\), a speaker can be taken to provide the information that the actual world lies in at least one of the possibilities in \(A\), i.e. in \(\bigcup A\). In view of this, we will refer to \(\bigcup A\) as the informative content of \(A\), and denote it as \(\mathsf{info}(A)\).

Definition 2

(Informative content).  \(\mathsf{info}(A) := \bigcup A\)

Second, someone who utters a sentence that expresses a proposition \(A\) can also be taken to request certain information from other conversational participants. Namely, she can be taken to request enough information to locate the actual world in a specific possibility in \(A\), rather than just in the union of all the possibilities that \(A\) consists of.

We will say that a piece of information \(\beta \), modeled as a set of possible worlds, settles a proposition \(A\) just in case it is contained in one of the possibilities \(\alpha \) that \(A\) consists of, which means that it locates the actual world inside that possibility \(\alpha \).

Definition 3

(Settling a proposition). A piece of information \(\beta \) settles a proposition \(A\) if and only if \(\beta {\,\subseteq \,}\alpha \) for some \(\alpha \in A\).

Notice that propositions are defined as non-empty sets of possibilities. This reflects the assumption that for any proposition, there is at least one piece of information that settles that proposition (although there is one proposition, namely \(\{\emptyset \}\), which can only be settled by providing inconsistent information).

Propositions can be ordered in terms of the information that they provide, but also in terms of the information that they request. We say that one proposition \(A\) is at least as informative as another proposition \(B\), \(A{\,\models _\mathsf{info}\,}B\), just in case \(\mathsf{info}(A)\subseteq \mathsf{info}(B)\), as in the classical setting. On the other hand, we say that one proposition \(A\) is at least as inquisitive as another proposition \(B\), \(A{\,\models _\mathsf{inq}\,}B\), iff \(A\) requests at least as much information as \(B\), i.e., iff every piece of information that settles \(A\) also settles \(B\). This means that every possibility in \(A\) must be contained in some possibility in \(B\). Thus, \(A{\,\models _\mathsf{inq}\,}B\) if and only if \(\forall \alpha \in A.~\exists \beta \in B.~ \alpha \subseteq \beta \). These two orders can be combined into one overall entailment order: \(A\models B\) iff both \(A{\,\models _\mathsf{info}\,}B\) and \(A{\,\models _\mathsf{inq}\,}B\).

Definition 4

(Entailment).

  • \(A{\,\models _\mathsf{info}\,}B\)   iff   \(\mathsf{info}(A)\subseteq \mathsf{info}(B)\)

  • \(A{\,\models _\mathsf{inq}\,}B\)    iff   \(\forall \alpha \in A.~\exists \beta \in B.~ \alpha \subseteq \beta \)

  • \(A\models B\)      iff   \(A{\,\models _\mathsf{info}\,}B\) and \(A{\,\models _\mathsf{inq}\,}B\)

Notice that \(A{\,\models _\mathsf{inq}\,}B\) actually implies that \(A{\,\models _\mathsf{info}\,}B\). After all, if every possibility in \(A\) is contained in some possibility in \(B\), then \(\bigcup A\) must also be contained in \(\bigcup B\). Thus, the overall entailment order can be simplified as follows.

Fact 1

(Entailment simplified). \(A\models B\)   iff  \(\forall \alpha \in A.~\exists \beta \in B.~ \alpha \subseteq \beta \)
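Before moving on, here is a small illustrative sketch (ours, over a hypothetical four-world space) of Definitions 2-4 and Fact 1: propositions are represented as sets of frozensets of worlds, and the helper names info, settles and entails simply transcribe the definitions above.

def info(A):
    """Informative content: the union of all possibilities in A (Definition 2)."""
    return frozenset().union(*A)

def settles(beta, A):
    """A piece of information settles A iff it is contained in some possibility in A (Definition 3)."""
    return any(beta <= alpha for alpha in A)

def entails_info(A, B):
    """A is at least as informative as B."""
    return info(A) <= info(B)

def entails_inq(A, B):
    """A is at least as inquisitive as B: every possibility in A is contained in one in B."""
    return all(any(alpha <= beta for beta in B) for alpha in A)

def entails(A, B):
    """Overall entailment; by Fact 1 it coincides with inquisitive entailment."""
    return entails_inq(A, B)

A = {frozenset({'w1', 'w2'}), frozenset({'w1', 'w3'})}   # the proposition of Fig. 1
B = {frozenset({'w1', 'w2', 'w3'})}
print(entails(A, B), entails(B, A))                      # True False
print(settles(frozenset({'w2'}), A))                     # True: {w2} locates the world inside {w1, w2}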

Having established this notion of entailment, we are ready to examine whether our notion of propositions is really appropriate for the purpose at hand. As mentioned in the introduction, we would like any two non-identical propositions to really differ in informative and/or inquisitive content. Or, phrased the other way around, any two propositions \(A\) and \(B\) that are just as informative and just as inquisitive should be identical. In more technical terms, we want our entailment order to be anti-symmetric. That is, whenever \(A\models B\) and \(B\models A\), it should be the case that \(A=B\). We will show that, for propositions defined as arbitrary non-empty sets of possibilities, this is not the case.

Consider the two propositions in Fig. 2. The proposition on the left, \(A\), consists of two possibilities, \(\alpha \) and \(\beta \), while the proposition on the right, \(B\), consists of three possibilities, \(\alpha \), \(\beta \), and \(\gamma \). Thus, these two propositions are not identical. However, they are just as informative and just as inquisitive: \(A\models B\) and \(B\models A\).

Fig. 2 Two non-identical propositions that are equivalent w.r.t. \(\models \)

To see this, first notice that \(\mathsf{info}(A)\) and \(\mathsf{info}(B)\), i.e., the union of the possibilities in \(A\) and the union of the possibilities in \(B\), clearly coincide. Thus, \(A\) and \(B\) are just as informative. To see that \(A\) and \(B\) also request just as much information, consider a piece of information that settles \(A\). Such a piece of information must either provide the information that the actual world lies in \(\alpha \) or it must provide the information that the actual world lies in \(\beta \). But that means that it also settles \(B\). And vice versa, any piece of information that settles \(B\) also settles \(A\). Thus, \(A\) and \(B\) are also just as inquisitive.

This shows that, as long as we are interested in capturing only informative and inquisitive content, our notion of propositions as arbitrary sets of possibilities is not quite appropriate. Rather, we would like to have a more restricted notion, such that any two non-identical propositions really differ in informative and/or inquisitive content. Footnote 3

To this end, we will define propositions as non-empty, downward closed sets of possibilities.

Definition 5

(Propositions as downward closed sets of possibilities).

  • A set of possibilities \(A\) is downward closed if and only if for every \(\alpha \in A\) and every \(\beta {\,\subseteq \,}\alpha \), we also have that \(\beta \in A\).

  • Propositions are non-empty, downward closed sets of possibilities.

We will use \(\Pi \) to denote the set of all propositions. To see that downward closedness is a natural constraint on propositions in the present setting, consider the following. We are conceiving of propositions as sets of possibilities, and these possibilities determine what it takes to settle a given proposition. Thus far, we have been assuming the following relationship between the pieces of information that settle a proposition \(A\) and the possibilities that \(A\) consists of: a piece of information \(\beta \) settles \(A\) iff it is contained in some possibility \(\alpha \in A\). But we could just as well assume a more direct relationship between the possibilities in \(A\) and the pieces of information that settle \(A\). Namely, we could simply think of the possibilities in \(A\) as corresponding precisely to the pieces of information that settle \(A\). But if we conceive of the possibilities in a proposition in this way, we are immediately forced to define propositions as downward closed sets of possibilities. After all, if \(\alpha \in A\), then, given the assumed conception of possibilities, \(\alpha \) is a piece of information that settles \(A\); but then any stronger piece of information \(\beta \subset \alpha \) also settles \(A\), and this means, again given the assumed conception of possibilities, that any \(\beta \subset \alpha \) must also be in \(A\).

Given this more restricted notion of propositions as non-empty, downward closed sets of possibilities, the characterization of \(\models \) can be further simplified. We said above that \(A\models B\) iff every piece of information that settles \(A\) also settles \(B\). Given our new conception of propositions, this simply amounts to inclusion: \(A{\,\subseteq \,}B\).

Fact 2

(Entailment further simplified). \(A\models B\)  iff  \(A{\,\subseteq \,}B\)

From this characterization it immediately follows that \(\models \) forms a partial order over \(\Pi \). This implies in particular that \(\models \) is anti-symmetric, which means that every two non-identical propositions really differ in informative and/or inquisitive content, as desired.
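The following small sketch (ours; the concrete possibilities are hypothetical, chosen only to mimic the configuration of Fig. 2) illustrates how downward closure resolves the anti-symmetry problem: two distinct sets of possibilities that are equivalent w.r.t. \(\models \) collapse into one and the same downward closed proposition.

from itertools import combinations

def downward_closure(A):
    """All subsets of the possibilities in A (Definition 5)."""
    closed = set()
    for alpha in A:
        for r in range(len(alpha) + 1):
            closed.update(frozenset(c) for c in combinations(alpha, r))
    return closed

alpha = frozenset({'w1', 'w2'})
beta  = frozenset({'w3', 'w4'})
gamma = frozenset({'w1'})            # contained in alpha, so it adds no new way of settling

A = {alpha, beta}
B = {alpha, beta, gamma}
print(A == B)                                        # False: distinct sets of possibilities
print(downward_closure(A) == downward_closure(B))    # True: the same downward closed proposition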

3.2 Algebraic operations

The next step is to see what kind of algebraic operations \(\models \) gives rise to. It turns out that, just as in the classical setting, any set of propositions \(\Sigma \) has a unique greatest lower bound (meet) and a unique least upper bound (join) w.r.t. \(\models \).

Fact 3

(Meet). For any set of propositions \(\Sigma \), \(\bigcap \Sigma \) is the meet of \(\Sigma \) w.r.t. \(\models \) (assuming that \(\bigcap \emptyset = \wp (W)\)).

Proof

First, let us show that \(\bigcap \Sigma \) is a proposition. If \(\Sigma =\emptyset \) then \(\bigcap \Sigma = \wp (W)\), which is indeed a proposition. If \(\Sigma \ne \emptyset \) then \(\bigcap \Sigma \) must contain \(\emptyset \), since all elements of \(\Sigma \) are non-empty and downward closed, which means that they must contain \(\emptyset \). So \(\bigcap \Sigma \) is non-empty. To see that it is also downward closed, suppose that \(\alpha \in \bigcap \Sigma \). Then \(\alpha \) must be in every proposition in \(\Sigma \). But then every \(\beta {\,\subseteq \,}\alpha \) must also be included in every proposition in \(\Sigma \), and therefore in \(\bigcap \Sigma \). So \(\bigcap \Sigma \) is indeed downward closed. Next, note that \(\bigcap \Sigma \models A\) for any \(A\in \Sigma \), which means that \(\bigcap \Sigma \) is a lower bound of \(\Sigma \). What remains to be shown is that \(\bigcap \Sigma \) is the greatest lower bound of \(\Sigma \). That is, for every \(B\) that is a lower bound of \(\Sigma \), we must show that \(B\models \bigcap \Sigma \). To see this let \(B\) be a lower bound of \(\Sigma \), and let \(\beta \) be a possibility in \(B\). Then, since \(B\models A\) for any \(A\in \Sigma \), \(\beta \) must be included in any \(A\in \Sigma \). But then \(\beta \) must also be in \(\bigcap \Sigma \). Thus, \(B\models \bigcap \Sigma \), which is exactly what we set out to show. So \(\bigcap \Sigma \) is indeed the greatest lower bound of \(\Sigma \). \(\square \)

Fact 4

(Join). For any set of propositions \(\Sigma \), \(\bigcup \Sigma \) is the join of \(\Sigma \) w.r.t. \(\models \) (assuming that \(\bigcup \emptyset = \{\emptyset \}\)).

Proof

We omit the proof that \(\bigcup \Sigma \) is a proposition. For any \(A\in \Sigma \), \(A\models \bigcup \Sigma \), which means that \(\bigcup \Sigma \) is an upper bound of \(\Sigma \). What remains to be shown is that \(\bigcup \Sigma \) is the least upper bound of \(\Sigma \). That is, for every \(B\) that is an upper bound of \(\Sigma \), we must show that \(\bigcup \Sigma \models B\). To see this let \(B\) be an upper bound of \(\Sigma \), and \(\alpha \) a possibility in \(\bigcup \Sigma \). Then \(\alpha \) must be in some proposition \(A\in \Sigma \). But then, since \(A\models B\), \(\alpha \) must also be in \(B\). And this establishes that \(\bigcup \Sigma \models B\), which is what we set out to show. Thus, \(\bigcup \Sigma \) is indeed the least upper bound of \(\Sigma \). \(\square \)
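As a concrete instance of Facts 3 and 4 (ours, over a hypothetical three-world space): the meet and join of two downward closed propositions are just their intersection and union, and both are again non-empty and downward closed.

from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

A = powerset(frozenset({'w1', 'w2'}))                           # settled by locating the world in {w1, w2}
B = powerset(frozenset({'w1'})) | powerset(frozenset({'w3'}))   # settled by locating it in {w1} or in {w3}

print(sorted(map(sorted, A & B)))    # [[], ['w1']]: the meet is settled exactly by what settles both
print(sorted(map(sorted, A | B)))    # the join is settled exactly by what settles at least one of the two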

The existence of meets and joins for arbitrary sets of propositions implies that \(\langle \Pi ,\models \rangle \) forms a complete lattice. And again, this lattice is bounded, i.e., there is a bottom element, \(\bot :=\{\emptyset \}\), and a top element, \(\top :=\wp (W)\). Finally, as in the classical setting, for every two propositions \(A\) and \(B\), there is a unique weakest proposition \(C\) such that \(A\cap C\models B\). Recall that this proposition, which is called the pseudo-complement of \(A\) relative to \(B\), can be characterized intuitively as the weakest proposition such that if we add it to \(A\), we get a proposition that is at least as strong as \(B\). The only thing that has changed with respect to the classical setting is that strength is now measured both in terms of informative content and in terms of inquisitive content.

Definition 6

For any two propositions \(A\) and \(B\):

$$\begin{aligned} A{\,\Rightarrow \,}B ~:=~ \{ \alpha \mid \text{ for } \text{ every } \beta {\,\subseteq \,}\alpha , \text{ if } \beta \in A \text{ then } \beta \in B \} \end{aligned}$$

Fact 5

(Relative pseudo-complement). For any two propositions \(A\) and \(B\), \(A{\,\Rightarrow \,}B\) is the pseudo-complement of \(A\) relative to \(B\).

Proof

We omit the proof that \(A{\,\Rightarrow \,}B\) is a proposition. To see that \(A\cap (A{\,\Rightarrow \,}B)\models B\), let \(\alpha \) be a possibility in \(A\cap (A{\,\Rightarrow \,}B)\). Then \(\alpha \) is both in \(A\) and in \(A{\,\Rightarrow \,}B\). Since \(\alpha \in A{\,\Rightarrow \,}B\) and \(\alpha {\,\subseteq \,}\alpha \), it must in particular be the case that if \(\alpha \in A\) then also \(\alpha \in B\). But we know that \(\alpha \in A\). So \(\alpha \) must also be in \(B\). This establishes that \(A\cap (A{\,\Rightarrow \,}B)\models B\).

It remains to be shown that \(A\!{\,\Rightarrow \,}\! B\) is the weakest proposition \(C\) such that \(A\cap C\!\models \! B\). In other words, we must show that for any proposition \(C\) such that \(A\cap C\models B\), it holds that \(C\models (A{\,\Rightarrow \,}B)\). To see this, let \(C\) be a proposition such that \(A\cap C\models B\) and let \(\alpha \) be a possibility in \(C\). Towards a contradiction, suppose that \(\alpha \not \in (A{\,\Rightarrow \,}B)\). Then there must be some \(\beta {\,\subseteq \,}\alpha \) such that \(\beta \in A\) and \(\beta \not \in B\). Since \(C\) is downward closed, \(\beta \in C\). But that means that \(\beta \) is in \(A\cap C\), while \(\beta \not \in B\). Thus \(A\cap C\not \models B\), contrary to what we assumed. So \(A{\,\Rightarrow \,}B\) is indeed the pseudo-complement of \(A\) relative to \(B\). \(\square \)

The existence of relative pseudo-complements implies that \(\langle \Pi ,\models \rangle \) forms a Heyting algebra. Recall that in a Heyting algebra, \(A^*:= (A{\,\Rightarrow \,}\bot )\) is referred to as the pseudo-complement of \(A\). In the specific case of \(\langle \Pi ,\models \rangle \), pseudo-complements can be characterized as follows.

Fact 6

(Pseudo-complement). For any proposition \(A\):

$$\begin{aligned} A^* = \{\beta \mid \beta \cap \bigcup A = \emptyset \} \end{aligned}$$

Thus, \(A^*\) consists of all the possibilities that are disjoint from \(\bigcup A\). This means that a piece of information settles \(A^*\) just in case it locates the actual world outside \(\bigcup A\).
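For concreteness, here is a small computation (ours, over a hypothetical three-world space) of the relative pseudo-complement of Definition 6 and the pseudo-complement of Fact 6; the helper names rpc and star are merely illustrative.

from itertools import combinations

W = frozenset({'w1', 'w2', 'w3'})
BOTTOM = {frozenset()}

def powerset(s):
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

def rpc(A, B):
    """A ⇒ B per Definition 6: possibilities all of whose subsets that are in A are also in B."""
    return {alpha for alpha in powerset(W)
            if all(beta not in A or beta in B for beta in powerset(alpha))}

def star(A):
    """The pseudo-complement A* := A ⇒ ⊥."""
    return rpc(A, BOTTOM)

A = powerset(frozenset({'w1', 'w2'}))    # a non-inquisitive proposition with info(A) = {w1, w2}
assert star(A) == {beta for beta in powerset(W) if not (beta & frozenset({'w1', 'w2'}))}   # Fact 6
print(sorted(map(sorted, star(A))))      # [[], ['w3']]: the possibilities disjoint from info(A)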

So far, then, everything works out just as in the classical setting. However, unlike in the classical setting, the pseudo-complement of a proposition is not always its Boolean complement. In fact, most propositions in \(\langle \Pi ,\models \rangle \) do not have a Boolean complement at all. To see this, suppose that \(A\) and \(B\) are Boolean complements. This means that (i) \(A\cap B = \bot \) and (ii) \(A\cup B = \top \). Condition (ii) can only be fulfilled if \(W\) is contained in either \(A\) or \(B\). Suppose \(W\in A\). Then, since \(A\) is downward closed, \(A=\wp (W)=\top \). But then, in order to satisfy condition (i), we must have that \(B = \{\emptyset \} = \bot \). So the only two elements of our algebra that have a Boolean complement are \(\top \) and \(\bot \). This implies that \(\langle \Pi ,\models \rangle \) does not form a Boolean algebra.

Thus, starting with a new notion of propositions and an entailment order on these propositions that takes both informative and inquisitive content into account, we have established an algebraic structure with three basic operations, meet, join, and relative pseudo-complementation. The only difference with the algebraic structure obtained in the classical setting is that, apart from the extremal elements of the algebra, propositions do not have Boolean complements. However, as in the classical setting, every proposition does have a pseudo-complement.

3.3 Connectives

Now suppose that we have a language \(L\), whose sentences express the kind of propositions considered here. Then it is natural to assume that this language has certain sentential connectives which semantically behave like meet, join, and (relative) pseudo-complement operators. Below we define a semantics for the language of propositional logic, \(L_P\), that has exactly these characteristics: conjunction behaves semantically as a meet operator, disjunction behaves as a join operator, negation as a pseudo-complement operator, and implication as a relative pseudo-complement operator. The semantics assumes a valuation function which assigns a truth-value to every atomic sentence in every world. For any atomic sentence \(p\), the set of worlds where \(p\) is true is denoted by \(|p|\).

Definition 7

(An algebraic inquisitive semantics for \(L_P\)).

  1. \([p] := \wp (\, |p| \,)\)

  2. \([\lnot \varphi ] := [\varphi ]^*\)

  3. \([\varphi \wedge \psi ] := [\varphi ]\cap [\psi ]\)

  4. \([\varphi \vee \psi ] := [\varphi ]\cup [\psi ]\)

  5. \([\varphi \rightarrow \psi ] := [\varphi ]{\,\Rightarrow \,}[\psi ]\)

The clauses for the logical constants are completely determined by our algebraic considerations. Notice, however, that these considerations do not dictate a particular treatment of atomic sentences. We assume that in uttering an atomic sentence \(p\), a speaker provides the information that \(p\) is true, and does not request any further information from other participants. This assumption is directly reflected by the clause for atomic sentences given above, which defines \([p]\) as the set of all possibilities containing only worlds where \(p\) is true.
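To see the clauses in action, here is a toy implementation (ours) of Definition 7 over the two-atom logical space that will be used in Fig. 3 below, with worlds '11', '10', '01', '00' recording the truth values of \(p\) and \(q\); formulas are encoded as nested tuples such as ('or', 'p', 'q'), a purely illustrative choice.

from itertools import combinations

W = frozenset({'11', '10', '01', '00'})

def powerset(s):
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

def truth_set(p):
    """|p|: the worlds where the atom is true."""
    return frozenset(w for w in W if w[{'p': 0, 'q': 1}[p]] == '1')

def rpc(A, B):
    """Relative pseudo-complement, as in Definition 6."""
    return {a for a in powerset(W) if all(b not in A or b in B for b in powerset(a))}

def prop(phi):
    """[phi] according to the clauses of Definition 7."""
    if isinstance(phi, str):
        return powerset(truth_set(phi))                # atoms
    op = phi[0]
    if op == 'not':
        return rpc(prop(phi[1]), {frozenset()})        # pseudo-complement
    if op == 'and':
        return prop(phi[1]) & prop(phi[2])             # meet
    if op == 'or':
        return prop(phi[1]) | prop(phi[2])             # join
    if op == 'impl':
        return rpc(prop(phi[1]), prop(phi[2]))         # relative pseudo-complement

p_or_q = prop(('or', 'p', 'q'))
print(max(len(a) for a in p_or_q))                     # 2: the two maximal possibilities |p| and |q|
print(frozenset({'11', '10', '01'}) in p_or_q)         # False: info(p ∨ q) does not itself settle p ∨ q

Of course, enumerating the full powerset in this way is only feasible for very small logical spaces; the sketch is meant as a check on the definitions, not as a practical implementation.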

3.4 Quantifiers

The approach taken here can straightforwardly be extended to obtain an inquisitive semantics for the language of first-order logic, \(L_{FO}\). The proposition expressed by a universally quantified formula \(\forall x.\varphi \), relative to an assignment \(g\), can be defined as the meet of all the propositions that \(\varphi \) expresses relative to assignment functions that differ from \(g\) at most in the value that they assign to \(x\). And similarly, the proposition expressed by an existentially quantified formula \(\exists x.\varphi \), relative to \(g\), can be defined as the join of all the propositions that \(\varphi \) expresses relative to assignment functions that differ from \(g\) at most in the value that they assign to \(x\).

As usual, the semantics for \(L_{FO}\) assumes a domain of individuals \(D\) and a world-dependent interpretation function \(I_w\) that maps every individual constant \(c\) to some individual in \(D\) and every \(n\)-place predicate symbol \(R\) to a set of \(n\)-tuples of individuals in \(D\). Formulas are interpreted relative to an assignment function \(g\), which maps every variable \(x\) to some individual in \(D\). For every individual constant \(c\), \([c]_{w,g} = I_w(c)\) and for every variable \(x\), \([x]_{w,g} = g(x)\). An atomic sentence \(Rt_1\ldots t_n\) is true in a world \(w\) relative to an assignment \(g\) iff \(\langle [t_1]_{w,g},\ldots ,[t_n]_{w,g}\rangle \in I_w(R)\). Given an assignment \(g\), the set of worlds \(w\) such that \(Rt_1\ldots t_n\) is true in \(w\) relative to \(g\) is denoted by \(|Rt_1\ldots t_n|_g\).

Definition 8

(An algebraic inquisitive semantics for \(L_{FO}\)).

  1. \([Rt_1\ldots t_n]_g := \wp (~ |Rt_1\ldots t_n|_g ~)\)

  2. \([\lnot \varphi ]_g := [\varphi ]_g^*\)

  3. \([\varphi \wedge \psi ]_g := [\varphi ]_g\cap [\psi ]_g\)

  4. \([\varphi \vee \psi ]_g := [\varphi ]_g\cup [\psi ]_g\)

  5. \([\varphi \rightarrow \psi ]_g := [\varphi ]_g{\,\Rightarrow \,}[\psi ]_g\)

  6. \([\forall x.\varphi ]_g := \bigcap _{d\in D}~ [\varphi ]_{g[x/d]}\)

  7. \([\exists x.\varphi ]_g := \bigcup _{d\in D}~ [\varphi ]_{g[x/d]}\)
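As a tiny first-order illustration (ours; the model, with a two-element domain and a single one-place predicate, is purely hypothetical), the clauses just given make \(\exists x.Px\) the join of the propositions expressed by its instances, and \(\forall x.Px\) their meet.

from itertools import combinations

W = frozenset({'ab', 'a', 'b', ''})    # each world records which individuals satisfy P

def powerset(s):
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

def P(d):
    """[Px] relative to g[x/d]: all possibilities consisting of worlds where d satisfies P."""
    return powerset(frozenset(w for w in W if d in w))

exists_Px = P('a') | P('b')                            # the join over the domain {a, b}
forall_Px = P('a') & P('b')                            # the meet over the domain {a, b}

print(max(len(a) for a in exists_Px))                  # 2: two maximal possibilities, one per instance
print(sorted(max(forall_Px, key=len)))                 # ['ab']: the single maximal possibility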

Given its algebraic characterization, the status of this system among logical frameworks for the semantic treatment of informative and inquisitive content is precisely the same as that of classical first-order logic among logical frameworks for the semantic treatment of purely informative content. In this sense, the system may be regarded as the most basic inquisitive semantics. Just like classical logic in the purely informative setting, the system provides a suitable framework for the formulation and comparison of different theories of inquisitive constructions in natural language, and a common starting point for the development of even richer logical frameworks dealing with aspects of meaning that go beyond purely informative and inquisitive content (e.g. presuppositional aspects of meaning). We will therefore refer to the system as \(\mathsf {Inq}_\mathsf{B}\), where B stands for basic.

In the remainder of the paper we will relate \(\mathsf {Inq}_\mathsf{B}\) to earlier work on inquisitive semantics, identify its basic logical properties, and discuss its significance for natural language semantics.

3.5 Propositions and support

In previous work on inquisitive semantics, a number of different systems have been considered. We will focus here on the simplest and most well-understood system, where the proposition expressed by a sentence is defined in terms of the notion of support (just as, in the classical setting, the proposition expressed by a sentence is usually defined in terms of truth). Support is a relation between sentences and information states (relativized to an assignment function in the first-order setting). Information states are modeled as sets of possible worlds (where worlds are valuation functions in the propositional setting, and first-order models in the first-order setting). Support for \(L_{FO}\) is defined recursively as follows. Footnote 4

Definition 9

(First-order support).

  1. \(s\models _{g}Rt_1\ldots t_n\)    iff    \(s{\,\subseteq \,}|Rt_1\ldots t_n|_g\)

  2. \(s\models _{g}\lnot \varphi \)    iff    \(\forall t\subseteq s:\) if \(t\ne \emptyset \) then \(t\not \models _{g}\varphi \)

  3. \(s\models _{g}\varphi \wedge \psi \)    iff    \(s\models _{g}\varphi \) and \(s\models _{g}\psi \)

  4. \(s\models _{g}\varphi \vee \psi \)    iff    \(s\models _{g}\varphi \) or \(s\models _{g}\psi \)

  5. \(s\models _{g}\varphi \rightarrow \psi \)    iff    \(\forall t\subseteq s:\) if \(t\models _{g}\varphi \) then \(t\models _{g}\psi \)

  6. \(s\models _{g}\forall x.\varphi \)    iff    \(s{\,\models \,}_{g[x/d]}\varphi \) for every \(d\in D\)

  7. \(s\models _{g}\exists x.\varphi \)    iff    \(s{\,\models \,}_{g[x/d]}\varphi \) for some \(d\in D\)

Now, it turns out that there is a very close connection between the information states that support a formula \(\varphi \), relative to an assignment \(g\), and the proposition \([\varphi ]_g\) that \(\varphi \) expresses relative to \(g\) in \(\mathsf {Inq}_\mathsf{B}\). Namely, the proposition expressed by \(\varphi \) relative to \(g\) in \(\mathsf {Inq}_\mathsf{B}\) is precisely the set of all states that support \(\varphi \) relative to \(g\).

Fact 7

(Propositions and support). For any formula \(\varphi \in L_{FO}\), state \(s\), and assignment \(g\):

$$\begin{aligned} s\models _{g}\varphi ~~\Longleftrightarrow ~~ s\in [\varphi ]_g \end{aligned}$$
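This correspondence is easy to test mechanically. The following sketch (ours) implements the propositional clauses of Definition 9 alongside the clauses of Definition 7 over a hypothetical two-atom space, and checks that the supporting states of a formula are exactly the possibilities in the proposition it expresses.

from itertools import combinations

W = frozenset({'11', '10', '01', '00'})

def powerset(s):
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

def truth_set(p):
    return frozenset(w for w in W if w[{'p': 0, 'q': 1}[p]] == '1')

def rpc(A, B):
    return {a for a in powerset(W) if all(b not in A or b in B for b in powerset(a))}

def prop(phi):
    """[phi] per Definition 7."""
    if isinstance(phi, str):
        return powerset(truth_set(phi))
    op, *args = phi
    if op == 'not':
        return rpc(prop(args[0]), {frozenset()})
    if op == 'and':
        return prop(args[0]) & prop(args[1])
    if op == 'or':
        return prop(args[0]) | prop(args[1])
    return rpc(prop(args[0]), prop(args[1]))           # 'impl'

def supports(s, phi):
    """The propositional clauses of Definition 9."""
    if isinstance(phi, str):
        return s <= truth_set(phi)
    op, *args = phi
    if op == 'not':
        return all(not supports(t, args[0]) for t in powerset(s) if t)
    if op == 'and':
        return supports(s, args[0]) and supports(s, args[1])
    if op == 'or':
        return supports(s, args[0]) or supports(s, args[1])
    return all(supports(t, args[1]) for t in powerset(s) if supports(t, args[0]))   # 'impl'

for phi in ['p', ('or', 'p', 'q'), ('impl', 'p', 'q'), ('not', ('and', 'p', 'q'))]:
    assert prop(phi) == {s for s in powerset(W) if supports(s, phi)}
print('supporting states and possibilities coincide on the sample formulas')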

This result tells us that \(\mathsf {Inq}_\mathsf{B}\) essentially coincides with the existing support-based system. It must be noted that in most presentations of the support-based system, the proposition expressed by a sentence is defined as the set of maximal states supporting the sentence, rather than the set of all supporting states. Footnote 5 However, central logical notions like entailment and equivalence are directly defined in terms of support, which means that the logic that the two systems give rise to is exactly the same. Thus, all the logical results obtained for the support-based system immediately carry over to our algebraic system. In particular, we can import the following completeness result (Ciardelli 2009; Ciardelli and Roelofsen 2009, 2011). Footnote 6

Theorem 1

(Completeness theorem). Let \(\Phi \) be a set of sentences and \(\psi \) a sentence, all in \(L_P\). Then \(\Phi \) entails \(\psi \) in \(\mathsf {Inq}_\mathsf{B}\) if and only if \(\psi \) can be derived from \(\Phi \) using modus ponens as the only inference rule, and the following axioms:

  • All axioms for intuitionistic logic.

  • Kreisel-Putnam: \((\lnot \varphi \rightarrow \psi \vee \chi ) \longrightarrow (\lnot \varphi \rightarrow \psi )\vee (\lnot \varphi \rightarrow \chi )\)

  • Atomic double negation: \(\lnot \lnot p\rightarrow p\) (only for atomic \(p\))

Note that \(\mathsf {Inq}_\mathsf{B}\) is stronger than intuitionistic logic. Namely, besides the axioms of intuitionistic logic, which are valid on any Heyting algebra (Troelstra and van Dalen 1988), it also validates the Kreisel-Putnam axiom and the law of double negation for atomic sentences. The latter is evidently connected to the treatment of atomic sentences in \(\mathsf {Inq}_\mathsf{B}\). Recall that our algebraic considerations did not dictate a particular treatment of atomic sentences. We defined the proposition expressed by an atomic sentence \(p\) as the set of all possibilities consisting of worlds where \(p\) is true, reflecting the assumption that in uttering \(p\), a speaker provides the information that \(p\) is true, and does not request any further information from other participants. This particular treatment of atomic sentences results in the validity of \(\lnot \lnot p\rightarrow p\).

The validity of the Kreisel-Putnam axiom is connected to the fact that the space of propositions in \(\mathsf {Inq}_\mathsf{B}\) actually forms a specific kind of Heyting algebra. This additional structure is not directly relevant for the purposes of this paper, but clearly plays a crucial role in comparing the logic of \(\mathsf {Inq}_\mathsf{B}\) with intuitionistic logic. Ciardelli (2009) and Ciardelli and Roelofsen (2009, 2011) pursue such a comparison in more detail.

In the next two subsections we will introduce some additional notions, and highlight some specific features of \(\mathsf {Inq}_\mathsf{B}\). In doing so, we will mostly restrict our attention to the propositional setting. Everything we will say also applies to the first-order system, but formulating things in the first-order setting is a bit more cumbersome, because everything needs to be relativized to assignment functions.

3.6 Informativeness and inquisitiveness

Recall that we defined the informative content of a proposition \(A\), \(\mathsf{info}(A)\), as the union of all the possibilities in \(A\). Derivatively, we will say that the informative content of a sentence \(\varphi \), \(\mathsf{info}(\varphi )\), is the informative content of the proposition that it expresses, i.e., \(\bigcup [\varphi ]\).

It can be shown that the informative content of a sentence \(\varphi \) in \(\mathsf {Inq}_\mathsf{B}\) always coincides with the proposition \([\varphi ]_c\) expressed by that sentence in classical logic (see, e.g., Ciardelli and Roelofsen 2011, p. 62). This means that \(\mathsf {Inq}_\mathsf{B}\) forms a conservative extension of classical logic, in the sense that it leaves the treatment of informative content untouched.

Fact 8

(The treatment of informative content in \(\mathsf {Inq}_\mathsf{B}\) is classical). For any sentence \(\varphi \): \(\mathsf{info}(\varphi ) = [\varphi ]_c\).

We will say that a sentence \(\varphi \) is informative just in case its informative content does not cover the entire logical space, i.e., iff \(\mathsf{info}(\varphi )\ne W\). On the other hand, we will say that \(\varphi \) is inquisitive just in case accepting \(\mathsf{info}(\varphi )\) is not sufficient to settle \([\varphi ]\), i.e., iff \(\mathsf{info}(\varphi )\not \in [\varphi ]\). In uttering an inquisitive sentence, a speaker does not just ask other participants to accept the information that she herself provides in uttering that sentence, but also to supply additional information.

Definition 10

(Informative and inquisitive sentences).

  • \(\varphi \) is informative iff \(\mathsf{info}(\varphi )\ne W\)

  • \(\varphi \) is inquisitive iff \(\mathsf{info}(\varphi )\not \in [\varphi ]\)

In terms of these notions of informativeness and inquisitiveness, we define questions, assertions, hybrids, and tautologies as follows.

Definition 11

(Questions, assertions, hybrids, and tautologies).

  • \(\varphi \) is a question iff it is non-informative

  • \(\varphi \) is an assertion iff it is non-inquisitive

  • \(\varphi \) is hybrid iff it is both informative and inquisitive

  • \(\varphi \) is a tautology iff it is neither informative nor inquisitive

Recall that in the classical setting, a sentence is a tautology just in case it is non-informative. In \(\mathsf {Inq}_\mathsf{B}\), sentences can be meaningful by being informative, but also by being inquisitive. Thus, it is natural that in order to count as a tautology in \(\mathsf {Inq}_\mathsf{B}\), a sentence has to be neither informative nor inquisitive.

Notice that a question is tautological just in case it is non-inquisitive, and an assertion is tautological just in case it is non-informative. Thus, sentences that are neither informative nor inquisitive count both as tautological assertions and as tautological questions.

It can be shown that a sentence is tautological just in case it expresses the proposition \(\wp (W)\), which is the top element of our algebra.

Fact 9

(Tautologies express the top element of the algebra).

  • \(\varphi \) is a tautology iff \([\varphi ] = \top = \wp (W)\)
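The following sketch (ours) applies Definitions 10 and 11 and Fact 9 directly to hand-built propositions over a hypothetical two-atom space; the hybrid example anticipates the proposition expressed by \(p\vee q\) discussed in the next subsection.

from itertools import combinations

W = frozenset({'11', '10', '01', '00'})

def powerset(s):
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

def info(A):
    return frozenset().union(*A)

def informative(A):
    return info(A) != W

def inquisitive(A):
    return info(A) not in A

def classify(A):
    """Questions, assertions, hybrids, and tautologies, per Definition 11."""
    if not informative(A) and not inquisitive(A):
        return 'tautology'
    if not informative(A):
        return 'question'
    if not inquisitive(A):
        return 'assertion'
    return 'hybrid'

P, Q = frozenset({'11', '10'}), frozenset({'11', '01'})
print(classify(powerset(P)))                     # assertion: informative but not inquisitive
print(classify(powerset(P) | powerset(Q)))       # hybrid: the proposition expressed by p ∨ q
print(classify(powerset(P) | powerset(W - P)))   # question: non-informative but inquisitive (the polar question ?p)
print(classify(powerset(W)))                     # tautology: the top element, as in Fact 9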

3.7 Disjunction, existentials, and inquisitiveness

\(\mathsf {Inq}_\mathsf{B}\) crucially differs from classical logic in its treatment of disjunction. This is illustrated in Fig. 3a and 3b. These figures assume a propositional language with just two atomic sentences, \(p\) and \(q\); world 11 makes both \(p\) and \(q\) true, world 10 makes \(p\) true and \(q\) false, etcetera. Figure 3a depicts the classical meaning of \(p\vee q\): the set of all worlds that make \(p\) or \(q\) true. Figure 3b depicts the proposition expressed by \(p\vee q\) in \(\mathsf {Inq}_\mathsf{B}\). For visual clarity, we have only depicted the maximal possibilities in \([p\vee q]\): the possibility that consists of all worlds where \(p\) is true, and the possibility that consists of all worlds where \(q\) is true. Since \(\mathsf{info}(p\vee q)\) does not cover the entire logical space, \(p\vee q\) is informative; and since \(\mathsf{info}(p\vee q)\not \in [p\vee q]\), it is also inquisitive. So \(p\vee q\) is an example of a hybrid sentence.

Fig. 3 The proposition expressed by \(p\vee q\) in classical logic and in \(\mathsf {Inq}_\mathsf{B}\)

This example shows that disjunction is a source of inquisitiveness. It turns two atomic, non-inquisitive sentences into an inquisitive sentence. In the first-order setting, existential quantification behaves in a similar way and is also a source of inquisitiveness. It can in fact be shown that disjunction and existential quantification are the only sources of inquisitiveness in \(L_{FO}\) (see, e.g., Ciardelli and Roelofsen 2011, p. 62).

Fact 10

(Disjunction, existentials, and inquisitiveness). If a sentence in \(L_{FO}\) does not contain disjunction or existential quantification then it is not inquisitive. Footnote 7

As mentioned in the introduction, a treatment of disjunction and existentials as introducing sets of possibilities has not only been developed in inquisitive semantics but also in alternative semantics (Kratzer and Shimoyama 2002; Simons 2005a, b; Alonso-Ovalle 2006, 2008, 2009; Aloni 2007a, b; Menéndez-Benito 2005, 2010, among others). This treatment has been motivated by a number of empirical phenomena, including free choice inferences, exclusivity implicatures, and conditionals with disjunctive antecedents. The proposed analysis of disjunction and indefinites led to new accounts of these phenomena that improved considerably on earlier analyses. However, as mentioned in the introduction as well, no motivation has so far been provided for this alternative treatment of disjunction and existentials independently of the linguistic phenomena at hand. Moreover, the treatment of disjunction and existentials in alternative semantics has been presented as a real alternative to the classical treatment of these logical constants as join operators. It seems, then, that anyone adopting the proposed alternative treatment of disjunction and existentials is forced to give up the classical treatment of these operators. One particular consequence of taking such a step is that the duality between disjunction and conjunction, and the corresponding duality between existential and universal quantification, gets lost.

The algebraic inquisitive semantics developed in the present paper sheds new light on these issues. First, it shows that, once inquisitive content is taken into consideration besides informative content, general algebraic considerations lead essentially to the treatment of disjunction and existentials that was proposed in alternative semantics, thus providing exactly the independent motivation that has so far been missing. Moreover, it shows that the proposed ‘alternative’ treatment of disjunction and existentials is actually a natural generalization of the classical treatment: disjunction and existentials can still be taken to behave semantically as join operators, only now the propositions that they apply to are more fine-grained in order to capture both informative and inquisitive content. And once the algebraic underpinning is regained, the duality between disjunction and conjunction, and the corresponding duality between existential and universal quantification, are restored as well. So we can have our cake and eat it: we can maintain the idea that disjunction and existentials behave as join operators, and still treat them as introducing sets of alternatives. Footnote 8

3.8 Projection operators

It is natural to think of sentences in \(\mathsf {Inq}_\mathsf{B}\) as inhabiting a two-dimensional space, as depicted in Fig. 4 (see also Mascarenhas 2009; Ciardelli 2009). One of the axes is inhabited by questions, which are always non-informative; the other axis is inhabited by assertions, which are always non-inquisitive; the ‘zero-point’ of the space is inhabited by tautologies, which are neither informative nor inquisitive; and the rest of the space is inhabited by hybrids, which are both informative and inquisitive.

Fig. 4 Questions, assertions, hybrids, and tautologies in a two-dimensional space

Given this picture, it is natural to think of projection operators that map any sentence onto the axes of the space. In particular, we may consider a non-inquisitive projection operator \(!\) that maps any sentence \(\varphi \) to an assertion \(!\varphi \) that is non-inquisitive but otherwise as similar as possible to \(\varphi \), and a non-informative projection operator \(?\) that maps every \(\varphi \) to a question \(?\varphi \) that is non-informative but otherwise as similar as possible to \(\varphi \).

We will add the operators \(!\) and \(?\) to our logical language. In order to define their semantic contribution, let us formulate more precisely how we would like them to behave. First consider \(!\), the non-inquisitive projection operator. We would like this operator to behave in such a way that for any \(\varphi \):

  1. \(!\varphi \) is non-inquisitive;

  2. \(\mathsf{info}(!\varphi )=\mathsf{info}(\varphi )\), i.e., \(!\varphi \) preserves the informative content of \(\varphi \).

The following ‘representation theorem’ shows that these requirements uniquely determine how \(!\) should be defined. Footnote 9

Theorem 2

(Representation theorem for non-inquisitive projection). The non-inquisitive projection operator \(!\) meets the above requirements if and only if it is defined as follows:

$$\begin{aligned}{}[!\varphi ] := \wp (\mathsf{info}(\varphi )) \end{aligned}$$

Proof

First, we show that \(!\), as defined here, satisfies the requirements. Notice that \(\mathsf{info}(!\varphi ) = \bigcup [!\varphi ] = \mathsf{info}(\varphi )\). So the second requirement is fulfilled. And since \(\mathsf{info}(\varphi )\in [!\varphi ]\), the first requirement is fulfilled as well.

Now let us show that any operator that meets the given requirements must behave exactly as \(!\) does. Let \(\nabla \) be an operator that meets the given requirements. Then, for every \(\varphi \), \(\nabla \varphi \) must be non-inquisitive. That is, \([\nabla \varphi ] = \wp (\mathsf{info}(\nabla \varphi ))\). But we must also have that \(\mathsf{info}(\nabla \varphi )=\mathsf{info}(\varphi )\), which means that \([\nabla \varphi ] = \wp (\mathsf{info}(\varphi )) = [!\varphi ]\). So \(\nabla \) must indeed behave exactly as \(!\) does. \(\square \)

Now let us consider \(?\), the non-informative projection operator. Clearly, we always want \(?\varphi \) to be non-informative. But what else do we want? We cannot demand that \(?\varphi \) is always just as inquisitive as \(\varphi \) itself, i.e. that \([?\varphi ]\) and \([\varphi ]\) are always settled by exactly the same pieces of information. After all, if we enforced this requirement, \(?\varphi \) would simply have to be equivalent to \(\varphi \). There is, however, a natural way to weaken this requirement. In order to do so, we should not only consider the pieces of information that settle \([\varphi ]\), but rather more generally the pieces of information that decide on \([\varphi ]\).

Definition 12

(Contradicting and deciding on a proposition). Let \(\beta \) be a piece of information, and \([\varphi ]\) a proposition. Then:

  • \(\beta \) contradicts \([\varphi ]\) iff \(\beta \cap \bigcup [\varphi ]=\emptyset \)

  • \(\beta \) decides on \([\varphi ]\) iff it settles \([\varphi ]\) or contradicts \([\varphi ]\)

  • \(\mathsf{D}(\varphi )\) denotes the set of all pieces of information that decide on \([\varphi ]\)

Now we are ready to formulate the requirements for \(?\). Namely, we want \(?\) to behave in such a way that for every \(\varphi \):

  1. \(?\varphi \) is non-informative;

  2. \(\mathsf{D}(?\varphi ) = \mathsf{D}(\varphi )\).

Again, these requirements uniquely determine how \(?\) should be defined.

Theorem 3

(Representation theorem for non-informative projection). The non-informative projection operator satisfies the above requirements if and only if it is defined as follows:

$$\begin{aligned}{}[?\varphi ] := \mathsf{D}(\varphi ) \end{aligned}$$

That is, \([?\varphi ]\) consists of all possibilities that decide on \([\varphi ]\).

Proof

First let us check that, given this definition, \(?\) satisfies the given requirements. First, we always have that \(\bigcup [?\varphi ] = W\), which means that \(?\varphi \) is never informative. Moreover, if \(\beta \) is a piece of information that decides on \([\varphi ]\), then it clearly settles, and therefore decides on, \([?\varphi ]\). Vice versa, if \(\beta \) decides on \([?\varphi ]\) then, since there are no possibilities that are disjoint from \(\bigcup [?\varphi ]\), \(\beta \) must actually settle \([?\varphi ]\) and therefore be included in \([?\varphi ]\). And this means, given how \([?\varphi ]\) is defined, that \(\beta \) must decide on \([\varphi ]\). So \(?\) indeed meets the given requirements.

Now let us show that any operator that satisfies the given requirements must behave exactly as \(?\) does. Let \(\Delta \) be an operator that satisfies the requirements. Then, for every \(\varphi \), \(\Delta \varphi \) must be non-informative, which means that \(\mathsf{info}(\Delta \varphi ) = W\). Moreover, we must have that \(\mathsf{D}(\Delta \varphi )=\mathsf{D}(\varphi )\). Given that \(\mathsf{info}(\Delta \varphi ) = W\), there cannot be any possibilities that are disjoint from \(\bigcup [\Delta \varphi ]\). Thus, \(\mathsf{D}(\Delta \varphi )\) amounts to \([\Delta \varphi ]\). But then \([\Delta \varphi ]\) must be identical to \(\mathsf{D}(\varphi )\), which is \([?\varphi ]\). So \(\Delta \) must indeed behave exactly as \(?\) does. \(\square \)

Now, if \([!\varphi ]\) is defined as \(\wp (\mathsf{info}(\varphi ))\), and \([?\varphi ]\) as \(\mathsf{D}(\varphi )\), then the semantic behavior of these operators can actually be characterized in terms of our basic algebraic operations.

Fact 11

(Projection in terms of basic algebraic operations).

  • \([!\varphi ] = ([\varphi ]^*)^*\)

  • \([?\varphi ] = [\varphi ]\cup [\varphi ]^*\)

This also means that the projection operators can actually be expressed in terms of the basic connectives in our logical language. Footnote 10

Fact 12

(Projection operators in terms of basic connectives).

  • \(!\varphi \,\equiv \lnot \lnot \varphi \)

  • \(?\varphi \equiv \varphi \vee \lnot \varphi \)

Thus, rather than adding \(!\) and \(?\) as primitive logical constants to our language, we can simply introduce \(!\varphi \) as an abbreviation of \(\lnot \lnot \varphi \) and \(?\varphi \) as an abbreviation of \(\varphi \vee \lnot \varphi \). The logic that the system gives rise to is then fully determined by the behavior of our basic connectives, and in proving things about the system, we never need to consider \(!\) and \(?\) explicitly.

This is in fact exactly how \(!\varphi \) and \(?\varphi \) were defined in (Groenendijk and Roelofsen 2009; Ciardelli 2009; Ciardelli and Roelofsen 2011), i.e., as abbreviations of \(\lnot \lnot \varphi \) and \(\varphi \vee \lnot \varphi \). So again, our considerations in this section have not really led to a new treatment of projection operators, but rather to a more solid foundation for the existing treatment. Footnote 11

Having established the connection between our characterization of the projection operators and the way they were defined in earlier work, we can immediately import a number of results. We mention here only the two most significant ones. First, there is a close correspondence between the projection operators and the semantic categories of questions and assertions.

Fact 13

(Projection operators and semantic categories).

  • \(\varphi \) is an assertion iff \(\varphi \equiv {!\varphi }\)

  • \(\varphi \) is a question iff \(\varphi \equiv {?\varphi }\)

Second, a sentence \(\varphi \) is always equivalent to the conjunction of its two projections, \(?\varphi \) and \(!\varphi \).

Fact 14

(Division). \(\varphi \equiv {?\varphi }\wedge {!\varphi }\)
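These facts can be checked mechanically on a small example. The sketch below (ours, over the hypothetical two-atom space of Fig. 3) builds \([p\vee q]\) by hand, computes \([!\varphi ]\) and \([?\varphi ]\) as characterized in Theorems 2 and 3, and verifies Facts 11 and 14 (and thereby Fact 12) for this case.

from itertools import combinations

W = frozenset({'11', '10', '01', '00'})

def powerset(s):
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

def info(A):
    return frozenset().union(*A)

def rpc(A, B):
    return {a for a in powerset(W) if all(b not in A or b in B for b in powerset(a))}

def star(A):
    """Pseudo-complement A* := A ⇒ ⊥."""
    return rpc(A, {frozenset()})

P, Q = frozenset({'11', '10'}), frozenset({'11', '01'})
phi = powerset(P) | powerset(Q)                  # [p ∨ q]

bang = powerset(info(phi))                       # [!φ], per Theorem 2
question = {b for b in powerset(W)               # [?φ] = D(φ), per Theorem 3
            if any(b <= a for a in phi) or not (b & info(phi))}

assert bang == star(star(phi))                   # Fact 11: [!φ] = ([φ]*)*
assert question == phi | star(phi)               # Fact 11: [?φ] = [φ] ∪ [φ]*
assert phi == question & bang                    # Fact 14: division
print('Facts 11 and 14 verified for p ∨ q')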

The results obtained in this section are summarized visually in Fig. 5. Every hybrid sentence \(\varphi \) has a projection onto the horizontal axis, \(!\varphi \), and a projection onto the vertical axis, \(?\varphi \). The former is always an assertion, the latter is always a question, and the conjunction of the two is always equivalent to \(\varphi \) itself.

Fig. 5 Projection and division

In our view, these results are significant for the semantic analysis of declarative and interrogative complementizers in natural language. Just as it is to be expected that natural languages generally have connectives that behave semantically as join, meet, and pseudo-complement operators, it is also to be expected that natural languages generally have complementizers that behave semantically as non-informative or non-inquisitive projection operators, or combinations thereof. Footnote 12

It is interesting to note in this regard that the non-informative projection operator, \(?\), which turns every sentence in our logical language into a question and would therefore naturally be associated with interrogative complementizers in natural languages, is closely related to disjunction and existential quantification. Namely, \([?\varphi ]\) is the join of \([\varphi ]\) and \([\varphi ]^*\), and the join operation, also associated with disjunction and existential quantification, is the essential source of inquisitiveness in \(\mathsf {Inq}_\mathsf{B}\). This fact may provide the basis for an explanation of the well-known observation that in many languages, question markers are homophonous with words for disjunction and/or indefinites (e.g., Japanese ka) (see Jayaseelan 2001, 2008; Bhat 2005; Haida 2007; AnderBois 2011, 2012, among others).

3.9 Maximal possibilities and compliance

Before concluding, we would like to briefly come back to the difference between the notion of a proposition in \(\mathsf {Inq}_\mathsf{B}\) and the one assumed in most previous work on the support-based system (see footnote 5).

As mentioned, the proposition expressed by a sentence \(\varphi \) in \(\mathsf {Inq}_\mathsf{B}\) coincides precisely with the set of all states that support \(\varphi \). However, in the support-based system the proposition expressed by \(\varphi \) is usually defined as the set of maximal states supporting \(\varphi \), i.e., the set of states that support \(\varphi \) and are not contained in any other state supporting \(\varphi \). We will use \([\![\varphi ]\!]\) to denote this set of maximal supporting states.

Now, if we restrict our attention to \(L_P\), it can in fact be shown that a sentence \(\varphi \) is supported by a state \(s\) if and only if \(s\) is contained in a maximal state supporting \(\varphi \) (see Ciardelli and Roelofsen 2011, p. 59).

Fact 15

(Support and maximal supporting states for \(L_P\)). For any sentence \(\varphi \in L_P\) and any state \(s\):

$$\begin{aligned} s{\,\models \,}\varphi ~\Longleftrightarrow ~ s{\,\subseteq \,}\alpha \text{ for } \text{ some } \alpha \in [\![\varphi ]\!] \end{aligned}$$

This means that, for any \(\varphi \in L_P\), \([\varphi ]\) can be fully recovered from \([\![\varphi ]\!]\), simply by taking its downward closure. Clearly, \([\![\varphi ]\!]\) can also always be obtained from \([\varphi ]\), by taking maximal elements. So at first sight there does not seem to be any reason to prefer one notion over the other.
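
For instance, for an atomic sentence \(p\) that is true in some worlds and false in others, we have

$$\begin{aligned} {[}\![?p]\!] = \{\,|p|,\; W{\setminus }|p|\,\}, \qquad [?p] = \wp (|p|)\cup \wp (W{\setminus }|p|), \end{aligned}$$

where \(|p|\) is the set of worlds in which \(p\) is true: the downward closure of \([\![?p]\!]\) is exactly \([?p]\), and the maximal elements of \([?p]\) are exactly \(|p|\) and \(W{\setminus }|p|\).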

However, there is a specific reason why \([\![\varphi ]\!]\) is usually adopted in the support-based system, rather than \([\varphi ]\). Namely, one of the main logical pragmatic notions that the semantics is intended to give rise to, i.e., the notion of compliance (Groenendijk and Roelofsen 2009), makes crucial reference to maximal supporting states and is therefore more straightforwardly characterized in terms of \([\![\varphi ]\!]\) than in terms of \([\varphi ]\). Compliance is a strict notion of logical relatedness. For instance, \(p\) is a compliant response to \(?p\), but \(p\wedge q\) is not, because \(q\) contributes information that is logically unrelated to \(?p\). Maximal supporting states play an important role in characterizing compliance because they correspond to pieces of information that are just sufficient to settle the given proposition, i.e., they settle the proposition without providing additional, possibly redundant and logically unrelated information (see Groenendijk and Roelofsen 2009).

Thus, if we want to characterize such a notion of compliance, there indeed seems to be a good reason to focus on maximal supporting states, and in the propositional setting this is unproblematic (although taking the proposition expressed by a sentence to consist of all supporting states, as in \(\mathsf {Inq}_\mathsf{B}\), does not, of course, prevent us from characterizing compliance; it just makes doing so slightly less straightforward).

However, it has been shown in great detail by Ciardelli (2009, 2010) that if we move to the first-order setting, compliance can no longer be defined in terms of maximal supporting states; in fact, in the first-order setting compliance cannot be defined in terms of support at all. Ciardelli’s argument starts with the following example.

Example 1

(The boundedness formula). Consider a first-order language which has a unary predicate symbol \(P\), a binary function symbol \(+\), and the set \(\mathbb{N}\) of natural numbers as its individual constants. Suppose that our logical space consists of first-order models \(M=\langle D,I\rangle \), where \(D=\mathbb{N}\), \(I\) maps every \(n\in \mathbb{N}\) to the corresponding \(n\in D\), and \(+\) is interpreted as addition. So the only difference between the models in our logical space is the way in which they interpret \(P\). Let \(x\le y\) abbreviate \(\exists z(x+z=y)\), let \(B(x)\) abbreviate \(\forall y(P(y)\rightarrow y\le x)\), and for every \(n\in \mathbb{N}\), let \(B(n)\) abbreviate \(\forall y(P(y)\rightarrow y\le n)\). Intuitively, \(B(n)\) says that \(n\) is greater than or equal to any number in \(P\). In other words, \(B(n)\) says that \(n\) is an upper bound for \(P\).

A state \(s\) supports a formula \(B(n)\), for some \(n\in \mathbb{N }\), iff \(B(n)\) is true in every model in \(s\), that is, iff \(n\) is an upper bound for \(P\) in every \(M\) in \(s\). Now consider the formula \(\exists x.B(x)\), which intuitively says that there is an upper bound for \(P\). This formula, which Ciardelli refers to as the boundedness formula, does not have a maximal supporting state. To see this, let \(s\) be an arbitrary state supporting \(\exists x. B(x)\). Then there must be a number \(n\in \mathbb{N }\) such that \(s\) supports \(B(n)\), i.e., \(B(n)\) must be true in all models in \(s\). Now let \(M^{\prime }\) be the model in which \(P\) denotes the singleton set \(\{n+1\}\). Then \(M^{\prime }\) cannot be in \(s\), because it does not make \(B(n)\) true. Thus, the state \(s^{\prime }\) which is obtained from \(s\) by adding \(M^{\prime }\) to it is a proper superset of \(s\) itself. However, \(s^{\prime }\) clearly supports \(B(n+1)\), and therefore also still supports \(\exists x.B(x)\). This shows that any state supporting \(\exists x.B(x)\) can be extended to a larger state which still supports \(\exists x.B(x)\), and therefore no state supporting \(\exists x.B(x)\) can be maximal. \(\square \)
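
To see the construction at work in a concrete instance (the particular family of states considered here is chosen purely for illustration), let \(s_n\), for each \(n\in \mathbb{N}\), be the state consisting of all models in which \(P\) denotes a subset of \(\{0,\ldots ,n\}\). Every model in \(s_n\) makes \(B(n)\) true, so \(s_n\) supports \(B(n)\) and therefore also \(\exists x.B(x)\). Yet \(s_n\subsetneq s_{n+1}\), since the model in which \(P\) denotes \(\{n+1\}\) belongs to \(s_{n+1}\) but not to \(s_n\). This yields an infinite, strictly increasing chain \(s_0\subsetneq s_1\subsetneq s_2\subsetneq \cdots \) of states supporting \(\exists x.B(x)\), concretely instantiating the argument just given.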

This example shows that a general notion of compliance, one that applies in the propositional and the first-order setting alike, should not make reference to maximal supporting states. A notion defined in terms of such states would give undesirable results for the boundedness formula and for other cases in which maximal supporting states do not exist. Intuitively, this is because in these cases there are no pieces of information that provide exactly enough information to settle the given proposition: for every piece of information that settles the proposition, we can find a weaker piece of information that still settles it. This means that maximal supporting states do not form a suitable basis for a general notion of compliance.

Ciardelli goes on to argue that a satisfactory notion of compliance can in fact not be defined in terms of support at all. This argument is based on the following example.

Example 2

(The positive boundedness formula). Consider the following variant of the boundedness formula: \(\exists x(x\ne 0\wedge B(x))\). This formula says that there is a positive upper bound for \(P\). Intuitively, it differs from the ordinary boundedness formula in that it does not license “Yes, zero is an upper bound for \(P\)” as a compliant response. However, in terms of support, \(\exists x(x\ne 0\wedge B(x))\) and \(\exists x.B(x)\) are entirely equivalent. Thus, support is not fine-grained enough to capture the intuition that these formulas do not license the same range of compliant responses. \(\square \)
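
To see why the two formulas receive exactly the same support, note that, on the support clause for the existential quantifier assumed here (a state supports an existentially quantified formula just in case it supports some instance of it), \(s\) supports \(\exists x(x\ne 0\wedge B(x))\) iff \(s\) supports \(B(n)\) for some \(n\ge 1\), the conjunct \(n\ne 0\) being true in every model for such \(n\). But any state that supports \(B(0)\) also supports \(B(1)\), so \(s\) supports \(\exists x.B(x)\) iff \(s\) supports \(B(n)\) for some \(n\ge 1\) as well. The two formulas therefore have exactly the same supporting states, even though, intuitively, they license different ranges of compliant responses.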

This argument is relevant here, because it brings to light an important limitation of the support-based system, and therefore also of \(\mathsf {Inq}_\mathsf{B}\). The system does what it was meant to do, i.e., it provides a notion of meaning that embodies both informative and inquisitive content in a satisfactory way (also in the case of the boundedness formulas). However, this notion of meaning is not fine-grained enough to provide the basis for an adequate notion of compliance.

There have been several attempts to overcome this limitation (see, e.g., Ciardelli 2009, 2010; Westera 2012a; Ciardelli et al. 2013b). However, none of these attempts have so far been entirely conclusive. We hope that the algebraic approach developed here will shed new light on this issue as well. In principle, we could start out with a notion of meaning that is even richer than the one adopted here. Once we have a clear intuitive understanding of such a notion of meaning, and a suitable notion of entailment, we can follow essentially the same line of thought that has been pursued here to arrive at a system that adequately deals with compliance and possibly other aspects of meaning that are beyond the reach of \(\mathsf {Inq}_\mathsf{B}\). Initial work in this direction has been pursued in (Roelofsen 2011b) and (Westera 2012b).

4 Conclusion

In this paper we developed and investigated a framework for the semantic treatment of informative and inquisitive content, driven entirely by algebraic considerations. We proposed to define propositions as non-empty, downward closed sets of possibilities, and we showed that entailment can simply be defined as inclusion in this case, suitably capturing when one proposition is at least as informative and inquisitive as another. We showed that this entailment order gives rise to a complete Heyting algebra, with meet, join, and relative pseudo-complement operators. Just as in classical logic, these semantic operators were then associated with the logical constants in a first-order language.

We found that the resulting system essentially coincides with the simplest and most well-understood existing implementation of inquisitive semantics, and that its treatment of disjunction and existentials also concurs with that of alternative semantics. Thus, our algebraic considerations did not lead to a wholly new semantics, but rather to a more solid foundation for some of the existing systems. In future work, we hope to extend the approach to obtain an even more fine-grained framework, in which propositions embody not only informative and inquisitive content, but also further aspects of meaning.