The contemporary origins of epistemic approaches to democracy are intertwined with those of deliberative democracy and, in fact, the two have evolved along similar paths. The dichotomy between ‘talkers’ and ‘counters’ might eventually be more apparent than real as the two paradigms diversify and overlap, which may create some confusion for readers. For example, Joshua Cohen is often quoted as one of the leading proponents of deliberative democracy, but his seminal 1986 paper is titled ‘An epistemic conception of democracy’. In this paper, Cohen presents ‘an epistemic interpretation of voting’ with three components: ‘(1) an
independent standard of correct decisions—that is, an account of justice or of the common good that is
independent of current consensus and the outcome of votes; (2) a
cognitive account of voting—that is, the view that voting expresses beliefs about what the correct policies are according to the independent standard, not personal preferences for policies; and (3) an account of
decision making as a process of the adjustment of beliefs, adjustments that are undertaken in part in light of the evidence about the correct answer that is provided by the beliefs of others.’ (Cohen
1986, 34). Cohen, however, would eventually abandon this explicit formulation, which explains the potential confusion. Melissa Schwartzberg has helped to clarify this issue by noting that ‘as Cohen wrote the essay he had become skeptical about the idea that democracy was fundamentally about aggregating opinions about the content of the “independent standard”’ (Schwartzberg
2015, 189).
Another issue about the ‘independent standard of correctness’ is that there are different versions of this core theoretical tenet in the epistemic democracy literature. As David Estlund explains, ‘one version might say that there are right answers and that democracy is the best way to get at them. Another version might say that there are right answers and there is value in trying collectively to get at them whether or not that is the most reliable way. Yet another: there are no right answers independent of the political process, but overall it is best conceived as a collective way of coming to know (and institute) what to do. There are others’ (Estlund
2008, 1). The more pragmatic approaches to the standard seem to have prevailed, though. Jack Knight has recently conceded that ‘there’s a growing number of people who probably think that getting at ‘the truth’ is too strong a claim to make for democratic institutions, but who do think that democracy has epistemic value in producing better decisions. Here the ‘better decisions’ would mean the enhancement of democratic decisions through discussion’ (Knight et al.
2016, 138). Knight’s last sentence also offers an additional clue by highlighting the role of deliberation in the contemporary epistemic approaches. In her account, Schwartzberg states that epistemic democracy emerged as a response to social choice theory to defend ‘the capacity of ‘the many’ to make correct decisions’ (Schwartzberg
2015, 187–188) and remarks that ‘epistemic democracy does not position itself as an alternative to deliberative democracy but instead generally resituates deliberation as instrumental to the aim of good, or correct, decision making’ (Schwartzberg
2015, 189).
Similarly, Landemore argues that ‘epistemic democracy is both a subset of deliberative democracy and goes beyond it because it includes things that deliberative democracy doesn’t necessarily include’ (Knight et al.
2016, 142). According to Landemore, the epistemic models aim “to emphasize the knowledge-producing properties of democratic institutions and procedures; and specifically (…) to assume that those procedures are good at tracking a procedure-independent standard of correctness, which is sometimes called ‘truth’” (Knight et al.
2016, 141).
Most contemporary epistemic democrats, in short, assume an independent standard of correctness in their models, but they do so in different ways. Depending on how it is formulated, democratic decision making will produce ‘true’, ‘right’, ‘good’, ‘correct’ or ‘better’ outcomes (provided that appropriate mechanisms are in place, as we will see). Regardless of the tonality the standard adopts, it is hardly surprising that it is the source of major theoretical debates. Can we rely on independent standards of what is true, or right, or good, or better, when diversity of opinions, values, and interests is the fabric of our plural democracies? If that is the case for some questions (say, questions involving core democratic principles or values) but not for others, how do we discern between them? As Schwartzberg puts it, ‘there may not be such an independent standard of correct decisions—or if such standard exists, we might not have any way of knowing whether we had reached it.’ (Schwartzberg
2015, 198). Or, alternatively, in Landemore’s view, it is possible for epistemic-democratic theories to ‘conceptualize the truth, goodness, or correctness of democratic decisions or solutions’ through diverse options: ‘you can conceptualize it in terms of good governance, human rights, social justice, perhaps a developmental index, a happiness index or something like that, or something else entirely.’ (Knight et al.
2016, 143). From this perspective, political scientists and the social sciences in general could contribute to measuring those achievements even though, as Nadia Urbinati objects, ‘the measurement is always open to judgment and my judgment can be different from yours because in the domain of political opinion we don’t have a mathematical measurement after all’ (Knight et al.
2016, 149). The lack of conclusive answers and the still insufficient empirical support lead Schwartzberg to conclude that epistemic democrats ‘may wish to temper the strength of their claims’ and that ‘relinquishing the independent standard of correctness ought to be a first step’ (Schwartzberg
2015, 201).
Ultimately, this more tempered approach seems to permeate Landemore’s response to the criticism that it is difficult to ascertain whether a decision is good or not at the moment it is made: ‘In the here and now, at time T—the time of the decision—your only alternative is to involve one, few, or many people in the decision procedure. All I’m saying is that at time T you’ll likely better off with the decision that involves the greatest number of people.’ (Knight et al.
2016, 146). In this nuanced account, the focus is now placed on the mechanisms of aggregation of preferences and, particularly, on exploring the conditions under which hypotheses such as ‘more is smarter’ (Landemore
2012a, 265) or ‘it is often better to have a group of cognitively diverse people than a group of very smart people who think alike’ (Landemore
2012a, 260) can be successfully tested.
2.3.1 Some Mechanisms of Aggregation in Epistemic Approaches
The epistemic-democratic literature explores a number of mechanisms that can support the argument for the epistemic properties of aggregation. The most popular are the Condorcet Jury Theorem
(CJT) and its different variants and, most recently, the ‘miracle of aggregation’ (e.g. Converse
1990; Surowiecki
2004), and the Diversity Trumps Ability (DTA) theorem by Hong and Page (
2004). Let us briefly review each of them.
The Jury Theorem proposed by Condorcet in 1785 draws from the law of large numbers and applies to issues that offer only two options, with one correct answer. There are a number of variants of the CJT, including a generalisation of the theorem from majority voting over two options to plurality voting over many options (List and Goodin
2001). As Landemore presents it in its standard formulation, the majority of voters will be “virtually certain to track the ‘truth’” if three conditions are met: ‘(1) voters are better than random at choosing true propositions; (2) they vote independently of each other; and (3) they vote sincerely or truthfully’ (Landemore
2012a, 265). The CJT has been extensively scrutinised for its ‘value for democratic theory’. For example, David Austen-Smith and Jeffrey Banks questioned the assumption of voters’ sincerity by showing that, in a number of models, sincere voting failed to be informative and rational; instead, they suggested that ‘the appropriate approach to problems of information aggregation is through game theory and mechanism design, not statistics’ (Austen-Smith and Banks
1996, 44). Also using a formal demonstration, Franz Dietrich and Kai Spiekermann have contended that the ‘asymptotic conclusion’ of the CJT (the probability of a correct majority decision converging to one as the group size tends to infinity) is questionable: ‘If the asymptotic conclusion applied directly to modern democracies with their large populations, these democracies would be essentially infallible when making decisions between two alternatives by simple majority’ (Dietrich and Spiekermann
2013, 88). Dietrich and Spiekermann tackle one of the most significant concerns in the CJT literature—the potential violation of voters’ independence via exchange of information and deliberation—and note that it is ‘not always obvious whether deliberation overall increases or decreases dependence, another reason why the classical CJT literature struggles so much with deliberation’ (Dietrich and Spiekermann
2013, 106). Their proposal consists of a new notion of independence, based on causal network models, in which deliberation not only does not undermine independence but also augments voters’ competence: ‘Consequently, a group of deliberating economists may perform better because they are more likely to face decisions they tend to get right, while isolated economists may not’ (Dietrich and Spiekermann
2013, 106). Whereas this model reconciles deliberation and competence with epistemic arguments for democracy based on jury theorems, there is still, as Schwartzberg notes, a lack of systematic testing of these models, and thus of empirical evidence demonstrating how judgements are achieved as well as their epistemic value (Schwartzberg
2015, 195–197).
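The convergence the theorem describes is easy to observe numerically. As an illustration only (the function name and parameters below are our own, not part of the CJT literature), a short Monte Carlo sketch shows majority accuracy rising with group size when each voter is independently better than random:

```python
import random

def majority_correct_rate(n_voters, p, trials=5_000):
    """Monte Carlo estimate of the probability that a simple majority
    of n_voters, each independently correct with probability p, picks
    the right option in a binary choice (the CJT setting)."""
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < p for _ in range(n_voters))
        if correct > n_voters / 2:
            wins += 1
    return wins / trials

random.seed(0)
for n in (1, 11, 101, 1001):
    # With individual competence p = 0.55, majority accuracy
    # climbs towards 1 as the group grows
    print(n, round(majority_correct_rate(n, 0.55), 3))
```

Violating any of the three conditions (competence above chance, independence, sincerity) breaks this convergence, which is precisely where the criticisms reviewed above take hold.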
The ‘miracle of aggregation’
is another application of the ‘law of large numbers’ evident in different models. A simple explanation is the one offered by Marc Keuschnigg and Christian Ganser: ‘the central tendency of a set of independent estimates represents the truth more closely than the typical individual estimation’ (Keuschnigg and Ganser
2016, 1). Landemore reviews three versions of this model, which she labels ‘elitist’, ‘democratic’, and ‘distributed’. The first version is ‘elitist’ because it relies on the presence of ‘informed people’ in the group to arrive at a ‘right answer’ and is thus a form of ‘elite’ extraction. In the second, ‘democratic’ version by Page and Shapiro (
1992), no elite has the right answer but everyone is roughly correct (the errors cancel each other out and the collective decision is more accurate than the individual guesses). In the ‘distributed’ version, instead, ‘the right answer is dispersed in bits and pieces among many people’ (Landemore
2012a, 267). The objections that Landemore raises to these ‘miracle of aggregation’ versions regarding their relevance for democratic theory are twofold: (i) concern about the assumption of independence of individual judgements (as with the CJT); and (ii) empirical defeasibility of the hypothesis of random or symmetrical distribution of errors (Landemore
2012a, 268).
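The statistical core of the ‘miracle’ in its ‘democratic’ version is simply that independent, unbiased errors cancel. The sketch below (the function and its parameters are our own illustration, not drawn from any of the cited authors) compares the error of the group mean with the typical individual error:

```python
import random
import statistics

def estimation_errors(truth, n_estimators, noise_sd, trials=2_000):
    """Compare the average error of the group mean with the average
    individual error, for independent, unbiased estimates of `truth`."""
    group_err = indiv_err = 0.0
    for _ in range(trials):
        guesses = [random.gauss(truth, noise_sd) for _ in range(n_estimators)]
        group_err += abs(statistics.mean(guesses) - truth)
        indiv_err += statistics.mean(abs(g - truth) for g in guesses)
    return group_err / trials, indiv_err / trials

random.seed(0)
group, individual = estimation_errors(truth=50, n_estimators=100, noise_sd=10)
print(f"group-mean error: {group:.2f}, typical individual error: {individual:.2f}")
```

Note that the advantage evaporates exactly where Landemore’s second objection bites: if the errors are correlated or systematically skewed rather than symmetric, the group mean inherits the shared bias.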
The third mechanism, the ‘diversity trumps ability’ theorem (DTA), was first formulated by Hong and Page (
2004) and later discussed extensively in Page’s book The Difference (
2007). The DTA model focuses on ‘functional diversity’ (‘differences in how people encode problems and attempt to solve them’) and identifies the conditions under which ‘when selecting a problem-solving team from a diverse population of intelligent agents, a team of randomly selected agents outperforms a team comprised of the best-performing agents’ (Hong and Page
2004, 16386). In other words, ‘random collections of intelligent problem solvers can outperform collections of the best individual problem solvers’ (Page
2007, 10). The conditions (slightly modified in the 2007 version of the DTA) are that: ‘(1) The problem must be difficult; (2) the perspectives and heuristics that the problem solvers possess must be diverse; (3) the set of problem solvers from which we choose our collection must be large; and (4) the collection of problem solvers must not be too small’ (Page
2007, 10). In a recent replication of the DTA model, Keuschnigg and Ganser have found a particular case where ‘ability’ remains relevant: ‘in determining collective accuracy, diversity is crucial only in large groups and/or in cases of aggregation via averaging. Hence, if forced to plurality vote in a small group—which is often the case in decision-making committees in both firms and public administrations—the electorate must contain highly competent individuals’ (Keuschnigg and Ganser
2016, 8). The DTA theorem, nevertheless, has been criticised from different angles. Abigail Thompson has challenged the mathematical proof provided by Hong and Page, arguing that, under the proposed conditions, it is randomness, not diversity, that trumps ability (Thompson
2014). In another exposition of the theorem, John Weymark has noted that DTA does not apply in situations involving binary choices and, when there are more than two options to choose from, the assumption about non-strategic behaviour (decision makers sharing information truthfully) may be as questionable as it is with CJT. He concludes by suggesting caution, for DTA ‘offers no comfort to those who want to use it to argue for the collective decision to be made by an inclusive set of individuals rather than by an epistocracy’ (Weymark
2015, 508).
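To make the mechanism concrete, the following is a drastically simplified toy reimplementation in the spirit of Hong and Page’s (2004) setup; the landscape size, step ranges, and team sizes are our own assumptions, not theirs. Agents are hill-climbing heuristics (ordered triples of step sizes) on a random circular landscape, and teams search in relay:

```python
import random
from itertools import permutations

random.seed(42)
N = 100                                   # size of the circular landscape
landscape = [random.uniform(0, 100) for _ in range(N)]

def climb(agent, start):
    """Hill-climb: repeatedly try the agent's steps (clockwise moves),
    accepting any move that improves the landscape value."""
    pos, improved = start, True
    while improved:
        improved = False
        for step in agent:
            nxt = (pos + step) % N
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
    return pos

def ability(agent):
    """Average value the agent reaches over all starting points."""
    return sum(landscape[climb(agent, s)] for s in range(N)) / N

def team_score(team):
    """Relay search: each member restarts from the best point found so
    far, until no member can improve it; averaged over all starts."""
    total = 0.0
    for s in range(N):
        pos, improved = s, True
        while improved:
            improved = False
            for agent in team:
                nxt = climb(agent, pos)
                if landscape[nxt] > landscape[pos]:
                    pos, improved = nxt, True
        total += landscape[pos]
    return total / N

agents = list(permutations(range(1, 13), 3))   # heuristics as ordered step triples
ranked = sorted(agents, key=ability, reverse=True)
best_team = ranked[:9]                         # the nine highest-ability agents
random_team = random.sample(agents, 9)         # nine randomly drawn agents

print("best-agent team:", round(team_score(best_team), 2))
print("random team:    ", round(team_score(random_team), 2))
```

With these toy parameters the randomly drawn team frequently, though not invariably, matches or beats the team of top performers, since the top performers tend to share similar step sets and get stuck in the same local optima; varying the seed shows how sensitive the outcome is, which is one reason the criticisms by Thompson and Weymark have traction.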
Landemore considers both the CJT and the ‘miracle of aggregation’ as accounts or mechanisms of collective intelligence drawing from statistics and probability theory. The DTA theorem, instead, would be a more ‘cognitive account’ as ‘it opens the black box of voters’ (Landemore
2012a, 268). However, this categorisation might be slightly confusing for two different reasons, as we will see.
First, although ‘account’ and ‘mechanism’ seem to be used interchangeably in her essay, Landemore initially states that “‘mechanism’ is a loose term by which we mean to refer to the concrete institutions that channel collective wisdom, such as expert committees, deliberative assemblies, deliberative communities like Wikipedia, majority rule, information markets, or the ranking algorithms of search engines such as Google” (Landemore
2012b, 12). However, the examples that Landemore conflates are distinct: expert committees, deliberative assemblies, or deliberative communities are institutions in the sense of groups of individuals following ‘action-guiding rules’ (Ober
2008a, 8), while majority rule, information/prediction markets, or ranking algorithms are formalised methods, processes, or techniques. The different versions of CJT and ‘the miracle of aggregation’, therefore, are formal arguments, methods, or techniques to aggregate individual preferences into a collective outcome, but not institutional mechanisms involving real people and both formal and informal
action-guiding rules. Likewise, the DTA theorem offers a mathematical argument for collective decision making (rather than a cognitive account), and Page himself, in his answer to Thompson’s rebuttal, rejects the accusation of misusing mathematics by pointing out that ‘In my [
Difference] book, I caution readers to apply mathematical models carefully, highlighting the subtleties of moving from the starkness of mathematical logic to the richness of human interactions’ (Page
2015, 10). Much as mini-publics are regarded as living laboratories to test the theoretical principles of deliberative democracy, epistemic democrats ask for more ‘empirical testing [of] the conditions under which groups of ordinary citizens are most likely to produce wise decisions’ (Schwartzberg
2015, 197). Yet neither of the two approaches seems to fully acknowledge Page’s call to take subtleties into account. In our view, those subtleties translate into the contextual, intermediate level that shapes human decisions and delimits their implementation, that is, the institutional layer of democratic systems. Human interactions within
ad hoc mini-publics cannot be disconnected from the institutions that create them, set their governing rules, and apply (or not) their carefully deliberated outcomes. Since micro-deliberations do not happen in a vacuum, institutional agendas, policies, goals, expectations, and values are part of the analysis too. The systemic approach calls for an ethnography of the institutions as much as for empirical white-room testing or simulation modelling.
Let us illustrate this point with a real story about randomness and quiz shows extracted from Leonard Mlodinow’s book
The drunkard’s walk: How randomness rules our lives (Mlodinow
2009). The main character in this story is Marilyn vos Savant, an American columnist and author listed in the Guinness Hall of Fame for having scored the ‘World’s Highest IQ’ when tested as a child. Marilyn vos Savant has also successfully run the
Parade magazine column ‘Ask Marilyn’ since 1986, replying to questions posted by readers on a vast number of topics. In September 1990, a reader (inspired by a popular television game show called
Let’s Make a Deal) asked Marilyn:
Suppose you’re on a game show, and you’re given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what’s behind the doors, opens another door, say #3, which has a goat. He says to you, “Do you want to pick door #2?” Is it to your advantage to switch your choice of doors?
When Marilyn replied ‘Yes; you should switch. The first door has a 1/3 chance of winning, but the second door has a 2/3 chance’, all hell broke loose. Marilyn reported having received more than 10,000 letters, some 1000 of them from angered PhDs and academics accusing her of ‘propagating mathematical illiteracy’, inviting her to check ‘a standard textbook on probability’ or arguing their case with the more succinct ‘
You are the goat!’ (Crockett
2015). According to Mlodinow, 92% of Americans ‘agreed that Marilyn was wrong’ (Mlodinow
2009, 44). Yet she was right, and her response was supported not only by mathematical proof and computer simulations but also by data from the game show: ‘those who found themselves in the situation described in the problem and switched their choice won about twice as often as those who did not’ (Mlodinow
2009, 55). The reason why Marilyn got it right and proved some of the best and brightest mathematical brains of our time—including Paul Erdős—wrong lies outside Page’s ‘starkness of mathematical logic’. Rather, it is to be found in the intermediate level of ‘action-guiding rules’. The rules of the TV game show allowed the program host to intervene in an initially random process by using his inside knowledge to bias the result, thus violating randomness (idem). None of Marilyn’s outraged critics factored in the contextual rules that altered the abstract conditions of their models.
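Marilyn’s 2/3 answer can be checked directly by replaying the show’s rules. The sketch below (our own illustration, not taken from Mlodinow) simulates a host who always uses his inside knowledge to open a goat door:

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the three-door game: the host, who knows where the car
    is, always opens a goat door the contestant did not pick."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the pick nor the car
        opened = next(d for d in (0, 1, 2) if d != pick and d != car)
        if switch:
            # Switching means taking the one remaining closed door
            pick = next(d for d in (0, 1, 2) if d != pick and d != opened)
        wins += pick == car
    return wins / trials

random.seed(1)
print("stay:  ", round(monty_hall(False), 3))   # close to 1/3
print("switch:", round(monty_hall(True), 3))    # close to 2/3
```

The decisive line is the host’s move: it is exactly the intervention of an informed actor, governed by the show’s rules, that Marilyn’s critics left out of their abstract models.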
As Mlodinow puts it, ‘to a mathematician a blunder is an issue of embarrassment, but to a gambler it is an issue of livelihood’ (Mlodinow
2009, 56). As citizens (and voters) living in polities, we probably remain a perpetual source of embarrassment to our political philosophers, although we are not in permanent gambling-survival mode either. Most of the time, we play predictably by interacting with shared action-guiding rules. In other words, when it comes to real scenarios, deliberative or not, there is no mathematical logic capable of fully containing the dynamic interplay between people’s behaviours and rules and the emergent pragmatic properties of such an interplay. If that is the case, we still need an institutional theory of democracy to explain how collective intelligence emerges from a myriad of micro-interactions and contributes to producing an epistemically advanced form of government.
Second, what does ‘collective intelligence
’ (CI) mean in the epistemic approaches we have reviewed so far? The notion of ‘collective intelligence’ gained its current popularity with the publication of Pierre Lévy’s book
L’intelligence collective (
1997), in which he defined CI as a ‘universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills’ (Levy
1997, 13). Lévy’s premise is that ‘no one knows everything, everyone knows something, all knowledge resides in humanity’ (Levy
1997, 13–14). This premise resonates with Edwin Hutchins’ work on socially distributed cognition (Hutchins
1995) and his effort to resituate the focus of cognitive science as a study of ‘the social and material organization of cognitive activity’ rather than the solitary individual. Other frequently quoted definitions approach CI as ‘the capability for a group of individuals to envision a future and reach it in a complex context’ (Noubel
2008, 233); ‘groups of individuals doing things collectively that seem intelligent’ (Malone
2008); or ‘the general ability of a group to perform a wide variety of tasks’ (Woolley et al.
2010). In a review discussing the recent literature on CI in humans, Juho Salminen highlights the multidisciplinary character of this emergent paradigm and identifies three levels of abstraction: the micro-level (CI as ‘a combination of psychological, cognitive and behavioral elements’); the macro-level (CI as a ‘statistical phenomenon’); and the level of emergence between the two, which ‘deals with the question of how system behavior on the macro-level emerges from interactions of individuals at the micro-level’ (Salminen
2012, 3–5). If we follow this categorisation, most of the epistemic approaches to democracy that draw on the notion of CI use it in the sense of a macro-level ‘statistical phenomenon’. Yet, as we have argued, this may exclude the middle level that emerges from individuals interacting with other individuals and rules: institutions. By considering institutions as a key instance of CI, we also expand our notion of ‘epistemic’ when referring to democratic systems. Thus, by ‘epistemic’ we do not refer to the properties of aggregation, to majority rule, or to truth-seeking or better-than-something-else mechanisms of CI. Rather, we understand ‘epistemic’ in the broader sense of knowledge that is openly shared, used, and remixed. In this regard, we rely heavily on the works of Josiah Ober, who explores the connections between democracy and knowledge using classical Athens as a case in point. And we also borrow from Henry Farrell and Cosma Shalizi’s outline of what they defined as ‘cognitive democracy’ (Farrell and Shalizi
2015). We discuss both approaches in the next section.