
Open Access 2020 | OriginalPaper | Chapter

5. Introduction to the Case Study Research

Authors : Ira van Keulen, Iris Korthagen

Published in: European E-Democracy in Practice

Publisher: Springer International Publishing


Abstract

In this chapter, Van Keulen and Korthagen introduce the empirical study of 22 cases of digital tools, which forms a large and important part of this book. The authors indicate how the cases were selected on criteria such as diversity of institutional contexts and scales, geographical diversity and different types of citizen involvement. Each case is described on the basis of an evaluation framework for assessing the digital tools. The authors place legitimacy and its key dimensions (input, throughput and output legitimacy) at the centre of the framework, a logical choice since disengagement from European democratic processes and the distance of European citizens from EU institutions remain major problems. This short chapter furthermore explains how the data collection and analysis of the 22 cases were carried out, with special attention to the method of Qualitative Comparative Analysis (QCA).
Next to the systematic literature review (see Part I), this research comprises an empirical study of 22 cases of digital tools that have been used, or are still in use, as instruments for citizen involvement in democratic processes. A large part of the cases was requested by the Panel for the Future of Science and Technology at the European Parliament, which commissioned this research. The remaining cases were selected on the following criteria: (1) diversity of tools, (2) diversity of institutional contexts and scales (local, national, European and some international),1 (3) geographical diversity and (4) different types of citizen involvement. Together, these criteria provide a broad perspective on the kinds of tools that could be used to strengthen participatory democracy at the EU level. Of course, we do not claim that this set of case studies is representative of all uses of digital tools discussed in our literature review. It remains a selection which, had there been no space limitations, could have been extended towards still greater correspondence with our conceptual framework and the arsenal of digital practices in political participation.

5.1 Evaluation Framework

The description of the 22 cases is based on an evaluation framework for assessing the digital tools. The selection of the key elements of the framework has been made according to the project’s central aim: To identify and analyse best practices with digital tools for participatory and direct democracy at different political and governmental levels (local, national, European) that in the future can be used at EU level to encourage citizen engagement and countervail the European democratic deficit.
In view of the current crisis of representative democracy, the disengagement from the democratic processes and the distance of citizens from EU institutions, restoration and enhancement of democratic legitimacy at the European level is needed. Therefore, we put legitimacy and its key dimensions (Schmidt 2013) centre stage in the evaluation framework and use it as the basis for differentiating further, more specific evaluation aspects. In this we follow the Council of Europe in its recommendation on e-democracy as referred to in the Introduction: “E-democracy, as the support and enhancement of democracy, democratic institutions and democratic processes by means of ICT, is above all about democracy. Its main objective is the electronic support of democracy” (Council of Europe 2009: 1).
In order to investigate how digital tools can contribute to stronger connections between EU citizens and EU politics, we distinguish between five types of citizen involvement: (1) monitoring, (2) formal agenda setting (invited space, i.e. initiated by government), (3) informal agenda setting (invented space, i.e. initiated by citizens), (4) non-binding decision-making and (5) binding decision-making (see Table 5.1) (Kersting 2014). In combination with the focus of the research on democratic legitimacy, this leads to an evaluation model along the lines of the input, throughput and output legitimacy of political decision-making processes (Schmidt 2013; Scharpf 1999).
Table 5.1
Overview of case studies
Case | Scale | Country

Monitoring
  TheyWorkForYou | National | Great Britain
  Abgeordnetenwatch | National | Germany

Agenda setting (informal)
  Petities.nl: Dutch e-petitions site | National | Netherlands
  Open Ministry Finland: crowdsourcing for law proposals | National | Finland

Agenda setting (formal)
  Iceland: crowdsourcing for a new constitution | National | Iceland
  Future Melbourne Wiki: crowdsourcing for city planning vision | Local | Australia
  Predlagam.vladi.si: platform for e-proposals and e-petitions | National | Slovenia
  European Citizens’ Initiative: citizens’ proposals for new EU laws | European | EU
  Participatory budgeting Berlin | Local | Germany
  Internetconsultatie.nl: consultation on draft laws | National | Netherlands
  Futurium: consultation on EU (digital) policy making | European | EU
  Your Voice in Europe: (open) public consultation on EU policy | European | EU
  European Citizens’ Consultation: pan-European consultation on the future of Europe | European | EU

Decision-making (non-binding)
  Pirate Party Germany | National/district | Germany
  Five Star Movement | National | Italy
  Podemos | National | Spain
  Participatory budgeting Belo Horizonte | Local | Brazil
  Participatory budgeting Paris | Local | France
  Participatory budgeting Reykjavik | Local | Iceland

Decision-making (binding)
  Voting for Spitzenkandidaten in the 2014 EP elections within the Green Party | European | EU
  E-voting for elections | National | Estonia
  E-voting for elections/referenda | National | Switzerland
Fritz W. Scharpf (1999) divided democratic legitimisation into input legitimacy, judged in terms of the EU’s responsiveness to citizen concerns as a result of participation by the people, and output legitimacy, judged in terms of the effectiveness of the EU’s policy outcomes for the people. Vivien Schmidt (2013) added a third criterion to this theorisation of democratic legitimacy for the evaluation of EU governance processes: throughput legitimacy, which judges those processes in terms of their inclusiveness and openness to consultation with the people.
The distinction between the three criteria for democratic legitimacy helps to understand the particular relevance of the democratic deficit in times of the recent and current EU crises. Due to its transnational character, the legitimisation of the EU institutions can hardly be rooted in strong channels of information from citizens (input legitimacy) and consultation with citizens (throughput legitimacy); they must therefore rely on legitimising their policies through the quality of their output, that is, on their decisions and regulations being in the best interest of, and thus being supported by, the citizenry (output legitimacy). That the means of the EU institutions are restricted in this latter respect as well has a special bearing in times of crisis. The weaker output legitimacy becomes, the more problematic the missing input legitimacy is, as the apparent difficulty of establishing consensus on, for example, a joint European policy to address the refugee problem shows. In a situation where strong decisions have to be taken at the EU level (beyond national interests), input but also throughput legitimacy is urgently needed.
The three types of legitimacy pose different demands on digital tools for citizen involvement. In the following paragraphs we will address these different demands.
Regarding input legitimacy, the use of digital tools will be assessed for how it enhances the voice of citizens in the political decision-making process. “Voice” concerns the way in which affected citizens are able to influence the political agenda (Manin 1987). To what extent are citizens enabled to express their wishes and interests in political decision-making? How can citizens get an issue onto the political agenda? Is there equal opportunity for citizens to voice their concerns? Are citizens sufficiently supported in their efforts to make their voices heard in the process (i.e. interaction support)? Is the tool user-friendly (i.e. tool usability)?
Regarding the throughput legitimacy, an evaluation will be made of how digital tools contribute to the quality of the deliberation process, in terms of an inclusive dialogue and a careful consideration of options (Cohen 1989). Relevant questions are: to what extent do the views of the citizens expressed by the digital tool represent the views of the general population (i.e. representation)? How is the diversity of views within the population (including minority views) reflected in the process? Are the different policy options carefully considered in the deliberation process? Do the citizens have access to all the relevant information about the decision-making process to which the results of the digital citizen involvement should contribute?
Concerning the output legitimacy, responsiveness to the arguments and proposals of citizens (Cohen 1989) and effectiveness (Scharpf 1999) will be evaluated, along with the accountability of decisions made. To what extent do the tools substantially contribute to the political decisions made (i.e. democratic impact)? How do the digital tools contribute to feedback? Is information provided about the decision-making process and its outcomes (i.e. accountability)?
The cases are described based on the questions mentioned in Table 5.2 on the evaluation framework. Each case description has at least four sections: an introductory section (i.e. short description of the digital tool), one on the participants, one on the participatory process and one on the results of the digital tool.
Table 5.2
Evaluation framework for assessing digital tools
Input legitimacy
  Demands: information/equality of opportunity; tool usability; interaction support; voice
  Specific questions:
  • Has the possibility to participate been effectively communicated to the target group?
  • Is the tool accessible to every member of the target group?
  • Are the participation tools considered usable, reliable and secure?
  • How and to what extent are participants enabled to express their wishes and interests?
  • How and to what extent are the participants able to set the (political) agenda?
  • Does the design help to involve citizens beyond the participation elite?

Throughput legitimacy
  Demands: deliberation quality; representation; diversity/inclusion
  Specific questions:
  • To what extent is information provided about the complete decision-making process, and how is citizen participation part of it (during the process)?
  • How is information provided to the participants about the issues at stake?
  • Does the tool encourage an interactive exchange of arguments between participants?
  • Does the tool encourage interaction between the views of participants and those of officials/politicians?
  • To what extent are the participants representative of the target group?
  • To what extent is the input of and/or conversation between participants moderated?
  • How is the diversity of participants’ views managed (aggregated?) in the process; are minority standpoints included?

Output legitimacy
  Demands: (cost-)effectiveness; democratic impact; accountability; responsiveness
  Specific questions:
  • How does the instrument contribute to the decision-making process and its outcomes?
  • Does the tool increase the transparency of the issues at stake?
  • Does the tool help to enhance accountability, informing who is responsible for what action?
  • How are participants informed about the outcomes and about what has been done with their contributions (afterwards)?
  • Does the process provide space for officials/politicians to make their own judgement, selection or assessment?

5.2 Data Collection

Each individual case is thoroughly studied. All aspects of the evaluation framework are covered in a structured template that forms the empirical checklist for the case studies. Empirical data on all these aspects come from different data sources and methods of data collection, namely:
  • (grey) literature research
  • standardised online questionnaire
  • semi-structured interviews
Our data collection strategy thus hinges on methodological triangulation: we used more than one method and source to gather data on the 22 cases, in order to cross-check our data and to obtain construct validity (an effective methodology and a valid operationalisation) (Fielding and Warnes 2009). The elementary data for the case studies came from the (grey) literature about the case. In addition, two respondents per case were interviewed: (1) a professional involved in the case and (2) an expert who has scientifically studied and/or reflected on the case. The data collection was finished in February 2017.
The interviews took place in two steps. First, the interviewees were asked to complete a standardised online questionnaire evaluating the digital tool. For the e-voting cases a separate questionnaire was created, because not all questions were applicable to them. The draft questionnaires were pre-tested in a pilot, and feedback was received from two external experts, which led to several adjustments in the questionnaire.
Second, the respondents were interviewed face-to-face, by telephone or via Skype, with follow-up open questions in sessions of no more than one hour. The individual responses of the professionals and experts guided these semi-structured interviews. The open questions addressed, in a more qualitative way, the motivations behind respondents’ evaluation scores, and aimed at a better understanding of the success factors, risks, challenges and EU suitability of the specific digital tool. In addition, unresolved issues within the case study (inconsistencies in the data, or aspects on which no information could be found in the literature) were discussed with the respondents. The interviewees were able to comment on the transcript of the interview as well as on the draft case study.
The data collection was conducted from 2016 until February 2017. In a few of the cases the latest developments in the following months (2017–2018) are addressed in the case descriptions.

5.3 Qualitative Comparative Analysis (QCA)

To analyse the case descriptions based on the findings of the desk research, the questionnaire and the interviews, the technique of Qualitative Comparative Analysis (QCA) was used.2 QCA is a technique for the systematic comparison of different case studies. Its intention is to integrate qualitative, case-oriented and quantitative, variable-oriented approaches (Ragin 1987). The QCA technique aims at “meeting the need to gather in-depth insight into different cases and to capture their complexity, while still attempting to produce some form of generalization” (Rihoux and Ragin 2009, xvii).
Our research has an intermediate-N design, comprising 22 cases. This sample is too large for in-depth analysis alone and too small for conventional regression analysis, but it is well suited to QCA (cf. Gerrits and Verweij 2015). It is particularly in such intermediate-N research designs that QCA helps to acknowledge internal case complexity on the one hand, while enabling cross-case comparison on the other (Rihoux and Ragin 2009, xvii).
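The truth-table step at the heart of crisp-set QCA can be illustrated with a minimal sketch. The condition names (moderation, formal, eu_scale), the case codings and the outcome below are purely hypothetical, invented for this illustration; they do not reproduce the actual coding used in this study.

```python
# Hypothetical illustration of the crisp-set QCA truth-table step.
# Each case is coded 0/1 on a few conditions and on the outcome.
cases = {
    "CaseA": {"moderation": 1, "formal": 1, "eu_scale": 0, "impact": 1},
    "CaseB": {"moderation": 1, "formal": 1, "eu_scale": 1, "impact": 1},
    "CaseC": {"moderation": 0, "formal": 1, "eu_scale": 0, "impact": 0},
    "CaseD": {"moderation": 1, "formal": 0, "eu_scale": 0, "impact": 0},
    "CaseE": {"moderation": 0, "formal": 0, "eu_scale": 1, "impact": 0},
    "CaseF": {"moderation": 1, "formal": 1, "eu_scale": 0, "impact": 1},
}
conditions = ["moderation", "formal", "eu_scale"]

def truth_table(cases, conditions, outcome="impact"):
    """Group cases by their configuration of conditions and compute,
    per configuration, how consistently the outcome is present."""
    rows = {}
    for name, coding in cases.items():
        config = tuple(coding[c] for c in conditions)
        rows.setdefault(config, []).append(coding[outcome])
    return {
        config: {"n": len(outcomes),
                 "consistency": sum(outcomes) / len(outcomes)}
        for config, outcomes in rows.items()
    }

for config, stats in sorted(truth_table(cases, conditions).items()):
    labels = ", ".join(f"{c}={v}" for c, v in zip(conditions, config))
    print(f"{labels}: n={stats['n']}, consistency={stats['consistency']:.2f}")
```

Each truth-table row groups all cases that share one configuration of conditions; a consistency of 1.0 means every case with that configuration shows the outcome. In QCA proper, such rows are then subjected to Boolean minimisation to derive parsimonious combinations of conditions associated with the outcome.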
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1
We included two cases of application of digital participation tools from outside Europe (Belo Horizonte and Melbourne) because we hold that these can help to set up effective and efficient digital tools at the EU level (just like cases at the local and national level can).
 
2
In Chap. 12 the QCA method is explained in more detail.
 
Literature
Cohen, J. (1989). Deliberation and democratic legitimacy. 1997, 67–92.
Fielding, N., & Warnes, R. (2009). Computer-based qualitative methods in case study research. In D. Byrne & C. C. Ragin (Eds.), The Sage handbook of case-based methods (pp. 270–288). London: Sage.
Gerrits, L., & Verweij, S. (2015). Taking stock of complexity in evaluation: A discussion of three recent publications. Evaluation, 21(4), 481–491.
Kersting, N. (2014). Online participation: From “invited” to “invented” spaces. International Journal for Electronic Government, 6, 260–270.
Manin, B. (1987). On legitimacy and political deliberation. Political Theory, 15(3), 338–368.
Ragin, C. (1987). The comparative method: Moving beyond qualitative and quantitative methods. Berkeley: University of California Press.
Rihoux, B., & Ragin, C. C. (2009). Introduction. In B. Rihoux & C. C. Ragin (Eds.), Configurational comparative methods: Qualitative comparative analysis (QCA) and related techniques (pp. xvii–xxxv). Thousand Oaks, CA: Sage.
Scharpf, F. W. (1999). Governing in Europe: Effective and democratic? Oxford: Oxford University Press.
Schmidt, V. A. (2013). Legitimacy in the European Union revisited: Input, output and throughput. Political Studies, 61(1), 2–22.
Metadata
Title
Introduction to the Case Study Research
Authors
Ira van Keulen
Iris Korthagen
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-27184-8_5
