A specific definition of citizen science in IS research is provided by Levy and Germonprez (2017, p. 29): “Citizen science in IS research is a partnership between IS researchers and people in their everyday lives. Citizen science projects in the IS domain involve (a) IS phenomenon of interest to both citizens and scientists, (b) the intervention of citizens in the collection, collaboration, or co-creation of scientific endeavors for the purposes of scientific literacy education and a more informed public, and (c) citizens themselves not being the direct subject of scientific inquiry.” The definition highlights the intended knowledge transfer based on a “learning by doing” research approach and excludes scientific projects that involve citizens merely as participants of empirical studies and experiments (e.g., via citizen pools). The second constraint in particular is a hard delimitation: citizen science should not simply have participants provide their data, but ask them to intentionally collect, deliver, and/or use data about research objects – even about the behavior of people (e.g., data about children’s smartphone addiction detected by their parents). Finally, the IS phenomenon needs to be of interest to both researchers and citizens. However, we currently do not find many projects in IS research that meet these criteria. This leads to the question: Do the phenomena we investigate have the potential to interest and involve ordinary citizens on a broader scale, or is citizen science within IS research condemned to remain the subject of scattered individual projects in niche contexts?
To involve citizens in our research, we first need to consider how we can place and generalize our methods and theories in a (sometimes entirely) new context (Lee and Baskerville 2012; Levy and Germonprez 2017). In addition, they need to be comprehensible and applicable to citizens with no noteworthy IS background. Second, we need to attract citizens to our theories, methods, tools, research questions, and fields of interest – basically our knowledge. This will not be possible for all of our work, but let us highlight some methods and fields of research in IS that seem inherently suited to this context and which have been addressed (in a related context), among others, in the BISE journal:
- Participatory Design (e.g., Qaurooni et al. 2016; Simonofski et al. 2019)
- Co-Creation (e.g., Haki et al. 2019)
- User-centered Design (e.g., Grace et al. 2015)
- User-generated Content (e.g., Tilly et al. 2017)
- Design Science (e.g., Mueller et al. 2018)
- Crowdsourcing/Crowd-Reporting (e.g., Abu-Tayeh et al. 2018; Niemeyer et al. 2018; Schoder et al. 2014)
- Open Innovation (e.g., Smart et al. 2019)
- Gamification (e.g., Zhou et al. 2017)
- Ethics, e.g., regarding Privacy (e.g., Krasnova et al. 2012; Peukert and Kloker 2020)
To foster citizen science projects in IS research, we need to increase the interest of citizens in these kinds of mechanisms. They should not only be interested in using them, but also in researching them: inventing, testing, and evaluating their own mechanisms for, e.g., gamification or crowdfunding – and doing so in cooperation with professional researchers. Also, topics like ethics and privacy affect us now more than ever before and have a huge potential for citizens to engage in our research, e.g., to understand the use and limitations of the COVID-19 App. The researchers are then responsible for translating the citizen science projects into underlying theories and for providing the right infrastructure, training, and tools for observation and measurement (Budde et al. 2017): The right infrastructure is important because participation only works with a very low entry threshold, and citizens can typically hardly provide this infrastructure themselves. Robinson and Imran (2015) declare cost neutrality as the aim, which is even more important in developing countries where the diversity of access devices is high and technological literacy may be rather low (Basole and Karla 2011). The right training is important, as information quality is perceived differently by scientists and citizens (Lukyanenko et al. 2016). For scientists, information quality is primarily expressed as consistency and completeness according to a standardized observation protocol. For citizen scientists, quality of information “[…] also includes the extent to which the design of a specific project facilitates citizens’ abilities to spot something interesting, unexpected, or novel” (Lukyanenko et al. 2016, p. 448). Formal training should enable citizen scientists to make exactly this contribution – to spot the extraordinary while understanding that ordinary data also has to be recorded diligently. The right tools for observation and measurement are important, as the lack of experience and the “thrive for the interesting and novel” of citizen scientists remain a source of bias in the data (Budde et al. 2017). For this reason, Parsons et al. (2011) advocate that entering data should be as easy as possible and should not compel citizen scientists to make a possibly biased guess. They suggest letting them report the observed attributes directly instead of forcing a classification. Lukyanenko et al. (2019b) showed in a six-month field study and a subsequent laboratory experiment that instance-based user interfaces (reporting of attributes) are better suited for projects where the focus is on the absolute number of observations and the accuracy of the data, while class-based user interfaces (reporting of classes) are superior where the focus is rather on precision. In their experiment, citizen scientists reported on plants and animals; for other contexts, these findings may need to be reproduced.
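A minimal sketch may help to illustrate the difference between the two reporting styles. The following Python fragment is an illustration under our own assumptions, not taken from the cited studies; all class and field names (InstanceBasedReport, ClassBasedReport, observer_id, attributes, species_class) are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class InstanceBasedReport:
    # Citizen scientists report the attributes they actually observed.
    observer_id: str
    location: str
    attributes: List[str] = field(default_factory=list)  # e.g., ["red wings", "smaller than a pigeon"]

@dataclass
class ClassBasedReport:
    # Citizen scientists must choose a class from a predefined taxonomy,
    # which forces a possibly biased guess.
    observer_id: str
    location: str
    species_class: str  # e.g., "Cardinalis cardinalis"

# Instance-based capture keeps the raw observation and defers classification
# to experts or models; class-based capture yields precise labels directly.
report = InstanceBasedReport(observer_id="cs-042", location="49.01,8.40",
                             attributes=["red wings", "smaller than a pigeon"])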
This leads us to a major point in the discussion of citizen science: the replication of results. Exact replication is probably never possible, as the research projects depend too heavily on the concrete citizens and surroundings (time, location, …). However, some mechanisms can be used to ensure that each individual observation is correct – for example, requiring that observations be confirmed by at least two independent citizen scientists (Kosmala et al. 2016). Integrity mechanisms of distributed ledger technologies may be of help here (Nofer et al. 2017; Wortner et al. 2019). Further strategies to ensure objectivity, reliability, and validity include, e.g., expert validation or even statistical modeling of systematic error in order to assess the likelihood of false observations (Kosmala et al. 2016). Still, the limited replicability of results constitutes a drawback of citizen science that may remain inherent to a certain degree.
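The two-observer confirmation mentioned above could, for instance, be operationalized as sketched below. This Python fragment is a hedged illustration under our own assumptions; the function and variable names (accepted_observations, MIN_INDEPENDENT_REPORTS) are not part of Kosmala et al. (2016).

from collections import defaultdict

MIN_INDEPENDENT_REPORTS = 2  # accept an observation only after two independent reports

def accepted_observations(reports):
    # reports: iterable of (observation_key, observer_id) pairs
    observers_per_observation = defaultdict(set)
    for observation_key, observer_id in reports:
        observers_per_observation[observation_key].add(observer_id)
    return {key for key, observers in observers_per_observation.items()
            if len(observers) >= MIN_INDEPENDENT_REPORTS}

reports = [("oak-blossom-2020-04-01", "alice"),
           ("oak-blossom-2020-04-01", "bob"),
           ("rare-bird-2020-04-02", "alice")]  # only one observer, hence not accepted
print(accepted_observations(reports))  # {'oak-blossom-2020-04-01'}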