
Open Access | 2014 | Chapter

Research Funding in Open Science

Authors : Jörg Eisfeld-Reschke, Ulrich Herb, Karsten Wenzlaff

Published in: Opening Science

Publisher: Springer International Publishing


Abstract

The advent of the Open Science paradigm has led to new interdependencies between the funding of research and the practice of Open Science. On the one hand, traditional revenue models in Science Publishing are being questioned by Open Science methods, and new revenue models in and around Open Science need to be established. This only works if researchers make large parts of their data and results available under Open Access principles. If research funding wants to have an impact within this new paradigm, it must require scientists and scientific projects to make more than just text publications available according to Open Access principles. On the other hand, it is still open to discussion how Research Funding itself could become more open. Is it possible to arrive at a new understanding of financing science shaped by transparency, interaction, participation, and stakeholder governance, in other words to reach the next level as Research Funding 2.0? This article focuses on both aspects: firstly, how Research Funding is promoting Open Science, and secondly, what an innovative and open Research Funding might look like.
Notes
The ultimate act of Scholarship and Theater is the art of selling
George Lois

Research Funding Policies: Pushing forward Open Science

In the past decades, with new technology allowing interactive communication between content producers, content consumers, and intermediaries such as the publishing industry, there has been tremendous pressure on journalists, politicians, and entrepreneurs to become more accessible. The Web 2.0, a buzzword comprising phenomena such as blogs and social networks, is shaping not just communication, but also decision-making in communities ranging from Parliamentarians to CEOs, and from TV staff to newspaper editors.
The scientific community has not felt the same amount of pressure. Nevertheless, a few scientists have taken the lead: Science blogs are increasingly popular and scientists tweet and interact with a larger audience. Universities have opened pages on Facebook and other social networks, or have created internal networks for communication. We have even witnessed the rise of social networks that exclusively address scientists (see Academia Goes Facebook? The Potential of Social Network Sites in the Scholarly Realm). A silent revolution is on its way with more and more students entering the ivory tower of academia with a set of communication expectations based on real-time communication across borders, cultures, and scientific clubs. The ivory tower opened up some windows which are now being showcased as Science 2.0.
Meanwhile, a second silent revolution has been taking place. In 2002, the Soros-founded Budapest Open Access Initiative1 called for more publications in line with its principles. Ten years later, Open Access has spread to a plethora of scientific communities, publishers and journals (Björk et al. 2010; Laakso et al. 2011; Laakso & Björk 2012). The more common Open Access has become, the more people start to think about what other sorts of scientific information might be made available openly. As is the case with Open Access, for many initiatives striving for Openness in the context of scientific information it is quite unclear what kind of Openness they are claiming or how they define Openness. Some Open Access advocates are satisfied if scientific publications are available online and free of charge (so-called Gratis Open Access), while others want these publications to be made available according to the principles of the Open Definition,2 which applies the Open Source paradigm to any sort of knowledge or information, including the rights to access, reuse, and redistribute it.
Considering the Open Science concepts as an umbrella, there are several initiatives (see Herb 2012) that want to open up nearly every component or single item within research and scientific workflows, e.g.:
  • Open Review, which includes both the review of funding proposals and of articles submitted for publication, the latter traditionally conducted as peer review. Open Review does not so much aim for Openness according to the Open Definition or the Open Source principles; rather, it is meant to make review processes more transparent, impeding cliquishness between submitting scientists and reviewers who are colleagues (Pöschl 2004).
  • Open Metrics as a tool for establishing metrics for the scientific relevance of publications and data that are independent of proprietary databases like the Web of Science or the SCOPUS database, which not only charge fees but also disallow unrestricted access to their raw data.
  • Open Access to scientific data according to the Panton Principles.3
  • Open Access to scientific publications.
  • Open Bibliography, meaning Open Access to bibliographic data.
As we can see, some of these initiatives focus on free or Open Access to science related information (in the shape of scientific data, scientific publications, or bibliographic data), while others promote a more transparent handling of the assessment processes of scientific ideas and concepts (such as Open Review and Open Metrics).
Many prominent funding agencies have already adopted policies that embrace single elements of an Open Science. Among others, the National Institutes of Health NIH,4 the Wellcome Trust,5 the European Research Council,6 and the upcoming European Commission Framework Horizon 20207 also require funded projects to make project-related research data and publications freely available.
The unfolding science of the future will be characterized not only by seamless and easy access, but also by interaction, networked and integrated research information workflows and production cycles, openness, and transparency. Science (at least in the Western hemisphere) was an open process in antiquity, having been debated in agoras in the centre of Greek cities. Even in the Roman Empire, the sharing of ideas across the Mediterranean Sea had a profound impact on civilisation—the French, Swedish, English, Italian and German languages attest to the common linguistic principles that developed in this era. Only with the ideological dominance of the Catholic doctrines following the collapse of the Roman Empire did science retreat to monasteries, and scholarly debate to universities and peer-reviewed science communities. The Enlightenment ensured that the educated citizenry became involved in science, but only the Internet has pushed the possibility for a complete citizen science, not unlike how the Greek science community would have seen the debate.
Even though the online media and the initiatives mentioned above brought Openness back to scientific communication, one might ask what research funding compliant with the paradigms of Openness and Science 2.0 might look like. As we have seen, many funding agencies require fundees to make their project-related research results (as data or text publications) more or less open, or at least freely available, but until now the processes of research funding themselves have hardly been considered relevant to Open Science scenarios and remain closed, hidden, and opaque, in contradiction to any idea of Openness.

Research Funding at Present: Limitations and Closed Discourses

Usually, applications for research funding are reviewed and assessed in closed procedures similar to the review and assessment of articles submitted for publication in scientific journals. This also entails that the reviewers are unknown to the applicant, while, on the other hand, the applicant is known to the reviewers (so-called single blind review). Horrobin describes the process of evaluating a funding application as follows:
“A grant application is submitted. The administrators send it to reviewers (usually two) who are specialists in the field and therefore competitors of the applicant. A committee (usually between ten and 30 members) assesses the application and the reviewers’ reports, perhaps with a commentary from the administration.” (Horrobin 1996, p. 1293). Not only the sequence of events involved in the funding process, but also the results achieved through the funding as well as the publication of the results show similarities: in both contexts, the so-called Matthew Effect (Merton 1968) is evident. This effect describes the fact that authors with an already high citation count are more likely to keep receiving a high number of citations in the future, and that, in the same vein, institutions already attracting vast amounts of funding can expect to pull in more funds than other institutions (see Fries: The Social Factor of Open Science). A recent study by the Sunlight Foundation reveals this effect, for example, in the funding patterns of the National Science Foundation NSF: “Twenty percent of top research universities got 61.6 % of the NSF funding going to top research universities between 2008 and 2011.” (Drutman 2012).
Even the handing-over of the final funding decision from the peers to so-called Selection Committees, whose verdicts are supposed to incorporate the judgments of the peers, has led to similar results (v. d. Besselaar 2012). Furthermore, peer decisions on research funding from the review process do not appear to be objective: Cole, Cole & Simon presented reviewers with a series of accepted as well as declined funding applications and examined the consistency of the (second) judgment. The result: no significant connection between the first and the second decision on the eligibility of a funding proposal could be established. The results indicate “that getting a research grant depends to a significant extent on chance. The degree of disagreement within the population of eligible reviewers is such that whether or not a proposal is funded depends in a large proportion of cases upon which reviewers happen to be selected for it” (Cole et al. 1981, p. 881). A study by Mayo et al. reaches similar conclusions: it “found that there is a considerable amount of chance associated with funding decisions under the traditional method of assigning the grant to two main reviewers” (Mayo et al. 2006, p. 842). Horrobin even diagnosed in 2001 “an alarming lack of correlation between reviewers’ recommendations” (Horrobin 2001, p. 51).
Although the review processes for publications and for funding proposals are similar, the consequences of distortions in the reviewing of funding applications are far more dramatic. Whereas even a mediocre article, after a series of failed submissions, can hope to be eventually accepted by some lesser journal, an application for research funding is stymied from the beginning by the paucity of funding organizations: “There might often be only two or three realistic sources of funding for a project, and the networks of reviewers for these sources are often interacting and interlocking. Failure to pass the peer-review process might well mean that a project is never funded.” (Horrobin 2001, p. 51).
Horrobin suggests that the review process for research funding is inherently conservative, as evidenced by the preference for established methods, theories, and research models, and that reviewers are furthermore “broadly supportive of the existing organization of scientific enterprise”. He summarizes: “it would not be surprising if the likelihood of support for truly innovative research was considerably less than that provided by chance.” (2001, p. 51). Consequently, the funding bodies fund “research that is fundamentally pedestrian, fashionable, uniform, and second-league—the sort of research which will not stand the test of time but creates an illusion that we are spending money wisely. The system eliminates the best as well as the worst and fails to deliver what those providing the funds expect.” (Horrobin 1996, p. 1293). The preference for mainstream research is thus an impediment to innovation: “The projects funded will not be risky, brilliant, and highly innovative since such applications would inevitably arouse broad opposition from the administrators, the reviewers, or some committee members.” (Horrobin 1996, p. 1293). In addition, traditional research funding promotes uniformity: “Diversity—which is essential, since experts cannot know the source of the next major discovery—is not encouraged.” (Horrobin 1996, p. 1294). In a meta-study on the effect of peer reviewing in the funding process, Demicheli and Di Pietrantonj came to the sobering conclusion that “No studies assessing the impact of peer review on the quality of funded research are presently available.” (Demicheli and Di Pietrantonj 2007, p. 2).
Critics of classic research funding are therefore demanding, among other alternatives, the allocation of part of the available funds by lot through an innovation lottery (Fröhlich 2003, p. 38), or the assignment of funds in equal parts to all researchers: “funds [should be] distributed equally among researchers with academic (…) positions” (Horrobin 1996, p. 1294). Additionally, the application of the Open Review described above would ensure greater transparency of the review process as well as prevent or counteract distortions; in actual fact, however, Open Review is hardly practiced in funding processes.8
In the following, we examine to what extent crowdfunding and other innovative funding procedures and channels may serve as an alternative to traditional research funding.

Open Research Funding

Open Science and Open Research Funding share mutual spheres of interest. Both want to advance science through the involvement of citizens, and both want to make content available that was previously hidden behind the paywalls of traditional science publishers, the informal boundaries of scientific peer communities, or the formal boundaries established by private or public funding bodies. It is instructive to compare how two areas of content creation in a similar situation have addressed this demand for open content: journalism and the arts. In both realms, the suppliers of content vastly outnumber the financiers of content.
There are many more journalists, writers, and photographers out there willing to provide content than there are people willing to invest in large news corporations, which before the digital era were the only institutions capable of funding large-scale news publishing. Notwithstanding the bromide that the Internet has allowed everybody to publish, it has also allowed everyone to find a financier for publishing: from self-publishing on the eBook market through micropayments to crowdfunding sites like Spot.us9 or Emphas.is,10 we have seen the gradual development of democratized news funding.
The same holds for the arts. While true skill in the arts still requires perseverance, endurance, and not least talent, the Internet has allowed artists to broaden their audience and reach out to fans, thus converting them into patrons of the arts. Artists therefore now enjoy avenues outside the traditional mechanisms by which content is produced, sold, and licensed.
Let us examine some cases where this new freedom to find financiers for content has blended dynamically with Open Access principles. In 2011, the Kickstarter11 project “Open Goldberg Variations”12 raised US$ 23,748 to record the Bach Goldberg Variations for release into the public domain13:
“We are creating a new score and studio recording of J.S. Bach’s Goldberg Variations, and we’re placing them in the public domain for everyone to own and use without limitations on licensing. Bach wrote his seminal work over 270 years ago, yet public domain scores and recordings are hard or impossible to find. Until now!” reads the introduction to the Kickstarter project.
The focus of the project is to generate funds for a piece of music that can be found in music stores across the world, but not in the public domain with easy redistribution, sharing, and mash-up possibilities.
A similar idea drove the German platform Sellyourrights.org.14 Artists were encouraged to release their music into the public domain by having it crowdfunded by their fans. Unfortunately, the experiment was stopped when the German GEMA—a collective rights organization for artists—prohibited musicians (not just GEMA members) from sharing their work on the platform.15
Spot.us—an American platform to crowdfund journalism—was also motivated by public access to investigative journalism. All crowdfunded articles were released under a CC BY license, thus allowing attribution and sharing.16 Again, the motivation to create content outside of traditional publishing avenues is a key factor in the success of crowdfunding.

Open Research Funding: Some Considerations

A consideration of the criticism voiced against the traditional models of research funding suggests that innovative procedures of research funding should exhibit certain characteristics. For example, the decision-making process leading to a verdict for or against a funding request should be transparent. Ideally, the decision would be taken not only by a small circle of (frequently) only two reviewers, but would also involve the entire academic community of the subject in question. Moreover, the question arises whether in the age of electronic communication research funding still has to be based on bureaucratic models of delegating responsibility, or whether the direct participation and involvement of researchers as well as citizens might conceivably be practiced. Since research funding is not only subject to the inherent conservative tendencies mentioned above, but is also thematically restricted by the available funding programs, one can also ask whether there might not be more flexible vehicles better suited to the funding of niche projects unrelated to the buzzwords or academic discourses of the day.

Crowdfunding

Obviously, crowdfunding meets several of the aforementioned criteria. Jim Giles, commenting also on the public nature of crowdfunding, states in this context: “Crowd-funding—raising money for research directly from the public—looks set to become increasingly common.” (Giles 2012, p. 252). In principle, crowdfunding can also be more closely intertwined with the model of Citizen Science than traditional research funding organizations, for which only a select and narrowly defined circle of applicants, whether institutions or individuals, is eligible. While this pre-selection of fundees may be justified by quality assurance considerations, it also effectively excludes researchers on the basis of institutional (and thus non-academic) criteria—a practice that, in an age of rising numbers of Independent Researchers conducting their research outside of the university framework, may well be called into question. Giles also points out the connection between research and civic society established through crowdfunding: “It might help to forge a direct connection between researchers and lay people, boosting public engagement with science.” (2012, p. 252)
Publicly financed research nevertheless does not appear well-suited to crowdfunding in the classic sense: research as a process is lengthy, institutionally anchored, and team-based. It is also expensive. The results of the research process in many disciplines are not immediately ‘tangible’—rather they are documented in articles, books, or conference presentations, and more rarely in patents and new products. Researchers are normally not judged or evaluated according to how well they can explain their research to the public (not to mention to their students), but according to how successfully they present themselves to their peers in the academic journals, conferences, and congresses of their respective fields.
However, the most important impediment is the research process itself. In ancient times, the ideal of gaining knowledge was the open dialogue in the Agora. Our notion of science since the advent of modernity has been marked by the image of the solitary researcher conducting research work in splendid isolation either in a laboratory or library. Although this myth has long since been destroyed by the reality of current research, it is still responsible for the fact that a substantial part of the publicly financed academic establishment feels no pressure to adequately explain or communicate their research and its results to the public. In particular, there is no perceived need to involve the public even in the early stages of the research project, for example in order to discuss the research motivations and methods of a project.
An alternative model of research is described by Open Science. Open Science aims to publish research results in such a manner that they can be received and newly interpreted by the public, ideally in an Open Access journal or on the Internet. However, Open Science also means that research data are made public and publicly accessible. Yet this concept of Open Access is incompatible with current financing models which profit from limited access to research. The concept is, however, compatible with the basic notion underlying crowdfunding—that of the democratization of patronage of the arts, which bears many similarities to the public patronage of the sciences. A quick glance at the mutually dependent relationship existing between the ‘free’ arts and company sponsors, wealthy individuals, and the funding agencies of the public sector reveals that the supposedly ‘free’ sciences are tied up in an equally symbiotic relationship with political and economic interests.
The democratization of research financing does not necessarily lead to a reduction in dependency but rather increases reciprocity. Nevertheless, in a manner analogous to the creative industries, crowdfunding can also be used as an alternative, supplementary, or substitute financing instrument in research funding. Just as it does for film, music, literature, or theatre projects, crowdfunding in research has one primary purpose: to establish an emotional connection between the public and an object.
If seen in this way, the ideal of the ‘rational scientist’ seems to contrast with ‘emotional science’. Yet the enormous response elicited in our communications networks by a small NASA robot operating thousands of miles away on another planet testifies to the emotional potential inherent in science. The example of CancerResearchUK17 demonstrates that this potential can also be harnessed for crowdfunding. As part of the CancerResearchUK initiative, attempts were made to increase donations to scientific projects by means of crowdfunding. The special draw of the campaign was the chance for supporters to reveal their personal connection to cancer—be it the death from cancer of a friend or relative, the recovery, the effect of medicines, medical advances, or research.
What then should a crowdfunding platform for science look like if it is to be successful? One thing is clear: it will not look like Kickstarter, Indiegogo,18 or their German equivalents Startnext,19 Inkubato,20 Pling,21 or Visionbakery.22 The Kickstarter interface we have become accustomed to in the creative industries, and which can by now be considered ‘learned’ or ‘acquired’, would make little sense in a scientific research context. Kickstarter is successful because four key pieces of information are made easily graspable and transparent: Who started the crowdfunding project? How much money is needed overall? How much still has to be accumulated? What do I get for my contribution?
Presumably, crowdfunding for science and research will have to rely on entirely different types of information. A film, for example, is inextricably bound up with the name of a director, or producer, or actors; a team of a different stripe will not be able to replicate this effect. In science, however, the replicability of methods, arguments, and results is key—therefore research and researcher must be independent of each other. The knowledge gap a crowdfunding project is supposed to close is much more salient than the individual researcher. Thus, the big challenge faced by a crowdfunding platform for research is to visualize this gap.
The target amount is of much lesser concern in research crowdfunding than in crowdfunding for the creative industries. In the CancerResearchUK project, this information was visualized, with the running total of funds received indicated on the site, but for most donors the decisive factor in joining was not whether the total amount needed was £5,000 or £50,000, but the concrete result that a contribution of £5 or £50 would bring about.
Last, but not least, the rewards: for many research projects, much thought and creativity will have to be devoted to the rewards if a reward-based system of crowdfunding is to be favored over a donation-based system like Betterplace23. Reward-based crowdfunding on sites like Kickstarter makes it essential to provide some material or immaterial reward to incentivize a contribution; donation-based crowdfunding relies solely on the charitable appeal of a project.
Finding material rewards that are closely connected to the research process for science projects is not an easy proposition—after all, the results of such a project are much more complex than in most crowdfunding projects. Of much greater relevance than the question of what the scientist can give back to the community is hence the problem of what the community can give to the researcher. For this reason, the reward actually lies in the access to science and the research process that participation in the funding initiative allows.
A crowdfunding platform should therefore conceptualize and visualize the following three elements as effectively as possible: (a) visualization of knowledge gaps, (b) results achieved by funds, (c) participation options for supporters.
Although crowdfunding is still the exception among the financing instruments used for research projects, it has nonetheless advanced beyond the merely experimental stage. Among other projects, scientists were able to raise $ 64,000 through the crowdfunding platform Open Source Science Project24 OSSP for a study on the water quality of the Mississippi River (Giles 2012, p. 252). The way a scientific crowdfunding platform works so far is largely identical to the way platforms devoted to other content categories operate: “Researchers start by describing and pricing a project, which they submit to the site for approval. If accepted, the pitch is placed online and donors have a few weeks or months to read the proposal and make a donation. Some sites operate on a non-profit basis and channel all proceeds to researchers; others are commercial concerns and take a cut of the money raised.” (Giles 2012, p. 253).
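To make the workflow Giles describes more concrete, the following is a minimal Python sketch of such a platform's project life cycle. The status names, the donation flow, and the 5 % platform fee are illustrative assumptions, not features of any particular site:

from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    SUBMITTED = "submitted"   # researcher has described and priced the project
    LIVE = "live"             # accepted by the site, open for donations
    REJECTED = "rejected"
    CLOSED = "closed"         # donation window over, proceeds paid out

@dataclass
class Project:
    title: str
    target_amount: float                  # the price put on the project by the researcher
    status: Status = Status.SUBMITTED
    donations: list = field(default_factory=list)

    def review(self, accepted: bool) -> None:
        """Site-side approval step before the pitch goes online."""
        self.status = Status.LIVE if accepted else Status.REJECTED

    def donate(self, amount: float) -> None:
        if self.status is not Status.LIVE:
            raise ValueError("project is not open for donations")
        self.donations.append(amount)

    def close(self, platform_fee: float = 0.05) -> float:
        """End of the donation window: pay out proceeds to the researchers.
        Non-profit sites would set platform_fee = 0; commercial ones take a cut."""
        self.status = Status.CLOSED
        return sum(self.donations) * (1 - platform_fee)

# Usage, loosely modeled on the water-quality example mentioned above:
project = Project("Mississippi water quality study", target_amount=64_000)
project.review(accepted=True)
project.donate(50)
project.donate(200)
print(f"paid out: {project.close():.2f}")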

Social Payments in Science

Related to crowdfunding, but not entirely the same, are new tools known as social payments. Typically, these are micropayments given for content that already exists on the net. They share with crowdfunding the notion of many small payments generating a large income through accumulation. Flattr25 and Kachingle26 are two tools commonly associated with social payments. They differ slightly from each other, but share the idea that a content creator embeds a small button on their webpage, and a content financier, by pushing that button, sends a small payment to the content creator.
When the New York Times put their blogs behind a flexible paywall in 2010, Kachingle rose to the occasion and allowed readers to “kachingle” the New York Times blogs, in other words to transfer a little money to the writers behind the blogs every time they visited the site. The New York Times was not amused and sued Kachingle for using their trademarks—which, in the eyes of most commentators, was a reaction to new forms of financing typical of a news monolith.
Flattr, another social payment provider, has deep connections with the Creative Commons ecosphere. The Creative Commons website employs a Flattr button to earn micropayments27 and many bloggers put their content both under a CC license and a Flattr button. However, there is one catch: Creative Commons content is typically shared under a Non-Commercial clause, which would prohibit the use of Flattr on blogs licensing their content under such terms.28
How can social payments be applied to science? Scienceblogs are already using the social payment system—not necessarily for monetary gain, but also for sharing content, engaging in conversation with readers, and measuring relevance29:
“It is easy to find out how many people access a certain Internet site—but those numbers can be deceiving. Knowing that X number of people have clicked on my article on Y is no doubt a good start. But I have no real insight on how many had a genuine interest in precisely this article and have read my article and appreciated it and how many only found my site after a Google search and left after 5 s. There may be tools allowing me to find answers to these questions, but they will most likely require a lot of work and analysis. But if I have a Flattr-button under each of my articles, I can assume that only people who really appreciated reading them will click on it—after all, this click costs them real money.” says Florian Freistetter, author of a blog on Astronomy.
The real potential of social payments lies in their combination with Open Access journals, archives, and publications. Imagine, for instance, databases of publicly available data which allow the users of the content to flattr or kachingle the site whenever they visit it. This would allow money to be made from scientific content beyond the traditional licensing systems of universities and libraries. Imagine a university with a Flattr account filled with 100,000 Euros per year: every time a university member accesses a scientific journal, that access counts as a click, and at the end of the year the 100,000 Euros are divided among the journals in proportion to the clicks they received. This could generate a demand-based but fully transparent way of funding science, as the sketch below illustrates.
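As a back-of-the-envelope illustration of that idea, the following sketch divides a hypothetical annual Flattr budget among journals in proportion to the clicks they received; the journal names, click counts, and budget are invented for the example:

# Hypothetical access log: how often university members clicked through
# to each journal over the year.
clicks = {
    "Journal of Open Data": 4_200,
    "Atmospheric Chemistry and Physics": 2_800,
    "A Small Niche Journal": 350,
}

annual_budget = 100_000  # Euros in the university's Flattr account

total_clicks = sum(clicks.values())
payouts = {journal: annual_budget * n / total_clicks
           for journal, n in clicks.items()}

for journal, amount in payouts.items():
    print(f"{journal}: {amount:,.2f} EUR")
# The payouts sum to the annual budget, so the distribution is
# demand-based and fully transparent.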
Social payments could also be integrated into direct funding: for instance, scientists could receive a certain amount of public money, or money from funding institutions, which cannot be used for their own projects but must be donated to other projects in their discipline. Funds as yet unassigned would remain in a payment pool until the end of the year and then be divided up equally among all projects.
There seems to be some evidence30 showing that distributions of social payments follow roughly the sharing and distribution behavior of content. In other words, content which is often liked, shared, and tweeted is more likely to receive funds through Flattr.
Social payments are thus likely to generate an uneven distribution of science funding—controversial, popular articles and data might generate more income than scientific publications in smaller fields.
Groundbreaking research might profit from such a distribution mechanism, especially if a new idea applies to a variety of disciplines. The established citation networks of scholars and the Matthew Effect mentioned above might even be stabilized.
Social payments in combination with measuring social media impact could provide an innovative means of measuring relevance in science. Such indices would not replace traditional impact scores, such as appearances in journals, invitations to congresses, and third-party funding, but would allow assessment of the influence of scientific publications within the public sphere.
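A minimal sketch of what such a complementary index might look like is given below; the chosen signals and weights are arbitrary assumptions for illustration, not a proposed standard:

# A toy "social relevance" score for a publication, combining social payment
# and social media signals. The weights are invented for illustration.
def social_relevance(flattrs: int, tweets: int, shares: int, comments: int) -> float:
    weights = {"flattrs": 3.0, "tweets": 1.0, "shares": 2.0, "comments": 1.5}
    return (weights["flattrs"] * flattrs
            + weights["tweets"] * tweets
            + weights["shares"] * shares
            + weights["comments"] * comments)

# Such a score would complement, not replace, journal-based impact measures.
print(social_relevance(flattrs=12, tweets=40, shares=9, comments=5))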

Virtual Currencies in Science

All of the tools described above relate in one way or another to real cash flows in the overall economy. However, these mechanisms might also work with virtual currencies, which may be linked to existing currencies, but not in a 1-to-1 relationship.
In Flattr, it is customary to use the income earned within the system to flattr new content, without having to withdraw cash. The Flattr ecosystem thus generates its own measure of value. Similarly, on crowdfunding sites such as Sellaband31 or Sonicangel,32 fans can use some of the rewards they receive to fund new artists. The money stays inside the ecosystem of the platform. Virtual currencies are often used in games, whereby gamers can turn real money into virtual money such as Linden Dollars on Second Life or Farmdollars on Farmville; the virtual money buys goods and services inside the game, both from other players and from the game provider, and the earned income can be withdrawn at any time. It is conceivable that a scientific community could create its own virtual currency. The currency could be used to trade and evaluate access to data, publications, or other scientific resources. Let us imagine, for instance, that a scientist receives a certain amount of ‘Aristotle-Dollars’ for every publication released into the public domain. Based on the amount of ‘Aristotle-Dollars’ they earn, they receive earlier access to public data.
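A minimal sketch of how such a community currency could be tracked is given below; the reward per public-domain publication and the balance threshold for earlier data access are invented parameters, not part of any existing system:

from collections import defaultdict

REWARD_PER_PUBLICATION = 10   # 'Aristotle-Dollars' per public-domain publication (assumed)
EARLY_ACCESS_THRESHOLD = 50   # balance required for earlier access to public data (assumed)

class AristotleLedger:
    """Toy ledger for a community-internal virtual currency."""

    def __init__(self):
        self.balances = defaultdict(int)

    def record_publication(self, scientist: str) -> None:
        """Credit a scientist for releasing a publication into the public domain."""
        self.balances[scientist] += REWARD_PER_PUBLICATION

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        """Spend currency, e.g. to obtain access to a colleague's dataset."""
        if self.balances[sender] < amount:
            raise ValueError("insufficient Aristotle-Dollars")
        self.balances[sender] -= amount
        self.balances[receiver] += amount

    def has_early_access(self, scientist: str) -> bool:
        return self.balances[scientist] >= EARLY_ACCESS_THRESHOLD

ledger = AristotleLedger()
for _ in range(6):
    ledger.record_publication("Dr. A")
print(ledger.has_early_access("Dr. A"))  # True: balance of 60 exceeds the threshold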

Some Critical Reflections

Quality Assurance and Sustainability

One advantage of the peer review system is seen in the provision of quality assurance, although the procedure, as stated before, has been criticized. Some of the crowdfunding platforms hosting scientific project ideas also use peer review (for further details, see Giles 2012); for instance, SciFlies33 and OSSP only publish project ideas after an expert review. Moreover, only members of scientific institutions are allowed to present project ideas via OSSP. In one possible scenario, researchers could identify themselves in crowdfunding environments by means of an author identifier such as the Open Researcher and Contributor ID ORCID34 and document their expert status in this way (see Fenner: Unique Identifiers for Researchers). Project proposals from the #SciFund Challenge,35 on the other hand, were not subject to proper peer review but were scrutinized only in passing. Since the crowdfunding model, however, demands that each submission reveal the research idea and its planned realization, a mechanism of internal self-selection can be posited: it can be assumed that scientists will only go public in crowdfunding environments with projects that are carefully conceived.
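If ORCID iDs were used in this way, a crowdfunding platform could at least check that a submitted identifier is well-formed before linking it to a profile. The sketch below validates the check digit using the ISO 7064 MOD 11-2 scheme documented by ORCID; confirming that the iD belongs to a real researcher would still require a lookup in the ORCID registry:

import re

def is_plausible_orcid(orcid: str) -> bool:
    """Check the format and ISO 7064 MOD 11-2 check digit of an ORCID iD.

    This only verifies that the identifier is well-formed; it does not
    confirm that the iD is actually registered.
    """
    if not re.fullmatch(r"\d{4}-\d{4}-\d{4}-\d{3}[\dX]", orcid):
        return False
    digits = orcid.replace("-", "")
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    remainder = total % 11
    check = (12 - remainder) % 11
    expected = "X" if check == 10 else str(check)
    return digits[-1] == expected

# Example: a well-formed iD passes, a mistyped check digit does not.
print(is_plausible_orcid("0000-0002-1825-0097"))  # True
print(is_plausible_orcid("0000-0002-1825-0098"))  # False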
The same applies to plagiarism and the theft of ideas or data—these types of academic misbehavior would almost certainly be revealed through the public nature of the procedure. The same arguments have also been offered in support of Open Review. Ulrich Pöschl, editor of the journal Atmospheric Chemistry and Physics36 (ACP), stresses the fact that the transparency of the submission process in ACP increases the quality of submissions because authors are discouraged from proposing articles of mediocre or questionable quality (Pöschl 2004, p. 107): in the interest of self-protection and self-preservation, scientists can be expected to refrain from exposing and potentially embarrassing themselves in their community with premature or ill-conceived publications. Furthermore, crowdfunding relies upon self-regulation through the expertise of donors, who in most cases are able to judge the merits of a proposal themselves, so that weak proposals, if they are made public at all, will have very poor prospects. Some crowdfunding platforms also use forums as additional mechanisms of quality assurance; in FundaGeek’s37 “Geek Lounge”, for instance, potential donors can exchange their thoughts on the weaknesses or strong points of a project idea. Thanks to an expert discussion in the Kickstarter forum, one questionable project could be stopped without financial loss for the donors (Giles 2012, p. 253).
In spite of the positive outlook outlined above, scientific crowdfunding has yet to prove the advantages claimed for it. To conclude with Jim Giles: “For crowd-funding to make a real difference, advocates will have to prove that the process—which sometimes sidesteps conventional peer review—channels money to good projects, not just marketable ones.” (Giles 2012, p. 252). Also somewhat questionable is the long-term perspective of the projects: unlike classic research funders, crowdfunding donors can hardly oblige fundees to develop sustainable service infrastructures, for example. Conversely, crowdfunding, social payments, and virtual currencies may create new funding avenues that facilitate the funding of specific individual researchers rather than abstract projects with fluctuating staff. Small projects with a funding volume below the funding threshold of classic funders could also be financed with these instruments.

Plagiarism

As already mentioned, the public character of proposals for crowdfunding is more likely to expose plagiarism in projects than closed review procedures. For the same reason, researchers submitting their project ideas for crowdfunding demonstrate their claim to a scientific idea or method in a manner that can hardly be ignored, thus discouraging the plagiarizing of ‘their’ project. To put things into perspective, it must be mentioned that plagiarism or idea theft has also been reported for closed review procedures (Fröhlich 2003, p. 54).

Pop Science, Self-Marketing, and Verifiability

On account of its proximity to citizens and its public character, crowdfunding also bears the inherent danger of unduly popularizing research, especially if any individual may donate to a project. Even though internal crowdfunding platforms, in which only researchers can donate to a project proposal, may develop faster, it is conceivable—as with peer review in traditional funding—that mainstream research will be favored. Some also suspect that crowdfunding, but also social payments, could establish a disproportionate preference for applied research over basic research (Giles 2012, p. 253). The same could also be suspected for popular science or science that lends itself easily to media portrayal.
Crowdfunding, social payments, and virtual currencies place new demands on researchers’ self-marketing (Ledford 2012). These demands need not be a bad thing: a clear, succinct, and understandable presentation of a project proposal can only enhance the verifiability and testability of scientific concepts by eliminating the dense prose and difficult wording found in many funding applications (language that is often mandated by funders’ requirements), thus promoting the intersubjective verifiability of scientific concepts called for by the theory and philosophy of science.
A more solid grounding in the scientific community might be achieved if crowdfunding, social payments and virtual currencies were not applied in entirely open contexts, but rather only within scientific communities (if necessary under the umbrella of discipline-specific associations or learned societies). In such a scenario, however, the aspect of involvement of civic society would be lost.

What About Openness?

Although crowdfunding, social payments, and virtual currencies appear more transparent than traditional avenues of research financing, the question of the ‘openness’ or accessibility of the research results nevertheless arises. Whereas traditional financing institutions may require fundees to make project results publicly available, it is as yet unclear how projects funded through the innovative procedures detailed above might be mandated to make their results accessible to the public. Of equal importance may be the question of who owns the rights to a project’s results. In the interest of transparency, it might be desirable to make public the names of donors who contributed to a project, so as to identify and prevent potential conflicts of interest. Conversely, the risk of sponsorship bias must not be neglected. The term sponsorship bias refers to the production of results—consciously or unconsciously—that are consistent with the presumed expectations or desires of the financiers. To minimize the risks posed by conflicts of interest as well as sponsorship bias, it may be advisable to keep the identity of the financiers hidden from the fundees until the project’s end.

Open Access

This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Appendix

Appendix: A List of Crowdfunding Platforms for Scientific Projects

Footnotes
1
Budapest Open Access Initiative: http://www.soros.org/openaccess
 
8
At least some scientists make their funding proposals available after the review is finished (on their motivation see White 2012).
 
14
Sell your rights. The Blog: http://blog.sellyourrights.org/
 
Literature
Fröhlich, G. (2003). Anonyme Kritik: Peer review auf dem Prüfstand der Wissenschaftsforschung. Medizin - Bibliothek - Information, 3(2), 33–39.
Herb, U. (2012). Offenheit und wissenschaftliche Werke: Open Access, open review, open metrics, Open Science & open knowledge. In U. Herb (Ed.), Open Initiatives: Offenheit in der digitalen Welt und Wissenschaft (pp. 11–44). Saarbrücken: Universaar.
DOI: https://doi.org/10.1007/978-3-319-00026-8_16
