
Open Access 01.06.2012 | Original Paper

Coercion or empowerment? Moderation of content in Wikipedia as ‘essentially contested’ bureaucratic rules

Author: Paul B. de Laat

Published in: Ethics and Information Technology | Issue 2/2012


Abstract

In communities of user-generated content, systems for the management of content and/or their contributors are usually accepted without much protest. Not so, however, in the case of Wikipedia, in which the proposal to introduce a system of review for new edits (in order to counter vandalism) led to heated discussions. This debate is analysed, and arguments of both supporters and opponents (from the English, German and French language versions) are extracted from Wikipedian archives. In order to better understand this division of minds, an analogy is drawn with theories of bureaucracy as developed for real-life organizations. From these it transpires that bureaucratic rules may be perceived as springing from either a control logic or an enabling logic. In Wikipedia, then, both perceptions were at work, depending on the underlying views of participants. Wikipedians either rejected the proposed scheme (because it is antithetical to their conception of Wikipedia as a community) or endorsed it (because it is consonant with their conception of Wikipedia as an organization with clearly defined boundaries). Are other open-content communities susceptible to the same kind of ‘essential contestation’?

Introduction

Online communities with user-generated content that invite everybody to contribute come in various kinds. The term can refer to initiatives like photograph or video sharing sites, collective blogs or discussion sites, ‘social news’ sites, ‘citizen journals’, sites for the collective production of reference works, or—as the archetype of them all—‘open source software’ (OSS) projects. Such sites draw together contents ranging from text, images, photographs, and videos to source code. The cooperation involved ranges from just piling up all contents such as photographs and videos (‘loose’ collaboration), to working on a collectively evolving product such as pieces of text and source code (‘tight’ collaboration); or, in the terminology employed by Dutton (2008), the focus of ‘collaborative network organizations’ may range from ‘contributing’ (2.0) to ‘co-creating’ (3.0).
Inevitably, over time such communities have to introduce rules and regulations to structure interactions. Access to resources, entitlement to perform activities on them, and procedures, all have to be specified. Arrangements ultimately become implemented in software. Such internal governance employs tools like project architecture, formalization, division of roles, and rules for decision making (for OSS specifically cf. de Laat 2007). An important area is the management of incoming content and of participants who contribute. The terms for accepting and ‘processing’ content have to be agreed. This may involve a process of moderation: judgment of content, either by specific role occupants (‘moderators’) appointed to exercise such powers (‘moderation proper’), or by all users of the community (‘self-moderation’). Such moderation may be exercised after content has become public, or (also) before. In addition, contributors themselves may have to be monitored and disciplined by specified role occupants. This area of governance is obviously central to the functioning of any community and may therefore, on the face of it, evoke strong feelings and opinions, both in favour and against. However, the management arrangements adopted usually seem to be accepted without much questioning, right from the start of such communities.
Examples readily come to mind. As far as ‘contributing’ 2.0 communities are concerned, on Skinhead Forum, a blog for purported skinheads (now defunct), discussants continuously patrolled and scrutinized each other in order to uphold the ‘authentic’ skinhead identity. Transgressors of these boundaries were reprimanded and flamed. Ultimately the blog’s appointed ‘moderators’ could censor their posts or expel them altogether (Anahita 2006). Slashdot, a news site about technology-related matters (for ‘nerds’), solicits topics for discussion and comments on them from the crowd. Site editors first filter incoming topics—before publication. After publication they continuously scout comments for inappropriate or illegal content and deal with it; the contributors involved are warned or banned from the site by them. Moreover, comments are moderated as to whether they are constructive (a plus vote) or not (a minus vote). Apart from the professional site editors, these rights to moderate comments are distributed to ‘average and positive’, registered Slashdot contributors by a random process (for a limited number of moderation points). The privilege thus rotates over the Slashdot population as a whole. Users who are perceived as abusing their powers of moderation are removed from the moderator eligibility pool (Poor 2005; http://slashdot.org/faq).1 In Reddit, a social news site, registered users may submit news stories (as link, or summary text, or both) for subsequent discussion in the general ‘reddit’ or in specific ‘subreddits’. As a quality rating system, participants may vote on whether news items and/or comments are helpful or not. The sum total of these up votes (+1) and down votes (−1)—and how recent a contribution is—determines the order in which items and comments appear on their respective front pages. As in Slashdot, the editorial team monitors for off-topic, inappropriate, or illegal content and takes action (deletion of content and/or banning users) accordingly. ‘Moderators’ (initiators of subreddits being automatically appointed as such) may do likewise (http://www.reddit.com).
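For readers who want to see the mechanics at work, the ordering principle just described for Reddit (damped net votes combined with a recency bonus) can be sketched in a few lines of Python. This is an illustrative approximation in the spirit of Reddit’s published ‘hot’ ranking, not the site’s production code; the reference epoch and the 45,000-second weighting constant are assumptions of the sketch.

    from datetime import datetime, timezone
    from math import log10

    EPOCH = datetime(2010, 1, 1, tzinfo=timezone.utc)  # arbitrary reference point (assumption)

    def hot_score(up_votes: int, down_votes: int, submitted: datetime) -> float:
        """Toy ranking score: damped net votes plus a bonus for recency."""
        net = up_votes - down_votes
        sign = 1 if net > 0 else -1 if net < 0 else 0
        # Damp the vote total logarithmically: the 10th vote matters more than the 1000th.
        magnitude = log10(max(abs(net), 1))
        # Newer submissions earn a steadily growing bonus, so fresh items can
        # outrank older ones that have accumulated more votes.
        age_seconds = (submitted - EPOCH).total_seconds()
        return sign * magnitude + age_seconds / 45000.0

    # Front pages would then list items (and comments) in descending score order:
    # sorted(items, key=lambda i: hot_score(i.ups, i.downs, i.submitted), reverse=True)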
‘Co-creative’ 3.0 communities usually seem to accept such arrangements for managing content and/or contributors as well. In Citizendium, an ‘open source’ encyclopedia, comments from all (once registered as ‘author’) on the site’s entries are welcome (wiki format). Collegial ‘editors’ (appointed by the editor-in-chief) exercise ‘gentle oversight’ over their development and finally may award the status of ‘approved’ article. If necessary, abusive users are expelled from the site by a ‘constable’, who is appointed after public consultation (de Laat 2012). In communities of OSS, moderation is applied prior to ‘publication’. Commonly a division of roles obtains, typically of project owner–developers–observers. It is only up to developers (and beyond) to contribute to the source code tree—either their own code, or code suggested by observers that has passed the test of scrutiny. Contributors with unhelpful—let alone disruptive—suggestions not only do not obtain developer status, they are also simply ignored (de Laat 2010).
As may be expected, in many individual instances comments, interventions and/or disciplining decisions evoke strong reactions from the participants involved. Nobody particularly likes to be ignored, corrected, overruled, flamed, ostracized, or punished. But the point I want to make is a different one: while particular outcomes of a system may be contested, the system as such does not seem to be in dispute in the cases described above. Their users seem to consent that the particular system adopted is legitimate. This observation applies regardless of whether the moderation systems employed were more hierarchical (as to a small extent in Slashdot and to a great extent in OSS) or more egalitarian (all others). It would seem, moreover, that the moderation systems involved were in place right from the start of these ventures. Some were slightly refined and tweaked thereafter (Slashdot in particular; cf. Poor 2005), but the basics of their setup have remained intact. According to my observations, their legitimacy has not been questioned from the beginning until the present day.2 So, as a general rule, users’ consent to systems of moderation and contributor management has been continuous.
This article focuses on an intriguing exception to this rule: in Wikipedia, the online encyclopedia par excellence, over the years an editorial policy had evolved of equal wiki access for all, without specific moderators, taking a ‘wikiquette’ into account, though with ‘administrators’ being entitled to execute punitive actions (cf. Stvilia et al. 2008; de Laat 2010). This policy had been accepted without much protest. Then, recently, the first steps on the path of moderation proper were taken: a specific proposal to introduce a system of reviewing edits before they appear on screen (aka ‘flagged revisions’). The new proposal led to a remarkable division of views: some embrace it, others denounce and reject it in equally vigorous terms. Remarkably, this sharp division is reproduced across several language versions of Wikipedia (although the extent of adherence to one stance or the other varies). An actual split up of some of the language communities (the English in particular) over this issue cannot be ruled out. It is this review proposal that forms the central topic of this article.
My approach is as follows. Discussions, debates, and polls about (varieties of) the scheme in the three largest Wikipedia language versions (English, German and French) are analysed. The main arguments pro and con are extracted from the Wikipedian archives—and shown to be roughly similar across Wikipedians speaking the English, German or French language. Proponents argue that the scheme is a welcome tool to combat vandalism, while their opponents maintain that the original egalitarian conception of Wikipedia is sacrificed, with bureaucracy increasing. These contrasting ‘definitions of the situation’ are then analysed by a comparison with real-life organizations—which enables us to interpret the proponents’ perception of the scheme as springing from an enabling logic, and the opponents’ perception of the scheme as emanating from a control logic. Finally, I speculate about the reasons behind this unusual spectacle of an ‘essentially contested’ system of moderation. Basically, two equally consistent visions are pitted against each other: Wikipedia as a reliable, encyclopedic institution on the one hand, and Wikipedia as a solidary community on the other. This immediately raises the further question of whether such contrasting visions also exist in other open-content communities.

Wikipedian governance

Editorial policy within Wikipedia is still very similar to the way it started a decade ago: all users (called ‘editors’), anonymous or not, may change existing entries and provide comments (on an associated ‘talk page’). Any changes are immediately effectuated in the entry (as it appears in a ‘wiki’). Upon registration with a user name, users may also create new entries. While an entry evolves over time, users have to sort it out between themselves; they moderate among themselves, no specific moderators are appointed.
After this egalitarian setup had been put into operation, Wikipedians very soon discovered to their dismay that ‘disruptive behaviour’ was not to be ruled out. One variety of such behaviour is directed against the entries themselves: so-called ‘vandals’ take pleasure in disfiguring pages in ways ranging from subtle to vulgar. Another variety takes aim at their co-contributors: they are abused, harassed and/or threatened (on talk pages, by email, and the like). In response, two initiatives unfolded. For one thing, Wikipedian ‘etiquette’ was formulated, emphasizing good faith, civility, warmth, and forgiveness. For another, disciplining roles were introduced to keep abuses in check: ‘administrators’—appointed by ‘bureaucrats’. Administrators primarily freeze disputed pages for a while (‘page protection’) or delete them altogether. In addition, they may ban specific, troublesome users from editing for a while (‘blocking’ users).
In spite of these measures, vandalism very much continued to plague Wikipedia. In general, anonymous users turn out to be more prone to vandalism than registered users. As a result, of the total number of vandalising edits (around 4% of all edits), most (97%) are produced by anonymous users.3 In order to address these concerns and enhance the encyclopedia’s reliability, several initiatives are unfolding. Software bots, specifically coded for the occasion, are allowed to scan and patrol entries 24 hours a day. Moreover, WikiProjects are formed that mobilize users having a focus on specific areas; relevant entries are taken into custody in order to foster improvements. A quality grading procedure is part of this initiative. Finally, the WikiTrust extension (available for Mozilla Firefox only) has been developed, which colours the sentences of an entry according to the time they have survived intact within the encyclopedia. The darker their colour, the younger the text is, and—so the assumption goes—the less reliability a user may impute to the text.
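The colouring rule can be made concrete with a small sketch. The thresholds and colour values below are invented for illustration only; the actual WikiTrust extension computes trust from author reputation and edit survival rather than from age alone.

    def trust_shade(age_in_days: float, max_age_days: float = 90.0) -> str:
        """Map a sentence's survival time to a background colour.

        Brand-new text gets a dark orange; text that has survived for
        max_age_days or longer fades to white. All constants here are
        illustrative assumptions, not WikiTrust's real parameters.
        """
        survival = min(max(age_in_days, 0.0) / max_age_days, 1.0)
        red = 255
        green = int(140 + 115 * survival)   # 140 (dark orange) -> 255 (white)
        blue = int(255 * survival)          # 0 -> 255
        return f"#{red:02x}{green:02x}{blue:02x}"

    # trust_shade(0)  == '#ff8c00': young, as yet unproven text
    # trust_shade(90) == '#ffffff': text that has survived for 90+ days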
Also, in a more radical vein, schemes are being contemplated that constitute a change to editorial practices themselves—not just increased vigilance over the quality of textual entries as they evolve. The scheme that is the subject proper of this article proposes to review new edits coming in for evidence of vandalism; only after this check do they become ‘official’. While this is the basic idea, several parameters are still open to various implementations. Is review to apply only to specific entries that are vulnerable to vandalism (like biographies of living persons), or to all entries? What are the identifying signs of ‘obvious vandalism’? Who is to be entitled to review new edits? Whose edits are to be checked: only those of anonymous users, only those of inexperienced users, or everyone’s? And which version of an entry is shown to unregistered users as the default: the most recent version (as is current practice), or indeed the latest reviewed version (which supposedly discourages potential vandals by barring immediate gratification for performing a vandalistic edit)?
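These open parameters can be restated compactly as a configuration object. The sketch below merely summarises the questions of the previous paragraph; all field and type names are invented for the purpose and correspond to no actual MediaWiki settings.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Scope(Enum):
        VULNERABLE_ENTRIES = auto()   # e.g. biographies of living persons
        ALL_ENTRIES = auto()

    class ReviewedEditors(Enum):
        ANONYMOUS_ONLY = auto()
        INEXPERIENCED_ONLY = auto()
        ALL_USERS = auto()

    @dataclass
    class ReviewScheme:
        scope: Scope                        # which entries fall under review
        reviewed_editors: ReviewedEditors   # whose edits must pass a check
        reviewer_criteria: str              # who is entitled to review new edits
        show_reviewed_by_default: bool      # which version unregistered readers see first

The ‘German’ and ‘English’ systems described in the following sections can then be read as two specific instantiations of this parameter space.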

System of review: procedure

Before recounting which specific configurations of the review system actually crystallised in practice, I first briefly describe the consultation procedures on the scheme as they were performed in various language versions of Wikipedia. I shall focus on the three largest Wikipedias: English, German and French. It all began in German circles. From 2006 onwards users discussed all possible technicalities of the reviewing system (such as the criteria for becoming a reviewer, criteria for being exempt from review, waiting times of pending edits, and the like). Subsequently, German programmers wrote implementing software4 which was tested and tried for 3 months (from May 2008 onwards). The configuration chosen may aptly be referred to as the ‘German system’ (see below). A poll was held subsequently to determine whether to continue or not. A lively debate ensued. Ultimately, some 55% voted in favour (the outcome of a rather confusing voting procedure).5 As a result, the system has remained ‘switched on’ ever since.6
In the English Wikipedia the debate was more protracted. Proponents and opponents were both very vocal, and voiced their concerns incessantly over several years. As the two camps were almost equal in strength, the debate dragged on. Several polls were held, but the deadlock could not be broken. Decisions were constantly postponed. Only in 2010 could a 2-month trial be agreed upon (with 60% in support).7 The configuration chosen, which I refer to as the ‘English system’ (see below), is considerably less stringent than the German one. After the trial (July–August 2010), a majority (about 60%) voted in favour of keeping the system turned on until updated software became available.8 At the end of May 2011 the system was finally turned off (with 66% in favour)9 in order to clear the air for further discussion about whether and how to apply the review system.
The French debate, in comparison, was the most orderly of all. Users discussed the various schemes over the years. Ultimately, in 2009, they decided to take a vote on both the English and the German system—there being no need for them, apparently, to invent a French system. After a heated argument, both options were rejected: 56% voted against the English system, 78% against the German system.10 Systems of review have disappeared from view ever since.

System of review: the rules

As recounted above, all possible configurations of such a reviewing system ultimately narrowed down to two specific options: the German and the English system. Let me first explain the German system, as it is the more elaborate and more extensively tried of the two (http://de.wikipedia.org/wiki/Wikipedia:GSV). The term employed in German for the reviewing process is Sichten (literally: to sift, to sight); versions that have been reviewed are called gesichtete Versionen (sighted versions); users performing the review are denoted as Sichter. All new edits are to be checked, whether or not they involve entries that are sensitive to vandalism. Users who desire to obtain Sichterrechte (rights of sighting) must be registered, have been active for at least 60 days, and have performed at least 300 edits (or 200 gesichtete, i.e., sighted, edits)—note that these are the same criteria as those for obtaining the right to vote within Wikipedia. Over time, users may automatically become exempt from the requirement that their own edits be reviewed (passive Sichterstatus, i.e., rights of ‘auto-review’): at least 30 days of activity and 150 performed edits (or 50 gesichtete edits) are the criteria. Unregistered users, by default, get to see the latest reviewed version (although the most recent version is just one mouse click away). For registered users, the current practice of showing the latest version is continued. According to the relevant German statistics, ungesichtete Versionen (unsighted versions) on average may have to wait 5–6 days before being cleared.11
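The numeric criteria just listed translate directly into two predicates, sketched below. This merely restates the reported thresholds; the actual FlaggedRevs software grants these rights through MediaWiki’s user-permission machinery, and both predicates additionally presuppose a registered account.

    def qualifies_as_sichter(days_active: int, edits: int, sighted_edits: int) -> bool:
        """Active sighting rights (Sichterrechte), per the reported German criteria."""
        return days_active >= 60 and (edits >= 300 or sighted_edits >= 200)

    def qualifies_for_auto_review(days_active: int, edits: int, sighted_edits: int) -> bool:
        """Passive Sichterstatus: one's own edits become exempt from review."""
        return days_active >= 30 and (edits >= 150 or sighted_edits >= 50)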
The English system, which was adopted much later (July 2010–May 2011), employed its own Wikipedian terminology as well. Here, versions with new edits were put on hold, waiting to be reviewed (‘patrolled’)—if a reviewer found them free of vandalism (s)he flagged them as a sign of approval (‘flagged revisions’, FRs). Newly proposed edits that had not yet been checked were termed ‘pending changes’ (cf. http://en.wikipedia.org/wiki/Wikipedia:FLR; and http://en.wikipedia.org/wiki/Wikipedia:PC). This English review system of flagged revisions did not apply across the board to all entries. Only specific entries that were deemed to be susceptible to vandalism (like biographies of living persons—say Janet Reno or Stephen Fry) were possible candidates. The rationale was that FRs are meant to be an intermediate measure between protection of an entry (either full protection: no one may perform edits, or semi-protection: only experienced users may edit) on the one hand, and no protection at all on the other. FRs allow editing to continue—under increased vigilance. In keeping with this idea, an entry only received such ‘flagged protection’ upon request. Over 1,000 entries actually enjoyed this protection during the trial. Reportedly, pending changes were routinely dealt with in a few hours.
Rights of participation in the system were not awarded automatically. Users had to apply formally to the administrators and show some Wikipedian experience. Rights of review were granted if performed edits numbered over 100 and the applicant’s past behaviour was impeccable (no vandalism, no harassment, and the like). Exemption from review (rights of so-called ‘auto-patrol’ or ‘auto-review’) was granted if one had contributed some 50 ‘sound’ articles to the encyclopedia. As in the German system, unregistered users got to see the latest patrolled version by default—all others were still presented with the latest version.
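Both systems share the same default-version rule, which can be captured in a single function. The attribute names are invented for this sketch; the real FlaggedRevs extension decides the matter per page through its wiki-level configuration.

    def version_to_display(entry, viewer_is_registered: bool):
        """Pick which revision of an entry a visitor sees by default.

        Unregistered readers get the latest reviewed ('sighted'/'patrolled')
        revision, denying vandals the instant gratification of seeing their
        edit go live; registered users keep the current practice of seeing
        the most recent revision.
        """
        if viewer_is_registered or entry.latest_reviewed is None:
            return entry.latest          # current practice: newest revision
        return entry.latest_reviewed     # newest revision that passed review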

System of review: pros and cons

All across the three largest Wikipedias this review system (of one variety or another) has been the subject of intense debate.12 Arguments in favour were championed just as passionately as arguments against. Which arguments were adduced? I scrutinized the Wikipedian archives for all talk pages, discussion pages, petitions, (straw) polls, requests for comments, and the like related to this issue, in the English, German and French language versions. The group of English participants was the largest, numbering several hundred who voiced their opinions. From these sources I was able to extract the following arguments, which turned out to be basically similar, whether one happened to speak English, German, or French.13
To begin with, the precise procedure that was followed in arguing and deciding about a reviewing system has evoked many comments over the years. This is only to be expected, because (democratic) procedure in Wikipedia is still a rather fuzzy area. Several parameters of the review system were under discussion simultaneously (such as its scope, and the granting of rights of review and of auto-review). How should such a debate, in cyberspace, about a number of competing alternatives be handled? Asking participants to vote is easy enough. But then a more difficult question imposes itself: how large must a majority be to carry a decision? Discussions on heated topics have a tendency to go on indefinitely. On which grounds (if any) is it permissible to declare a debate closed? In both the German and the English Wikipedia, trials were held that lasted a number of months (see above). In both cases, a debate ensued at the end of the trial period about whether or not to continue the trial. If that debate is strenuous and takes time (as the English one in particular did), does the trial have to be stopped in the meantime or not? All discussions on content were shot through with such procedural misgivings. And of course, procedural arguments were mainly voiced by opponents who felt threatened by the possible introduction of the review system.
Leaving such procedural arguments aside, I now turn to an analysis of the substantive arguments brought up in the discussion. The argument in favour was invariably that a review system would facilitate a more effective fight against vandalism. The reliability of Wikipedia was seen to be at stake.
This could significantly improve Wikipedia. In my experience, the vast majority of IP edits are not constructive, and I think that removing the possibility of ‘instant gratification’ for vandals would dramatically reduce the appeal of vandalism. (English user, 2009)
So, the slogan should be rewritten to Wikipedia: The free encyclopaedia that anyone who behaves can edit. So, who will be affected by flagged revisions? Those that we smack around already anyway. If you add libel to the project, we ride your ass. If you vandalize, we ride your ass. And if you behave, you quickly enough get the opportunity to flag revisions yourself. (English user, 2009)14
An additional argument for the English system in particular was that FRs create an additional option between no protection and (semi-)protection of entries, which allows editing to continue.
This was the one and only argument, repeated by all who championed the review system. The arguments from the opponents were more wide-ranging. For one thing, they expressed doubts about the effectiveness of the scheme. As some remarked, ‘obvious vandalism’ does not need a special system: precisely because such edits are obvious, normal Wikipedian users will routinely correct or ignore them. Moreover, a review system may considerably delay the publication of edits. Do we have enough reviewers to take care of pending edits in a reasonable time? Fears of clogging the pipeline for months were expressed repeatedly. In addition, the system was felt to be extremely complex; the fine details of its operation led to confusion and misunderstandings.
The major practical objection, however, was that the system would misfire and produce unwanted side effects. While vandals may possibly be diverted, at the same time bona fide, anonymous users (constituting the large majority of anonymous users) would simply be chased away because they do not see their edits appear immediately on the screen.
(..) it would be frustrating to go to an encyclopedia that “anyone can edit,” come to a page, make an edit, and then have that edit not show up until some Admin can be bothered to roll over that way and approve it. (English user, 2009)15
An important pool of useful editors would be depleted:
In my opinion, a fit analogy for implementing this widely is “sawing off the branch one is sitting on”… (English user, 2009)
Apart from such practical considerations, the reviewing system was also—and mainly—condemned for reasons of principle. A number of such objections came to be expressed. To begin with, it was interpreted as adding a layer of bureaucracy.
Just another layer of un-needed bureaucracy. (English user, 2010)
This idea is just added bureaucracy to a process of editing that works well already, and with all the suggestions for how the ‘reviewer’ permissions would work, along with all of the exceptions and additions, seems overly-complicated for new users, and just a power-trip for those granted additional permissions and responsibilities. (English user, 2009)16
Moreover, it was seen as representing yet another layer of bureaucratic machinery, on top of one already in existence. The growing surplus of bureaucracy in a volunteer undertaking such as Wikipedia was experienced as extremely aggravating and frustrating.
Wikipedia is already groaning under the weight of all the self-appointed critics on private ego trips, who restrict their activity to flagging articles and chiding the overworked and patient bona-fide editors. Why should we be providing an official niche for people who view the project as a way of fulfilling their power fantasies? (English user, 2009)
This has since developed, or I would say degenerated, into an increasingly complex bureaucratic jungle, of which this proposal is the latest step in the wrong direction. This proposal is so enormously complex that repeatedly in the discussions, supporters are pointing out that other editors have misunderstood some intricate details etc. Very worrisome, if editors that have the Wikipedia-knowledge to find this poll page in the haystack have so many problems understanding and appreciating what is really meant by the proposal. (English user, 2009)
Most prominent of all, however, was the objection that the proposed rules of review reflect a wrong attitude: they depart from the assumption that contributors of new edits are not to be trusted a priori. Their edits therefore have to be censored.
Wikipedia has always worked on a basis of trust, we have always said that being an admin is nothing special and most importantly it’s an encyclopaedia that anyone can edit. Flagged revisions says you can edit but your edit can’t be seen we don’t trust you. It places importance on having tools creating an importance that shouldn’t be there. The worse thing is flagged revisions makes the presumption that all edits are malicious and increases the power of POV pushers/Cabals to control article content. In all of this people appear to be loosing [sic] sight of the characteristics that made Wikipedia what it is. (English user, 2009)
Why should rollbackers, administrators, and bots get to say that they don’t trust editors who are established users and not vandals, or IP users who are making good edits? (English user, 2009)17
As a result, Wikipedia becomes a society of two classes of users, with experienced users lording it over inexperienced ones.
We don’t want to go down the road where an elite group of “reviewers” decides what an open encyclopedia should include. (English user, 2010)
Wikipedia does not need yet another class of editors privileged over ordinary editors. (English user, 2010)18
One commentator went one step further by asking the perennial question: who is to control the controllers?
Vandalism is not that big a problem that we need to censor all editors until the usual suspects who are supposedly “trusted” come and look at our edits. Will these “trusted” editors all be experts in the subjects they are censoring? How do we know? Who monitors the ability and/or competence of these “trusted” people? Who decides that they are trusted? (English user, 2009)
Finally, all these features were repeatedly denounced as a clear violation of the original egalitarian principles of Wikipedia, the encyclopedia-that-anyone-can-edit.
This proposal goes against the very idea of Wikipedia. We are an encyclopedia open to the public, and an encyclopedia that can be edited by the public, at will; this proposal will restrict that and is a dangerous idea. (English user, 2009)
[I] oppose all measures that contradict the founding principle that Wikipedia is an encyclopedia anyone can edit. (English user, 2010)19

Theories of bureaucracy

So, basically, what we are witnessing here is a clash between two diametrically opposed interpretations: ‘flagged revisions’ (or ‘gesichtete Versionen’) as a useful tool for curbing vandalism versus as a superfluous bureaucratic device that violates egalitarian principles of participation. It would seem useful to probe into the deeper origins of this clash. I shall do so by developing an analogy with organizations which, in ‘real life’, have been introducing rules and regulations for ages. Organization science has, in conjunction, been trying to develop a systematic perspective on these issues: theories of bureaucracy.
The main line of theorizing has been a revival of the Weberian analysis of bureaucracy. Some milestones are Gouldner (1955), who distinguished between a punishment-centred and a representative bureaucracy, and Burns and Stalker (1961) with their famous contrast between a mechanistic and an organic regime. It was up to Fox (1974) to integrate these various approaches around the concept of discretion—the extent to which organizational participants may exercise their own wisdom and judgment in performing their tasks. He proposed a pair of contrasting work role patterns, on either side of a continuum. At one end of the scale, the ‘low-discretion syndrome’ is characterized by an elaborate division of roles (with minimal discretion for the lower echelons), formalization, and centralization of decision-making powers. At the other end of the scale, the ‘high-discretion syndrome’ constitutes the opposite: a minimal division of roles, low formalization, and decentralized decision making.
Subsequently, Fox developed a dynamics of trust for organizational moves towards one or the other end of this continuum (Fox 1974: chapter 2). If an organization moves towards the lower end, it expresses declining trust in its contributors’ capacity to carry out their tasks. Lowering discretion is a sign of distrust; as a consequence, employees may be expected to reciprocate with a decline in trust towards management. A classic example (in Fox’s interpretation) is the gypsum plant that figured as the centrepiece of analysis in Gouldner (1955): operating rules came to be strictly enforced and freedom of movement curtailed.20 On the other hand, if the organizational syndrome moves toward granting more discretion, this expresses rising trust in the employees. Recipients are invited to reciprocate likewise.
Bureaucratic rules, then, may be seen to emanate from a control logic. While this has remained the main line of analysis (up to the present), some decades later Adler and Borys (1996) alerted us to the possibility of another kind of logic: an enabling logic. Bureaucratic rules may also be designed to enable employees to better master their tasks, by providing them with the professional tools to do so. Equipment may be designed with usability and upgrading in mind, enhancing users’ capabilities and leveraging their skills. Transparency of the broader system and flexibility for users are key aspects of such design. The showpiece of enabling logic presented by the co-authors is the Xerox company, which developed a new line of photocopiers in an on-going dialogue among users, designers, and business decision makers. The machines were specifically designed on the premise that they should ‘mobilize rather than replace users’ intelligence’ (Adler and Borys 1996: p. 68). Similarly, formal procedures may come to represent organizational memory and allow organizational contingencies to be better dealt with. As a result, workers become empowered to better execute their tasks.
This alternative logic, I presume, can easily be folded into the above Foxian dynamics of trust. The introduction of enabling rules expresses faith in employees’ capabilities. Whether discretion increases (equipment design) or possibly decreases (formal procedure), in all circumstances their discretion is ‘enriched’ by providing the tools for a more professional execution of tasks. As a result, employees’ trust can only increase. On the other hand, failure on the part of management to take such enabling potentialities into account expresses a lack of faith in employees’ potentialities; correspondingly, employee trust towards management can only stagnate or decline.
As can be seen, the two types of ‘bureaucratic’ trust dynamics can easily be combined. The introduction of rules that are seen to emanate from a control logic (thus reducing discretion), and/or the failure to attend to the introduction of rules that are interpreted as enabling (thus neglecting any potential ‘enrichment’ of discretion) tend to diminish mutual trust between employees and management. On the other hand, disregarding/eliminating rules of control (thus widening discretion) and/or introducing rules of empowerment (thus ‘enriching’ discretion) have the potential to increase this trust on the sides of both employees and management.21
Let me stress that in the analysis above, (similar kinds of) rules and regulations are the subject of unequivocal interpretation; they are perceived as either controlling or enabling, depending on their make-up. A strict division of labour is perceived as control; a transparent procedure for self-help repair of equipment is perceived as empowering. In rare instances, reports can be found of the same kind of rules being perceived as either the one or the other, depending on circumstances. Take Sitkin and George (2005), who analysed the introduction of formal criteria for taking decisions about firing or about medical costs: such criteria were shown to weaken or enhance managerial legitimacy, depending on context. In particular, they argued that under external threats, formal criteria could acquire a connotation of fairness.22 But that is as far as it goes. To my knowledge, no organizational analysis claims that a particular kind of rule may be perceived as both in the same context: i.e., a sizeable number of participants perceive them as controlling, while an equally sizeable number take the opposite stance and define them as empowering. No wonder, because that would run totally counter to the overall tenor of the analysis in terms of (enriched) discretion as a gesture of trust towards participants.

Communities of user-generated content and bureaucracy

Returning to communities of user-generated content, I propose that a similar kind of bureaucratic analysis can be applied. How can this claim be substantiated? I would argue that in such communities, just as in organizations, a hierarchy obtains. These hierarchical relations have the following origin. Site owners, usually joined by the occupants of the several roles which they distinguish, have regulatory powers that are embedded in the community’s virtual design. As a result, users who volunteer to participate may obviously choose their tasks, but for their participation to come into effect they have to request access to relevant files and apply for visitor and/or write access, all the while respecting certain tools and procedures for participation. If volunteers play by these rules their contributions are welcome; otherwise they are ignored or even banned from the community. Note that the hierarchy that obtains does not revolve around participants receiving pay in exchange for contributing (as in organizations) but around acquiring esteem. Participants do not risk being fired, but they can be ignored or evicted.
In view of this hierarchy nexus, a close correspondence obtains between organizations on the one hand and open-content communities on the other.23 Consequently, the above bureaucratic analysis may also be assumed to apply to such communities.24 In particular, the governance rules employed can be perceived to spring from two kinds of logic: either a control logic or an enabling logic. This applies in particular to the subarea under consideration here: the management of content and the disciplining of contributors. Editorial policies, conflict rules, conflict procedures, rule enforcement: all of these have to navigate the waters of control versus empowerment. Since community owners are massively dependent on their community members and are hardly in a position to impose any rules against their will (lest many of them desert), as a rule it is, and can only be, collective discussion inside the community, crystallizing into a collective outcome, that drives the emergence of governance arrangements and their subsequent implementation in code.
As observed above, content and contributor management systems within such communities usually seem to be accepted without question. In the terms of our bureaucratic analysis, this would seem to be a case of systems being accepted because they are interpreted as enabling. In which sense, then, can members be said to experience feelings of empowerment? What kinds of activities are being furthered and strengthened? My answer is that participants endorse a certain amount of content management and ‘policing’ since these enable them to experience and contribute to a vibrant community in which the quality of content is high and unwanted disturbances are kept to a minimum. Compare the communities mentioned in the introduction: Skinheads may rest assured that their identity is being upheld, Slashdotters are spared unconstructive posts, Reddit readers can be sure not to miss the most interesting comments, Citizendium users have a guaranteed, expert-vetted experience, and participants in an OSS project can be sure that the quality of the source code tree is scrupulously guarded by their project leader.
But now consider Wikipedia. Until recently, editorial policies (of equal access, no moderators, observance of Wikipedian etiquette, policing by administrators) had enjoyed broad acceptance. In line with the foregoing analysis, Wikipedians endorsed this gentle policy as enabling Wikipedia to contain disturbances, which inevitably turned up from time to time. The proposed extension of this policy with a system of review, however, showed a marked deviation from this path of broad acceptance. As shown above, it led to both vehement consent and vehement dissent, with only a few adopting an intermediate position. Can this anomalous finding be reconciled with the analysis of bureaucratic rules (as developed above by analogy with the organization phenomenon)? Which kind of logic(s) may be said to apply, and if so, how and why? Can our bureaucratic analysis be saved, possibly in adapted form, or does it have to be abandoned?
The proposition I develop is that the analysis is still appropriate—though in modified form. This modification is necessary as a community has fuzzier boundaries than the more solid boundaries of an organization. As a result, Wikipedians could entertain different background visions of what kind of community Wikipedia should represent. These different visions produced a difference in appreciation. To put it in a nutshell: many of those who reject the system of review do so from a vision of Wikipedia as an unbounded community that shares knowledge without mutual control and suspicion, while many of those who embrace the review system do so because they have a vision of Wikipedia as an organization producing reliable knowledge that keeps vandalism outside its borders. I shall elaborate on this statement by looking back at the results above on the reception of the review system and citing some more participants from the ongoing discussions.
Proponents look on the review system as a further strengthening of anti-vandalism policies already in place. It is cast as an additional tool filling the rather large gap between no protection and full protection. So, to them, endorsement is no big deal: the system is seen as enabling the routine continuation of a disturbance-free Wikipedian experience. It is also stressed that the new system is absolutely necessary to keep rising vandalism at bay—only when armed with it will users keep faith in Wikipedia as a sound undertaking.
Underlying this perception of an enabling logic, two interesting preoccupations can be observed. For one thing, many supporters of this position show themselves to be preoccupied with the reputation of Wikipedia as a reliable encyclopedia. They refer to “Wikipedia as a respected encyclopedia”, “our reputation as an encyclopedia”, a “world-class high quality reference work”, “Förderung von Informations-Qualität” (promotion of information quality), and the “notoriété de Wikipedia” (renown of Wikipedia). In conjunction, some users express concern that a lack of reliability may harm people:
We currently have the potential to do great harm to people, corporations and organizations through our popularity as the world’s 4th most visited website. By allowing anyone to edit, we’ve been complicit in allowing users to libel and defame others as a side effect of our open policy. This has caused real damage, to real people, for no other reason than because it’s on their Wikipedia article. (English user, 2009)
As a corollary, fears are expressed that Wikipedia might face a lawsuit one day over defamatory content.
As a second preoccupation, many feel strongly that such a project requires the drawing of boundaries. Vandals should simply not be allowed to participate; they should not be let in. To this end, proponents take pains to paint a dark picture of supposed vandals. They are demonized by depicting them as “kleine Vandalen, Spinnern und Witzlern” (petty vandals, cranks and jokers), and by comparing them to scrawlers of “random graffiti”, spammers, and virus writers. Censorship at the boundaries is therefore just as necessary as a spam filter or an anti-virus program—who could be opposed to that?
When both preoccupations are considered together, proponents of the review system can be interpreted as harbouring a vision of Wikipedia as a proper organization involved in the business of producing a reliable product—reliable knowledge in this case. This involves all the associated paraphernalia: strict conditions of access, role divisions, processes of quality control, possibly even contributor control. Correspondingly, the current state of affairs without a ‘spam filter’ is depicted in dark tones: Wikipedia has all the features of a playground for juvenile vandals.
Opponents of the review system, on the other hand, interpret matters in quite a different light. For them it represents a move in the wrong direction. In their eyes it is not the continuation of editorial policies in place—it is turning them upside down. Let me substantiate this claim by briefly recalling the more fundamental arguments that were raised by many against the system as bureaucracy-in-the-making (cf. above). They perceive the bureaucratic rules involved variously as adding a bureaucratic layer of overseers, expressing distrust of new contributors, and installing a new (class) division between experienced and inexperienced users, between newbies and old-timers. Egalitarianism gives way to a class system. Such terms indicate a perception of the new system as a control logic: the newcomers’ discretion is curtailed, and their edits are subject to close control. Wikipedian editorial policy is perceived as moving towards a syndrome of lower discretion, with a new division of roles—consisting of the newly created categories of inexperienced users, users with rights of auto-review, and users with rights of review—and a formal system of ‘patrolling’ edits. As one astute user remarked, the system amounts to a reversal of the burden of proof: in the old system, new edits were sound until proven otherwise; in the new system, new edits are suspect until proven otherwise. As a result, these opponents expect that newcomers, met with distrust, will be repelled and apply their energies elsewhere. Many opponents themselves, as experienced users entitled to participate in these discussions, were so disgusted that they announced their own departure.
Against this move towards Weberian bureaucracy (in the pejorative sense of the term), towards ‘factory discipline’, these opponents try to uphold the egalitarian vision of Wikipedia as a solidary community—which is the way it all began. Each is to contribute according to his/her potential; no one is to be checked or patrolled without a reason. Boundaries are not to be drawn—everybody is inside. One of them phrased this vision of Wikipedia as primarily a social movement as follows:
It is better to have vandalism than to have “trusted” Wikipedians as gatekeepers. I think that Wikipedia is not just an online encyclopedia. It is also, perhaps only to a slight extent, a working place for an experiment in human social engineering. We actually shouldn’t be trying to eliminate the inherent difficulties in an encyclopedia that “anyone can edit.” I think Wikipedia may be “editing” us just as we are editing Wikipedia, and that is the way it should be. That is a good thing. I think we should allow this experiment to run its course. Wikipedia was a bold idea in its inception and we shouldn’t become timid simply because we’ve had some “success.” I think the “idea” is what is worth preserving, not necessarily the body of knowledge. An online encyclopedia that anyone can edit is the thing of value, not necessarily the articles. Sure, we want to preserve the integrity of the articles. But that is accomplished through active participation of concerned editors—not by suspending the experiment and opting for stale “knowledge.” (English user, 2009)
A final question that imposes itself is: what kind of users voted in support of the new system, and what kind voted against it? In particular, does experience with editing Wikipedia matter in this regard? My first inclination was that it must surely be the ‘veteran’ Wikipedians who support the review system. They have been around for a longer time, and had gradually become irritated by the disturbances emanating from vandalism. To them, the old days of editing without distinction have become a relic of the past. Hence they plead for tightening control, because it enables them to work undisturbed. Their energies may turn to editing proper. In a similar vein, it could be argued that since they are long-term participants, they have gradually come to adopt a typically ‘managerial’ point of view, which favours efficiency of the labour process. And, finally, why would newcomers be enthusiastic about the introduction of ‘censorship’? Why would editors without experience be in favour of policing entries?
Surprisingly, Larry Sanger holds the opposite view (Sanger 2005). Based on his experience as one of the founders of Nupedia first and Wikipedia later, he maintains that it is precisely the veterans who oppose any review system (or system with moderators in general) and try by all means to uphold the original conception of Wikipedia. Any proposal to introduce rules about content will be branded censorship. In his view, since Wikipedia started with an unmoderated, egalitarian conception, those who were there from the start will stick to it until the end. Precisely because the Internet is made up of so many unmoderated [sic]25 communities, he argues, the unmoderated format has become an internalized norm in general. So if some kind of bureaucracy is deemed to be needed in order to ensure operation within reasonable bounds, a community should adopt those bounds from the beginning. Otherwise the door is opened to perpetual strife.26
Above I have focused exclusively on the arguments expounded by participants. An intriguing question that remains to be answered, of course, is: What brought the three language communities to ultimately choose or reject such a review system? Why is it that, each in their own ways, the Germans voted for acceptance, the French for rejection, while the English have been wavering all the time between acceptance and rejection? While the age of the encyclopedia, as a proxy for its stage of development, is easily ruled out as an explanatory factor—they all started early in 2001—some other answers suggest themselves immediately. The English Wikipedia is reportedly more plagued by vandalism than the two others—recent data suggest that vandalism percentages for English, French and German edits are respectively 11.5, 3.5, and 3.5% (West et al. 2010). Moreover, in a more speculative vein, those whose mother tongue is German may possibly be more deferential to hierarchy than those who speak either French or English, and therefore may prefer the order and respectability introduced by a system of reviewing. However, a full analysis to answer these questions properly would require additional, more sociologically oriented follow-up research in which questionnaires are sent out to samples of the participants involved.

Open-content communities: defining the situation

In this final section I want to discuss the implications of the foregoing analysis for Wikipedia, for Wikimedia projects other than Wikipedia, and for open-content communities in general. To begin with, the analysis can be extended in a straightforward fashion to several other Wikipedia-like communities under the larger umbrella of the Wikimedia Foundation, all of which operate on the same open-content principles. Once the software extension for reviewing (‘FlaggedRevs’) had been written, the foundation also made it available to all language versions of all sister projects of Wikipedia for implementation (on a voluntary basis). As a result, the system has actually been spreading on a wider scale than Wikipedia alone: in particular, it has diffused to Wikinews, Wikibooks and Wiktionary (http://meta.wikimedia.org/wiki/Flagged_Revisions). Did this provoke similar discussion and dissent?
Some preliminary answers follow. Wikinews is an online user-generated journal that started in 2005. Articles are developed in the ‘newsroom’, by means of a wiki; a selection of them subsequently appears on the ‘main page’. Similar to Wikipedia, disruptive editors can be punished by ‘arbitrators’. Reviewing was formally introduced in August 2008 (in the English version): articles have to be formally checked against several news criteria by appointed ‘reviewers’ before they can be published on the main page. The system was introduced to guarantee the accuracy of Wikinews’ main page. As it turned out, it did provoke controversy, along the same lines of discussion as in Wikipedia, although on a smaller, less extensive scale. So, just like Wikipedia, Wikinews is also an exception to the ‘rule’ of quiet acceptance of moderation (all information above obtained from http://en.wikinews.org).27 Remarkably enough, in some other Wikimedia projects reviewing seems to have been accepted quietly and has been functioning smoothly ever since 2008: English Wikibooks (http://en.wikibooks.org) and German Wiktionary (http://de.wiktionary.org).28
On the other hand, the foregoing analysis may be useful for diagnosing future developments within Wikipedia itself. The next step on the road of bureaucratisation may soon be taken. Checking edits for vandalism is one thing; checking edits (and entries as a whole) for quality proper is another. Precisely such a more severe check is under consideration, as part of the drive towards raising the quality of Wikipedian entries further and thus the reputation of Wikipedia as a reliable encyclopedia. ‘Super-reviewers’ (or Prüfer in German) will be appointed. Such a review will undoubtedly introduce even more lines of division. Moreover, it will not only affect inexperienced users but all users across the board. It may well be that this super-review system will produce an even larger division of opinions, along the lines sketched out above. An actual split of the (English) Wikipedian population and an exodus of the naysayers cannot be excluded. Further, the system raises the acute question of who is to be recruited as super-reviewers: very experienced users from within Wikipedia, or vetted experts from outside Wikipedia (cf. also de Laat 2012)? The latter option in particular will not fail to encounter tough resistance from those who emphasize the notion of ‘community’ and cling to the original egalitarian conception of an encyclopedia-that-anyone-can-edit.
As a final remark, turning to open-content communities more generally, the ‘organizational’ analysis of bureaucratic rules would seem to be useful after all. A control logic and an enabling logic may each have their place in the analysis, in the following manner. Insofar as rules (like those of design, or procedure) foster the exercise of capabilities on the part of employees, their unequivocal interpretation as springing from an enabling logic stands to reason (just as in organizations). For the classic Weberian rules that revolve around discretion, however, both logics would seem to be applicable (unlike the case of organizations). Users confronted with Weberian bureaucratic rules for their communities may perceive them in accordance with either of these logics—depending on their broader vision of what the community is to stand for. Let me elaborate this proposition.
If users cling to an egalitarian vision, such rules may easily be perceived as undermining that vision and are therefore suspect: they are branded as a control logic. Resistance is especially likely if boundaries and roles are introduced where none existed before. A ‘factory model’ is not appreciated. If, however, users basically conceive of the community as a productive organization, the situation may be different. Weberian rules that aim to create a more solid organization from an otherwise amorphous community in cyberspace may receive a warm reception. As long as boundaries are installed in the first place, with insiders clearly demarcated from outsiders, such rules will be perceived as enabling the performance of a proper community job. The community members involved applaud the rules as empowering. Note, of course, that this latter interpretation does not rule out that additional Weberian rule making inside such a community-turned-organization, introduced later on and directly affecting the discretion of some or all members, may still be rejected by them as constituting a control logic.
Remarkably, the latter kind of users reveal themselves to be supporters of bureaucracy, and thereby seem to pull back from blindly trusting everybody (at least a priori), falling back instead on an adage formulated at the beginning of the cyber era: ‘Trust needs boundaries’ (Handy 1995). Charles Handy, an organization scientist, famously argued that for purposes of virtual cooperation one cannot extend trust to an infinite multitude of largely unknown people; lines of inclusion/exclusion have to be drawn somewhere, somehow.
Note that this embrace of Weberian bureaucracy by users is also at odds with the received wisdom about how open-content communities supposedly operate—or should operate. Yochai Benkler coined the term ‘peer production of knowledge’ (Benkler 2006). In his conception of this ‘mode of production’, quality control by the collective is high on the agenda (‘self-moderation’)—but appointed moderators are a bridge too far. Similarly, Axel Bruns directed our attention to processes of ‘produsage’ (which is taken to mean the use and production of content intertwined) in such communities as Wikipedia and beyond (Bruns 2008).29 The author emphasizes that a sound community of hybrid ‘produsers’ needs processes of ‘communal evaluation and filtering’; a ‘stronger recognition and quantification of individual reputation’ may help in the process. Nevertheless, specifically entitled moderators—whether operating after or (even) before ‘publication’—have no proper place in his ‘produsage’ model and remain conspicuously unmentioned. The principle of ‘equipotentiality’ is to reign, not hierarchy.
In a sense, then, governance rules concerning content and its contributors present us, and, not unimportantly, the members of the communities involved, with a kind of conceptual muddle (Moor 1985): how participants define the situation depends on their background convictions. From participants in the usual organization, whether real or virtual, one may broadly expect a uniform frame of mind on the matter. While generally averse to such Weberian rule-making, which can only serve to erode their discretion, they will make an exception for governance rules insofar as these serve to define the boundaries of the organization and to order the work-flow in the first place. As organization members they reason, basically, that any organization needs some minimum amount of Weberian order. The situation is more complicated for communities whose users generate their own content. More than one vision of the community can be entertained. Accordingly, the same type of governance rules, Weberian ones in particular, may elicit praise from some members (as empowerment) and disapprobation from others (as control). Rule-making becomes ‘essentially contested’ terrain. It would seem worthwhile to explore this conjecture more fully across the wider range of open-content communities, beyond the particular case of Wikipedia (and Wikimedia projects generally). Managing content and disciplining users, as tools for guarding the borders, have turned out to be sensitive areas of virtual community life after all. Is the same ‘essential contestation’ observed in Wikipedia crystallizing elsewhere? Can more exceptions to the ‘rule’ of quiet acceptance of moderation be detected?30

Open Access

This article is distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Footnotes
1
All websites mentioned in the text and the references were last accessed on 24 January 2012.
 
2
This statement could be verified for all open-content communities mentioned above—except for Skinhead Forum. In 2010 its founder, Richard Barrett, was killed by a neighbour. The site (http://www.skinheadz.com) is now therefore defunct and can no longer be inspected.
 
3
These statistics refer to the English Wikipedia over the years 2004–2006; see http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Vandalism_studies/Study1. A general review of studies of vandalism on Wikipedia is given in http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Vandalism_studies.
 
4
The software extension, called FlaggedRevs, was developed by Aaron Schulz and Jörg Baach, and released under an open source license, the GPL (see http://www.mediawiki.org/wiki/Extension:FlaggedRevs).
 
5
Votes were as follows: 708 in favour of the system with the ‘sighted version’ as default, 197 in favour of the system with the most recent version as default, 362 against the system, 33 neutral, with 129 rejecting the vote on procedural grounds. Precise references for these poll outcomes (and the ones that follow) can be obtained from the author upon request.
 
6
Several other language versions subsequently followed suit. Confining myself to those with over 100,000 entries: the Russian, Chinese, Hungarian, Polish, and Esperanto versions in 2008, the Indonesian version in 2010, and the Turkish version in 2011.
 
7
Votes were divided as follows: 429 in support, 282 in opposition, and 9 neutral.
 
8
This decision took two rounds of voting. The first vote on continuation of the system had the following outcome: 407 in favour, 217 against, and 44 other responses. Since a 2/3 majority (‘consensus’) was not achieved, another vote (with more restricted wording) was organized: 289 for temporary continuation, 199 against, and 40 opposing the poll on procedural grounds.
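Whether or not the 44 ‘other responses’ are counted in the tally (a point the reported figures leave open, so both readings are computed here), the two-thirds threshold was indeed missed in the first round:

$$\frac{407}{407+217} \approx 65.2\,\% \qquad \frac{407}{407+217+44} \approx 60.9\,\%$$

both shares falling short of the required 66.7 %.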
 
9
As the final part of a discussion about turning off the feature, a vote was taken: 127 in favour, 65 opposed.
 
10
Poll results were as follows: the English system collected 31 votes against, 15 in favour, and 9 neutral; the German system 46 votes against, 12 in favour, and 1 neutral.
 
11
I have chosen to stick to the terms employed in the German Wikipedia and not translate them. This choice may appear cumbersome to the reader, but it has the advantage that terms remain unambiguous and can easily be differentiated from the slightly different system proposed in the English Wikipedia (see below). Note that the Wikipedians involved themselves have also been struggling with this issue of translation.
 
12
Note, as observed above, that while the German discussion centred on the ‘German system’ and the English discussion on the ‘English system’, French Wikipedians confined their discussions to these two existing forms and never developed their ‘own’ variety of the review system.
 
13
For reasons of space, no references to user quotations are provided in this article. All due care has been taken to quote in a precise manner, however. The full list of references (from English, German and French Wikipedia) is obtainable from the author on request.
 
14
Similarly in German: “Urgently needed for raising quality and fighting vandalism. Our goal is the creation of an encyclopedia.” (German user, 2008); and in French: “Having seen the system at work on de: It allows the system to remain open to everybody, while seriously curbing the appeal of vandalism.” (French user, 2009)
 
15
Similarly in German: “That as an IP one is left confused, because one cannot see one’s own changes, strikes me as fatal. Who would not be put off, when first being told that one may contribute freely here, only to find that one’s own changes are not displayed? Authors will be massively demotivated and will drop out.” (German user, 2008); and in French: “This feature seems to me to be a very bad idea. The whole point of Wikipédia is precisely that IPs can edit; that somebody can very easily get involved. As it is, managing to become integrated is already harder today than it used to be (numerous rules, etc.); if on top of that we add that changes must be validated…” (French user, 2008)
 
16
Similarly in German: “The whole thing is just yet another piece of added bureaucracy.” (German user, 2008); and in French: “It creates a lot of extra work to validate the revisions, and it takes us away from the concept of Wikipédia. That won’t do.” (French user, 2009)
 
17
Similarly in German: “(…) Out of a general attitude of distrust, edits (of unregistered users) are turned into second-class edits, declared not worthy of publication until some Sichter happens to come along, takes pity, and releases them.” (German user, 2008); and in French: “It is purely and simply a form of censorship.” (French user, 2009)
 
18
Similarly in German: “In order to prevent some vandalism, a two-class society is now knowingly accepted into the bargain.” (German user, 2008); and in French: “The flagged revisions mark de facto the separation between editors and readers.” (French user, 2009)
 
19
Similarly in German: “With this Sichten business, Wikipedia disowns the wiki principle. No longer ‘everybody may write along’, but now ‘everybody may submit proposals’. That is no longer the Wikipedia as it originally defined itself.” (German user, 2008); and in French: “From an ideological point of view, this kind of improvement seems to me to run counter to the original idea that won me over (participation by each and all, without distinction).” (French user, 2009)
 
20
Its workers staged a wildcat strike precisely because of these low-trust moves (as analysed by the same Gouldner in the companion study Wildcat Strike).
 
21
Note that a similar kind of conclusion stems from the analysis of inter-organizational relationships (Klein Woolthuis et al. 2005; Mellewigt et al. 2007). Parties usually draft contracts to regulate their cooperation. It is found that the clauses involved may spring from two logics. On the one hand, predictably, they may act as a control mechanism (to substitute for a mutual trust that is absent); on the other hand, and more surprisingly, if trust obtains, they may help to further cooperation by specifying mutual goals and drafting procedures for dealing with economic or technical contingencies (the contract ‘complements’ trust). As a result, contracting does not necessarily chase away trust; it may also underscore and enhance mutual trust.
 
22
Note that these authors studied formal rules unrelated to discretion. Such rules are, I suppose, more amenable to multiple interpretations than discretion-related rules.
 
23
Observe that Dutton (2008), on the basis of a series of empirical case studies, essentially comes to similar conclusions.
 
24
A similar bureaucratic analysis has fruitfully been performed before, concerning communities of OSS and Wikipedia in particular (de Laat 2007, 2010).
 
25
As I made clear in my introduction above, I do not share his diagnosis that Internet communities are predominantly unmoderated. In my perception, most of them routinely apply some kind of moderation proper—at least at present.
 
26
Unfortunately this ‘veteran issue’ cannot be settled easily. User pages do not reveal enough data, nor present them in a systematic fashion, to judge how long and how actively a user has been involved.
 
27
At the time of writing, the system is again under discussion among contributors to English Wikinews, because it is felt that there are too few actual reviewers and that therefore too few of the articles under development reach the main page; those that fail are ultimately deleted.
 
28
Note that the introduction of reviewing in some other, much smaller, language versions of Wikinews/Wiktionary has not been taken into consideration above.
 
29
Essentially, Bruns is just rephrasing what Benkler characterized as the ‘peer production of knowledge’.
 
30
OSS definitely does not constitute such an exception. The institution of project leader(s) with full powers of moderation has always been an undisputed anchor point of its governance. Each project is a personal project, not one that belongs to some community. Hackers therefore have always been operating in a kind of organizational mode with clear boundaries. This does not preclude hackers holding wildly divergent visions of the OSS phenomenon, as mainly geared towards either communal freedoms or efficiency (‘free software’ vs. ‘open source software’). But disputes about changing governance remain within this meritocratic organizational model. Nobody advocates a Wikipedian model for OSS.
 
References
Adler, P. S., & Borys, B. (1996). Two types of bureaucracy: Enabling and coercive. Administrative Science Quarterly, 41(1), 61–89.
Anahita, S. (2006). Blogging the borders: Virtual skinheads, masculinity, and heteronormativity. Journal of Political and Military Sociology, 34(1), 143–164.
Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. New Haven/London: Yale University Press.
Bruns, A. (2008). Blogs, Wikipedia, Second Life, and beyond: From production to produsage. New York: Peter Lang.
Burns, T., & Stalker, G. M. (1961). The management of innovation. London: Tavistock Publications.
de Laat, P. B. (2007). Governance of open source software: State of the art. Journal of Management and Governance, 11(2), 165–177.
de Laat, P. B. (2010). How can contributors to open-source communities be trusted? On the assumption, inference, and substitution of trust. Ethics and Information Technology, 12(4), 327–341.
de Laat, P. B. (2012). Open source production of encyclopedias: Editorial policies at the intersection of organizational and epistemological trust. Social Epistemology, 26(1), 71–103.
Dutton, W. H. (2008). The wisdom of collaborative network organizations: Capturing the value of networked individuals. Prometheus, 26(3), 211–230.
Fox, A. (1974). Beyond contract: Work, power and trust relations. London: Faber and Faber.
Gouldner, A. W. (1955). Patterns of industrial bureaucracy. New York: The Free Press.
Handy, Ch. (1995). Trust and the virtual organization. Harvard Business Review, 73(3), 40–50.
Klein Woolthuis, R., Hillebrand, B., & Nooteboom, B. (2005). Trust, contract and relationship development. Organization Studies, 26, 813–840.
Mellewigt, Th., Madhok, A., & Weibel, A. (2007). Trust and formal contracts in interorganizational relationships: Substitutes and complements. Managerial and Decision Economics, 28, 833–847.
Moor, J. (1985). What is computer ethics? Metaphilosophy, 16(4), 266–275.
Sitkin, S. B., & George, E. (2005). Managerial trust-building through the use of legitimating formal and informal control mechanisms. International Sociology, 20, 307–338.
Stvilia, B., Twidale, M. B., Smith, L. C., & Gasser, L. (2008). Information quality work organization in Wikipedia. Journal of the American Society for Information Science and Technology, 59(6), 983–1001.
Metadata
Title: Coercion or empowerment? Moderation of content in Wikipedia as ‘essentially contested’ bureaucratic rules
Author: Paul B. de Laat
Publication date: 01.06.2012
Publisher: Springer Netherlands
Published in: Ethics and Information Technology, Issue 2/2012
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI: https://doi.org/10.1007/s10676-012-9289-7