Analyzing costs and accuracy of validation mechanisms for crowdsourcing platforms

https://doi.org/10.1016/j.mcm.2012.01.006

Abstract

Crowdsourcing is becoming increasingly important for commercial purposes. With the growth of crowdsourcing platforms like Amazon Mechanical Turk or Microworkers, a huge workforce and a large knowledge base can be easily accessed and utilized. However, because the workers are anonymous, some of them are tempted to cheat employers in order to maximize their income. In this paper, we analyze two widely used crowd-based approaches for validating submitted work.¹ Both approaches are evaluated with regard to their detection quality, their costs, and their applicability to different types of typical crowdsourcing tasks.
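A common instance of crowd-based validation is to assign the same task to several independent workers and accept a result only when enough of them agree. The sketch below illustrates this idea under that assumption; the function name, threshold parameter, and sample data are illustrative and not taken from the paper.

```python
from collections import Counter

def majority_validate(answers, min_agreement=0.5):
    """Accept a task result if more than `min_agreement` of the
    redundant worker answers agree on the same value.

    `answers` is the list of answers submitted by independent
    workers for the same task. Returns (accepted_answer, ratio),
    or (None, ratio) if no answer reaches the threshold.
    """
    if not answers:
        return None, 0.0
    # Most frequent answer among the redundant submissions.
    answer, count = Counter(answers).most_common(1)[0]
    ratio = count / len(answers)
    return (answer, ratio) if ratio > min_agreement else (None, ratio)

# Example: five workers answer the same task; three agree.
print(majority_validate(["cat", "cat", "dog", "cat", "bird"]))
# -> ('cat', 0.6)
```

Redundant assignment of this kind trades cost for detection quality: each extra worker per task raises the price of a validated result but lowers the chance that a cheater's answer is accepted.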

Keywords

Crowdsourcing
Quality assurance
Cost estimation


¹ This paper is an extended version of Hirth et al. (2011) [3].