Treating Crowdsourcing as Examination: How to Score Tasks and Online Workers?

Authors

Guangyang Han, Sufang Li, Runmin Wang and Chunming Wu, Southwest University, China

Abstract

Crowdsourcing is an online outsourcing mode that can satisfy machine learning algorithms' urgent need for massive labeled data. How to model the interaction between workers and tasks is a hot research topic. We model workers as four types based on their ability and divide tasks into hard, medium, and easy according to their difficulty. We believe that even experts struggle with hard tasks, while sloppy workers can still get easy tasks right. Good examination tasks should therefore have a moderate degree of difficulty and discriminability so that workers can be scored more objectively. Thus, we score workers' ability mainly on tasks of medium difficulty. A probabilistic graphical model is adopted to simulate the task execution process, and an iterative method is used to compute and update the ground truth, the workers' abilities, and the tasks' difficulties. We verify the effectiveness of our algorithm in both simulated and real crowdsourcing scenarios.
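To illustrate the iterative estimation loop the abstract describes, here is a minimal sketch in Python. It is not the paper's actual model: it assumes binary labels, a simple accuracy-based worker score in place of the four-type worker model, majority-vote initialization, and hypothetical difficulty thresholds (0.25 and 0.75) for the "medium difficulty" band; the function name iterate_scores is likewise illustrative.

import numpy as np

def iterate_scores(labels, n_iters=20):
    """Jointly estimate ground truth, worker ability, and task difficulty.

    labels: (n_workers, n_tasks) int array; each entry is the 0/1 label a
    worker gave a task, or -1 if the worker did not answer it.
    """
    n_workers, n_tasks = labels.shape
    answered = labels >= 0
    # Initialize ground truth by majority vote over the answering workers.
    truth = np.zeros(n_tasks, dtype=int)
    for t in range(n_tasks):
        if answered[:, t].any():
            truth[t] = np.bincount(labels[answered[:, t], t], minlength=2).argmax()
    ability = np.full(n_workers, 0.5)
    difficulty = np.full(n_tasks, 0.5)
    for _ in range(n_iters):
        correct = (labels == truth) & answered
        # Difficulty: fraction of answering workers who got the task wrong.
        difficulty = 1.0 - correct.sum(axis=0) / np.maximum(answered.sum(axis=0), 1)
        # Score workers mainly on medium-difficulty tasks (thresholds are assumptions).
        medium = (difficulty > 0.25) & (difficulty < 0.75)
        for w in range(n_workers):
            mask = answered[w] & medium
            if mask.any():
                ability[w] = correct[w, mask].mean()
        # Re-estimate ground truth with ability-weighted votes.
        for t in range(n_tasks):
            voters = np.where(answered[:, t])[0]
            if voters.size == 0:
                continue
            votes = np.zeros(2)
            for w in voters:
                votes[labels[w, t]] += ability[w]
            truth[t] = int(votes.argmax())
    return truth, ability, difficulty

The fixed-point structure (alternately updating labels, abilities, and difficulties until they stabilize) mirrors EM-style inference commonly used with such probabilistic graphical models; the paper's own update rules may differ.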

Keywords

Crowdsourcing, Worker model, Task difficulty, Quality control, Data mining.

Full Text: Volume 12, Number 7