Winter Milestone 5 vaastav Task Authorship

This page contains the concrete proposal for a study I am suggesting on Task Authorship for Winter Milestone 5.

Study Introduction

Study 1: Variance in Requester Quality vs. Variance in Worker Quality. We begin by comparing the effect of variance in requester task authorship on the overall result of a task with the effect of variance in worker quality. After describing the experimental setup designed to generate the data required for such a comparison, we show what effect, if any, task authorship has on the overall results of the task, and then compare it to the effect of variance in worker quality to determine which factor influences the overall results more strongly.
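To make the intended comparison concrete, the sketch below illustrates one way the two sources of variance could be contrasted once task results are collected, using a two-way ANOVA. It is only an illustration of the analysis idea, not the study's actual analysis code; the file name and column names (requester_id, worker_id, score) are assumptions for the example.

```python
# Hedged sketch: estimate how much of the variation in task results is
# attributable to requesters (task authorship) versus workers.
# The file and column names below are assumed for illustration only.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per completed task instance.
results = pd.read_csv("task_results.csv")  # columns: requester_id, worker_id, score

# Two-way ANOVA with requester and worker as categorical factors.
model = ols("score ~ C(requester_id) + C(worker_id)", data=results).fit()
anova = sm.stats.anova_lm(model, typ=2)
print(anova)

# Comparing the share of the total sum of squares for each factor indicates
# which source of variance has the stronger effect on overall task results.
eta_sq = anova["sum_sq"] / anova["sum_sq"].sum()
print(eta_sq)
```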

Study Method

Method: Study 1 and all further experiments reported in this paper were carried out using a microtasking platform that outsources crowd work to workers on the Mechanical Turk platform. Both workers and requesters were restricted to the United States. The study was completed with 10 unique requesters and 30 unique workers.

Method Specifics and Details

We began by populating our evaluation tasks with common crowdsourcing task types, or primitives, that frequently appear as microtasks or as parts of microtasks. We found 10 primitive task types that are most common in crowdsourcing workflows (Figure 1).
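As a purely illustrative sketch of what populating evaluation tasks from primitives could look like, the snippet below composes a task from primitive task types. The primitive names and the build_task helper are hypothetical placeholders, not the actual set of 10 primitives from Figure 1.

```python
# Hypothetical illustration only: the primitive names below are placeholders,
# not the 10 primitives identified in Figure 1.
from typing import Dict, List

# Example primitive task types (placeholders).
PRIMITIVES = ["categorization", "transcription", "image_labeling"]

def build_task(primitives: List[str], instructions: str) -> Dict:
    """Compose an evaluation task from a sequence of primitive task types."""
    unknown = [p for p in primitives if p not in PRIMITIVES]
    if unknown:
        raise ValueError(f"Unknown primitives: {unknown}")
    return {"primitives": primitives, "instructions": instructions}

# A microtask can be a single primitive or a small composition of them.
task = build_task(["image_labeling", "categorization"],
                  "Label each image, then pick the best category.")
print(task)
```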

Experimental Design for the Study

Measures from the Study

What do we want to analyze?

Contributors

@vaastav