Winter Milestone 5 (vaastav): Task Authorship
This page contains the concrete proposal for the study I am suggesting for the Task Authorship work in Winter Milestone 5.
Study 1: Variance in Requester Quality vs. Variance in Worker Quality. We begin by comparing the effect of variance in requester task authorship on the overall result of a task with the effect of variance in worker quality. After describing the experimental setup designed to generate the data required for this comparison, we show what effect, if any, task authorship has on the overall results of a task, and then compare it against the effect of variance in worker quality to determine which factor has the stronger influence on the overall results.
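One simple way to frame this comparison is to average quality scores per requester and per worker and compare the spread of the two sets of means. The sketch below is illustrative only: the requester/worker names and scores are synthetic placeholders, not data from the study, and the variance-of-means proxy is an assumption about the analysis, not the study's confirmed method.

```python
# Hypothetical sketch: comparing requester-level vs. worker-level variance
# in task quality scores. All names and values below are synthetic.
from statistics import mean, pvariance

# scores[(requester, worker)] = quality score of that worker's output
# on that requester's tasks (illustrative values).
scores = {
    ("R1", "W1"): 0.90, ("R1", "W2"): 0.85, ("R1", "W3"): 0.88,
    ("R2", "W1"): 0.70, ("R2", "W2"): 0.65, ("R2", "W3"): 0.72,
    ("R3", "W1"): 0.80, ("R3", "W2"): 0.78, ("R3", "W3"): 0.82,
}

requesters = sorted({r for r, _ in scores})
workers = sorted({w for _, w in scores})

# Mean score per requester (averaged over workers) and per worker
# (averaged over requesters).
req_means = [mean(scores[(r, w)] for w in workers) for r in requesters]
wrk_means = [mean(scores[(r, w)] for r in requesters) for w in workers]

# Variance of the group means: a rough proxy for how much each factor
# moves the overall result of a task.
req_var = pvariance(req_means)
wrk_var = pvariance(wrk_means)
print(f"requester variance: {req_var:.5f}, worker variance: {wrk_var:.5f}")
```

In this synthetic example the requester means spread more widely than the worker means, so the requester factor would dominate; the real study would make the same comparison on the collected ratings, likely with a proper variance-components model rather than this bare-bones proxy.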
Method: Study 1 and all further experiments reported in this paper were carried out using a microtasking platform that outsources crowd work to workers on the Amazon Mechanical Turk platform. Both workers and requesters were restricted to the United States. The study was completed with 10 unique requesters and 30 unique workers.
Method Specifics and Details
We began by populating our evaluation tasks with common crowdsourcing task types, or primitives, that frequently appear as microtasks or as parts of microtasks. We found 10 primitive task types that are most common in crowdsourcing workflows (Figure 1).
Experimental Design for the Study
Each of the 10 requesters was asked to author one task of each primitive type, so each requester authored 10 tasks in total. Each of the 30 workers was then asked to complete all of the authored tasks, so each worker completed 100 tasks: 10 of each primitive type and 10 from each requester. The requesters were then asked to evaluate each worker's submissions. The identities of the requesters and workers were not revealed, to remove any personal bias from the judgement criteria. An example task is shown in Figure 2.
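The counts above follow from the fully crossed design, which can be sketched directly. All names below are placeholders for illustration; only the counts (10 requesters, 10 primitives, 30 workers) come from the proposal.

```python
# Illustrative sketch of the fully crossed design described above.
# Requester, primitive, and worker names are placeholders.
from itertools import product

requesters = [f"requester_{i}" for i in range(10)]
primitives = [f"primitive_{i}" for i in range(10)]
workers = [f"worker_{i}" for i in range(30)]

# Each requester authors one task per primitive type.
tasks = list(product(requesters, primitives))
assert len(tasks) == 100  # 10 requesters x 10 primitive types

# Every worker completes every authored task.
submissions = list(product(workers, tasks))
assert len(submissions) == 3000  # 30 workers x 100 tasks

# Per worker: 10 tasks of each primitive type, 10 from each requester.
w0 = [t for w, t in submissions if w == "worker_0"]
assert sum(1 for r, p in w0 if p == "primitive_0") == 10
assert sum(1 for r, p in w0 if r == "requester_0") == 10
```

This also makes explicit that the evaluation step involves 3000 worker submissions to be judged in total.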
Measures from the Study
What do we want to analyze?