Milestone 4: Task Authoring - Training Interventions in Task Authorship for Requesters


Outline

• What’s the phenomenon you’re interested in?

Opportunities for work are being digitally transformed. Requesters want to post tasks and get timely, high-quality results, while minimizing the cost and time it takes for their work to be completed.

Requesters want their work to be completed to a high standard by workers whom they trust to possess the relevant skills.

But many requesters are new or inexperienced; we assume that they need training in how to design and post a task.

Assumptions/questions:

1) Do requesters recognize that they need training to acquire skills relevant to creating certain tasks? If so, what kind?

2) Do requesters want to access training relevant to authoring tasks on a crowdsourcing marketplace?

3) Do requesters access the training?

4) Does the quality of the tasks created by requesters who accessed the training improve? And by what criteria do we measure this?

The Puzzle

• What observation can’t we account for yet?

The experimental design

Experiment 1:

• Who are you recruiting?

  Three groups: 
  1) Experienced microtask requesters 
  2) Novice requesters (requesters who have never posted a task on a microtask platform) 
  3) Novice requesters who undertake training in content-specific task creation

• What are the conditions?

• What are you measuring? What statistical procedure will you use?

  1) Quality of tasks created by experienced microtask requesters, compared first with novice requesters and then with novice requesters who have undertaken training in content-specific task creation (one candidate statistical procedure is sketched after this list)
  2) Does training in microtask creation achieve good results?
  3) If yes, why? If not, what type of training for microtask creation would newcomers say they want instead?
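
A minimal sketch of one possible analysis, in Python, assuming each task is scored on a numeric quality rubric (e.g., blinded raters scoring 1-7). The group names and scores below are illustrative placeholders, not collected data:

  # Compare task-quality scores across the three requester groups.
  # Assumes each task received a numeric quality score (e.g., a mean
  # rubric rating from blinded raters). All values are illustrative.
  from scipy import stats

  # Hypothetical quality scores per group (1-7 rubric scale).
  experienced = [6.1, 5.8, 6.4, 5.9, 6.2]
  novice = [4.2, 3.9, 4.5, 4.1, 3.8]
  novice_trained = [5.3, 5.6, 5.1, 5.4, 5.7]

  # One-way ANOVA: is there any difference in mean quality across groups?
  f_stat, p_anova = stats.f_oneway(experienced, novice, novice_trained)
  print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

  # Non-parametric fallback if normality assumptions look doubtful.
  h_stat, p_kw = stats.kruskal(experienced, novice, novice_trained)
  print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.4f}")

  # A significant omnibus result motivates pairwise follow-ups, with a
  # multiple-comparison correction such as Bonferroni, e.g.:
  t_stat, p_pair = stats.ttest_ind(novice, novice_trained)
  print(f"novice vs. trained: t={t_stat:.2f}, p={p_pair:.4f}")

Under this design, a significant difference between the novice and trained-novice groups would be the primary evidence that the training intervention itself improved task quality.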

The result

• What (do you imagine) would happen?

The overall quality of tasks will improve. Workers will spend less time trying to work out what the requester is asking for.

There will be fewer requests for clarification from workers.


Future work

Offer templates

Collectives of workers identify good practice and create task templates. Requesters use or amend these templates. Submitted tasks go to a pool of workers associated with the task type for approval before release to workers. A sketch of a possible template structure follows below.
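
A minimal sketch, in Python, of what such a worker-curated template might look like as a data structure. All field names here are illustrative assumptions, not an existing platform schema:

  # Sketch of a worker-curated task template. Field names are
  # illustrative assumptions, not an existing platform schema.
  from dataclasses import dataclass, field

  @dataclass
  class TaskTemplate:
      task_type: str                      # e.g., "image labeling"
      title: str                          # short requester-facing title
      instructions: str                   # worked example of good wording
      acceptance_criteria: list[str]      # what "done" means for workers
      created_by: list[str] = field(default_factory=list)  # worker collective
      approved: bool = False              # set by the task-type worker pool

  # A requester starts from the template and amends it; the amended task
  # then returns to the task-type pool for approval before release.
  template = TaskTemplate(
      task_type="image labeling",
      title="Label objects in street photos",
      instructions="Draw a box around every car and pedestrian...",
      acceptance_criteria=["All visible cars boxed", "No overlapping boxes"],
      created_by=["worker_collective_42"],
  )

Keeping the approval flag on the template itself means the worker pool for each task type acts as the gatekeeper before a task is released, matching the workflow described above.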

Link to a meta-curriculum and provide triggers to both workers and requesters for acquiring and updating skills.

Training Workers for Improving Performance in Crowdsourcing Microtasks: https://www.l3s.de/~gadiraju/publications/gadiraju_ectel2015.pdf

Toward a Learning Science for Complex Crowdsourcing Tasks: http://research.microsoft.com/en-us/um/people/horvitz/task_learning_pipeline_chi2016.pdf


Contributors:

Please feel free to add/amend/contribute and then add your name here:

@arichmondfuller

@yoni.dayan
