Milestone 4 Task authoring - Training interventions in Task Authorship for Requesters

== Outline ==

• What’s the phenomenon you’re interested in?

Opportunities for work are being digitally transformed. Requesters want to post tasks and get timely, high-quality results. They want to minimize the cost and time it takes for their work to be completed.

Requesters want their work to be completed to a high standard by workers whom they trust to possess the relevant skills.

But many requesters are new or inexperienced; our working assumption is that they need to be trained in how to design and post a task.

'''Assumptions/questions:'''

1) Do requesters identify that they require training to acquire the skills relevant to creating certain tasks? If so, what kind?

2) Do requesters want to access training relevant to authoring tasks on a crowdsourcing marketplace?

3) Do requesters access the training?

4) Does the quality of the tasks created by the requesters who accessed the training improve? And by what criteria would we measure this improvement?

== The Puzzle ==

• What observation can’t we account for yet?

== The experimental design ==

Experiment 1:

• Who are you recruiting?

  Three groups:
  1) Experienced microtask requesters
  2) Novice requesters (requesters who have never posted a task on a microtask platform)
  3) Novice requesters who undertake training in content-specific task creation

• What are the conditions? The three recruitment groups above serve as the between-subjects conditions: experienced requesters as a baseline, untrained novices, and trained novices.

• What are you measuring? What statistical procedure will you use?

  1) Quality of task creation among experienced microtask requesters, compared first with novice requesters and then with novice requesters who have undertaken training in content-specific task creation
  2) Does training in microtask creation achieve good results?
  3) If yes, why? If not, what type of training for microtask creation do newcomers say they would want instead?
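
The outline leaves the statistical procedure open. For comparing a single quality measure across three independent groups, one conventional choice is a one-way ANOVA followed by pairwise post-hoc tests. A minimal sketch in Python, assuming each authored task receives a numeric rubric score (all ratings below are illustrative placeholders, not real data):

  # Hypothetical sketch: one-way ANOVA across the three requester groups.
  # Assumes each task has a numeric quality rating, e.g. a 1-5 rubric score
  # averaged over several raters. All numbers are placeholders.
  from scipy import stats

  experienced = [4.2, 4.5, 3.9, 4.1, 4.4]  # group 1: experienced requesters
  novice      = [2.8, 3.1, 2.5, 3.0, 2.7]  # group 2: untrained novices
  trained     = [3.6, 3.9, 3.4, 3.8, 3.5]  # group 3: trained novices

  # Omnibus test: is at least one group mean different?
  f_stat, p_value = stats.f_oneway(experienced, novice, trained)
  print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

  # If significant, follow up with pairwise comparisons
  # (Bonferroni-corrected t-tests; Tukey's HSD is a common alternative).
  pairs = [("experienced vs novice", experienced, novice),
           ("experienced vs trained", experienced, trained),
           ("novice vs trained", novice, trained)]
  for label, a, b in pairs:
      t, p = stats.ttest_ind(a, b)
      print(f"{label}: t = {t:.2f}, corrected p = {min(p * len(pairs), 1.0):.4f}")

If the rubric scores are treated as ordinal rather than interval data, the Kruskal-Wallis test (scipy.stats.kruskal) is the usual non-parametric alternative.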

== The result ==

• What (do you imagine) would happen?

The overall quality of the tasks will improve. Workers will spend less time trying to ascertain what the requester is asking for.

Workers will send fewer clarification requests.
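
Since clarification requests are directly countable from platform logs, this second prediction is straightforward to test. A minimal sketch, assuming we record how many tasks in each condition attracted at least one clarification request (all counts are hypothetical):

  # Hypothetical sketch: compare clarification-request rates between untrained
  # and trained novice requesters with a chi-squared test on a 2x2 table.
  from scipy.stats import chi2_contingency

  # Rows: condition; columns: [tasks with >=1 clarification request, tasks without].
  # Placeholder counts; real data would come from platform logs.
  table = [[34, 66],   # untrained novices: 34 of 100 tasks needed clarification
           [15, 85]]   # trained novices:   15 of 100 tasks needed clarification

  chi2, p, dof, expected = chi2_contingency(table)
  print(f"chi2 = {chi2:.2f}, p = {p:.4f}")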


== Future work ==

Offer templates

Collectives of workers would identify good practice and create task templates. Requesters would use or amend these templates. Submitted tasks would go to a pool of workers associated with that task type for approval before release to workers; a sketch of this pipeline follows below.
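
As one concrete reading of this workflow, here is a minimal sketch of the template-and-review pipeline. Every name in it (TaskTemplate, Task, review_pools, submit_task) is hypothetical, not an existing platform API:

  # Hypothetical sketch of the worker-owned template and review pipeline.
  from dataclasses import dataclass

  @dataclass
  class TaskTemplate:
      task_type: str         # skill subset this template belongs to
      instructions: str      # worker-vetted instruction text
      example_input: str
      example_output: str

  @dataclass
  class Task:
      template: TaskTemplate
      payload: str           # requester-supplied content for this task
      approved: bool = False

  # Each task type is owned by a pool of workers who review newly
  # authored tasks before they are released to the wider worker pool.
  review_pools = {
      "image-labeling": ["worker_a", "worker_b", "worker_c"],
  }

  def submit_task(task, votes, quorum=2):
      """Release the task only if enough reviewers from its pool approve it."""
      pool = review_pools[task.template.task_type]
      approvals = sum(1 for worker, ok in votes.items() if ok and worker in pool)
      task.approved = approvals >= quorum
      return task.approved

  # Usage: two of three pool members approve, so the task is released.
  template = TaskTemplate("image-labeling", "Label each image.", "cat.jpg", "cat")
  task = Task(template, payload="batch-017")
  print(submit_task(task, {"worker_a": True, "worker_b": True, "worker_c": False}))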

Link to a meta-curriculum and provide triggers to both workers and requesters for acquiring and updating skills.


https://www.l3s.de/~gadiraju/publications/gadiraju_ectel2015.pdf Training Workers for Improving Performance in Crowdsourcing Microtasks

http://research.microsoft.com/en-us/um/people/horvitz/task_learning_pipeline_chi2016.pdf Toward a Learning Science for Complex Crowdsourcing Tasks


== Contributors ==

Please feel free to add/amend/contribute and then add your name here:

@arichmondfuller

@yoni.dayan

@

@

@

@