Milestone 4 Task authoring - Training interventions in Task Authorship for Requesters


Outline

• What’s the phenomenon you’re interested in?

Opportunities for work are being digitally transformed. Requesters want to post tasks and get timely, high-quality results. They want to minimize the cost and time it takes for their work to be completed.

Requesters want their work to be completed to a high standard by workers they trust to possess the relevant skills.

But many requesters are new or inexperienced; our working assumption is that they need training in how to design and post a task.

Assumptions/questions:

1) Do requesters recognise that they need training to acquire the skills relevant to creating certain tasks? If so, what kind?

2) Do requesters want to access training relevant to authoring tasks on a crowdsourcing marketplace?

3) Do requesters access the training?

4) Does the quality of the tasks created by the requesters who accessed the training improve? And by what criteria do we measure this?

The Puzzle

• What observation can’t we account for yet?

The experimental design

Experiment 1:

• Who are you recruiting?

  Three groups:
  1) Experienced microtask requesters
  2) Novice requesters (requesters who have never posted a task on a microtask platform)
  3) Novice requesters who undertake training in content-specific task creation (a group-assignment sketch follows this list)
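
A minimal sketch in Python of how the recruiting could translate into group assignment; the pool sizes, the ID scheme, and the assumption that novices are randomly assigned to the no-training and training conditions are illustrative, not part of the proposal.

  import random

  # Hypothetical pools; IDs and group sizes are placeholders, not recruited participants.
  experienced = [f"exp_{i}" for i in range(30)]   # group 1: experienced microtask requesters
  novices = [f"nov_{i}" for i in range(60)]       # recruited novice requesters

  # Randomly split the novices into group 2 (no training) and group 3 (training)
  # so that any difference between the two novice groups can be attributed to the
  # training intervention rather than to self-selection.
  random.seed(42)
  random.shuffle(novices)
  half = len(novices) // 2
  groups = {
      "experienced": experienced,           # group 1
      "novice_untrained": novices[:half],   # group 2
      "novice_trained": novices[half:],     # group 3
  }

  for name, members in groups.items():
      print(name, len(members))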

• What are the conditions?

• What are you measuring? What statistical procedure will you use?

  1) Quality of task creation, comparing experienced microtask requesters with novice requesters and with novice requesters who have undertaken training in content-specific task creation (a minimal analysis sketch follows this list)
  2) Does training in microtask creation achieve good results?
  3) If yes, why? If not, what type of training for microtask creation do newcomers say they would want instead?
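
The proposal leaves the statistical procedure open. One plausible choice, sketched below in Python with placeholder ratings (not collected data), is a Kruskal-Wallis test across the three groups followed by Bonferroni-corrected pairwise Mann-Whitney comparisons.

  from scipy import stats

  # Hypothetical task-quality ratings (e.g., averaged 1-7 rater scores per task);
  # the numbers are illustrative placeholders only.
  experienced      = [6.1, 5.8, 6.4, 5.9, 6.2, 6.0]
  novice_untrained = [4.2, 3.9, 4.5, 4.0, 4.3, 4.1]
  novice_trained   = [5.5, 5.2, 5.8, 5.4, 5.6, 5.3]

  # Kruskal-Wallis compares the three groups without assuming normally
  # distributed quality scores.
  h_stat, p_value = stats.kruskal(experienced, novice_untrained, novice_trained)
  print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")

  # If the omnibus test is significant, pairwise Mann-Whitney U tests
  # (Bonferroni-corrected for three comparisons) show which groups differ.
  pairs = [
      ("experienced vs novice_untrained", experienced, novice_untrained),
      ("experienced vs novice_trained", experienced, novice_trained),
      ("novice_trained vs novice_untrained", novice_trained, novice_untrained),
  ]
  for label, a, b in pairs:
      u_stat, p = stats.mannwhitneyu(a, b, alternative="two-sided")
      print(f"{label}: U = {u_stat:.1f}, corrected p = {min(1.0, p * len(pairs)):.4f}")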

The result

• What (do you imagine) would happen?

The overall quality of the task will improve. Workers will spend less time trying to ascertain what the requester is requesting.

There will be fewer requests for clarification from workers.
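
If worker clarification requests are logged per task, this prediction could be checked with something as simple as a per-condition request rate. The sketch below uses invented log entries purely for illustration.

  from collections import Counter

  # Hypothetical log of worker clarification requests, tagged with the condition
  # of the requester who posted the task; the entries are illustrative only.
  clarification_log = [
      ("novice_untrained", "task_01"), ("novice_untrained", "task_01"),
      ("novice_untrained", "task_02"), ("novice_untrained", "task_03"),
      ("novice_trained", "task_11"),   ("novice_trained", "task_12"),
  ]
  tasks_posted = {"novice_untrained": 20, "novice_trained": 20}

  requests_by_condition = Counter(condition for condition, _task in clarification_log)
  for condition, posted in tasks_posted.items():
      rate = requests_by_condition[condition] / posted
      print(f"{condition}: {rate:.2f} clarification requests per task")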


Future work

Offer templates

Collectives of workers identify good practice and create task templates. Requesters use or amend these templates. Submitted tasks go to the pool of workers associated with that task type for approval and release to workers.
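
One way to read this (the structure and field names below are assumptions, not a specification from the proposal) is that a template is a shared, structured task description which the collective maintains and a requester amends before posting.

  # A hypothetical task-template structure that a worker collective could maintain
  # for one task type; requesters fill in or amend the fields before the task is
  # released for approval.
  image_labelling_template = {
      "task_type": "image labelling",
      "title": "Label objects in product photos",
      "instructions": [
          "Draw a box around every product visible in the image.",
          "Choose the best-matching category from the list provided.",
          "Flag blurry or unusable images instead of labelling them.",
      ],
      "examples": ["link-to-good-example", "link-to-bad-example"],
      "payment_per_item_usd": 0.05,          # placeholder value
      "estimated_time_per_item_sec": 30,
      "acceptance_criteria": "At least 90% agreement with a gold-standard subset",
      "approved_by_collective": False,       # set to True once the worker pool signs off
  }

  # A requester copies the template, adjusts it for their job, and submits it back
  # to the worker pool associated with the task type for approval/release.
  my_task = dict(image_labelling_template, title="Label furniture in catalogue photos")
  print(my_task["title"])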

Link to a meta-curriculum and provide triggers to both workers and requesters for the acquisition and updating of skills.


https://www.l3s.de/~gadiraju/publications/gadiraju_ectel2015.pdf Training Workers for Improving Performance in Crowdsourcing Microtasks

http://research.microsoft.com/en-us/um/people/horvitz/task_learning_pipeline_chi2016.pdf Toward a Learning Science for Complex Crowdsourcing Tasks


Contributors:

Please feel free to add/amend/contribute and then add your name here:

@arichmondfuller

@yoni.dayan

@

@

@

@