Milestone 4 Training to improve quality of work
Revision as of 06:51, 8 February 2016
Abstract & assumptions
The economy and workers’ lives have been digitized. Workers want to access opportunities for work as and when suits their individual circumstances. They want pathways for personal and professional development and opportunities to learn new knowledge and skills, geared to their own needs and their own level of challenge. Workers also want their learning and work to be valued, recognized, and shared, with content and resources tailored to their individual needs and accessible for future reference. In an era where many jobs will be automated through AI and algorithms, there is a need to leverage human traits such as curiosity and the ability to mix disciplines. Workers therefore do not want to be restricted to one skill area; they want crowdsourcing markets to allow branching out into other areas of work without barriers or fear of harming their reputation rating in the marketplace. Workers want timely, relevant opportunities for learning or updating their skills. They do not want to be penalized for emergencies in work and life that prevent completion of work; they want to be able to reach out to other trusted colleagues to assist as and when needed.
Who are you recruiting? 1. Experienced microtask workers 2. Novice workers who have never completed work on a microtask platform (complete newcomers) 3. Novice workers who complete relevant training on the task's skill subset
What are the conditions?
What are you measuring? What statistical procedure will you use? 1. Quality of task completion for each group - what are we measuring? 2. The type of training for microtask completion that improves task completion quality - how will we measure quality? 3. What kind of training are we talking about? Is it external resources (such as taking relevant and recommended MOOCs), or in-house training (within Daemo) in the form of exercises, drills, and blank tasks to complete?
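One candidate statistical procedure for comparing task-completion quality across the three recruited groups is a one-way ANOVA on per-worker quality scores. The sketch below is illustrative only: the group names and scores are hypothetical placeholders, not Daemo data.

```python
# Sketch: one-way ANOVA F-statistic across the three worker groups.
# Group names and quality scores are hypothetical, not real platform data.

def one_way_anova_f(groups):
    """Return the F-statistic for a dict of {group_name: [quality scores]}."""
    all_scores = [s for scores in groups.values() for s in scores]
    n = len(all_scores)          # total observations
    k = len(groups)              # number of groups
    grand_mean = sum(all_scores) / n

    # Between-group sum of squares: how far each group mean sits from the grand mean.
    ss_between = sum(
        len(scores) * (sum(scores) / len(scores) - grand_mean) ** 2
        for scores in groups.values()
    )
    # Within-group sum of squares: spread of scores inside each group.
    ss_within = sum(
        (s - sum(scores) / len(scores)) ** 2
        for scores in groups.values()
        for s in scores
    )
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical task-quality scores (0-1) for the three recruited groups.
scores = {
    "experienced": [0.82, 0.78, 0.85, 0.80],
    "novice_untrained": [0.55, 0.61, 0.58, 0.52],
    "novice_trained": [0.70, 0.74, 0.68, 0.72],
}
print(one_way_anova_f(scores))
```

A large F relative to the critical value of the F(k-1, n-k) distribution would suggest the groups differ in quality; a post-hoc pairwise test would then identify which groups differ.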
What (do you imagine) would happen?
The workers who complete training on the skills needed to complete the task will produce higher-quality results than the novice workers who do not complete relevant training.
The workers will also feel more valued, more fulfilled on a personal level, and therefore more committed to the tasks.
How could we measure this?
Determine whether or not workers want direct access to skills/knowledge, and identify the best way for a worker to obtain that knowledge and those skills, whether via gold standard tasks, external consumption of learning demonstrated through platform badges or certificates, mentoring and/or dual work opportunities, or access to examples of good practice.
We could also run A/B tests on satisfaction, retention rate on the platform, engagement, etc.
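For the retention-rate arm of such an A/B test, one simple analysis (a sketch with entirely hypothetical counts, not a committed design) is a two-proportion z-test on the share of workers in each arm who return to the platform within some window:

```python
import math

# Sketch: two-proportion z-test for A/B retention; all counts are hypothetical.

def two_proportion_z(x_a, n_a, x_b, n_b):
    """z-statistic and two-sided p-value for x retained out of n workers per arm."""
    p_a, p_b = x_a / n_a, x_b / n_b
    pooled = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 120 of 200 trained workers retained vs. 90 of 200 untrained.
z, p = two_proportion_z(120, 200, 90, 200)
print(z, p)
```

A small p-value would suggest the training condition changed retention; the same test applies to any binary outcome (e.g. "completed at least one more task").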
Outline MOOC research
- One route would be to offer certification such as that offered by Freelancer through completion of assessments, where workers complete tasks specific to a skill subset. If they pass, they gain a badge; if they fail, they can try again once they have upskilled. However, it is time-consuming and costly to create platform-specific assessment tasks.
- Another route would be for collectives of workers to own and manage a skill subset area. This would include identifying tasks related to that skill subset and offering them for peer review, similar to the methods used in MOOC peer assessment on NovoEd or Coursera. We could run experiments to identify whether either of these is an effective way to assess worker skill level.
- Another possibility would be to add a "professional development" tab on Daemo, consisting of "learning pathways" toward certain goals, such as being able to complete a given type of task. If you want to do this work, we recommend (or ask) you to take MOOC X, then nano-degree Y, then finish with a blank task/drill. Those learning pathways could be user-generated (by peer workers or requesters) or generated algorithmically.
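The learning-pathway idea above could be modeled as an ordered sequence of steps with a simple "next recommended step" lookup. Everything in this sketch (step names, field names, structure) is a hypothetical illustration, not Daemo's actual schema:

```python
# Sketch: a learning pathway as an ordered list of steps; all names hypothetical.
pathway = [
    {"step": "MOOC X", "kind": "external_mooc"},
    {"step": "nano-degree Y", "kind": "external_credential"},
    {"step": "blank task/drill", "kind": "in_platform_drill"},
]

def next_step(pathway, completed):
    """Return the first step not yet completed, or None if the pathway is done."""
    for step in pathway:
        if step["step"] not in completed:
            return step["step"]
    return None

# A worker who has finished MOOC X would be pointed at the nano-degree next.
print(next_step(pathway, {"MOOC X"}))
```

Whether pathways are authored by peers, requesters, or algorithms, the same structure works: only the process that produces the ordered list changes.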