Winter Milestone 3 Despicables Open Governance (Science): Holacracy
One of the fundamental problems with current Crowd Work platforms is the inefficient first experience for new Workers. A Worker who has just joined faces challenges in several areas:
- Understanding which types of tasks suit their skills
- Assessing the Requester
- Clarifying doubts about the tasks
From the Requester’s point of view, there is no efficient mechanism for feedback on task design, which is one of the prime causes of low-quality submissions: Workers fail to understand what needs to be done. We realised there is a pressing need for a solution that allows:
- Requesters to get timely feedback from Workers on their task design
- Honest Workers to find trustworthy Requesters
- Both Requesters and Workers to handle issues effectively and efficiently
We believe these problems can be solved collectively, and propose grouping Workers into clusters formed on the basis of varied reputation and common skills or tasks. Such a cluster of Workers can discuss problems and issues with a task while also mentoring new Workers. We further describe a filtering issue-handling system: Workers post issues, and other Workers vote for or against them. Once the number of votes reaches a threshold, the issue is passed on to the Requester as a notification. To keep the system transparent, the Requester can also view every posted question, if they wish.
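The voting pipeline above can be sketched in a few lines. Everything here is an illustrative assumption (the class names, the threshold value, the in-memory inbox), not an implementation of any existing platform API:

```python
from dataclasses import dataclass

VOTE_THRESHOLD = 3  # assumed net votes needed before the Requester is notified


@dataclass
class Issue:
    author: str
    text: str
    votes_for: int = 0
    votes_against: int = 0
    escalated: bool = False


class IssueTracker:
    """Workers post issues; peer votes filter which ones reach the Requester."""

    def __init__(self, threshold: int = VOTE_THRESHOLD):
        self.threshold = threshold
        self.issues: list[Issue] = []
        self.requester_inbox: list[Issue] = []  # issues escalated to the Requester

    def post(self, author: str, text: str) -> Issue:
        issue = Issue(author, text)
        self.issues.append(issue)
        return issue

    def vote(self, issue: Issue, in_favour: bool) -> None:
        if in_favour:
            issue.votes_for += 1
        else:
            issue.votes_against += 1
        # Escalate once net support crosses the threshold.
        if not issue.escalated and issue.votes_for - issue.votes_against >= self.threshold:
            issue.escalated = True
            self.requester_inbox.append(issue)

    def all_issues(self) -> list[Issue]:
        # Transparency: the Requester may browse every posted issue,
        # escalated or not.
        return list(self.issues)
```

Weighting votes by Worker reputation, or tuning the threshold per cluster, would be natural extensions of this sketch.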
While this system looks good in theory, we wanted to see how it might work in practice, so we carried out two experiments: one tests its effect on the onboarding experience of new Workers, and the second tests its effect on the quality of the work produced.
The first experiment is an A/B test comparing the time a new Worker takes to earn varying amounts of money when working alone versus when placed in a cluster of Workers with similar skills and varied reputation.
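As a minimal sketch of the assignment step in this A/B test, a seeded random split keeps the two conditions comparable and the experiment reproducible. The function name, condition labels, and seed are all illustrative assumptions:

```python
import random


def assign_condition(worker_ids, seed=0):
    """Randomly split new Workers into the solo (A) and cluster (B) arms."""
    rng = random.Random(seed)  # fixed seed -> reproducible assignment
    ids = list(worker_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"solo": ids[:half], "cluster": ids[half:]}
```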
The second experiment is also based on A/B testing but includes a contextual inquiry as well. We create two groups: in Group A, Workers are grouped into clusters, while Group B consists of individual Workers. Group A has an issue-tracking system in place where Workers can post issues, and the Requester can view the different issues the Workers face; Group B has no such system, and its Workers must contact the Requester one-on-one to clarify issues. We test both groups on the time taken for the Requesters to obtain the desired work and on the quality of the work obtained. We then ask the Requesters to redesign the task based on the feedback, and have a random set of Workers rate the new design. For the experiment to be fair, we need two different Requesters rather than a single one, since the feedback received from one group might bias the other. We resolve this dilemma by asking a group of Requesters to write a task description from the same vague idea, which lets us form Requester pairs based on the similarity of their descriptions.
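The Requester-pairing step can be sketched with a simple word-overlap measure. The Jaccard metric and the function names here are assumptions on our part; any text-similarity measure over the descriptions would serve:

```python
from itertools import combinations


def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two task descriptions, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa | wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)


def most_similar_pair(descriptions: dict[str, str]) -> tuple[str, str]:
    """Return the pair of Requester ids whose descriptions are most similar."""
    return max(
        combinations(descriptions, 2),
        key=lambda pair: jaccard(descriptions[pair[0]], descriptions[pair[1]]),
    )
```

Each member of the resulting pair then serves one experimental group, so both groups work from near-identical task designs.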
For the first experiment, we observe that a Worker who is part of a cluster takes less time than one working alone. We credit this to the fact that a Worker in a cluster can clarify basic doubts with a real person who has been through the system. Team alerts when a new task is posted, and the ability to follow relevant threads, may also have helped.
For the second experiment, we observe that the Workers in clusters take relatively less time than those working alone. We also found that Requesters learned more and were able to design better tasks after receiving feedback from the clustered Workers.
These results suggest that this model of Workers in clusters could solve the problems listed above.