Winter Milestone 5 Team CarpeNoctem - Improving the Relevance and Accuracy of Scores of Workers
- 1 Problems Faced
- 2 Goals
- 3 Our Concept
- 4 Further Thoughts and Suggestions
- 5 References
- 6 Team Members
Inflation of Scores
According to the paper “Reputation Inflation: Evidence from an Online Labor Market” by John J. Horton and Joseph M. Golden, ratings in online labor markets tend to inflate over time, making it difficult for the system to use scores to accurately reflect the quality of workers.
Lack of Incentives for Requesters to Rate
Rating workers takes a lot of time and effort, and requesters may not be incentivized to do it, especially if they only post a one-time job or short-term tasks on the system. Moreover, it is difficult to make rating compulsory for requesters, as this might deter them from posting tasks at all: if rating takes that much time, they could have done the tasks themselves.
Time Factors not taken into consideration
Scores earned a long time ago may not reflect a worker's current quality. Workers who have already earned a stable, good reputation score may be incentivized to slack off on upcoming tasks, and a worker's quality may also fluctuate over time.
Score not reflective of specific type of tasks
A single aggregate score may not reflect a worker's proficiency at specific types of tasks. For example, certain workers may be more proficient at arithmetic tasks than at translation.
- Provide Incentives to Accurately Rate Workers according to Task Quality
- Utilize the Network of Workers
- Make Scores More Reflective of Workers’ Current Situation
- Make Scores More Reflective of Specific Task
To tackle these problems, we have decided to utilize the power and time of existing workers to peer-evaluate and rate a small portion of another worker's task. This not only saves the requester the time of rating every single worker; it also improves the accuracy of the ratings, because workers are incentivized to rate fairly: their ratings affect their own trustability scores, and they may be downgraded if the requester finds that a worker is evaluating other workers inappropriately. Having requesters monitor the ratings of the tasks rather than the tasks themselves not only saves requesters time, but also ensures that every worker gets rated and that the rating they receive is fair.
Work and Rating Process
First, the requester breaks the task down into several subtasks, and the worker completes all of them. The worker also has to rate one subtask of another worker, assigned by the system; this rating contributes to the rater's trustability score, and to the rated worker's final score for that task if the requester decides to monitor the ratings. After a task is submitted, the system randomly picks one of its subtasks for other workers to rate. Finally, each worker gets a rating based on the average from their peers, plus a refactoring from the requester if the requester chooses to monitor those ratings and upgrade or downgrade current scores.
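The process above can be sketched as two small routines: one that randomly selects a subtask for peer review, and one that computes a final rating from the peer average plus an optional requester refactoring. The function names, data shapes, and the idea of passing the requester's adjustment as a callable are illustrative assumptions, not part of an actual implementation.

```python
import random
from statistics import mean

def pick_subtask_for_review(submission, rng=random):
    """After submission, the system randomly picks one subtask
    from the worker's completed subtasks for peers to rate."""
    return rng.choice(submission["subtasks"])

def final_rating(peer_ratings, requester_adjustment=None):
    """Average the peer ratings; if the requester chose to monitor
    the ratings, apply their upgrade/downgrade as an adjustment."""
    score = mean(peer_ratings)
    if requester_adjustment is not None:
        score = requester_adjustment(score)
    return score
```

For example, `final_rating([4, 5, 3])` yields the peer average 4, and passing `requester_adjustment=lambda s: s + 0.5` models a requester upgrading that score after reviewing the peer ratings.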
To incentivize requesters to evaluate the ratings given by peer workers and refactor the scores, we can employ Daemo's concept of increasing the likelihood that a high-scoring worker is assigned to the requester's tasks again in the future.
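One simple way to realize this incentive is a weighted random assignment, where a worker's chance of being matched to the requester again is proportional to their score. This is a minimal sketch under that assumption; the data shape and function name are hypothetical.

```python
import random

def choose_worker(scores, rng=random):
    """Pick a worker for the requester's next task, with probability
    proportional to each worker's current score (a weighted draw)."""
    names = list(scores)
    weights = [scores[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]
```

With `choose_worker({"alice": 2.0, "bob": 4.0})`, bob is twice as likely to be assigned as alice, so maintaining a high score directly increases future work opportunities.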
Further Thoughts and Suggestions
- Manoj Pandey : @manojpandey
- Michelle Chan : @michellechan
- Lucas Qiu : @lucasq