WinterMilestone 4 hamdur.rahman Impact of peer evaluation and skill set rating between Workers

From crowdresearch
Revision as of 06:13, 7 February 2016 by Hamdurrahman (Talk | contribs)



High-quality work is central to a crowdsourcing platform, as it motivates requesters to post more work and workers to earn money. Many workers are dissatisfied because their work is rejected by requesters without proper explanation; there is no way for them to question the decision or improve in the areas where they are lacking. The requester receives low-quality work and has no time to interact with the worker to explain the rejection or provide proper reasoning. Requesters do not buy the idea of spending more time on low-quality or rejected work by providing feedback.


One way to achieve high-quality work is through peer evaluation and rating workers according to their skill sets. Workers rating themselves would not be transparent, and bringing in a third-party rating agency is not always feasible.

Peer evaluation among workers can help:

  • Review and verification of workers' work: it will increase the quality of work on the crowdsourcing platform.
  • A quantitative rating of workers' skill sets by fellow workers: it helps requesters find high-quality workers, and it also helps workers target the right work, improve their existing skills, and add new ones.
  • It will help other workers (reviewers) increase their own ratings and earn money.

We want to understand:

  • Does peer evaluation bring higher quality work and compensate for some of the requester's shortcomings?
  • Is peer evaluation motivating for workers, including the workers who act as reviewers?
  • Does the rating system enhance the reputation of the work?

Experimental Design

In order to understand how peer evaluation and skill set rating can impact quality, we introduce skill tags and their ratings: a quantitative system that enhances quality, provides learning feedback, and builds trust between workers.

How skill set rating works:

  • We created a skill set category aligned with the work posted on Daemo. The skills were related to computer programming, such as HTML 5.
  • We also introduced a skill set rating, measured on a three-point scale (BE: Below Expectation, ME: Meets Expectation, EE: Exceeds Expectation).
  • We created a properly authored task: build a form where a person can fill in their own details and their family's details, with the form accessible on the Internet. The skill set needed to complete the task was also listed. The price of the task was set at $25.
  • We created the following groups of workers:
    • 3 workers with HTML 5 skills and ratings of BE, ME, and EE respectively. These workers only reviewed the work of other workers.
    • 5 workers (no HTML 5 skills) who performed the task and submitted on Daemo without any peer review.
    • 5 workers (no HTML 5 skills) who performed the task and went through peer review.
    • 5 workers (with HTML 5 skills) who performed the task and submitted on Daemo without any peer review.
    • 5 workers (with HTML 5 skills) who performed the task and went through peer review.

Rules and Impact of Review

  • A review can be done only by a worker who has the required skills with a rating above that of the worker requesting the review.
  • A relative price was set for reviewers: BE: $2, ME: $4, EE: $6. This is what the worker pays the reviewer out of pocket to have their work reviewed.
  • The reviewer provides review comments, adds skills, and can increase the skill ratings of the worker.
  • If the work is accepted by the requester, the skill set ratings of the worker and the reviewer are re-confirmed by the requester.
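The eligibility and fee rules above can be sketched in code. This is only an illustrative model, not the actual Daemo implementation; the function and variable names are hypothetical:

```python
# Hypothetical sketch of the review-eligibility and fee rules described above.
# Data structures and names are illustrative, not Daemo's real API.

RATING_ORDER = {"BE": 0, "ME": 1, "EE": 2}   # Below / Meets / Exceeds Expectation
REVIEW_FEE = {"BE": 2, "ME": 4, "EE": 6}     # fee in dollars, paid by the worker

def can_review(reviewer_skills, worker_skills, skill):
    """A reviewer qualifies only if they hold the skill at a strictly
    higher rating than the worker requesting the review."""
    if skill not in reviewer_skills:
        return False
    worker_rating = worker_skills.get(skill)
    if worker_rating is None:
        # Worker has no rating for this skill yet; any rated reviewer qualifies.
        return True
    return RATING_ORDER[reviewer_skills[skill]] > RATING_ORDER[worker_rating]

def review_fee(reviewer_skills, skill):
    """Fee the worker pays out of pocket, based on the reviewer's rating."""
    return REVIEW_FEE[reviewer_skills[skill]]
```

For example, an EE-rated reviewer may review a BE-rated worker's HTML 5 work for a $6 fee, while a BE-rated reviewer may not review another BE-rated worker.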


The experiment produced several findings:

  • Workers with no skills who submitted directly without review had many fundamental quality issues, and all of their submissions were rejected (the task requires the skills).
  • Workers with no skills who went for review had a number of iterations with the reviewer, mostly related to skill fundamentals.
  • Workers with skills who submitted directly without review were accepted in some cases, but were also rejected when some of the assumptions they made were wrong.
  • Workers with skills who went for review had all of their submissions accepted. They made more assumptions where parts of the task were unclear or missing, knowing these would be validated by the reviewer.
  • No worker submitted their work to be reviewed by a reviewer with a BE rating.
  • The requester felt comfortable accepting work that had already been reviewed.
  • Workers are focused on increasing their skill sets and skill ratings, since this increases their chances of becoming a reviewer in the future and earning money.
  • We still need to investigate:
    • How will skill sets and ratings help route work requiring a specific skill set to the right workers, and how can requesters exploit this?
    • Does the requester also take the reviewer's comments on task authoring into account and improve accordingly? The requester knows that it is the worker, not the requester, who pays for the review out of pocket.