Winter Milestone 4: Inception (TRACTRICOID)

Categorization for differentiation

Labeling tasks

  • It is time-consuming and difficult for workers to find tasks that match their talents.

Imagine John, a worker choosing between job A and job B. Job B pays more than job A, so John picks job B. What John does not know is that he lacks the skills required for job B, while job A is something he could finish easily in a short time. He ends up spending a lot of time on job B, does it poorly, and his work gets rejected. This example shows how simply knowing the skills a job requires can help workers pick the right task.

  • Categorizing jobs for suggestions

Labeled tasks make it easy to show suggestions to workers. We can infer a worker's interests and skills from the tasks they complete and how well they perform them, which makes it much easier to suggest the right task to a worker with matching skills, as sketched below.
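
As a rough illustration of how labeled tasks could feed recommendations, the sketch below matches a worker's per-skill ratings against a task's required skill labels. The Task and Worker records and the averaging rule are illustrative assumptions, not Daemo's actual data model.

  # A minimal sketch of skill-based task matching; field names are
  # illustrative assumptions, not Daemo's API.
  from dataclasses import dataclass, field

  @dataclass
  class Task:
      title: str
      required_skills: set[str]
      pay: float

  @dataclass
  class Worker:
      name: str
      # skill -> average rating earned on tasks that required it (0.0 - 5.0)
      skill_ratings: dict[str, float] = field(default_factory=dict)

  def recommend(worker: Worker, tasks: list[Task], top_n: int = 3) -> list[Task]:
      """Rank tasks by how well the worker's rated skills cover the task's labels."""
      def score(task: Task) -> float:
          if not task.required_skills:
              return 0.0
          covered = [worker.skill_ratings.get(s, 0.0) for s in task.required_skills]
          # Average rating across required skills: tasks the worker is
          # strong in (and fully covers) float to the top.
          return sum(covered) / len(task.required_skills)
      return sorted(tasks, key=score, reverse=True)[:top_n]

  worker = Worker("John", {"transcription": 4.5, "translation": 2.0})
  tasks = [
      Task("Transcribe audio", {"transcription"}, pay=5.0),
      Task("Translate survey", {"translation", "attention to detail"}, pay=8.0),
  ]
  print([t.title for t in recommend(worker, tasks)])  # transcription task ranks first

Under this scoring rule, John would be steered toward the transcription task he can do well rather than the higher-paying translation task he would likely fail, matching the scenario described above.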

Worker Benefits

  1. Saves workers the time-consuming search for tasks that match their skills.
  2. Enables automatically generated task recommendations.

Requester Benefits

  1. It becomes easier to rate workers on the skills required by the tasks they complete.

Endorsing worker skills

Endorsing people's skills differentiates them from the crowd and builds both their reputation and their self-confidence. Websites such as Stack Overflow use reputation to motivate people to answer questions, so on a crowdsourcing platform like Daemo, reputation can fundamentally change how the system works. Having requesters rate a worker on the skills their task required is a great way to give that worker feedback.
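
As an illustration of how per-skill endorsements might be stored and aggregated, here is a minimal sketch; the schema and the 1-5 rating scale are assumptions for the example, not Daemo's real rating model.

  # A minimal sketch of per-skill endorsements; the schema is a
  # hypothetical assumption, not Daemo's actual rating model.
  from collections import defaultdict

  class SkillProfile:
      """Accumulates requester ratings per skill for a single worker."""

      def __init__(self) -> None:
          # skill -> list of (requester_id, rating) pairs
          self.endorsements: dict[str, list[tuple[str, int]]] = defaultdict(list)

      def endorse(self, requester_id: str, skill: str, rating: int) -> None:
          if not 1 <= rating <= 5:
              raise ValueError("rating must be 1-5")
          self.endorsements[skill].append((requester_id, rating))

      def average(self, skill: str) -> float | None:
          ratings = [r for _, r in self.endorsements.get(skill, [])]
          return sum(ratings) / len(ratings) if ratings else None

  profile = SkillProfile()
  profile.endorse("req_1", "transcription", 5)
  profile.endorse("req_2", "transcription", 4)
  print(profile.average("transcription"))  # 4.5

Keeping the requester id alongside each rating also makes it possible to check endorsement diversity later, which matters for the abuse cases discussed below.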

Worker Benefits

  1. Workers can differentiate themselves from other workers who have the same rating but a different skill set.
  2. Workers gain feedback from requesters about their performance beyond simply being rejected or accepted.

Requester Benefits

  1. Skill ratings help filter out noisy labels introduced either unintentionally by unreliable workers or intentionally by spammers and malicious workers.
  2. It becomes easier to recruit talent.
  3. Worker reliability and reputation, often unknown today, become visible to requesters.
  4. Workers perform higher-quality work when they know it will be rated.

Related Work

In "Eliminating spammers and ranking annotators for crowdsourced labeling tasks", the authors propose an empirical Bayesian algorithm to eliminate workers who label randomly without looking at the task at hand.
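
To make the idea concrete, the sketch below implements a simplified version of the spammer-score intuition for binary labels: a worker who labels at random has sensitivity plus specificity near 1. Unlike the paper's empirical Bayesian algorithm, this version assumes gold-standard answers are known, which is a deliberate simplification.

  # Simplified sketch of the spammer-score intuition for binary labels.
  # The paper estimates worker parameters jointly without gold labels;
  # here we assume gold-standard answers, which is a simplification.

  def spammer_score(worker_labels: list[int], true_labels: list[int]) -> float:
      """Return a score in [0, 1]; values near 0 suggest random labeling."""
      pos = [w for w, t in zip(worker_labels, true_labels) if t == 1]
      neg = [w for w, t in zip(worker_labels, true_labels) if t == 0]
      sensitivity = sum(pos) / len(pos)            # P(label=1 | true=1)
      specificity = sum(1 - w for w in neg) / len(neg)  # P(label=0 | true=0)
      return abs(sensitivity + specificity - 1)

  # A careful worker scores high; a coin-flipping worker scores near 0.
  print(spammer_score([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0
  print(spammer_score([1, 0, 1, 0], [1, 1, 0, 0]))  # 0.0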

Prototypes

[Prototype screenshots: Userratingprototype.png, Prototype2.png]

What can go wrong and how to overcome

  1. Requesters who are friends could create tasks for each other in order to rate each other well or poorly (e.g., LinkedIn [1]).
  2. Requesters can see who endorsed a worker, so if all of a worker's endorsements were made by a single requester, those endorsements become questionable (see the sketch after this list).
  3. Tagging tasks correctly can be confusing for requesters. For example, what skills do people need to fill out surveys? In this case, our solution would be to offer suggested skills for taking surveys, such as 'attention to detail'.
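
As a minimal sketch of the single-endorser check mentioned in point 2 above, the function below flags a skill whose endorsements are dominated by one requester; the data shape and the threshold are illustrative assumptions.

  # A minimal sketch of the single-endorser check from point 2 above;
  # the 0.75 threshold is an illustrative assumption.
  from collections import Counter

  def suspicious_endorsements(endorser_ids: list[str], threshold: float = 0.75) -> bool:
      """Flag a skill whose endorsements mostly come from one requester."""
      if not endorser_ids:
          return False
      _, top_count = Counter(endorser_ids).most_common(1)[0]
      return top_count / len(endorser_ids) >= threshold

  print(suspicious_endorsements(["req_1", "req_1", "req_1", "req_2"]))  # True
  print(suspicious_endorsements(["req_1", "req_2", "req_3"]))           # False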

Milestone Contributors

@parsis @sophiesong @hizai