Milestone 6 ams

WorkBazaar - A Meritocratic Crowd-Sourcing Platform

Abstract

Existing crowdsourcing platforms place little weight on the reputation of both workers and employers. Employers are forced to assign work to workers without insight into their capabilities or their understanding of the task.

This paper aims to address these problems of trust by proposing a crowdsourcing platform that increases the transparency of every user's reputation. Primarily, it focuses on highlighting the work quality of workers and the payment history of employers. We discuss how employers can recognize top workers and communicate their work requirements in order to obtain quality work from the platform.

Motivation

When we observe crowdsourcing platforms, we often see quantity winning over quality in the workforce. Two issues stand out on such platforms: 1. How do top workers make sure that they stand out from the rest of the crowd? 2. How do employers find the right workers for their tasks?

Once we have matched potential workers to employers, we can further address issues related to communication, wages, and exact requirements.

Related Work

Amazon Mechanical Turk (AMT), one of the most popular crowdsourcing platforms, presently carries a massive number of workers whose reputations are unaccounted for. Although the platform is designed for quick allocation of micro-tasks, it does not encourage communication between workers and employers, and employers remain unsure about the source of their results until a worker submits the work. This leads to mass rejection of unsatisfactory work and wasted time for both parties.

Insight

AMT has no provisions for building confidence between workers and requesters. WorkBazaar instead encourages trust among users, both new and experienced. Ratings are a close approximation of a user's work ethic and reputation. Before one user deals with another, a certain level of trust is needed: a worker needs it before attempting a job, and a requester needs it before spending valuable time evaluating the job they are paying for. A better rating is also positive reinforcement for a worker and will help them win higher-paid, expert-level jobs in the future; for a requester, a higher rating helps them reach skilled workers, who look for a trustworthy employer with a good reputation for paying on time and paying the right amount for the job.

Such a rating is also an asset for the platform as a whole: the happier the platform's users are, the more popular the platform will be. It is a win-win.

Building a communication system into the platform also allows users to interact and resolve any inconsistencies in work quality.

System

The rating system provides two kinds of ratings: an overall rating, and a rating that conveys the recent track record of the requester or worker. The overall rating shows how a user (worker or requester) has performed since day one; it helps other users find an appropriate match and is also an indication of the user's experience. The recent track record rating reflects the user's recent work ethic. Users who have built up a high overall rating can otherwise get away with a few failed jobs, since those jobs barely move their average; the recent track record closes this loophole. Even a user with a high overall rating must maintain a decent recent track record in order to project confidence when giving jobs to, or taking jobs from, another user.

A user rates another user out of 5, and the overall rating is the average of all ratings the user has received, much as on other platforms (the Google Play Store, for example). The recent track record is not an average but a display of the last 10 ratings (e.g., 3, 3, 2, 1, 3, 5, 5, 4, 4, 2). Such a display is better because averages often hide the true picture; the entire purpose of this rating is to do away with the errors averages introduce.
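
As a concrete illustration, a minimal Python sketch of this bookkeeping follows. The UserRating class and its fields are hypothetical names chosen for this example, not part of any specified API.

    from collections import deque

    class UserRating:
        """Tracks an overall average rating and a recent track record (illustrative sketch)."""

        WINDOW = 10  # number of recent ratings displayed, per the scheme above

        def __init__(self):
            self._total = 0   # sum of all ratings ever received
            self._count = 0   # number of ratings ever received
            self._recent = deque(maxlen=self.WINDOW)  # last 10 scores; oldest drops off

        def add_rating(self, score: int) -> None:
            if not 1 <= score <= 5:
                raise ValueError("ratings are given out of 5")
            self._total += score
            self._count += 1
            self._recent.append(score)

        @property
        def overall(self) -> float:
            """Average of every rating since day one."""
            return self._total / self._count if self._count else 0.0

        @property
        def track_record(self) -> list:
            """The last 10 raw scores, shown as-is rather than averaged."""
            return list(self._recent)

Keeping a running sum and count makes the overall average cheap to update, while the fixed-length window preserves the raw recent scores that an average would flatten.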

These ratings also carry feedback, so that the requester can publicly acknowledge the quality of the worker's output and the worker can comment on the requester's professionalism. The rating system lets requesters reach out to potential workers with good work records, and helps workers avoid requesters whose payment history has been unreliable. It encourages workers to keep up their work quality and discourages requesters from unjustly rejecting work.
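
Extending the same hypothetical sketch, each rating record could pair a score with its public feedback comment and the direction of the review:

    from dataclasses import dataclass

    @dataclass
    class RatingWithFeedback:
        score: int            # 1-5, as above
        feedback: str         # public comment shown alongside the score
        from_requester: bool  # True if a requester rated a worker, False if vice versa

    # Example: a requester acknowledges good work; a worker flags late payment.
    reviews = [
        RatingWithFeedback(5, "Accurate labels, delivered early.", from_requester=True),
        RatingWithFeedback(2, "Payment arrived two weeks late.", from_requester=False),
    ]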

Also, a prevalent communication system is necessary for bringing the workers and the requesters close. A communication system allows the users to interact with each other before, during and after a job completion. Since micro-tasks under one job carry mostly the same instructions, a single investment of time in figuring out what the requester needs and what the worker can deliver, will improve the work quality and accuracy, and help workers avoid mass rejection.
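
One possible shape for such a channel, again with hypothetical names, is a single per-job message thread shared by the worker and the requester across all three phases:

    from dataclasses import dataclass, field

    @dataclass
    class JobThread:
        """A shared message log for one job, open before, during, and after completion."""
        job_id: str
        messages: list = field(default_factory=list)  # (sender, text) pairs

        def post(self, sender: str, text: str) -> None:
            self.messages.append((sender, text))

    # Clarifying requirements once, up front, applies to every micro-task in the job.
    thread = JobThread(job_id="job-42")
    thread.post("requester", "Please tag sentiment as positive/negative/neutral only.")
    thread.post("worker", "Understood; how should sarcasm be labeled?")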

Evaluation

A similar rating system has been implemented on oDesk, a popular freelancing marketplace operational since 2003; the rating and feedback system in WorkBazaar builds on oDesk's highly successful design. WorkBazaar's rating system has shown marked improvements in allocating the right work to the right people. Over 4 months of alpha testing, 34 out of 37 requesters used the rating system to contact potential workers. Overall, requesters were satisfied with the workers and consistently reviewed the previous 3 to 5 ratings to gauge a worker's experience.

Workers, on the other hand, consulted ratings only when in doubt about a requester's reputation, and usually resorted to the feedback when a requester unjustly rejected their work.
