Milestone 6 ams
WorkBazaar - A Meritocratic Crowd-Sourcing Platform
Existing crowdsourcing platforms place little weight on the reputation of either party: the workers or the employers. Employers are forced to assign work to workers without insight into their capabilities or their understanding of the task.
This paper aims to address these problems of trust by implementing a crowdsourcing platform that makes the reputation of every user transparent. Primarily, it focuses on highlighting the work quality of workers and the payment history of employers. We discuss how employers can recognize top workers and communicate their requirements clearly, in order to obtain quality work from the platform.
When we observe crowdsourcing platforms, we often see quantity prioritized over quality in the workforce. Two issues dominate such platforms: 1. How do top workers make sure they stand out from the rest of the crowd? 2. How do employers find the right worker for their tasks?
Once potential workers have been matched to employers, we can address the remaining issues of communication, wages, and exact requirements.
One of the most popular crowdsourcing platforms, Amazon Mechanical Turk (AMT), currently carries a massive workforce whose reputation is largely unaccounted for. Although the platform is designed for quick allocation of micro-tasks, it does not encourage communication between workers and employers, and employers cannot judge the source of their results until a worker submits the work. This leads to mass rejection of unsatisfactory work and wasted time for both parties.
AMT has no provisions for building confidence between workers and requesters. WorkBazaar aims to build confidence, indeed trust, among users, both new and experienced. The ratings are a close approximation of a user's work ethic and reputation. Before one user deals with another, a worker needs a certain level of trust before attempting a job, and a requester needs it before spending valuable time evaluating a job they are paying another user to do. A better rating is also positive reinforcement for a worker and will help them win higher-paid, expert-level jobs in the future; for a requester, a higher rating attracts skilled workers who look for a trustworthy employer with a reputation for paying on time and paying the right amount for the job.
Such a rating system is also an asset for the platform as a whole: the happier its users are, the more popular the platform becomes. It is a win-win.
This rating system provides two kinds of ratings: an overall rating, and a rating that conveys the user's recent track record. The overall rating reflects how the user (worker or requester) has performed since day one; it helps other users find an appropriate match and also indicates the user's experience. The recent track record rating reflects the user's work ethic in the near past. Users who have built up a high overall rating can otherwise get away with a few failed jobs, since those jobs barely move their average; the recent track record closes this loophole. Even a user with a high overall rating must maintain a decent recent track record rating to project confidence when giving or taking jobs.
Users rate each other out of 5, and the overall rating is the average of all ratings the user has received, much as on other platforms (the Google Play Store, for example). The recent track record is not an average but a display of the 10 most recent ratings (e.g., 3, 3, 2, 1, 3, 5, 5, 4, 4, 2). Showing the individual ratings is preferable because averages often hide the true picture; the entire purpose of this rating is to avoid the errors that averages introduce.
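The two-rating scheme above can be sketched in a few lines of code. This is a minimal illustration, not the platform's actual implementation: the class and method names are hypothetical, and it assumes a 1-5 scale and a fixed window of the 10 most recent ratings, as described above.

```python
from collections import deque


class UserReputation:
    """Tracks an overall average rating plus a recent track record:
    the 10 most recent ratings, shown individually rather than averaged."""

    RECENT_WINDOW = 10  # size of the track-record window (assumption from the text)

    def __init__(self):
        self.total = 0  # sum of all ratings ever received
        self.count = 0  # number of ratings ever received
        # deque with maxlen automatically discards the oldest rating
        # once more than RECENT_WINDOW ratings have arrived
        self.recent = deque(maxlen=self.RECENT_WINDOW)

    def add_rating(self, stars):
        if not 1 <= stars <= 5:
            raise ValueError("ratings are on a 1-5 scale")
        self.total += stars
        self.count += 1
        self.recent.append(stars)

    def overall(self):
        """Average of every rating since day one (None if unrated)."""
        return self.total / self.count if self.count else None

    def track_record(self):
        """The recent ratings themselves, oldest first."""
        return list(self.recent)


# Using the example sequence from the text:
u = UserReputation()
for s in [3, 3, 2, 1, 3, 5, 5, 4, 4, 2]:
    u.add_rating(s)
print(u.overall())       # 3.2
print(u.track_record())  # [3, 3, 2, 1, 3, 5, 5, 4, 4, 2]
```

Note how the two views diverge: an eleventh rating of 5 would nudge the overall average only slightly, but it would push the oldest rating out of the track record entirely, so recent behavior is always fully visible.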
Also, a built-in communication channel is necessary so that requesters and workers can clarify requirements before and during a job.