Milestone 6 Sundevils



A moderator-based, quality-optimized crowdsourcing platform

Abstract

Existing crowdsourcing platforms lack fairness and quality control. Our proposal aims to rectify both issues by introducing a moderator between the two existing parties, the requester and the worker. This enables smooth operation of the platform by settling rejection disputes and adds third-party oversight over the entire platform. Our second foundation idea ensures that quality work is always submitted, makes the requester's job easier, and improves the worker's efficiency: each worker's submitted task is checked for completion of the basic requirements laid out by the requester. Additionally, a feature gives rookie workers an equal opportunity so that they continue working on the platform.

Motivation

Dispute resolution system (Foundation Idea): Crowdsourcing services such as Amazon Mechanical Turk allow small tasks to be distributed easily to a large number of workers. Unfortunately, there are cases where the work done by the worker is not given due credit by the requester, who may claim that the work is incomplete or not up to expectation. On the existing platforms there is no way for the worker to appeal against this. On the other hand, workers can be equally malicious in attempting to sell defective work to the requester. There needs to be a way to address both the worker's and the requester's grievances.

Automated quality control (Foundation Idea): In a typical submission, the worker simply uploads a file or folder (usually a zip or jar). The requester must extract every folder and check the contents one by one. To reduce this burden on the requester and to improve quality checks, we suggest mandatory terms that must be met at submission time. Once they are met, the requester can look at the submission for a final review.

Support for rookie workers (Feature Idea): Existing platforms focus on helping experienced workers and giving them access to good requesters. New workers to the system (rookie workers) are not given good tasks, and trust issues about their capability arise. To balance this, any person on our crowdsourcing platform can take up any HIT. When a requester has doubts about a particular worker, a pretest is given; if the rookie worker performs well, he or she is given the HIT. A sketch of this gate is given below.
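The following Python sketch illustrates how such a pretest gate could work. It is only a sketch under stated assumptions: the class name Worker, the pretest_score argument, and the 0.8 passing threshold are illustrative and not specified in the proposal.

    # Hedged sketch: rookie pretest gate. Class name, field names, and the
    # 0.8 passing threshold are illustrative assumptions, not part of the proposal.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Worker:
        worker_id: str
        completed_hits: int  # HITs finished on the platform so far

    def can_take_hit(worker: Worker, requester_requires_pretest: bool,
                     pretest_score: Optional[float] = None,
                     passing_score: float = 0.8) -> bool:
        """Any worker may take any HIT; a pretest is required only when the
        requester has doubts about this particular worker."""
        if not requester_requires_pretest:
            return True
        # A doubted (e.g. rookie) worker must first pass the requester's pretest.
        return pretest_score is not None and pretest_score >= passing_score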

Related Work

1. A moderator idea already exists in Turkopticon, but the concept and scope of our moderator go beyond it: the moderator in Turkopticon only checks posts, ensures no spamming or abusive remarks are made, and justifies rejections only to a certain extent.

2. http://publicassets.s3.amazonaws.com/papers/HCOMP2011_philosopher_stone.pdf. This paper discusses targeted and scalable quality assurance in crowdsourcing. Present methods rely on EM-style post-processing or manual annotation of large gold-standard sets; the paper instead proposes targeted training feedback to workers.

Insight

Moderator pool: We propose a platform with a group of people, the moderators, who form a moderator pool responsible for addressing the worker's and the requester's grievances. If a person (requester or worker) feels the need to appeal against the other party, he or she may bring the case to the moderator pool.

Quality control: This feature automatically checks whether the worker has completed all the tasks expected by the requester. It saves the requester a lot of time, because the system itself checks the basic functionality and rejects or accepts the submission based on criteria set by the requester (these criteria are visible to the worker as well).

System


1) We propose a crowdsourcing platform that prioritizes amiability between worker and requester. Example: suppose a requester posts a HIT such as "Develop an Android app implementing a set of features". Ten workers take up the HIT; six complete the task successfully and four do not complete it as expected. Once the submission deadline passes, the requester reviews the submissions but awards payment to only five of the six. The sixth worker therefore appeals to the moderator pool.

The moderator pool consists of moderators who are experts in the domain, in this case Android programming. It is the worker's responsibility to pay a fraction of the HIT reward to the moderator pool so that the pool will consider the case. The moderators then examine the submission and deliver a verdict based on the veracity of the worker's claim.

If the worker is correct, the requester pays the worker for the HIT, reimburses the worker for the amount paid to the pool, and bears the pool's service charge; the requester also receives a negative badge for the negligence. If the requester is correct, the worker pays the pool's service charge and receives a negative badge for the unnecessary appeal. A sketch of this settlement rule is given below.
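The following Python sketch illustrates this settlement rule. It is a sketch under stated assumptions: the split between the worker's filing fee and the pool's service charge, the class names, and the badge labels are illustrative and not fixed by the proposal.

    # Hedged sketch: moderator-pool settlement rule. The split between the
    # worker's filing fee and the pool's service charge, the class names, and
    # the badge labels are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Party:
        name: str
        balance: float = 0.0
        badges: List[str] = field(default_factory=list)

    def settle_dispute(worker: Party, requester: Party, pool: Party,
                       hit_reward: float, filing_fee: float, service_fee: float,
                       worker_is_correct: bool) -> None:
        """Apply the pool's verdict. The worker is assumed to have already
        paid filing_fee to the pool when opening the case."""
        if worker_is_correct:
            # Requester pays the HIT reward, reimburses the worker's filing fee,
            # bears the pool's service charge, and is badged for negligence.
            requester.balance -= hit_reward + filing_fee + service_fee
            worker.balance += hit_reward + filing_fee
            pool.balance += service_fee
            requester.badges.append("negative: wrongful rejection")
        else:
            # Worker bears the pool's service charge and is badged for the
            # unnecessary appeal.
            worker.balance -= service_fee
            pool.balance += service_fee
            worker.badges.append("negative: unnecessary appeal")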

2) In the crowdsourcing platform we propose, the system also performs algorithm-based capability checking of the worker's submission, which reduces the requester's workload. Example: suppose a requester posts a HIT such as "Develop an Android app implementing a set of features". When posting the task, the requester provides a "Detailed design description", which specifies everything the worker may do, and "Basic requirements", which specify the mandatory tasks. Ten workers take up the HIT. At submission time, the system's algorithm evaluates each submission to check whether the basic requirements have been met, reducing the overhead for the requester. A sketch of such a check is given below.
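The following Python sketch illustrates one possible form of this check, assuming the "Basic requirements" are expressed as file paths that must be present in the worker's zip submission. The function names and the requirement format are illustrative assumptions, not part of the proposal.

    # Hedged sketch: automated check of "Basic requirements", assuming they are
    # expressed as file paths that must appear in the worker's zip submission.
    import zipfile
    from typing import Dict, List

    def check_basic_requirements(submission_zip: str,
                                 required_files: List[str]) -> Dict[str, bool]:
        """Return, for each required file, whether it appears in the archive."""
        with zipfile.ZipFile(submission_zip) as archive:
            names = archive.namelist()
        return {req: any(name.endswith(req) for name in names)
                for req in required_files}

    def accept_for_review(submission_zip: str, required_files: List[str]) -> bool:
        """Auto-reject a submission missing any mandatory item; otherwise
        forward it to the requester for final review."""
        return all(check_basic_requirements(submission_zip, required_files).values())

    # Example call (hypothetical deliverables for the Android HIT above):
    # accept_for_review("submission.zip",
    #                   ["AndroidManifest.xml", "MainActivity.java"])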

Evaluation

We can check whether the system actually solves the problem by tracking how many disputes arise and how quickly they are resolved. A review system or comments section will also be included; the content of the comments and the queries posted by users indicates whether the system has succeeded. The requester can also indicate whether quality work has been submitted and whether he or she is satisfied with the task completion.

We hope to achieve

1.) An efficient pool of moderators who can settle any dispute between requester and worker.

2.) A system which accurately checks whether all basic requirements mentioned by the requester are completed by the worker.

3.) Rookies and newcomers feel welcomed, are given equal opportunity to accept any HIT, and continue using the platform.

These results are the intended outcome of our goal. When they are achieved, our system will address some of the trust, quality, and mediation issues present in existing crowdsourcing platforms.
