Milestone 6 Sundevils



Moderator-based, Quality-optimized Crowdsourcing Platform

Abstract

A concise summary of the background and your research contributions (described below). Usually around 150 words or less.

Each of the following sections should be one to two paragraphs. Paper introductions are brief and impactful.

Motivation

What is the problem that you are solving, and why is it important?

  • Think about the needs we synthesized from Milestone 2 - i.e., trust and power - when deciding which problems you want to solve.
  • This should be a specific problem! Not just “crowdsourcing”. More like how trust and power are broken.

Dispute resolution system: (Foundation Idea) Crowdsourcing services, such as Amazon Mechanical Turk, allow small tasks to be distributed easily to a large number of workers. Unfortunately, there are cases in which the work done by a worker is not given due credit by the requester, who may claim that the work is incomplete or not up to expectations. On the existing platforms, there is no way for the worker to appeal against this. On the other hand, workers can be equally malicious in attempting to sell defective work to a requester. There needs to be a way to address both the worker's and the requester's woes.

Automated quality control: (Foundation Idea) Currently, a worker simply submits a file or folder (usually a zip or jar archive). The requester must extract every folder himself and check the contents one by one. To reduce the burden on the requester and to improve quality checks, we suggest mandatory terms that must be met at submission time. Once they are met, the requester can look into the submission for his final review.

Support for rookie workers: (Feature Idea) Existing platforms focus on how to help experienced workers and how to give them access to good requesters. Unfortunately, workers new to the system (rookie workers) are not given good tasks to do, and trust issues about their capability arise. To balance this, in our crowdsourcing platform any person can take up any HIT. When a requester has doubts about a particular worker, a pretest is given; if the rookie worker performs well, he is given the HIT.
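
Below is a minimal Python sketch of how this pretest gating might work. It is illustrative only: the function name, the pass threshold, and the score scale are our assumptions, not part of any existing platform API.

  # Hypothetical sketch of pretest gating for rookie workers.
  # PASS_THRESHOLD and all names here are illustrative assumptions.
  PASS_THRESHOLD = 0.7  # assumed minimum pretest score to qualify

  def can_take_hit(requester_requires_pretest, pretest_score=None):
      """Any worker may take a HIT; a doubted worker must first pass a pretest."""
      if not requester_requires_pretest:
          return True
      if pretest_score is None:
          return False  # pretest required but not yet taken
      return pretest_score >= PASS_THRESHOLD

  # Example: a rookie scoring 0.8 on the pretest is allowed to take the HIT.
  print(can_take_hit(requester_requires_pretest=True, pretest_score=0.8))  # True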

Related Work

What existing attempts to solve this problem have been made in prior research papers and real-world systems? Why are their solutions unsatisfactory?

  • You may want to search around Google Scholar to find existing work that is related to the ideas your system proposes.

Insight

This section should lay out the foundational idea(s). These big ideas are the things you'll be known for, and what other platforms would want to replicate. Explain: why/how are they novel and better than anything that has been attempted in the past?

Moderator pool: We propose a platform in which a group of people, a.k.a. Moderators, club together into a Moderator pool that is responsible for addressing the workers' and requesters' woes. If a person (requester or worker) feels he needs to appeal against the other party, he may bring the case before the Moderator pool.

Quality control: This feature automatically checks whether the worker has completed all the tasks expected by the requester. This saves a lot of time for the requester, as the system itself checks the basic functionality and rejects or accepts the submission based on criteria set by the requester (these criteria are visible to the worker as well).

System

The insight above should explain the high level idea (e.g., "All workers are paid in chocolate"). Here, you explain how it works in specifics. (e.g., "We built a crowdsourcing platform called Chococrowd that mails dark chocolate candies to workers at the conclusion of each month. Requesters choose the quality of the dark chocolate based on the quality of the work.")

1) We propose a crowdsourcing platform in which priority is given to amiability between worker and requester. Example: Suppose a requester posts a HIT such as "Develop an Android app implementing a set of features." A set of 10 workers take up the HIT; 6 of them complete the task successfully and 4 do not complete it as expected. Once the submission deadline has passed, the requester looks into the submissions and awards payment for the HIT to only 5 of the successful workers. The 6th worker then appeals against this to the Moderator pool.

The Moderator pool consists of a set of moderators who are experts in the domain; in our example, they are experts in Android programming. It is the worker's responsibility to pay a fraction of the amount the HIT was taken for to the Moderator pool, so that the pool will consider the case. The moderators then look into it and provide their verdict based on the veracity of the worker's claim.

If the worker is correct, the requester pays the worker for the HIT, reimburses the worker for the amount paid to the pool, and bears the amount to be paid for the pool's service; the requester also gets a negative badge for the negligence. If the requester is correct, the worker pays the amount due for the pool's service, and the worker gets a negative badge for the unnecessary appeal.
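
As a concrete illustration, here is a minimal Python sketch of these payout rules. The fee fraction, and the assumption that the pool's service charge equals the worker's filing fee, are ours for illustration; the actual amounts would be platform parameters.

  # Hypothetical sketch of the dispute-resolution payouts described above.
  # POOL_FEE_FRACTION and the equal service charge are illustrative assumptions.
  POOL_FEE_FRACTION = 0.1  # assumed fraction of the HIT reward filed with the pool

  def resolve_dispute(hit_reward, worker_correct):
      """Return (worker_net, requester_cost, badge) after the moderators' verdict."""
      pool_fee = POOL_FEE_FRACTION * hit_reward
      if worker_correct:
          # Requester pays the HIT reward, reimburses the worker's filing fee,
          # and bears the pool's service charge; he also gets a negative badge.
          return hit_reward, hit_reward + 2 * pool_fee, {"requester": "negative"}
      # Worker bears the pool's service charge and gets a negative badge.
      return -pool_fee, 0, {"worker": "negative"}

  print(resolve_dispute(hit_reward=100, worker_correct=True))
  # (100, 120.0, {'requester': 'negative'})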

2) In the crowdsourcing platform we propose, we suggest algorithm-based capability checking of the worker's submission, which reduces the requester's overhead. Example: Suppose a requester posts a HIT such as "Develop an Android app implementing a set of features." When the requester posts the task, it should include a "Detailed design description," which specifies everything the worker may do, and "Basic requirements," which specify the mandatory tasks the worker must complete. A set of 10 workers take up the HIT. At submission time, a system algorithm evaluates each submission to check whether the basic requirements have been met. This reduces the overhead for the requester.
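
To make the check concrete, below is a minimal Python sketch of such a submission gate. The required entries are made-up stand-ins for a requester's "Basic requirements"; a real HIT would define its own list.

  # Hypothetical sketch of the automated basic-requirements check at submission.
  # The REQUIRED_ENTRIES list is an illustrative assumption.
  import zipfile

  REQUIRED_ENTRIES = ["AndroidManifest.xml", "README.txt"]  # assumed mandatory items

  def meets_basic_requirements(submission_path):
      """Accept a zip submission only if every mandatory entry is present."""
      try:
          with zipfile.ZipFile(submission_path) as archive:
              names = archive.namelist()
      except (zipfile.BadZipFile, FileNotFoundError):
          return False  # unreadable archives are rejected outright
      return all(any(name.endswith(required) for name in names)
                 for required in REQUIRED_ENTRIES)

  # Submissions that pass are forwarded to the requester for final review;
  # the rest are rejected automatically, reducing the requester's overhead.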

Evaluation

Once the platform you propose has been implemented, how will you determine whether your system actually solves the problem you wanted to solve? What are the results you hope you can realistically achieve? Why do these results show that you have solved the problem?

References

The references section is where you cite prior work that you build upon. If you are aware of existing related research papers, list them here. We also encourage you to borrow ideas from past submissions (see the meteor links above). Please list the links to the ideas you used to create this proposal (there is no restriction on the number of ideas or whether they're yours or others'). You can use the following template:

  • [Foundation Idea] Link...
  • [Feature Idea] Link...
  • [Foundation Idea] Link...so on, and so forth...