WinterMilestone 3 stormsurfer RepresentationIdea: Open Court/Jury System to Review Worker/Requester Cases

Describe (using diagrams, sketches, storyboards, text, or some combination) the idea in further detail.

== Problem (Goals) ==

Workers often use third-party forums and tools (such as Turkopticon) to rate requesters and give detailed feedback on good and bad requesters. On these forums, workers often complain that a requester unfairly rejected their work; the commenters (mainly workers, but sometimes also requesters) sometimes agree with the worker and other times disagree. It is clear that requesters on MTurk have too much power and can unfairly reject work, while workers on this platform can do little to voice their concerns other than letting other workers know on third-party forums that a requester is "bad." Within the crowdsourcing platform (in this case, MTurk) itself, how can workers voice their concerns and be heard? How can we determine who was right (the worker, the requester, or perhaps a mix of both) and what the penalty should be?

The goal of my solution (described below) is therefore to allow workers to voice their concerns and to have these concerns fairly judged.

== Solution (Design) ==

Suppose a worker's HITs were unfairly rejected. Currently, Boomerang lets the worker rate the requester negatively so that the requester's tasks are less likely to appear in the worker's feed, but this doesn't solve the other half of the problem: the worker never received his/her payment. This can be a large amount (even $10 is a lot given the relatively low wages that Turkers earn), and every dollar counts. Therefore, there needs to be a way for workers to appeal a decision made by the requester.

[[File:MTurk Appeal Winter Milestone 3 stormsurfer.png]]

I propose that workers be able to "flag" requesters (separate from simply down-voting them) if they feel they have a serious appeal to a decision made by the requester (usually an unfair rejection, but it need not be limited to this). Workers will be required to leave a brief comment (1-3 sentences) describing why they feel the requester should be flagged. Once the fraction of a requester's workers who flagged him/her (versus those who didn't) exceeds some threshold (let's say a 3-5% flag rate), a case will be opened and a jury will be created to review it. The jury will review all of the appeals made by workers against that requester and make a decision on each one. In cases where the jury doesn't have enough information (initially, it will have the work from the HIT, the requester's response, and the worker's appeal), it can ask the requester and worker for more information.
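To make the case-opening rule concrete, here is a minimal Python sketch of the flag-rate threshold described above. All names here (Requester, submit_flag, the 4% constant) are hypothetical illustrations, and 4% is just one point in the proposed 3-5% range.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

FLAG_RATE_THRESHOLD = 0.04  # hypothetical value within the proposed 3-5% range

@dataclass
class Requester:
    name: str
    num_unique_workers: int = 0  # workers who have completed this requester's tasks
    flags: list = field(default_factory=list)
    case_open: bool = False

def should_open_case(num_flags: int, num_workers: int) -> bool:
    # A case opens once the share of the requester's workers who
    # flagged him/her exceeds the threshold.
    return num_workers > 0 and num_flags / num_workers > FLAG_RATE_THRESHOLD

def submit_flag(requester: Requester, worker: str, comment: str) -> None:
    # Record a flag with its required brief comment; open a case
    # (and thus convene a jury) once the flag rate crosses the threshold.
    if not comment.strip():
        raise ValueError("A 1-3 sentence comment explaining the appeal is required.")
    requester.flags.append((worker, comment))
    if not requester.case_open and should_open_case(
            len(requester.flags), requester.num_unique_workers):
        requester.case_open = True  # the jury would now review all pending appeals
</syntaxhighlight>

For example, under a 4% threshold a requester with 200 unique workers would have a case opened on the ninth flag (9/200 = 4.5%).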

The jury will be composed of a sample of both workers and requesters: 1 random high-ranking requester, 1 random high-ranking worker, 1 random requester, 1 random worker. In the case of a tie vote, a random high-ranking requester/worker will be called in to review the case. Workers/requesters have no obligation to be part of the jury, but there should be a reward for being part of one (TBD, I can't think of one at the moment).
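The jury composition and tie-break could likewise be sketched as below, assuming pools of eligible jurors are available to sample from. select_jury, verdict, and the vote labels are hypothetical names, and cast_tiebreak_vote stands in for however the extra high-ranking juror would actually be polled.

<syntaxhighlight lang="python">
import random

def select_jury(high_requesters, high_workers, requesters, workers, rng=random):
    # One random high-ranking requester, one high-ranking worker,
    # one regular requester, and one regular worker, as proposed above.
    return [rng.choice(high_requesters), rng.choice(high_workers),
            rng.choice(requesters), rng.choice(workers)]

def verdict(votes, cast_tiebreak_vote):
    # Majority vote over the four jurors; on a 2-2 tie, one extra random
    # high-ranking requester/worker is called in to cast a deciding vote.
    yes = votes.count("uphold_appeal")
    if yes * 2 == len(votes):                   # tie
        votes = votes + [cast_tiebreak_vote()]  # hypothetical callback
        yes = votes.count("uphold_appeal")
    return "uphold_appeal" if yes * 2 > len(votes) else "reject_appeal"
</syntaxhighlight>

Sampling high-ranking and regular members separately keeps the jury balanced between experienced and typical users of the platform, mirroring the composition described above.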

This solution addresses both goals. First, it will allow workers to voice their concerns. Of course, not all appeals can be reviewed, and cases will only be opened if enough workers voice their concern about a certain requester; this is because a requester should only be of serious concern if multiple appeals are filed against him/her. Second, a body of both requesters and workers--those who understand the platform best--will work together to reach a decision and set the penalties, making the process fair for both parties.

== Milestone contributors ==

Slack usernames of all who helped create this wiki page submission: @shreygupta98