WinterMilestone 3 westcoastsfcr RepresentationIdea: CSV Ratings


Overview

One crucial issue with Mechanical Turk is that there is no concise way for Workers to give requesters feedback on how clearly their tasks are formatted. A requester could post a poorly written task, get poor results, and assume it is the Workers' fault, when in fact studies have shown that how clearly a task is formatted affects the results the requester gets. It is therefore to the requester's advantage to be informed when their tasks are poorly formatted. But how can we do this? Requesters constantly put out tasks with hundreds of HITs, so there is no way they would have enough time to read every feedback message sent to them. Because of this, we must come up with a simple, easy-to-read way for requesters to receive feedback.

Requesters already receive their results in the form of a CSV file. That file contains all the data they wished to collect and is organized so they can read and interpret it without spending too much time on it. Now, what if we had Workers rate the clarity of a task on a scale of 1-5? Each rating would be written to a separate CSV file containing all of the individual ratings along with an average clarity rating for the task. Since requesters may have thousands of Workers, this gives them an extremely quick way to see how well they are formatting their tasks, whether their poor results are a reflection of their task composition, and whether or not they need to improve.
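
For illustration only, here is a minimal sketch of how such a ratings file could be produced. The file name clarity_ratings.csv, the column headings, and the example Worker IDs are assumptions for the sketch, not part of any existing MTurk output:

  import csv
  from statistics import mean

  # Hypothetical input: one (worker ID, 1-5 clarity rating) pair per submitted HIT.
  ratings = [("W001", 4), ("W002", 2), ("W003", 3)]

  with open("clarity_ratings.csv", "w", newline="") as f:
      writer = csv.writer(f)
      writer.writerow(["worker_id", "clarity_rating"])  # ratings on a 1-5 scale
      for worker_id, rating in ratings:
          writer.writerow([worker_id, rating])
      # A final summary row holds the average, so the requester can read one number at a glance.
      writer.writerow(["AVERAGE", round(mean(r for _, r in ratings), 2)])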

Along with this rating, the Worker can leave a comment explaining why the task wasn't clear. Instead of having to look through all of the written feedback, the requester can use the ratings CSV as an indicator of whether it is worth looking. Without spending an immense amount of time sifting through comments, they can glance at the CSV file for a few seconds, see their average rating, and then decide whether the comments are worth their time.
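
To make this glance-and-decide step concrete, a requester-side check might look like the sketch below; the file name and the 3.5 cut-off are assumed for illustration, not part of the proposal:

  import csv

  # Read the hypothetical ratings file written above and pull out its AVERAGE summary row.
  with open("clarity_ratings.csv", newline="") as f:
      rows = list(csv.reader(f))

  average = float(next(value for label, value in rows[1:] if label == "AVERAGE"))

  # A low average clarity score is the signal that the written comments deserve a closer look.
  if average < 3.5:  # assumed cut-off
      print(f"Average clarity {average}: consider reading the Worker comments.")
  else:
      print(f"Average clarity {average}: task formatting looks fine.")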


Storyboard

[Storyboard: six screenshots, captured 2016-01-31, illustrating the proposed rating flow.]


Goals

  • To provide requesters with an easily readable form of feedback
  • To make Worker-Requester communication more efficient
  • To not cause bias when rating a requester
  • Make improvement a lot easier to come by
  • Improve requesters' results


How These Goals Are Achievable Through This System

To provide requesters with an easily readable form of feedback

  • You can easily skim over a CSV file and see what the numbers are, as opposed to reading hundreds of comments
  • Feedback that is easier for the requester to read makes them more likely to act on it and improve

To not cause bias when rating a requester

  • Workers rate before knowing whether or not their work was rejected, so there is no incentive to be overly harsh in the comments

Make improvement a lot easier to come by

  • Requesters are more likely to read comments if they have an indication beforehand that their formatting was bad

Improve requesters' results

  • Results tend to be a reflection of how clearly and concisely the HIT was composed. If requesters are informed that their HIT was poorly formatted and are given advice on how to improve it, the HIT is more likely to produce better results the next time around.

Potential Issues

  • Some workers may just give low ratings out of spite
  • Requesters could be arrogant and assume the Workers are at fault, even when the fault lies with the requester
  • Workers might mindlessly rate
  • Workers could be frustrated with their day and take it out on the requester's review


Contributors

- @AlexStolzoff