Winter Milestone 1 - Leonardykristianto


Experience the life of a Worker on Mechanical Turk

User Experience

My first impression of the Amazon Mechanical Turk platform was that it feels like an out-of-place artefact. For somebody who has at least made use of online job portals like [Freelancer](https://freelancer.com) or Elance, it does not feel right, and with its strong bluish hue it does not feel like Amazon. It feels old, and it looks like the thread listings normally seen in Japanese forums. There is little explanation of what HITs (Human Intelligence Tasks) are on the first page, and the concept is neither highlighted nor introduced to new visitors. As a matter of fact, the figure "233,113 HITs available" is shown before the concept of a HIT is even explained. The description of the process is similar to what Freelancer puts up, which sets a false expectation of how the tasks are shaped. I was surprised that the site does not even attempt to be responsive; it was the first thing I checked with Chrome Developer Tools. It is as if the department handling Mechanical Turk was disbanded right after the platform was completed, with no product strategy involved (in contrast to amazon.com).

Workability

Users should be warned that they have to register and get their accounts approved before they can start accepting tasks, or even access the marketplace listings. The approval takes quite long (up to 48 hours), and there is no clear call-to-action button or link that draws users to click on it (if we ran a heatmap scan, users' attention would scatter around the site with no single point of focus). The ability to preview a task before accepting it is a real advantage, but it delays the moment the user commits to executing it. I ran into exactly this issue: one task asked me to validate a URL and provide the correct path if the provided link was invalid, so I copied and checked the link before actually accepting the task, presumably to improve my HIT approval rate, only to find that when I did accept the task (after confirming the URL), it was no longer available. There is also the problem of waiting for a pending submission to be approved. In some of the HITs on offer, the instructions or descriptions were unclear (especially in the Worker's Sandbox), and the confusion prevented users from filling in the correct information. I also found that the time limit set on each task constrains users' chances of completing it. I was doing one of the available HITs in the Sandbox and ran out of time simply because, on a weak Internet connection, the checking engine would not load and I wasted 10-14 seconds waiting for the page to show the forms I had to fill in.
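
For context, the pre-check I did by hand could just as well be scripted. Below is a minimal Python sketch of that kind of link validation; the function name and the example URL are hypothetical, and an actual HIT would of course be answered through its form, not a script.

```python
import requests

def url_resolves(url, timeout=5):
    """Return True if the URL answers with a non-error status code."""
    try:
        # Some servers reject HEAD requests, so fall back to GET on errors.
        response = requests.head(url, allow_redirects=True, timeout=timeout)
        if response.status_code >= 400:
            response = requests.get(url, allow_redirects=True, timeout=timeout)
        return response.status_code < 400
    except requests.RequestException:
        return False

# Hypothetical link copied out of a HIT before accepting it:
print(url_resolves("https://example.com/some/path"))
```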

Goal

Comparing the $1 amount set as a target with the average HIT paying $0.01-0.10, I aimed for these small tasks because I believed them to be less time consuming. My expectation was that the higher the payout and the fewer HITs available for a specific task, the higher the requirements and the expected quality of the work; thus it might be faster to complete 20-50 menial HITs at $0.01-0.05 each than one complex task that pays $1 right off the bat. Going for small tasks also minimizes the risk of a submitted HIT being rejected.
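
A back-of-the-envelope calculation makes the trade-off concrete. The per-HIT times below are my own rough assumptions, not measured values:

```python
# Rough comparison of two routes to the $1 goal (times are assumed, not measured).
target = 1.00

# Route A: many menial HITs at $0.05 each, say ~30 seconds apiece.
pay_small, secs_small = 0.05, 30
n_small = target / pay_small                 # 20 HITs needed
minutes_small = n_small * secs_small / 60    # ~10 minutes total

# Route B: one complex $1 HIT with higher requirements, say ~30 minutes.
minutes_big = 30

print(f"Route A: {n_small:.0f} HITs, ~{minutes_small:.0f} min")
print(f"Route B: 1 HIT, ~{minutes_big} min, plus higher rejection risk")
```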

Experience the life of a Requester on Mechanical Turk

Expectations

My account was rejected by the official Mechanical Turk, so I had to use the Requester's Sandbox. The requester version has slightly cleaner visuals than the worker's version. It allows requesters to create tasks from 10 categories, one of which is a custom option where requesters define each question themselves. In designing a HIT interface, requesters can opt for the default design that Mechanical Turk provides or customize it on their own. In the batch creation process, I was surprised that requesters cannot directly edit or create the input.csv data on the fly; I was expecting a form-like interface for inputting image URLs, or at least a table editor. After the batch was created, I could not find any link to share the project out and collect workers' feedback (I was expecting Dropbox/Google Drive-style shareability).
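
To make the missing editor concrete: as far as I can tell, a batch's input.csv simply maps column headers to ${placeholder} variables in the HIT template, so requesters end up preparing a file like the sketch below by hand. The file name follows the Mechanical Turk convention, but the image-labeling column and URLs are made up for illustration:

```python
import csv

# Hypothetical image-labeling batch: each row becomes one HIT, and the
# "image_url" header matches a ${image_url} placeholder in the HIT template.
rows = [
    {"image_url": "https://example.com/photos/001.jpg"},
    {"image_url": "https://example.com/photos/002.jpg"},
]

with open("input.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image_url"])
    writer.writeheader()
    writer.writerows(rows)
```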

Explore alternative crowd-labor markets

Readings

MobileWorks

The system is really simple and considerably user friendly when we take into account that the platform is accessed on feature phones. It draws on the core strength of mobile users: portability. With the platform, almost everybody can accomplish a few tasks while on the go, and the completion rate of 120 tasks per hour seems really promising. What impressed me most was the matching system used to verify the digitized documents. I believe this is how the accuracy of submissions is improved: an answer is only accepted when independent submissions match. I can see challenges and limitations in the kinds of tasks that can be accomplished on a feature phone. Audio transcription, for instance, might not be viable if the cost of downloading the files exceeds the wages the job produces. However, with the current data, workers are estimated to make $0.55 per hour while a 3-day Internet plan costs $0.33, meaning less than an hour of work covers the connectivity cost and better plans remain within reach. Overall it looks like a really great solution for reaching the bottom of the pyramid (people who really need ways to earn more).
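
The matching idea can be sketched in a few lines. This is my own reading of the mechanism, not MobileWorks' actual code, and all names are hypothetical: an answer is accepted only once two independent workers submit the same transcription, otherwise the task stays in the queue.

```python
from collections import defaultdict

# Sketch of agreement-based verification: a digitized snippet is accepted
# only when two independent workers submit matching answers.
submissions = defaultdict(list)

def submit(task_id, worker_id, answer):
    """Record an answer; return the accepted text once two answers match."""
    normalized = answer.strip().lower()
    for prior_worker, prior_answer in submissions[task_id]:
        if prior_worker != worker_id and prior_answer == normalized:
            return normalized          # two independent matches: accept
    submissions[task_id].append((worker_id, normalized))
    return None                        # no match yet: task stays in the queue

print(submit("doc-17", "w1", "Invoice 4021"))   # None, first answer
print(submit("doc-17", "w2", "invoice 4021"))   # "invoice 4021", accepted
```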

Daemo

  • What do you like about the system / what are its strengths?
    • The improved trust system introduced by "Boomerang"

Flash Teams

  • What do you like about the system / what are its strengths?
  • What do you think can be improved about the system?

Milestone Contributors

Slack usernames of all who helped create this wiki page submission: @leonardy