Milestone 1 TripleClicks



Experience the life of a Worker on Mechanical Turk

My experience as a worker on Mechanical Turk was challenging. I had explored MTurk before, purely out of curiosity, to see what crowd work was like, but this was the first time in a while that I had been back on the platform. Signing back in wasn't difficult; I just used my Amazon.com login. What struck me, though, was that in more than five years the platform hadn't changed in any significant way.

Task Search: Searching for tasks was confusing because many task titles carried tags or identification numbers. Most were labeled according to context that only the requester had (e.g. numerical values, jargon); very few had titles or descriptions that were easy to scan. The extremely small text size also made it difficult to click on links without considerable precision or mental effort. The metadata (HIT expiration date, time allotted, reward, HITs available) was hard to scan, and for a beginning worker it was hard to navigate and prioritize, i.e. to find the HITs that were lucrative, interesting, or quick to complete. I didn't know what the "Request Qualification" link meant or did until I accidentally clicked it while trying to click "Why?" and sent a request notification to a Requester (oops).
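
To make that prioritization concrete, here is a minimal sketch of the reward-per-minute triage a worker ends up doing in their head. The HIT fields and example listings are hypothetical illustrations, not MTurk's actual data model:

    from dataclasses import dataclass

    @dataclass
    class HIT:
        title: str
        reward: float           # USD
        time_allotted_min: int  # minutes the requester allots
        hits_available: int

    def reward_per_minute(hit: HIT) -> float:
        """Rough lucrativeness score: reward divided by allotted time."""
        return hit.reward / max(hit.time_allotted_min, 1)

    # Hypothetical listings, shaped like the metadata shown in search results.
    listings = [
        HIT("Transcribe a receipt", 0.05, 5, 1200),
        HIT("Find a business URL", 0.10, 15, 300),
    ]

    # Surface the most lucrative-looking HITs first.
    for hit in sorted(listings, key=reward_per_minute, reverse=True):
        print(f"{hit.title}: ${hit.reward:.2f} for {hit.time_allotted_min} min")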

Task Selection: Being able to preview a HIT was nice because it showed me the kinds of HITs to expect, and I found myself using it to decide whether I wanted to work for a given Requester. Eventually, I settled on tasks that asked me to transcribe information from a photo (mostly receipts). I avoided tasks that asked me to find contact information for a person (that felt invasive and too much like a sales or lead-generation task, which is a considerable time investment) and tasks that asked me to find a URL for a business (because I suspected that some of those businesses weren't online at all).

Task Completion: The interface for entering information was fairly straightforward, but in some instances it was hard to know whether to do the task as-is or to interpret what the requester wanted. For instance, I was transcribing a receipt. Grocery stores truncate item names to the point that they don't make sense, e.g. ORG GRN LETT, which I (as a human) know to be Organic Green Lettuce. Without knowing the Requester's standards for input, I could follow the instructions exactly by typing "ORG GRN LETT", even though typing the full item name would provide more value. To me this is an example of incomplete task details and, more broadly, of the mismatched standards and expectations a worker has to navigate. The odd thing is that a worker can be blocked or removed from future HITs for a minor mistake, yet has no way to evaluate the clarity of a Requester's instructions; it's a one-way street.
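
One way a requester could remove this ambiguity is to publish an explicit expansion table with the task instructions, telling workers whether verbatim or expanded input is expected. A minimal sketch; the abbreviation map and helper below are hypothetical, not something MTurk provides:

    # Hypothetical abbreviation map a requester might attach to the HIT so
    # workers know how to expand truncated receipt items.
    ABBREVIATIONS = {
        "ORG": "Organic",
        "GRN": "Green",
        "LETT": "Lettuce",
    }

    def expand_item(raw: str) -> str:
        """Expand each token found in the map; keep unknown tokens verbatim."""
        return " ".join(ABBREVIATIONS.get(token, token) for token in raw.split())

    print(expand_item("ORG GRN LETT"))  # -> Organic Green Lettuce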

Payment: After about two hours of submitting tasks and waiting for approval, I earned my first $1.00 from a series of HITs valued at about $0.05 each. I had fun for the first 30-45 minutes, happily transcribing store receipts, but I soon grew tired of constantly checking and rechecking my work for accuracy. Having to wait for HITs to be approved and for the payment to transfer to my account left me disheartened.
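
Spelled out, the arithmetic is sobering. A quick back-of-the-envelope sketch using my session's approximate figures:

    # Effective hourly wage from my session (all figures approximate).
    hits_completed = 20    # ~$1.00 total at ~$0.05 per HIT
    reward_per_hit = 0.05  # USD
    hours_worked = 2.0     # includes searching, rework, and waiting on approval

    effective_wage = hits_completed * reward_per_hit / hours_worked
    print(f"${effective_wage:.2f}/hour")  # -> $0.50/hour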

Overall: There is much to be improved in Mechanical Turk. From poor interface, search, navigation, and information design to concerns about compensation and worker satisfaction, there is much to dislike about the platform and very little to like.

Experience the life of a Requester on Mechanical Turk

Reflect on your experience as a requester on Mechanical Turk. What did you like? What did you dislike? Also attach the CSV file generated when you download the HIT results.
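
For inspecting the attached results, here is a minimal sketch of a first-pass sanity check over the downloaded CSV. The file name and the column names (WorkerId, WorkTimeInSeconds) assume MTurk's standard batch export and may need adjusting to match your download:

    import csv

    # Summarize the batch: how many assignments, how many distinct workers,
    # and the average reported work time per assignment.
    with open("Batch_results.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    if rows:
        workers = {row["WorkerId"] for row in rows}
        avg_secs = sum(int(row["WorkTimeInSeconds"]) for row in rows) / len(rows)
        print(f"{len(rows)} assignments from {len(workers)} distinct workers")
        print(f"average work time: {avg_secs:.0f} seconds")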

Explore alternative crowd-labor markets

Compare and contrast the crowd-labor market you just explored (TaskRabbit/oDesk/GalaxyZoo) to Mechanical Turk.

Readings

MobileWorks

  • What do you like about the system / what are its strengths?
  • What do you think can be improved about the system?

mClerk

  • What do you like about the system / what are its strengths?
  • What do you think can be improved about the system?

Flash Teams

  • What do you like about the system / what are its strengths?
  • What do you think can be improved about the system?