Milestone 1 TripleClicks

From crowdresearch
Revision as of 22:14, 4 March 2015 by Mikeyoung (Talk | contribs) (Experience the life of a Requester on Mechanical Turk)


Template for your submission for Milestone 1. Do not edit this directly - instead, make a new page at Milestone 1 YourTeamName (or whatever your team name is) and copy this template over. You can view the source of this page by clicking the Edit button at the top-right of this page.

Experience the life of a Worker on Mechanical Turk

Turk #1

My experience as a worker on Mechanical Turk was challenging. I had previously explored MTurk to see what crowd working was like (purely for curiosity's sake), but this was the first time in a while that I had been back on the platform. Getting signed back into MTurk wasn't too difficult; I just had to use my existing login. What struck me, however, was that in 5+ years the platform hadn't seen any significant change.

Task Search: Searching for tasks was confusing because many task titles contained tags or identification numbers. Most were labeled according to context only the requester had (e.g. numerical values, jargon); very few had easy-to-scan titles or descriptions. The extremely small text size also made it difficult to click on links without considerable precision or mental effort. The metadata (HIT expiration date, time allotted, reward, HITs available) was hard to scan and, for a beginner worker, hard to prioritize (trying to find HITs that were lucrative, interesting, could be completed quickly, etc.). I didn't know what the "Request Qualification" link meant or did until I accidentally clicked it while trying to click "Why?" and sent a request notification to a Requester (oops).

Task Selection: Being able to preview a HIT was nice because it showed me what to expect, and I found myself using previews to decide whether I wanted to work for a Requester. Eventually, I settled on tasks that asked me to transcribe information from a photo (mostly receipts). I avoided tasks that asked me to find contact information for a person (that felt invasive and too much like a sales/lead-generation task, which takes considerable time) and tasks that asked me to find a URL for a business (because I suspected a portion of those businesses weren't even online).

Task Completion: The interface for entering information was fairly straightforward, but in some instances it was hard to know whether to transcribe the task literally or to interpret what the requester wanted. For instance, I was transcribing a receipt. Grocery stores truncate item names to the point that they don't make sense. For example: ORG GRN LETT, which I (as a human) know to be Organic Green Lettuce. Not knowing the Requester's standards for input, I knew I would be doing exactly as asked by typing "ORG GRN LETT", even though typing the full item name would provide more value. To me, this is an example of incomplete task details and, more broadly, of the mismatched standards and expectations a worker has to deal with. The odd thing is that a worker can be blocked or removed from future HITs for a minor mistake. It seems like a one-way street that workers have no way to evaluate the clarity of instructions.

Payment: After about 2 hours of submitting tasks and waiting for approval, I earned my first $1.00 doing a series of HITs valued at about $0.05 each. I had fun for the first 30-45 minutes, happily transcribing store receipts, but soon grew tired of constantly checking and rechecking my work for accuracy. Having to wait for HITs to be approved and for the payment to transfer to my account left me disheartened.

Overall: There is much to be improved in Mechanical Turk. From poor interface design, search and navigation, and information design to compensation and worker-satisfaction concerns, there is much to dislike and very little to like about the platform.

Turk #2

Once I was approved for work, I looked through HITs to see what I could do to quickly make $1. Most HITs were of extremely low value or of seemingly ill intent, such as a $1 HIT that promised it was the "easiest" around but also required you to install a media player on your computer. Since this seemed like an ideal way to get malware, I skipped it and chose a $0.10-per-submission HIT.

After working for roughly 15 minutes and possibly earning $1.10 (depending on whether my submissions are approved), I couldn't help but be struck by how dead this made me feel inside. As a freelancer, I'm no stranger to sitting alone and staring at my screen for long stretches of time. But as a crowdworker, labeling collections of images and acting as a supplement to an algorithm, I felt like I was being micromanaged by a computer. A number of my HITs couldn't be submitted immediately because my "accuracy was too low." For some, I was able to bring my accuracy up and then submit; the rest, where I failed to reach the required accuracy, I simply skipped, because the time spent correcting mistakes wasn't worth $0.10 per HIT.

Of course, as I completed more HITs I got a better feel for what was expected (again, assuming my HITs are approved) and felt a certain sense of accomplishment in being able to breeze through my submissions. But having to adhere to an accuracy metric I couldn't see made the work feel tenuous and easy for a requester to reject.

Ultimately, I feel for anyone trying to make ends meet as a crowdworker. There is little incentive to do quality work: quick work seems ideal, while careful work means never earning enough money. Perhaps the Masters Qualification is something to strive for and thus a reason to do a better job, but such information isn't made obvious to a new Turker like myself.

Likes: The initial feel of working through tasks that you know only a human can do.

Dislikes: Knowing that one day a computer will be able to do these tasks; the interface, which looks like it hasn't been changed in ten years; the lack of clarity in what certain HITs require; the pay.

Experience the life of a Requester on Mechanical Turk

Requester #1

The experience of being a Requester on Mechanical Turk had its ups and downs. Sign-up wasn't too difficult since, again, it used my existing credentials. However, I found that navigating between the worker and requester portals was confusing and inevitably trapped me on some sort of middle-ground screen linking the two. In retrospect, this is probably because very few requesters are also workers and vice versa. Still, that doesn't excuse what I deem a poor wayfinding and navigation scheme.

The Bad: Getting set up with Amazon Payments to pre-pay for HITs wasn't difficult, but again, that was likely because I already had an Amazon account. Admittedly, I never knew how much money to pre-load into my account, and I felt at times that I'd be losing nickels and dimes somewhere in the process. Because I didn't come to the Requester site with a task in mind, it took me a while to think one up. After trying three different categories of task templates (survey, classification, transcription from an image), I threw my hands up in frustration. Not only was it difficult to use the interface to set up simple form fields for surveys or to load a CSV of image links, I was also left with a feeling of urgency about breaking a task down into its smallest, most granular form. How does one deconstruct and reconstruct a task without losing details or context in the process? How could I be clear and thorough in my instructions? Eventually, I went with a very simple task: tell me about your most positive or favorite childhood memory in 50-100 words. It still took a considerable amount of time to publish (making copy edits, loading funds into my account, setting qualification parameters), but I was able to breathe a sigh of relief once the results started arriving. However, upon comparing the time the task took against the workers' estimated hourly rate, I felt pretty bad about requesting a task altogether.
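For what it's worth, the same define-question/fund/publish workflow can also be scripted instead of done through the web interface. Below is a minimal sketch using the AWS boto3 MTurk client; the prompt matches the childhood-memory task above, but the reward, assignment counts, and question identifier are illustrative assumptions, not values from the actual HIT.

```python
# Sketch of publishing a simple free-text HIT programmatically.
# The reward/duration numbers below are illustrative assumptions.

def build_free_text_question(prompt: str) -> str:
    """Wrap a prompt in MTurk's QuestionForm XML for a single free-text answer."""
    return f"""<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>memory</QuestionIdentifier>
    <QuestionContent><Text>{prompt}</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

question_xml = build_free_text_question(
    "Tell me about your most positive or favorite childhood memory in 50-100 words."
)

# The actual publish call requires boto3 plus an AWS account with a
# funded MTurk prepaid balance, so it is shown here commented out:
#
# import boto3
# mturk = boto3.client("mturk", region_name="us-east-1")
# hit = mturk.create_hit(
#     Title="Describe a favorite childhood memory",
#     Description="Write 50-100 words about a positive childhood memory.",
#     Reward="0.25",                    # USD per assignment (assumed)
#     MaxAssignments=10,
#     LifetimeInSeconds=86400,          # HIT visible for one day
#     AssignmentDurationInSeconds=600,  # 10 minutes per worker
#     Question=question_xml,
# )
```

Scripting doesn't remove the hard part the paragraph above describes (deciding how granular to make the task and how to word the instructions), but it does make copy edits and republishing far less tedious than redoing them in the web form.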

The Good: Outside of the ease in transferring value (pre-payment for the HITs), the positive experience I took away from being a Requester was seeing the quick completion of my task. The approval process was a little clunky because of interface issues on the client side, but it was nice to see the results and know that they came from awesome people.

Overall: There is much to be improved in Mechanical Turk from the Requester side as well. Poor flows, an outdated interface, and disorganized search and navigation all need work to help requesters create clearer and fairer tasks.

Explore alternative crowd-labor markets

Compare and contrast the crowd-labor market you just explored (TaskRabbit/oDesk/GalaxyZoo) to Mechanical Turk.



  • What do you like about the system / what are its strengths?
  • What do you think can be improved about the system?


  • What do you like about the system / what are its strengths?
  • What do you think can be improved about the system?

Flash Teams

  • What do you like about the system / what are its strengths?
  • What do you think can be improved about the system?