Winter Milestone 1 @devinho

From crowdresearch

Experience the life of a Worker on Mechanical Turk

For me, signing up was surprisingly easy and required little work on my part. Going in, I thought there would be a lot of overhead and that I would have to provide a lot of information, because I was being paid to do a service (and thus that the process would take a long time).

After verification, I browsed the HITs that were first displayed to me. The pay was low, but that was to be expected given the type of work. For the most part, it was easy to find the work I wanted to do (simple image recognition). If I had tried to find harder work requiring a higher level of qualification, I would definitely have had a harder time (because of my lack of qualifications and the lack of available tasks).

The act of fulfilling the task was rather simple and MTurk does a good job of walking you through this. Having example HITs was particularly helpful when the instructions or title were unclear.

Having completed the tasks a couple of days ago, I'm still waiting to get paid. The fact that whether or not I'm paid is left entirely to the requester's discretion makes me uncomfortable, and it would definitely cause a lot of worry if I were doing this seriously.

Experience the life of a Requester on Mechanical Turk

Reflect on your experience as a requester on Mechanical Turk. What did you like? What did you dislike? Also attach the CSV file generated when you download the HIT results. If you're from outside the USA and unable to access MTurk, you can try the MTurk requester sandbox, CrowdFlower, Microworkers, or Clickworker.

Compared to being a worker, I thought being a requester was much easier (though this might be because I didn't care about the quality of the work). Once I was familiar with the process of creating a HIT, it was very easy and straightforward. Even though I didn't get too many results, I wasn't too worried.

One worry I had during the process was that I felt much too powerful. I could determine whether or not the worker was paid, and it was completely up to my discretion. This idea was also echoed in "My MTurk (half) Workday", where the worker was able to ask for his pay (after the 4-minute survey) and receive it, while others might not have done the same.



MobileWorks

I think the MobileWorks system is conceptually very strong, and it's nice to hear about the economic plausibility of the system.

I really can't see a downside to the system. I wonder how it would work in the USA versus how it works in India.


Daemo

My general thoughts while reading the Daemo white paper are outlined below:

The paper is based on improving two flaws of crowdsourcing; each flaw and the Daemo paper's corresponding solution are listed below:

1. Flawed reputation systems that don't accurately reflect worker and requester quality -> the Boomerang reputation system

  • I agree the reputation system needs work, and I do like the Boomerang system, but I think it's important to consider why the Boomerang system was created and what the alternatives could be
    • The Boomerang system is based on incentivizing requesters and workers to give accurate feedback
    • So the question that, for me, needs to be answered is: how do we best make the quality of a user's feedback affect how they interact with the system? Good feedback should be rewarded and bad feedback should not be. Does Boomerang do a good job of this? Yes. Is it the best option? I'm not sure, and I think this requires much more iteration.

2. Poorly designed tasks -> Prototype tasks

  • The second part of Daemo is where I have the most questions. I think the proposed solution requires too much overhead and doesn't address the problem (if one exists).
  • Prototype tasks do improve task quality, but it's unfair to judge prototype tasks against non-prototype tasks. I also think requiring a whole extra prototyping stage is too much work, and I'm not sure it's worth the improvement in task quality.
  • I do believe giving an example of a HIT is very helpful to the worker and clears up a lot of confusion.
  • A very important question that I think needs answering: does the nature of crowdsourcing promote the creation of low-level tasks, and in turn, low-level skill and pay?
    • My intuition is yes, because a major draw for workers is the very low barrier to entry. Making money with such a low barrier feels like making "easy" money to many workers (and to me as well).
    • This question also leads into the discussion many were having on Slack about implementing a minimum wage. If we implement a minimum wage, do we push out the low-level tasks that make up the majority of MTurk tasks?
    • Maybe a more important question to answer is how we encourage a good mix of low-level and high-level tasks and pay. This is touched upon in the Flash Teams paper.

Flash Teams


  • The fact that the system deals with high-level tasks is really impressive to me. In the same vein, being able to manage experts in an efficient manner is also very impressive.
  • Modularity seems great
  • Blocking is very straightforward and easy to understand from a high-level perspective as well as for the end user


  • I wonder if compensation for the experts is enough (experts can be very expensive). Are they getting enough compensation for their work? How much does it cost to make a request?
  • How will experts handle taking tasks from non-experts or from the system? Will the system do a good enough job of giving achievable tasks to experts?
  • Will blocking obscure the end goal? Handoffs might distort what the product is supposed to be.
  • What do you think can be improved about the system?

Milestone Contributors

Slack usernames of all who helped create this wiki page submission: @devinho