WinterMilestone 2 vaastav

From crowdresearch
Revision as of 20:12, 24 January 2016 by Vaastavanand (Talk | contribs)



This page contains the information I gathered about Workers' needs and Requesters' needs.

But before I get into the details of the needs I found, I want to take a step back and discuss the needfinding methods covered in the lectures.

Needfinding Lectures

Participant Observation

Key Points:

  • Watching people perform a task can help build empathy for them. This in turn allows us to think from their perspective and better understand the problems that they face.
  • Deep Hanging Out : Living people's life is another good way of trying to understand more about the needs of the people in that working situation. ( The whole concept of the TV reality series Undercover Boss is based on this. )
  • 5 Key things to take from Participant Observation:
 1. What do people do now?
 2. What values & goals do people have?
 3. How are these particular activities embedded in a larger ecology?
 4. Similarities & Differences across people
 5. Other types of context ( Basically Science of Deduction. TL;DR Be Sherlock )
  • Be a work apprentice. ( This is kind of like how medical interns learn a lot by shadowing a doctor )
  • Pay attention to how people remind themselves of things. ( Notepads, Post-it notes )
  • Errors are a "goldmine".
  • Don't focus on what people say, but rather on what people do.


Interviewing

Key Points:

  • Get people who are representatives of the target userbase
  • Approximate if necessary ( Get the closest thing to a representative )
  • "Self-consciousness is the enemy of interestingness"

Anatomy of a good interview question:

  • Don't ask leading questions.
  • The more open-ended the questions are, the more interesting the answers will be.

Attend a Panel to Hear from Workers and Requesters


Chris, a professor at UPenn and a requester, believes in writing clear instructions so that workers know what is required of them. He hired an undergraduate to act as the MTurk persona and deal with any of the workers' issues.

Rochelle, an experienced worker on Mechanical Turk since 2007, is a moderator and a guide for new workers. Workers don't know when requesters will post HITs, so they are always on the edge of their seats; there are a lot of alerting groups and Snapchat groups. Turking is an ongoing cycle, a 24-hour thing, and the work is unpredictable. She is hesitant when working with new requesters and likes to make sure there is a person on the other side in case something goes wrong. Some of the HITs have their expected timing off.

Christy, who has worked with MTurk for 10 years, believes that the worker perspective is often the forgotten perspective and that sometimes it can really suck to be a worker. I think this pretty much encapsulates what is wrong with MTurk: it seems to only value the requesters' needs and not care about the workers' needs.

Xiao, an assistant professor, is a requester doing research to study the behaviour of workers on the system. He thinks that people dropping out is a major problem for both workers and requesters. I feel that this adds to the distrust between the two parties and thus prevents a trustworthy relationship from forming. He cares about the attention spans of people and designs surveys that would get the best responses.

Laura, a worker who is disabled, only started doing MTurk jobs because money was tight and she couldn't physically do the jobs she previously could. She started off with $10 days and $5 days, and was ecstatic about having a $50 day and a $100 day. She runs scripts in the mornings to look for work, and never risks going below a 99% rating.

Peter, a requester, instituted qualification tests to validate responses and root out bad work instead of rejecting work.

Reading Others' Insights

Worker perspective: Being a Turker

This paper analysed the posts and threads on Turker Nation and dealt exclusively with the opinions of the workers.

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

  • The main reason Turkers turk is monetary gain, i.e. to earn money.
  • They would like to turk in order to actually learn things, but for some people it is vital to making ends meet, so they take up poorly paid work. ( Example 1 in the paper )
  • They get frustrated when their work is unfairly rejected, or when they feel the requesters didn't provide good enough reasons for rejecting it. They also hate being subjected to demeaning comments and being blocked. ( Example 5 in the paper )
  • However, they are very understanding if they feel the rejection was valid and their work wasn't up to the mark. Additionally, they like being communicated with politely. ( Example 7 in the paper )
  • The only way they can avoid "bad" requesters is by avoiding their tasks; there is no way for them to block requesters the way requesters can block them. So there is an imbalance of power.

2) What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

  • Sometimes they only pay those who gave answers which were part of the majority
  • They don't like being duped by bots.

Worker perspective: Turkopticon

This paper discusses Turkopticon, a tool that allows workers to publicize and evaluate their relationships with requesters.

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

  • They should be getting minimum wage. The paper makes this point indirectly via the 1985 case Donovan v. DialAmerica, which involved an MTurk-style market.
  • Workers' opinions should matter. To quote: "Turkopticon developed as an ethically motivated response to workers’ invisibility in the design of AMT."
  • Workers don't like their work being unfairly rejected. Out of 67 responses, 35 felt that their work was continuously rejected.

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

  • The amount of malicious behaviour goes down as the number of explicitly verifiable questions in the task increases, as shown by the fall in malicious responses from 102 in Experiment 1 to 7 in Experiment 2. This suggests that workers prefer tasks that are direct and to the point; they generally don't like open-ended tasks.
  • Tasks should be designed so that honestly completing the task is easier than gaming the system.
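The paper's mitigation (adding explicitly verifiable questions to a task) can be sketched as a simple response filter. Everything below is my own illustration, not the paper's implementation: the field names and gold answers are hypothetical.

```python
# Sketch: filter crowd responses using explicitly verifiable "gold" questions.
# A response that fails any verifiable question is treated as likely malicious
# and excluded before the open-ended answers are analysed.

GOLD_ANSWERS = {             # hypothetical verifiable questions
    "word_count": "127",     # e.g. "How many words does the passage have?"
    "last_word": "ecology",  # e.g. "What is the last word of the passage?"
}

def is_honest(response: dict, gold: dict = GOLD_ANSWERS) -> bool:
    """Keep a response only if every verifiable answer matches the gold answer."""
    return all(response.get(q, "").strip().lower() == a for q, a in gold.items())

def filter_responses(responses: list) -> list:
    """Drop responses that fail the verifiable questions."""
    return [r for r in responses if is_honest(r)]
```

The design choice here mirrors the paper's second observation: because the verifiable answers must come from actually reading the material, honestly completing the task becomes easier than gaming it.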

2) What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

  • There is no way for requesters to tell from the responses whether they were genuine or just random answers provided by malicious users.
  • The lack of demographic and expertise information makes the answers overly general. In a lot of microtasks it is important for the requester to know such information to improve their work.
  • The system is susceptible to being gamed by users, i.e. malicious behaviour. ( Experiment 1 in the paper )
  • There should be multiple ways to detect malicious responses.

Requester perspective: The Need for Standardization in Crowdsourcing

This paper discusses the need for, and ways of, standardizing the current online labour market.

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

  • The pricing of the current tasks is very low and could use some true market pricing.

2) What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

  • Standardization of tasks would save requesters' time and lead to better designed tasks.

Both perspectives: A Plea to Amazon: Fix Mechanical Turk

This post discusses the basic stuff that MTurk gets wrong and things it needs to fix.

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

  • A better task search interface is required. Currently, it's just painful for workers to find tasks that interest them. There is no browsing system in place.
  • A trustworthiness guarantee from the requesters. Currently requesters can reject good work and not pay for it, yet still keep the work.
  • Rejecting work should be an option reserved for spammers. It should never be used against honest workers that do not meet the expectations of the requester.

2) What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

  • In order to get good results, every requester needs to: (a) build a quality assurance system from scratch, (b) ensure proper allocation of qualifications, (c) learn to break tasks properly into a workflow, and (d) stratify workers according to quality.

The 4 major problems requesters face are as follows:

  • Ensuring Quality
  • Scaling Up
  • Managing Execution Time
  • Managing the complex API

2 things that MTurk can do are as follows:

  • A better interface to post tasks. Currently, the task posting system is not user-friendly at all and it needs a lot of improvement.
  • A true worker reputation system. If a requester can't differentiate between a good and a bad worker, the requester automatically assumes that all the workers are bad.
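A minimal version of the reputation system the post asks for could track per-worker approval history instead of MTurk's single lifetime approval percentage. The scoring formula below (Laplace smoothing) is my own assumption, added so a worker with 1/1 approvals is not ranked above one with 95/100; the post does not specify any formula.

```python
# Sketch: a per-worker reputation book with a smoothed approval score,
# so requesters can distinguish good workers from bad ones.

from collections import defaultdict

class ReputationBook:
    def __init__(self):
        self.approved = defaultdict(int)  # approvals per worker id
        self.total = defaultdict(int)     # submissions per worker id

    def record(self, worker_id: str, approved: bool) -> None:
        """Record one reviewed submission for a worker."""
        self.total[worker_id] += 1
        if approved:
            self.approved[worker_id] += 1

    def score(self, worker_id: str) -> float:
        # Laplace-smoothed approval rate: (approved + 1) / (total + 2).
        # Unknown workers start at 0.5 rather than an undefined 0/0,
        # and a tiny sample cannot produce a perfect score.
        return (self.approved[worker_id] + 1) / (self.total[worker_id] + 2)
```

With something like this, a requester could set a qualification threshold on the score rather than assuming, as the post puts it, that all workers are bad.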

Do Needfinding by Browsing MTurk-related forums, blogs, Reddit, etc

Synthesize the Needs You Found

A common theme that I found was that Mechanical Turk currently focuses more on the needs of the Requesters instead of trying to generate some sort of equality in the marketplace.

Worker Needs

  • Monetary Gain/Money: Most workers on MTurk are doing so to make ends meet. They are okay with working for lousy sums of money as long as they get paid something, and they are always looking for ways to maximize their earnings. Evidence: A Reddit user, Uhfgood, posted: "I keep hearing of people doing like 200 bucks a week and more. Some people claim others even make twice that. I'm lucky to clear 50 or 60 a week, and that's like doing 6 hours a day for 5 or 6 days a week. So how do you guys manage to make so much?" This clearly suggests that he cares about making money. Another piece of evidence is "Being a Turker", which talks in depth about the money-related problems of workers. Laura, from the panel, basically describes how money is what got her into turking.
  • Workers need to be respected by their employers. Evidence: Example 5 of "Being a Turker" describes a user whose work, along with many others', was mass rejected, and who received a lot of demeaning comments in response.
  • Workers deserve to be paid for their work regardless of whether it is up to the requester's standards, and they deserve better value for their work. Evidence: "A Plea to Amazon: Fix Mechanical Turk" talks about why it is unfair not to pay for workers' honest work, as not all of them can be spammers. The way Laura (from the panel) describes having $10 days and then getting a $50 day shows how much getting good value meant to her.
  • Workers want a faster means of communication with their employers. Evidence: Rochelle likes to make sure that there is a responsive human on the other side while working with new requesters.
  • Sometimes workers can't find work. Evidence: Rochelle (from the panel) describes the whole turking process as unpredictable. She says that if there is only a set time in which you can work, sometimes there is no work during that period and you are out of luck.

Requester Needs

  • Requesters want better ways to ensure the validity of the work. Evidence: At the end of the day, requesters care about the quality of the work above all else. Peter (from the panel) instituted qualification tests for workers instead of rejecting their work.
  • Requesters need better ways to design tasks. Evidence: "Crowdsourcing User Studies with Mechanical Turk" found that workers only tried to game the system when tasks were not designed properly. Chris (from the panel) talks about how writing clear instructions is important for requesters to get the desired output. Xiao (from the panel) makes sure that the surveys he designs hold the attention of workers. Peter doesn't use the built-in templates, as he feels they are not good enough for his HITs.
  • Requesters want fairness for everyone. Evidence: Chris, from the panel, talked about setting up an adjudication process so that workers don't get their work unfairly rejected.