Milestone 2 UWI

From crowdresearch
Revision as of 22:17, 10 March 2015 by Yuzhoushen (Talk | contribs) (Worker perspective: Being a Turker)


Attend a Panel to Hear from Workers and Requesters

  • Many of the panelists are doing sociology research or working with sociology researchers.
  • One person talked about how difficult it was to use the MTurk interface for the first time. After he got used to it, though, he found the platform more consistent than other crowdsourcing platforms. He wanted to do more individual work, so he came to MTurk.
  • They talked a lot about the community of Turkers. There are many websites for Turkers, such as Turker Nation and MTurk Forum, where people post new HITs and share information and experience. Someone mentioned that a new requester was surprised that workers communicated with each other. Experienced Turkers help new Turkers get used to the platform, give them advice, and share information about HITs.
  • Someone thought that the initial motivation for Turkers is money, but the community really helps people keep doing the job. When someone gets his/her first rejection, he/she can be really frustrated, and he/she can find help in the community.
  • Workers email requesters about the work, offering suggestions on the format, reasonable payment, and required qualifications of HITs. Someone said that the more standardized the platform is, the more HITs will be posted on it, so workers will get more work to do and earn more money.
  • Two requesters seldom reject workers, but they use different ways to test whether workers take the tasks seriously. One new requester uses the time a worker spends on the instruction page: he thought that a person who spends less time on the instructions might not pay attention to them and might not do a good job.
  • But the other requester said he used to use this method and found that spending less time on the instruction page does not necessarily mean a worker isn't paying attention. So he now posts a small question after the instructions; workers who really understand the instructions can answer it well.
  • Some requesters keep tasks as small as possible. Sometimes they post small pilot tasks to test how much time people need to finish a task.
  • One requester said that at first he didn't realize that the time of a HIT is the maximum time, not the minimum time.
  • A requester said that MTurk doesn't allow downloads because of intellectual property concerns. Sometimes he wants workers to use new tools for his tasks, but he can't provide the tool to them.
  • Someone had worked on the internal crowdsourcing platforms at Twitter and Google. He said that many companies have internal crowdsourcing platforms.
  • Someone thought that MTurk is too hard to start with and lacks training and tutorials. One worker said that they want qualified, professional workers to stay, so since Amazon doesn't provide training, they help new workers in the community.
  • They said that since MTurk doesn't allow new workers from outside the US to sign up, some people sell American accounts to workers in India.
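The comprehension-question attention check described in the panel (asking a small question after the instructions instead of timing how long workers read them) can be sketched as follows. This is a hypothetical illustration, not any requester's actual code; the field name `instruction_check` and the sample submissions are assumptions.

```python
# Sketch of a comprehension-based attention check: accept a submission only
# if the worker correctly answered a short question about the instructions.

def passes_comprehension_check(submission: dict, expected_answer: str) -> bool:
    """Compare the worker's instruction-check answer to the expected one."""
    given = submission.get("instruction_check", "").strip().lower()
    return given == expected_answer.strip().lower()

submissions = [
    {"worker_id": "W1", "instruction_check": "Label every image", "work": "..."},
    {"worker_id": "W2", "instruction_check": "No idea", "work": "..."},
]

# Keep only submissions whose comprehension answer matches.
accepted = [s for s in submissions
            if passes_comprehension_check(s, "label every image")]
print([s["worker_id"] for s in accepted])  # ['W1']
```

Unlike a time-on-page threshold, this check does not penalize fast readers: it screens on what the worker understood rather than how long they lingered.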

Reading Others' Insights

Worker perspective: Being a Turker

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • The platform is designed from the perspective of requesters, and workers have less information than requesters.
  • "Money, and the best way to earn it, underpins much of the discussion about AMT". People share how much they have earned and their goals on the Turker Nation. But some of them also find some tasks interesting.
  • "For this 'digital underclass' who have difficulties accessing the regular labor market (e.g. being housebound, or living a disrupted life)", they use AMT to earn money for basic life.
  • Workers leave comments on Turker Nation about requesters, their pay, rejections, and responsiveness, sometimes using emotional language, and also ask others' opinions of requesters. Sometimes they worry about delayed payment.
  • Workers sometimes criticize themselves.
  • Workers weigh the effort a HIT requires against its pay and share comments on whether they think the pay is fair.
  • Workers complain about interference from third parties, such as journalists, researchers, and governments. They worry that this interference will lead to government regulation and have a negative impact on their jobs.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Requesters complain about workers sharing information and experiences with HITs on the forums.
  • Some requesters improve the interface and modify the payment of their HITs according to workers' feedback.

Worker perspective: Turkopticon

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Workers get the minimum wage required in their countries.
  • The workers interact with requesters through the interface of AMT.
  • Workers dissatisfied with a requester’s work rejection can contact the requester through AMT’s web interface.
  • The employer’s algorithmic tests of correctness filter out responses that differ from the majority of responses.
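The majority-based correctness test mentioned above can be sketched roughly as follows. This is a minimal illustration of the idea of filtering by agreement with the plurality answer, not the actual algorithm any AMT employer uses; the data layout is assumed.

```python
from collections import Counter

def majority_filter(responses):
    """Keep only responses that agree with the most common answer.

    Responses that differ from the plurality answer are filtered out,
    which is how an agreement-based correctness test discards outliers.
    """
    counts = Counter(r["answer"] for r in responses)
    majority_answer, _ = counts.most_common(1)[0]
    return [r for r in responses if r["answer"] == majority_answer]

responses = [
    {"worker": "A", "answer": "cat"},
    {"worker": "B", "answer": "cat"},
    {"worker": "C", "answer": "dog"},
]
print(majority_filter(responses))  # keeps A's and B's responses
```

Note the worker-side complaint this design implies: a correct but minority answer (here, worker C's) is rejected even if it was given in good faith.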

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • The design and development of AMT have prioritized the needs of requesters over workers. Requesters design the interface of their HITs, define the structure of the data workers must input, create instructions, specify the pool of information to be processed, and set a price. They also define the criteria that candidate workers must meet to work on a task, and they can choose whether or not to pay for a worker’s work.
  • Amazon does not require requesters to respond and many do not.

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • In experiment 1, worker response was extremely fast, "with 93 of the ratings received in the first 24 hours after the task was posted, and the remaining 117 received in the next 24 hours".
  • In experiment 1, workers finished the tasks in a very short time. "Many tasks were completed within minutes of entry into the system, attesting to the rapid speed of user testing capable with Mechanical Turk".
  • In experiment 1, "many of the invalid responses were due to a small minority" of workers.
  • In the two experiments, the quality of workers' responses differed dramatically. In experiment 1, 48.6% of responses were invalid, but in experiment 2 only 2.5% were invalid.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • The requesters used Mechanical Turk to conduct usability testing.
  • The requesters compared their ratings to those of an expert group of Wikipedia administrators from a previous experiment.
  • The requesters designed a new experiment to "make creating believable invalid responses as effortful as completing the task in good faith". They added four questions with quantitative, verifiable answers before the quality rating of the articles.
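The verifiable-question screen from that redesigned experiment can be sketched like this: before the subjective rating, workers answer factual questions whose answers the requester already knows, and responses that fail the checks are discarded. The specific question names and gold answers below are invented for illustration, not taken from the paper.

```python
# Hypothetical gold answers for the verifiable questions posed
# before the subjective article-quality rating.
VERIFIABLE_ANSWERS = {
    "num_sections": 5,
    "num_images": 2,
    "num_references": 12,
    "word_count_bucket": "1000-2000",
}

def is_valid_response(response: dict, required_correct: int = 4) -> bool:
    """Accept the subjective rating only if enough verifiable answers match."""
    correct = sum(response.get(q) == a for q, a in VERIFIABLE_ANSWERS.items())
    return correct >= required_correct

good = {"num_sections": 5, "num_images": 2, "num_references": 12,
        "word_count_bucket": "1000-2000", "rating": 6}
sloppy = {"num_sections": 1, "num_images": 0, "num_references": 3,
          "word_count_bucket": "0-500", "rating": 7}
print(is_valid_response(good), is_valid_response(sloppy))  # True False
```

The point of the design is that fabricating plausible answers to the verifiable questions takes about as much effort as actually reading the article, which is why the invalid-response rate dropped so sharply in experiment 2.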

Requester perspective: The Need for Standardization in Crowdsourcing

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Workers could come and go as they pleased, receiving or making offers on tasks that differed in difficulty and skill requirements ("install engines!", "add windshields!", "design a new chassis!") for different rates of pay, and with different pricing structures (fixed payment, hourly wages, incentives, etc.).
  • Workers have to learn the intricacies of the interface for each separate requester.
  • Workers have to adapt to the different quality requirements of each requester.


2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Long-term employers learn from their mistakes and fix their design problems, while newcomers have to learn the lessons of bad design the hard way.
  • Every requester needs to price their work units without knowing the conditions of the market, and the price cannot fluctuate without removing and re-posting the tasks.
  • Requesters seldom publish their evaluations of workers.

Both perspectives: A Plea to Amazon: Fix Mechanical Turk

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Good workers get the same pay as bad workers do.
  • When new requesters come to the market, they are treated with caution by experienced, good workers. Legitimate workers will simply not complete many HITs from a new requester until they know that the requester is legitimate, pays promptly, and does not reject work unfairly. Most good workers complete just a few of a newcomer's HITs and then wait and observe how the requester behaves.
  • Workers cannot search for a requester unless the requester puts their name in the keywords. Workers also have no way to navigate and browse through the available tasks to find things of interest.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • The only way for a requester to have a decent interface is to build it in an iframe themselves.
  • Requesters cannot easily differentiate good workers from bad workers.
  • Requesters are free to reject good work and not pay for work they get to keep. Requesters do not have to pay on time.
  • It is effectively impossible for requesters to predict the completion time of the posted tasks.

Do Needfinding by Browsing MTurk-related forums, blogs, Reddit, etc

List out the observations you made while doing your fieldwork. Links to examples (posts / threads) would be extremely helpful.

  • People are willing to share opinions on these forums, and many comments are very long. In addition, people reply to the comments as well.
  • There is a list of good workers for requesters on Turker Nation. Workers can apply to get onto the list, which someone maintains.
  • There is a social area on Turker Nation where Turkers share their daily lives.
  • On every forum, people share the HITs that they think are great.

Synthesize the Needs You Found

List out your most salient and interesting needs for workers, and for requesters. Please back up each one with evidence: at least one observation, and ideally an interpretation as well.

Worker Needs

A set of bullet points summarizing the needs of workers.

  • Example: Workers need to be respected by their employers. Evidence: Sanjay said in the worker panel that he wrote an angry email to a requester who mass-rejected his work. Interpretation: this wasn't actually about the money; it was about the disregard for Sanjay's work ethic.

Requester Needs

A set of bullet points summarizing the needs of requesters.

  • Example: requesters need to trust the results they get from workers. Evidence: In this thread on Reddit (linked), a requester is struggling to know which results to use and which ones to reject or re-post for more data. Interpretation: it's actually quite difficult for requesters to know whether 1) a worker tried hard but the question was unclear or very difficult or an edge case, or 2) a worker wasn't really putting in a best effort.