Milestone 2 ams



Attend a Panel to Hear from Workers and Requesters

Deliverable

Report on some of the observations you gathered during the panel.

Reading Others' Insights

Worker perspective: Being a Turker

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

Worker perspective: Turkopticon

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

Requester perspective: The Need for Standardization in Crowdsourcing

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

Standardization could well become a barrier for beginners looking for opportunities in the crowdsourcing sector. The more restrictions there are (and standardizing is, in effect, restricting), the more a prospective worker will hesitate before setting up a profile. The onboarding process should be clean, smooth, and open.

Hence, while standardization is meant to improve work and payment standards for workers, it may well backfire and defeat its entire purpose.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

For requesters, standardization would indeed bring more sophistication into the system: jobs would be clearly categorized and payments better handled. But for a beginning requester who does not yet know how to categorize a job so that it attracts workers, the system is too much of a hassle. A very likely scenario: a new requester categorizes a job poorly (mistakes are inevitable early on), attracts very few workers, and is forced to raise the pay to compensate, when better categorization would have made that unnecessary.

Both perspectives: A Plea to Amazon: Fix Mechanical Turk

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Since the pay for a HIT, or even for several HITs, does not amount to much, workers prefer to move on rather than pursue a complaint against the requester.
  • Workers improve their approval 'rating' by completing many small, low-paying HITs that are trivially easy. This makes them appear eligible for higher-paying tasks regardless of whether they are actually skilled at them.
  • A productive worker has no way to differentiate himself from the rest of the crowd, and so ends up being paid less.
  • Experienced workers usually gauge the trustworthiness of a new requester by completing only a small number of HITs for them at first.
  • Workers who legitimately complete a large batch of HITs for a requester and are unjustly rejected lose not only the pay but also reputation.
  • Workers search for work by the most recently posted HITs, instead of searching for work that matches their skill set.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Requesters hold the ultimate power in the relationship and are free to misuse it.
  • Requesters are stuck with a complicated and technologically outdated platform.
  • The web platform and the complex API do little to reduce the time and cost of posting HITs, pushing the break-even point further away from the requester.
  • A requester who builds their own task interface in an HTML iframe can easily expose the HIT to non-MTurk people via a plain URL, leaking work out of an ecosystem that was supposed to be self-contained (see the sketch after this list).
  • The flexible nature of the MTurk platform benefits requesters who want to design and present their HITs as they please.
  • A newly joined requester, unaware of the market dynamics, may be discouraged from continuing with the crowdsourcing platform after encountering inexperienced workers early on.
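
To make the iframe observation above concrete, here is a minimal sketch of how a requester posts a HIT whose interface is an external web page, using the boto library's MTurk bindings (a common choice at the time of these readings). The URL, credentials, and task parameters are placeholders, not drawn from the reading. The key point: the task interface is an ordinary web page, so anyone who obtains its URL can open it directly, outside MTurk.

    # Minimal sketch (placeholder values throughout): post a HIT whose UI
    # is an external page that MTurk shows to workers inside an iframe.
    from boto.mturk.connection import MTurkConnection
    from boto.mturk.question import ExternalQuestion

    conn = MTurkConnection(aws_access_key_id='PLACEHOLDER',
                           aws_secret_access_key='PLACEHOLDER',
                           host='mechanicalturk.sandbox.amazonaws.com')

    # The task interface is just a normal web page. MTurk embeds this URL
    # in an iframe, but anyone holding the URL can load the page directly,
    # outside MTurk; that is the "leak" described in the bullet above.
    question = ExternalQuestion(external_url='https://example.com/my-task',
                                frame_height=600)

    conn.create_hit(question=question,
                    title='Example task',
                    description='Placeholder description',
                    keywords='example, placeholder',
                    reward=0.05,
                    max_assignments=3)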

Do Needfinding by Browsing MTurk-related forums, blogs, Reddit, etc.

List out the observations you made while doing your fieldwork. Links to examples (posts / threads) would be extremely helpful.

Synthesize the Needs You Found

List out your most salient and interesting needs for workers, and for requesters. Please back up each one with evidence: at least one observation, and ideally an interpretation as well.

Worker Needs

A set of bullet points summarizing the needs of workers.

  • Example: Workers need to be respected by their employers. Evidence: Sanjay said in the worker panel that he wrote an angry email to a requester who mass-rejected his work. Interpretation: this wasn't actually about the money; it was about the disregard for Sanjay's work ethic.

Requester Needs

A set of bullet points summarizing the needs of requesters.

  • Example: Requesters need to trust the results they get from workers. Evidence: In this thread on Reddit (linked), a requester is struggling to know which results to use and which ones to reject or re-post for more data. Interpretation: it's actually quite difficult for requesters to know whether 1) a worker tried hard but the question was unclear or very difficult or an edge case, or 2) a worker wasn't really putting in a best effort.