Milestone 2 pentagram

From crowdresearch
Revision as of 11:16, 11 March 2015 by Karthiksenthil (Talk | contribs) (Requester perspective: The Need for Standardization in Crowdsourcing)


This is the submission page for Milestone 2 by Team pentagram.

Attend a Panel to Hear from Workers and Requesters

Morning panel

Some observations and ideas noted during the morning panel session:

Worker perspectives

  • some sites list available HITs, giving workers an easy way to find tasks
  • the joy of giving back to the community is a motivating point for Turkers
  • HITExploiter - a script to automate and rate tasks using Turkopticon
  • motivations for workers on MTurk
    • money
    • wanting to help people
    • social aspects
  • the worker community is very friendly and helpful; more or less like a Facebook group discussing anything under the sun
  • worker <-> requester interaction
    • email requesters directly (e.g., when instructions are not clear)
    • invite requesters to forums
    • verify the credibility of requesters
  • experienced workers often help newbie requesters learn the GUI, the API, and so on
  • the biggest hurdles for newbie workers
    • finding a matching job is very hard
    • poor UI on Amazon MTurk
    • making decent money requires scripting knowledge
    • workers outside the US find it even more difficult to find work
    • frustration due to poorly paying jobs
    • one very important suggestion: ask questions
  • the dropoff rate on MTurk is severe because of rejections and lack of feedback

Requester perspectives

  • Some common thoughts
    • why is the prevailing wage $8-$10 per hour? (demographic and ethical considerations around minimum wage)
    • main problem: was the task taken seriously?
    • detecting cheating: use open-ended questions
    • time vs. money
      • a balancing act
      • give incentives/bonuses
    • requesters don't have time to follow forums for news about their tasks
    • requesters prefer personal email
    • no assurance that tasks will be completed
    • very complicated to get a task done by a specific worker
  • thresholds for rejecting HITs adopted by many requesters
    • use open-ended questions: responses copied straight from Wikipedia, or gibberish, imply bad work
    • use timers
    • the sense that workers didn't read the instructions completely
  • problems in India and outside the US
    • proxy accounts
    • Indians using US-based MTurk accounts to turk and make money
    • account selling was very common pre-2012
  • some mistakes by requesters
    • allotting very little time (no tutorial from Amazon's side)
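The cheating-detection heuristics the requesters described (open-ended questions plus timers) could be sketched in code. This is a minimal illustration only: the phrase list, the word-count cutoff, and the `MIN_SECONDS` threshold are invented assumptions, not part of any real MTurk API.

```python
import re

# Assumed phrase list: fragments typical of pasted encyclopedia text.
WIKI_SNIPPETS = [
    "is a species of",
    "according to wikipedia",
]
MIN_SECONDS = 20  # assumed floor for a serious free-text answer

def looks_like_gibberish(answer: str) -> bool:
    """Flag answers that contain almost no real words."""
    words = re.findall(r"[a-zA-Z]{2,}", answer)
    return len(words) < 3

def looks_copied(answer: str) -> bool:
    """Flag answers containing phrases typical of copied encyclopedia text."""
    lowered = answer.lower()
    return any(snippet in lowered for snippet in WIKI_SNIPPETS)

def should_review(answer: str, seconds_spent: float) -> bool:
    """Combine the open-ended-answer checks with a simple timer check."""
    return (
        looks_like_gibberish(answer)
        or looks_copied(answer)
        or seconds_spent < MIN_SECONDS
    )
```

A requester would still review flagged answers by hand; these checks only triage, they do not prove cheating.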

Evening Panel

Some observations and ideas noted during the evening panel session:

Worker perspectives

Requester perspectives

Reading Others' Insights

Worker perspective: Being a Turker

The paper augments the frequently discussed issues and solutions of requesters on crowdsourcing platforms with the rarely discussed issues of the Turkers. It clearly outlines the one-sidedness of current crowdsourcing platforms as:

  1. Information asymmetry
  2. Imbalance of power between requesters and Turkers

The research seems elaborate and realistic in terms of the opinions gathered.

Observations about workers

  1. Clearly, even though some workers do Turking for entertainment or experience, most want monetary gains for their time and effort. The most mature workers aim at not only high-paying jobs but also interesting ones. Pay expectations vary: some are satisfied with earning something extra, while others are not, given the excessive work for small pay.
  2. Workers make widely varying amounts of money on crowdsourcing platforms. Notably, the money made is usually not enough to rely on as a constant source of income. The interest common to all Turkers is to make more money than in their previous attempt.
  3. Turker Nation and similar forums are typically used by Turkers for reviewing requesters, while the crowdsourcing platforms themselves have hardly any review system.
  4. Interaction between Turkers and requesters would be a great platform for exchanging requirements and making tasks more viable from both perspectives. It also brings the prospect of amateur Turkers/requesters being mentored by experienced requesters/Turkers.
  5. It must also be considered that such forums become stagnant accusation portals after a point. Though there are instances of Turkers owning up to individual mistakes, it largely seems to be a blame game.

Worker perspective: Turkopticon

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

Requester perspective: The Need for Standardization in Crowdsourcing

Observations about workers

  1. Workers choose tasks through an online spot market.
  2. Workers are not sued or sacked for unsatisfactory task completion; they simply don't get paid for rejected HITs.
  3. Tasks in high demand generally require low-skilled workers.
  4. Workers need to strictly and consistently follow the rules for standardized tasks.
  5. Workers are free to choose any task (tasks differ in difficulty and in the skill set required).
  6. The methods available to workers for finding suitable tasks are inadequate and inefficient.
  7. Users of crowdsourcing platforms often get mixed results, which is quite frustrating.
  8. In the "curated garden" approach, practitioners retain the scalability and cost savings of crowdsourcing.

Observations about requesters

Both perspectives: A Plea to Amazon: Fix Mechanical Turk

The author of this blog is an experienced professor at the Department of Information, Operations, and Management Sciences at Leonard N. Stern School of Business of New York University.

At the time of writing, he had almost four years of experience using AMT. In the blog post, he gives a critical analysis of what has been missing from the platform.

Some thoughts of the author

  • A need to evolve
  • the author stresses that Amazon has completely distanced itself from the workings and policies of MTurk (Amazon's hands-off approach)

Observations about workers

Trustworthiness guarantee for requesters
  • requesters on MTurk behave like slave masters
  • some common problems with requesters
a) rejecting good work
b) not paying on time
c) incomplete information on tasks
  • new requesters tend to leave the market if experts don't guide them on how to post tasks
  • some objective characteristics workers should look for in a requester before working for them
a) speed of payment
b) the requester's rejection rate
c) volume of work posted
  • these call for a system that presents all this information in a format accessible to every worker
  • a trustworthy market environment reduces search costs for both requesters and workers
A better user interface
  • make task finding an easy process for workers
  • workers have no means of navigating the sea of tasks to find those that match their interests
  • this forces workers to select tasks based on arbitrary priorities; this in turn leads to uncertainty in the completion time of tasks posted on MTurk
  • some solutions proposed by the author
a) an interactive browsing system
b) an improved search engine
c) a recommender system that pushes HITs to workers
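As an illustration of the recommender idea in (c), here is a minimal keyword-matching sketch. The HIT records, field names, and scoring are invented for this example; a real recommender would use far richer signals (work history, qualifications, reward rates).

```python
def recommend_hits(hits, interests, top_n=3):
    """Rank HITs by how many of the worker's interest keywords appear
    in the title, and return the best non-zero matches."""
    def score(hit):
        title = hit["title"].lower()
        return sum(1 for kw in interests if kw.lower() in title)
    ranked = sorted(hits, key=score, reverse=True)
    return [hit for hit in ranked[:top_n] if score(hit) > 0]

# Hypothetical HIT listing, just for demonstration.
hits = [
    {"title": "Transcribe a short audio clip", "reward": 0.25},
    {"title": "Categorize product images", "reward": 0.05},
    {"title": "Translate audio transcription to Spanish", "reward": 0.50},
]
matches = recommend_hits(hits, interests=["audio", "transcribe"])
```

Even this toy version addresses the complaint above: the worker states interests once and sees matching tasks first, instead of scanning the whole listing.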

Observations about requesters

A better UI to post tasks
  • less technical overhead ==> better online marketplace
  • requirements that every requester must satisfy
a) quality assurance for submitted HITs for a task
b) proper allocation of qualifications
c) break tasks into a feasible workflow
d) classify workers
  • the author points out TurKit, an external API for running iterative tasks, which has been especially user-friendly for requesters
  • MTurk requires requesters to build their apps from scratch to fit their needs
A better and true reputation system for workers
  • the current reputation system uses the number of HITs completed and the approval rate, both of which are easy to manipulate
  • why a good reputation system? Because if a requester can't tell a good worker from a bad one, they tend to assume every worker is bad
  • suggestions from author for a new reputation system
a) More public qualification tests
b) Track working history of workers
c) Rating of workers
d) Disconnect payment from rating
e) Classify HITs and rating
f) API for all the above features
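One way to picture suggestions (b)-(d) is a score that blends several hard-to-game signals. The author proposes the signals but no formula, so the weights, field names, and 1-5 rating scale below are all assumptions for illustration only.

```python
def reputation_score(worker):
    """Blend approval rate, average requester rating, and qualification
    test performance into a single score between 0 and 1."""
    approval = worker["approved_hits"] / max(worker["total_hits"], 1)
    rating = worker["avg_rating"] / 5.0  # ratings assumed on a 1-5 scale
    quals = worker["quals_passed"] / max(worker["quals_taken"], 1)
    # Assumed weights: ratings highest, as the hardest signal to game.
    return 0.3 * approval + 0.5 * rating + 0.2 * quals

score = reputation_score({
    "approved_hits": 950, "total_hits": 1000,
    "avg_rating": 4.0, "quals_passed": 8, "quals_taken": 10,
})
```

Keeping the rating component separate from payment (suggestion d) matters because a bonus-for-good-ratings loop would let requesters buy reputation rather than earn it.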

A critical fact stressed by the author:

 A labor marketplace is not the same thing as a computing service. Even if everything is an API, the design of the market still matters.

Do Needfinding by Browsing MTurk-related forums, blogs, Reddit, etc

List out the observations you made while doing your fieldwork. Links to examples (posts / threads) would be extremely helpful.

Synthesize the Needs You Found

List out your most salient and interesting needs for workers, and for requesters. Please back up each one with evidence: at least one observation, and ideally an interpretation as well.

Worker Needs

A set of bullet points summarizing the needs of workers.

  • Example: Workers need to be respected by their employers. Evidence: Sanjay said in the worker panel that he wrote an angry email to a requester who mass-rejected his work. Interpretation: this wasn't actually about the money; it was about the disregard for Sanjay's work ethic.

Requester Needs

A set of bullet points summarizing the needs of requesters.

  • Example: requesters need to trust the results they get from workers. Evidence: In this thread on Reddit (linked), a requester is struggling to know which results to use and which ones to reject or re-post for more data. Interpretation: it's actually quite difficult for requesters to know whether 1) a worker tried hard but the question was unclear or very difficult or an edge case, or 2) a worker wasn't really putting in a best effort.