WinterMilestone 2 Crayons

From crowdresearch
Revision as of 14:18, 24 January 2016 by Ameenmkhan (Talk | contribs) (Attend a Panel to Hear from Workers and Requesters)


Attend a Panel to Hear from Workers and Requesters


The hangout session was an online interaction between workers and requesters, similar to a round-table conference. Everyone introduced themselves one by one. Both workers and requesters were enthusiastic and eager to share their experience on the platform. The participants came from different fields and lines of work. Some were university professors who use mTurk to recruit a large online labor pool for their research (Christ, Xaio, Peter). Some workers (Laura, Kristene) already have other jobs and use mTurk as an additional source of income, while others depend on mTurk for their entire income. Several participants described the kinds of work they handle, such as tagging images, collecting directive data, completing surveys, and participating in research tasks, and shared their experiences of working on mTurk.

They talked about the problems they face while working on mTurk:

  • High drop-out rates from time to time, which is a real problem for both sides.
  • Large variation in wages from day to day: one may plan to earn a certain amount but end up earning just pennies.
  • HITs have to be completed even when other important work comes up; you cannot delay. You either take the HIT and finish it or leave it and lose it, even when "it's time for child care", "it's time for teaching", or "it's time for lunch".
  • Workers have to sacrifice a lot to see a HIT through.

They talked about how they find work:

  • Some choose work by their network, by ratings, by organizations they have already worked for, by pay, and by time required. Workers admire requesters who are cooperative, foster an environment where workers help each other when someone is stuck, behave ethically, and are easy to reach.
  • Choose only well-rated HITs, and email the requester if the instructions are unclear.
  • Estimate how long a HIT will take based on prior experience.
  • Look up records of work already done, and go for what suits you best.

They also shared when during the day they work on mTurk:

  • It varies from person to person. HITs are posted around the clock, so workers can work whenever it suits them. The schedule can also be driven by the type of work available, by personal interest, or by a very high-paying HIT that makes it worth dropping everything, or even taking leave from another job; these are trade-offs each worker has to make.
  • There is never a time when nothing is posted; what matters is when you work and which HITs you take.
  • There is large individual variation based on the task and on typing speed; your hourly rate can be tracked, but it is hard to generalize.

INTERPRETATION: mTurk is a great resource for research purposes. A researcher can reach a large online crowd for relatively little pay, quickly test a hypothesis, design an experiment in a single day, and then decide whether it is worth deploying at scale. It is an online crowdsourcing community with a high turnover of tasks, which in turn makes it a steady source of work and income. Despite all this, both workers and requesters face many problems:

Worker side:

  • High drop-out from time to time
  • Large variation in wages.
  • Lack of transparency between workers and requesters.
  • No platform exists so that workers and requesters can interact directly on mTurk.
  • Unexpected rejection without proper explanation.
  • Poor presentation of instructions for the workers.

Requesters side:

  • They deal with a lot of poor-quality work.
  • Poor interaction with workers.
  • There is also a transparency problem to deal with; some workers are suspected of cheating.


As Workers

  • Properly understand the instructions before starting a task, because mistakes can affect your reputation.
  • Check the qualifications required for the work.
  • Estimate the difficulty of a HIT from other workers' feedback, and then judge whether you can do it.
  • Know your own strengths and pick HITs accordingly, to save time and protect your income.
  • Maintain a record of prior work, for example a spreadsheet documenting each HIT, the time taken, the pay, and whether it was rejected. This helps you avoid similar work in the future and reduces rejections.
  • A more transparent platform is needed for interacting with requesters.
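The record-keeping advice above can be sketched as a small script. This is only an illustrative sketch: the field names, the sample requester names, and the idea of computing an effective hourly rate and per-requester rejection rate are assumptions about what such a personal log might track, not part of any actual mTurk tool.

```python
from dataclasses import dataclass

@dataclass
class HitRecord:
    requester: str   # who posted the HIT (names here are hypothetical)
    pay: float       # reward in dollars
    minutes: float   # time actually spent on the HIT
    accepted: bool   # whether the work was approved and paid

def hourly_rate(records):
    """Effective hourly rate: only accepted (paid) work counts as income,
    but all time spent counts, including time lost to rejected HITs."""
    paid = sum(r.pay for r in records if r.accepted)
    hours = sum(r.minutes for r in records) / 60
    return paid / hours if hours else 0.0

def rejection_rate(records, requester):
    """Share of this requester's HITs that were rejected, to help decide
    whether to avoid similar work in the future."""
    rel = [r for r in records if r.requester == requester]
    if not rel:
        return 0.0
    return sum(1 for r in rel if not r.accepted) / len(rel)
```

A worker keeping such a log could, for instance, see that a requester rejecting half of their submissions drags the effective hourly rate well below the advertised pay.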

As Requesters:

  • Requesters should set a clear threshold for accepting work and give a proper explanation when rejecting it, so that workers feel treated fairly.
  • A proper platform is needed for interacting with workers, so requesters can guide them while they work on a problem, leading to better-quality results.

Reading Others' Insights

Worker perspective: Being a Turker

1. What observations about Workers can you draw from the readings?

  1. There are about 500,000 workers on Amazon Mechanical Turk (AMT), although a study shows that only around 10% of them are active.
  2. The majority (80 percent) of the tasks are carried out by the 20 percent most active workers.
  3. Workers are primarily from the US (56%) and India (36%).
Why do turkers turk?
  1. Monetary gains are the primary reason why turkers turk.
  2. Some workers also prefer work that they find interesting or that increases their knowledge or skills; however, these make up only a small share of the reasons, which are dominated by money.
How much do they earn?
  1. Earnings depend heavily on the number of hours workers contribute and on whether AMT is their primary source of income and survival, or just part-time work or a hobby.
  2. Workers' skill sets also affect their earnings.
  3. Even the earnings of the most experienced workers barely reach minimum wage.
  4. Workers set targets for themselves.
Relation with Requesters
  1. A major section of the Turker nation forum is dedicated to Workers-Requesters relations.
  2. Workers post reviews about their experiences with requesters. Reviews may consist of how well the requester communicated the task, whether the workers were paid well and on time etc.
  3. Workers do some initial research about a requester before taking his/her job.
Views on regulation
  1. The majority of turkers oppose the idea of regulating the AMT platform.
  2. They believe that they are in the best position to influence and manage the market.
  3. They also share a strong opinion against the attention the crowdsourcing industry receives from journalists and academics, fearing it would discourage requesters and reduce the amount of work.
The Invisible Work
  1. Workers spend a good amount of time doing work that is hidden from the outside world.
  2. This includes finding the most suitable HITs in terms of payment, knowledge, and hardware or software restrictions; researching requesters' reputations; learning new skills; and managing AMT work.
Major concerns
  1. The main concerns for workers are employers who don't pay, unfair rejections, identifying scams, and the cost of poorly designed tasks.

2. What observations about Requesters can you draw from the readings?

Getting the job done
  1. The main focus of the Requesters is to get the job done as quickly as possible within the budget.
  2. Sometimes they don't even shy away from being unfair to the workers.
  3. They rate workers on the basis of the work, and can even block workers, although workers are not given the same option.
Communicating with workers
  1. Some requesters follow forums like Turker Nation to communicate with workers during the work.
  2. They remain online during the task to take care of any problems that workers might face.

Worker perspective: Turkopticon

Turkopticon was developed in response to the invisibility of workers in AMT's design. It is a system created to let workers publicize and evaluate their relationships with employers, since workers need to know who the bad ones and the good ones are. In essence, Turkopticon is a platform to rate employers.

Worker-side issues
  1. Low income due to losses from arbitrary rejection of completed work.
  2. Low income because workers do not get fair compensation for their work.
  3. Workers are paid late because of considerable delays in requesters acknowledging task payments.
  4. Workers do the work without any assurance of getting paid in return.
  5. Rejection of work lowers approval ratings.
  6. Low ratings on AMT make higher-paying work inaccessible.
  7. Amazon's responses to worker complaints are not up to the mark.
  8. There is no feedback mechanism for workers to rate requesters/employers.
  9. Communication between workers and requesters is inefficient.
Requester-side issues
  1. Requesters/employers do not have to provide a valid reason for rejecting work.
  2. Even when payment is rejected, the employer still keeps the work that was provided.
  3. Requesters/employers are given a window of 26 days to evaluate workers' tasks.
  4. It is difficult for requesters/employers to rate tasks that are subjective in nature.

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

User studies are an important part of the design process; they help improve a design by providing relevant input and feedback. Mechanical Turk is a platform that gives requesters a low-cost way to collect data from users with diverse backgrounds.

As a part of this study, two experiments were conducted:

Experiment 1: Users were asked to rate a Wikipedia article on a 7-point scale, with optional text feedback. The experiment aimed to assess the veracity of the ratings users provided.

Experiment 2: This experiment was along the same lines, with one addition: a verifiable questionnaire was included to reduce the number of malicious users.


  1. Malicious workers may enter fake input without going through the data, just to complete tasks quickly.
  2. Such practices increase for tasks whose output is not verifiable.


  1. Without verifiable output, requesters cannot check the authenticity of the submitted work.
  2. Requesters have to rely on factors like task completion time and plagiarism checks to separate valid from invalid inputs.
  3. Requesters may add a verifiable questionnaire to discourage malicious users.
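A verifiable questionnaire of this kind is essentially a set of "gold" questions with known answers used to filter out malicious submissions. A minimal sketch under that assumption follows; the question keys, answers, and threshold are hypothetical, not taken from the study:

```python
# Hypothetical gold questions a requester might embed in a rating task:
# their answers are known in advance, so a submission can be checked.
GOLD_ANSWERS = {"q_article_length": "short", "q_has_references": "yes"}

def passes_gold(submission, gold=GOLD_ANSWERS, min_correct=2):
    """Keep a submission only if it answers at least `min_correct`
    of the verifiable questions correctly; otherwise treat it as
    likely malicious or careless input."""
    correct = sum(1 for q, a in gold.items() if submission.get(q) == a)
    return correct >= min_correct
```

Submissions that fail the gold check can be discarded before analyzing the unverifiable part (here, the subjective article rating), which is the mechanism Experiment 2 relied on.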

Requester perspective: The Need for Standardization in Crowdsourcing


  • Workers tend to work on similar tasks.
  • Tasks of some complexity are composed of subtasks analogous to building blocks.
  • Workers don't get paid for completed HITs if the work is not to the requester's liking.
  • Workers are free to choose tasks varying in difficulty and in the skills required.


  • Requesters request similar tasks.
  • There is a wide range of rewards for similar tasks.
  • The current crowd-labor market is unorganized due to a lack of standardization and poor handling of negative externalities.
  • Requesters are sometimes scammers.
  • Requesters cannot rely on the quality of the work.

Both Perspectives : A Plea to Amazon: Fix Mechanical Turk


Issue 1: Posting tasks and creation of workflows
  1. Might need to hire a full-time developer to deal with the complexities of the system.
  2. Learning to break tasks into a workflow.
  3. Stratify workers according to quality.
  4. Have to build their own interfaces, workflow systems and quality assurance systems from scratch.
  1. Command-line tools for posting tasks are what the platform considers user-friendly.
  2. There is no easy way to implement workflows.
  3. Most requesters need crowdsourced workflows instead of one-pass tasks.
  4. Only a few "big requesters" and very many "small requesters".
  5. Difficult for small guys to grow.
Issue 2: Bad reputation system for workers
  1. Increased gaming of system by the workers.
  2. Requesters tend to think that every worker is bad.
  3. New requesters then get only low quality workers, get disappointed with the quality of the results and they leave the market.
  1. "Number of completed HITs" and "approval rate" are easy to game by spammer workers.
  2. Good workers leaving the market.
  3. New requesters leaving the market.
Issue 3: Quality Assurance
  1. Get multiple, redundant answers for the same question.
  2. Qualification tests, to check whether workers were competent enough to participate.
  3. To make sure the instructions were followed, users were asked to submit the answers to already completed tasks.
  1. Uncertainty about the validity of the submitted answers.
  2. The instructions clarified that the requester would pay only for submissions that agree with the responses submitted by other workers.
  3. Increased the costs.
  4. Slows down the process.
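The redundancy approach described above, collecting multiple answers per question and trusting only those that agree, amounts to simple majority voting. Here is a minimal sketch of that idea; the agreement threshold and the flagging behavior are illustrative assumptions, not the requesters' exact procedure:

```python
from collections import Counter

def majority_answer(answers, min_agreement=2):
    """Return the most common answer if at least `min_agreement`
    workers gave it; otherwise return None so the task can be
    flagged for review or reposted."""
    if not answers:
        return None
    answer, count = Counter(answers).most_common(1)[0]
    return answer if count >= min_agreement else None
```

Raising `min_agreement` (or collecting more redundant answers) improves confidence in the result, but, as the notes above point out, it increases cost and slows down the process.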


Issue 1: A Trustworthiness Guarantee for Requesters
  1. Scam requesters post HITs and behave badly.
  2. Requesters can reject good work and not pay for work they nevertheless get to keep, and they do not have to pay on time.
  3. A new requester is treated with caution until he or she becomes trustworthy.
  4. Good workers do a very small number of tasks to see if the new requester is trustworthy.
  1. Cause good workers to avoid any newcomer.
  2. Turker Nation and TurkOpticon make it possible to learn about badly behaving requesters.
  3. New requesters are satisfied if they post only small batches of work.
  4. New requesters posting large batches are often disappointed, as a large share of the work is done by spammers.
  5. Subjective reputation is not enough.
Issue 2: Restrictive user interface
  1. There are only two ways to sort: by the most recent HITs, or by the HITgroups with the most HITs.
  2. Workers use priority queues to pick the tasks to work on.
  1. Workers are highly restricted by the interface; they cannot search for a requester unless the requester put their name in the keywords, and there is no way to find tasks of interest.
  2. It is effectively impossible to predict the completion time of posted tasks (the mean completion time is expected to increase continuously the longer the market is observed).

Synthesize the Needs You Found