Milestone 2 Illuminati

From crowdresearch

Members:

  • Abhishek Nandgaonkar
  • Vivek Nair
  • Smit Shah
  • Punita Dharod


Reading Others' Insights

Worker perspective: Being a Turker

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • While Turkers find some HITs fun, interesting, or educational, such comments are invariably paired with remarks about the HITs also paying well.
  • Many Turkers depend on AMT as a hand-to-mouth means of living.
  • Workers like requesters who approve all HITs and dislike requesters who mass-reject HITs.
  • Workers like requesters who are engaging and communicate well.
  • Novice workers are likely to take poorly paying jobs to build up their HIT count and approval rating, while experienced Turkers are more concerned with protecting an approval rating that can rise and fall.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Requesters expect Turkers to be patient and not judge them too quickly with a bad rating.
  • A requester can block a Turker if dissatisfied with the Turker's work. However, requesters must also accept that not every worker will provide quality work.

Worker perspective: Turkopticon

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Workers have limited options for dissent within AMT, and AMT treats workers as interchangeable; Turkopticon offers workers a way to dissent, hold requesters accountable, and offer one another mutual aid.
  • Requesters hold all the power and can refuse to pay for a worker's efforts.
  • Turkopticon lets workers evaluate requesters on various parameters, creating a standardized way to assess a requester's reputation that can be helpful to other workers.
  • Workers often grow frustrated because they believe requesters ought to answer their questions and justify rejections, and they feel they should have the right to contest those rejections.
  • Some workers want a forum where Turkers could air concerns publicly without censorship or condescension, and worker visibility and dignity more generally; others want a way to build long-term working relationships with prolific requesters, and better worker-requester relations generally.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Requesters can often ignore a worker's needs because of the availability of surplus labor.
  • Requesters often do not respond to workers' concerns because the thousand-to-one worker-to-requester ratio makes responding cost-prohibitive.
  • Requesters often view workers' dispute messages as a way to gauge an algorithm's performance in managing workers and tasks.
  • Requesters often believe that algorithmic management precludes individually accountable relations.

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Very little information about workers is available, which raises suspicion.
  • A worker is more likely to input valid values when they know the answers will be scrutinized.
  • For a worker, a survey that is easy to complete is effectively equivalent to one they take genuine interest in.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • A requester gets to evaluate a worker's output, which helps ensure the work is authentic.
  • It is difficult to tell from the answers alone whether a human or a script completed a survey.
  • Workers often fill in answers simply to complete the survey rather than giving answers they really believe in.
  • Making a few fields with known answers compulsory could help a requester determine whether the task was performed genuinely.
  • Looking for patterns such as completion time and repetitive comments can also reveal whether a task was genuinely performed.
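The last two bullets describe concrete quality-control heuristics. A minimal sketch of both, assuming submissions arrive as plain dictionaries (the field names, gold answer, and 30-second floor below are illustrative assumptions, not anything prescribed by the reading):

```python
# Hypothetical sketch of two quality checks for crowdsourced submissions:
# (1) compulsory fields with known ("gold standard") answers, and
# (2) flagging implausibly fast completions and repeated free-text comments.
# Field names and thresholds are illustrative assumptions.

GOLD_ANSWERS = {"capital_of_france": "paris"}  # fields with known answers
MIN_SECONDS = 30  # assumed floor for a genuine completion time

def looks_genuine(response):
    """Return True if a submission passes both heuristic checks.

    `response` is assumed to be a dict holding the worker's answers,
    a `seconds_taken` field, and a free-text `comment`.
    """
    # Check 1: every known-answer field must be answered correctly.
    for field, expected in GOLD_ANSWERS.items():
        if response.get(field, "").strip().lower() != expected:
            return False
    # Check 2: implausibly fast completions are suspect.
    if response.get("seconds_taken", 0) < MIN_SECONDS:
        return False
    return True

def repeated_comments(responses):
    """Return comments that appear verbatim in more than one submission."""
    counts = {}
    for r in responses:
        comment = r.get("comment", "").strip().lower()
        if comment:
            counts[comment] = counts.get(comment, 0) + 1
    return {c for c, n in counts.items() if n > 1}
```

A requester would run every submission through `looks_genuine` before approval and treat anything surfaced by `repeated_comments` as a candidate for manual review rather than automatic rejection.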

Requester perspective: The Need for Standardization in Crowdsourcing

The paper focuses on the advantages of crowdsourcing platforms while also drawing attention to the negatives.

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • The rating structure is flawed and does not ensure that workers are awarded proper points despite putting in their best efforts.
  • The inability to filter tasks by expertise or desired payment is a major detriment.
  • The lack of standardized task user interfaces means workers must adjust to each new interface.
  • The unavailability of training material leaves workers unprepared for tasks.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Requesters lack a common interface for posting requests in a standard format, which causes redundancy in the requests being created.
  • Payment for tasks is not on a common scale; the same task posted by different requesters will pay workers different amounts.
  • Requesters do not know a worker's background, so they cannot be sure a task will be completed to the expected standard.

Both perspectives: A Plea to Amazon: Fix Mechanical Turk

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • More public qualification tests would set good workers apart from bad ones, giving the good workers center stage.
  • A work history that workers could use to impress requesters would likewise favor good, experienced workers.
  • A good worker with a higher rating would have better opportunities.
  • Workers need to be rated, since they are in effect the product a requester invests money in.
  • A worker avoids doing many tasks for the same requester for fear of having submissions rejected.
  • If workers could appeal rejections, their quality of work and reputation would be better sustained.
  • More information about requesters would increase transparency in MTurk.
  • A browsing system would help workers find tasks they really want to do.
  • Recommended HITs would give workers tasks similar to those they completed earlier.

2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Make it easier for requesters to post tasks.
  • The command-line tools are difficult to use.
  • Requesters want customizable workflows instead of a single straightforward one.
  • It is cumbersome for a requester to build an interface from scratch.
  • A reputation mechanism for profiling workers would help tell good workers from bad.
  • A satisfied requester could rate a worker so that other requesters benefit.
  • It is difficult for new requesters to enter the already floundering MTurk market.

Do Needfinding by Browsing MTurk-related forums, blogs, Reddit, etc

Many articles and posts about Amazon Mechanical Turk raise similar points. We have summarized most of them below.

  • Requester and post authenticity, so that workers do not fall prey to fraud and advertisement links; requester ratings and user ratings would therefore be a nice addition.
  • Sharing statistics on how long users generally take for a particular task, compared with the time stated in the task.
  • Worker statistics for self-viewing, so a worker can gain better insight into which sorts of tasks suit them and help them make more money.
  • A feature to block or flag requesters.
  • A moderator team from Amazon to regulate requesters and workers and block those that appear fake based on past work and user reviews.
  • It would be useful to have a demographic page that could simply be incorporated into any task, so workers do not have to keep filling in the same old demographic page each time: that wastes the time of both the requester, who has to ask for it, and the worker, who has to complete it. Furthermore, this would limit gaming the system: workers could no longer change their age, race, sex, education, or number of children to fit what they think the requester is looking for.
  • One thing that would be useful is an in-platform messaging system that makes it easy to communicate with requesters. The current email system is very difficult to navigate; many requesters communicate through bonuses and cannot be contacted via a reply. Even better would be a chat room where requesters can talk directly with their workers while they are working on their HITs.
  • It would be great if there were an app for doing HITs from your cell phone. When a requester creates a HIT they could specify whether the HIT should be made available to users using the cell phone app or not.
  • A way to increase the pay for a particular HIT. Currently a requester would have to cancel the HIT and create a new one, but by default anyone who had done the original HIT would be able to do the new one again.
  • An easier way for requesters to give out bonuses.
  • Better filters and relevancy of tasks.
  • Many of these features exist only as third-party Google Chrome extensions; workers would feel more secure if Amazon provided them natively.

Synthesize the Needs You Found

Worker Needs

  • Clear details

Every worker's primary need is a clear and detailed description of the work requested by the requester.

Evidence: Many tasks posted by requesters are unclear and missing certain details and expectations. As a result, workers lack a clear idea of the work, its requirements, expectations, and end result, which can lead to heated disputes between worker and requester.

Interpretation: Every requester should submit a clear and detailed description of the task, its expectations, and the end result.

  • Value for money

No work should go unpaid or undervalued; a worker always expects fair value for their skills, work, and time.

Evidence: Many tasks posted by requesters demand considerable work and skill but are still valued too low. Workers who are paid appropriately tend to deliver good results; underpaid work can lead to delays and poor quality.

Interpretation: Every kind of task should have a baseline amount associated with it, so that workers are always paid at least a minimum wage and both parties are in a win-win situation.
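The baseline idea above can be sketched in a few lines: raise any offered payment to a wage floor derived from the task's estimated completion time. The hourly rate and time estimates below are illustrative assumptions, not MTurk policy:

```python
# Hypothetical sketch of a per-task baseline payment: pay at least a
# minimum hourly wage for the task's estimated completion time.
# The wage constant here is an illustrative assumption.

MIN_HOURLY_WAGE = 7.25  # assumed minimum hourly wage in USD

def baseline_pay(estimated_minutes, offered_pay):
    """Return the offered pay, raised to the wage floor if it falls below it."""
    floor = MIN_HOURLY_WAGE * (estimated_minutes / 60.0)
    return round(max(offered_pay, floor), 2)
```

For example, a 10-minute task offered at $0.50 would be raised to roughly $1.21, while an offer already above the floor would pass through unchanged.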

Requester Needs


  • Requesters need to be able to identify the demographics for the work being assigned.

Evidence: Requesters keep insisting that they should know who is picking up the tasks they post, so that they do not invest their time and expectations in the wrong people.

Interpretation: Requesters need results; the quality of the work and the timeliness with which it is done are important.

  • Requesters need to know what support they will receive after the work is completed.

Evidence: After integrating a module they had developed through crowdsourcing, requesters have often run into bugs in that module and need a method for getting them resolved.

Interpretation: Merely completing the work is not always sufficient; confirming that it fits like a round peg in a round hole is also needed.