Difference between revisions of "Milestone 2 ams"

From crowdresearch

Revision as of 21:28, 11 March 2015

Special Note

We have our examinations going on this week, so we were unable to work on the Monday panel discussion or the two research papers.

Reading Others' Insights

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

Although Mechanical Turk has become a very popular crowdsourcing platform, there is always an element of money attached to it, which inevitably invites malicious practices. In experiment 1, where the requester posed straightforward questions with simple one-word answers so as to make the task as easy as possible, workers took unethical advantage of it by submitting careless work in search of quick money. Experiment 2, although lengthy and subjective, produced better results. Let's be honest: surveys are meant to carry personal opinion, so no answer is right or wrong. It's an opinion. Basically, you can't hold it against the worker, and there lies the loophole. The worker fills in the survey, submits it, and goodbye. Job done. Money in his pocket.

2) What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

The requester has to be extra smart and think two steps ahead of an unfaithful worker. Experiment 1, although worker-friendly, produced unusable statistics. Experiment 2, which was extensive and not worker-friendly, turned out better for the requester. To use the crowd as a source, you need to eliminate the unwanted from the crowd. It's okay to make the task a little difficult for the worker as long as it is achievable and worth the money being offered.
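One common way to "eliminate the unwanted from the crowd", as suggested above, is to mix a few gold questions with known answers into the task and drop workers who miss too many of them. A minimal sketch in Python; the data layout, question IDs, and the 75% threshold are all hypothetical, not anything from the paper:

```python
# Vet crowd workers using "gold" questions with known answers.
# Workers falling below an accuracy threshold on the gold set are
# dropped; only the remaining workers' submissions are kept.

GOLD_ANSWERS = {"q1": "cat", "q4": "blue"}  # hypothetical known answers

def filter_workers(submissions, threshold=0.75):
    """submissions: {worker_id: {question_id: answer}}.
    Return only the submissions of workers who pass the gold check."""
    kept = {}
    for worker, answers in submissions.items():
        gold = [qid for qid in GOLD_ANSWERS if qid in answers]
        if not gold:
            continue  # worker answered no gold questions: cannot vet them
        correct = sum(answers[qid] == GOLD_ANSWERS[qid] for qid in gold)
        if correct / len(gold) >= threshold:
            kept[worker] = answers
    return kept

submissions = {
    "w1": {"q1": "cat", "q2": "x", "q4": "blue"},  # 2/2 gold correct
    "w2": {"q1": "dog", "q2": "y", "q4": "red"},   # 0/2 gold correct
}
print(sorted(filter_workers(submissions)))  # ['w1']
```

The trade-off matches the point above: the extra questions make the HIT slightly harder, but they let the requester discard spammers without judging subjective answers.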

Requester perspective: The Need for Standardization in Crowdsourcing

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

The method of standardization could well turn out to be a roadblock for beginners looking for opportunities in the crowdsourcing sector. The more restrictions there are (let's be honest, standardizing is restricting), the more a worker will think before setting up a profile on a crowdsourcing platform. The process should be clean, smooth, and open.

Hence, although standardization is meant to improve work and payment standards for workers, it may well backfire, defeating its entire purpose.

2) What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

For requesters, yes, I agree standardization will bring more sophistication into the system. Jobs will be clearly segregated, and payments will be better handled. Agreed. But then again, for a beginner requester who wouldn't know how to categorize his job so as to bring in more workers, this system is too much of a hassle. A very likely scenario is that a requester who didn't standardize his job well (he's new to the system; mistakes will be made) ends up getting very few workers and is forced to increase the pay to attract more, which he wouldn't have needed to do in the first place.

Both perspectives: A Plea to Amazon: Fix Mechanical Turk

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

  • Since the pay of a HIT, or even of multiple HITs, does not amount to much, workers would rather move on than pursue a complaint against the requester.
  • Workers aim to improve their 'rating' by working on very small, low-paying HITs that are a walk in the park. This makes them appear more eligible for higher-paying tasks, regardless of whether they are actually skilled for those tasks.
  • A productive worker cannot differentiate himself from the rest of the crowd and ends up being paid less.
  • Experienced workers usually gauge the trustworthiness of a new requester by completing only a small number of HITs for them.
  • Workers who work legitimately on a large batch of HITs for a requester and get rejected unjustly not only lose the opportunity to earn the money but also lose reputation.
  • Workers search for work based on the most recent HITs, instead of searching for work that matches their skill set.

2) What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

  • Requesters have the ultimate power, and can misuse the responsibility that comes with it.
  • Requesters are stuck using a complicated and technologically outdated platform.
  • The web platform and the complex API do very little to reduce the time and costs involved in requesting HITs, thus moving the break-even point further away from the requester.
  • If a requester builds their own interface in an HTML iframe, it becomes easy to expose non-MTurk people to their HITs with plain URLs, thus leaking work out of an ecosystem that was supposed to be self-sufficient enough to contain itself.
  • The flexible nature of the MTurk platform is beneficial for requesters who want to design and present their HITs as they please.
  • A newly joined requester is unaware of the market dynamics and might be discouraged from continuing with the crowdsourcing platform if he/she encounters inexperienced workers early on.
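The iframe approach mentioned above works through MTurk's ExternalQuestion mechanism: the requester hosts the task page themselves, and MTurk merely embeds its URL in a frame. A hedged sketch of how such a HIT would be posted with boto3; the task URL, title, and reward values here are made up for illustration:

```python
# Build an ExternalQuestion payload: MTurk displays the requester's own
# page inside an iframe, which is exactly why the same page can also be
# reached by non-MTurk visitors via a plain URL.
TASK_URL = "https://example.com/my-task"  # hypothetical requester-hosted page

external_question = (
    '<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/'
    'AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">'
    f"<ExternalURL>{TASK_URL}</ExternalURL>"
    "<FrameHeight>600</FrameHeight>"
    "</ExternalQuestion>"
)

# With AWS credentials configured, the HIT would then be created roughly
# like this (commented out so the sketch runs without an AWS account):
# import boto3
# mturk = boto3.client("mturk", region_name="us-east-1")
# mturk.create_hit(
#     Title="Label an image",            # hypothetical
#     Description="Short labeling task", # hypothetical
#     Reward="0.25",
#     MaxAssignments=3,
#     AssignmentDurationInSeconds=600,
#     LifetimeInSeconds=86400,
#     Question=external_question,
# )

print("<ExternalURL>" in external_question)  # True
```

Nothing in the payload restricts who can load `TASK_URL`, which is the leak the bullet above describes: containment is left entirely to the requester's own server.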

Do Needfinding by Browsing MTurk-related forums, blogs, Reddit, etc

TurkerNation

  • Requesters are trying to mediate their own verification methods, and their trust in mTurkers seems low, since they are trying to filter mTurkers out.

http://turkernation.com/showthread.php?23735-Well-paid-survey-HIT-for-TurkerNation-members-only&p=245585&viewfull=1#post245585

  • A classic case of technical malfunction where the system 'accidentally' rejected a turker's work. The turker was infuriated by this issue, although he/she seemed consoled once a support reply arrived from the requester.

http://turkernation.com/showthread.php?23874-Doing-a-survey-and-the-HIT-was-just-automatically-returned

mTurk Forum

  • The forum has an abundance of posts by requesters offering incentives and bonuses in search of good turkers. Starting a thread also allows requesters to interact with workers and learn about the workers and their experience.
  • New users, workers and especially requesters, are wary of the mTurk platform. This suggests that these users are either completely new to the concept of crowdsourcing platforms or have come across negative reviews of the platform.