Milestone 2 TuringMachine


Team TuringMachine

  • Neil Gaikwad
  • Vishnu Ramachandran
  • Kristiono Setyadi

Needfinding

According to Design Research & Planning (D. Patnaik), needfinding is the act of discovering users' physical, psychological, or cultural requirements so that we can create an appropriate solution. This activity is a foundation of designing new technology. A user can have an explicit need, which is directly observable, or an implicit need, hidden beneath the tip of the iceberg. In this assignment we conduct our analysis based on two needfinding techniques: observation and interviews. As suggested in the assignment guidelines, we synthesize all needs and interpretations in the final section.

Attend a Panel to Hear from Workers and Requesters

  • We attended the panel discussion led by experienced workers and requestors from AMT and oDesk. The panelists came from diverse backgrounds, including research scholars, academicians, students, and freelancers.
  • One of the panelists revealed that some workers are completely dependent on MTurk to fulfill their financial needs. These workers eagerly search for every possible HIT that pays high rewards.
  • Most of the workers mentioned that money is the primary reason why they use MTurk or oDesk.
  • One of the workers at oDesk mentioned that she gives high importance to ethics while choosing her assignments.
  • There was a lot of discussion about workers' reactions to rejections. One panelist mentioned that "Workers try very hard to convince the requesters to flip rejections". We found that a rejection reduces a worker's score and makes it tougher for them to get further HITs.
  • One of the panelists talked about uncertainty in the nature of the work. She mentioned that "The work is variable depending on the day of the week and the time of the year". For instance, during the academic year, researchers post many survey-based HITs. However, during the holidays the number of HITs posted by academicians goes down.
  • Experienced workers have figured out the patterns of when to perform HITs; they wait for certain times and days of the week to make the most money.
  • The panelists highlighted that online forums such as MTurkNation and MTurkGrind help workers feel connected with each other. Most of the workers come from diverse economic backgrounds, and they use the forums to communicate and share experiences.
  • We observed that workers don't get fair pay for the amount of time they allocate to producing high-quality results.
  • The discussion also highlighted the drawbacks of the lack of interaction between workers and requestors. For instance, workers often had trouble understanding ambiguous instructions provided by the requestors, and they could not reach the requestors for further clarification.
  • Most requestors divided their tasks into small chunks.
  • One of the panelists mentioned that he adds Golden Questions, i.e., trap questions with known answers, to catch fraudulent responses (a short sketch of this idea appears at the end of this section).
  • Some requestors were sensitive towards workers' feelings and did not reject work without a genuine reason.
  • To learn more about workers' and requestors' lives, we asked the following questions:
    • What time of day do you Turk? Do you find more HITs on a particular day of the week?
    • @workers: What do you do when your task is rejected? @requestors: What do you do to justify rejections?
    • Imagine you have three tasks to choose from: first, taking a survey; second, labeling an image; and third, data entry. All of them pay equal money. Which one will you choose? Follow-up question: Why?
    • What do you do to collaborate with other turkers? What do you guys talk about?
    • @workers What bothers you most about the job?
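
To make the Golden Questions idea concrete, the following is a minimal sketch, in Python, of how a requestor might flag submissions that fail known-answer checks. The question IDs, answers, and threshold below are hypothetical illustrations, not details from the panel.

  # A minimal sketch of "Golden Questions": seed a batch with questions whose
  # answers are already known, then flag workers whose accuracy on those
  # questions falls below a threshold. All values here are hypothetical.
  GOLD_ANSWERS = {"q17": "giraffe", "q42": "apple"}  # question id -> known answer
  GOLD_ACCURACY_THRESHOLD = 0.75

  def gold_accuracy(responses):
      """responses: dict mapping question id -> worker answer."""
      golds = [qid for qid in GOLD_ANSWERS if qid in responses]
      if not golds:
          return None  # no gold questions answered; cannot judge this worker
      correct = sum(responses[q].strip().lower() == GOLD_ANSWERS[q] for q in golds)
      return correct / len(golds)

  def is_suspect(responses):
      acc = gold_accuracy(responses)
      return acc is not None and acc < GOLD_ACCURACY_THRESHOLD

  # This submission misses one of the two gold questions (50% accuracy).
  print(is_suspect({"q17": "giraffe", "q42": "orange", "q3": "free text"}))  # True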

Reading Others' Insights

In this section we use the A.E.I.O.U. framework (see Design Research & Planning, D. Patnaik, page 18) to organize the observations into Activities, Environments, Objects, Interactions, and Users.

Worker perspective: Being a Turker

Source of observations: Being a Turker, Martin et al., 2014.

In the diagram below we break down our observations into several categories, including Reasons for Turking, Types of Turkers, Reaction to Rejection, Turker's View of the Marketplace, Wage Expectations, and Relationship to Requestors. In addition, we use the A.E.I.O.U. framework to elaborate on the observations shown in the diagram.

[Diagram: Being a Turker observation categories]

Observations about Workers:

  • Activities:
  1. Most of the turkers worked to fulfill their daily financial needs. They set targets for each year and tried to achieve them. We observed similar behavior during the hangout interviews as well.
  2. Some turkers worked for fun. They believed turking for fun is an opportunity to learn and to increase their HIT counts. However, other turkers believed that working for free increases the requestors' monopoly and helps drive down payments in the marketplace.
  3. Turkers praised and appreciated the efforts of fair requestors. They liked requestors who were honest, empathetic, and communicated well.
  4. Turkers criticized requestors for not approving HITs on time or for unfair rejections. Turkers read through the rejection comments and reacted strongly to unreasonable feedback.
  5. We observed that not all reactions to requestors' feedback were negative. Some turkers, who were willing to give a requestor a second chance, understood the risk of discouraging a good requestor. As soon as turkers noticed improvement in a requestor's behavior, they praised them.
  6. We observed that novice turkers, who had fewer than 1,000 HITs, aimed to increase their HIT counts by doing unpaid work, whereas expert turkers, who had more than 1,000 or even 5,000 HITs, chose their work very carefully. Experienced turkers did not like to fight through a bad interface, ratings, or tags to complete a low-paying HIT.
  7. Some of the turkers cheated by posting survey content or answers to qualification questions on other forums.
  8. We observed that turkers had different ideas about the amount of money they should be paid. Most widely agreed that $1/hr requestors are bad for the marketplace. However, there were mixed opinions about the maximum and median wages. For instance, one of the workers was happy to earn $8/hr; some believed $4-$6/hr is a very low wage. Furthermore, HITs worth $4 were more attractive to workers in other parts of the world than to US workers.
  9. Workers are humans and like to express their feelings: one of the workers compared his past wage at an amusement park with the $6/hr wage he received on AMT.
  10. Turkers opposed the intervention of government, academia, or the media in AMT. Some of them believed that government regulation of requestors would result in lower payments.
  11. We observed that turkers supported each other in the forum section called "Prayers and Good Vibes". They posted their problems and sought advice from community members.
  • Environments
  1. Physical setting: Turkers carried out their HITs at various physical locations. Some worked at home, others at the location of their full-time jobs.
  2. Online setting: Turkers used AMT to search for and perform tasks. They received requestors' feedback and payments through the AMT system.
  • Objects (Man-made artifacts)
  1. Amazon Mechanical Turk (AMT)
  2. Turker Nation, a discussion forum for Amazon Mechanical Turk (AMT)
  • Interactions
  1. Workers - Workers (Direct communication through the Turker Nation forum)
  2. Workers - Requestors (Indirect communication to exchange feedback and payments through the AMT system)
  3. Workers - AMT (Conducted tasks, received payments and feedback)
  4. Workers - Turker Nation (Expressed their views, anger, demands, and support for each other)
  • Users
  1. The workers highlighted in the paper are members of the Turker Nation online forum. They come from various economic backgrounds with different sets of incentives.

Observations about Requestors:

  • Activities:
  1. Most of the observations about the requestors are indirect and come through workers' feedback. Here, we follow workers' comments in the diagram above to reconstruct requestors' activities.
  2. Some requestors stayed online while tasks were carried out. They provided fair payments and addressed workers' questions. (We do not know the medium of this synchronous communication.)
  3. Some requestors were honest, helpful, and good communicators. They genuinely cared about workers.
  4. Some requestors were very attentive and approved HITs as soon as they received them. However, others frequently rejected tasks without providing constructive feedback.
  5. Some requestors paid wages as low as $1/hr for workers' time; others paid up to $8/hr.
  6. Some requestors faced glitches in AMT and could not post quality HITs. However, they redesigned their tasks after receiving feedback from workers.
  • Environments
  1. Physical setting: We don't have enough information to analyze where requestors designed their tasks.
  2. Online setting: Requestors used the AMT platform to perform online activities such as posting the HITs, making payments, and providing feedback.
  • Objects (Man-made artifacts)
  1. Amazon Mechanical Turk (AMT)
  • Interactions
  1. Requestor - Workers (Indirect communication to exchange feedback and payments through the AMT system)
  2. Requestor - AMT (Posted HITs, made payments, and provided feedback)
  • Users
  1. Requestors on AMT (The paper doesn't provide enough details about requestors. We found that the paper mentions Tanika Sangakkara, a requestor who posted a survey about culture and brands and paid $1.20 for 8 minutes of work.)

Worker perspective: Turkopticon

Source of observations: Turkopticon: Interrupting Worker Invisibility in Amazon Mechanical Turk, Irani et al., 2013.

In this paper, the authors present Turkopticon, an activist system that allows workers to evaluate requestors on AMT. According to the authors, the system receives around 100,000 page views a month. The authors present Turkopticon as an activist technology that intervenes in AMT's micro-labor system to provoke ethical and political debate. In the POST-IT diagram below we highlight some of the comments from requestors and workers. These comments were collected from the paper and the ACM CHI Turkopticon video. In what follows we highlight some of the observations about workers and requestors.

[Diagram: POST-IT comments from Turkopticon workers and requestors]

Observations about Workers:

  • Activities:
  1. Some workers participated in AMT to earn a daily wage; others participated for fun or to kill time.
  2. Workers were frustrated with the delayed payment process.
  3. Workers were not guaranteed a minimum wage; there was no limit on how little a requestor could pay. About 20.89% of surveyed workers demanded fair payment.
  4. Workers were unhappy about Amazon's lack of response to their concerns. See the POST-IT.
  5. Once workers lost approval rating, it was hard for them to raise it again and get new HITs.
  6. Workers missed the opportunity to build long-term working relationships with requestors.
  7. Workers frequently used Turkopticon to select requestors based on communication, fairness, payment speed, and reviews.
  8. In case of rejections, workers could contact the requesters through AMT's web interface. However, given the large volume of messages, requesters built filters that discard most messages from workers. According to the paper, the work submitted by 52.23% of workers who participated in the survey was regularly rejected without any reason. See the POST-IT diagram for comments from a requestor and reviewers.
  9. Workers feared retribution for writing critical reviews on Turkopticon.
  • Environments
  1. Physical setting: We don't have enough information to analyze where workers carried out their tasks.
  2. Online setting: Workers used AMT to search for and perform tasks. They received requestors' feedback and payments through the AMT system. Workers used Turkopticon to write reviews about requestors.
  • Objects (Man-made artifacts)
  1. Amazon Mechanical Turk (AMT)
  2. Turkopticon, an activist system that allows workers to evaluate the requestors
  • Interactions
  1. Workers - Turkopticon (Workers create and use reviews of requestors when choosing HITs on AMT.)
  2. Workers - Requestors (Indirect communication to exchange feedback and payments through the AMT system)
  3. Workers - AMT (Conducted tasks, received payments and feedback)
  • Users
  1. 67 workers participated in an open-ended survey to articulate a Workers' Bill of Rights.
  2. Amazon defines workers as contractors, subject to laws designed for freelancing and consulting.

Observations about Requestors:

  1. Requestors defined various tasks on AMT, including structuring unstructured data, transcribing snippets of audio, and labeling images. A task definition includes the structure of the data, instructions, and a price.
  2. Many requestors integrated workers' output into existing systems operating in their organizations.
  3. Some of the requestors were less concerned about the working conditions in which workers operate.
  4. Requestors defined the criteria/qualifications workers must meet to perform a task.
  5. Requestors had the authority to decide whether to pay for a submitted task or reject it without paying.
  6. Amazon doesn't require requestors to respond to and address workers' concerns. The time and effort requestors spent exchanging emails often cost more than the amount they paid the workers.
  • Environments
  1. Physical setting: We don't have enough information to analyze where requestors carried out their tasks. However, the paper highlights that various startups and technology companies were using human computation as part of their internal platforms.
  2. Online setting: Requestors used the AMT platform to perform online activities such as posting the HITs, making payments, and providing feedback.
  • Objects (Man-made artifacts)
  1. Amazon Mechanical Turk (AMT), an online market where requestors can post large volumes of micro-tasks and workers can complete them within a given timeframe. AMT is also known as a micro-labor marketplace, a human computation resource, and a crowdsourcing platform.
  • Interactions
  1. Requestor - Workers (Indirect communication to exchange feedback and payments through the AMT system)
  2. Requestor - AMT (Posted HITs, made payments, and provided feedback)
  3. Requestor - Internal systems (Utilized the workers' HIT output for computational activities within the firm)
  • Users
  1. Requestors on AMT, Technology Startup Companies, Twitter, CrowdFlower

Quantitative Analysis

[Figure: sample screenshot of the Turkopticon dataset]

To understand the patterns in the Turkopticon ratings (requestor, comm, pay, fair, fast, reviews), we retrieved Turkopticon data using the API. (We are in the process of producing further statistical analysis and will add the results to this page.) The figure shows a sample screenshot of the dataset. Below we show the distribution of reviews. The review counts follow a heavy-tailed, right-skewed distribution, similar to most social network datasets; we can see the slow decay as the number of reviews increases. An interesting question arises: why did some requestors get so many reviews while others got few? Were those requestors good or bad? We are exploring further.

[Figure: distribution of review counts across requestors]
We share the Turkopticon extractor source code with the community; it can be accessed on GitHub. A simplified sketch of the extraction and plotting appears below.
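
The following is a minimal sketch, in Python, of the kind of extractor used, not the exact code in our repository. It assumes Turkopticon's public multi-attrs API endpoint, which returns JSON keyed by requester ID with attribute scores and a review count; the requester IDs and field names below are illustrative placeholders rather than a guaranteed contract, and the plot is a generic log-scale histogram rather than our exact analysis code.

  # Sketch of a Turkopticon extractor: fetch ratings for a list of AMT
  # requester IDs and plot the distribution of review counts.
  import json
  import urllib.request

  import matplotlib.pyplot as plt

  API = "https://turkopticon.ucsd.edu/api/multi-attrs.php?ids="

  def fetch_ratings(requester_ids):
      """Fetch Turkopticon ratings (comm, pay, fair, fast, reviews) by requester ID."""
      url = API + ",".join(requester_ids)
      with urllib.request.urlopen(url) as resp:
          return json.loads(resp.read().decode("utf-8"))

  def plot_review_distribution(ratings):
      """Histogram of review counts; a log scale makes the heavy tail visible."""
      counts = [int(r["reviews"]) for r in ratings.values() if r]  # skip unknown IDs
      plt.hist(counts, bins=50, log=True)
      plt.xlabel("number of reviews per requester")
      plt.ylabel("number of requesters (log scale)")
      plt.title("Distribution of Turkopticon reviews")
      plt.show()

  # Placeholder requester IDs for illustration only.
  ratings = fetch_ratings(["AEXAMPLEID1", "AEXAMPLEID2"])
  plot_review_distribution(ratings)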

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

Source of observations: Crowdsourcing User Studies With Mechanical Turk, Kittur et al., 2008.

Observations about Requestors:

  • Activities:
  1. We observe that the requestors investigated the effectiveness and potential of Amazon Mechanical Turk (AMT), a micro-task market, for conducting various user research studies.
  2. Requestors believed AMT can offer a useful platform for conducting a large number of user studies at low time and monetary cost.
  3. Requestors were very cautious about their approach and were looking for high-quality submissions from the workers.
  4. Requestors designed two experiments to evaluate the utility of AMT and propose important design considerations for formulating crowdsourced tasks.
  5. In the experiments, the requestors had the workers rate a diverse set of 14 Wikipedia articles.
  6. Experiment 1:
    1. Requestors submitted HITs asking workers to rate the Wikipedia articles on a 7-point Likert scale using the Wikipedia Featured Article criteria.
    2. Requestors provided a brief description of what was meant by each question.
    3. Requestors asked workers to fill out a free-form text box describing what improvements they thought the article needed.
    4. The requestors' motivation behind adding the text box was to provide a checkpoint to verify whether workers were gaming the system, i.e., cheating.
    5. Requestors paid workers 5 cents for each completed task. According to the PARC researchers, an average HIT requiring a minute may pay 5-10 cents, which corresponds to an hourly wage of $3-6.
    6. Requestors analyzed the results and discovered that (1) workers' responses were extremely fast, with a median duration of 1:30 minutes, and (2) the correlation between workers' ratings and Wikipedia admin ratings was only marginally significant (r = 0.50, p = .07).
  7. Requestors changed the structure of the HITs because the quality of the submitted work had declined.
  8. Experiment 2:
    1. At the start of the HIT, requestors introduced four questions that had verifiable, quantitative answers. The questions focused on characteristics of the Wikipedia article, such as the number of references, images, and sections it had. Requestors also asked workers to summarize the article in 4-6 keywords.
    2. The rest of the setup for Experiment 2 remained the same as in Experiment 1.
    3. Requestors compared workers' performance with that of admins, highly experienced Wikipedia users.
    4. Requestors analyzed the results and discovered that (1) the median response time increased to 4:06 minutes, and (2) the correlation between workers' ratings and Wikipedia admin ratings was significant (r = 0.66, p = 0.01). (A sketch of this correlation computation appears at the end of this subsection.)
  • Environments
  1. Lab setting: Unfortunately, we didn't have the opportunity to observe the requestors' activities in the lab environment.
  2. Online setting: Requestors designed the Wikipedia article analysis HITs using the AMT platform and made payments through the AMT system.
  • Objects (Man-made artifacts)
  1. Amazon Mechanical Turk, micro-task market
  2. Wikipedia, an online encyclopedia that allows users to create and edit content.
  • Interactions
  1. Requestors - Wikipedia
  2. Requestors - Workers (indirect, there is no mention of feedback or direct communication)
  3. Requestors - AMT
  4. Requestors - Requestor (The paper doesn't mention this explicitly, but the requestors were collaborating with each other and discussing ideas about the experiment.)
  • Users
  1. The requestors in the study are human-computer interaction researchers from the Palo Alto Research Center.
  2. Requestors' names: Aniket Kittur, Ed H. Chi, Bongwon Suh
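
To make the reported statistics concrete, here is a minimal sketch, in Python, of how the correlation between worker and admin ratings can be computed. The ratings below are invented for illustration; they are not the paper's data.

  # Pearson correlation between per-article worker and admin ratings,
  # yielding (r, p) values of the kind reported above. Data is made up.
  from scipy.stats import pearsonr

  # Hypothetical mean worker rating and admin rating for 14 articles (7-point scale).
  worker_ratings = [4.2, 5.1, 3.8, 6.0, 4.9, 2.7, 5.5, 3.1, 4.4, 5.8, 3.5, 4.0, 6.2, 2.9]
  admin_ratings  = [4.0, 5.5, 3.0, 6.5, 5.0, 3.5, 5.0, 2.5, 4.5, 6.0, 3.0, 4.5, 6.0, 3.5]

  r, p = pearsonr(worker_ratings, admin_ratings)
  print(f"r = {r:.2f}, p = {p:.3f}")  # the paper reports r = 0.50 (Exp. 1) and r = 0.66 (Exp. 2)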

Observations about Workers:

  • Activities:
  1. Workers on AMT participated in the requestors' experiments.
  2. Experiment 1:
    1. We observe that 58 workers provided 210 ratings; 44.28% of the ratings were submitted within the first 24 hours.
    2. Workers submitted 30.5% of responses in less than a minute, with a median duration of 1:30.
    3. About 48.6% of the responses to the free-form text box question were uninformative.
    4. A small group of 8 workers gave invalid responses and tried to cheat.
    5. Workers spent a small amount of time evaluating the quality of articles and submitted less accurate ratings compared to the Wikipedia admins.
  3. Workers' responses to the HITs changed as the structure of the experiment varied.
  4. Experiment 2:
    1. In the new experiment (described above), 124 workers provided 277 ratings for the 14 articles.
    2. Only 6.5% of responses were completed in less than a minute; the median task duration increased to 4:06.
    3. Workers spent significant time evaluating the quality of articles and submitted more accurate ratings, close to those of the Wikipedia admins.
  • Environments
  1. Home/office setting: Unfortunately, we didn't have the opportunity to observe the workers' activities in their environment.
  2. Online setting: Workers analyzed the quality of the Wikipedia articles and submitted ratings using the AMT platform. Workers were paid through AMT.
  • Objects (Man-made artifacts)
  1. Amazon Mechanical Turk, micro-task market
  2. Wikipedia, an online encyclopedia that allows users to create and edit content.
  • Interactions
  1. Worker - Wikipedia
  2. Requestors - Workers (indirect, there is no mention of feedback or direct communication)
  3. Workers - AMT
  4. Workers - Workers (The paper doesn't mention this, but it is possible workers were communicating about the task using online forums.)
  • Users
  1. Workers on AMT, the micro-task marketplace.
  2. Workers' names: unavailable

Requester perspective: The Need for Standardization in Crowdsourcing

Source of observations: The Need for Standardization in Crowdsourcing, Ipeirotis et al., 2011 (see also the blog).

Observations about Requestors:

  • Activities:
  1. We observe that experienced requestors learn from their previous mistakes and fix the design problems associated with their tasks.
  2. New requestors go through a steep learning curve to get the task design right and spend a significant amount of money, effort, and time.
  3. Requestors work independently to design the work request, prices, and task evaluation methods.
  4. Requestors cannot analyze market conditions when deciding the prices/wages for their work. Once requestors decide the price, it remains constant until the batch expires.
  5. Requestors cannot screen, train, and incentivize workers to perform a particular task over a long duration.
  6. Requestors enjoy the freedom and monopoly to control payments and the pricing structure.
  7. Most requestors post similar or repetitive tasks on AMT.
  8. Requestors can post tasks that may involve illegal activities.
  9. Requestors have no control over the timeline indicating when their task will be completed.
  10. Requestors face huge uncertainty about the quality of results.
  11. Requestors are not subject to any reviews or rating downgrades.
  • Environments
  1. Physical setting: Unfortunately, we don't have enough information about the physical setting in which the requestors in this research operate.
  2. Online setting: Requestors post HITs using the AMT platform and make payments using the AMT payment system.
  • Objects (Man-made artifacts)
  1. Amazon Mechanical Turk, an online labor market
  • Interactions
  1. Requestors - Workers
  2. Requestors - AMT
  • Users
  1. The author builds the hypothesis and discussion around the requestors in the online labor market.

Observations about Workers:

  • Activities:
  1. Workers face a steep learning curve to understand the interfaces and instructions created by different requestors.
  2. As every requestor has different quality requirements, workers find it hard to adapt to the uncertainty of an ambiguous definition of quality.
  3. Workers have the freedom to choose tasks that vary in difficulty and skill requirements. They have the freedom to enter and leave the marketplace on their own terms.
  4. Workers can spam the marketplace.
  5. Workers' reputation is always at stake. If requestors reject their tasks, workers cannot appeal.
  6. Workers face huge uncertainty about the amount of money they can make.
  • Environments
  1. Physical setting: Unfortunately, we didn't have the opportunity to observe workers' activities in their environment.
  2. Online setting: Workers interact with requestors through AMT.
  • Objects (Man-made artifacts)
  1. Amazon Mechanical Turk, micro-task market
  • Interactions
  1. Requestors - Workers (indirect, there is no mention of feedback or direct communication)
  2. Workers - AMT
  • Users
  1. The author builds the hypothesis and discussion around the representative behavior of workers.

Both perspectives: A Plea to Amazon: Fix Mechanical Turk

Source of observations: A Plea to Amazon: Fix Mechanical Turk!, Ipeirotis, 2011.

Observations about Requestors:

  • Activities:
  1. Requestors find it hard to design, create, and post a task on AMT.
  2. Requestors spend a significant amount of time and money designing tasks to be completed by micro-laborers. They use command-line tools to post tasks.
  3. Requestors cannot distinguish good workers from bad ones. Most of the time, requestors assume all workers are bad and pay low wages to both types.
  4. Requestors use iterative processing, i.e., posting the same task repeatedly, to avoid spam and ensure high quality (see the sketch at the end of this subsection).
  5. Requestors find it hard to trust workers. They don't have any access to workers' past work history.
  6. Requestors can either accept the work and pay, or reject the work and refuse to pay. Thus they are free to keep the deliverables even after rejecting good work.
  7. Requestors cannot categorize or tag their work by type.
  8. Most requestors do not pay on time.
  9. Experienced requestors post tasks in small batches at periodic intervals.
  10. If the quality of submitted work is bad, a new requestor seeks expert help or leaves the market.
  • Environments
  1. Physical setting: No information about requestors' physical setting is provided.
  2. Online setting: Requestors post HITs using the AMT platform and make payments using the AMT payment system.
  • Objects (Man-made artifacts)
  1. Amazon Mechanical Turk, an online labor market
  • Interactions
  1. Requestors - Workers
  2. Requestors - AMT
  • Users
  1. The author builds the hypothesis and discussion around the requestors in the online labor market.
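
To illustrate the iterative, redundant processing mentioned in the activities list above, here is a minimal sketch, in Python, of majority-vote aggregation over repeated tasks. The labels and agreement threshold are hypothetical, not from the source.

  # Redundancy-based quality control: post the same task to several workers
  # and keep the majority answer; a lone spammer's label gets outvoted.
  from collections import Counter

  def majority_answer(answers, min_agreement=0.5):
      """Return the majority answer if it clears the agreement threshold, else None."""
      top, votes = Counter(answers).most_common(1)[0]
      return top if votes / len(answers) > min_agreement else None

  print(majority_answer(["cat", "cat", "dog"]))  # -> cat (spammer outvoted)
  print(majority_answer(["cat", "dog"]))         # -> None (no consensus; repost the task)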

Observations about Workers:

  • Activities:
  1. Workers use Turkopticon and Turker Nation to learn about requestors' behavior.
  2. A genuine worker doesn't complete many HITs from a new requestor until they build strong trust based on various factors related to payment and fair treatment.
  3. Most workers complete a small fraction of a new requestor's HITs and observe how the requester behaves.
  4. Workers do not like mass rejections on a big batch, and they worry about unreasonable requestor behavior.
  5. Spammers target HITs that are big and come from a new requestor.
  6. Workers cannot see objective characteristics of requestors when deciding whether to choose the work or not.
  7. Workers do not know how long it will take to receive money for the work they have completed.
  8. Workers cannot review a requestor's rejection rate.
  9. Workers cannot appeal a rejection from a requestor.
  10. Workers cannot see the total volume of posted work to decide whether it is worth the time and money to learn a new requestor's task.
  11. Workers cannot search for requestors or interesting tasks.
  • Environments
  1. Physical setting: Unfortunately, we didn't have the opportunity to observe workers' activities in their environment.
  2. Online setting: Workers interact with requestors through AMT.
  • Objects (Man-made artifacts)
  1. Amazon Mechanical Turk, micro-task market
  • Interactions
  1. Requestors - Workers (indirect, there is no mention of feedback or direct communication)
  2. Workers - AMT
  • Users
  1. The author builds the hypothesis and discussion around the representative behavior of workers.

Do Needfinding by Browsing MTurk-related forums, blogs, Reddit, etc

We browsed MTurkForum, MTurkGrind, and Reddit to explore turkers' and requestors' experiences, knowledge, and anger. In what follows we summarize their needs:

  1. Workers need to change their location after they move to a different state.
  2. Workers need to know the justification for rejections of legitimate work. Vredesbyrd said, "I'd imagine the first step in creating a worker oriented crowd sourcing platform would be to find a way to ensure legitimate work can't be unfairly rejected by a requester."
  3. To avoid a feeling of inequality, workers need to know what ethical and work standards the marketplace adheres to. In the wish list, Admiraljohn, one of the turkers, requests that Amazon be more transparent about worker standards and further urges it to provide "clarification on the rejections".
  4. Turkers need to search for quality HITs and requesters. DCI, one of the turkers, said, "As anyone who has worked on mturk for a while knows, it is very important for workers to keep track of HITs and requesters that they like to work for again and to be able to locate these HITs quickly after they are posted."
  5. Requestors need advice on creating and pricing HITs and on improving workers' experience. From Reddit: "As a scientist who uses MTurk to collect social science data, I'd like to know how to improve the experience for Turkers taking my HITs. I've heard that MTurk can be frustrating when requesters don't take workers' experiences into account. Do you have any suggestions for how I can be a good requester?"
  6. Requestors need to reverse unfair rejections rather than compensate for them with bonuses. The cost of a rejection is higher because it limits the future HITs a worker can attempt.

Synthesize the Needs You Found

Worker Needs

  • Workers need to be treated fairly and respectfully
    • Evidence: The responses below, from the Turkopticon and Being a Turker research, highlight unfair treatment of workers. The diagrams and POST-ITs above also show this.
      • "Got a mass rejection from some hits I did for them! Talked to other turkers that I know in real life and the same thing happened to them. There rejection comments are also really demeaning. Definitely avoid!"
      • “I don’t care about the penny I didn’t earn for knowing the difference between an apple and a giraffe, but I’m angry that MT will take requester’s money but not manage, oversee, or mediate the problems and injustices on their site.”
      • "We can be rejected yet the requestors still have our articles and sentences.. Not Fair"
    • Interpretation: Unreasonable rejections and low payments amount to disrespectful and unfair treatment of workers, who spend hours completing requestors' tasks. Workers find this discouraging and unethical. Most requestors do not realize that workers are humans, not machines, and that they deserve respect.
  • Workers need to be trusted
    • Evidence: The research conducted by Kittur et al. (2008) shows that it is hard for requestors to trust workers. The authors show that spammers can significantly degrade the quality of work.
    • Interpretation: Workers have limited opportunity to interact directly with requestors and understand the tasks. This isolation makes it hard for them to establish a trusting professional relationship with requestors. Because of a small percentage of bad workers, requestors come to believe that all workers are bad.
  • Workers need to know the larger picture and the objectives behind the micro-tasks created by requestors.
    • Evidence: It is hard for workers to deal with payment uncertainty. In A Plea to Amazon: Fix Mechanical Turk!, the author presents the case where a small-task requestor can take advantage of a worker by hiding details related to the task.
    • Interpretation: Workers cannot see the total volume of posted work to decide whether it is worth the time and money to learn a new requestor's task. Most of the time, workers do not know whether a small-task requestor will post more tasks in the future. This creates uncertainty about income and employment. Knowing what lies ahead would help workers organize their activities and earn a decent living.

Requester Needs

  • Requesters need to price the task fairly
    • Evidence: In this blog, the author highlights how requesters struggle to decide what the price of a task should be. One of the workers in the Being a Turker study mentioned that "The $1/hour requester should be banned from mTurk".
    • Interpretation: It's difficult for requesters to determine a fair price for a given task. Requesters cannot use any historical pricing data to decide on a fair price. In addition, the lack of information about workers' skill sets makes it nearly impossible to price the task accurately.
  • Requesters need to establish good relationship with workers
    • Evidence: In the Being a Turker research paper, one of the workers asks, "Does anyone know whether this is a good requestor to work for or not?"
    • Interpretation: It's quite difficult for requesters to meet and interact with every single worker. There are thousands of workers and comparatively few requesters. It would be easier for requesters to engage with a handful of high-performing workers; however, there are no criteria for classifying a worker as high-performing.
  • Requesters need to provide quick review, feedback, and justification for rejections
    • Evidence: A worker posts his frustration: "I did one hit yesterday, and its still pending. In my opinion the pay is too low for the time required, the pay is also too slow to look past its low payment all of which is assuming you get paid at all because it is majority rules graded. Big thumbs down for me." This affects the requestor's credibility in the worker's mind.
    • Interpretation: The large-scale workforce limits requestors' ability to provide individual feedback. Some requestors use automated systems to handle reviews and rejections; therefore, workers' concerns never reach the requestors. In addition, requestors do not have any legal obligation to provide feedback to workers.