Milestone 2 taskforce

From crowdresearch


Attend a Panel to Hear from Workers and Requesters


What were the most salient insights you got from hearing the workers and requesters on the panel? What did you learn about their needs?


  • Workers need an easy and effective way to select work which might take into account
    • requesters’ information
    • offered payment
    • time requested
    • topic
  • Workers need people to talk to and to feel understood by others (within community and in the offline social environment).
  • Workers need to collaborate with other workers.
  • Workers need to tell requesters their opinion and how to improve things.
  • Workers need work to be as granular as possible.
  • Workers need to be able to compare requesters and tasks.
  • Workers need a way to reserve the work they plan to do; currently they open several windows at the same time to hold tasks.
  • Workers need to communicate easily with requesters and to get responses from them.
  • Workers need explanations on why they get rejected.
  • Workers need to be allowed to work regardless of where they work from (country exclusions, etc.).
  • Workers need to overcome the cold-start problem (regarding skills).
  • Workers need to face a low learning curve; otherwise they won't join the platform.
  • Workers need to have their work accepted from time to time.
  • Workers need to know where their ratings come from.
  • Workers need better management of different international time zones (for communicating with requesters, finding work, and working).
  • Workers need to see clearly how to translate their real-world skills to microtask crowdsourcing (how to communicate them, how to realize whether they are useful).
  • Workers need better instructions on each task and what its purpose is.


  • Requesters need to have a way to get back to their workers to offer them more work and continue the cycle.
  • Requesters need to design tasks in a way that they are not too boring. Especially after a while, workers may get extremely bored; introducing bonuses helps.
  • Requesters need to get summaries of what is discussed / evaluated (they cannot check all the emails or forums).
  • Requesters need to imagine themselves on the other side (to calculate time estimations etc.).
  • Requesters would love to have a two-step (iteration) process.
  • Requesters have to be aware of the selection bias with respect to the amount of the reward.
  • Requesters have to provide appropriate training materials.
  • Requesters need to know who their workers are.
  • Requesters need better support from the platform (no reinventing the wheel at the microtask level for, e.g., selection/screening; improve platform workflows instead).
  • Requesters need quality assurance mechanisms that work effectively without requiring a big overhead (e.g. gold-standard selection).
  • Requesters need to find methods to get the right people for their tasks.
  • Requesters and workers need to have a way to communicate and reach “a mutual understanding of the task”.
  • Requesters need to identify what the right people for their task look like.
  • Requesters need ratings on different workers' skills.
  • Requesters need to take into account that workers are aware that flat rates make them (consciously) work less effectively.
  • Workers would like to have a guarantee on the money they can do per week.
  • A nice idea from an oDesk worker: some freelancers are motivated to work for free in order to learn, as in internships.
  • Requesters need to keep publishing during academic breaks; tasks are roughly 50% from industry and 50% from academia.
  • Requesters and workers need to find a solution together for a stable amount and quality of work available.
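The gold-standard quality assurance mentioned above can be sketched roughly as follows. This is an illustrative sketch under assumed names and an assumed 0.8 threshold, not any platform's actual implementation: a small set of tasks with known answers is mixed into the batch, and a worker's accuracy on those tasks estimates their overall reliability.

```python
def gold_accuracy(answers, gold):
    """Fraction of gold-standard tasks the worker answered correctly."""
    hits = sum(1 for task_id, expected in gold.items()
               if answers.get(task_id) == expected)
    return hits / len(gold)

def filter_workers(all_answers, gold, threshold=0.8):
    """Keep only workers whose gold accuracy meets the threshold."""
    return {worker: answers
            for worker, answers in all_answers.items()
            if gold_accuracy(answers, gold) >= threshold}

# Example: two gold tasks hidden among the real ones (all data illustrative).
gold = {"g1": "cat", "g2": "dog"}
all_answers = {
    "worker_a": {"g1": "cat", "g2": "dog", "t1": "bird"},
    "worker_b": {"g1": "cat", "g2": "fox", "t1": "fish"},
}
print(sorted(filter_workers(all_answers, gold)))  # ['worker_a']
```

The appeal of this scheme for requesters is exactly the low overhead discussed in the list above: only the gold tasks need known answers, not every response.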



Turkopticon

What worker needs are discussed or implied by the reading? What requester needs are discussed or implied by the reading?

The authors observed the way AMT works, asked crowd workers how they would define their “Bill of Rights”, and implemented a disruptive plugin for AMT that lets workers rate their relationships with requesters.


  • need to know which requesters to trust: when workers browse available work in online labor marketplaces, they would like more information about the requesters in order to decide whether to trust them.
  • need to speak up and be heard: workers need to share their experiences with requesters with the community of crowd workers (i.e. advertise and evaluate their relationships with requesters). They need to be visible and express their opinions freely, without being penalized for doing so.
  • need to feel protected and supported by the community.
  • need to feel respected and fairly treated by the requesters
    • fair acceptance / rejection
    • explained acceptance / rejection
    • fast payment
  • need (and like) to help each other.
  • need an effective mechanism to communicate with requesters.
  • need infrastructure similar to what exists in the traditional workplace; that's why they think of a minimum wage.
  • need a balance between anonymity and reputation.
  • need to get responses from the platform and the requesters to questions and complaints.
  • need to see the platform managing injustices.
  • need to know about the (1) communicativity, (2) generosity, (3) fairness, and (4) promptness of requesters.

Turkopticon allows workers to give numeric ratings on these qualities, and they can also write further free-form reviews of the requesters.
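As a rough illustration (not Turkopticon's actual code or data model), per-requester reviews over these four qualities could be stored and averaged like this; the 1-5 scale and all names are assumptions:

```python
from dataclasses import dataclass
from statistics import mean

# The four qualities named in the reading.
QUALITIES = ("communicativity", "generosity", "fairness", "promptness")

@dataclass
class Review:
    worker_id: str
    scores: dict        # quality name -> rating on an assumed 1-5 scale
    comment: str = ""   # optional free-form text review

def average_scores(reviews):
    """Average each quality over all reviews of a single requester."""
    return {q: round(mean(r.scores[q] for r in reviews), 2) for q in QUALITIES}

# Illustrative reviews of one requester by two workers.
reviews = [
    Review("w1", {"communicativity": 4, "generosity": 3,
                  "fairness": 5, "promptness": 4},
           "Pays quickly, clear instructions."),
    Review("w2", {"communicativity": 2, "generosity": 3,
                  "fairness": 4, "promptness": 4}),
]
print(average_scores(reviews))
```

Aggregating numeric scores per quality, while keeping the free text alongside, is what lets workers compare requesters at a glance.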

(indirect observations)

  • workers need to receive specific instructions on how they should work on tasks


  • need an evaluation system that easily filters work for acceptance or rejection, as it is not feasible to go through every single response they receive.
  • need to listen to workers' feedback in order to improve their microtasks and attract a more interested workforce, but they need an efficient method to process workers' comments and feedback, since they cannot spend hours on email.
  • requesters need to be fairly evaluated.

(indirectly mentioned)

  • requesters need to define what is good and bad work for their microtasks before they assess workers’ answers.


  • The authors felt the need to support employers, because the platform works, after all, thanks to the employers who publish the work.

(the reading motivated these thoughts, but they don’t say this explicitly)

  • need to define a legal framework on which requesters and contributors agree.
  • need to take compromises in order to take into account a global scenario (workers and requesters may come from any country in the world).
  • need to be competitive.
  • need to be able to offer a sustainable amount of work and be able to recruit a sustainable workforce.
  • need to offer a (technologically) reliable service.


  • need to be adapted if design changes occur in the crowdsourcing process.
  • need to be objective (tactical quantification), simple and collective.
  • reviews need to be globally moderated.

Being a Turker

What worker needs are discussed or implied by the reading? What requester needs are discussed or implied by the reading?

In this paper, the authors study the mechanics of Turker Nation, an online community for Turkers. Through this study, the authors aim to answer questions about the relationships and dynamics between Turkers and requesters. This work presents a different perspective: that of a Turker.


  • Workers perform micro-tasks primarily to earn money. While there may be workers who enjoy the tasks/HITs and perform them for lower prices because they are fun, the large majority agree that the monetary incentive is the driving motivation.
  • Turkers like to learn ways to become more efficient and earn more money; they see Turker Nation as an ideal place to achieve this.
  • They set monetary targets to achieve on a regular basis.
  • Some crowd workers are full-time Turkers, while others work part-time. These circumstances, along with a Turker's experience, influence the amount of money one can earn.
  • Turking is an important source of income for some Turkers, making AMT a 'safety net' for them. Turker Nation contains discussion threads where people share their problems and advice.
  • Turkers discuss requesters on the forum to warn other workers about 'bad requesters' or share news about 'good requesters'.
  • Turkers value polite communication channels with requesters. While a requester that provides clear instructions and approves the work is lauded, the way requesters communicate with Turkers also influences how their mutual relationship evolves.
  • Genuine Turkers take responsibility when their work is rejected for fair reasons.
  • Turkers often discuss acting collectively in different situations.


  • Requesters need workers to contribute, so they are willing to keep the workers satisfied.
  • Requesters need workers that provide useful and diligent responses.
  • Requesters are wary of scammers and malicious Turkers.

Crowdsourcing User Studies with Mechanical Turk

What worker needs are discussed or implied by the reading? What requester needs are discussed or implied by the reading?

In this paper the authors investigate the suitability of microtask markets for collecting user measurements and carrying out user studies. They reflect on the feasibility of doing so, yet caution that special care is required when designing such studies. The authors experiment with AMT and present findings that confirm both the promise and the shortcomings of using MTurk for user studies.


  • Workers are compensated for the work they contribute to and successfully complete through monetary rewards.
  • Workers use their valuable ‘human intelligence’ to complete simple tasks.
  • The diversity in the workers can be a boon or a bane depending on the microtask at hand.
  • The lack of constraints with regard to the 'expertise' required to work on tasks gives (seemingly*) equal opportunity to different workers to take on a task. *Workers still need to rely on their experience in order to get the most out of their 'Turking'.


  • Requesters often need to trade off between the length of the task, the time allocated for task completion, and the monetary costs.
  • Requesters can gather large amounts of *representative* data at relatively low costs.
  • Any requester can post a task on AMT and is free to fix the monetary award.
  • Requesters get access to easily available ‘human-intelligence’.
  • Requesters can gather user measurements at rapid speeds.
  • Requesters have to contend with workers that attempt to ‘game’ the system without providing useful responses.
  • A requester can leverage the crowd to approximate expert-like judgements.
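The last point, approximating expert-like judgements from the crowd, is commonly done by collecting redundant answers per task and taking a majority vote. A minimal sketch, with illustrative names and data (not the paper's own method):

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common label among redundant worker answers."""
    return Counter(labels).most_common(1)[0][0]

def aggregate(responses):
    """responses: task id -> list of labels from independent workers."""
    return {task: majority_vote(labels) for task, labels in responses.items()}

# Illustrative data: three independent worker labels per image.
responses = {
    "image_1": ["cat", "cat", "dog"],
    "image_2": ["dog", "dog", "dog"],
}
print(aggregate(responses))  # {'image_1': 'cat', 'image_2': 'dog'}
```

Redundancy is what makes the cheapness of MTurk useful here: individual answers may be noisy, but the aggregate tends toward the expert label as long as most workers answer in good faith.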

A Plea to Amazon: Fix Mechanical Turk

What worker needs are discussed or implied by the reading? What requester needs are discussed or implied by the reading?

In his blog post “A Plea to Amazon: Fix Mechanical Turk!” [1], Panos Ipeirotis discusses ways to improve MTurk for both requesters and workers. Here are the most interesting points:

For requesters:

"A Better Interface To Post Tasks"

  • Most requesters have crowdsourced workflows and few have one-task projects.
  • Requesters need easier ways to create tasks without having to spend money on developers to do it for them.

"A Worker Reputation System"

  • Define more public qualifications tests.
  • Create public resumes for workers.
  • Create worker ratings.

For workers:

"A Requester Trustworthiness Guarantee"

  • How fast the requester pays.
  • The requester's rejection rate.
  • The requester's rate.
  • How much work the requester has posted.

"A Better Task Search Interface"

  • The ability to browse tasks per category.
  • Create a better search engine.
  • Propose HITs to workers.


The Need for Standardization in Crowdsourcing

What worker needs are discussed or implied by the reading? What requester needs are discussed or implied by the reading?

List of the most interesting needs for workers and for requesters:


  • The labor markets operate in an uncoordinated manner. Employers generate work offers independently, deciding on terms and prices without relying on any standards. While this approach is relatively flexible, it is at the same time inefficient.
  • There is no cooperation between requesters that would allow improving the way jobs are delivered to Turkers so as to make the most of their performance. For instance, the UI for labeling pictures might be improved through knowledge exchange, avoiding the need to implement it from scratch.
  • Quality assurance is crucial for requesters. It is important to create mechanisms that guarantee a minimal quality level with high certainty.


  • Workers need to learn the intricacies of the interface for each separate employer.
  • Workers need to adapt to the different quality requirements of each employer.

Do Needfinding by Browsing MTurk-related forums, blogs, Reddit, etc

CrowdFlower community Twitter account for contributors (workers):

  • workers need to show their achievements. Evidence: there are plenty of tweets showing a screenshot of the level they reached. Interpretation: even if money is the big motivation, reputation always plays an important role in social systems.
  • the platform needs to show the world that its workforce is happy and willing to promote it. Evidence: CrowdFlower constantly retweets grateful messages from the workforce (e.g., see the retweets of 22.02). Interpretation: the platform can influence mass opinion this way (if it is aligned with other information sources like Turkopticon).

Reddit Forum for requesters:

  • they need to get accepted by workers and follow the de facto standards of the community of requesters. Evidence: they ask for advice on payments, and for any other suggestions from workers, on Reddit. Interpretation: if they do not follow the “rules” they will not attract people to their tasks.
  • they need to learn from past experiences. Evidence: there are several posts asking for help with experiments. Interpretation: they do not want to fail or reinvent the wheel, and want to make the most of others' experience.
  • they need their work done fast. Evidence: they ask on Reddit for tricks to get their work found and accepted quickly. Interpretation: they want to know how to beat their competitors, i.e., requesters with other HITs (since the time workers spend on other HITs/requesters is time they do not invest in their own).
  • they need to know what to expect from marketplaces. Evidence: there was one post asking about the quality to be expected from microtasks published on CrowdFlower and MTurk, because the requester was surprised to get no spam. Interpretation: the more marketplaces there are, the more multi-criteria information requesters need to compare them and decide where to publish.

From own experience:

  • requesters need technical support from experts in the platform (e.g. CrowdFlower). Evidence: whenever I have a question about the API or new features introduced in the UI of CrowdFlower, I ask the CrowdFlower success center directly. There are other forums where developers might exchange knowledge, but I first ask the platform support people. Interpretation: it is faster and more trustworthy to ask the people who created or maintain the platform (at least for certain things); only when the support center does not provide the information do I look at community discussions, perhaps because the community is not that big (compared to other open source projects).

Synthesize the Needs You Found

Key needs of worker-requester relations

  • TRUST: both workers and requesters need to trust the other side. Evidence: there are forums where workers discuss how well requesters behaved in the past, and requesters check information about workers' past accuracy. Interpretation: as in any other social ecosystem, trust forms the basis of relationships. When there is an investment of time and money, people need to be sure that the other side will respect their work and behave as expected.
  • TRADITIONAL WORKPLACE: workers especially need to import procedures that have already been adopted in the traditional workplace. Evidence: the discussions about wages, etc. Interpretation: it is important to keep the things that have made our society progress. Human resources and the workplace have been under debate for a long time, so it is at least reasonable to consider what to adopt from that context, and how, in (microtask) crowdsourcing. Obviously not everything can be copied literally, because there are several differences between microtask crowdsourcing and traditional work (e.g. the global setting, the different granularity of tasks, more accountable work done in less time).
  • FAIRNESS: both workers and requesters need to be treated fairly. Evidence: both mentioned this in open discussions and in the panels. Interpretation: fairness is part of ethical behaviour. Workers should not be discriminated against and should receive the payment they deserve. Requesters should not be cheated, nor unreasonably accused of bad behaviour.
  • KNOWLEDGE: workers need to know who they are working for and what the purpose of their work is. Requesters need to have certain knowledge of the workers who work for them. Evidence: panels, papers. Interpretation: knowing about cross-platform work history could be useful.

Key needs of worker - requester communication

  • AVOID COMMUNICATION OVERLOAD: workers and requesters need to communicate with each other efficiently. Evidence: during the panels this was a much-discussed topic. Interpretation: feedback is useful, but only if it does not introduce an unmanageable overload.
  • CLARITY AND TRANSPARENCY: workers need clear explanations of why their work was rejected, and requesters need clear and concise feedback explaining why workers disapprove of their work. Evidence: this topic was discussed in the panels, but also in forums. Interpretation: negative feedback and negative actions may lead to rejection and frustration. However, fair and constructive information can help people improve and learn from the experience.
  • LEARNING: workers need to learn how to work in marketplaces and how to work on particular tasks, while requesters need to learn how to instruct workers. Evidence: the topic was discussed in the panels, and we also observed this while acting as requesters in our research scenarios. Interpretation: learning leads to improvement and qualification. As a consequence, people feel more engaged and more confident, which is positively reflected in performance and subsequently in satisfaction.

Key needs of worker and requester communities

  • HELP: workers need to ask for and offer help to other workers. The same applies to requesters. Evidence: forums are full of questions asking for opinion, help, suggestions. Interpretation: the online context enables the creation of such communities. The help one can get from other experienced persons in the same role is much more valuable than a one-way static tutorial. Sharing task templates and crowdsourced data (with the corresponding consent of involved agents) could help others a lot.
  • SUPPORT: workers need support from other workers and the same is for requesters. Evidence: discussions on the panels and the papers above. Interpretation: it is important to have collective support against e.g. injustices.
  • RECOGNITION: workers, requesters, and platforms need to build their reputation within their own communities. Evidence: workers show pictures of their scores and achievements, while platforms and requesters advertise messages from contributors about being grateful and satisfied with them. Interpretation: as studies have already identified, it's not only about the money. Recognition is a powerful reason that makes people work (as workers or requesters) with more motivation and, in the end, with better quality.

Key needs of platform owners

  • SUSTAINABILITY: platform owners need to make their platforms sustainable. Evidence: having enough work and workers is crucial for keeping crowdsourcing running. Interpretation: sustainability may depend on many things; for example, the way newcomers are attracted to the platform and the way already registered users are retained. Furthermore, the platform needs to be sustainable throughout the year.

Key needs of workers

  • WORK SELECTION: workers need effective and efficient ways to select the work they will do. Evidence: this is a recurring claim of workers in panels, research studies, etc. Interpretation: workers need tools that facilitate this step of the process, which is crucial to keeping them motivated.
  • EXPERTISE: workers need to be able to use their expertise, instead of getting rejected due to a lack of it. Evidence: workers mention this all the time in panels. Interpretation: they could offer their expertise (also in microtask crowdsourcing) by requesting work that requires it. There was a very interesting proposal in one of the panels from an oDesk worker about offering training as a reward (instead of money) in cases where expertise is low.

Key needs of requesters

  • AVOID DECAY: requesters need to prevent workers from getting bored after a while. Evidence: this is a recurring comment from workers. Interpretation: we define microtasks as simple tasks, but we should rethink workflows to make them less repetitive and boring if we want to increase worker engagement.
  • TASK DESIGN: requesters need to design tasks in a reasonable way, and to do this they need to be able to test their time estimates, see whether the instructions lead to the correct results, and check whether the assigned payments are sufficient. Evidence: own experience. Interpretation: assessing these things at an early stage, with a subset of the data, might be crucial for the success of the tasks. Support from the platform infrastructure to do this easily, and to design a proper UI and workflows, is important.
  • STANDARDIZATION OF QUALIFICATIONS: requesters need to deal with standardized qualification tests. Evidence: the papers above, the panels and own experience. Interpretation: it makes no sense to invent thousands of tests for the same thing, when someone has already done research on how to do it optimally.
  • CROWD SELECTION: requesters need to identify the kind of people they need for their tasks and be able to recruit suitable people. Evidence: (above and own experience) especially in knowledge-intensive tasks, random assignment might lead to worse performance. Ratings on different skills may provide a fine-grained description of people.
  • TECHNICAL SUPPORT: requesters need technical support from the platform owners. Evidence: own experience; whenever I have a question about the API or new features introduced in the UI of CrowdFlower, I ask the CrowdFlower success center directly. There are other forums where developers might exchange knowledge, but I first ask the platform support people. Interpretation: it is more trustworthy to ask the people who created or maintain the platform (at least for certain things); only when the support center does not provide enough information does it make sense to look at community discussions, perhaps because the community is not that big (compared to other open source projects).