Milestone 2 singularity

Attend a Panel to Hear from Workers and Requesters

Deliverable

Sources of finding work

Observations

(drawn from "SpamGirl")

1. Primarily through massive forums which are dedicated to sharing opportunities.

2. Daily HITs threads that let "altruistic" people share HITs and opportunities.

3. Scripts that can automate the task of finding work.

4. Websites and tools such as

i. Turkalert

ii. Crowdworkers

iii. Hitscraper

iv. Turkmaster

5. Chatrooms dedicated to this purpose.

6. Be a part of the community and get involved with the people in it.

7. Give back to the community by helping others out.

(drawn from Manish):

1. Usage of the above-mentioned sources varies with the seriousness of the worker: some rely on these platforms for serious expenses, others only for casual ones.

Appealing characteristics of HITs and jobs: the factors that a worker looks for when searching for HITs.

Observations

(drawn from Manish)

1. Workers often use tools and scripts that can filter jobs and tasks based on the criteria set by the worker. The quality of jobs can be roughly estimated using these tools.

2. For example, a worker might use a tool to set criteria that filter out all jobs paying less than a certain amount.
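
To make the filtering idea concrete, here is a minimal sketch of how such a worker-side filter might rank HITs. It is illustrative only, not the code of Turkalert, Hitscraper, or any other tool named above; the HIT records, field names, and threshold values are assumptions.

    # Illustrative sketch of a worker-side HIT filter.
    # The HIT records here are hypothetical; real tools scrape them from
    # MTurk listing pages or shared spreadsheets.

    MIN_REWARD = 0.10          # dollars per HIT (assumed threshold)
    MIN_HOURLY_RATE = 6.00     # dollars per hour (assumed threshold)

    hits = [
        {"title": "Tag 10 images", "reward": 0.05, "est_minutes": 2, "requester": "Acme Research"},
        {"title": "Transcribe a receipt", "reward": 0.50, "est_minutes": 3, "requester": "DataCo"},
        {"title": "20-minute survey", "reward": 1.00, "est_minutes": 20, "requester": "UnivLab"},
    ]

    def hourly_rate(hit):
        """Rough effective wage implied by the reward and estimated duration."""
        return hit["reward"] / (hit["est_minutes"] / 60.0)

    worthwhile = [
        h for h in hits
        if h["reward"] >= MIN_REWARD and hourly_rate(h) >= MIN_HOURLY_RATE
    ]

    for h in sorted(worthwhile, key=hourly_rate, reverse=True):
        print(f'{h["title"]!r} by {h["requester"]}: ~${hourly_rate(h):.2f}/hour')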

Motivations for workers or requesters to use a crowdsourcing platform and stick to it or abandon it.

Observations

(drawn from "SpamGirl")

1. The primary reason seemed to be money. The financial aspect appears to be the biggest factor that draws people to these platforms. For example, many people first learned that they could make money on these platforms through advertisements, friends, relatives, etc.

2. Money is the biggest factor when it comes to "starting" with these platforms.

3. The motivations to go BEYOND that and help others out stem from the following factors:

i. Altruism

ii. A sense of belonging to the community

iii. Forums, blogs, and chatrooms often emulate watercooler talk, where members come together to empathize and help each other out.

iv. This helps workers deal with the tedium and frustration that can build up on a bad day.

What one can do to collaborate with other MTurk workers and requesters. What sort of interaction occurs between Turkers.

Observations

(drawn from David)

1. By getting involved with the community.

2. Using scripts, tools and other websites.

(drawn from "Spamgirl")

1. Interaction with the community ranges from helping others out to talking to other workers about opportunities, the quality of work, types of requesters, possibilities for collaboration, and general chat. The community thrives on forums, Facebook pages, blogs, and subreddits, where anything can become a matter of discussion. Some examples of things that are usually discussed include:

i. What tools are good.

ii. How to be more efficient.

iii. Which requesters to avoid.

iv. Where to make money.

2. Interaction with requesters happens primarily through email, although requesters are often invited to join the community too. Workers often help requesters set up jobs appropriately by pointing out possible mistakes, suggesting improvements in the pay structure, and recommending appropriate criteria or skill levels for the jobs being assigned. Requesters can also be asked for suggestions on how to approach a task or improve as a worker on their tasks.

3. Workers may also reach out to requesters if they believe that their work has been unfairly rejected (inadvertently or not).

4. These interactions are a lot like the ones that can happen on any other forum dedicated to a certain thing.

Comparison of different platforms such as oDesk, MTurk, etc.

Observations

(drawn from David)

1. oDesk seemed to have a lot of competition and very little work.

2. One often has to reach out to individuals and blogs in search of work, which can be scarce.

3. MTurk was far more consistent: finding work was easier and the competition less severe than on oDesk and other platforms.

Process of designing and iterating over the jobs and prices that a requester puts on a crowdsourcing platform

Observations

(drawn from Serge)

1. Pricing depends on the amount of time it takes for one to complete the work.

2. Requesters have to assess this and set wages appropriately, which often border on the minimum wage (8 or 10 dollars an hour).

3. Requesters might refrain from pricing tasks too high, as that might attract cheaters or casual workers looking to make easy money. (Gordon would pay 10 to 20 dollars for certain work.)

4. Another thing requesters need to do is assess the quality of submissions, which a researcher like Serge could do by keeping open-ended questions at the end that require a subjective answer. This lets the requester gauge the seriousness of the worker and the diligence with which they performed the task. It becomes apparent if the worker hasn't even read the instructions.
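
The pricing rule Serge describes amounts to simple arithmetic: estimate how long the work takes and multiply by a target hourly wage. A minimal sketch; the particular numbers are assumptions for illustration, not figures quoted by the panelists.

    # Sketch: price a HIT from an estimated completion time and a target hourly wage.

    def price_per_hit(est_minutes: float, target_hourly_wage: float) -> float:
        """Reward (in dollars) implied by the estimated duration and target wage."""
        return round(est_minutes / 60.0 * target_hourly_wage, 2)

    # A 5-minute task at a $9/hour target wage:
    print(price_per_hit(5, 9.0))   # 0.75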

Restrictions on crowdsourcing platforms. For example, some require an SSN to register.

Observations

(drawn from Serge)

1. The SSN restriction is purely a tax issue. US law requires declaration of income, and this is primarily why an SSN is asked for.

2. Another restriction, according to Serge, is that a requester cannot ask a worker to download any content. This probably exists to contain malware and protect workers, but Serge believed that it severely limits some of the tasks a researcher might want to assign.

Comparison of HITs and their duration with the quality of those HITs.

Observations

(drawn from Serge)

1. Workers have finite patience, attention and energy.

2. Requesters usually try to make hits as short as possible due to the above reason.

(drawn from Gordon)

1. Keep hits short.

2. Acknowledged a condition he calls "Bubble Hell", where a worker moves from one HIT to another like a drone and zones out, which can decrease the quality of the work.

Alternatives to these platforms. What would one do if these disappeared.

Observations

1. Several companies have internal pools of hired workers who can take on such tasks.

2. Google and Twitter have such divisions too, though they differ in size.

Major hurdles for new Turkers.

Observations

(drawn from David)

1. The biggest hurdle for a new Turker is finding work. It requires a lot of digging to find worthwhile tasks.

2. Another hurdle is filtering out jobs from requesters who consistently underpay or unfairly reject hits.

(drawn from Serge)

1. Working on crowdsourcing platforms has a steep learning curve when it comes to figuring out how to earn enough.

2. A new worker might face a lot of dejection and disappointment on realizing that they managed to earn very little in the first few weeks.

Actions a worker can take on getting their work rejected.

(drawn from "Spamgirl") 1. Initially, a worker used to be usually helpless and they had no choice but to suck it up and move on.

2. Now, a worker can reach out to the requester via professional emails and ask them about it. There exist templates that a worker can use to draft emails enquiring about rejections.

3. If they believe it has been unfair then they can reach out to the community to seek validation or check if anyone else has gone through the same experience.

4. People in the community spread awareness about unfair requesters who rejected HITs when they shouldn't have.

Methods of controlling quality in HITs. Criteria used by requesters to reject or accept work.

Observations

(drawn from Serge, Ranjay and Gordon)

1. Serge preferred approving almost all hits.

2. Rejection occurred only when it was extremely apparent that the worker was lackluster, did not even bother to read the instructions, or chose options at random.

3. He kept open-ended questions at the end to gauge the worker's seriousness and diligence. Some non-serious workers or cheaters would type gibberish or plagiarize answers to get past such questions easily.

4. Some requesters keep timers on pages to check the amount of time a worker is putting in.
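
One lightweight way to act on such timing data, if the requester uses the MTurk API, is to compare each assignment's accept and submit timestamps and flag suspiciously fast submissions for manual review. The sketch below uses boto3's MTurk client; the HIT ID and the 60-second threshold are placeholder assumptions.

    # Sketch: flag assignments whose work time looks implausibly short.
    # Assumes AWS credentials are configured; HIT ID and threshold are placeholders.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    HIT_ID = "EXAMPLE_HIT_ID"          # hypothetical
    MIN_SECONDS = 60                   # assumed lower bound for honest work

    resp = mturk.list_assignments_for_hit(
        HITId=HIT_ID, AssignmentStatuses=["Submitted"]
    )

    for a in resp["Assignments"]:
        # SubmitTime and AcceptTime are datetimes returned by the MTurk API.
        elapsed = (a["SubmitTime"] - a["AcceptTime"]).total_seconds()
        if elapsed < MIN_SECONDS:
            print(f'Review {a["AssignmentId"]} by {a["WorkerId"]}: only {elapsed:.0f}s')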

Alternatives for workers outside the US (for example, Indian workers).

Observations

1. Alternatives for workers outside the US are few.

2. Examples of some include Clickworker (which does not offer a lot of work) and oDesk (which David criticized for having little work and high competition).


Other general observations

1. Gianluca believed that the quality of submissions for his academic research had increased of late.

2. He was discontented with these platforms for two major reasons:

i. There is no guarantee of the time by which a task will be completed and the results returned to the requester.

ii. If a requester needs a task to be done by a specific worker, then he/she has to contact them individually through email or other means, which complicates things.

Reading Others' Insights

Worker perspective: Being a Turker

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • MTurk is the most popular microtask platform. The most active users (20%) do the majority of the tasks (80%).
  • According to the Ross survey, the majority of Turkers are US-based, but the number of Turkers of other nationalities, especially from India, is growing.
  • Like a labour market, Turkers come to an understanding about wages, even though these are less than the minimum wage.
  • Workers work for monetary gain; it is the primary reason for turking. Some believe that people turking for fun and learning will harm the community and further reduce wages.
  • More experienced Turkers made around $15k per year.
  • Some do it as a part-time job, and some do it during their office hours or on a break, just to make a little extra cash.
  • Some workers are living hand to mouth, and turking is the best alternative they have to earn money and sustain themselves.
  • Rejection by a requester means no money for the worker and a reduction in their approval rating.
  • There is no redressal mechanism for blocking, and it is difficult to prove that the requester is the one at fault.
  • Workers blame requesters who design a bad task and fall prey to cheaters/scammers. Workers are happy to assist with designing HITs.
  • Turkers help fellow Turkers find work, sometimes by supplying incorrect data in the screening questions.
  • Some workers are self-critical, understand why they got rejected, and post about it on the forum.
  • Fair play and community ethics are important to workers.
  • Workers screen jobs based on the hourly rate and the design of the HIT.
  • Time spent searching and learning is "invisible work" and is high for novices.
  • There is a fear that academics and journalists advocating for a minimum wage might get MTurk closed; there is no alternative for workers in that scenario. Workers are wary of legislation and government involvement.


2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • MTurk is biased in favour of requesters: it provides no way to rate requesters and allows requesters not to pay if they deem that the work isn't up to standard.
  • The biggest feature of Turker Nation is the requester hall of fame/shame.
  • Requesters can block a worker to ensure he/she doesn't work for them again.
  • Not many requesters directly communicate with workers about HIT design or the response to their HITs.
  • Requesters should take into account that screening questions can lead to a loss in quality.
  • Requesters being unfairly blacklisted will harm the community as a whole.
  • Lack of information on the HITs seems to increase the adversarial tension between workers and requesters.
  • Workers and their work can become invisible to requesters because of distance, anonymity, minimal communication, and electronic exchange.
  • Requesters use filters to screen workers instead of manually checking out each worker and granting individual access.
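
For reference, this filter-based screening is done on MTurk by attaching qualification requirements to a HIT. Below is a minimal sketch using boto3; the title, reward, question XML file, and the 95% approval threshold are assumptions for illustration, and the system qualification ID should be checked against the current MTurk documentation.

    # Sketch: screen workers with qualification requirements instead of manual vetting.
    # All concrete values below are placeholders.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # "000000000000000000L0" is commonly cited as the system qualification ID for
    # PercentAssignmentsApproved; verify against the MTurk docs before use.
    APPROVAL_RATE_QUAL = "000000000000000000L0"

    mturk.create_hit(
        Title="Example task (placeholder)",
        Description="Illustrative HIT showing qualification-based screening.",
        Reward="0.50",
        MaxAssignments=10,
        AssignmentDurationInSeconds=600,
        LifetimeInSeconds=86400,
        Question=open("question.xml").read(),   # hypothetical QuestionForm XML
        QualificationRequirements=[{
            "QualificationTypeId": APPROVAL_RATE_QUAL,
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [95],
            "ActionsGuarded": "Accept",   # workers below the threshold cannot accept the HIT
        }],
    )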

Worker perspective: Turkopticon

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • AMT brings together crowds of workers as a form of infrastructure.
  • Workers are positioned as "humans as a service."
  • Workers have no legal recourse against employers who reject work and then go on to use it.
  • Workers dissatisfied with a requester's work rejection can contact the requester through AMT's web interface.
  • Workers have limited options for dissent; intentionally incorrect answers meant to show dissatisfaction can be filtered out by the requester's reviewing algorithm.
  • Responses to the Bill of Rights exercise showed that workers felt they had been arbitrarily rejected, demanded faster forms of payment, and were dissatisfied with wages.
  • Some workers criticised Amazon for not overseeing the problems on MTurk and for paying little attention to their concerns.
  • Workers had conflicting opinions about unions and forums for publicly airing their views. It is difficult to find a solution that fulfills the needs of every Turker.
  • Workers fear blocking/retribution by requesters for giving a critical review on a forum; they prefer anonymity.
  • Workers from a developing country are much happier with an income of $2-3/hour than their American counterparts, which makes it tough to define a suitable wage.


2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • Through AMT, requesters can literally access workers through APIs; they can integrate worker output directly into their algorithms.
  • The design features and development of AMT have prioritized requesters' needs.
  • AMT employers define HITs by creating web-based forms that specify an information task and allow workers to input a response.
  • Employers define the structure of the data workers must input, create instructions, specify the pool of information that must be processed, set a price, and set filters such as approval rating.
  • AMT's participation agreement grants employers full intellectual property rights over submissions regardless of rejection.
  • Workers' dispute messages become signals to the requester that prompt them to review their algorithms, but they rarely do.

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

The paper provides some valuable insight on requesters and especially workers by contrasting two experiments and juxtaposing their results. Throughout the reading we observed much more about workers than requesters, as the experiments the paper was based on were oriented towards improving the quality of user studies, the primary participants in and contributors to which are workers.

Some of the observations we could draw from the reading about workers were derived from the experiments, the authors' motivations and, most importantly, the outcomes of those experiments. The contrast between experiment 1 and experiment 2 indicates the difference in quality when workers are assigned tasks that are structured differently. The first experiment required workers to rate articles, and the results were compared against those of reliable, highly experienced Wikipedia users. Some additional subjective questions, such as giving brief descriptions and suggesting improvements, were added. The outcome of this task wasn't satisfactory: the user ratings were unreliable, as they were not in line with the experts', and there was only marginal correlation between the two. Some of the times users spent on the tasks were suspiciously short, which the authors note could suggest "gaming of the system." There was also a high number of "semantically empty" or potentially invalid responses, and filtering these out left few responses on which a good statistical analysis could be carried out. It was also observed that only a small number of users actually indulged in "gaming" the system; however, this small group drastically affected the outcome of the tasks because they did so multiple times, which caused the overall percentage of invalid, semantically empty responses to rise significantly.

In experiment 2, we observed that when users face questions or tasks in which cheating or gaming the system takes almost as much effort as providing the correct response, the quality of the work improves. The authors suggest creating tasks where providing the correct response requires as much or less effort than providing an obviously random or malicious submission. This time, the questions were also verifiable and quantitative, and users were signaled that their answers could be scrutinized.

We noted that when faced with a task such as the one described in experiment 2, user output improved. The factors taken into account when designing the task for the second experiment made users more wary of being malicious or providing invalid responses, since cheating required just as much effort and they were signaled that their responses could be scrutinized.
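
The verifiable questions in experiment 2 lend themselves to a simple automated check: compare each worker's answers to known values and flag submissions that miss too many, or whose free-text answer is semantically empty. Below is a minimal sketch of that idea; the field names, ground-truth values, and thresholds are made up for illustration and are not the authors' code.

    # Sketch: flag suspicious submissions using verifiable questions.
    # Field names, ground-truth values, and thresholds are illustrative assumptions.

    GROUND_TRUTH = {"num_sections": 7, "num_images": 3, "num_references": 12}
    MAX_WRONG = 1          # allow at most one wrong verifiable answer
    MIN_COMMENT_WORDS = 5  # shorter free-text answers are treated as semantically empty

    def is_suspicious(submission: dict) -> bool:
        wrong = sum(
            1 for field, expected in GROUND_TRUTH.items()
            if submission.get(field) != expected
        )
        comment_too_short = len(submission.get("improvement_comment", "").split()) < MIN_COMMENT_WORDS
        return wrong > MAX_WRONG or comment_too_short

    submission = {"num_sections": 7, "num_images": 2, "num_references": 12,
                  "improvement_comment": "ok"}
    print(is_suspicious(submission))  # True: one wrong answer is tolerated, but the comment is too short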


The number of observations we drew about requesters was smaller than the number we drew about workers, as the paper was primarily oriented around user studies. However, some of the things we found noteworthy and observed about requesters are the following:

It is usually difficult to find a large number of users quickly and cheaply for user studies; requesters often have to resort to small sample sizes, on which it is difficult to carry out robust studies. Online crowdsourcing platforms provide a potential solution to this. It would be beneficial for requesters to have multiple means of catching suspicious responses. Requesters also deal with a lack of "robust support" for managing workers or even executing simple tasks on platforms such as Mechanical Turk.

Requester perspective: The Need for Standardization in Crowdsourcing

One can infer from the reading that the author is a strong supporter of standardizing crowdsourcing platforms, for several major reasons. Some of the most pertinent ones are:

1. Convenience: This appears to be one of the major reasons why the author favors standardization. Standardization would let requesters spend little time on setting up tasks and save effort currently spent on unproductive activities such as designing the interface, pricing tasks, finding the appropriate labor, and ensuring that the instructions are worker-friendly and easy to understand.

2. Efficiency: By reducing the time spent on understanding the interface and other non-productive activities, efficiency is greatly improved, as workers do not need to "learn the intricacies or adapt to the interface" every single time, and employers do not have to implement from scratch the best practices that need to be followed when setting up tasks.

3. Predictable, easy, convenient, and timely completion of tasks: The author cites Henry Ford's production line as an example of why standardization could have several advantages in an environment where tasks are small and require human labor.

Although the author cites or implies several other reasons, these constitute the major ones. Most of my observations about workers from the reading came through the author's perspective, since it was he who provided this view of the market and how it could be improved. Through that perspective, some of the observations about workers that I could draw include the following:

Workers seem to have a lot of flexibility on online crowdsourcing platforms. This is good for workers, as it helps them choose what they want to work on and decide what's best for them. However, it does not bode well for the requester, who requires an organized, efficient set of workers to complete a task in a timely manner; this seems to be missing from online crowdsourcing markets. Workers may have a lot of flexibility, but they act as individuals rather than as an organized, efficient force. The author highlights the attributes of workers who have been "hired": such workers are screened, trained, and given incentives for good performance, and poor performance can cause them to lose their jobs. They are also required to adhere to a certain "STANDARD" set of instructions. Throughout the reading, I believe the author implied that this stands in stark contrast to workers on online crowdsourcing platforms. As opposed to hired workers, reinforcement, whether positive or negative, seems to be minimal in online crowdsourcing markets. These workers come from diverse backgrounds and aren't screened as well as hired workers. Moreover, they have minimal training and aren't exposed to any standardized work, which can be problematic because they might not always adhere to the instructions. The author strongly implies that all these factors are detrimental to the quality of the work that the requester sets out to get done.

We noted another recurring observation in the reading: that workers do not yet possess any adequate method, tool, platform, or way to search for tasks that are best suited to them. How such a mechanism could improve the crowdsourcing market lies outside the scope of "observations" and will therefore be elaborated upon later in this submission.


Some of the observations one can draw about requesters from the reading come from what the author had to say about the existing platforms and the improvements he suggested. This gave us some valuable insight into the requester's view and told us a lot about the requester himself. These observations are quite similar to those made about workers, but are focused more on the requester. It seems quite apparent from the reading that requesters feel the need for major improvements to crowdsourcing platforms, which in their current state aren't very conducive to the tasks they need done. One major observation was that the author repeatedly and strongly implied that the quality of the work requesters received wasn't very satisfactory and could be improved through standardization. Another major observation was that crowdsourcing platforms can be highly inconvenient for requesters in several ways. When the author cites the advantages of standardization, he mentions that it would lead to "reusability", which would remove the need for requesters to work out how to create user interfaces, implement "best practices", and set up the entire job for workers. According to the author, these are inconveniences to the requester that could be eliminated through standardization. Currently, requesters have to go through "extensive structuring and managerial effort to make crowdsourcing feasible", which causes a lot of overhead. Pricing methods also seem to be something requesters feel can be improved. Finally, the author notes that requesters have to deal with a recurring negative externality: fraud.

Both perspectives: A Plea to Amazon: Fix Mechanical Turk

1) What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.

  • There is no work profile for workers.
  • Workers are severely restricted by the interface, as it is difficult for them to find appropriate tasks based on the criteria they would like to set.
  • Currently it's difficult for workers to find requesters unless the requester puts their name in as a keyword, so it's difficult to find a particular person to collaborate with due to the limitations of the user interface.
  • Workers do not have any way to check how fast a requester releases payments.
  • Workers are usually in the dark about the reputation of a requester when it comes to fair or unfair rejections.
  • In general, workers have few ways to gauge the trustworthiness of requesters.
  • In practice, the mean completion time for tasks is expected to increase continuously as the market is observed for longer periods of time.


2) What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

  • MTurk uses command-line tools for posting tasks. A technician might have to be hired if the requester is not experienced with the command line, which shrinks the pool of requesters.
  • Experienced requesters can monopolize the market.
  • Filters like approval rating and number of completed HITs can be manipulated - [1]
  • MTurk does not really guarantee the trustworthiness of requesters.
  • Systems like Turker Nation and Turkopticon were created because Amazon doesn't provide a system for requester ratings.
  • New requesters posting a big batch of HITs might not get the desired quality of results, because experienced Turkers will only do a few HITs, as they are wary of the rejection rate and the trustworthiness of the requester.

Do Needfinding by Browsing MTurk-related forums, blogs, Reddit, etc

Workers' needs

  • Workers don't really understand how rejection on MTurk affects them, as can be seen in this post: [2]
  • For new workers who are not yet initiated into the Turker community, the user interface seems daunting, and they spend a significant fraction of their time looking for work. Workers need a cleaner, easier interface that would let them spend a greater fraction of their time on productive, paid work.
  • In CrowdFlower, workers had concerns about the lack of entry-level jobs (i.e. jobs that require 0 qualifications): there are none at all most of the time. Also, the statistics that allow one to progress through the levels are very opaque. It would be more encouraging if these statistics were clearly understandable, so workers know exactly what to do.
  • Amazon has put disproportionately low effort into improving the UI/UX for workers, which is illogical as they make up half of the equation. There are many browser extensions and add-ons available that inform workers about requester reputation, average time required, etc.

However, there are problems with this: one worker reported that their access was blocked for making page requests to the server too frequently. If Amazon provided an API or a daily/hourly stats dump, these extensions wouldn't have to scrape the data in such a network-intensive manner, which would benefit both workers and Amazon.

  • Many surveys require one to fill out a form about demographic information before allowing one to proceed, to ensure that respondents are the right demographic for the survey. A user has suggested that instead of this, every worker could fill out their demographic information once, and surveys could query it to decide whether the worker belongs to the right demographic. This would remove the need to fill out demographic information for every survey, a tedious task that might not even be rewarded in the end.

This measure also helps the requester, since workers would not be able to 'fake' their demographic for every survey (many worker forums share the 'correct' demographics for surveys so workers can cheat and get through). A sketch of how this could be built with MTurk qualifications appears at the end of this section.

  • Workers need responsive customer support from Amazon's end. There are many instances of workers whose accounts are suspended being stuck in Kafkaesque limbos; e.g., many comments in this thread point out the issue: [3]

Again, this inferior treatment of workers doesn't make sense since they are an integral part of the system for Amazon to make money.
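
Returning to the one-time demographic profile suggested above: on MTurk this could plausibly be built with a custom qualification that is assigned after a single screening survey and then required by later surveys. A minimal sketch with boto3 follows; the qualification name, the worker ID, and the integer encoding of demographics are all assumptions for illustration, not an existing MTurk feature.

    # Sketch: a one-time demographic profile implemented as a custom qualification.
    # Names, IDs, and the integer encoding below are illustrative placeholders.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # 1. Create the qualification type once (e.g. "completed the demographic screener").
    qual = mturk.create_qualification_type(
        Name="DemographicProfileV1",              # hypothetical name
        Description="Worker has filled out the one-time demographic screener.",
        QualificationTypeStatus="Active",
    )
    qual_id = qual["QualificationType"]["QualificationTypeId"]

    # 2. After a worker submits the screener, grant them the qualification.
    #    The integer value could encode a coarse demographic bucket.
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId="EXAMPLE_WORKER_ID",             # placeholder
        IntegerValue=1,                           # e.g. 1 = bucket matching this study
        SendNotification=False,
    )

    # 3. Later surveys list this qualification in QualificationRequirements when
    #    calling create_hit, instead of re-asking demographic questions.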

Requesters' needs

  • /u/cottonrobot wanted to be able to turn down a submission without affecting a worker's approval rate in the case of a genuine mistake by the worker: [4] In that case, many workers mistakenly hit submit before they had completed the work and notified the requester of it later, so this was not a case of workers scamming. In another post, this time by a worker, /u/wigglewiggle245 wanted to be able to cancel a submission he knew was wrong before it was rejected by the requester and hit his approval rating.
[5].

In both these cases we can see the theme that money for a particular task is not a turker's greatest concern with rejection; approval rating is primary since it affects the jobs they are eligible for far into the future.

  • Requesters need to know that a worker is putting effort into a job. As a Turker works for hours on end, their attention and engagement may start to fall off. Some suggest keeping a timer on the page to track the amount of time workers are spending.
  • Some survey requesters expressed a desire to be able to tweak the payout or the approval-percentage requirement for a job without creating a new job. Obviously the ability to reduce the payout is prone to abuse (a requester could reduce the payout while hundreds of workers are doing the task, resulting in them getting paid less than what they were promised), but the ability to increase the payout was wanted.

Synthesize the Needs You Found

List out your most salient and interesting needs for workers, and for requesters. Please back up each one with evidence: at least one observation, and ideally an interpretation as well.

Worker Needs

  • A better system for beginners: For someone who is new to this, the system is not exactly friendly, because new workers lack the approval percentages and qualifications that requesters want. This leads to demotivation in new workers.

Evidence - Dennis spoke about how difficult it is to get started on oDesk for novices, because they lack a history of successful projects (unlike the veteran workers).

  • Every worker has their own special set of skills. However, the system does not account for this fact, even though it should.

Evidence - Nicole's statements about how various people tend to exhibit preferences towards various kinds of jobs.

  • A fairer system - MTurk seems to deliberately favor requesters over workers. Workers, on the other hand, need a system that values them too.

Evidence - There have been multiple complaints about the fact that workers are merely "computational units" and there is no 2-way feedback.

  • Workers need a way to gauge the trustworthiness of the requester

Evidence - Workers are often left in a bad situation when they are unfairly denied payment or their work is unfairly rejected. Moreover, a worker might have to work inordinately hard on a task that hasn't been priced well by the requester, which is not exactly ideal. David voiced some discontent in the panel meeting about how a new Turker's challenges include filtering out dishonest requesters. The reading "A Plea to Amazon: Fix Mechanical Turk" provides some of the most convincing arguments and evidence in favor of this need.

  • Workers need a way to search for tasks/jobs more efficiently and easily.

Evidence - In the platforms' current form, workers lack any reliable means of looking for tasks efficiently. This is supported by almost all the readings we were asked to go through: every author notes the lack of any reliable means of distributing tasks to the correct worker demographic and how difficult it is for workers to find tasks suited to them.

  • Workers need a faster payment process for their tasks. A 30-day approval period is risky for Turkers whose primary source of income is turking.

Evidence - Panos' blog and the "Being a Turker" paper by Martin et al. mention the Turkers' annoyance with the current payment procedure.

  • Workers need a more convenient registration process with fewer restrictions, to allow a more widespread demographic to participate in the crowdsourcing experience.

Evidence - Personal experience and the panel talk. Manish noted that Indian users cannot provide an SSN pertaining to the United States and therefore are not allowed to work on MTurk. Our team consists of students from India, and we found it incredibly difficult to find work or even register on crowdsourcing platforms such as MTurk. Many of the restrictions alienate a huge population of workers who could contribute very well to crowdsourcing jobs and increase the workforce dedicated to it.


Requester Needs

  • Novice requesters need a way to adjust to the complexities and intricacies of the system without external help. This can be solved by having a better interface.

Evidence - To quote from the article "A Plea to Amazon: Fix Mechanical Turk": "It is high time to make it easier to requesters to post tasks. It is ridiculous to call the command-line tools user-friendly!". As the author states in that article, hiring a full-time developer to deal with all the complexities of the system is cumbersome and expensive, which makes it difficult for the small guys to grow.

  • A Worker Reputation System - The current Reputation System on MTurk is inadequate.

Evidence - There have been complaints from requesters that the current metrics, "number of completed HITs" and "approval rate", are easy to game. When requesters cannot differentiate between a good worker and a bad worker, they assume that all workers are bad, resulting in the good ones being treated like the bad ones (as described in "A Plea to Amazon: Fix Mechanical Turk").

  • Requesters need a way to ensure the quality and timely completion of their work

Evidence - The author of "The Need for Standardization in Crowdsourcing" called for standardization due to the clearly evident dissatisfaction among requesters with the quality of work they receive. Even in the panel meeting, it was quite evident that some requesters weren't pleased with the submissions they received, which could be invalid, semantically empty, or not delivered in time at all. Gianluca raised a voice against the lack of any guarantee about when a job he posted would be completed. Other requesters also voiced their discontent about the quality of work, which can be affected significantly by workers who intend to game the system (as witnessed in the outcome of experiment 1 of the paper "Kittur A, Chi E H, Suh B. Crowdsourcing user studies with Mechanical Turk. Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 2008: 453-456").

  • Requesters need a way to allocate tasks to a trustworthy set of workers in a convenient standardized fashion.

Evidence - Most of the evidence for this need can be found in the paper "The Need for Standardization in Crowdsourcing", in which the author strongly recommends standardization because the current platforms aren't very conducive to a crowdsourcing market. He argues that the market could be drastically improved by emulating a model similar to Henry Ford's factory line, where a standard set of tasks is easy to follow and the worker does not have to deal with a new interface, a new set of instructions, or a new type of task every single time. Requesters could then allocate tasks that workers follow through on with ease, helping requesters get reliable results of improved quality in a timely fashion.

  • Requesters need a way to set wages that can bridge the gap between what the worker asks and what the requester wants to pay.

Currently, the amount paid to the worker might not be justified by the amount of work they do. Requesters might not be comfortable paying a lot, because it could attract scammers or cheaters who want to game the system (as noted by Gordon and Serge in the panel talk) or because the task is not very difficult; workers, however, might feel that they're getting underpaid. This is why we need a way to set wages that bridges the gap between a requester who wants to pay less and a worker who wants to be paid more. This is supported by the paper "The Need for Standardization in Crowdsourcing", which calls for standardization in different aspects of crowdsourcing, including this one.