Revision as of 18:37, 24 January 2016
Purpose of this page
This wiki page intends to meet the requirements and expectations conveyed in Winter Milestone 2. Collected here are the combined observations of @anotherhuman and @prithvi.raj, addressing the questions:
- What observations about workers can you draw from the interview? Include any that may be strongly implied but not explicit.
- What observations about requesters can you draw from the interview? Include any that may be strongly implied but not explicit.
- What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.
- What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.
A list format was chosen for this page to allow rapid assessment, a quick count of the observations, and easy annotation when identifying needs.
- 1 Concept Maps of Crowdsourcing
- 2 Need-finding Results for Requestors
- 3 Need-finding Results for Workers
- 4 Observations about Requestors
- 4.1 A. Video Interviews
- 4.2 B. Ipeirotis, P (2010)
- 4.3 C. Bernstein, M. et al. (2010)
- 4.4 D. Stolee, K. & Elbaum, S. (2010)
- 4.5 E. Martin, D. et al. (2014)
- 4.6 F. Irani, L. & Silberman, M. (2013)
- 4.7 G. Ipeirotis, P (2012)
- 4.8 H. TurkNation Bonus
- 4.9 I. Ipeirotis, P (2011)
- 4.10 S. Rzeszotarski, J. & Kittur, A. (2011)
- 5 Observations about Workers
- 5.1 T. Rzeszotarski, J. & Kittur, A. (2011)
- 5.2 J. Video Interviews
- 5.3 K. Ipeirotis, P (2012)
- 5.4 L. Ipeirotis, P (2010). Fix Turk
- 5.5 M. Bernstein, M. et al. (2010) Soylent
- 5.6 N. Irani, L. & Silberman, M. (2013)
- 5.7 O. Stolee, K. & Elbaum, S. (2010)
- 5.8 P. Martin, D. et al. (2014)
- 5.9 Q. TurkNation Bonus
- 5.10 R. Ipeirotis, P (2011)
- 6 Page Contributors
Concept Maps of Crowdsourcing
Need-finding Results for Requestors
Identified here are the needs we derived from the observations presented in the sections below.
- to eliminate worker identity variability [D2,D3,D12,D15,D21...]
  - A recurrent theme was that requesters need the right people. Requesters consistently identified skill sets and acceptance rates as criteria for gauging a worker's suitability for a task: getting the experienced engineer rather than a student. Requesters also need to control how their tasks interact with workers. Turk granulates tasks and spreads them widely, which may be great for some tasks, but workarounds were designed to get one person to complete a sizable task of more than 10 questions. Beyond qualifications, requesters need to control their tasks by granularity, skill set, and worker task preferences (so far). This is fundamentally different from a qualifications-only approach and can be partially coordinated with event logs.
- to get the same kinds of results from the worker's process
  - Across many papers, it was identified that requesters go back to Turk for the same types of tasks and thus need consistent results each time.
- the ability to control the presentation of their work: by priority, sequence, preference, iteration, randomness, importance... [D11,D27,D23,D25,D31...]
  - Requesters have gone to great lengths to customize the UI, workflows, qualification systems, etc. through code and APIs, while using the GUI for bulk acceptance and monitoring. This is only part of the answer, since much of the conflict stems from the restricted nature of the Turk interface.
- to control product variability [D1,D4,D8,E1,E2...]
  - Creating form primitives and task primitives would help streamline the more common tasks found on the system. This would also create controls where requesters put more work into the platform than it was designed for: Turk is intended for microtasks that can be completed in less than 60 seconds, not full-blown psychological studies.
- "some understanding of the system capabilities and constraints" [D26,D27,D29,D30,D31...]
  - Requesters who choose the system for incompatible tasks introduce variability that affects workers. On one hand, the system did something right to get the requester onto the site, but not everything.
- system flexibility to customize anonymity [D3,F2,F12,F20,G3...], pay [D13,F20,G2], information gathering [D11,D17,D23,D24,D27...], worker reach [D14,D29,F1,F3,F11...], task templates [F1,F4,F13], communication channels [E3,F14,F19,F20]...
- to know whether their task is there for pay or play [E5,F1,F16,D28,F17...]
  - "Button hell": these words were expressed by a turker during the video interview. She has certain tasks she performs and prefers, specifically essay or free-form responses. For her the task is work, and requesters who pay enough will get her involved in the project. Even though she enjoys a certain task, she expects requesters to pay the right amount if they choose to create a task for her. Gamers will do the same task for fun.
- "to motivate better, cheaper, and faster performance... without paying much" [D9,E4,E3,G1,H2...]
- easy entry to and exit from the platform [D10,G,F2,F13,F14...]
  - Download the .csv file quickly; upload the already-typed questions into Turk for implementation. This is the type of interaction and experience requesters want, because they performed the work before they got to Turk.
- to communicate with workers to clarify task design [E3,G1] and expectations [H6,I2,I4,I5]
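The presentation-control need above (ordering work by priority, sequence, or randomness) can be made concrete with a small sketch. This is illustrative only; the HIT record fields and policy names are our assumptions, not part of the MTurk API:

```python
import random

def order_hits(hits, policy="fifo", seed=None):
    """Return HITs in the order a requester's policy dictates.

    `hits` is a list of dicts with 'id' and 'priority' keys
    (a hypothetical schema, not an MTurk data structure).
    """
    if policy == "priority":
        # Highest-priority work surfaces first.
        return sorted(hits, key=lambda h: -h["priority"])
    if policy == "random":
        # Seeded shuffle, e.g. for randomized study designs.
        rng = random.Random(seed)
        shuffled = list(hits)
        rng.shuffle(shuffled)
        return shuffled
    # "fifo": keep submission order.
    return list(hits)

hits = [
    {"id": "h1", "priority": 1},
    {"id": "h2", "priority": 5},
    {"id": "h3", "priority": 3},
]
by_priority = order_hits(hits, "priority")
```

A real requester tool would map such a policy onto the order in which HITs are posted or refreshed on the platform.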
Potential Needs for Requestors to Watch
- to communicate with other requesters in some fashion [H8] -> they need to define their own standards!
- the system needs to direct requesters to other services in the business space that are more apt for the task at hand [I2]
- needs to pay into the work system [D7,E3,F18] even if fair market rewards are unknown [H5] and they take a guess from known laws [I1]
- need sponsorship to pay for turk work [I3]
- types of requester: researchers (open ? types, button types), quick and dirty [E2,E1,F14], spammers, ...
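The easy-entry need above (download the .csv quickly, upload the already-typed questions) amounts to turning a question file into task specs. A minimal sketch, where the one-column CSV layout, the 'question' column name, and the flat reward default are our assumptions:

```python
import csv
import io

def hits_from_csv(csv_text, reward="0.05"):
    """Turn a requester's pre-typed question file into HIT specs.

    The column name and reward are hypothetical; a real tool would
    hand these specs to the platform's posting interface.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [{"question": row["question"], "reward": reward} for row in reader]

sample = "question\nIs this image a cat?\nDoes this sentence contain a typo?\n"
specs = hits_from_csv(sample)
```

This captures the workflow requesters describe: the questions already exist before they reach Turk, so posting should be a straight file-to-tasks conversion.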
Need-finding Results for Workers
- to be directed [T1,K,L,M9,P6...]
  - Requesters design the tasks and the opening approach toward workers; the workers only respond to the requesters.
- to be separated by their intent with Turk [N20]: gamers (bored) [N3,N4,N9...], volunteers, and workers [L8,L9,N4,M10,N21...]
  - Not every turker views the platform as a marketplace. Throughout the readings, writers focused heavily on the "worker" turker, who has the most to gain or lose from joining the system. Otherwise, there are at least three types, which might be misidentified here: gamers, volunteers, and workers. Gamers are motivated to do tasks they enjoy or to relieve boredom. Volunteers do tasks altruistically, to help out requesters from non-profit agencies.
- to eliminate task (process) type variability [T2,K,L4,M8,N10,N11...]
- reproducibility [T2,M10,N11,N13,N14...]
- to eliminate requester variability [T2,K,L4,P7,O3...]
  - Too many different approaches to the same task create headaches. There are so many existing approaches to choose from that requesters should not be reinventing this wheel; industry style guides have been around since... forever.
- bulk acceptance of multiple tasks [T3,K,L4,L13,O1...]
- standardization of tasks, to be able to scale up to viable living incomes [T2,N13,N14,N15,O2...]
- task granularization, with permission from the requester, to be able to scale up to viable living incomes [T2,N13,N14,N15,O2...]
  - This would help with the low professional earnings from Turk. As one example: if a requester sought and paid for a standardized form task following a Tinder-like model, where only a question and a yes/no box are presented on screen and then repeated, the worker could easily and rapidly complete turker-type tasks without doing work above and beyond the task itself, reducing the reading and schemes required before earning an income. Of course, this design is not that simple to execute.
- well-designed, scheduling-theory-based algorithm support for those who turk for work [T2,T3,P38,P47,P51...]
  - Turkers already do this manually, keeping multiple windows and inventories of tasks on their screens. Why not use results from a field that has solved these problems before?
- to feel safe from negative experiences [K1,L1,L7,N7,P5...]
- a library of work primitives to support task design by requesters [K3,K4,K,L,M4...]
  - Work primitives are the building blocks of repetitive tasks. Let's face it: Turk requesters seem to keep posting the same types of work...
- strategies to select HITs [L10,L11,L12,R1,Q3...]
- to be able to remedy errors [N6,P2] arising from miscommunication [L1] and rejections [L9,L10,P2]
- to communicate with other workers in forums [N7,N1,P10,P9,Q...]
- to provide their own self-governance [N3,N1,P10,P9,Q4...]
  - Turkers have already done this work publicly; software can improve on what they have already built.
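The scheduling-theory need above has a classic starting point: ordering a worker's queue of accepted tasks by pay rate, a weighted-shortest-processing-time heuristic. A sketch with assumed field names, not taken from any turker tool:

```python
def schedule_by_rate(tasks):
    """Order a worker's accepted-task queue by hourly rate, best first.

    The 'pay' (dollars) and 'minutes' fields are a hypothetical schema.
    """
    return sorted(tasks, key=lambda t: t["pay"] / t["minutes"], reverse=True)

queue = [
    {"id": "survey", "pay": 1.00, "minutes": 20},      # $0.05/min
    {"id": "tag", "pay": 0.10, "minutes": 1},          # $0.10/min
    {"id": "transcribe", "pay": 2.40, "minutes": 30},  # $0.08/min
]
ordered = schedule_by_rate(queue)
```

Real turker workflows also juggle deadlines and auto-return timers across multiple browser tabs; this only captures the rate-first intuition that scheduling theory could formalize.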
Potential Needs for Workers to Watch
- need to communicate to others those requester patterns that work against professional workers' intent [L1,P10,P5,R3]
- need the ability to control privacy rights and access [P10,P9,R2]
- need accessible alerts of changes in the system and how it affects them - kept simple![Q6]
- need filters to eliminate spam tasks[P5]
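The spam-filter need could start as nothing more than a keyword screen over HIT titles. The marker list below is invented for illustration; a real filter would draw on community-maintained worker blacklists:

```python
# Hypothetical spam markers; a real list would be community-maintained.
SPAM_MARKERS = ("sign up", "download app", "referral bonus")

def looks_like_spam(title):
    """Flag a HIT title that matches any known spam marker."""
    lowered = title.lower()
    return any(marker in lowered for marker in SPAM_MARKERS)

def filter_tasks(tasks):
    """Drop tasks whose titles look like spam."""
    return [task for task in tasks if not looks_like_spam(task["title"])]

tasks = [
    {"title": "Tag 20 product images"},
    {"title": "Sign up and download app for reward"},
]
kept = filter_tasks(tasks)
```

Keyword screens are crude, but they match how workers already describe spotting spam tasks at a glance.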
Observations about Requestors
A. Video Interviews.
People come to crowdsourcing for different reasons:
- income – housewives, workers who want to make extra money, freelancers
- getting work done – large companies, business professionals, and others with personal work
The current crowdsourcing platforms are not fully satisfying the requirements of requesters or workers.
- There are worker forums to help workers manage and better understand how to earn money efficiently; similarly, there are requester forums to help requesters in various areas of the platform.
- There are communities such as "Turker Nation" which help workers find better-quality work
- There are many people who are entirely dependent on such crowdsourcing platforms as their only source of income.
- Even after using the platform for some time, certain factors are not clear to some workers, who take help from forums:
  - to better understand the platform
  - to teach requesters about the Master's qualification
  - to share HITs and other such tasks
- When the requester is accessible for clarification, workers mostly provide feedback through mail on what clarification is needed and how the task should be refined
- Some workers were cheating on tasks, so the requester added a qualification to the task to filter out people who might cheat.
- Engaging the workers is a difficult task for the requester
- Sometimes the messages are not in English, which makes them difficult to reply to.
- When a huge number of tasks is posted on MTurk, it becomes very difficult to answer all the enquiries, so requesters hire people to answer queries and manage other requester work
- Sometimes requesters study specific groups to take advantage of worker productivity.
- Sometimes the built-in templates for creating tasks are not customizable, making it a difficult process to reproduce the whole template manually for a small change.
- There is still no generic way to determine the hourly rate to be paid to the worker
- New requesters are inclined to reject slightly-off work out of excitement at using their ability to reject
- Keywords - anonymous, complicated, tricky, tiring
- Choosing a task is always done keeping in mind the worker's rating
- The rating should not go below 99
- They wait to see if the requester is paying for the hits or is ignoring their work
- If the requester pays promptly or provides feedback promptly, then the worker is happy to work further knowing that the requester is a good work provider
- If the requester doesn't have a rating, the workers mail the requesters and if the requester replies promptly to the mails, then again it confirms that the requester is reliable.
- Another confirmation workers get when they receive a mail from the requester is that they can now have a discussion with the requester if their work is rejected and learn why it was rejected.
- Some users want to make the best of time and focus on keyboard shortcuts to perform better and faster.
- Sometimes there are automatic quality-control scripts, which might or might not give accurate results.
- Keywords - Incredible, frustrating, empowering, tiring, lonely, isolated, diverse, passionate, dynamic
B. Ipeirotis, P (2010)
- Find it much easier to navigate through amazon.com than through Turk
- Frustrated at repeating the same thing again and again
- People are building on top of MTurk to improve their ease of work
- People find it very difficult and so are trying to make a better marketplace than MTurk
- People want to Scale up, manage the complex API, manage execution time and ensure quality
- Posting tasks should be easy for requestors
- They find the interface very difficult to manage
- Sometimes requestors are hiring full time developers to get the complex tasks done
- It is very difficult for small requestors to grow
- Very few requestors have one-pass tasks. And the requestors who have huge number of tasks are not comfortable using MTurk.
- Some users build iframe-powered HITs, misuse the system, and get away with it
- For a long term user, this kind of additional personalized interface works, but not for short term users
- Requestors cannot easily differentiate good workers from bad workers
- Getting the task done by multiple uninterested workers and getting the quality check done is a huge frustration and a waste of time for the user
- User wants to get these questions verified
- Does the worker have the proper English writing skills?
- Can the worker proofread?
- Want to provide rating without cognitive load.
- The requester is the party most affected in MTurk, unlike the sellers on amazon.com
- Rating must be given from the side which is getting affected.
- Requesters might have an idea or not have an idea of the time required to complete the task
C. Bernstein, M. et al. (2010).
- Make grammatical mistakes while writing
- Find it a huge effort to correct a tense change
- Find it difficult to trim the size of the written paragraph
- Find it difficult to explain the task to workers
- Commonly make spelling mistakes
- Take help from friends and people they know to proofread or reduce the article size
- Expects further help in complex tasks
- Expects text correction delay time to be less – prefers real time
- Is concerned about the task and content privacy
- Wants an interface which is understandable and helps him progress
- Waits for the correction to be clearly differentiable from the original data
- Looks out for what part of the sentence has been compressed and which part has been expanded
- Looks out for people who can complete the task on time – get things in limited pages and within the deadline
- Wants to confirm that the worker has understood the task
- wants the control in his hands to accept the corrections or not
- Wants to notify only the change required in certain pages or paragraphs
- Finds it difficult to express the correction method correctly to the computer in case of scripting.
- Finds it difficult sometimes even to express to workers on what correction is to be done
- wants to decide on how many people should work on his content
- Wants to avoid lazy turkers wasting his time
- Similarly wants to avoid Eager Beavers as they complicate the task for him
- Wants to keep a check on if the workers are motivated enough while working
D. Stolee, K. & Elbaum, S. (2010)
- Requestor monitored the updates of tasks completion.
- Requestors devise methods to cull user diversity, user experience, and worker gaming of the system.
- Requestors devised routes to gather information that is normally anonymized by the turk system.
- "open ended answers helped us to understand points of confusion and why participants differed"
- "was the result of a misinterpreted question"
- requestors used the UI to approve work completed and access the results.
- a credit card is required to front-load the requestor account
- hit creation can be tested in the developer sandbox
- requestors find that Turk "provides a framework ... for recruiting, ensuring privacy, distributing payment, and collecting results."
- "results can be easily downloaded in a CSV format"
- Requestors "create custom qualification tests .. using the command line tool or API"
- A requester who is a surveyor understands "the importance of having enough subjects (i.e. workers) of the right kind."
- Requestor "doubled the initial ... reward"
- requestor "sent emails to two internal mailing lists."
- Requestor might "observe students... instead of observing software engineers practicing."
- Requestor might "perform studies without human subjects." [bad practice]
- Requestor might "evaluate visualization designs, conduct surveys about information seeking behaviors, and perform NL annotations to train machine learning algorithms."
- Requestor might "leverage a global community... to solve a problem, classify data, refine a product, gather feedback"
- Requestor required "to pass a pretest."
- Researchers "estimated aptitude by measuring education and qualification score."
- Researchers create qualifications for works by using domain specific knowledge and quality of work history.
- Requestors evaluate work after completion.
- Requestors made task templates and combined tasks with a shared type ID.
- Requestors "presented [workers] with treated or untreated pipe for each task."
- Requestors "could not impose their constraint and control for learning effects."
- Turk "caused us to waste some data."
- "An alternate [research] design would be to create..."
- requestors define the work goals, collect relevant information from the workers
- requestors "had less control over the [workers] participating... and variations caused by how prominently the study is displayed in the infrastructure search results."
- "even our study uses tasks that are much more complex and time consuming than those recommended by" Turk
- researchers "must consider if randomized assignment... is appropriate for their study"
E. Martin, D. et al. (2014)
- to request work such as image tagging, duplicate recognition, transcription, translation, object classification, and content generation
- rely on turk to curate and manage the quality of content for their tasks
- requesters become confused about what actions constitute a bad worker [one man's trash is another's treasure]
- requestors block who they consider bad workers
- requestors fundamentally ask for help from the masses, yet judgment from the asker is a dynamic that is highly disrespectful
F. Irani, L. & Silberman, M. (2013)
- the best requesters use turk to complete large batches of micro-tasks
- requestors do not ask who, what, or where questions from workers to know them [false?]
- requesters utilize multiple avenues to assess "workers"
- requesters create form fields for data entry
- requesters upload audio for transcription
- requestors create requirements for data entry to address worker quality issues
- requestors define the structure of data entry
- requestors create instructions for data entry
- requestors specify the pool of information to be processed
- requestors define the criteria for work acceptance such as approval rate, country of origin, and skill specific mastery
- requestors recruit thousands of workers within hours
- requestors maintain intellectual property rights
- requestors vet worker outputs through algorithms (majority rule)
- requestors avoid responding to workers due to quantity
- requestors only respond to workers when things happen en masse
- requesters act as business people
- requesters shape the interaction with the crowd
- requestors pay Amazon money
- requestors review workers mutually
- requestors have to address the work of people from multiple nations
G. Ipeirotis, P (2012).
- requestors "require workers to closely and consistently adhere to instructions for a particular, standardized task."
- requestors decide on the price they will pay for the task
- requestors complain about spammers and design methods to address them
- verify ex ante that workers can do the task
- Every requestor generates its own work request
- each requestor prices the request independently
- each requestor evaluates the answers separately from everyone else
H. TurkNation Bonus
- requesters set automatic acceptance of hits after a certain period of time
- surprise bonuses create questions for workers
- Every requestor has to implement from scratch the “best practices” for each type of work.
- requestors learn from their mistakes and fix the design problems
- Every requestor needs to price its work unit without knowing the conditions of the market
- requestors avoid working with spammers and those who talk negatively about them
- requestors rely on truth and avoid fraud
- requestors do not work together to define commonly shared standards for tasks
I. Ipeirotis, P (2011).
- requestors may calibrate their tasks to beat minimum wage
- requestors receive complaints and attacks based on turkers' expectations and false realities (turk bubble: don't breathe the air)
- requestors receive grants for turk research
- requestors create social tasks (i.e. help me for fun) when certain conditions are met
- requestors create market tasks (i.e. help me for money) when certain conditions are met
S. Rzeszotarski, J. & Kittur, A. (2011).
- requestors design tasks poorly
- requestors split large tasks into smaller and smaller sub-tasks until they are fault tolerant
- requestors incorporate randomness into cooperative task designs (i.e. unknowns, surprises)
- requestors manipulate financial numbers and other outcome measures
- requestors redesign tasks to fit these methods
- requestors use validated data to sort out good workers from bad
- requestors calculate relationships between worker answers and identify erroneous workers
- requestors use trends to identify poor workers
- requestors have workers rate one another's products for quality control such as the majority-rule
- requestors create two types of tasks - those producing a diversity of options and those that are more standard
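The majority-rule grading and erroneous-worker detection described in this section can be sketched in a few lines. The data shapes are our assumptions, and ties here break arbitrarily, which a production system would handle explicitly:

```python
from collections import Counter

def majority_label(answers):
    """answers: list of (worker_id, label) pairs for one task."""
    counts = Counter(label for _, label in answers)
    return counts.most_common(1)[0][0]

def worker_agreement(task_answers):
    """task_answers: dict of task_id -> [(worker_id, label), ...].

    Returns each worker's rate of agreement with the per-task majority;
    consistently low scores flag a likely erroneous (or spamming) worker.
    """
    agree, total = Counter(), Counter()
    for answers in task_answers.values():
        winner = majority_label(answers)
        for worker, label in answers:
            total[worker] += 1
            agree[worker] += (label == winner)
    return {w: agree[w] / total[w] for w in total}

votes = {
    "t1": [("ann", "cat"), ("bob", "cat"), ("eve", "dog")],
    "t2": [("ann", "yes"), ("bob", "yes"), ("eve", "yes")],
}
scores = worker_agreement(votes)
```

This is the "calculate relationships between worker answers" idea in its simplest form; the papers' actual instrumentation (e.g. behavioral traces) is richer than vote agreement alone.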
Observations about Workers
T. Rzeszotarski, J. & Kittur, A. (2011).
- workers perform tasks in good faith, regardless of task quality
- workers may often accept multiple tasks and leave them open [in browsers] while finishing others.
- workers accept a queue of tasks.
J. Video Interviews.
Duration of work
- It is unpredictable
- Depends on available hits and according to the pay.
- Also based on when good requesters give work
- Sometimes it lasts all 7 days a week
- Sometimes there can be a pattern predicted on how and when the requester gives tasks
- The schedule goes all over the place; setting a fixed time to work is unpredictable
K. Ipeirotis, P (2012).
- workers avoid requestors who would negatively impact them
- "workers ... come and go as they please"
- workers label images
- workers transcribe audio
- Workers need to learn the intricacies of the interface for each separate employer
- Workers need to adapt to the different quality requirements of each employer
- workers have a queue of tasks that need to be completed
L. Ipeirotis, P (2010). Fix Turk
- Good workers are unable to get to the requestors
- Wants to be rated correctly by the requestor
- Wants to be able to appeal to the requestor on the work being rejected
- Want to be able to differentiate the types of tasks they are about to work on
- Workers are not expert at all tasks, so they want to choose only tasks they are comfortable with
- Workers want to rate the requestors according to their purpose.
- Workers wait for the requestor to start paying so that they can work further and rely upon the previous experience.
- Check for the speed of payment from the requester
- Check for the rejection rate for the requester
- Want to appeal for a rejection
- Check the previous work ratings and experiences with the requesters
- Like to know an estimate of how long it will take to complete the task
- Look for the most recent HIT groups or the largest HIT groups, ignoring the smaller ones
M. Bernstein, M. et al. (2010) Soylent.
- Find it easy to make corrections in already written sentences
- Generally help in article corrections
- perform spell checks
- Bear a cognitive load in keeping the sentence meaning right.
- Do not always find it easy to get an article they understand
- Maintain the privacy of the content and task
- Sometimes unsure if they made the right corrections – expecting another proof-read
- look for an option to filter their area of comfort in article selection
- Is concerned if the requester is fine with his understanding of the task – wants to confirm
- Expects the requestor to accept the rightly done work.
- Finds it difficult to understand how the current interfaces let him notify the requester of the changes made
N. Irani, L. & Silberman, M. (2013)
- workers utilize screen names across many platforms
- workers will report their experiences with a requestor
- workers self evaluate their own work
- workers check (status/alert function) for approval and payment status for submitted work
- workers tolerate what they see on amazon turk and express outrage that requestors pay for the service without appropriate management
- workers respond with dispute messages
- workers convene as a mass to report problems to requestors (HIVE)
- workers give up intellectual property rights
- workers test their mturk task related skill sets
- workers respond to the quality of the data entry structure
- workers interpret instructions for data entry
- workers translate the information to be processed into data entry inputs
- workers read requirements beyond the scope of the intent of turk
- workers transcribe audio into form fields
- workers complete fields into requestor forms
- workers utilize multiple windows on the same screen
- workers utilize multiple tabs on the same browser
- workers forget the importance of ergonomics, rest, repetitive stress injuries
- turkers may not have learned about minimum wage laws
- turkers express 3 kinds of responses to turk: some do it for fun, to cure boredom, or to earn income [turker types]
- turkers usually expect money from tasks
- workers see tasks posted from outside MTurk (?)
O. Stolee, K. & Elbaum, S. (2010)
- Workers might "select and configure predefined modules and connecting them."
- Workers try to avoid the search page and complete tasks.
- Workers see the qualifications but might not see the specifications of the requestor.
- Workers identify tasks that are of similar types to match their preferences.
- Workers "discover Hits by searching based on some criteria, such as titles, descriptions, keywords, reward or expiration date."
P. Martin, D. et al. (2014)
- "view AMT as a labor market"
- "unfair rejection of work"
- "to receive pay for work"
- communicate with others through Turk
- workers identify scams
- workers move through poorly designed tasks
- workers develop relationships with requestors
- workers seek some form of relational reciprocity with requestors
- workers gather to collect information about tasks, the platform, and requestors
- workers protect their hall of fame and shame post at http://turkernation.com/forumdisplay.php?13-Requesters-Hall-of-Fame-Shame
- workers find work that they're happy with despite pay rate
- workers discuss money and methods to earn it best
- workers talk about fun, learning and play as a major reason for joining MTurk
- workers earn cash on the Mturk system
- workers contrast tasks on a play/pay continuum
- workers criticize the pay attitude in the forums
- workers rely upon MTurk to accentuate cash flow when real world work stops (The purpose for turk changes)
- workers select only the "best" opportunities for pay
- turkers compete with one another and ask questions regarding what others earn
- turkers set their own targets
- turkers respond to external events in their lives and adjust how they interact with turk based on those events
- turkers schedule and allot certain times of the day to be on MTurk
- workers rely on it as a source of income -- partially because mturk is available, accessible, and easy to find work due to its requestor diversity
- workers might use turk as a breadline
- workers find mturk ideal because one doesn't have to consider the professional environment and transportation concerns
- workers avoid those requestors who are demeaning and practice mass rejection
- workers compare experiences
- workers seek out requestors based on responsiveness
- workers give positive and negative badges
- workers spend time searching for jobs
- workers need access to decent work
- workers avoid being blocked by requestors
- workers design HITs with requestors
- workers self-monitor communication practices with requestors
- workers expect quick pay
- workers sample tasks to test the requestor
- workers bag several hits from one requestor
- workers will work on several hits in multiple tabs in a browser
- workers examine how quickly a requestor responds to questions
- workers avoid majority rules grading practices -- probably used in ML labeling tasks
- turkers use a consensus scheme to assess requestors in the forums
- turkers follow rules of requestor good practice
- turkers seek to know how often a requester is online, how quickly he responds to a task, and how polite the person is
- turkers base trust upon several dimensions: the competence of the requester, the concern of the requester, and the integrity (consistency) of interactions with the requester
- some turkers scam; others try to solve this scamming through social governance
- in threads, individuals are accused of cheating on qualifications
- turkers suffer from fatigue ("I was not paying enough attention")
- turkers practice reciprocity with requestors
- turkers label individuals as flamers
- turkers can be overly sensitive to a rejection
- workers might lose work due to a bad connection if work is saved in the cloud.
Q. TurkNation Bonus
- workers wait at most 30 days for hit approval as set by turk policy 
- workers define one form of hope with approval and time 
- workers identify tasks that earn bonuses, especially if the bonus occur frequently 
- turkers provide scripts for others to use in requestor specific situations 
- new turkers demonstrate misunderstandings of the Turk system and may be unfairly locked out of turk
- turkers inherit the problems of bureaucracy without ever knowing how the system changes [bit.ly/1OCaX3H]
R. Ipeirotis, P (2011).
- workers identify spam tasks
- workers retaliate when they see a difference in pay for the same task from the same requestor
- workers decide if a task is social (i.e. fun or curing boredom) or market (i.e. money)
Page Contributors
The following people contributed to this page: @anotherhuman, @prithvi.raj