Winter Milestone 2 @ahandpr


Purpose of this page

This wiki page intends to meet the requirements and expectations conveyed in Winter Milestone 2. Collected here are the combined observations of @anotherhuman and @prithvi.raj, addressing the questions:

  1. What can you draw from the interview? Include any observations that are strongly implied but not explicit.
  2. What observations about requesters can you draw from the interview? Include any that are strongly implied but not explicit.
  3. What can you draw from the readings? Include any observations that are strongly implied but not explicit.
  4. What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

A list format was chosen for this page to allow rapid assessment, a quick count of the observations, and ease of annotation when identifying needs.

Concept Maps of Crowdsourcing

High-level goals of microtask crowdware
Objects identified

Need-finding Results for Requestors

Identified here are the needs we derived from the observations presented below; the bracketed codes (e.g. [D2]) refer to the numbered observations in those later sections.

to eliminate worker identity variability[D2,D3,D12,D15,D21...]
It was a recurrent theme that requestors needed the right people. Requestors consistently identified skill sets and acceptance rates as criteria for gauging a worker's suitability to complete the task: getting the experienced engineer rather than a student. Additionally, requestors need to be able to control how their tasks interact with workers. Turk granulates tasks and spreads them widely, which might be great for some tasks; however, workarounds were designed to get one person to complete a sizable task of more than 10 questions. Beyond qualifications, requestors need to be able to control their tasks by granularity, skill set, and worker task preferences (so far). This is fundamentally different from a qualifications-only approach and can be partially coordinated with event logs.
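As a hedged illustration of the qualifications portion of this need, the sketch below shows how a requestor might attach suitability criteria (an approval-rate floor plus a custom skill test) to a task through the MTurk API. The boto3 call itself exists, but the skill-qualification ID, the 95% threshold, and the task details are our own assumptions rather than anything taken from the interview or readings.

  # Hedged sketch: attaching worker-suitability criteria to a HIT via boto3.
  import boto3

  mturk = boto3.client("mturk", region_name="us-east-1")

  qualification_requirements = [
      {
          # Commonly documented system qualification: percentage of the
          # worker's past assignments that were approved (95% is an assumed
          # threshold, echoing the acceptance-rate criterion above).
          "QualificationTypeId": "000000000000000000L0",
          "Comparator": "GreaterThanOrEqualTo",
          "IntegerValues": [95],
          "ActionsGuarded": "Accept",
      },
      {
          # Hypothetical custom skill test created earlier by the requestor.
          "QualificationTypeId": "EXAMPLE_SKILL_QUALIFICATION_ID",
          "Comparator": "Exists",
          "ActionsGuarded": "PreviewAndAccept",
      },
  ]

  hit = mturk.create_hit(
      Title="Review an engineering design (experienced engineers preferred)",
      Description="Answer 10 questions about a pipe-treatment design.",
      Keywords="engineering, review, survey",
      Reward="2.00",
      MaxAssignments=3,
      AssignmentDurationInSeconds=3600,
      LifetimeInSeconds=86400,
      Question=open("question_form.xml").read(),  # assumed question XML
      QualificationRequirements=qualification_requirements,
  )
  print("Created HIT:", hit["HIT"]["HITId"])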
need repeatability[D4,D5,D8,D17,D18...][1]
Throughout many papers, it was identified that requestors go back to Turk for the same types of tasks and thus need to be able to get the same kinds of results from the workers' process.
need to have the ability to control the presentation of their work, by priority, sequence, preference, iteration, randomness, importance...[D11,D27,D23,D25,D31...]
Requestors have gone to great lengths to customize the UI, workflows, qualification systems, etc. through code and APIs, and they use the GUI for bulk acceptance and monitoring. This is only part of the answer, since much of the conflict stems from the restricted nature of the Turk interface.
to control product variability [D1,D4,D8,E1,E2...]
Creating form primitives and task primitives would help to streamline the more common tasks found on the system. Additionally, this would help create controls for cases where requestors put more work onto the platform than it was designed for. Turk is intended for microtasks that can be completed in less than 60 seconds, not full-blown psychological studies.
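To make this concrete, here is a hedged sketch of what such primitives might look like in code; the class names, fields, and the 60-second check are our own illustration of the idea, not a design taken from the readings or from any existing platform.

  # Illustrative sketch of reusable form/task primitives (names are hypothetical).
  from dataclasses import dataclass, field
  from typing import List


  @dataclass
  class FormPrimitive:
      """A single reusable input element (e.g. a yes/no box or a free-text field)."""
      name: str          # e.g. "yes_no", "free_text", "image_label"
      prompt: str        # the question shown to the worker
      input_type: str    # "radio", "text", "checkbox", ...
      options: List[str] = field(default_factory=list)


  @dataclass
  class TaskPrimitive:
      """A standard task assembled from form primitives."""
      title: str
      estimated_seconds: int   # keeps tasks within the platform's ~60-second intent
      elements: List[FormPrimitive] = field(default_factory=list)

      def is_microtask(self) -> bool:
          return self.estimated_seconds <= 60


  # Example: a standardized image-verification microtask built from primitives.
  verify_image = TaskPrimitive(
      title="Image verification",
      estimated_seconds=15,
      elements=[FormPrimitive("yes_no", "Does the image contain a dog?",
                              "radio", ["Yes", "No"])],
  )
  assert verify_image.is_microtask()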
need "some understanding of the system capabilities and constraints" [D26,D27,D29,D30,D31...]
Requestors who choose a system for incompatible tasks create variability that affects workers. On one hand, the system did something right to get the requestor onto the site, but not everything.
need system flexibility to customize anonymity [D3,F2,F12,F20,G3...], pay [D13,F20,G2], information gathering [D11,D17,D23,D24,D27...], worker reach [D14,D29,F1,F3,F11...], task templates [F1,F4,F13], communication channels [E3,F14,F19,F20]...
need to know if their task is there for pay or play[E5,F1,F16,D28,F17...]
"Button hell" ---these words were expressed by a turker during the video interview. She has a certain task she performs and prefers, specifically tasks like essay or free form responses. For her the task is work and requestors who pay enough will get her to be involved in the project. Even though she enjoys a certain task, she expects requestors to pay the right amount if they choose to create a task for her. Gamers will do the same task for fun.
"to motivate better, cheaper, and faster performance... without paying much"[D9,E4,E3,G1,H2...]
need easy entry and exit out of the platform[D10,G,F2,F13,F14...]
Download the .csv file quickly, then upload the already-typed questions into Turk for implementation. This is the type of interaction and experience requestors want, because they did the work before they got onto Turk.
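A hedged sketch of that flow, assuming the questions were typed offline into a questions.csv file; the file name, column names, and default reward are illustrative assumptions.

  # Sketch of the "work done before Turk" flow: pre-typed questions in a CSV
  # become per-question task payloads ready for a bulk upload.
  import csv

  def load_questions(path: str = "questions.csv"):
      """Read pre-typed questions and yield one task payload per row."""
      with open(path, newline="", encoding="utf-8") as f:
          for row in csv.DictReader(f):
              yield {
                  "title": row["title"],
                  "question_text": row["question"],
                  "reward": row.get("reward", "0.10"),
              }

  if __name__ == "__main__":
      for payload in load_questions():
          # In a real workflow each payload would be posted as a HIT or merged
          # into the platform's bulk-upload template.
          print(payload)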
to communicate with workers to clarify task design [E3,G1] and expectations [H6,I2,I4,I5]

Requestors should have a simple and efficient way of categorizing their tasks so that workers can easily understand and access them [S10]

The requester needs a method to efficiently look across the crowd and differentiate lazy turkers from eager beavers [C20, C21]

Requesters need a system to efficiently handle appeals from workers (L3, L10 of Workers)


Potential Needs for Requestors to Watch

  1. to communicate with other requestors in some fashion [H8] -> they need to define their own standards!
  2. the system needs to direct requestors to other services in the business space that are more apt for the task at hand [I2]
  3. need to pay into the work system [D7,E3,F18] even if fair market rewards are unknown [H5] and they take a guess from known laws [I1]
  4. need sponsorship to pay for Turk work [I3]
  5. types of requester: researchers (open-question types, button types), quick-and-dirty [E2,E1,F14], spammers, ...

Need-finding Results for Workers

need to be directed [T1,K,L,M9,P6...]
The requestors design the tasks and initiate the approach towards the workers; the workers only respond to the requestors.
Need to be separated by their intent with Turk[N20]: gamers (bored) [N3,N4,N9...], volunteers, and workers [L8,L9,N4,M10,N21...]
Not every turker views the platform as a marketplace. Throughout the readings, writers focused heavily on the "worker" turker, who has the most to gain or lose from joining the system. Beyond that, there are at least 3 types, which might be misidentified here: gamers, volunteers, and workers. Gamers are motivated to do tasks they enjoy or to relieve boredom. Volunteers help out requestors from non-profit agencies and act altruistically.
need to eliminate task (process) type variability [T2, K,L4,M8,N10,N11...][2]
need reproducibility [T2,M10,N11,N13,N14...][3]
need to eliminate requestor variability [T2,K,L4,P7,O3...]
Too many different approaches to the same task create headaches. There are already plenty of established approaches to choose from, and requestors should not be recreating this wheel; industry style guides have been around since... forever. [4]
need bulk acceptance of multiple tasks [T3,K,L4,L13,O1...]
need standardization of tasks to be able to scale up to viable living incomes[T2,N13,N14,N15,O2...]
need finer granularity of tasks, with permission from the requestor, to be able to scale up to viable living incomes[T2,N13,N14,N15,O2...]
This will help with the low professional earnings from Turk. As one example, if a requestor sought and paid for a standardized form task that followed a model like Tinder, whereby only a question and a yes/no box are presented on the screen to the worker and then repeated, the worker could easily and rapidly complete Turk-style tasks without doing work above and beyond the task itself, thereby reducing the questions and schemes they have to work through before earning an income. Of course, this design is not that simple to execute.
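A hedged sketch of that repeated question-plus-yes/no presentation; the questions, the console-style interaction, and the function name are purely illustrative assumptions used to show the "one question, one answer, next question" rhythm.

  # Minimal sketch of a Tinder-like standardized flow: one question, one
  # yes/no answer, then the next question, with no extra setup per item.
  QUESTIONS = [
      "Does this image show a storefront?",
      "Is this sentence grammatically correct?",
      "Is the product title spelled correctly?",
  ]

  def run_session(questions):
      """Present each question and collect a single yes/no answer for it."""
      answers = []
      for q in questions:
          while True:
              reply = input(f"{q} [y/n]: ").strip().lower()
              if reply in ("y", "n"):
                  answers.append((q, reply == "y"))
                  break
      return answers

  if __name__ == "__main__":
      for question, answer in run_session(QUESTIONS):
          print(question, "->", "yes" if answer else "no")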
need well-designed, scheduling-theory-based algorithm support if they turk for work[T2,T3,P38,P47,P51...]
Turkers already do this informally, keeping multiple windows and inventories of tasks on their screens. Why not use the results of a field that has solved these problems before?
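As a hedged illustration of what borrowed scheduling support could look like, the sketch below orders a worker's queue of accepted HITs by a classic rule (earliest deadline first, with reward per minute as a tie-breaker); the data structure, field names, and example HITs are our own assumptions.

  # Sketch of scheduling support for a worker's queue of accepted HITs.
  from dataclasses import dataclass

  @dataclass
  class QueuedHit:
      title: str
      reward: float              # USD
      est_minutes: float         # the worker's own time estimate
      minutes_to_deadline: float

      @property
      def pay_rate(self) -> float:
          return self.reward / self.est_minutes

  def schedule(queue):
      """Order the queue: tightest deadline first; best pay rate breaks ties."""
      return sorted(queue, key=lambda h: (h.minutes_to_deadline, -h.pay_rate))

  # Invented example queue for illustration only.
  queue = [
      QueuedHit("Transcribe receipt", 0.25, 3, 45),
      QueuedHit("Tag 20 images", 0.40, 5, 30),
      QueuedHit("Short survey", 1.00, 8, 30),
  ]
  for hit in schedule(queue):
      print(f"{hit.title}: ${hit.reward:.2f} ({hit.pay_rate:.2f} USD/min)")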
need to feel safe from negative experiences[K1,L1,L7,N7,P5...]
need a library of work primitives to support task design from requestors[K3,K4,K,L,M4...]
Work primitives are the building blocks of repetitive tasks. Let's face it. Turk requestors seem to keep doing the same types of work...
need to rely on strategies to select HITS[L10,L11,L12,R1,Q3...]
need to be able to remedy errors [N6,P2] arising from miscommunication[L1], rejections[L9,L10,P2]
need to communicate with other workers in forums[N7,N1,P10,P9,Q...]
need to be able to provide their own self governance[N3,N1,P10,P9,Q4...]
Turkers have already done the work publicly. Software can improve what they have already done.

Need an option to know how responsive the requestor is to questions and clarifications [J5, J6]

Potential Needs for Workers to Watch

  1. need to communicate to others the requestor patterns that work against professional workers' intent[L1,P10,P5,R3]
  2. need the ability to control privacy rights and access[P10,P9, R2]
  3. need accessible alerts of changes in the system and how it affects them - kept simple![Q6]
  4. need filters to eliminate spam tasks[P5]

Observations about Requestors

A. Video Interviews.

There are different reasons people come to crowdsourcing:

  1. income – housewives, workers who want to make extra money, freelancers
  2. getting work done – large companies, business professionals, and other personal work

The current crowdsourcing platforms do not fully satisfy the requirements of requesters or workers.

  1. To manage and better understand how to earn money efficiently, there are worker forums; similarly, there are requester forums to help requesters with various areas of the platform.
  2. There are communities such as "Turker Nation" which help workers get higher-quality work
  3. There are many people who are entirely dependent on such crowdsourcing platforms for their income; it is their only source.
  4. Even after using the platform for a certain time, some factors are not clear to some workers, and they take help from forums
    1. to better understand the platform
    2. to teach requesters about the Master's qualification
    3. to share HITs and other such tasks

Requester's viewpoint

  1. When the requester is accessible for clarification, the workers mostly provide feedback through mail on what clarification is needed and how they should refine the task
  2. Some workers were cheating on tasks, so the requester added a qualification to the task to filter out people who might possibly cheat.
  3. Engaging the workers is a difficult task for the requester
  4. Sometimes the messages are not in English, which makes them difficult to reply to.
  5. When a huge number of tasks is posted on MTurk, it becomes very difficult to answer all the enquiries, so the requesters hire people to answer queries and manage other requester work
  6. Sometimes requesters study specific groups to take advantage of worker productivity.
  7. Sometimes the built-in templates for creating tasks are not customizable, making it a difficult process to reproduce the whole template manually for a small change.
  8. A generic way of determining the hourly rate to be paid to workers still needs to be developed
  9. New requesters are inclined to reject slightly-off work out of excitement at using their ability to reject
  10. Keywords - anonymous, complicated, tricky, tiring

B. Ipeirotis, P. (2010)

  1. Find it much easier to navigate through amazon.com than through Turk
  2. Frustrated at repeating the same thing again and again
  3. People are building on top of MTurk to improve their ease of work
  4. People find it very difficult and so are trying to make a better marketplace than MTurk
  5. People want to Scale up, manage the complex API, manage execution time and ensure quality
  6. Posting tasks should be easy for requestors
  7. They find the interface very difficult to manage
  8. Sometimes requestors are hiring full time developers to get the complex tasks done
  9. It is very difficult for small requestors to grow
  10. Very few requestors have one-pass tasks. And the requestors who have huge number of tasks are not comfortable using MTurk.
  11. Some users build iframe-powered HITs, misuse the system, and get away with it
  12. For a long term user, this kind of additional personalized interface works, but not for short term users
  13. Requestors cannot easily differentiate good workers from bad workers
  14. Getting the task done by multiple uninterested workers and getting the quality check done is a huge frustration and a waste of time for the user
  15. Users want to get the following questions verified:
  16. Does the worker have the proper English writing skills?
  17. Can the worker proofread?
  18. Want to provide rating without cognitive load.
  19. The requestor is the most affected party on MTurk, unlike the sellers on amazon.com
  20. Rating must be given from the side which is getting affected.
  21. Requesters may or may not have an idea of the time required to complete the task

C. Bernstein, M. et al. (2010).

  1. Make grammatical mistakes while writing
  2. Find it a huge effort to correct a tense change
  3. Find it difficult to trim the size of the written paragraph
  4. Find it difficult to explain the task to workers
  5. Commonly make spelling mistakes
  6. Take help from friends and people they know to proof read or reduce the article size
  7. Expects further help in complex tasks
  8. Expects text correction delay time to be less – prefers real time
  9. Is concerned about the task and content privacy
  10. Wants an interface which is understandable and helps him progress
  11. Wants the correction to be clearly differentiable from the original data
  12. Looks out for what part of the sentence has been compressed and which part has been expanded
  13. Looks out for people who can complete the task on time – get things in limited pages and within the deadline
  14. Is concerned about whether the worker has understood the task and wants to confirm it
  15. wants the control in his hands to accept the corrections or not
  16. Wants to notify only the change required in certain pages or paragraphs
  17. Finds it difficult to express the correction method correctly to the computer in case of scripting.
  18. Finds it difficult sometimes even to express to workers on what correction is to be done
  19. wants to decide on how many people should work on his content
  20. Wants to avoid lazy turkers wasting his time
  21. Similarly wants to avoid Eager Beavers as they complicate the task for him
  22. Wants to keep a check on whether the workers are motivated enough while working

D. Stolee, K. & Elbaum, S. (2010)

  1. Requestor monitored the updates of tasks completion.
  2. Requestors devise methods to cull user diversity, user experience, and worker gaming of the system.
  3. Requestors devised routes to gather information that is normally anonymized by the turk system.
  4. "open ended answers helped us to understand points of confusion and why participants differed"
  5. "was the result of a misinterpreted question"
  6. requestors used the UI to approve work completed and access the results.
  7. a credit card is required to front-load the requestor account.
  8. hit creation can be tested in the developer sandbox
  9. requestors find that Turk "provides a framework ... for recruiting, ensuring privacy, distributing payment, and collecting results."
  10. "results can be easily downloaded in a CSV format"
  11. Requestors "create custom qualification tests .. using the command line tool or API"
  12. A requestor who is a surveyor understands "the importance of having enough subjects (i.e. workers) of the right kind."
  13. Requestor "doubled the initial ... reward"
  14. requestor "sent emails to two internal mailing lists."
  15. Requestor might "observe students... instead of observing software engineers practicing."
  16. Requestor might "perform studies without human subjects." [bad practice]
  17. Requestor might "evaluate visualization designs, conduct surveys about information seeking behaviors, and perform NL annotations to train machine learning algorithms."
  18. Requestor might "leverage a global community... to solve a problem, classify data, refine a product, gather feedback"
  19. Requestor required "to pass a pretest."
  20. Researchers "estimated aptitude by measuring education and qualification score."
  21. Researchers create qualifications for workers by using domain-specific knowledge and quality of work history.
  22. Requestors evaluate work after completion.
  23. Requestors made task templates and combined tasks with a shared type ID.
  24. Requestors "presented [workers] with treated or untreated pipe for each task."
  25. Requestors "could not impose their constraint and control for learning effects."
  26. Turk "caused us to waste some data."
  27. "An alternate [research] design would be to create..."
  28. requestors define the work goals, collect relevant information from the workers
  29. requestors "had less control over the [workers] participating... and variations caused by how prominently the study is displayed in the infrastructure search results."
  30. "even our study uses tasks that are much more complex and time consuming than those recommended by" Turk
  31. researchers "must consider if randomized assignment... is appropriate for their study"

E. Martin, D. et al. (2014)

  1. to request work such as image tagging, duplicate recognition, transcription, translation, object classification, and content generation
  2. rely on turk to curate and manage the quality of content for their tasks
  3. requesters become confused about what actions constitute a bad worker [one man's trash is another's treasure]
  4. requestors block who they consider bad workers
  5. requestors fundamentally ask for help from the masses, and judgment from the asker is a dynamic that is highly disrespectful

F. Irani, L. & Silberman, M. (2013)

  1. the best requesters use turk to complete large batches of micro-tasks
  2. requestors do not ask who, what, or where questions from workers to know them [false?]
  3. requesters utilize multiple avenues to assess "workers"
  4. requesters create form fields for data entry
  5. requesters upload audio for transcription
  6. requestors create requirements for data entry to address worker quality issues
  7. requestors define the structure of data entry
  8. requestors create instructions for data entry
  9. requestors specify the pool of information to be processed
  10. requestors define the criteria for work acceptance such as approval rate, country of origin, and skill specific mastery
  11. requestors recruit thousands of workers within hours
  12. requestors maintain intellectual property rights
  13. requestors vet worker outputs through algorithms (majority rule)
  14. requestors avoid responding to workers due to quantity
  15. requestors only respond to workers when things happen en masse
  16. requesters act as business people
  17. requesters shape the interaction with the crowd
  18. requestors pay Amazon money
  19. requestors review workers mutually
  20. requestors have to address the work of people from multiple nations

G. Ipeirotis, P. (2012).

  1. requestors "require workers to closely and consistently adhere to instructions for a particular, standardized task."
  2. requestors decide on the price they will pay for the task
  3. requestors complain about spammers and design methods to address them
  4. verify ex ante that workers can do the task
  5. Every requestor generates its own work request
  6. each requestor prices the request independently
  7. each requestor evaluates the answers separately from everyone else

H. Turker Nation Bonus

  1. requesters set automatic acceptance of hits after a certain period of time
  2. surprise bonuses create questions for workers
  3. Every requestor has to implement from scratch the “best practices” for each type of work.
  4. requestors learn from their mistakes and fix the design problems
  5. Every requestor needs to price its work unit without knowing the conditions of the market
  6. requestors avoid working with spammers and those who talk negatively about them
  7. requestors rely on truth and avoid fraud
  8. requestors do not work together to define commonly shared standards for tasks

I. Ipeirotis, P. (2011).

  1. requestors may calibrate their tasks to beat minimum wage
  2. requestors receive complaints and attacks based on turkers' expectations and false realities (the Turk bubble: don't breathe the air)
  3. requestors receive grants for turk research
  4. requestors create social tasks (i.e. help me for fun) when certain conditions are met
  5. requestors create market tasks (i.e. help me for money) when certain conditions are met

S. Rzeszotarski, J. & Kittur, A. (2011). [5]

  1. requestors design tasks poorly
  2. requestors split large tasks into smaller and smaller sub-tasks until they are fault tolerant
  3. requestors incorporate randomness into cooperative task designs (i.e. unknowns, surprises)
  4. requestors manipulate financial numbers and other outcome measures
  5. requestors redesign tasks to fit these methods
  6. requestors use validated data to sort out good workers from bad
  7. requestors calculate relationships between worker answers and identify erroneous workers
  8. requestors use trends to identify poor workers
  9. requestors have workers rate one another's products for quality control such as the majority-rule
  10. requestors create two types of tasks - those producing a diversity of options and those that are more standard

Observations about Workers

T. Rzeszotarski, J. & Kittur, A. (2011). [6]

  1. workers perform tasks in good faith, no matter the quality
  2. workers may often accept multiple tasks and leave them open [in browsers] while finishing others.
  3. workers accept a queue of tasks.

J. Video Interviews.

  1. Choosing a task is always done keeping in mind the worker's rating
  2. The rating should not go below 99
  3. They wait to see if the requester is paying for the hits or is ignoring their work
  4. If the requester pays promptly or provides feedback promptly, then the worker is happy to work further knowing that the requester is a good work provider
  5. If the requester doesn't have a rating, the workers mail the requester; if the requester replies promptly to the mails, that again confirms that the requester is reliable.
  6. Another confirmation the workers get when they receive a mail from the requester is that they can now have a discussion with their requester if their work is rejected and learn why it has been rejected.
  7. Some users want to make the best use of their time and focus on keyboard shortcuts to perform better and faster.
  8. Sometimes there are automatic quality-control scripts which might or might not give accurate results.
  9. Keywords - Incredible, frustrating, empowering, tiring, lonely, isolated, diverse, passionate, dynamic

Duration of work

  1. It is unpredictable
  2. It depends on the available HITs and on the pay.
  3. It also depends on when good requesters give work
  4. Sometimes it lasts all 7 days of the week
  5. Sometimes a pattern can be predicted for how and when the requester gives tasks
  6. The schedule goes all over the place; setting a fixed time to work is unpredictable

K. Ipeirotis, P. (2012).

  1. workers avoid requestors who would negatively impact them
  2. "workers ... come and go as they please"
  3. workers label images
  4. workers transcribe audio
  5. Workers need to learn the intricacies of the interface for each separate employer
  6. Workers need to adapt to the different quality requirements of each employer
  7. workers have a queue of tasks that need to be completed

L. Ipeirotis, P. (2010). Fix Turk

  1. Good workers are unable to get to the requestors
  2. Wants to be rated correctly by the requestor
  3. Wants to be able to appeal to the requestor on the work being rejected
  4. Want to be able to differentiate the types of tasks they are about to work on
  5. Workers are not experts at all tasks, so they want to choose only tasks they are comfortable with
  6. Workers want to rate the requestors according to their purpose.
  7. Workers wait for the requestor to start paying so that they can work further and rely upon the previous experience.
  8. Check for the speed of payment from the requester
  9. Check for the rejection rate for the requester
  10. Want to appeal for a rejection
  11. Check the previous work ratings and experiences with the requesters
  12. Like to know an estimate of how long it will take to complete the task
  13. Look for the most recent HIT groups or the groups with the most HITs, ignoring the smaller ones

M. Bernstein, M. et al. (2010). Soylent.

  1. Find it easy to make corrections in already written sentences
  2. Generally help in article corrections
  3. perform spell checks
  4. Bear a cognitive load in keeping the sentence meaning correct.
  5. Do not always find it easy to get an article they understand
  6. Maintain the privacy of the content and task
  7. Sometimes unsure if they made the right corrections – expecting another proof-read
  8. look for an option to filter their area of comfort in article selection
  9. Are concerned whether the requester is fine with their understanding of the task – want to confirm
  10. Expect the requestor to accept correctly done work.
  11. Find it difficult to understand, in the current interfaces, how to inform the requestor of the changes made

N. Irani, L. & Silberman, M. (2013)

  1. workers utilize screen names across many platforms
  2. workers will report their experiences with a requestor
  3. workers self evaluate their own work
  4. workers check (status/alert function) for approval and payment status for submitted work
  5. workers tolerate what they see on amazon turk and express outrage that requestors pay for the service without appropriate management
  6. workers respond with dispute messages
  7. workers convene as a mass to report problems to requestors (HIVE)
  8. workers give up intellectual property rights
  9. workers test their mturk task related skill sets
  10. workers respond to the quality of the data entry structure
  11. workers interpret instructions for data entry
  12. workers translate the information to be processed into data entry inputs
  13. workers read requirements beyond the scope of the intent of turk
  14. workers transcribe audio into form fields
  15. workers complete fields into requestor forms
  16. workers utilize multiple windows on the same screen
  17. workers utilize multiple tabs on the same browser
  18. workers forget the importance of ergonomics, rest, repetitive stress injuries
  19. turkers may not have learned about minimum wage laws
  20. turkers express 3 kinds of responses to Turk: some do it for fun, to cure boredom, or to earn income [turker types!]
  21. turkers usually expect money from tasks
  22. workers see tasks posted from outside MTurk (?)

O. Stolee, K. & Elbaum, S. (2010)

  1. Workers might "select and configure predefined modules and connecting them."
  2. Workers try to avoid the search page and complete tasks.
  3. Workers see the qualifications but might not see the specifications of the requestor.
  4. Workers identify tasks that are of similar types to match their preferences.
  5. Workers "discover Hits by searching based on some criteria, such as titles, descriptions, keywords, reward or expiration date."

P. Martin, D. et al. (2014)

  1. "view AMT as a labor market"
  2. "unfair rejection of work"
  3. "to receive pay for work"
  4. communicate with others through Turk
  5. workers identify scams
  6. workers move through poorly designed tasks
  7. workers develop relationships with requestors
  8. workers seek some form of relational reciprocity with requestors
  9. workers gather to collect information about tasks, the platform, and requestors
  10. workers protect their hall of fame and shame post at http://turkernation.com/forumdisplay.php?13-Requesters-Hall-of-Fame-Shame
  11. workers find work that they're happy with despite pay rate
  12. workers discuss money and methods to earn it best
  13. workers talk about fun, learning and play as a major reason for joining MTurk
  14. workers earn cash on the Mturk system
  15. workers contrast tasks along a play/pay continuum
  16. workers criticize the pay attitude in the forums
  17. workers rely upon MTurk to accentuate cash flow when real world work stops (The purpose for turk changes)
  18. workers select only the "best" opportunities for pay
  19. turkers compete with one another and ask questions regarding what others earn
  20. turkers set their own targets
  21. turkers respond to external events in their lives and adjust how they interact with Turk based on those events
  22. turkers schedule and allot certain times of the day to be on MTurk
  23. workers rely on it as a source of income -- partially because mturk is available, accessible, and easy to find work due to its requestor diversity
  24. workers might use turk as a breadline
  25. workers find mturk ideal because one doesn't have to consider the professional environment and transportation concerns
  26. workers avoid those requestors who are demeaning and practice mass rejection
  27. workers compare experiences
  28. workers seek out requestors based on responsiveness
  29. workers give positive and negative badges
  30. workers spend time searching for jobs
  31. workers need access to decent work
  32. workers avoid being blocked by requestors
  33. workers design HITs with requestors
  34. workers self-monitor communication practices with requestors
  35. workers expect quick pay
  36. workers sample tasks to test the requestor
  37. workers bag several hits from one requestor
  38. workers will work on several hits in multiple tabs in a browser
  39. workers examine how quickly a requestor responds to questions
  40. workers avoid majority rules grading practices -- probably used in ML labeling tasks
  41. turkers use a consensus scheme to assess requestors in the forums
  42. turkers follow rules of requestor good practice
  43. turkers seek to know how often a requestor is online, how quickly he responds to a task, and how polite the person is
  44. turkers base trust upon several dimensions - competence of the requestor, concern of the requestor, and the integrity (consistency) of interactions with the requestor
  45. some turkers scam, others try to solve this scamming through social governance
  46. in threads, individuals are accused of cheating on qualifications
  47. turkers suffer from fatigue ("I was not paying enough attention")
  48. turkers practice reciprocity with requestors
  49. turkers label individuals as flamers
  50. turkers can be overly sensitive to a rejection
  51. workers might lose work due to a bad connection if work is saved in the cloud.

Q. Turker Nation Bonus

  1. workers wait at most 30 days for hit approval as set by turk policy [7]
  2. workers define one form of hope with approval and time [8]
  3. workers identify tasks that earn bonuses, especially if the bonus occur frequently [9]
  4. turkers provide scripts for others to use in requestor specific situations [10]
  5. new turkers demonstrate misunderstandings of the Turk system and may be unfairly locked out of Turk[11]
  6. turkers inherit the problems of bureaucracy without ever knowing how the system changes [bit.ly/1OCaX3H]

R. Ipeirotis, P. (2011).

  1. workers identify spam tasks
  2. workers retaliate when they see a difference in pay for the same task from the same requestor
  3. workers decide if a task is social (i.e. fun or curing boredom) or market (i.e. money)

Page Contributors

The following people contributed to this page: @anotherhuman, @prithvi.raj