WinterMilestone 2@anotherhuman

From crowdresearch
Revision as of 08:17, 22 January 2016 by Aarongilbee (Talk | contribs)



  1. What observations about workers can you draw from the interview? Include any that may be strongly implied but not explicit.
  2. What observations about requesters can you draw from the interview? Include any that may be strongly implied but not explicit.
  3. What observations about workers can you draw from the readings? Include any that may be strongly implied but not explicit.
  4. What observations about requesters can you draw from the readings? Include any that may be strongly implied but not explicit.

Stolee, K. & Elbaum, S. (2010) Observations about Requestors

    1. Requestors monitored updates on task completion.
    2. Requestors devised methods to account for user diversity, user experience, and worker gaming of the system.
    3. Requestors devised ways to gather information that is normally anonymized by the Turk system.
    4. "open ended answers helped us to understand points of confusion and why participants differed"
    5. "was the result of a misinterpreted question"
    6. requestors used the UI to approve work completed and access the results.
    7. a credit card is required to front-load the requestor account.
    8. hit creation can be tested in the developer sandbox
    9. requestors find that Turk "provides a framework ... for recruiting, ensuring privacy, distributing payment, and collecting results."
    10. "results can be easily downloaded in a CSV format"
    11. Requestors "create custom qualification tests ... using the command line tool or API"
    12. A requestor who is a surveyor understands "the importance of having enough subjects (i.e. workers) of the right kind."
    13. Requestor "doubled the initial ... reward"
    14. requestor "sent emails to two internal mailing lists."
    15. Requestor might "observe students... instead of observing software engineers practicing."
    16. Requestor might "perform studies without human subjects." [bad practice]
    17. Requestor might "evaluate visualization designs, conduct surveys about information seeking behaviors, and perform NL annotations to train machine learning algorithms."
    18. Requestor might "leverage a global community... to solve a problem, classify data, refine a product, gather feedback"
    19. Requestor required "to pass a pretest."
    20. Researchers "estimated aptitude by measuring education and qualification score."
    21. Researchers create qualifications for workers by using domain-specific knowledge and quality of work history.
    22. Requestors evaluate work after completion.
    23. Requestors made task templates and combined tasks with a shared type ID.
    24. Requestors "presented [workers] with treated or untreated pipe for each task."
    25. Requestors "could not impose their constraint and control for learning effects."
    26. Turk "caused us to waste some data."
    27. "An alternate [research] design would be to create..."
    28. requestors define the work goals, collect relevant information from the workers
    29. requestors "had less control over the [workers] participating... and variations caused by how prominently the study is displayed in the infrastructure search results."
    30. "even our study uses tasks that are much more complex and time consuming than those recommended by" Turk
    31. researchers "must consider if randomized assignment... is appropriate for their study"
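Several of the mechanisms above (items 11, 19, 21, and 23: qualification tests, pretests, and acceptance criteria) correspond to the QualificationRequirements parameter of the MTurk CreateHIT API. The sketch below only assembles the request payload in plain Python; the function name, task details, and thresholds are illustrative assumptions, and actually submitting it would require an authenticated client such as boto3's `mturk` client. The two QualificationTypeIds are MTurk's documented system qualifications for approval rate and worker locale.

```python
# Illustrative payload only -- no API call is made here.
def build_hit_request(title, reward_usd, approval_rate_min=95, country="US"):
    """Assemble a CreateHIT-style payload with common qualification gates."""
    return {
        "Title": title,
        "Reward": f"{reward_usd:.2f}",  # MTurk expects the amount as a string
        "MaxAssignments": 30,
        "AssignmentDurationInSeconds": 600,
        "LifetimeInSeconds": 86400,
        "QualificationRequirements": [
            {   # system qualification: percent of prior assignments approved
                "QualificationTypeId": "000000000000000000L0",
                "Comparator": "GreaterThanOrEqualTo",
                "IntegerValues": [approval_rate_min],
            },
            {   # system qualification: worker locale
                "QualificationTypeId": "00000000000000000071",
                "Comparator": "EqualTo",
                "LocaleValues": [{"Country": country}],
            },
        ],
    }

request = build_hit_request("Pipe classification survey", 0.25)
print(request["Reward"])  # 0.25
```

Testing such a payload against the developer sandbox (item 8) avoids spending real funds while iterating on task design.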

Martin, D., et al. (2014) Observations about Requestors

    1. to request work such as image tagging, duplicate recognition, transcription, translation, object classification, and content generation
    2. rely on turk to curate and manage the quality of content for their tasks
    3. requesters become confused about what actions constitute a bad worker [one man's trash is another's treasure]
    4. requestors block those they consider bad workers
    5. requestors fundamentally ask the masses for help, and passing judgment on the people who answer is a dynamic that is highly disrespectful

Irani, L. & Silberman, M. (2013) Observations about Requestors

  1. the best requesters use turk to complete large batches of micro-tasks
  2. requestors do not ask who, what, or where the workers come from [false?]
  3. requesters utilize multiple avenues to assess "workers"
  4. requesters create form fields for data entry
  5. requesters upload audio for transcription
  6. requestors create requirements for data entry to address worker quality issues
  7. requestors define the structure of data entry
  8. requestors create instructions for data entry
  9. requestors specify the pool of information to be processed
  10. requestors define the criteria for work acceptance such as approval rate, country of origin, and skill specific mastery
  11. requestors recruit thousands of workers within hours
  12. requestors maintain intellectual property rights
  13. requestors vet worker outputs through algorithms (majority rule)
  14. requestors avoid responding to workers due to quantity
  15. requestors only respond to workers when things happen en masse
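Item 13's algorithmic vetting by majority rule can be sketched as a simple vote over redundant labels for the same item. The function and data below are illustrative, not from the reading; ties are left unresolved for human review.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common label, or None on a tie at the top."""
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # no clear majority -- escalate to a human
    return counts[0][0]

print(majority_vote(["cat", "cat", "dog"]))  # cat
print(majority_vote(["cat", "dog"]))         # None
```

Workers' aversion to this practice (noted in the worker observations) follows from the sketch: a correct minority answer is silently discarded, and the outvoted worker may be rejected.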

Irani, L. & Silberman, M. (2013) Observations about Workers

  1. workers convene as a mass to report problems to requestors (HIVE)
  2. workers give up intellectual property rights
  3. workers test their mturk task related skill sets
  4. workers respond to the quality of the data entry structure
  5. workers interpret instructions for data entry
  6. workers translate the information to be processed into data entry inputs
  7. workers read requirements beyond the scope of the intent of turk
  8. workers transcribe audio into form fields
  9. workers complete fields into requestor forms
  10. workers utilize multiple windows on the same screen
  11. workers utilize multiple tabs on the same browser
  12. workers neglect ergonomics, rest, and the risk of repetitive stress injuries
  13. turkers may not have learned about minimum wage laws
  14. turkers express 3 kinds of responses to turk: some do it for fun, to cure boredom, or to earn income [turker types]
  15. turkers usually expect money from tasks
  16. workers see tasks posted from outside MTurk (?)

Stolee, K. & Elbaum, S. (2010) Observations about Workers

    1. Workers might "select and configure predefined modules and connecting them."
    2. Workers try to avoid the search page and complete tasks.
    3. Workers see the qualifications but might not see the specifications of the requestor.
    4. Workers identify tasks that are of similar types to match their preferences.
    5. Workers "discover Hits by searching based on some criteria, such as titles, descriptions, keywords, reward or expiration date."
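Item 5's discovery behavior, filtering HITs by title, keywords, or reward, can be sketched as a simple filter. The HIT records and field names below are hypothetical stand-ins for what the Turk search page exposes:

```python
def search_hits(hits, keyword="", min_reward=0.0):
    """Return HITs whose title matches the keyword and whose reward clears the floor."""
    keyword = keyword.lower()
    return [h for h in hits
            if keyword in h["title"].lower() and h["reward"] >= min_reward]

hits = [
    {"title": "Transcribe audio clip", "reward": 0.50},
    {"title": "Tag images of pipes",   "reward": 0.05},
]
print(search_hits(hits, keyword="transcribe", min_reward=0.10))
```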

Martin, D., et al. (2014) Observations about Workers

    1. "view AMT as a labor market"
    2. "unfair rejection of work"
    3. "to receive pay for work"
    4. communicate with others through Turk
    5. workers identify scams
    6. workers move through poorly designed tasks
    7. workers develop relationships with requestors
    8. workers seek some form of relational reciprocity with requestors
    9. workers gather to collect information about tasks, the platform, and requestors
    10. workers protect their hall of fame and shame posts
    11. workers find work that they're happy with despite pay rate
    12. workers discuss money and methods to earn it best
    13. workers talk about fun, learning and play as a major reason for joining MTurk
    14. workers earn cash on the Mturk system
    15. workers contrast tasks along a play/pay continuum
    16. workers criticize the pay attitude in the forums
    17. workers rely upon MTurk to supplement cash flow when real-world work stops (the purpose of turk changes)
    18. workers select only the "best" opportunities for pay
    19. turkers compete with one another and ask questions regarding what others earn
    20. turkers set their own targets
    21. turkers respond to external events in their lives and adjust how they interact with turk based on those events
    22. turkers schedule and allot certain times of the day to be on MTurk
    23. workers rely on it as a source of income -- partially because mturk is available, accessible, and easy to find work due to its requestor diversity
    24. workers might use turk as a breadline
    25. workers find mturk ideal because one doesn't have to consider the professional environment and transportation concerns
    26. workers avoid those requestors who are demeaning and practice mass rejection
    27. workers compare experiences
    28. workers seek out requestors based on responsiveness
    29. workers give positive and negative badges
    30. workers spend time searching for jobs
    31. workers need access to decent work
    32. workers avoid being blocked by requestors
    33. workers design HITs with requestors
    34. workers self-monitor communication practices with requestors
    35. workers expect quick pay
    36. workers sample tasks to test the requestor
    37. workers bag several hits from one requestor
    38. workers will work on several hits in multiple tabs in a browser
    39. workers examine how quickly a requestor responds to questions
    40. workers avoid majority-rule grading practices -- probably used in ML labeling tasks
    41. turkers use a consensus scheme to assess requestors in the forums
    42. turkers follow rules of requestor good practice
    43. turkers seek to know how often a requestor is online, how quickly he responds to a task, and how polite the person is
    44. turkers base trust upon several dimensions - competence of the requestor, concern of the requestor, and the integrity (consistency) of interactions with the requestor
    45. some turkers scam, others try to solve this scamming through social governance
    46. in threads, individuals are accused of cheating qualifications
    47. turkers suffer from fatigue ("I was not paying enough attention")
    48. turkers practice reciprocity with requestors
    49. turkers label individuals as flamers
    50. turkers can be overly sensitive to a rejection
    51. workers might lose work due to a bad connection when work is saved in the cloud.

TurkNation Bonus Observations about Requestors

    1. requesters set automatic acceptance of hits after a certain period of time
    2. surprise bonuses create questions for workers
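The automatic acceptance in item 1 maps onto a per-HIT auto-approval delay, which Turk policy caps at 30 days. A minimal sketch of the clamping arithmetic; no API call is made, and the function name is an assumption:

```python
MAX_AUTO_APPROVAL_DAYS = 30  # platform policy ceiling observed by workers

def auto_approval_delay(days):
    """Clamp a requester's chosen delay to the 30-day policy maximum, in seconds."""
    return min(days, MAX_AUTO_APPROVAL_DAYS) * 24 * 60 * 60

print(auto_approval_delay(7))   # 604800
print(auto_approval_delay(90))  # 2592000 (clamped to 30 days)
```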

TurkNation Bonus Observations about Workers [1]

    1. workers wait at most 30 days for hit approval as set by turk policy
    2. workers define one form of hope with approval and time


    1. workers identify tasks that earn bonuses, especially if the bonuses occur frequently
    2. turkers provide scripts for others to use in requestor specific situations


    1. new turkers demonstrate misunderstandings of the Turk system and may be unfairly


    1. turkers inherit the problems of bureaucracy without ever knowing how the system changes


Martin, D., et al. (2014)

    1. "how to motivate better, cheaper, and faster performance... without paying much"

Stolee, K. & Elbaum, S. (2010)

    1. to eliminate worker variability
    2. to control variability
    3. researchers need "some understanding of the system capabilities and constraints"
    4. the system needs to direct requestors to other services in the business space that are more apt for the task at hand.
    5. requestors need the ability to control the presentation of their work: by priority, sequence, preference, iteration, randomness, importance...

Needs identified during the worker-requestor interviews

    1. to identify patterns of requestors
    2. to be able to receive work immediately
    3. to be able to pause work demands
    4. to be able to SLEEP
    5. to meet a daily goal (how does this daily goal shape decision making?)
    6. to quickly test a hypothesis
    7. to write clear instructions
    8. to post a batch of questions
    9. to receive information on how to improve task design
    10. to engage in conversation with workers
    11. to monitor approval ratings
    12. to gauge the level of threat a requestor is towards approval ratings
    13. to scatter work across requestors
    14. to select those who provide HITs with good background
    15. to have a name and email to contact people before HITs
    16. to post hits without much interaction
    17. to know people are on the other side
    18. to label X of this item
    19. to optimize worker speed in job design and shape the UI
    20. to avoid rejecting workers
    21. to maintain quality assurance of workers
    22. to send out sample hits to test task
    23. to gauge preferences of requestors
    24. to be able to manage interactions with workers at scale
    25. to manage worker correspondence
    26. to rate workers fairly
    27. to automatically clarify and challenge qualification based rejection
    28. to meet face to face with others
    29. to manage time better as quantity grows (the challenges and methods change)
    30. to connect with outside services that expand one's professional capacities
    31. to have strong best-worker relationships (10 batches)
    32. to pre-schedule work at intervals
    33. to assure quality, truthful and honest responses
    34. to connect with worker communities
    35. to have a hit within an iFrame
    36. to be able to build one's own efforts
    37. to look at the hit and estimate the value of effort... (paper: Estimating Charlie's Run-time Estimator)
    38. to manage the expectations of work involved and run times tied to HITs
    39. to know the other worker's personalities and how it affects work
    40. to communicate with others the reality of a job
    41. to match job type with work preferences and skill sets
    42. to earn as much profit (revenue) as possible
    43. to match typing tasks with those who like typing tasks
    44. to match button clickers with button clicking tasks
    45. to estimate the value of a day
    46. to have a buffer of work demands over time
    47. to avoid random responses
    48. to accept work that is better than chance
    49. to avoid rejections on the reputation system
    50. to have a constant contact ability with the requestor
    51. to have a buffer of points for reputation management
    52. to have buffer time that accommodates interruptions and disabilities

Descriptors of Turk from the interviews: incredible enabler of scientific research; anonymous; frustrating; complicated; tricky; absolutely unpredictable; great; terrible power; empowering; lonely; isolated; tiring; diverse and amazing; futuristic; dynamic humans

What rules of thumb do people use? Try 2 minutes of work, then reassess whether to continue.
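The 2-minute rule of thumb amounts to projecting an effective hourly rate from a short sample and keeping the HIT only if it clears a target wage. A minimal sketch with hypothetical numbers:

```python
def projected_hourly_rate(reward_usd, sample_seconds):
    """Extrapolate the hourly wage a HIT would pay at the sampled pace."""
    return reward_usd * 3600 / sample_seconds

def keep_hit(reward_usd, sample_seconds, target_rate=6.0):
    """Continue with the HIT only if the projected rate clears the target wage."""
    return projected_hourly_rate(reward_usd, sample_seconds) >= target_rate

print(keep_hit(0.25, 120))  # $0.25 in 2 min -> $7.50/hr -> True
print(keep_hit(0.05, 120))  # $0.05 in 2 min -> $1.50/hr -> False
```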

types of requestors: volume, income