WinterMilestone 2@anotherhuman

From crowdresearch
Revision as of 06:43, 22 January 2016 by Aarongilbee (Talk | contribs)



  1. What observations about workers can you draw from the interview? Include any that are strongly implied but not explicit.
  2. What observations about requesters can you draw from the interview? Include any that are strongly implied but not explicit.
  3. What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.
  4. What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

Stolee, K. & Elbaum, S. (2010) Written Statements about Requestors

    1. Requestors monitored task completion as work progressed.
    2. Requestors devise methods to cull user diversity, user experience, and worker gaming of the system.
    3. Requestors devised routes to gather information that is normally anonymized by the Turk system.
    4. "open ended answers helped us to understand points of confusion and why participants differed"
    5. "was the result of a misinterpreted question"
    6. requestors used the UI to approve work completed and access the results.
    7. a credit card is required to front load the requestor account.
    8. hit creation can be tested in the developer sandbox
    9. requestors find that Turk "provides a framework ... for recruiting, ensuring privacy, distributing payment, and collecting results."
    10. "results can be easily downloaded in a CSV format"
    11. Requestors "create custom qualification tests ... using the command line tool or API"
    12. Requestor who is a surveyor understands "the importance of having enough subjects (i.e. workers) of the right kind."
    13. Requestor "doubled the initial ... reward"
    14. requestor "sent emails to two internal mailing lists."
    15. Requestor might "observe students... instead of observing software engineers practicing."
    16. Requestor might "perform studies without human subjects." [bad practice]
    17. Requestor might "evaluate visualization designs, conduct surveys about information seeking behaviors, and perform NL annotations to train machine learning algorithms."
    18. Requestor might "leverage a global community... to solve a problem, classify data, refine a product, gather feedback"
    19. Requestors required workers "to pass a pretest."
    20. Researchers "estimated aptitude by measuring education and qualification score."
    21. Researchers create qualifications for workers by using domain-specific knowledge and quality-of-work history.
    22. Requestors evaluate work after completion.
    23. Requestors made task templates and combined tasks with a shared type ID.
    24. Requestors "presented [workers] with treated or untreated pipe for each task."
    25. Requestors "could not impose their constraint and control for learning effects."
    26. Turk "caused us to waste some data."
    27. "An alternate [research] design would be to create..."
    28. requestors define the work goals and collect relevant information from the workers.
    29. requestors "had less control over the [workers] participating... and variations caused by how prominently the study is displayed in the infrastructure search results."
    30. "even our study uses tasks that are much more complex and time consuming than those recommended by" Turk
    31. researchers "must consider if randomized assignment... is appropriate for their study"
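Several of the statements above (items 10 and 23) describe requestors downloading batch results as CSV and aggregating work submitted against a shared task type. A minimal sketch of that aggregation step, assuming illustrative column names (`HITId`, `WorkerId`, `Answer.label`) modeled on Turk's batch-results layout; real downloads carry many more columns:

```python
import csv
import io
from collections import defaultdict

# Hypothetical excerpt of a downloaded batch-results file; the real
# format includes AssignmentId, WorkTimeInSeconds, Input.* fields, etc.
RESULTS_CSV = """HITId,WorkerId,Answer.label
H1,W1,cat
H1,W2,cat
H1,W3,dog
H2,W1,dog
"""

def majority_labels(csv_text):
    """Group Answer.label values per HIT and return the most common one."""
    votes = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        votes[row["HITId"]].append(row["Answer.label"])
    return {hit: max(set(labels), key=labels.count)
            for hit, labels in votes.items()}

print(majority_labels(RESULTS_CSV))  # {'H1': 'cat', 'H2': 'dog'}
```

Majority voting across redundant assignments is one common way requestors turn raw CSV rows into usable labels; the statements above do not say which aggregation the authors used.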

Stolee, K. & Elbaum, S. (2010) Written Statements about Workers

    1. Workers might "select and configure predefined modules and connecting them."
    2. Workers try to avoid the search page and complete tasks.
    3. Workers see the qualifications but might not see the specifications of the requestor.
    4. Workers identify tasks that are of similar types to match their preferences.
    5. Workers "discover Hits by searching based on some criteria, such as titles, descriptions, keywords, reward or expiration date."
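Item 5 above says workers discover HITs by searching on criteria such as title, keywords, and reward. A sketch of that kind of worker-side filtering, with invented records and field names (`title`, `keywords`, `reward`) standing in for the real search schema:

```python
# Illustrative HIT records; these field names are assumptions,
# not the actual MTurk search interface.
HITS = [
    {"title": "Transcribe receipt", "keywords": ["typing", "transcription"], "reward": 0.25},
    {"title": "Label images", "keywords": ["labeling"], "reward": 0.05},
    {"title": "Audio transcription", "keywords": ["typing"], "reward": 0.50},
]

def search_hits(hits, keyword=None, min_reward=0.0):
    """Mimic worker search: keep HITs matching a keyword and minimum reward."""
    out = []
    for hit in hits:
        if keyword and keyword not in hit["keywords"]:
            continue
        if hit["reward"] < min_reward:
            continue
        out.append(hit["title"])
    return out

print(search_hits(HITS, keyword="typing", min_reward=0.20))
# ['Transcribe receipt', 'Audio transcription']
```

This also illustrates item 4: matching task types against preferences is just filtering on the metadata the search exposes.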


Stolee, K. & Elbaum, S. (2010): Needs identified from the reading

    1. to eliminate worker variability
    2. to control variability
    3. researchers need "some understanding of the system capabilities and constraints"
    4. the system needs to direct requestors to other services in the business space that are better suited to the task at hand.
    5. requestors need the ability to control the presentation of their work: by priority, sequence, preference, iteration, randomization, importance...

Needs identified during the worker-requestor interviews

    1. to identify patterns of requestors
    2. to be able to receive work immediately
    3. to be able to pause work demands
    4. to be able to SLEEP
    5. to meet a daily goal (how does this daily goal shape decision making?)
    6. to quickly test a hypothesis
    7. to write clear instructions
    8. to post a batch of questions
    9. to receive information on how to improve task design
    10. to engage in conversation with workers
    11. to monitor approval ratings
    12. to gauge the level of threat a requestor is towards approval ratings
    13. to scatter work across requestors
    14. to select those who provide HITs with good background
    15. to have a name and email to contact people before HITs
    16. to post hits without much interaction
    17. to know people are on the other side
    18. to label X of this item
    19. to optimize worker speed in job design and shape the UI
    20. to avoid rejecting workers
    21. to maintain quality assurance of workers
    22. to send out sample hits to test task
    23. to gauge preferences of requestors
    24. to be able to manage interactions with workers at scale
    25. to manage worker correspondence
    26. to rate workers fairly
    27. to automatically clarify and challenge qualification based rejection
    28. to meet face to face with others
    29. to manage time better as quantity grows (the challenges and methods change)
    30. to connect with outside services that expand one's professional capacities
    31. to have strong best-worker relationships (10 batches)
    32. to pre-schedule work at intervals
    33. to assure quality, truthful and honest responses
    34. to connect with worker communities
    35. to have a hit within an iFrame
    36. to be able to build one's own efforts
    37. to look at the hit and estimate the value of effort... (paper: Estimating Charlie's Run-time Estimator)
    38. to manage the expectations of work involved and run times tied to HITs
    39. to know the other worker's personalities and how it affects work
    40. to communicate with others the reality of a job
    41. to match job type with work preferences and skill sets
    42. to earn as much profit (revenue) as possible
    43. to match typing tasks with those who like typing tasks
    44. to match button clickers with button clicking tasks
    45. to estimate the value of a day
    46. to have a buffer of work demands over time
    47. to avoid random responses
    48. to accept work that is better than chance
    49. to avoid rejections on the reputation system
    50. to have a constant contact ability with the requestor
    51. to have a buffer of points for reputation management
    52. to have buffer time that accommodates interruptions and disabilities
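Several of these needs (items 5, 37, 38, and 45) come down to the same arithmetic: given a HIT's reward and its expected run time, is it worth a worker's day? A minimal sketch of that estimate; the function names and dollar figures are illustrative, not from the interviews:

```python
import math

def effective_hourly_rate(reward_usd, est_minutes):
    """Projected hourly earnings if every task takes est_minutes."""
    return reward_usd * 60.0 / est_minutes

def tasks_for_daily_goal(goal_usd, reward_usd):
    """How many tasks must be completed to meet a daily earnings goal."""
    return math.ceil(goal_usd / reward_usd)

# A $0.50 HIT expected to take 3 minutes projects to $10/hour,
# and a $40 daily goal requires 80 such tasks.
print(effective_hourly_rate(0.50, 3))   # 10.0
print(tasks_for_daily_goal(40, 0.50))   # 80
```

The "try 2 minutes of work" rule of thumb noted below is one way workers obtain the `est_minutes` input before committing to a batch.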

Words used to describe the platform and the people on it: incredible enablers of scientific research, anonymous, frustrating, complicated, tricky, absolutely unpredictable, great, terrible power, empowering, lonely, isolated, tiring, diverse and amazing, futuristic, dynamic humans.

What rules of thumb do people use? Try 2 minutes of work and then decide whether to retry.

Types of requestors: volume, income.