WinterMilestone 2@ahandpr
Purpose of this page
This wiki page addresses the requirements and expectations of Winter Milestone 2. Collected here are the combined observations of @anotherhuman and @prithvi.raj, which respond to the following questions:
- What observations about workers can you draw from the interview? Include any that are strongly implied but not explicit.
- What observations about requesters can you draw from the interview? Include any that are strongly implied but not explicit.
- What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.
- What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.
A list format was chosen for this page to allow rapid assessment, a quick count of the observations, and easy annotation when identifying needs. Apologies if it looks ugly and is hard to read.
Observations about Requestors
A. Video Interviews.
Ipeirotis, P. (2010) FIX TURK.
Bernstein, M. et al. (2010) Soylent.
Stolee, K. & Elbaum, S. (2010)
- Requestors monitored updates on task completion.
- Requestors devise methods to account for user diversity, user experience, and worker gaming of the system.
- Requestors devised routes to gather information that is normally anonymized by the Turk system.
- "open ended answers helped us to understand points of confusion and why participants differed"
- "was the result of a misinterpreted question"
- requestors used the UI to approve work completed and access the results.
- a credit card is required to pre-fund the requestor account.
- HIT creation can be tested in the developer sandbox (see the sketch after this list)
- requestors find that Turk "provides a framework ... for recruiting, ensuring privacy, distributing payment, and collecting results."
- "results can be easily downloaded in a CSV format"
- Requestors "create custom qualification tests .. using the command line tool or API"
- Requestor who is a surveyor understand "the importance of having enough subjects (i.e. workers) of the right kind."
- Requestor "doubled the initial ... reward"
- requestor "sent emails to two internal mailing lists."
- Requestor might "observe students... instead of observing software engineers practicing."
- Requestor might "perform studies without human subjects." [bad practice]
- Requestor might "evaluate visualization designs, conduct surveys about information seeking behaviors, and perform NL annotations to train machine learning algorithms."
- Requestor might "leverage a global community... to solve a problem, classify data, refine a product, gather feedback"
- Requestors required workers "to pass a pretest."
- Researchers "estimated aptitude by measuring education and qualification score."
- Researchers create qualifications for workers by using domain-specific knowledge and quality of work history.
- Requestors evaluate work after completion.
- Requestors made task templates and combined tasks with a shared type ID.
- Requestors "presented [workers] with treated or untreated pipe for each task."
- Requestors "could not impose their constraint and control for learning effects."
- Turk "caused us to waste some data."
- "An alternate [research] design would be to create..."
- requestors define the work goals and collect relevant information from the workers
- requestors "had less control over the [workers] participating... and variations caused by how prominently the study is displayed in the infrastructure search results."
- "even our study uses tasks that are much more complex and time consuming than those recommended by" Turk
- researchers "must consider if randomized assignment... is appropriate for their study"
Martin, D. & et al. (2014)
- requestors use Turk to request work such as image tagging, duplicate recognition, transcription, translation, object classification, and content generation
- requestors rely on Turk to curate and manage the quality of content for their tasks
- requesters become confused about what actions constitute a bad worker [one man's trash is another's treasure]
- requestors block those they consider bad workers
- requestors fundamentally ask the masses for help, and for the asker to then pass judgment on the helpers is a dynamic that is highly disrespectful
Irani, L. & Silberman, M. (2013)
- the best requesters use turk to complete large batches of micro-tasks
- requestors are not asked who their workers are, what they do, or where they come from [false?]
- requesters utilize multiple avenues to assess "workers"
- requesters create form fields for data entry
- requesters upload audio for transcription
- requestors create requirements for data entry to address worker quality issues
- requestors define the structure of data entry
- requestors create instructions for data entry
- requestors specify the pool of information to be processed
- requestors define the criteria for work acceptance such as approval rate, country of origin, and skill specific mastery
- requestors recruit thousands of workers within hours
- requestors maintain intellectual property rights
- requestors vet worker outputs through algorithms (majority rule; see the sketch after this list)
- requestors avoid responding to workers due to quantity
- requestors only respond to workers when things happen en masse
- requesters act as business people
- requesters shape the interaction with the crowd
- requestors pay Amazon money
- requestors and workers review one another
- requestors have to address the work of people from multiple nations
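One concrete way to read the algorithmic vetting flagged above (see the bullet on majority rule) is simple majority voting over redundant judgments. A minimal sketch; the labels and the two-vote threshold are invented for illustration.

```python
from collections import Counter

def majority_label(answers, min_votes=2):
    """Aggregate one item's worker answers by majority rule.

    Returns the winning label, or None when no label reaches the
    agreement threshold (such items would go back out for more
    judgments or to manual review).
    """
    if not answers:
        return None
    label, votes = Counter(answers).most_common(1)[0]
    return label if votes >= min_votes else None

# Three workers answered the same hypothetical classification HIT:
print(majority_label(["cat", "cat", "dog"]))  # -> cat
print(majority_label(["cat", "dog"]))         # -> None (no majority)
```

This may also help explain the worker-side complaint recorded later: under majority rule, an honest minority answer can be graded as wrong.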
Ipeirotis, P. (2012).
- requestors "require workers to closely and consistently adhere to instructions for a particular, standardized task."
- requestors decide on the price they will pay for the task
- requestors complain about spammers and design methods to address them
- requestors verify ex ante that workers can do the task (see the sketch after this list)
- Every requestor generates its own work request
- each requestor prices the request independently
- each requestor evaluates the answers separately from everyone else
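The ex ante screening in this list is expressed through qualifications on each HIT. A sketch assuming boto3 again; the 95% threshold, US locale, and pretest name are invented for illustration, and the two long system-qualification IDs are MTurk's published built-ins, worth verifying against the current docs.

```python
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A custom qualification type backs a pretest; with no Test XML
# attached, the requestor grants it manually after reviewing workers.
pretest = mturk.create_qualification_type(
    Name="Domain pretest (hypothetical)",
    Description="Quiz workers must pass before taking our HITs.",
    QualificationTypeStatus="Active",
)

# Acceptance criteria such as approval rate and country of origin are
# attached to each HIT as QualificationRequirements.
requirements = [
    {
        "QualificationTypeId": "000000000000000000L0",  # % assignments approved
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
    {
        "QualificationTypeId": "00000000000000000071",  # worker locale
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    },
    {
        "QualificationTypeId": pretest["QualificationType"]["QualificationTypeId"],
        "Comparator": "Exists",  # worker must hold the pretest qualification
    },
]
# `requirements` is then passed to create_hit(QualificationRequirements=...),
# so only workers meeting the criteria can accept the task.
```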
TurkNation Bonus
- requesters set automatic acceptance of HITs after a certain period of time (see the sketch after this list)
- surprise bonuses prompt questions from workers
- Every requestor has to implement from scratch the “best practices” for each type of work.
- requestors learn from their mistakes and fix the design problems
- Every requestor needs to price its work unit without knowing the conditions of the market
- requestors avoid working with spammers and those who talk negatively about them
- requestors rely on truth and avoid fraud
- requestors do not work together to define commonly shared standards for tasks
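The automatic acceptance noted in the first bullet of this list is a per-HIT parameter. A small sketch, reusing the hypothetical sandbox client and question stub from the first sketch; the three-day figure is arbitrary, and MTurk policy caps the delay at 30 days (noted again in the worker observations below).

```python
# If the requestor never reviews submitted work, MTurk approves and
# pays it automatically once this delay expires.
hit = mturk.create_hit(
    Title="Tag objects in one photo",          # hypothetical task
    Description="List the objects you see in the photo.",
    Reward="0.10",
    MaxAssignments=1,
    LifetimeInSeconds=3600,
    AssignmentDurationInSeconds=600,
    AutoApprovalDelayInSeconds=3 * 24 * 3600,  # auto-approve after 3 days
    Question=QUESTION_XML,                     # stub XML from the first sketch
)
```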
Ipeirotis, P. (2011).
- requestors may calibrate their tasks to beat minimum wage
- requestors receive complaints and attacks based on turkers' expectations and false realities (turk bubble, don't breathe the air)
- requestors receive grants for turk research
- requestors create social tasks (i.e. help me for fun) when certain conditions are met
- requestors create market tasks (i.e. help me for money) when certain conditions are met
Observations about Workers
Video Interviews.
Ipeirotis, P. (2012).
- workers avoid requestors who would negatively impact them
- "workers ... come and go as they please"
- workers label images
- workers transcribe audio
- Workers need to learn the intricacies of the interface for each separate employer
- Workers need to adapt to the different quality requirements of each employer
- workers have a queue of tasks that need to be completed
Ipeirotis, P. (2010) FIX TURK.
Bernstein, M. et al. (2010) Soylent.
Irani, L. & Silberman, M. (2013)
- workers utilize screen names across many platforms
- workers will report their experiences with a requestor
- workers self evaluate their own work
- workers check (status/alert function) for approval and payment status for submitted work
- workers tolerate what they see on Amazon Turk yet express outrage that requestors pay for a service that lacks appropriate management
- workers respond with dispute messages
- workers convene as a mass to report problems to requestors (HIVE)
- workers give up intellectual property rights
- workers test their mturk task related skill sets
- workers respond to the quality of the data entry structure
- workers interpret instructions for data entry
- workers translate the information to be processed into data entry inputs
- workers read requirements beyond the scope of the intent of turk
- workers transcribe audio into form fields
- workers complete fields into requestor forms
- workers utilize multiple windows on the same screen
- workers utilize multiple tabs on the same browser
- workers forget the importance of ergonomics, rest, and avoiding repetitive stress injuries
- turkers may not have learned about minimum wage laws
- turkers express three kinds of responses to Turk: some do it for fun, to cure boredom, or to earn income [turker types]
- turkers usually expect money from tasks
- workers see tasks posted from outside MTurk (?)
Stolee, K. & Elbaum, S. (2010)
- Workers might "select and configure predefined modules" and connect them.
- Workers try to avoid the search page and complete tasks.
- Workers see the qualifications but might not see the specifications of the requestor.
- Workers identify tasks that are of similar types to match their preferences.
- Workers "discover Hits by searching based on some criteria, such as titles, descriptions, keywords, reward or expiration date."
Martin, D. & et al. (2014)
- "view AMT as a labor market"
- "unfair rejection of work"
- "to receive pay for work"
- communicate with others through Turk
- workers identify scams
- workers move through poorly designed tasks
- workers develop relationships with requestors
- workers seek some form of relational reciprocity with requestors
- workers gather to collect information about tasks, the platform, and requestors
- workers protect their hall of fame and shame post at http://turkernation.com/forumdisplay.php?13-Requesters-Hall-of-Fame-Shame
- workers find work that they're happy with despite pay rate
- workers discuss money and methods to earn it best
- workers talk about fun, learning and play as a major reason for joining MTurk
- workers earn cash on the Mturk system
- workers contrast tasks along a play/pay continuum
- workers criticize the pay attitude in the forums
- workers rely upon MTurk to supplement cash flow when real-world work stops (the purpose of Turk changes)
- workers select only the "best" opportunities for pay
- turkers compete with one another and ask questions regarding what others earn
- turkers set their own targets
- turkers respond to external events in their lives and adjust how they interact with turk based from those events
- turkers schedule and allot certain times of the day to be on MTurk
- workers rely on MTurk as a source of income, partly because it is available and accessible and work is easy to find thanks to its requestor diversity
- workers might use turk as a breadline
- workers find mturk ideal because one doesn't have to consider the professional environment and transportation concerns
- workers avoid those requestors who are demeaning and practice mass rejection
- workers compare experiences
- workers seek out requestors based on responsiveness
- workers give positive and negative badges
- workers spend time searching for jobs
- workers need access to decent work
- workers avoid being blocked by requestors
- workers design HITs with requestors
- workers self-monitor communication practices with requestors
- workers expect quick pay
- workers sample tasks to test the requestor
- workers bag several hits from one requestor
- workers will work on several hits in multiple tabs in a browser
- workers examine how quickly a requestor responds to questions
- workers avoid majority-rule grading practices [probably used in ML labeling tasks]
- turkers use a consensus scheme to assess requestors in the forums
- turkers follow rules of requestor good practice
- turkers seek to know how often a requestor is online, how quickly he responds to a task, and how polite he is
- turkers base trust upon several dimensions: competence of the requestor, concern of the requestor, and the integrity (consistency) of interactions with the requestor
- some turkers scam, others try to solve this scamming through social governance
- in threads, individuals are accused of cheating on qualifications
- turkers suffer from fatigue ("i was not paying enough attention")
- turkers practice reciprocity with requestors
- turkers label individuals as flamers
- turkers can be overly sensitive to a rejection
- workers might lose work due to a bad connection if work is saved on the cloud.
TurkNation Bonus
- workers wait at most 30 days for HIT approval as set by Turk policy [1]
- workers define one form of hope with approval and time [2]
- workers identify tasks that earn bonuses, especially if the bonuses occur frequently [3]
- turkers provide scripts for others to use in requestor-specific situations [4]
- new turkers demonstrate misunderstandings of the Turk system and may be judged unfairly [5]
- turkers inherit the problems of bureaucracy without ever knowing how the system changes [bit.ly/1OCaX3H]
Ipeirotis, P. (2011).
- workers identify spam tasks
- workers retaliate when they see differences in pay for the same task from the same requestor
- workers decide if a task is social (i.e. fun or curing boredom) or market (i.e. money)
Need-finding Results
Identified here are our results from the observations presented above.
requestors:

workers:
Page Contributors
The following people contributed to this page: @anotherhuman, @prithvi.raj