From crowdresearch
Revision as of 14:32, 3 February 2016 by Aarongilbee (Talk | contribs)



  • What are the types of workarounds employed by requesters to handle worker qualifications?
  • What are the most common base elements used in cloudworking platforms?
  • Which elements of cloudworking work best in microtask contexts, and which in project contexts?
  • What are the major constraints that determine which projects are successful, failed, or ignored in the marketplaces?
  • How can CWA help to make Daemo a better platform?
  • What are the most common errors people encounter when using these platforms?
  • What are the common elements of identity for workers/requesters on these platforms?
  • What strategies do workers employ when selecting tasks, and requesters when selecting platforms?
    • number of hits
    • value per hit
    • value per time
  • What informational presentations might help workers find relevant projects?
  • How does one design a tight system that minimizes requesters' ability to creatively work around it?
  • How does one define a unit of work that is not manipulable by requesters in a way that undermines the worker?
  • How does time pressure affect worker performance in cloudwork?
  • Trust is competence, concern for the other, and integrity. What are the possible measures of trust according to this framework?
  • What prototypical tasks are used to assess crowdworkers?
    • raw log length
    • assignment ID
    • worker id
    • refined log length
    • discretized log length
    • total task time
    • before typing delay
    • within typing delay
    • on focus time
    • recorded time disparity
    • total clicks
    • total mouse movements
    • total scrolled pixels
    • total fields accessed
    • total focus changes
    • total keypresses
    • total pastes
    • total tabs
    • total backspaces
    • count of unique characters
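The log-derived measures listed above could be computed from a worker's raw interaction log. The sketch below is a minimal, hypothetical illustration (the event format and field names are assumptions, not any platform's actual schema) covering a subset of the measures: raw log length, total task time, total clicks, total keypresses, total pastes, total backspaces, and count of unique characters.

```python
def summarize_log(events):
    """Compute a few of the behavioral measures from a raw event log.

    events: list of (timestamp_seconds, event_type, payload) tuples,
    a hypothetical format assumed for illustration.
    """
    keys = [e for e in events if e[1] == "keypress"]
    return {
        "raw log length": len(events),
        "total task time": events[-1][0] - events[0][0] if events else 0,
        "total clicks": sum(1 for e in events if e[1] == "click"),
        "total keypresses": len(keys),
        "total pastes": sum(1 for e in events if e[1] == "paste"),
        "total backspaces": sum(1 for e in keys if e[2] == "Backspace"),
        # Only single-character payloads count as typed characters.
        "count of unique characters": len({e[2] for e in keys if len(e[2]) == 1}),
    }

# Tiny example log: focus a field, type "hi", correct a typo, click submit.
log = [
    (0.0, "focus", "field1"),
    (1.2, "keypress", "h"),
    (1.5, "keypress", "i"),
    (2.0, "keypress", "Backspace"),
    (2.4, "keypress", "i"),
    (3.0, "click", "submit"),
]
print(summarize_log(log))
```

The remaining measures (typing delays, focus time, scrolled pixels) would follow the same pattern, aggregating over further event types.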
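The task-selection strategies listed earlier (number of HITs, value per HIT, value per time) can be sketched as a simple ranking. The task data below is invented for illustration; the point is only that "value per time" combines reward and estimated completion time into an hourly rate.

```python
# Hypothetical tasks: (name, hits_available, reward_per_hit_usd, est_minutes_per_hit)
tasks = [
    ("image labeling", 500, 0.05, 0.5),
    ("survey", 1, 1.00, 10.0),
    ("transcription", 40, 0.35, 3.0),
]

def value_per_hour(task):
    """Estimated hourly value: reward per HIT scaled to a 60-minute rate."""
    _, _, reward, minutes = task
    return reward * 60.0 / minutes

# The "value per time" strategy: rank tasks by estimated hourly value.
ranked = sorted(tasks, key=value_per_hour, reverse=True)
for name, hits, reward, minutes in ranked:
    rate = value_per_hour((name, hits, reward, minutes))
    print(f"{name}: {hits} HITs, ${reward:.2f}/HIT, ${rate:.2f}/hr")
```

A worker following the "number of hits" strategy would instead sort by `hits_available`, and "value per hit" by `reward_per_hit_usd`; the three orderings can disagree, which is what makes the strategies worth distinguishing.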

Turker into Graphics Status System