Milestone 8 TuringMachine Input and output transducers

Input and output transducers

Tasks are vetted or improved by people on the platform immediately after submission, before workers are exposed to them. Results are likewise vetted and tweaked, for example through peer review.
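
This gate can be read as a small state machine: a task never goes live until a vetting pass finishes, and returned tasks carry the reviewers' notes back to the requester. A minimal sketch in Python, assuming hypothetical names (TaskState, Task, start_review, finish_review) that are not specified in the proposal:

    from enum import Enum, auto

    class TaskState(Enum):
        SUBMITTED = auto()   # requester has posted the task
        IN_REVIEW = auto()   # being vetted and improved by peers
        LIVE = auto()        # approved; workers can now see and claim it
        RETURNED = auto()    # sent back to the requester with feedback

    class Task:
        def __init__(self, title, instructions):
            self.title = title
            self.instructions = instructions
            self.state = TaskState.SUBMITTED
            self.feedback = []   # reviewers' notes accumulate here

        def start_review(self):
            assert self.state is TaskState.SUBMITTED
            self.state = TaskState.IN_REVIEW

        def finish_review(self, approved, notes=""):
            assert self.state is TaskState.IN_REVIEW
            if notes:
                self.feedback.append(notes)
            self.state = TaskState.LIVE if approved else TaskState.RETURNED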

Challenges

Cost: who pays for this? In other words, can this be done without hugely increasing the cost of crowdsourcing?


Speed: is it possible to do this quickly enough to give near-immediate feedback to requesters, say within 2–4 minutes? As spamgirl reports from her recent survey, the #1 thing that requesters love about AMT is that "the moment that I post tasks, they start getting done."

From Edwin: What happens when I have a task that I know is hard, but I want workers to just try their best and submit? I’m OK with it being subjective, but the panel would just reject my task, which would be frustrating.


From Edwin: Could this help deal with people feeling bad when rejecting work? Maybe we need a new metaphor, like revision.


It is important to understand why the quality of submitted tasks is low or why the instructions are confusing. This understanding is key to improving the quality of submissions. We propose the following solutions to address this issue: 


Solution 1: Trust Circle Design Process: Requester - Expert Workers - Supervisors - Workers

  • Figure 3.0 highlights the detailed review process.
  • Select Expert Workers using automated algorithms (a sketch of one possible selection algorithm follows this list).
  • Select supervisors from the set of Expert Workers. Expert Workers are paid more and are selected from a pool of highly accomplished individuals. We have also designed a ranking mechanism that can be integrated with the system to motivate workers to perform well and move into the class of Expert Workers. Motivations for being an Expert:
    • Intrinsic motivation: Bad-quality submissions and cheating behavior affect the entire crowdsourcing community. Most workers want to stop the bad actors and will volunteer their time, but the current system has no mechanism for involving workers in filtering out bad submissions. The pool of experts is a motivated group of individuals who want to maximize social welfare.
    • Extrinsic motivation: Expert Workers are paid more for the experience and managerial skills they bring. In addition, the proposed ranking mechanism and leaderboard encourage workers to perform well and move into the class of socially recognized experts.
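
The sketch referenced above: the blend of accuracy and experience, the weights, and the thresholds are all assumptions for illustration only; the proposal does not specify the selection algorithm.

    def ranking_score(worker):
        """Blend accuracy and experience into a single score in [0, 1]."""
        accuracy = worker["approved"] / max(worker["completed"], 1)
        experience = min(worker["completed"] / 1000, 1.0)  # saturates at 1000 tasks
        return 0.7 * accuracy + 0.3 * experience           # assumed weights

    def select_experts(workers, threshold=0.85):
        """Workers whose score clears the (assumed) threshold become Expert Workers."""
        return [w for w in workers if ranking_score(w) >= threshold]

    def select_supervisors(experts, count=3):
        """Supervisors are drawn from the top-ranked Expert Workers."""
        return sorted(experts, key=ranking_score, reverse=True)[:count]

    workers = [
        {"name": "alice", "completed": 1200, "approved": 1150},
        {"name": "bob",   "completed": 300,  "approved": 240},
        {"name": "carol", "completed": 900,  "approved": 880},
    ]
    experts = select_experts(workers)          # alice and carol qualify
    supervisors = select_supervisors(experts, count=1)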


[Figure: the ranking mechanism]

The task flow

  • Figure 3.1 shows a task claimed by 6 workers; this number can be larger. A sketch of this claim cap appears below.
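
A minimal sketch of the claim cap, assuming hypothetical names (ClaimableTask, claim); only the "up to N workers per task" idea comes from the figure.

    class ClaimLimitReached(Exception):
        pass

    class ClaimableTask:
        def __init__(self, task_id, max_claims=6):   # 6 as in Figure 3.1
            self.task_id = task_id
            self.max_claims = max_claims
            self.claimed_by = set()

        def claim(self, worker_id):
            if len(self.claimed_by) >= self.max_claims:
                raise ClaimLimitReached(
                    f"task {self.task_id} already has {self.max_claims} workers")
            self.claimed_by.add(worker_id)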

Feedback & Gamification

  • The figure below highlights worker A's dashboard. Please read the diagram from #0 to #5, i.e. from bottom to top.
  • Worker A receives real-time feedback and motivational messages.
  • Worker A can see live task statistics; seeing the performance of his colleagues can motivate him to participate and do well on the task (a sketch of such a feed follows this list).
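
A rough sketch of the dashboard feed referenced above. Event kinds and messages are invented for illustration; the proposal only states that feedback, motivational messages, and live statistics are displayed.

    import time
    from collections import deque

    class Dashboard:
        def __init__(self, worker_id, max_events=50):
            self.worker_id = worker_id
            self.events = deque(maxlen=max_events)   # newest events displace oldest

        def push(self, kind, message):
            """kind is one of 'feedback', 'motivation', or 'stats'."""
            self.events.append({"ts": time.time(), "kind": kind, "message": message})

        def render(self):
            # Print newest first so the oldest event (#0) lands at the bottom,
            # matching the figure's bottom-to-top numbering.
            for offset, event in enumerate(reversed(self.events)):
                index = len(self.events) - 1 - offset
                print(f"#{index} [{event['kind']}] {event['message']}")

    board = Dashboard("worker_a")
    board.push("feedback", "Your last answer matched the gold standard.")
    board.push("stats", "14 of 20 workers have submitted on this task.")
    board.push("motivation", "You are in the top 25% for accuracy. Keep going!")
    board.render()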

Over time, the system can build a network graph of workers and requesters who work well together. This can be further extended to build teams that can work together on highly complex tasks.
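
A minimal sketch of such a collaboration graph, using a success counter per pair and a union-find grouping; the edge weights and the min_successes threshold are illustrative assumptions, not part of the proposal.

    from collections import defaultdict

    collab = defaultdict(int)   # (person_a, person_b) -> successful tasks together

    def record_success(a, b):
        collab[tuple(sorted((a, b)))] += 1

    def build_teams(min_successes=3):
        """Group people linked by at least `min_successes` good collaborations."""
        parent = {}   # union-find over the strong edges

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        def union(x, y):
            parent[find(x)] = find(y)

        for (a, b), n in collab.items():
            if n >= min_successes:
                union(a, b)

        teams = defaultdict(set)
        for person in parent:
            teams[find(person)].add(person)
        return [members for members in teams.values() if len(members) > 1]

    record_success("alice", "requester_r1")
    record_success("alice", "requester_r1")
    record_success("alice", "requester_r1")
    record_success("bob", "alice")     # only one success: below the threshold
    print(build_teams())               # [{'alice', 'requester_r1'}]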
