Milestone 8 (TuringMachine): Input and output transducers


Input and output transducers

Tasks get vetted or improved by people on the platform immediately after they are submitted, and before workers are exposed to them. Results are likewise vetted and tweaked, for example through peer review.
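As a rough illustration of this idea, here is a minimal Python sketch of an input transducer (vetting tasks before workers see them) and an output transducer (vetting results before requesters see them). The Reviewer class and its vet_task / vet_result hooks are hypothetical stand-ins for whatever peer-review mechanism the platform adopts.

```python
# Hypothetical sketch: tasks and results each pass through a chain of
# peer reviewers before reaching their audience. Reviewer, vet_task,
# and vet_result are illustrative names, not an existing API.
class Reviewer:
    def vet_task(self, task: str) -> str:
        # Placeholder: a real reviewer would fix unclear instructions.
        return task.strip()

    def vet_result(self, result: str) -> str:
        # Placeholder: a real reviewer would correct obvious mistakes.
        return result.strip()

def submit_task(task: str, reviewers: list) -> str:
    """Input transducer: vet a task before any worker is exposed to it."""
    for r in reviewers:
        task = r.vet_task(task)
    return task  # now safe to publish to workers

def submit_result(result: str, reviewers: list) -> str:
    """Output transducer: vet a result before the requester receives it."""
    for r in reviewers:
        result = r.vet_result(result)
    return result
```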

Challenges

Cost: who pays for this? In other words, can this be done without hugely increasing the cost of crowdsourcing?


Speed: is it possible to do this quickly enough to give near-immediate feedback to requesters, say within 2–4 minutes? As spamgirl reports from her recent survey of requesters, "the #1 thing that requesters love about AMT is that the moment I post tasks, they start getting done."

From Edwin: What happens when I have a task that I know is hard, but I want workers to just try their best and submit? I'm OK with it being subjective, but the panel would just reject my task, which would be frustrating.


From Edwin: Could this help deal with people feeling bad when rejecting work? Maybe we need a new metaphor, like revision.

Solutions

Solution 1: Fixing the TASK. In this scenario we leverage the one-to-many relationship in the graph, i.e. one requester to many workers.

  • Case 1 Hypothesis: Emergency Brake: tasks that have serious design flaws are easy to fix:
    • If a task has serious design flaws, then the majority of workers will have trouble understanding it. At present there is no mechanism to capture this feedback.
    • We propose a UI control that lets workers immediately report issues caused by task flaws. This is similar to the emergency brake used on trains.
    • Once the task is posted, it has a timestamp associated with it. If 40%–50% of the workers who are working on the task apply the emergency brake and report unclear instructions, the task is placed on hold and the requester is notified (see the sketch after this list). In addition, workers can provide feedback to improve the task.


  • Case 2: The task has moderate design flaws.
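As a concrete illustration of the Case 1 emergency brake, here is a minimal Python sketch. The Task class, pull_brake method, and notify_requester helper are illustrative assumptions; the 0.40 threshold is the lower bound of the 40%–50% range proposed above.

```python
# Sketch of the Case 1 "emergency brake": if enough active workers
# report unclear instructions, the task is held and the requester
# is notified. All names here are hypothetical.
from dataclasses import dataclass, field

BRAKE_THRESHOLD = 0.40  # lower bound of the proposed 40%-50% range

@dataclass
class Task:
    task_id: str
    active_workers: set = field(default_factory=set)
    brake_reports: dict = field(default_factory=dict)  # worker_id -> feedback
    on_hold: bool = False

    def pull_brake(self, worker_id: str, feedback: str) -> None:
        """A worker reports that the task's instructions are unclear."""
        if worker_id not in self.active_workers:
            return  # only workers currently on the task may pull the brake
        self.brake_reports[worker_id] = feedback
        if len(self.brake_reports) / len(self.active_workers) >= BRAKE_THRESHOLD:
            self.on_hold = True  # task is paused until the requester acts
            notify_requester(self.task_id, list(self.brake_reports.values()))

def notify_requester(task_id: str, feedback: list) -> None:
    # Placeholder: a real platform would message the requester here.
    print(f"Task {task_id} on hold; worker feedback: {feedback}")
```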

Solution 2: Trust Circle Design Process: Requester - Expert Workers - Supervisors - Workers

  • Figure 3.0 highlights the detailed review process
  • Select Expert Workers using automated algorithms
  • Select supervisors from the set of Expert Workers. Expert Workers are paid more and are selected from a pool of highly accomplished individuals. We have also designed a ranking mechanism that can be integrated with the system to motivate workers to perform well and move into the class of Expert Workers (a sketch follows this list). Motivation for being an Expert:
    • Intrinsic motivation: Bad-quality submissions and cheating behavior affect the entire crowdsourcing community. Most workers want to stop the bad actors and will volunteer their time; however, the current system has no mechanism that involves workers in filtering out bad submissions. The pool of Experts is a motivated group of individuals who want to maximize social welfare.
    • Extrinsic motivation: Expert Workers are paid more for the experience and managerial skills they bring. In addition, the proposed ranking mechanism and leaderboard system encourage workers to do well and move into the class of socially recognized Experts.
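The sketch below shows one possible shape for that ranking mechanism, assuming a worker's score mixes approval accuracy with (dampened) task volume, and that the top slices of the ranking become Supervisors and Expert Workers. The WorkerStats fields, weights, and cutoffs are assumptions, not a finalized design.

```python
# Hypothetical ranking mechanism: score workers, then promote the top
# fraction to Expert Workers and the very top (a subset of the Experts)
# to Supervisors. Fields, weights, and cutoffs are illustrative.
from dataclasses import dataclass

@dataclass
class WorkerStats:
    worker_id: str
    approved: int   # tasks approved by requesters
    completed: int  # tasks submitted in total

def rank_score(w: WorkerStats, volume_weight: float = 0.2) -> float:
    accuracy = w.approved / w.completed if w.completed else 0.0
    volume = min(w.completed / 1000.0, 1.0)  # dampen raw volume
    return (1 - volume_weight) * accuracy + volume_weight * volume

def promote(workers: list, expert_frac: float = 0.10,
            supervisor_frac: float = 0.02) -> dict:
    """Return worker_id -> 'supervisor' | 'expert' | 'worker'."""
    ranked = sorted(workers, key=rank_score, reverse=True)
    n = len(ranked)
    classes = {}
    for i, w in enumerate(ranked):
        if i < max(1, int(n * supervisor_frac)):
            classes[w.worker_id] = "supervisor"
        elif i < max(1, int(n * expert_frac)):
            classes[w.worker_id] = "expert"
        else:
            classes[w.worker_id] = "worker"
    return classes
```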


[Figure: ranking mechanism]

The task flow

  • Figure 3.1 shows the task claimed by 6 workers. This number can be larger.
[Figure 3.1]

Feedback & Gamification

  • The figure below highlights worker A's dashboard. Please read the diagram from #0 to #5, i.e. from bottom to top.
  • Worker A receives real-time feedback and motivational messages.
  • Worker A can see live task statistics, and the performance of his colleagues can motivate him to participate and do well on the task (a toy version of such a feed is sketched after this list).
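As a toy version of such a feed, the sketch below generates worker A's dashboard messages numbered from #0 upward, mirroring the bottom-to-top reading order of the figure. The message wording, statistics, and ranking logic are all assumptions.

```python
# Toy dashboard feed for worker A: real-time feedback, live task
# statistics, and a peek at the leaderboard. All content is illustrative.
from datetime import datetime

def feedback_events(worker: str, done: int, total: int, rank: int, peers: int):
    """Yield motivational dashboard messages, #0 first (bottom of the figure)."""
    now = datetime.now().strftime("%H:%M")
    yield f"[{now}] #0 You claimed the task. Good luck, {worker}!"
    yield f"[{now}] #1 Live stats: {done}/{total} units completed by the crowd."
    yield f"[{now}] #2 You are ranked {rank} of {peers} workers on this task."
    if rank == 1:
        yield f"[{now}] #3 You lead the leaderboard. Keep it up!"
    else:
        yield f"[{now}] #3 {rank - 1} colleague(s) are ahead of you right now."

for message in feedback_events("A", done=42, total=100, rank=2, peers=6):
    print(message)
```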

Over time, the system can build a network graph of workers and requesters who work well together (sketched below). This can be further extended to build teams that can work together on highly complex tasks.
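A minimal sketch of that network graph, assuming an edge weight simply counts approved collaborations between a requester and a worker; team formation could then start from each requester's strongest edges.

```python
# Hypothetical collaboration graph: edges count approved tasks between
# a requester and a worker, so strong edges mark pairs that work well
# together and can seed teams for complex tasks.
from collections import defaultdict

class CollaborationGraph:
    def __init__(self):
        # (requester_id, worker_id) -> number of approved collaborations
        self.edges = defaultdict(int)

    def record_approval(self, requester: str, worker: str) -> None:
        """Strengthen the edge each time a requester approves a worker's result."""
        self.edges[(requester, worker)] += 1

    def best_workers(self, requester: str, k: int = 5) -> list:
        """The k workers who have collaborated most successfully with this requester."""
        pairs = [(w, n) for (r, w), n in self.edges.items() if r == requester]
        return sorted(pairs, key=lambda p: p[1], reverse=True)[:k]
```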

[Figure: worker A's dashboard]