Milestone 8 Pumas

From crowdresearch

There is a need to have tasks polished and verified before they are shown to workers. However, having an external person edit and verify the tasks can be expensive! We propose to turn these verification tasks into social activities that people do for FREE because of the social exchanges they receive from doing them. Sample social exchanges:

  • Mingling and reviewing each other's verification tasks (task verification party!)
  • Doing someone's verification tasks so they will attend your music concert.
  • Doing someone's verification tasks to spend quality time with them later ♥ ❤
  • Doing someone's verification tasks so they will be your date for your sister's wedding.

These mechanisms will allow people to better present their tasks to workers, without any extra costs, all while building friendships!


Storyboard




Central issues in task verification:

  • The costs involved in peer-reviewing tasks. Another person (or group of people), who is neither a requester nor a worker, has to review the task posted by the requester before it is released to all workers.
  • Task verification can take a lot of time if no one is available to review the task, or if the verification process itself is very thorough and detailed.
  • When tasks are reviewed by a third party, there is a risk of the task itself being tampered with: the reviewer may apply a subjective interpretation, may lack attachment to the work, or may feel time-pressured and not perform at their best.
  • What if the task is not improved but instead tweaked or transformed into a different task because of an inaccurate interpretation by the reviewer? This directly affects workers: they may perform the task correctly according to the reviewed version, yet have their submission rejected because it differs from the task the requester originally posted.
  • If a task is being "interpreted" or "improved" by a reviewer, the reviewer gains too much power over the situation, the tasks, and the relationship between workers and requesters.
  • The speed at which a task is reviewed is also under scrutiny. Requesters want their tasks to be taken up as soon as they post them, or at least to start receiving some kind of feedback. Near-immediate responses from reviewers would be ideal for requesters.

Discussion: Possible Limitations and Evaluation

People involved

Who should be involved in the task revision process? Just workers, or requesters as well? In previous milestones we proposed working with empathy so that the Turk market can be improved by balancing power and increasing trust between workers and requesters. That kind of collaboration can also be incorporated into this proposal: feedback from requesters can shorten the task revision process and help reviewers avoid tampering with the main idea of the task the requester is posting.

Bias Risk

This proposal also runs the risk of becoming biased, as every person has their own agenda. Some people might review for "friendship", others because of the possibility of future reciprocation (pay it forward), others to enjoy the party, and others because they have a genuine interest in reviewing tasks. These factors can limit how reliable, correct, and fast the revisions are. Additionally, under this scheme people can be influenced by others to review tasks in a certain manner (i.e., peer pressure), which does not always produce accurate results.

Correctness and Reliability

Due to the risk of bias, we need a methodology for measuring the correctness and reliability of this system. One proposal is to divide our testing into three groups. The first group would be our control group, using the task revision system in use today. The second group would implement our current proposal, doing task verification as a social activity. The third group would implement an algorithm (NLP + information retrieval + classification) to automatically review tasks. This way we can compare the three conditions and establish how effective our proposal is in terms of speed, cost, reliability, and correctness.
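To make the third (automated) condition concrete, here is a minimal sketch of what an automatic task reviewer could look like. The specific checks (instructions present, payment stated, time estimate stated, minimum length) and the regular expressions are illustrative assumptions, not the actual NLP + information retrieval + classification pipeline the proposal would use:

```python
import re

# Hypothetical checks a human reviewer might perform by hand,
# encoded as simple rules over the task description text.
CHECKS = {
    "has_instructions": lambda t: bool(re.search(r"\b(please|instructions|steps?)\b", t, re.I)),
    "states_payment":   lambda t: bool(re.search(r"\$\d+|\bcents?\b|\breward\b", t, re.I)),
    "states_time":      lambda t: bool(re.search(r"\b\d+\s*(minutes?|hours?)\b", t, re.I)),
    "not_too_short":    lambda t: len(t.split()) >= 10,
}

def review_task(text):
    """Return (passed, flags): flags lists every check the task failed."""
    flags = [name for name, check in CHECKS.items() if not check(text)]
    return (len(flags) == 0, flags)

task = ("Please follow these steps to label each image. "
        "The reward is $0.50 and the task takes about 5 minutes.")
passed, flags = review_task(task)
print(passed, flags)  # a well-formed task passes with no flags
```

Each failed check is a piece of feedback that could be returned to the requester near-instantly, which is exactly the speed dimension the comparison between the three groups would measure.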


Near-immediate feedback is a limitation of our system, as it requires time to gather people for a task verification party. However, our proposal could start with task verification parties; then, as people get to know each other and establish relationships, parties might become less necessary, and people could review tasks as a "friend favor" whenever they have free time. Nonetheless, our main goal in this proposal is to reduce the cost of task revision.

