WinterMilestone 1 Philanthrope

From crowdresearch
Revision as of 13:25, 17 January 2016 by Shyamjvs (Talk | contribs) (Experience as a Requester on Mechanical Turk)


This is the Milestone-1 submission of team Philanthrope. We are a team of two (@shyam.jvs @adityakumarakash), studying Computer Science at the Indian Institute of Technology Bombay. This page contains the essence of our work in week 1.

Experience as a Worker on Mechanical Turk

We completed $0.90 worth of tasks as a worker on the sandbox version of Amazon Mechanical Turk. Being outside the US, we faced issues registering on the actual MTurk and hence used the sandboxed version. It took us around 3 hours to earn this money. The overall experience of using MTurk was pleasant apart from a few shortcomings we noticed. We briefly discuss the positive and negative points of MTurk below.


Strengths

  • A smoothly running platform and a scalable solution for crowdsourcing at large.
  • A simple, easy-to-understand user interface for someone who is new to the platform.
  • Allows searching and sorting of HITs by reward amount, time allotted, expiration date, and so on.
  • A system for requesting and obtaining qualifications, which helps add credibility to the worker.
  • Statistics to gauge performance, such as HIT submission rate, HIT acceptance rate, and HIT rejection rate.
  • Being a web service, it is accessible from almost any smart device with an internet connection, so the power to earn lies at your fingertips.
  • On philosophical grounds, these tasks give workers a feeling of fulfillment and the sense that their time is being used constructively.


Shortcomings

  • No direct channel of communication between worker and employer, so no negotiation is possible.
  • Tasks typically pay little because of their monotonous nature, which in turn leads to worker dissatisfaction. A feeling of not being compensated enough for the time and effort put in often arises, and a worker may eventually give up, reasoning that alternative jobs could be monetarily more fruitful.
  • Because unsatisfactory responses are simply rejected without any further feedback, workers have no chance to learn which skills they lack.
  • Employers might turn out to be fraudsters who take the work but refuse to pay, claiming the work was not satisfactory. Workers have no way to punish such fraudulent employers.
  • Communication among workers is also absent. This is a serious drawback because important information, such as an employer being fraudulent, cannot spread among workers. This gap has led to third-party platforms like Turkopticon and Dynamo that aid communication.

Experience as a Requester on Mechanical Turk

Unfortunately, being outside the US, we were unable to get going with the requester sandbox. However, based on research on the internet and our own insight, we identified the following positive and negative points regarding the requester side of MTurk.


Strengths

  • A smoothly running platform with an easy-to-understand user interface for requesters to create HITs.
  • Allows tasks that cannot be performed by machines to be completed quickly by harnessing human intelligence from the crowd, specifically because MTurk is a large-scale platform with many workers participating regularly.
  • Requesters can obtain high-quality data by imposing qualification restrictions on workers for a given HIT. Further, responses have been validated by various studies and shown to be of decent quality.
  • Requesters get their work done much more cheaply than by hiring a chosen panel of people for the same job. This fall in cost lets them increase the amount of work or study they can carry out.
  • Since the tasks are performed by a crowd composed of people from diverse places, professions, and personalities, the responses can be treated as a fairly representative sample of the larger population.
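To make the qualification-restriction point above concrete, here is a hedged sketch of how a requester might restrict a HIT to US-based workers using the boto3 MTurk client. The title, reward, and other values are illustrative assumptions (not from our own experiments, since the requester sandbox was inaccessible to us); the code only builds the request, and the commented-out call shows where it would be sent to the sandbox endpoint.

```python
# Sketch: a HIT request with a worker-locale qualification restriction.
# "00000000000000000071" is MTurk's built-in Locale qualification type.
locale_requirement = {
    "QualificationTypeId": "00000000000000000071",
    "Comparator": "EqualTo",
    "LocaleValues": [{"Country": "US"}],
}

# Illustrative HIT parameters (values are assumptions, not real tasks).
hit_params = {
    "Title": "Label 10 images",
    "Description": "Choose the best caption for each image.",
    "Reward": "0.10",                     # USD, passed as a string
    "MaxAssignments": 3,                  # responses wanted per HIT
    "AssignmentDurationInSeconds": 600,
    "LifetimeInSeconds": 86400,
    "QualificationRequirements": [locale_requirement],
}

# With AWS credentials configured, the HIT could be created against
# the requester sandbox roughly like this:
# import boto3
# mturk = boto3.client(
#     "mturk",
#     endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
# )
# response = mturk.create_hit(Question=QUESTION_XML, **hit_params)

print(hit_params["Title"])
```

Only workers whose registered locale satisfies the requirement would be allowed to accept such a HIT, which is how requesters trade a smaller worker pool for higher-quality responses.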


Shortcomings

  • No direct channel of communication with workers, so requesters have no way to learn why a task is not being taken up (e.g., the reward is unsatisfactory or the time required is too high).
  • Requesters must put in extra effort to judge the quality of workers' responses; in some sense, this is double work.
  • Requesters often lack experience in setting the reward for a given task. If they set the reward higher than necessary, they lose money; if they set it too low, workers will not take up the task.

Explore alternative crowd-labor markets

Since the MTurk sandbox was accessible to us, we did not explore other crowdsourcing platforms. Besides, since MTurk is currently almost synonymous with crowdsourcing, we thought we would explore it in more detail.



Strengths of MobileWorks

  • A cheap way to perform human-powered OCR.
  • A simple interface that lets the platform attract a large user base without any training.

Improvements / Suggestions

  • The platform could simultaneously be developed for smartphones, e.g., as an app.
  • Payments for tasks could be adjusted based on their difficulty.
  • A choice of language, and of handwritten characters versus printed articles, could be added.
  • Data on how often subjects felt the urge to use the system over time could help in making it more engaging.


Strengths of Daemo

  • Maintaining a balance of power by democratically including representatives of both workers and requesters is really good.
  • Prototyping a task, and further discussion of it between requester and worker, help improve the task specification and the consistency between the worker's deliverable and the requester's expectation.
  • Establishing trust between the involved parties improves work quality.
  • The Boomerang reputation system is cleverly designed to ensure that requesters and workers rate each other honestly.
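The core idea behind Boomerang's incentive is that the ratings a requester gives come back to affect which workers they are matched with later, and vice versa. The toy Python sketch below is our own illustration of that feedback loop, not Daemo's actual algorithm: tasks from requesters who rated a given worker highly are shown to that worker first, so dishonest ratings directly hurt one's own future matches.

```python
# Toy sketch of Boomerang-style ordering (illustration only, not Daemo code).
# `ratings` maps (requester, worker) -> a 1-5 score the requester gave.

def order_tasks_for_worker(tasks, ratings, worker, default=2.0):
    """Sort tasks so those from requesters who rated `worker`
    highest appear first; unrated pairs fall back to `default`."""
    return sorted(
        tasks,
        key=lambda t: ratings.get((t["requester"], worker), default),
        reverse=True,
    )

tasks = [
    {"id": 1, "requester": "alice"},
    {"id": 2, "requester": "bob"},
    {"id": 3, "requester": "carol"},
]
ratings = {("bob", "w1"): 5.0, ("alice", "w1"): 3.0}

# bob rated worker w1 highest, so bob's task surfaces first for w1.
print([t["id"] for t in order_tasks_for_worker(tasks, ratings, "w1")])
```

Because the score a requester assigns feeds back into their own task's visibility, rating everyone a flat 5 (or a spiteful 1) distorts the requester's own future matching rather than harming only the worker.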

Suggestions / Improvements

  • The initial ordering of tasks for workers, and of workers for requesters, could be improved by categorizing tasks and workers by field of work.
  • Introducing a following/followers concept would let users watch other workers and requesters, providing indirect ways to find tasks and workers.
  • The above point would also let workers and requesters move beyond their usual counterparts and explore, thus exploiting the market's potential to the fullest.
  • Workers or requesters may carry an initial bias from poor performance in the beginning, even though they improve over time. Suggestions of workers and requesters should therefore also be based on how others have rated them in the recent past; this essentially means introducing an overall reputation as well.

Flash Teams

Strengths of Flash Teams

  • Allows engineering-style macro tasks to be done effectively using crowdsourcing.
  • Exploits the potential of the crowd by enabling expert collaboration.
  • Faster completion of tasks compared to traditional approaches.


Suggestions / Improvements

  • Managing users by geographical region would enable better collaboration and facilitate meetings.
  • Ownership of the work done would boost confidence in the system and maintain trust among users.
  • A more sophisticated technique for assembling the experts, based on factors such as working speed, geography, and interests, would increase productivity.

Milestone Contributors

Slack usernames of all who helped create this wiki page submission: @shyam.jvs, @adityakumarakash