Winter Milestone 1: Philanthrope
This is the Milestone-1 submission of team Philanthrope. We are a team of two (@shyam.jvs, @adityakumarakash) studying Computer Science at the Indian Institute of Technology Bombay. This page contains the essence of our work done in week 1.
Experience as a Worker on Mechanical Turk
We completed $0.90 worth of tasks as a worker on the Sandbox version of Amazon Mechanical Turk. Being outside the US, we faced issues getting ourselves registered on the actual MTurk, and hence used the sandboxed version. It took us around 3 hours to earn this money. The overall experience of using MTurk was pleasant, except for a few shortcomings that we noticed. We briefly discuss the positive and negative points of MTurk below.
Positives
- A smoothly running platform and a scalable solution for crowd-sourcing at large.
- A simple, easy-to-understand user interface for someone who is new to the platform.
- Allows searching and sorting of HITs based on reward amount, time allotted, expiration date, and so on.
- A system for requesting and obtaining qualifications, which helps add credibility to the worker.
- Statistics to gauge performance like HIT submission rate, HIT acceptance rate, HIT rejection rate, etc.
- Being a web service, it is accessible from almost any smart device with an internet connection. Thus, the power to earn lies at your fingertips.
- On philosophical grounds, these tasks give workers a feeling of fulfillment and make them feel their time is being used constructively.
Negatives
- There is no direct channel of communication between worker and employer, so no negotiation is possible.
- Tasks are typically low-paid due to their monotonous nature, which in turn leads to worker dissatisfaction. A feeling of not being compensated enough for the time and effort put in often arises, and a worker might eventually give up, considering that alternative jobs could be monetarily more fruitful.
- Because unsatisfactory responses are simply rejected without any further feedback, workers have no chance to learn which skills they are lacking.
- Employers might turn out to be fraudsters who take the work but refuse to pay, claiming the work was unsatisfactory. Workers have no way to punish such fraudulent employers.
- There is also no communication among workers. This is a serious drawback because important information, such as an employer being fraudulent, cannot spread across workers. This has led to third-party platforms like Turkopticon and Dynamo that aid communication.
Experience as a Requester on Mechanical Turk
Explore alternative crowd-labor markets
Since the MTurk sandbox was accessible to us, we did not explore other crowd-sourcing platforms. Besides, since MTurk is currently almost synonymous with crowd-sourcing, we thought we would explore it in more detail.
Strengths of MobileWorks
- A cheap way to perform human-powered OCR
- A simple interface that allows building a large user base without any training
Improvements / Suggestions
- The platform could simultaneously be developed for smartphones, e.g. as mobile apps
- Payments for tasks could be adjusted based on their difficulty
- Choices of language, and between handwritten characters and printed articles, could be added
- Tracking how often subjects felt the urge to use the system over time could help make the system more engaging
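The difficulty-based payment suggestion above can be sketched in code. This is a minimal, hypothetical model; the tiers, multipliers, and base rate are our own illustrative assumptions, not MobileWorks' actual pricing scheme.

```python
# Hypothetical sketch: difficulty-adjusted task pricing.
# Tiers and multipliers are illustrative assumptions.

DIFFICULTY_MULTIPLIER = {
    "easy": 1.0,     # e.g. clean printed text
    "medium": 1.5,   # e.g. noisy scans
    "hard": 2.5,     # e.g. cursive handwriting
}

def task_payment(base_cents: int, difficulty: str) -> int:
    """Scale a base reward (in cents) by task difficulty."""
    return round(base_cents * DIFFICULTY_MULTIPLIER[difficulty])

print(task_payment(10, "easy"))  # 10
print(task_payment(10, "hard"))  # 25
```

A real platform would likely estimate difficulty from task metadata (image quality, script type) rather than a fixed label, but the pricing rule would have the same shape.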
Strengths of Daemo
- Maintaining a balance of power by democratically including representatives from both workers and requesters is a really good idea
- Prototyping the task, and further discussion of the task between requester and worker, helps improve the task specification and the consistency between the worker's deliverable and the requester's expectation
- Establishing trust between involved parties improves work quality
- The Boomerang reputation system is cleverly designed to ensure that requesters and workers rate each other honestly
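The core Boomerang idea is that your own ratings come back to affect you: a requester's past ratings of workers determine which workers that requester is matched with first, so dishonest ratings directly hurt the rater. A toy sketch of that matching rule, with made-up data and a made-up neutral default (not Daemo's actual algorithm):

```python
# Hypothetical sketch of the Boomerang incentive: workers a requester
# rated highly in the past are offered that requester's tasks first.
# The default score and data below are illustrative assumptions.

def rank_workers_for_requester(past_ratings, workers):
    """Order candidate workers by this requester's own prior ratings.

    past_ratings: {worker_name: rating on a 1-5 scale} given by this
    requester; unrated workers get a neutral default of 3.0.
    """
    return sorted(workers, key=lambda w: past_ratings.get(w, 3.0), reverse=True)

ratings = {"alice": 5.0, "bob": 2.0}
print(rank_workers_for_requester(ratings, ["bob", "carol", "alice"]))
# ['alice', 'carol', 'bob']
```

Because inflating a bad worker's rating would push that worker to the front of your own future queue, honest ratings become the self-interested choice.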
Suggestions / Improvements
- The initial ordering of tasks for workers, and of workers for requesters, could be improved by categorizing tasks/workers based on their field of work
- Introducing a followers/following concept could allow users to watch other workers/requesters, and thus provide indirect ways to find tasks/workers
- The above point would also allow workers and requesters to move beyond their usual requesters and workers respectively and explore, thus exploiting the market's potential to the fullest
- Workers/requesters may carry an initial bias from poor early performance, even though they improve over a period of time. So suggestions of workers/requesters should also be based on how other requesters/workers have rated them in the recent past. This essentially means introducing an overall reputation as well.
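The recency-weighted reputation suggested in the last point can be sketched as an exponentially decayed average, so old ratings fade as new ones arrive. The decay factor is an illustrative assumption, not a tuned value:

```python
# Hypothetical sketch: overall reputation that weights recent ratings
# more heavily, letting early poor performance fade over time.
# The decay factor 0.8 is an illustrative assumption.

def recency_weighted_reputation(ratings, decay=0.8):
    """Exponentially decayed average of a chronological rating list.

    ratings: scores (e.g. 1-5), oldest first; newest gets weight 1.
    """
    weight_sum = 0.0
    score_sum = 0.0
    for i, r in enumerate(reversed(ratings)):  # newest has weight decay**0
        w = decay ** i
        weight_sum += w
        score_sum += w * r
    return score_sum / weight_sum if weight_sum else 0.0

# A worker who started poorly but improved scores above the plain mean:
history = [1, 2, 4, 5, 5]
print(recency_weighted_reputation(history), sum(history) / len(history))
```

This keeps a single overall reputation number while still rewarding improvement, which is exactly the bias-correction the bullet above asks for.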
Strengths of Flash Teams
- Allows engineering-oriented macro tasks to be completed effectively using crowd-sourcing
- Exploits the potential of the crowd by allowing collaboration among experts
- Faster completion of tasks compared to traditional approaches
Suggestions / Improvements
- Managing users based on geographical regions would help collaboration and facilitate meetings
- Giving workers ownership of the work done would boost confidence in the system and maintain trust in other users
- A more sophisticated technique to assemble the experts, based on factors such as working speed, geography, and interests, would increase productivity
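The expert-assembly suggestion in the last bullet can be sketched as a simple scoring function over candidates. The weights, fields, and candidate data are all our own illustrative assumptions, not the actual Flash Teams method:

```python
# Hypothetical sketch: score experts for a flash team by speed,
# timezone proximity (a rough geography proxy), and interest match.
# Weights and fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    tasks_per_hour: float  # working speed
    utc_offset: int        # rough geography proxy
    interests: set

def score(expert, needed_skills, team_utc_offset,
          w_speed=1.0, w_tz=0.5, w_fit=2.0):
    """Higher is better: fast, close in timezone, skill-matched."""
    fit = len(expert.interests & needed_skills)
    tz_gap = abs(expert.utc_offset - team_utc_offset)
    return w_speed * expert.tasks_per_hour - w_tz * tz_gap + w_fit * fit

candidates = [
    Expert("dana", 3.0, 0, {"ui", "design"}),
    Expert("eli", 5.0, 9, {"backend"}),
]
needed = {"ui", "backend"}
best = max(candidates, key=lambda e: score(e, needed, team_utc_offset=0))
print(best.name)  # dana
```

Even this toy version shows the trade-off: the faster expert loses here because of the large timezone gap, which is the kind of factor the bullet above argues should be considered.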
Slack usernames of all who helped create this wiki page submission: @shyam.jvs, @adityakumarakash