Winter Milestone 1 yashovardhan

From crowdresearch

Team yashovardhan


@yashovardhan : Yashovardhan Sharma

Experience the life of a Worker on MTurk Developer Sandbox

MTurk didn't work for me, so I tried out the Developer Sandbox as a Worker instead.

Likes :

  • All the relevant information is displayed next to the HIT.
  • Really easy to filter HITs according to different criteria (e.g. reward).
  • It's pretty easy to fool the system into thinking you're actually providing genuine answers, if you find the right kind of task. That's pretty good from a Worker's perspective, not so much from a Requester's.

Dislikes :

  • One can post HITs with a $0.00 reward, which is very misleading since that's basically asking someone to work for free. Such HITs are not classified any differently from the "paid" ones, so it's extremely easy to click on one without noticing that there is no reward associated with it.
  • The User Interface is spartan, and to be frank, quite gloomy.
  • Some HITs require a certain qualification, which may or may not include a test. While the overall idea is nice, it prevents newer workers from attempting such HITs, as they don't have any qualifications at that point in time.
  • Did I mention that the User Interface while performing a HIT is even worse?
  • Unrealistic time limits on some tasks. (e.g. Instructions are a page long, while time allotted is 30 seconds).

View of my earnings as a worker on MTurk Developer Sandbox.

Experience the life of a Requester on MTurk Developer Sandbox

MTurk didn't work for me, so I tried out the Developer Sandbox as a Requester instead.

Likes :

  • The User Interface is miles better than the one in the Worker's Sandbox. Partiality much?
  • Creating a HIT is pretty simple. The wizard is easy to follow.
  • Both simple and complex HITs can be created. For the sake of clarity, extra instructions can be provided.
  • You're given $10,000 to test creating and publishing HITs.

Dislikes :

  • While Signing Up, your "company" must be located in the US! 6.7 billion people beg to differ.
  • Data can only be provided in CSV format.
  • Tasks that require classification are easy to create; anything beyond that is very hard to design.
  • There is no fixed time of completion. My project is still not finished.
  • The correctness of the classifications provided cannot be vouched for, unless you verify everything yourself, which kind of defeats the purpose of using the platform.
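
The CSV-only input noted above is at least easy to generate programmatically. Here is a minimal sketch (the column names and rows are my own invention, purely illustrative) of building a batch file whose headers would map to the ${placeholder} names in a HIT template:

```python
import csv
import io

# Each row becomes one HIT; column headers must match the
# ${placeholder} names used in the HIT template.
rows = [
    {"image_url": "https://example.com/cat1.jpg", "question": "Is this a cat?"},
    {"image_url": "https://example.com/dog1.jpg", "question": "Is this a cat?"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["image_url", "question"])
writer.writeheader()
writer.writerows(rows)

batch_csv = buf.getvalue()
print(batch_csv.splitlines()[0])  # header line: image_url,question
```

The same file could then be uploaded through the Requester wizard as the batch input.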



MobileWorks

Strengths of the system :

  • Targets people in the right economic group, i.e., those at the very bottom. A very high percentage of India's population falls in this bracket, giving the platform a huge worker base while also giving those workers a chance to earn much more money.
  • The UI is sparse and minimal, which is a good choice considering the devices and network speeds it will be accessed on.
  • The efficiency of the platform is pretty high considering the low-cost technology involved and the limited education of the workers.
  • The task is pretty easy to perform on the go. The worker only needs a mobile phone, which almost everyone has these days.

Possible improvements :

  • The rewards could be made a little higher than what an average person earns per hour. This would incentivise more people to join the platform and participate actively.
  • The language is limited to English. If other languages were supported, say Hindi, the number of eligible workers would go up drastically.
  • Signing in with a username/password might not be the best option for workers who are not very educated. There should probably be a way to save the credentials after the first sign-in.
  • The payment platforms normal users rely on (e.g. PayPal, online banking, BTC) are not an option here. There should be a way for people to collect their reward offline from a MobileWorks centre, or from a representative who can access payments online.


Boomerang

Strengths of the system :

  • Aims to bridge a crucial gap - the trust between workers and requesters in today's day and age.
  • Introduces an improved rating system, Boomerang, that makes user reputations more accountable and helps rank users.
  • Allows highly rated workers to get early access (i.e., preference) to a requester's tasks.
  • The ratings workers give determine the ranking of the task list shown to them, i.e., a highly rated requester's tasks are given preference.
  • Prototype Tasks allow feedback iteration with workers before a task launches to the marketplace. This ensures that the final task released to all workers has been significantly improved in terms of clarity and problem explanation.
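
The rating-driven task ordering described above can be sketched as a toy model (my own illustration, not Daemo's actual implementation): tasks are simply sorted by their requester's rating, descending, before being shown to a worker.

```python
# Toy model of Boomerang-style task ordering: tasks from
# higher-rated requesters are surfaced to the worker first.
tasks = [
    {"title": "Transcribe audio", "requester_rating": 3.2},
    {"title": "Label images",     "requester_rating": 4.8},
    {"title": "Survey",           "requester_rating": 4.1},
]

# Sort descending by requester rating.
ranked = sorted(tasks, key=lambda t: t["requester_rating"], reverse=True)
print([t["title"] for t in ranked])
```

In the real system the ordering would of course feed from the Boomerang reputation scores rather than a hard-coded field.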

Possible improvements :

  • Make rating compulsory for workers. Requesters should also have to rate more than 5-10 workers. This ensures that everyone gets feedback about their work.
  • The way workers will get equal representation in the community isn't clearly specified. The term Open Governance Structure should be formally defined and its workings clearly detailed.
  • Introduce some sort of "Qualifications". This means workers can choose which areas they are skilled in and requesters can give preference to workers whose qualifications match those of their task.

Flash Teams

Strengths of the system :

  • Allows for structured collaborations between experts from the crowd.
  • Tries to create a team structure which clearly dictates who is working together and who is responsible for which task, so that even temporary groups can coordinate complex work effectively.
  • Crowd work is framed around sequences of linked tasks.
  • Flash teams enable a modular approach to completing a task. This allows combining multiple flash teams if required, based on task demand.
  • Flash teams enable complex work at crowd scale by automating the structures of traditional organisations.
  • Rather than treating the crowd as redundant resources that cannot be fully trusted, flash teams view the crowd as an elastic, on-demand set of diverse and high-quality participants. Hence, they often aim to gather experts with different expertise rather than redundant viewpoints.

Possible improvements :

  • While the overall concept seems good in theory, it needs to be thoroughly tested in real-life scenarios.
  • Additionally, the system remains untested at scale so far. But I believe we are testing that out at this very moment through this very venture!
  • Finding the right mix of people for a flash team may be hard. Plus, once a good team is found, does it make sense to break it apart?
  • Modularity may be a great asset of this platform, but it may also lead to great confusion between flash teams as to how their work actually fits in the bigger picture.
  • One of the requirements of this platform is a bunch of willing experts. Filtering out potential applicants and even finding willing workers can prove to be quite a challenge.

Milestone Contributors