Winter Milestone 1 (codexxx)

From crowdresearch
Revision as of 07:12, 17 January 2016 by Paul abhratanu (Talk | contribs)


In the first milestone, I experienced and studied various crowdsourcing platforms and got a real inside view of what crowdsourcing actually is. After visiting various platforms, I came to know the strengths and weaknesses of each one. On this page I have written about my experience on a crowdsourcing platform (MTurk) and have tried to compare it with other platforms. Further, I have written about the strengths and weaknesses of the research paper readings, and I hope the points will be noticed by the researchers. Overall, the milestone 1 experience was really rewarding, and I will try to dedicate as much as I can to further milestones.

Experience the life of a worker on Mechanical Turk

Amazon Mechanical Turk is definitely a good crowdsourcing platform. I came to know about it only during this internship, so thanks to Stanford University for giving me exposure to the crowdsourcing market.

When I visited their website for the first time, I saw two options: one to sign in as a worker and the other to sign in as a requester. When I registered, I was asked to wait for a confirmation email that would verify my account. After one day I received a mail stating: 'We have completed our review of your Amazon Mechanical Turk Worker Account. We regret to inform you that you will not be permitted to work on Mechanical Turk. Our account review criteria are proprietary and we cannot disclose the reason why an invitation to complete registration has been denied. If our criteria for invitation changes, you may be invited to complete registration in the future.'

After this I was in a dilemma about whether I would be able to sign in at all, which I think is a drawback of AMT: they didn't state things clearly in their email. After signing in, I could see three tabs (Your Account, HITs, Qualifications). In the HITs section, I saw some work with HITs available and qualifications required. When I clicked, it asked me to complete a qualification test. I was getting a little frustrated, so I sorted the results to show only HITs I was qualified to participate in. Then I found some tasks and started doing them; I completed 5 HITs of $0.05 each, and the Account section shows the results as pending. Overall, the experience was not so good: AMT lacks coordination, and workers are not given many benefits. Also, for countries outside the USA it is a hectic process to register, qualify for HITs, etc.


Strengths:

  • Workers can sort the results and view only the type of work they want to do.


Weaknesses:

  • Lacks coordination and synchronization.
  • Workers are not treated well.
  • Few benefits for workers in countries outside the USA.

Experience the life of a requester on Microworkers

For the requester experience I used Microworkers. The experience was not as extensive as the worker side, because I did not actually post a job due to lack of money, but I will discuss the initial experience I got. First, I needed to sign in; then there was an option to 'create TTV campaign'. After clicking it, there were several options (Microworkers group, create campaign, cost, etc.). I did not proceed further because I thought that once I uploaded the task I would have to pay money, which I was lacking.


Strengths:

  • A good user interface with clear instructions.
  • The categories listed are interesting.

Explore an alternative crowd-labor market

I explored the TaskRabbit crowd-labor platform. On comparing the two, I came to know the differences between them. They differ in the types of tasks offered: AMT focuses on collective micro-tasks distributed to the masses, while TaskRabbit provides labor tasks, e.g. delivering, cleaning, handiwork, assistance, and moving. Users who become workers are known as 'Taskers' on TaskRabbit, and they need to apply online. In short, MTurk is for very simple digital tasks (like OCR and image recognition), while TaskRabbit is for non-digital, real-world tasks (like plumbing and house-moving).

TaskRabbit is more user-friendly, with a much better GUI than MTurk, and the tasks are presented in a more organized form. Overall, a user will prefer TaskRabbit over Amazon MTurk.


MobileWorks

MobileWorks is a mobile, web-based crowdsourcing platform for people who live in developing countries (e.g. India) to participate in microtasks, such as human optical character recognition (OCR).


The application can easily reach poor or middle-class people, and its payment facility will help create interest among more and more users. It will also help expand the crowdsourcing market in India, which is currently limited to educated people.


Strengths:

  • Very helpful for those with limited English literacy and no access to a desktop computer.
  • Phones with limited screen resolution have no problem running the application.
  • Provides accurate results for blurry text.
  • Very efficient: people can easily earn 20-25 Indian rupees while doing their regular job.
  • Multiple entries per task yield very high-quality crowdsourced human OCR work on simple mobile phones.
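The "multiple entries" point above is essentially redundancy plus aggregation: each OCR snippet goes to several workers, and the answers are combined. As a minimal sketch, assuming a simple majority-vote scheme (the function name and the voting rule are my illustration, not MobileWorks' documented pipeline):

```python
from collections import Counter

def aggregate_ocr_entries(entries):
    """Pick the most common transcription among redundant worker entries.

    Returns (winner, agreement ratio), or None if there are no entries.
    Hypothetical illustration of majority voting; the platform's actual
    aggregation logic may differ.
    """
    if not entries:
        return None
    winner, votes = Counter(entries).most_common(1)[0]
    return winner, votes / len(entries)

# Three workers transcribe the same blurry word; two of them agree.
print(aggregate_ocr_entries(["taxi", "taxi", "text"]))
```

Low agreement ratios could then flag a snippet for re-posting to more workers.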


Suggestions:

  • Local-language translation is very important if this application is launched in India, and could give about a 98% success rate.
  • Keeping in mind that many users are barely literate, passwords should be kept secure in case a user forgets to log out.
  • When a user signs up, a small payment should be made so that they have further interest in doing the tasks.
  • Full speech-recognition support for blind people, although this cannot be achieved on low-end phones.


Daemo

Daemo is a crowdsourcing platform in development at Stanford University, based on the initiative of appreciating workers on the completion of microtasks.


This platform really has the upper hand compared to other crowdsourcing platforms like Amazon MTurk. Its simplified workflow is really impressive, and I think it is a great initiative by Stanford University.


Strengths:

  • Takes care of fair wages, respect for workers, and convenience in authoring effective tasks.
  • Another effective feature is prototype tasks, where each new task must first launch in an intermediate feedback stage.
  • The way big tasks are divided into smaller milestones, each given a particular time to solve, is very impressive.
  • Daemo addresses the power imbalance and mitigates its inherent issues by introducing a representative democratic governance model that elects a leadership board composed of three workers, three requesters, and one researcher.


Although the project looks fine to me, I would still suggest some improvements.

  • A more interactive design and description, so that users are impressed at a glance.
  • Alongside the task category there should be a domain category, so that users can select tasks with ease.
  • Review should be done in stages by people with expertise in a particular domain, so that they can correctly identify users and their abilities.

Flash Teams

Flash Teams is a framework for dynamically assembling and managing paid experts from the crowd.


These teams consist of sequences of linked modular tasks and handoffs that can be computationally managed. Tasks are completed in less time, which is a plus point.
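The "linked modular tasks and handoffs" idea can be sketched as a tiny pipeline, where each module's output artifact becomes the next module's input. This is only an illustration of the structure, assuming made-up module names and a plain string as the handoff artifact; real flash teams also manage live experts, deadlines, and replacement hiring, none of which is modeled here:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Module:
    """One modular task in a flash team."""
    name: str
    hours: int                    # time budget for this module
    work: Callable[[str], str]    # transforms the incoming handoff artifact

def run_pipeline(modules: List[Module], brief: str) -> str:
    """Run linked modules in sequence; each output is the next handoff."""
    artifact = brief
    for m in modules:
        artifact = m.work(artifact)
    return artifact

# A hypothetical design-prototype-test team working from a napkin sketch.
team = [
    Module("Design", 4, lambda a: a + " -> mockups"),
    Module("Develop", 8, lambda a: a + " -> prototype"),
    Module("Test", 2, lambda a: a + " -> report"),
]
print(run_pipeline(team, "napkin sketch"))
# prints: napkin sketch -> mockups -> prototype -> report
```

Because each module declares what it consumes and produces, a computational manager can track progress per module and, as the paper's experiments show, swap in a replacement worker without restarting the whole pipeline.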


Strengths:

  • Flash teams use teamwork to complete complicated and difficult tasks by dividing them into modules.
  • Flash teams performed much better than the control teams: even the slowest flash team completed the task faster than the fastest control team.
  • Flash teams required less coordination than the control teams and were better able to take advantage of on-demand recruiting from the crowd.
  • When a member of a control team quit, the entire team's performance suffered; flash teams, in contrast, were robust and could reach out to the crowd for a replacement quickly.


Weaknesses and suggestions:

  • It has not been shown that flash teams can gather paid experts from the crowd to complete complex, interdependent tasks quickly and reliably at larger scales.
  • Flash teams were tested only on napkin-sketch design teams; other types of teams were not tested. Furthermore, there may be coordination conflicts.
  • Teams do not always go according to plan; future work should also explore issues related to runtime course correction and dispute resolution.
  • Some motivational aspects should be kept for the teams so that they work more enthusiastically with the modules and create better results.


References

Narula P, Gutheim P, Rolnitzky D, et al. MobileWorks: A Mobile Crowdsourcing Platform for Workers at the Bottom of the Pyramid. Human Computation, 2011.

Stanford Crowd Research Collective. Daemo: A Crowdsourced Crowdsourcing Platform.

Retelny D, Robaszkiewicz S, To A, et al. Expert Crowdsourcing with Flash Teams. Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST). ACM, 2014: 75-85.