Winter Milestone 1 CRBlr200

From crowdresearch
Revision as of 17:38, 17 January 2016 by Prasutkumar (Talk | contribs)



This is the wiki page for Winter 2016 Milestone 1 spanning 12th January 2016 to 17th January 2016.

Experience the life of a Worker on Mechanical Turk

Reflect on your experience as a worker on Mechanical Turk. What did you like? What did you dislike?
For some reason, MTurk did not accept us as workers. As a worker, that is already a defeat of sorts, as it removes one lucrative source of work. MTurk is heavily marketed as the best place to crowdsource or work, and since the other channels are not as popular or backed by a trustworthy brand like Amazon, workers are left in doubt about whether the marketplace they are working on is actually legitimate.

Two of our team members worked on Clickworker; one worked on Microworkers.

Clickworker Experience - @afreen
I started working at Clickworker. The tasks available to the Indian community range from few to virtually non-existent. The learning curve is a little too steep for my liking. I had to browse through multiple subreddits like r/beermoney & r/uhrswork to get a fair idea of what to do. The websites themselves are not very helpful.

I started working on tasks like verifying whether a search term appropriately matches its description. You can earn $0.02 per HIT. The qualification tests could be a lot more inclusive: many people could fail them simply because you may not get the hang of a task until you have practiced a bit. Sometimes a qualification test allows 3 tries, sometimes 10. The number of attempts should be more consistent, regardless of the requester or platform.

Microworker Experience - @kr.prastut
Most tasks were well defined, and they were grouped into categories such as best paying, time to rate, and best jobs. The most useful grouping was time to rate: most tasks took a day to validate, even though the listing said it would take seven days. Most of the jobs were sign-ups. Problems:

  • Most of the sign-ups required visiting links that looked like phishing or malicious websites.
  • Personal details like emails and passwords were asked for, and people tend to reuse the same passwords.
  • There was no worker profile (at least not where the tasks were posted), apart from a gold/silver medal on tasks based on ratings, which raises trust issues.
  • Some tasks required signing up on popular websites like Facebook and Instagram. The risk of the task deliverable being used against me is slim, but present. One even required signing up illegally on Apple iTunes, and the requester provided a YouTube link showing how!
  • Some required voting for a specific competitor in a competition. Is that ethical?

A quick TL;DR of what we liked as workers:

  • The preparation material provided by each requester was, if perused properly, very useful.
    • Tasks were crisp, brief, and understandable. Deliverables were also clearly specified.
  • You may get tasks that require little effort but carry higher compensation.
    • Performing reasonably on a task leads to a second task from the same employer. The second task was usually time-bound, so it involved higher compensation.
  • The stated completion time was clearly mentioned and almost always accurate.

A quick TL;DR of what we disliked as workers (this list is obviously longer):

  • Learning curve: making an account, learning the HIT worker culture, and best practices are not clearly defined. The interface was outdated and gave no clues about what to do. A tour of the website would have been really helpful.
  • Getting banned from tasks on failing qualifications.
    • No feedback mechanism for self-critique.
  • It is necessary to be bilingual, with English as one of the languages.
  • A worker might mark something right to the best of their ability while the requester finds it wrong; there is no clear arbitration.
  • About tasks:
    • A lot of personal information was requested.
    • Voting on contestants: is it ethical?
  • Grouping tasks by nationality: most of the tasks were restricted to users belonging to African or Asian countries only.

Experience the life of a Requester on Mechanical Turk

Reflect on your experience as a requester on Mechanical Turk. What did you like? What did you dislike? Also attach the CSV file generated when you download the HIT results. If you're from outside the USA and unable to access MTurk, you can try the MTurk requester sandbox, or CrowdFlower, Microworkers, or Clickworker.

Explore alternative crowd-labor markets

Compare and contrast the crowd-labor market you just explored (TaskRabbit/oDesk/GalaxyZoo) to Mechanical Turk.



MobileWorks

What do you like about the system / what are its strengths?

  • MobileWorks hits the sweet spot by targeting the segment of the Indian population where people do have phones, but the phones aren't that smart.
  • A report by IAMAI and KPMG projected that India will reach 236 million mobile internet users by 2016 and 314 million by 2017. With internet users growing from 200 million in 2013 to over 500 million by 2017, mobile internet in India is clearly on the upsurge.

What do you think can be improved about the system?

  • It could be applied one level up (the middle-income bracket) with tasks one level higher in complexity.
    • Smartphone users are increasing, so the tasks could become more complex.
    • An issue-based system (see: GitHub) could properly channel work.
  • Users could be incentivized with financial gains for completing a certain number of tasks in a day (example: a 10% bonus for completing 50+ tasks).
    • Some aspects of Uber's business model could be applied, such as surge pricing (time-bound tasks carry higher compensation) or bulk booking.
  • Currently, some users are given duplicate tasks (i.e., tasks that have already been given to another person); this duplication of effort could be reduced.
  • The concept of a game could be applied. Tasks could be interactive: for example, a user who wants to learn English could perform OCR through tasks framed as exercises.
  • Native-language support.
  • A social-sharing or reward system like Bonusly could indirectly provide motivation.
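The bonus idea above amounts to a simple payout rule. A minimal sketch, assuming a hypothetical `payout` helper: the $0.02 rate comes from the Clickworker HITs mentioned earlier, and the 50-task threshold and 10% bonus are the example's own numbers, not any platform's actual policy.

```python
def payout(base_rate: float, tasks_done: int,
           bonus_threshold: int = 50, bonus_pct: float = 0.10) -> float:
    """Per-task earnings, plus a percentage bonus once the worker
    clears the daily task threshold (hypothetical incentive scheme)."""
    earnings = base_rate * tasks_done
    if tasks_done >= bonus_threshold:
        earnings *= 1 + bonus_pct
    return round(earnings, 2)

# 60 tasks at $0.02 each earn $1.20, plus the 10% bonus -> $1.32
print(payout(0.02, 60))
# 40 tasks stay below the threshold -> plain $0.80
print(payout(0.02, 40))
```

A surge-priced, time-bound task would simply enter with a higher `base_rate`, so the same rule covers both ideas.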


Daemo / Boomerang

What do you like about the system / what are its strengths?

  • It not only discusses the mechanics of a crowd-work platform but actively works to solve the problems between requester and worker, which is critical for the success of any crowdsourcing platform.
  • Boomerang & prototyping: because of the above, such a platform can serve generic workers as well as workers with niche expertise.
    • Compared to the available marketplaces, Daemo looks well designed (it follows Material Design, which is on the rise right now).
      • For example: Medium vs. Blogger (Google), Quora vs. Yahoo Answers. These matches were arguably won by the team that kept design at the forefront.
  • Due to the cascaded release of triggers and probable requester batches, the requester gets a more direction- and vision-specific role.
    • Indirectly, we create subsets/relations of workers that can later be used to create heat maps.

We think the main strength of such a system is that it can be applied to systems like MobileWorks as well as to flash teams. The problems this paper seeks to solve are fundamental across all such platforms.

What do you think can be improved about the system?

  • If a worker keeps getting tasks from a requester who has given them positive feedback, the worker becomes acquainted with the type of work that company/requester is doing; this might create a privacy issue in the future.
    • If a worker capable of high-difficulty tasks (say, the start of a project) still does medium-difficulty tasks, wouldn't the average rating end up biased?
  • Easy and quick communication with the requester would improve response times, but since workers would belong to a particular section of society, it could lead to email spam. Workers might also become lazy and bombard the requester with questions.
  • The platform opens up to the real world, where all requesters and workers are on the same level. A requester could get a poor-quality worker without being aware of it and have to re-assign the work to someone else. To avoid this, the requester should be able to see a worker's reviews from other platforms like MTurk, if available. This could also help a good worker gain an edge when starting off on Daemo.
  • Money is the primary concern, yet it is not expressed:
    • The calculations (such as the minimum rating) nowhere use the amount of money offered.
  • Prototyping:
    • Boomerang is not applied when choosing the workers who join the prototyping phase.
    • Prototyping is about getting results and then iterating until they are right.
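One way the money concern above could be folded into the rating calculation is to weight each rating by the task's pay. A minimal sketch, assuming a hypothetical `pay_weighted_rating` helper over a history of `(rating, pay)` tuples; this is not Daemo's or Boomerang's actual algorithm:

```python
def pay_weighted_rating(history):
    """Average a worker's ratings, weighting each rating by the task's
    pay so that high-stakes tasks influence the score more than cheap
    ones. `history` is a list of (rating, pay_in_dollars) tuples."""
    total_pay = sum(pay for _, pay in history)
    if total_pay == 0:
        return 0.0  # no paid history yet
    return sum(rating * pay for rating, pay in history) / total_pay

# A 5-star $1.00 task outweighs a 2-star $0.10 task,
# so the score lands much closer to 5 than a plain average would.
print(pay_weighted_rating([(5, 1.00), (2, 0.10)]))
```

A minimum-rating threshold computed over this weighted score would then reflect how workers perform on the tasks that matter financially.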

Flash Teams

  • What do you like about the system / what are its strengths?

A combination of Foundry and flash teams is an incredible way to leverage expert users as crowd workers! The ability to reuse blocks over and over is a great way to reduce the time spent in Foundry. Also, pipelining tasks using output tags and input tags could help even unconventional tasks get completed. Instead of being restricted to only one particular type of task, this would open the platform to a larger variety of tasks.
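The pipelining idea, with blocks consuming and producing tagged values, can be sketched roughly as follows. The block names, tags, and the `run_pipeline` helper are invented for illustration; this is not Foundry's actual API:

```python
def run_pipeline(blocks, initial):
    """Each block declares the tags it consumes ('inputs') and the tag
    it produces ('output'); a block runs as soon as all of its input
    tags are available, so blocks chain into a pipeline."""
    available = dict(initial)
    pending = list(blocks)
    while pending:
        ready = [b for b in pending
                 if all(t in available for t in b["inputs"])]
        if not ready:
            raise RuntimeError("pipeline stalled: unmet input tags")
        for block in ready:
            args = {t: available[t] for t in block["inputs"]}
            available[block["output"]] = block["run"](args)
            pending.remove(block)
    return available

# Two hypothetical flash-team blocks chained by their tags.
blocks = [
    {"inputs": ["script"], "output": "storyboard",
     "run": lambda a: f"storyboard of {a['script']}"},
    {"inputs": ["storyboard"], "output": "animation",
     "run": lambda a: f"animation from {a['storyboard']}"},
]
result = run_pipeline(blocks, {"script": "draft v1"})
print(result["animation"])  # -> animation from storyboard of draft v1
```

Because blocks only couple through tags, an unconventional task slots in wherever its input tags are satisfied, which is what lets reused blocks cover a larger variety of tasks.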

  • What do you think can be improved about the system?

1. A diverse set of flash teams is likely to bring up a large set of coordination issues.
2. The quality of work needs to be evaluated and should comply with what the requester needs. If the requester isn't involved continuously in the process, they could be dissatisfied with the result, and the entire project would have to be worked on again.

Milestone Contributors

Slack usernames of all who helped create this wiki page submission: @varshine, @kr.prastut, @afreen