Milestone 1 pixelpals


SUBMISSION OF MILESTONE 1

To experiment on various crowd-work platforms (Mechanical Turk and its alternatives) and understand the pros and cons of each.

To view the worker experience along with snapshots of the total money earned: [[1]]

To view and download the CSV files of HITs: [[2]]

Experience the life of a Worker on Mechanical Turk

Our experience as crowd workers was new and intriguing. We replaced Amazon Mechanical Turk with Microworkers, since we were not able to create a verified account on Amazon MTurk. The skill set required for jobs on Microworkers is minimal, as the tasks were very rudimentary. Some of the jobs included searching for a particular product on Amazon and taking a screenshot; installing and reviewing a mobile app; and commenting on blogs, YouTube, etc. There was no restriction on the time taken to complete a job, and for each job an approximate completion time was mentioned. The description of the job and what was required as proof of completion were clearly stated, which helped us greatly in getting a clear idea of a task before we accepted it.

Completing lightweight and diverse tasks was easy and fun. We liked the minimal user interface, which was easy to learn, so we could start the tasks quickly. For each available task, a 'Time to Rate' was mentioned: the maximum time within which the submitted task would be verified. Payment was made into the account after the task was reviewed, and we liked that there were no delays in reviewing the tasks we did. The amount of money paid was very meagre, but considering the skill set required for the tasks it was justifiable. The site also keeps track of each worker's success rate, which would motivate workers like us to keep a tab on their submission quality.

However, we felt that the platform could be more interesting and challenging if requesters posted some tasks that required specific skills, for instance designing a website or writing blogs/articles; this would give us more value for our time (in terms of the payment). Also, the instructions and requirements of a job were sometimes unclear until we did the actual job, so a sample task (such as on CrowdFlower) would help workers understand the deliverable proofs better and not lose out on accuracy because of unclear instructions. The number and variety of jobs at any given time was limited, so we did not have many options to diversify our jobs. Finally, there was no classification of workers on the basis of their experience on the platform (such as on CrowdFlower); as new workers we had the same job availability as any other, which further highlighted the absence of skilled jobs on the platform.

Total money earned by the team members on Microworkers = $1.07

Snapshots of total money earned: pdf

Experience the life of a Requester on Mechanical Turk

As requesters, we first tried to focus on choosing an appropriate crowdsourcing platform. Initially, Amazon Mechanical Turk declined our login credentials, as it only accepted sign-ups from US citizens. We therefore explored various non-AMT platforms such as CrowdFlower and Clickworker, and finally settled on the Microworkers crowdsourcing platform. We first explored the various tasks/jobs available on Microworkers and then decided on our task.

  • Task: Complete the survey on understanding the usability of CAPTCHA.
  • Total questions: 7
  • Time to complete: 3 min
  • Reward per assignment: $0.01
  • Minimum number of workers: 15

Challenges: Designing a simple and intuitive survey was a big challenge. We therefore first built a sample CAPTCHA survey, circulated it amongst our peers, and took feedback from them to understand the difficulty level of the survey questions. Out of 37 responses, 62% found the survey easy, 28% found it moderate, and 10% found it difficult to solve. On the basis of this feedback, we redesigned the survey to make it intuitive and simple. We then uploaded the task as a campaign (as a requester) on Microworkers, which took approximately 2 days to review and accept it. Finally, we received 15 passed HITs and 2 failed HITs within a day and a half.

  • CSV of HITs (click and download)

csv

Explore alternative crowd-labor markets

Galaxy Zoo

Brief: Galaxy Zoo is a platform for classifying imaged galaxies according to their shape and other features. Because computer programs fall short of the accuracy needed to recognise these patterns, human recognition is used instead, and the need to classify millions of galaxy images makes it a perfect crowd-market problem.

Comparison with Microworkers: On Microworkers, many of the tasks are based on data collection, such as collecting the names of the authors of particular articles. Galaxy Zoo also collects data, creating a database of classified images that can later be used as a ready, structured source by professional astronomers and scientists. Both platforms have tasks that require no specialised skill set and provide detailed instructions for the worker to follow. While Microworkers allows tasks of varied nature (such as searching for particular information on sites or commenting on YouTube/Twitter), Galaxy Zoo is dedicated solely to the classification of galaxy images. Thus, the final use of worker contributions varies by requester on Microworkers but is focused on astronomy research on Galaxy Zoo.

A big difference we noticed was the absence of remuneration for tasks on Galaxy Zoo. The only incentives for a worker are a willingness to contribute to research in this field or a keen interest in astronomy, whereas on Microworkers even tasks finished within 3 minutes paid $0.10. For an actual crowd-labour worker, Microworkers provides more opportunities, more varied problems, and better payment. Since workers are paid, Microworkers has a system in place for verifying completed tasks (through the different kinds of proof specified in the problem) before payment is made; Galaxy Zoo needs no such system, as it exists simply for data collection. To ensure the accuracy of the data collected, Microworkers sets an accuracy benchmark that a worker must meet in order to continue with more tasks, where accuracy is decided by the requester's/employer's review. On Galaxy Zoo, quality of results is ensured by comparing the answers of multiple contributors to the same problem.

Readings

MobileWorks

Global crowdsourcing offers new and promising employment opportunities to low-income workers in developing countries. However, its impact so far has been very limited, because poor communities usually lack access to computers and the Internet, and existing crowdsourcing markets are often inaccessible to workers living at the bottom of the economic pyramid. This is where MobileWorks comes into the picture. MobileWorks is a California-based outsourcing company, founded in 2011, that brands itself as "socially responsible crowdsourcing". It is a mobile-phone-based crowdsourcing platform intended to provide employment through human optical character recognition (OCR) tasks that can be completed on low-end mobile phones through a web browser.

  • Process:

Because of the limited screen size on mobile phones, documents have to be chopped into small pieces of one or two words. Different workers digitize the pieces using the MobileWorks web application and submit them to the server; the smaller pieces are then put back together to create a digitized copy of the document (a minimal sketch of this flow follows below). Some examples of tasks and projects MobileWorks Premier has been used for include software testing, document editing and preparation, online research, personal-assistant-type tasks, and more.
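
The sketch below simulates the split/digitize/reassemble flow described above. All names here (split_document, simulated_worker_transcribe, reassemble) are our own illustrative inventions, and the worker is simulated; it is a sketch of the idea, not MobileWorks' actual interface.

```python
# Toy simulation of the MobileWorks OCR flow: chop a document into
# one-or-two-word pieces, let a (simulated) worker transcribe each piece,
# and stitch the transcriptions back together in their original order.

def split_document(words, piece_size=2):
    """Chop a word list into pieces small enough for a phone screen."""
    return [words[i:i + piece_size] for i in range(0, len(words), piece_size)]

def simulated_worker_transcribe(piece):
    """Stand-in for a worker reading an image piece and typing its text."""
    return " ".join(piece)  # a real worker would type what they see

def reassemble(indexed_results):
    """Rejoin per-piece transcriptions using their original positions."""
    return " ".join(text for _, text in sorted(indexed_results))

document = "the quick brown fox jumps over the lazy dog".split()
pieces = split_document(document)
# each piece would go to a different worker; one function plays them all here
results = [(i, simulated_worker_transcribe(p)) for i, p in enumerate(pieces)]
print(reassemble(results))  # -> the quick brown fox jumps over the lazy dog
```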

  • Positive

To address the limited screen resolution available on low-end phones, MobileWorks divides documents into many small pieces and sends each piece to a different worker. A study found that workers using MobileWorks average 120 tasks per hour at an accuracy rate of 99% using a multiple-entry solution, which is very high in comparison to other similar platforms like mClerk. Users also had a positive experience with MobileWorks: all study participants would recommend it to friends and family. (A sketch of a multiple-entry check follows below.)
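
As a rough illustration of how a multiple-entry check can work, the sketch below accepts a transcription only when enough workers agree on it. Majority voting with a minimum-agreement threshold is our assumption about how agreement is resolved; the paper's exact rule may differ.

```python
from collections import Counter

def resolve(entries, min_agreement=2):
    """Accept the most common transcription if enough workers agree.

    entries: transcriptions of the same piece from different workers.
    Returns the winning text, or None if the piece should be re-posted.
    """
    answer, votes = Counter(entries).most_common(1)[0]
    return answer if votes >= min_agreement else None

print(resolve(["sonnet", "sonnet", "bonnet"]))  # -> sonnet
print(resolve(["sonnet", "bonnet"]))            # -> None (no agreement yet)
```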

  • Negative

The one negative aspect relates to payments. MobileWorks does not pay workers on a day-to-day basis, and there is a minimum payout limit of $1, which can be a deterrent since reaching it requires quite a lot of tasks initially. There is also no information up front about the mode of payment, which leaves users wondering. The minimum limit can be a particular issue in countries where the dollar is valued very highly compared to the native currency.

mClerk

  • We basically like that mClerk gives less-skilled people a chance to earn from crowdsourcing, which is beneficial for developing regions. The technology and simplicity are combined in a nice way, for example using SMS to send graphical and textual messages, which makes it highly usable. The authors explain the different factors in their experiment with mClerk. The biggest advantage is certainly that it is a cheaper option for digitising local languages, and it looked highly scalable, with many people joining and referring others.
  • One thing to improve may be the mode of payment. Also, not everyone knows English (even broken English), so the way mClerk accepts input seems difficult for someone with no knowledge of the language. The accuracy rate of 90.1%, as the authors themselves state, is quite low compared to market standards. Results also depend too heavily on the mobile networks: they give the example of Airtel dropping messages, which halted their progress for 3 days, and during peak hours results might also be delayed. Unequal weightages could be given to different users to increase the accuracy level, as in the sketch after this list.
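
A minimal sketch of that unequal-weighting idea follows: when transcriptions of the same word disagree, workers with a better track record count for more. The weights and the aggregation rule are our own assumptions, not part of mClerk.

```python
from collections import defaultdict

def weighted_vote(entries):
    """Pick the transcription with the highest total worker weight.

    entries: list of (transcription, worker_weight) pairs for one word.
    """
    scores = defaultdict(float)
    for text, weight in entries:
        scores[text] += weight
    return max(scores, key=scores.get)

# one veteran worker (weight 2.0) outvotes two newcomers (0.8 each)
print(weighted_vote([("गांव", 2.0), ("गाव", 0.8), ("गाव", 0.8)]))  # -> गांव
```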

Flash Teams

Flash Teams is a great framework for prototyping, both low-fidelity and high-fidelity, with crowd experts. Collaborative work is organized through interactive, linked units called blocks. Flash teams offer a very elastic mode of performing tasks, as more people can be hired according to a task's needs. To enable flash teams, Foundry was presented as an end-user authoring platform and runtime manager; it helps users author modular tasks and manage handoffs of intermediate work. It provides the facility to build and understand the pipeline of project flow, consisting of task initialization, branching of tasks, and the final results of task completion (a toy version of such a pipeline is sketched below). Though it provides flexibility and scalability at large scale, evaluation sessions should also be conducted regularly so that there is a record of each member and the task they are performing.
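
To make the block-and-handoff pipeline idea concrete, here is a toy sketch under our own assumptions: blocks are modular tasks and edges are handoffs of intermediate work. The Block class and the traversal are illustrative only, not Foundry's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """A modular task in a flash-team pipeline."""
    name: str
    role: str                                      # e.g. "designer"
    handoffs: list = field(default_factory=list)   # downstream blocks

    def then(self, block):
        """Hand this block's intermediate work off to another block."""
        self.handoffs.append(block)
        return block

def walk(block, depth=0):
    """Print the pipeline from initialization through to final results."""
    print("  " * depth + f"{block.name} ({block.role})")
    for nxt in block.handoffs:
        walk(nxt, depth + 1)

design = Block("Sketch UI", "designer")
build = design.then(Block("Implement prototype", "developer"))
design.then(Block("Write copy", "writer"))    # branching of tasks
build.then(Block("User test", "researcher"))  # final results
walk(design)
```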