Winter Milestone 1 Ganler

From crowdresearch
Revision as of 20:06, 17 January 2016 by Prithvirajramakrishnaraja (Talk | contribs) (Flash Teams)


Team Ganler: prithvi.raj

Experience the life of a Worker on CrowdFlower

Reflect on your experience as a worker on Mechanical Turk. What did you like? What did you dislike? If you're from outside the USA and unable to access MTurk, try the worker sandbox, or other sites like CrowdFlower, Microworkers, or Clickworker.

Overall View: The first point of clarity for a user visiting a crowdsourcing website is the differentiation between workers and requesters; it gives a clear picture of the site. Keeping the design heuristics in mind, I went through Amazon Mechanical Turk, CrowdFlower, Upwork (previously oDesk), and Microworkers. Some sites differentiate the two categories without naming them directly: CrowdFlower, for example, initially identifies workers with a link reading 'here for a task?', which is understandable as being about a worker, but there is no paired option asking 'are you here to give a task?'. Without its pair, the option is confusing.

Platform: CrowdFlower


Likes:

1. Good instructions after logging into CrowdFlower as a worker.

2. The interface is good as a worker viewing the list of available jobs (the pop-up instructions played an important role in understanding the interface).

3. The filter option on each column heading is handy for categorizing the work according to the worker's convenience.

4. The icons are also colored and easy to tell apart.


Dislikes:

1. It is difficult to find a job as a new worker. There are no tasks available for newcomers; the tasks that are listed require some experience and an attached rating.

2. The UI for worker registration could be consistent with the UI of CrowdFlower's main home page.

3. After I logged in, there were 22 jobs available in general but none available to me, so all the tasks were dimmed out and I couldn't pursue them.

4. The 'previous' and 'next' buttons at the bottom should be dimmed out when they are not needed, since all the available tasks fit on one page.

Experience the life of a Requester on Crowdflower

Reflect on your experience as a requester on Mechanical Turk. What did you like? What did you dislike? Also attach the CSV file generated when you download the HIT results. If you're from outside the USA and unable to access MTurk, you can try the MTurk requester sandbox, or CrowdFlower, Microworkers, or Clickworker.

Likes: 1. The website gives a good overview of the process of creating a task.

2. It also provides efficient templates for creating the task from which the requester can choose the one he/she likes.


Dislikes: 1. The links to proceed to the next page are confusing; it is not clearly defined when they work and when they do not.

2. The form-filling pattern was very confusing and kept going in loops while adding questions.

3. There was a minimum requirement of four test questions per task for better quality, which was not mentioned initially while uploading the CSV file.

Explore alternative crowd-labor markets

Compare and contrast the crowd-labor market you just explored (TaskRabbit/oDesk/GalaxyZoo) to Mechanical Turk.



  • What do you like about the system / what are its strengths?

1. Breaking tasks into micro-tasks makes the work accessible to less-educated people and improves the efficiency of the system.

2. Taking the market to the huge mobile-owning audience greatly expanded the system's reach.

3. The efficient model of presenting a task immediately after login lets a user on a small mobile screen focus on the job rather than on searching for one.

4. Good standardization of pay at 20-25 Rs. an hour is a motivator for workers. Taking the average number of tasks a worker completes in an hour and deriving an average pay per task builds trust in the platform: it ensures the right amount is paid per hour, something workers at this level would otherwise have to verify themselves.

5. The motivating factor of a rating affecting their further chances of getting work also makes sure workers aim for correct results.
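The pay-per-task scheme in point 4 above can be sketched as a small calculation. This is a hypothetical illustration of the idea, not the platform's actual formula, and the function name and parameters are my own:

```python
def pay_per_task(hourly_rate_rs, avg_tasks_per_hour):
    """Derive a per-task payment from a standardized hourly wage.

    hourly_rate_rs: the target hourly pay (e.g. 20-25 Rs.).
    avg_tasks_per_hour: average number of tasks a worker finishes per hour.
    """
    if avg_tasks_per_hour <= 0:
        raise ValueError("average tasks per hour must be positive")
    return hourly_rate_rs / avg_tasks_per_hour

# If workers average 50 tasks an hour and the target wage is 25 Rs./hour,
# each task should pay 0.5 Rs.
print(pay_per_task(25, 50))  # 0.5
```

Paying per task while anchoring the rate to an hourly average is what lets the worker trust the effective wage without doing the arithmetic themselves.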

  • What do you think can be improved about the system?

1. Though the pay for each task is known before the task is complete, an option to view the worker's overall earnings per day and earnings overall would be something the worker could rely on. This reduces the worker's cognitive load in performing calculations.

2. Round figures such as 25, 30, or 35 Rs. are strong motivators for workers (this is based on my personal experience so far). A notification indicating that the worker can reach a round-figure total would motivate them to do a few more tasks. For example, when a worker has earned 22.8 Rs., being told they will reach 25 Rs. within a few tasks will motivate them to work further. This information needs to be presented subtly, making sure we do not push the worker to overwork and risk a drop in the quality of work.

3. Language-based tasks would be a huge hit in a country like India, where many people are educated in their local language rather than in international or neighboring languages.
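The round-figure notification in point 2 above reduces to a little arithmetic: find the next multiple of 5 above the current earnings, then count how many tasks cover the gap. The following is a minimal sketch of that idea; the function names, the step of 5 Rs., and the per-task pay are all my own assumptions, not part of any existing platform:

```python
import math

def next_round_figure(earned_rs, step=5):
    """Return the next round-figure milestone above the current earnings.

    Round figures are multiples of `step`, e.g. 25, 30, 35 for step=5.
    """
    milestone = math.ceil(earned_rs / step) * step
    # If the worker is exactly on a milestone, point to the next one.
    if milestone == earned_rs:
        milestone += step
    return milestone

def tasks_to_milestone(earned_rs, pay_per_task_rs, step=5):
    """How many more tasks until the worker reaches the next round figure."""
    remaining = next_round_figure(earned_rs, step) - earned_rs
    return math.ceil(remaining / pay_per_task_rs)

# The example from the text: a worker at 22.8 Rs., earning 0.5 Rs. per task,
# needs 5 more tasks to reach the 25 Rs. round figure.
print(next_round_figure(22.8))        # 25
print(tasks_to_milestone(22.8, 0.5))  # 5
```

Rounding the task count up with `math.ceil` keeps the promise honest: the notification never claims the milestone will arrive earlier than it actually can.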


  • What do you like about the system / what are its strengths?

1. Boomerang's rating methodology is very good: it makes the worker rate with their future comfort in mind, since the chance of working with the same person again is decided by the rating they give.

2. Prototype tasks help create a good understanding between the requester and the worker: the worker can confidently execute their idea, and the requester can wait for the outcome in peace, knowing the worker has understood the task.

3. Incentivizing more accurate ratings is again a good motivator for raters, and it helps new workers believe in the system.

4. The multiple scenarios tested, which demonstrate prototype tasks working in practice, are good evidence for this way of designing tasks better.

5. The hints given to understand a task, which I mentioned as a deciding and motivating factor on Amazon Mechanical Turk and other crowd platforms, are addressed in the prototyping phase: as depicted in Figure 7 of the Daemo paper, better-described tasks are more likely to be taken up by workers.

6. Though the testing has been done with a limited number of users, further scenarios can be covered with similarly small samples of representatives from different groups (considering Jakob Nielsen, 2000, "Why You Only Need to Test with 5 Users").

  • What do you think can be improved about the system?

1. A newly joined worker, even if they are a good worker, still has to wait until their ratings improve before they can access good tasks.

2. Requesters who are not yet rated could be handled in a better way than the average-rating system.

3. There are many other cases where the worker might not be interested in rating the requester, or the requester might not be able to rate the huge number of workers on a task; this dilutes the rating values as the numbers increase.

4. During the prototyping phase, some tasks might require the requester to put in a lot of effort to address the workers' confusions. This remains a time-consuming factor and extra effort for the requester, even though they might receive good-quality work later.

=== Flash Teams ===

  • What do you like about the system / what are its strengths?

1. A good platform for getting important tasks performed by dependable people who are already experienced.

2. The fact that flash team members can bring in team members from other categories and manage the work through to completion gives them a lot of freedom and places trust in the workers.

3. It is a very useful way for a newcomer requester with a good idea to get highly technical tasks done.

4. The requester also benefits from the time the flash team saves in completing the task, and can focus better on other tasks.

  • What do you think can be improved about the system?

1. Managing dynamically hired teams can itself become a difficult task to handle.

2. Exactly transferring the thought process behind the previous team's work to the team about to take over is a challenge, as it depends on the previous team's ability to convey the information. The requester might not be technically sound enough to ensure the new team understands the existing work.

Milestone Contributors

Slack usernames of contributors: prithvi.raj