WinterMilestone 1 Lightsaber

From crowdresearch
Revision as of 18:00, 16 January 2016 by Vijaymahanteshsiddappamurganoor (Talk | contribs) (Experience the life of a Worker on Mechanical Turk)


Experience the life of a Worker on Mechanical Turk

What do you like about the system / what are its strengths?

  • The MTurk system is one of many crowdsourcing platforms that create an alternative income source for workers of all skill levels. Anyone with basic internet access and basic computer skills can earn money.
  • Trust plays a big role between workers and requesters, and there are [link:link mturk forum] that can help you choose the best-paying tasks.

What do you think can be improved about the system?

  • Manually entering data into a web form can become boring after many repetitions, so offering options to mix it up with other HIT tasks might be an interesting idea to explore.
  • Building systems and tasks that not only collect data but also coach workers to become skillful at new tasks and complete them faster and better. For example, in any web-crawling task, the worker has to open one tab to find the data and then enter it in another tab, which wastes time.
  • There is no minimum wage set by the Amazon Mechanical Turk system. Following state and national minimum-wage laws would be good for both workers and requesters; we think an Uber-style dynamic pricing model would be even better.
  • In the current system, if the result is not adequate, the job is rejected and the requester is not required to pay. A model that allows partial pay would be encouraging to workers.
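A minimal sketch of how such partial pay could work; this is our own hypothetical model, not an MTurk feature. The reward is prorated by the fraction of answers the requester accepts, using integer cents to avoid floating-point rounding.

```python
def partial_pay_cents(reward_cents: int, accepted: int, total: int) -> int:
    """Prorate a HIT reward by the fraction of accepted answers.

    Hypothetical pricing model (not part of MTurk): a worker who gets
    7 of 10 fields right earns 70% of the reward instead of a flat
    rejection with zero pay.
    """
    if total <= 0:
        raise ValueError("total must be positive")
    accepted = max(0, min(accepted, total))
    return reward_cents * accepted // total

# e.g. a $0.50 HIT with 7 of 10 usable answers pays 35 cents
print(partial_pay_cents(50, 7, 10))  # prints 35
```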

Experience the life of a Requester on Mechanical Turk

We posted a HIT asking workers to scrape university websites to collect professor names and their positions. Over a week, we were awed by the responses, which will be discussed in the next two sections.
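A data-collection HIT like this can also be defined programmatically instead of through the web UI, by wrapping an HTML form in MTurk's HTMLQuestion XML envelope (2011-11-11 schema). This is a sketch only: the `professor_name` and `position` fields are illustrative, not our exact HIT.

```python
def html_question(form_html: str, frame_height: int = 450) -> str:
    """Wrap an HTML form in MTurk's HTMLQuestion XML envelope.

    Uses the 2011-11-11 HTMLQuestion schema; the form passed in is
    shown to the worker inside an iframe of the given height.
    """
    schema = ("http://mechanicalturk.amazonaws.com/"
              "AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd")
    return (
        f'<HTMLQuestion xmlns="{schema}">'
        f"<HTMLContent><![CDATA[{form_html}]]></HTMLContent>"
        f"<FrameHeight>{frame_height}</FrameHeight>"
        "</HTMLQuestion>"
    )

# Illustrative form fields, not the exact ones from our HIT.
form = """
<form>
  <p>Visit the university page and record each professor.</p>
  <input name="professor_name" placeholder="Name">
  <input name="position" placeholder="Position">
</form>
"""
question_xml = html_question(form)
```

The resulting `question_xml` string is what the MTurk API expects as the `Question` argument when creating a HIT.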

What do you like about the system / what are its strengths?

  • Amazon Mechanical Turk is a valuable data-collection tool. It has an easy interface for starting a HIT in different categories such as image tagging, surveys, transcription, etc.
  • The example templates really help the requester get started with MTurk. With little to no knowledge of HTML and JavaScript, one can create forms just by dragging and dropping form elements.
  • All responses are recorded and returned to the requester in a spreadsheet (CSV) format, which is easy to evaluate.
  • Options such as setting the number of assignments and the time per HIT are useful for the requester to measure quality and filter for good data.
  • Requiring Masters qualifications helps one monitor HIT tasks and get quality responses.
  • The requester has options to build qualification tasks for the HIT, which is useful for selecting workers.
  • Getting an upfront quote on the price definitely helps the requester figure out the budget.
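Evaluating the returned spreadsheet can be a few lines of code. This sketch assumes the column-naming convention of MTurk batch-results files (`Answer.<field>`); the sample rows are made up for illustration.

```python
import csv
import io

# Made-up sample in the batch-results layout (HITId, WorkerId,
# AssignmentStatus, plus one Answer.* column per form field).
sample = io.StringIO(
    "HITId,WorkerId,AssignmentStatus,Answer.professor_name,Answer.position\n"
    "H1,W1,Submitted,Ada Lovelace,Professor\n"
    "H1,W2,Submitted,,Lecturer\n"
)

rows = list(csv.DictReader(sample))

# Keep only rows where every Answer.* field is filled in.
complete = [
    r for r in rows
    if all(v.strip() for k, v in r.items() if k.startswith("Answer."))
]
print(len(complete))  # prints 1: the second row has an empty name
```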

What do you think can be improved about the system?

  • For a beginner, optimizing a HIT is hard, with a handful of options like time, number of assignments per HIT, number of Masters, etc. Suggestions for setting these values would be helpful.
  • The rate at which workers take the task drops drastically over time (because workers are presented with other, more exciting HITs). One has to continuously monitor the task and keep adjusting the pay rate.
  • Guide requesters to design tasks that are fun instead of plain old forms, and to break tasks into very small chunks. E.g., instead of asking workers to transcribe a 45-minute video, split it into 2-minute or even 1-minute segments.
  • Almost all tasks are designed for monetary compensation. Some studies have shown that worker retention is highly correlated with enjoyment and self-fulfillment, which can be developed through reward mechanisms.
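The chunking suggestion above can be sketched in a few lines: compute the (start, end) time spans, one per HIT, so a 45-minute video with 2-minute chunks becomes 23 small transcription HITs (22 full chunks plus a 60-second remainder).

```python
def chunk_spans(total_seconds: int, chunk_seconds: int):
    """Split a long recording into (start, end) second spans, one per HIT."""
    spans = []
    start = 0
    while start < total_seconds:
        end = min(start + chunk_seconds, total_seconds)
        spans.append((start, end))
        start = end
    return spans

spans = chunk_spans(45 * 60, 2 * 60)
print(len(spans), spans[-1])  # prints: 23 (2640, 2700)
```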

HIT results from the data collection: File:Mturklightsabre.csv.

Explore alternative crowd-labor markets

Compare and contrast the crowd-labor market you just explored (TaskRabbit/oDesk/GalaxyZoo) to Mechanical Turk.



What do you like about the system / what are its strengths?

It tailors to local users. Based on the paper, mobile-phone penetration is relatively high (about 50%) and the cost of mobile internet is very low, so it is much easier to attract new users and distribute micro-tasks through mobile.

The information is broken into pieces. Users might have different screen sizes depending on the device they use. MobileWorks breaks the information into small pieces for users to process, then puts the results back together after they complete the task. This makes the user experience better because MobileWorks does not have to worry about whether users can view the information properly on their screens.

It is fairly accessible. Since users can complete all tasks on their phones, it is very easy for them to access the system anytime they want, which goes well with the mobile trend.
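The split-and-reassemble idea can be sketched as follows. The character budget and the join step are our own simplification of what the paper describes, not MobileWorks' actual algorithm.

```python
def split_for_screen(text: str, max_chars: int):
    """Break a long passage into word-preserving pieces small enough
    for a phone screen (max_chars is an assumed screen budget)."""
    pieces, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) > max_chars and current:
            pieces.append(current)
            current = word
        else:
            current = candidate
    if current:
        pieces.append(current)
    return pieces

def reassemble(pieces):
    """Recombine completed pieces, as the platform does server-side."""
    return " ".join(pieces)

doc = "MobileWorks distributes micro-tasks to phones with small screens"
pieces = split_for_screen(doc, 24)
assert reassemble(pieces) == doc  # the round trip loses nothing
```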

What do you think can be improved about the system?

First of all, we think the paper could have been more comprehensive. For example, when the authors tried to demonstrate quality and efficiency, it was hard to judge whether they reached high efficiency because there was not much information for readers to compare against.

The system could further improve the way it sends and retrieves information. At the moment, the system breaks the information into pieces and sends them to users one by one. While this might avoid loss of information, it is still not the most effective way to process it.

Collecting data on user behavior could be one way to solve this problem. If the system could detect what kind of device each user has and aggregate that over a larger sample, it would be possible to understand which devices most users own and make information processing more seamless. Optimizing the way the system sends information might also help users save phone battery as well as data usage.


What do you like about the system / what are its strengths?

The problems the system is trying to solve are among the critical issues in the talent crowdsourcing field.

The system helps requesters and workers stay on the same page during a project. It is common for requesters to find workers' deliverables significantly different from what they expected. By breaking the project into iterations, workers and requesters can eliminate miscommunication and receive feedback on the fly, lowering cost and risk and completing the project more effectively.

The more comprehensive rating system allows workers and requesters to understand the other side better and to have clearer expectations of what working with each other will be like.

What do you think can be improved about the system?

Besides the three limitations mentioned in the paper, we have two more suggestions regarding how to improve the system.

The system could be further improved by addressing the payment problem between workers and requesters. Even though the authors indicated that Daemo focuses on improving the accuracy of reputation in the system, we still find it necessary for the system to take payment into consideration and find a way to improve it, because payment is also one of the factors workers weigh when they rate requesters.

While Daemo’s rating system is more comprehensive, it could still lead to bias, since workers provide only a single rating of the requester after completing the task. Furthermore, each individual might have different preferences regarding complexity, payment, task length, etc., so some reviews or ratings might not be as useful to other workers depending on the situation.

Flash Teams

  • What do you like about the system / what are its strengths?
  • What do you think can be improved about the system?

Research Engineer (Test Flight)

After installing the necessary dependencies, we could launch the application locally. Here is the screenshot:


Milestone Contributors

  • @angelfu
  • @vijaym123