Winter Milestone 1 bdeloeste

From crowdresearch

Here is my submission for Winter 2016 Milestone 1.

Experience the life of a Worker on Mechanical Turk

For this milestone, I submitted the HITs using the worker sandbox. To get the most out of my experience as a Mechanical Turk worker, I attempted HITs that varied in task description and payment. I accepted HITs that required labeling images, verifying URLs, and categorizing relational data. After about 30 minutes, I had accrued $2.40 across 59 submitted HITs: 32 were approved (19 of them almost immediately) and 27 were pending.

I only completed HITs from the first few pages, and the majority of the tasks I accepted were straightforward to follow. I liked that most of the tasks were simple to perform and that the time allotted to complete them was quite generous: some HITs that took less than a minute each came with 60 minutes or even seven days to complete. I wasn't sure whether this was because I used the worker sandbox rather than the live platform, but it was convenient nonetheless. In addition, I enjoyed the quick HIT approvals from the requesters.

There were a few downsides in my worker experience. I encountered several HITs with ambiguous task descriptions and non-intuitive interfaces. For instance, one HIT required me to guess the object in an image hidden behind a gray box, choosing from a long list of potential objects. I was to keep guessing until I got the correct answer, but the list was so long that it took quite some time to land on the right one, and in some cases I was randomly selecting choices without giving my decision any thought. Another downside was that HITs weren't labeled by category. It would be nice if the HIT feed included a section describing the type of task each HIT was asking for. Lastly, I wanted to give feedback to the requester, but Mechanical Turk's interface makes the feedback comment box optional for requesters, and as a result I was not able to share my opinions.

Overall, I think Mechanical Turk definitely needs to improve its interface to make it easier for workers to find tasks they are interested in. While some tasks were generally easier than others, I had to do some digging before I finally accepted a task I felt confident I could do. It would also be useful if all HITs contained comment boxes that workers could fill with feedback to improve the quality of the task description and interface.

Experience the life of a Requester on Mechanical Turk

I used the Amazon Mechanical Turk Sandbox for Requesters. I found creating a project intuitive and simple and did not encounter any issues making a categorization HIT. I created a task that required workers to categorize a few YouTube videos as funny or not. Mechanical Turk prompted me to specify the fields pertaining to each category and a field for the instructions the workers would see. Next, I uploaded a .csv file of YouTube links, some of which I had categorized as humorous and others not. The humorous videos ranged from stand-up comedy to comical celebrity appearances on talk shows, and the remaining videos covered sports and food.
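For context on the upload step: Mechanical Turk's requester UI generates one HIT per data row of the uploaded .csv, substituting each column's value into the task template wherever a `${column_name}` placeholder appears. A minimal sketch of what such an input file looks like, assuming a hypothetical `video_url` column and placeholder URLs:

```python
import csv
import io

# Hypothetical input rows for a video-categorization HIT; the template
# would reference the column as ${video_url}. The URLs are placeholders.
rows = [
    {"video_url": "https://www.youtube.com/watch?v=example1"},
    {"video_url": "https://www.youtube.com/watch?v=example2"},
]

def build_input_csv(rows):
    """Serialize the rows into the header-plus-rows CSV layout the
    requester UI expects (one HIT will be created per data row)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["video_url"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(build_input_csv(rows))
```

With two data rows like these, publishing the project would create two HITs, each showing one video to the worker.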

Although the process is rudimentary, there is plenty of room for a requester to leave categories, instructions, and even data sets ambiguous for the workers. It isn't until after the task is published that any feedback is sent back to the requester, by which point several workers may already have completed the HIT poorly. Furthermore, I find it troubling that requesters alone set the payment per task: a task the requester deems simple and prices low may take workers much longer to complete, so the worker may not be fairly compensated.

Unfortunately, I did not receive any HIT completions, so I was not able to obtain a .csv file of the HIT results. Overall, the experience as a requester was okay. With respect to fairness between workers and requesters on Mechanical Turk, requesters definitely have more control in setting the standard for the platform.

Explore alternative crowd-labor markets

(Was not able to explore other crowdsourcing platforms)



MobileWorks is an excellent solution for providing employment to developing-world users through a simple, mobile-phone-based crowdsourcing platform. The system leverages India's mobile-centric demographics to tailor the platform to these workers. What was especially impressive was how the platform increased task throughput while still yielding accurate results. As stated in the paper, "the overall accuracy of the workers, without considering multiple entry error detection, was about 89%". Furthermore, task assignment followed a divide-and-conquer approach, decomposing tasks into smaller, simpler pieces for workers to complete. Simpler tasks enhanced worker productivity, and this, in tandem with the easy-to-use interface, produced a positive experience for workers. As a result, all users in the study said they would highly recommend the system to friends and family.
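The "multiple entry error detection" the paper mentions relies on routing the same microtask to several workers and reconciling their answers. One common way to do this reconciliation, sketched here as an assumption rather than MobileWorks' exact algorithm, is a strict majority vote over the redundant entries:

```python
from collections import Counter

def majority_answer(entries):
    """Return the answer that a strict majority of redundant worker
    entries agree on, or None when no answer clears that bar
    (signaling the task should be re-issued to more workers)."""
    if not entries:
        return None
    answer, count = Counter(entries).most_common(1)[0]
    return answer if count > len(entries) / 2 else None

# Three workers transcribe the same word; one makes a typo.
print(majority_answer(["cat", "cat", "cot"]))  # cat
print(majority_answer(["cat", "cot"]))         # None (tie, needs more entries)
```

Layering a check like this on top of the raw 89% per-worker accuracy is what lets the aggregated output be substantially more reliable than any single worker.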

The highlighted strengths of the platform are:

  • Simple tasks
  • Minimal interface
  • Improved task throughput while maintaining overall accuracy of workers
  • Use of historical accuracy to model future task payments

There are a few improvements I propose for MobileWorks. First, the pilot study surveyed only 10 workers from two locations; it would be interesting to observe the results when scaling the platform to a larger set of users in different locations, especially outside of India. Although 92% of Amazon Mechanical Turk (AMT) workers reside in the US and India, the remaining 8% (about 16,000 workers) come from other parts of the world and may live in developing areas, where the results may differ. Also, with the reduced cost of current PCs and the advent of technologies like Project Loon, which aims to extend internet access to more parts of the world, MobileWorks could improve its interface to support more complex tasks and even provide real-time responses.


Daemo's most prominent contribution is improving the relationship between workers and requesters by catching the problems inherent in other crowdsourcing platforms early on. Rather than amend negative effects downstream, Daemo aims to mitigate problems at an earlier stage in the pipeline, preventing the quality and experience of workers and requesters from spiraling out of control. This approach is one of Daemo's strengths compared to other crowdsourcing platforms.

Additionally, the Boomerang field-test results show that strong incentive alignment can be achieved, yielding higher-quality task results for requesters and a productivity increase for workers. From the prototype tasks, I like that requesters are given the chance to ensure that their tasks are clear and concise.

Although Daemo aims to alleviate the issues evident in other crowdsourcing platforms by restoring trust between worker and requester and removing ambiguous task instructions, I have a few suggestions that may further improve the system. A first improvement would be to implement a chat system, similar to Facebook's, so that when a worker and a requester are both online they can connect and the requester can provide insightful feedback when the worker gets stuck. This could be especially effective for prototype-task feedback. Additionally, it might be worthwhile to make it mandatory for workers and requesters to rate each other; if they know their feed is determined by the quality of their ratings, they are more likely to perform the ratings anyway.
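The core Boomerang idea behind that feed incentive is that a worker's own ratings of requesters shape what the worker sees next: tasks from highly rated requesters surface first. A minimal sketch of such rating-driven feed ordering, with the neutral default for unrated requesters being my assumption rather than Daemo's actual implementation:

```python
def rank_feed(tasks, my_ratings, default=3.0):
    """Order available tasks so that requesters this worker rated
    highly appear first; requesters the worker has not rated yet
    fall back to a neutral default score."""
    return sorted(
        tasks,
        key=lambda t: my_ratings.get(t["requester"], default),
        reverse=True,
    )

tasks = [
    {"id": 1, "requester": "alice"},
    {"id": 2, "requester": "bob"},
    {"id": 3, "requester": "carol"},
]
ratings = {"alice": 5.0, "bob": 1.0}  # carol is unrated -> default 3.0
print([t["id"] for t in rank_feed(tasks, ratings)])  # [1, 3, 2]
```

Under this scheme, an honest low rating directly pushes that requester's future tasks down the worker's own feed, which is exactly the incentive alignment the field test measured.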

Flash Teams

(Unable to complete summary)

Milestone Contributors