Winter Milestone 4 QuickStart

This submission details the need for, and possible solutions to, improving worker quality and requester satisfaction by understanding how requesters' task authorship affects the quality of results.

Introduction

Paid crowdsourcing platforms are increasingly being considered by low-income individuals (workers) as a potential means to improve their livelihood. One such recently trending platform is Amazon Mechanical Turk (AMT). The increasing popularity of this marketplace can be attributed to the availability of a growing number of human intelligence tasks (HITs) posted by requesters and open for completion by workers in exchange for a specified compensation per task (generally a monetary benefit).

Assessment in such marketplaces is currently carried out by the requesters themselves, who must approve or accept a worker's results before AMT transfers the compensation to that worker. Although this seems fair enough, the sheer size of the marketplace has led to growing concerns from workers whose work has been rejected and from requesters whose results are fraudulent. In addition, the current architecture of AMT increasingly prioritizes requesters over workers: requesters rate a worker's performance (the approval rating), which directly affects the task feed available to that worker and their ability to work on HITs with better compensation.

To better understand the causes and effects of these concerns, we performed a user study with some active and well-reputed workers, active requesters, and moderators (essentially workers who have taken on the responsibility of addressing worker-requester conflicts through strategic means). We learned about the significance of third-party tools that workers have been using to optimize their time by performing tasks posted by requesters who are well regarded in the community. These include Turkopticon, a tool for rating requesters against accepted criteria, and online forums such as Turker Nation, as well as interpersonal communities among workers, used to stay informed about high-paying tasks or to organize their daily schedules. An interesting report from requesters concerned the clarity needed when conveying instructions to workers in order to obtain quality results. Workers who took part in the user study, and several forums, also up-voted this as one of their primary concerns, along with others such as non-responsive requesters who, when contacted by workers to learn the reason for a rejection, would not respond.

From the task-publishing perspective, the current structure of AMT supports a small set of templates which requesters use to post tasks, alongside a few organizations that rely on personalized templates for specific tasks. Except for a few experienced requesters, most requesters seem to prepare their tasks with preconceptions about the quality and detail of instructions needed to complete the task. Experienced workers tend to first consult the requester to confirm their understanding of the task and only then commence the task at hand.

Considering that not all workers would follow such a strategy, for manifold reasons such as other commitments or schedule-driven preferences for detail, we chose to construct a mechanism to qualitatively improve requesters' instructions so that they better suit their likely workers, which would in turn improve the quality of the results for these specific tasks.

As part of this project we have introduced a new metric, "HIT assistance", in Daemo, a crowdsourcing marketplace, to better understand the relevance of instruction detail in task authorship versus worker quality. We do this by capturing the central idea of the current marketplace: workers with better reputations get to perform tasks with higher compensation. It must also be mentioned that one of the primary revelations of the user study was that workers tend to associate a sense of trust with a platform based on the performance metric built into the system. Considering this, we chose to keep the popular metric of requesters' approval ratings of workers, as in AMT, both for continuity and to compare the effectiveness of our model, which adds "HIT assistance" as a measurement metric when posting the task feed to workers. HIT assistance was structured and presented based on the Twitch crowdsourcing platform so as to provide a better means for workers to optimize their resources around their schedules, offering them the opportunity to earn supplemental income during their commute to work or during other idle moments of the day.
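To make the feed mechanics concrete, the sketch below shows one way a task feed could combine a requester-set approval-rating qualification with a HIT-assistance score. All field names, the 0-1 assistance scale, and the weighting rule are assumptions made purely for illustration; they are not Daemo's actual implementation.

    # Illustrative sketch only: field names, the 0-1 assistance scale, and the
    # weighting rule are assumptions for explanation, not Daemo's implementation.
    from dataclasses import dataclass

    @dataclass
    class Task:
        title: str
        compensation: float          # declared payment for the HIT, in dollars
        hit_assistance: float        # assumed 0.0-1.0 score of instruction help offered
        min_approval_rating: float   # requester's qualification threshold, 0-100

    def build_task_feed(tasks, worker_approval_rating, assistance_weight=0.5):
        """Keep tasks the worker qualifies for and order them so better-paying,
        better-assisted tasks come first (hypothetical scoring rule)."""
        eligible = [t for t in tasks if worker_approval_rating >= t.min_approval_rating]
        def score(t):
            # Blend raw compensation with assistance-weighted compensation.
            return ((1 - assistance_weight) * t.compensation
                    + assistance_weight * t.hit_assistance * t.compensation)
        return sorted(eligible, key=score, reverse=True)

    tasks = [
        Task("Image labelling", compensation=0.50, hit_assistance=0.9, min_approval_rating=95.0),
        Task("Survey", compensation=1.20, hit_assistance=0.2, min_approval_rating=98.0),
    ]
    for task in build_task_feed(tasks, worker_approval_rating=96.0):
        print(task.title)   # only "Image labelling": the worker lacks the Survey qualification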


We initially recruited 20 workers (50% of whom were well reputed) along with 5 moderators; this quickly expanded to 250 workers based on recommendations from our recruits. With input from online forums and other research projects, our worldwide consortium of workers, requesters, and researchers prepared several different varieties of HITs, which were partly sent out for consultation to the reputed workers, after which the requesters graded the effectiveness of that consultation. Later, the requesters published the tasks, which were then posted to workers' time feeds based on the requesters' qualification requirements for completing a task.

In addition, we introduced an "assist me" option to let workers seek help from co-workers who could assist on a particular task, in return for 20% of the compensation declared by the requester under all circumstances, and a 75% fork of the approval rating for that specific task from the primary worker if the results were accepted by the requester.
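As a rough illustration of that split, the sketch below computes the payment and approval-rating fork for a single assisted task. It assumes the assistant's 20% is deducted from the primary worker's payment and that rating credit is tracked per task; neither assumption is stated in the design above and both are hypothetical.

    # Minimal sketch of the "assist me" settlement described above. Assumes the
    # assistant's 20% comes out of the primary worker's payment and that approval-
    # rating credit is tracked per task; both assumptions are hypothetical.
    def assist_me_settlement(compensation, task_accepted,
                             assistant_pay_share=0.20, rating_fork_share=0.75):
        """Return (primary_pay, assistant_pay, primary_credit, assistant_credit)."""
        # The assistant's share of the declared compensation is paid under all circumstances.
        assistant_pay = assistant_pay_share * compensation
        primary_pay = compensation - assistant_pay if task_accepted else 0.0
        # Approval-rating credit for this task is forked only if the requester accepts.
        if task_accepted:
            assistant_credit = rating_fork_share
            primary_credit = 1.0 - rating_fork_share
        else:
            primary_credit = assistant_credit = 0.0
        return primary_pay, assistant_pay, primary_credit, assistant_credit

    # Example: a $1.00 HIT accepted by the requester.
    print(assist_me_settlement(1.00, task_accepted=True))   # (0.8, 0.2, 0.25, 0.75)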

We observed a very steep increase (an 80% rise) in the acceptance of work by requesters, and hence an increase in income for the workers. The workers also reported that the ability to work with colleagues improved how they balanced work with human relationships within the community.

The secondary workers (those who assisted their co-workers) earned 5-10% more than the primary workers and reported a rapid gain in approval rating through "quick tasks" or "tasks in favour" recommended by their workers' collective (workers, moderators, assistance seekers, etc.), thus offering them higher returns later on through the availability of high-paying tasks.

Preface

Hello All,

Firstly, you guys are awesome! Although I didn't get to interact with this talented team over the past week, I have followed the hangouts and read some of the many wonderful submissions (basically reviewing them while travelling to and from work!). They are truly amazing!

If you are wondering why I had to include a "preface" section, it's because this is my first submission. It took me some time to catch up with your pace, and I am really enjoying the refinement of the research ideas and the very structured research happening here since the beginning.

A big thumbs up to Prof. Michael, Rajan, the RAs, and everyone who has taken the initiative to organize this multifarious concentration of aspiring researchers, all of whom share the idea of solving problems that could improve the lives of millions of people (as Michael mentioned in one of the very first hangouts, creating a sustainable crowdsourcing platform capable of supporting a new generation of career opportunities).

Looking forward to working with each of you,

Milestone Contributors

@karthikpaga

References

@Team Despicables @RATH proposal @Duka @Team SneakyLittleHobbitses @Yone.Dayan @Team 1 EU @seko & @kamilamananova