Milestone 6 Padawans

Platform for Statistical View Tool and Complaint Forum

Abstract

This paper presents a detailed description of a crowdsourcing internet marketplace that enables requesters to use human intelligence to complete work that computers are unable to do. Our claim is that our platform offers features that similar platforms currently lack, and that its foundation addresses problems of trust and power. The paper describes the foundational ideas and features of the proposed platform in detail.

Motivation

The major issue faced by requesters on crowdsourcing platforms is the quality of the work submitted by workers. It is very difficult for a requester to go through all the submissions and select one of them; doing so increases not only the effort but also the time the requester has to devote.

Workers have very little say in the whole system. They are unable to voice their views on issues such as low or insufficient wages and delayed payments. There is no platform that brings together communities of workers and requesters who face similar problems or share similar interests.

Insight

Our aim is to reduce the effort put in by requesters by providing a ‘statistical view’ of all the submissions received for a particular task. Submissions are segregated depending on the type of task and the kind of output expected. Consider the following example: Requester R wants C code that produces the Fibonacci series and receives 100 submissions for it. Instead of going through each one of them, he chooses to use the Statistical View (SV) tool. After he provides all the required details (see the next section for more on this), SV gives the result shown in the figure below.

Using the detailed report and the bar graph provided, the requester’s work is simplified to a great degree. He or she can now judge the submissions by looking at the graphs and know instantly whether a submission is relevant.

Let us look at another example: a requester wants an essay of at most 250 words and mentions a few keywords that he expects to appear in it. After the details are specified to the SV tool, it produces a line graph of the number of workers against the number of keywords used by each worker. Further details about a worker can be viewed by clicking on the node on the graph that represents that worker.
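
As an illustration only, the short sketch below shows how the SV tool might tally, for each worker, how many of the requester’s keywords appear in an essay; these per-worker counts are the data behind the line graph just described. The function and variable names (keyword_counts, submissions, keywords) are our own assumptions, not part of the platform.

 # Hypothetical sketch: counting how many of the requester's keywords
 # each submitted essay contains, to feed the SV line graph.
 from typing import Dict, List

 def keyword_counts(submissions: Dict[str, str], keywords: List[str]) -> Dict[str, int]:
     """For each worker, count how many of the requester's keywords appear in the essay."""
     counts = {}
     for worker_id, essay in submissions.items():
         text = essay.lower()
         counts[worker_id] = sum(1 for kw in keywords if kw.lower() in text)
     return counts

 # Example with made-up data:
 essays = {"worker_1": "Irrigation shapes agriculture in dry regions ...",
           "worker_2": "Climate change is reshaping agriculture ..."}
 print(keyword_counts(essays, ["climate", "agriculture", "irrigation"]))
 # {'worker_1': 2, 'worker_2': 2}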

To address workers’ problems, we have a complaint forum that lets workers post about the issues they face. Other workers can upvote or downvote each post. If a post gains a minimum number of upvotes, it is taken up as an immediate concern and the requesters are notified. If the issue concerns a particular requester, that requester has to pay a penalty. This gives workers the power to evaluate requesters. If, instead, a post receives noticeably more downvotes than upvotes, the posting worker’s reputation is affected negatively.


Figure: Statistical View

System

Complaint Forum

Our platform shall be accompanied by an official requester-worker forum, and every requester and worker shall be registered to the forum automatically. The main purpose of the forum is to hold complaints from workers. If a worker is not satisfied with a task description, or feels that his work was rejected only because he misunderstood the task, he can lodge a complaint on the forum, providing all the details of the issue along with the requester's username. Once a complaint has been posted, other members of the forum rate it through a like/dislike button; they can also comment and provide their opinion. Once the complaint gains a certain number of likes, the requester has to pay a penalty to the worker. To prevent abuse of the forum, only requesters and workers with a certain badge rating can vote. The forum might also be used for other purposes, such as forming communities of workers with similar interests or requesters with similar task requirements.
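
These rules can be summarised in a small sketch. Everything in it (the Complaint class, UPVOTE_THRESHOLD, MIN_VOTER_BADGE and their values) is an illustrative assumption rather than part of an existing implementation.

 # Hypothetical sketch of the complaint-forum rules described above.
 from dataclasses import dataclass

 UPVOTE_THRESHOLD = 50   # assumed number of likes before a penalty is triggered
 MIN_VOTER_BADGE = 2     # assumed badge level required to vote

 @dataclass
 class Complaint:
     worker_id: str
     requester_id: str
     details: str
     likes: int = 0
     dislikes: int = 0

     def vote(self, voter_badge_level: int, like: bool) -> None:
         # Badge-gated voting: low-reputation accounts cannot influence the outcome.
         if voter_badge_level < MIN_VOTER_BADGE:
             return
         if like:
             self.likes += 1
         else:
             self.dislikes += 1

     def penalty_due(self) -> bool:
         # Once the complaint gains enough likes, the requester owes a penalty to the worker.
         return self.likes >= UPVOTE_THRESHOLD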


Statistical View tool

After designing the task, a requester shall be asked either to design test cases for it or to proceed without them. If the requester chooses to design test cases, he first has to choose the category of his work from a drop-down menu. He can then design his test case, which may involve specifying the desired output, the number of lines in the code, certain expected keywords, etc. Based on the test case, similar answers received from the workers shall be categorized, and further details shall be provided, such as a graphical representation of how many workers provided the same answer. As a result, the requester does not have to check each response: since the distinct answers are highlighted, he only has to check each variety of answer once. Once he accepts a particular answer, all the workers who gave that same answer receive payment.
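
As a sketch of the categorisation step only (the function categorize and its arguments are assumptions made for illustration), the snippet below groups identical answers, counts how many workers gave each one, and identifies the workers whose answer matches the requester’s desired output.

 # Hypothetical sketch: group identical answers and count the workers behind each,
 # which is the data behind the "how many workers provided the same answer" graph.
 from collections import Counter
 from typing import Dict, List, Tuple

 def categorize(submissions: Dict[str, str], expected_output: str) -> Tuple[Counter, List[str]]:
     groups = Counter(answer.strip() for answer in submissions.values())
     matching = [w for w, a in submissions.items() if a.strip() == expected_output.strip()]
     return groups, matching

 subs = {"w1": "0 1 1 2 3 5 8", "w2": "0 1 1 2 3 5 8", "w3": "1 1 2 3 5 8 13"}
 groups, matching = categorize(subs, expected_output="0 1 1 2 3 5 8")
 # groups   -> Counter({'0 1 1 2 3 5 8': 2, '1 1 2 3 5 8 13': 1})
 # matching -> ['w1', 'w2']  (workers who would all be paid if this answer is accepted)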


User Interface (refer to the figure below):

1. The requester chooses the type of task from the drop-down menu.

2. Four types of criteria are present, of which at least one has to be filled in.

3. Each criterion can be repeated any number of times using the “circular arrow”, which duplicates the concerned criterion.

4. Finally, hit the Submit button.

Figure: Statistical View tool user interface
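
A possible representation of what the form in steps 1–4 submits is sketched below; the field names (task_type, criteria, kind, value) are our assumptions, chosen only to mirror the interface described above.

 # Hypothetical data structure for the test-case specification built in steps 1-4.
 from dataclasses import dataclass
 from typing import List

 @dataclass
 class Criterion:
     kind: str    # e.g. "desired_output", "max_lines", "keyword", "word_limit"
     value: str

 @dataclass
 class TestCaseSpec:
     task_type: str             # chosen from the drop-down menu (step 1)
     criteria: List[Criterion]  # at least one; repeatable via the circular arrow (steps 2-3)

 spec = TestCaseSpec(
     task_type="code",
     criteria=[Criterion("desired_output", "0 1 1 2 3 5 8"),
               Criterion("keyword", "fibonacci"),
               Criterion("keyword", "iteration")],  # the same criterion duplicated (step 3)
 )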

Features

There are some features that further enhance our platform. Workers can belong to any country in the world, and despite possessing the required skill, a worker may fail to apply for a task because he does not understand the language it is written in. The task description should therefore be available in several different languages and not just one.

The platform has a badge system in which, depending on reputation, both workers and requesters receive a particular badge. Badge titles include, for example, “Beginner”, “Novice”, “Expert”, and “Night Owl”. This helps motivate workers to provide good-quality work.
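
One way such a badge system could work is sketched below; the reputation thresholds are placeholders, not values defined by the platform.

 # Hypothetical mapping from a reputation score to a badge title.
 def badge_for(reputation: int) -> str:
     if reputation < 100:
         return "Beginner"
     if reputation < 500:
         return "Novice"
     return "Expert"
 # Speciality badges such as "Night Owl" could be awarded by separate rules,
 # e.g. for work regularly submitted at night.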

One of the biggest problems faced by workers is delayed payment. To deal with this, workers are paid a part of the total amount at regular intervals, on completion of a pre-decided number of HITs.
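
The interval-based payout could look like the sketch below, where batch_size stands for the pre-decided number of HITs after which a proportional share of the total amount is released; all names and numbers are illustrative assumptions.

 # Hypothetical sketch of interval-based payouts.
 def due_payout(total_amount: float, total_hits: int,
                approved_hits: int, batch_size: int) -> float:
     completed_batches = approved_hits // batch_size
     per_batch = total_amount * batch_size / total_hits
     return completed_batches * per_batch

 # e.g. 100 HITs worth $50 in total, paid out after every 20 approved HITs:
 print(due_payout(50.0, 100, 45, 20))   # 2 full batches completed -> 20.0 released so far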

Evaluation

Requesters benefit greatly from the easier assessment of work: instead of scanning through hundreds of submissions, they are presented with a small number of distinct pieces of work. Because of the large variety of tasks available on a crowdsourcing platform, the filtration process is not yet efficient for every kind of task, but as the software grows we expect it to accommodate more and more task types.

The complaint forum has helped bring workers together and raise their issues. Their views are conveyed directly to the requesters. Workers are empowered and can even claim a penalty, so requesters are forced to play fair. This ensures better task descriptions.

Related Work

A similar work-analysis idea has been used before in crowdsourcing platforms such as MobileWorks [1]. However, there is a major difference between the two ideas. MobileWorks checks the quality of work by distributing the same task to two workers; their answers are accepted only if they match. If the answers differ, the task is forwarded to another worker, and so on until two answers match. The flaw in this implementation is that a wrong answer can end up being accepted because two workers give the same wrong answer [2].
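
Our reading of that agreement mechanism, written only as a rough sketch (MobileWorks’ actual implementation may differ), is the following:

 # Rough sketch of answer acceptance by agreement, as described for MobileWorks [1]:
 # keep forwarding the task to one more worker until two answers match.
 from typing import Callable, Optional

 def resolve_by_agreement(get_next_answer: Callable[[], str],
                          max_workers: int = 10) -> Optional[str]:
     seen = set()
     for _ in range(max_workers):
         answer = get_next_answer()   # forward the task to one more worker
         if answer in seen:
             return answer            # two workers agree -> accept (even if both are wrong)
         seen.add(answer)
     return None                      # no agreement within the worker budget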

References

1. Narula P, Gutheim P, Rolnitzky D, et al. MobileWorks: A Mobile Crowdsourcing Platform for Workers at the Bottom of the Pyramid. Human Computation, 2011, 11: 11.

2. http://crowdresearch.stanford.edu/w/img_auth.php/9/9a/MClerk_%28private%29.pdf