Milestone 6 Sky

From crowdresearch

Revision as of 00:29, 9 April 2015

Title: Quality-Promising Win-Win Crowdsourcing Platform Based on the Power Distribution between Worker and Requester


The dawn of the crowdsourcing millennium has brought many new opportunities to the work environment and has opened an entirely new virtual market for employees and employers. The nature of this market is still loosely defined; to become established, it needs clear, concrete, and coherent standards to conform to, so that it can be a genuinely trustworthy and desirable work environment for all participants. Current platforms like Amazon MTurk, MobileWorks, and mClerks inherit many characteristics of their traditional parents, real-world labor markets, but none of them fully addresses the needs and problems. A crowdsourcing platform should be trustworthy, transparent, and flexible for both requesters and workers, and this is not achievable unless power is distributed among all players in the system. We address the problems and needs with four key components: the Price-Quality Method (PQM), the Validator Wizard, the Delivery Filter System, and the Reward System. We claim that these components, arranged in a modular architecture, fully address the needs through a win-win strategy balanced between requester and worker.


The goal of our system is to address all six needs synthesized in Milestone 4, as well as to introduce some novel ideas ensuring that the needs are addressed not only on a per-module basis but also in the interactions between modules. We observe that workers are concerned about payment and fairness, while requesters are concerned about the quality of delivered tasks, and these concerns introduce many conflicts into the system. We believe that the lack of power distribution and balance between worker and requester induces many trust conflicts. We want to mitigate these concerns by automatically blocking the sources of trust conflicts. After analyzing MTurk and the long-term need-finding in [1] and [2], we propose a system with the key components PQM, CSP, Validator Wizard, Reward and Empathy, and Delivery Filter System, which we fully describe in the following sections.


We propose the Collective Statistic Panel (CSP), a feature collector that delivers statistics as input to the other components of the system. It is also the main feature in transforming current platforms into a personalized, adaptive environment that learns from previous statistics.
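To make the CSP's role as a feature collector concrete, here is a minimal sketch in Python. The class and method names (`CollectiveStatisticPanel`, `record`, `approval_rate`) are illustrative assumptions, not part of the proposal; the sketch only shows the idea of recording task outcomes and serving an aggregate statistic back to other modules.

```python
from collections import defaultdict

class CollectiveStatisticPanel:
    """Illustrative CSP sketch: records per-worker task outcomes and
    serves aggregate statistics as input to other modules."""

    def __init__(self):
        # worker_id -> list of (task_category, approved) outcomes
        self._records = defaultdict(list)

    def record(self, worker_id, category, approved):
        """Store one completed-task outcome for a worker."""
        self._records[worker_id].append((category, approved))

    def approval_rate(self, worker_id):
        """Fraction of this worker's recorded tasks that were approved
        (0.0 for a worker with no history)."""
        history = self._records[worker_id]
        if not history:
            return 0.0
        return sum(1 for _, ok in history if ok) / len(history)

csp = CollectiveStatisticPanel()
csp.record("w1", "translation", True)
csp.record("w1", "translation", False)
print(csp.approval_rate("w1"))  # 0.5
```

A statistic like `approval_rate` is the kind of feature the recommender and PQM modules could consume from the CSP database.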

The Validator Templates Wizard Studio provides an easy-to-use, predefined task-validation environment based on various validation strategies, converging toward highly descriptive, validatable tasks over time. The Multilevel Result Delivery Filter System is an automatic, multilevel result-feedback delivery component that can generate justifiable feedback at different checkpoints of the system. These filters guarantee quality control and enforce the validation criteria.

As part of the Multilevel Result Delivery System, we propose predefined filters to increase the quality of work. We have integrated some of these filters as widgets in the Wizard Studio. We define a validatable task as one that includes these filters, making quality control and result filtering more efficient.
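One way to picture a predefined filter widget is as a small checker that a validatable task attaches to submitted results. This is a hypothetical sketch, assuming word-count and pattern-match widgets; the proposal does not specify which widgets the Wizard Studio offers.

```python
import re

def make_validator(min_words=0, pattern=None):
    """Build a validation widget (illustrative): a checker that a
    validatable task can run over each submitted answer."""
    def check(answer):
        if len(answer.split()) < min_words:
            return False  # answer too short
        if pattern and not re.search(pattern, answer):
            return False  # required pattern missing
        return True
    return check

# Example widget: the answer must have at least 3 words and contain a URL.
has_url = make_validator(min_words=3, pattern=r"https?://")
print(has_url("see https://example.com here"))  # True
print(has_url("too short"))                     # False
```

Composing several such checkers would give a multilevel filter: each level can reject a result with a justifiable reason before it ever reaches the requester.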

PQM is a mutual price-offering system that balances the power of price setting between worker and requester. It leverages the quality-price ratio based on negotiation between the two sides. PQM also closes the gap of underestimating new, inexperienced workers during selection and of undervaluing the skills of overqualified workers.
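The qualification decision at the heart of PQM can be sketched as a comparison of a worker's skill score against a task's required score within a tolerance band. The scoring scale and the 15% tolerance are assumptions made for illustration; the proposal does not fix the metric.

```python
def classify_qualification(worker_score, required_score, tolerance=0.15):
    """Hypothetical PQM check: place the worker relative to the task's
    required skill score, with a symmetric tolerance band."""
    if worker_score < required_score * (1 - tolerance):
        return "underqualified"
    if worker_score > required_score * (1 + tolerance):
        return "overqualified"
    return "qualified"

print(classify_qualification(0.9, 0.6))  # overqualified
print(classify_qualification(0.6, 0.6))  # qualified
```

Workers classified outside the band would then enter the price negotiation described below, rather than being rejected outright.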

The Empathy and Reward System can provide an excellent worker with four kinds of rewards: the Unexpected Reward, which surprises the worker and motivates further contribution; the Certificate Reward, which appreciates the worker's contribution; the Achievement Share, which shows the worker how important his or her participation is to a bigger goal and conveys a sense of usefulness and connectedness ("I'm part of this"); and the Final Result Benefit, which shows the worker how his or her work benefited the accomplishment of the goal.
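The four reward kinds can be enumerated in code. How the system chooses among them is not specified in the proposal, so the selection rule below (a surprise reward for workers above a 0.9 approval rate, a certificate otherwise) is purely an illustrative assumption.

```python
import random

# The four reward kinds named by the Empathy and Reward System.
REWARD_TYPES = ["unexpected", "certificate", "achievement_share", "final_result_benefit"]

def pick_reward(approval_rate, rng=random):
    """Hypothetical selection rule: excellent workers get a randomly
    chosen surprise reward; everyone else gets a certificate."""
    if approval_rate >= 0.9:
        return rng.choice(REWARD_TYPES)
    return "certificate"
```

The random choice models the "unexpected" quality of the surprise reward: the worker cannot predict which of the four kinds will arrive.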


In this part, we describe how the different modules come into play, interacting with each other and with workers and requesters. In the following synopsis we describe the activation of each module and how each can serve, or be served by, the other modules.


The players of the system are workers, requesters, and the modules that are activated at different epochs of the system. First, the requester enters the system to create a task, starting from the task creation interface. This interface leads the requester to the Validator Template Wizard Studio, which is designed to enable automatic, systematic validation of tasks and to train the requester to write more reliable and descriptive tasks over time. The output of the validator is fed into the Quality Agreement Generator to help filter certain parameters during worker selection, and it also leads to the creation of a validatable task. After creating the task, the requester waits for workers.

Workers start by introducing themselves to the system, subscribing to task categories or requester categories based on their interests. The tagging and subscription system is one of the modules that updates the recommender module, serving it as an input by matching workers with the task or requester categories they have subscribed to. The recommender module collects data from the CSP database on previously completed tasks and their statistics, as well as worker-requester statistics, and forwards this data as input to the task feed. The worker is now ready to search for jobs.
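The subscription-based matching step above can be sketched as a simple filter. The task fields (`category`, `requester`) and the function name are assumptions for illustration; a real recommender would also weigh CSP statistics.

```python
def recommend_tasks(worker_subscriptions, tasks):
    """Illustrative matcher: keep tasks whose category or requester
    appears in the worker's subscriptions."""
    subs = set(worker_subscriptions)
    return [t for t in tasks if t["category"] in subs or t["requester"] in subs]

tasks = [
    {"id": 1, "category": "translation", "requester": "acme"},
    {"id": 2, "category": "labeling", "requester": "beta"},
]
matches = recommend_tasks({"translation"}, tasks)
print([t["id"] for t in matches])  # [1]
```

The output of such a matcher would feed the task feed that the worker queries next.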

The worker queries the task feed, selects a task he or she likes, and is then forwarded to the PQM interface. The PQM interface decides whether the worker is "qualified", "underqualified", or "overqualified". If the qualifications match, the worker can start the task and proceeds to the Quality Requirement Agreement Module. This module enforces the requirements that the requester is concerned and strict about. If the worker agrees with the quality agreements, he or she can accept the task and move to the next level; otherwise the worker can dispute those agreements or express concerns about the requirements on the Forum. The requirement agreements are produced by the Quality Agreement Generator, which is itself driven by the constraints and validations that the requester enforces in the Validator Wizard Studio.

If the worker is identified as "overqualified" (or "underqualified") by the PQM module, then he or she should offer a higher (or lower) price, respectively, together with a justification explaining the eminent skill or the lack of it, and send it to the Three-Price Offer System. Based on his or her qualification, the worker can bid a price for the task and wait for the requester to accept.
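The routing in the two paragraphs above reduces to a small decision function. The state names returned here are invented labels for the destinations the text describes, not identifiers from the proposal.

```python
def route_worker(qualification, accepts_agreement):
    """Illustrative routing of a worker after the PQM decision:
    - under/overqualified workers go bid in the Three-Price Offer System;
    - qualified workers who accept the quality agreement start the task;
    - qualified workers who refuse can dispute it on the Forum."""
    if qualification in ("underqualified", "overqualified"):
        return "three_price_offer"
    if accepts_agreement:
        return "start_task"
    return "dispute_on_forum"

print(route_worker("qualified", True))        # start_task
print(route_worker("overqualified", True))    # three_price_offer
```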

Once the worker is accepted, he or she starts completing the task, which is by nature validatable. The result of the task goes to the Validator Template Parser, which is generated from the validation resources that the requester selected in the Validator Wizard Studio and which provides those resources. If the validator parser requires moderators, it consults the recommender system for moderators; once assigned, they are asked to review the results and send their reasoning to the Delivery Filtering System. If the parser requires extra workers, the same task is given to those workers and their results are sent to the Delivery Filtering System.
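Whether the reviews come from moderators or from duplicate workers, the Delivery Filtering System must aggregate them into a decision. A majority-style rule is one plausible sketch; the threshold value and the boolean-review representation are assumptions made for illustration.

```python
def aggregate_reviews(reviews, threshold=0.5):
    """Illustrative aggregation for the Delivery Filtering System:
    accept a result when the fraction of approving reviews (booleans
    from moderators or duplicate workers) meets the threshold."""
    if not reviews:
        return False  # no evidence: do not accept
    return sum(reviews) / len(reviews) >= threshold

print(aggregate_reviews([True, True, False]))  # True  (2/3 approve)
print(aggregate_reviews([False, False, True]))  # False (1/3 approve)
```

Each rejected result would also carry the reviewers' reasoning, so the feedback stays justifiable as the text requires.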

In a further step, a survey about the task is given to the worker. The survey is created by the CSP collector, and one of its goals is to evaluate the task. The evaluation is fed into the CSP database to keep records of the tasks, and it is also given as feedback to the task validator modifier, whose job is to train the requester to create more descriptive and clearer tasks. In this way the use of task validators improves over time, and the system guarantees well-trained requesters. After the survey, its results can be imported into the CSP database to update the requester's profile. At any point after completing the survey, the worker can receive quick feedback. The worker can ask questions about this feedback in communities on the Micro-Forum, and the Micro-Forum can reflect the problems to the Validator Module so that the requester knows what the problem was. The quick feedback is not the complete feedback, and the worker should wait for the complete results to come in.

The Delivery System filters the results and sends them to the preview generator module, and the requester can now preview the results. If the requester approves the results, which are partially validated automatically by the system and guaranteed to be acceptable, he or she prepares a payment and a reward for the worker, along with an approval letter. The worker receives the final feedback report generated by the Result Feedback Generator, as well as the payment. The requester can now apply the task results to his or her field or purpose and rewards the worker. At this phase the worker waits for the results and feedback and, in case of approval, is provided with money, approval, and the reward. If the task is rejected, the feedback generator prepares a rejection letter. In either scenario, the results are reflected to the CSP database, the profile builder module is updated, and the lifetime of the task ends.
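The settlement step that closes the task's lifetime can be summarized as a single function. The returned fields are invented names standing in for the approval/rejection letter, payment, and reward the text describes; writing the outcome back to the CSP database is left as a comment.

```python
def settle_task(approved, price, reward):
    """Illustrative settlement at the end of a task's lifetime:
    approval yields payment, reward, and an approval letter;
    rejection yields a rejection letter and no payment.
    Either outcome would also be written back to the CSP database."""
    if approved:
        return {"letter": "approval", "payment": price, "reward": reward}
    return {"letter": "rejection", "payment": 0, "reward": None}

print(settle_task(True, 5.0, "certificate"))
print(settle_task(False, 5.0, "certificate"))
```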