Milestone 7 Sky


Team Members

Farzad Salimi Jazi

Bita Kazemi Zahrani

Title: A Quality-Promising, Win-Win Crowdsourcing Platform Based on Power Distribution between Worker and Requester, Mitigating Trust Conflicts

Abstract

The dawn of the crowdsourcing millennium has brought many new opportunities to the work environment and has opened an entirely new virtual market of employees and employers. The nature of this market is subject to unclear definitions; to become established, it needs clear, concrete and coherent standards to conform to, so that it becomes a genuinely trustable and desirable work environment for all participants. Current platforms like Amazon MTurk, MobileWorks and mClerks inherit many characteristics from their traditional parents, real markets, but none of them fully addresses these needs and problems. A crowdsourcing platform should be trustable, transparent and flexible for both requesters and workers, and this is not achievable unless we distribute power among all players in the system. We address these problems and needs with four main components: the Price-Quality Method (PQM), the Validator Wizard, the Delivery Filter System and the Reward System. We claim that we can fully address the needs with a win-win strategy, balanced between requester and worker, in a modular architecture.

Motivation

The goal of our system is to address all six needs synthesized in Milestone 4, as well as to introduce some novel ideas that ensure the needs are addressed not only on a modular basis but also in the system's interactions. We observe that workers are concerned about payment and fairness, while requesters are concerned about the quality of the delivered tasks, and these concerns introduce many conflicts into the system. We believe that the lack of power distribution and balance between worker and requester induces many trust conflicts. We want to mitigate these concerns by automatically blocking the sources of these conflicts. After analyzing MTurk and the extended need finding in [1] and [2], we propose a system with the key components PQM, CSP, Validator Wizard, Reward and Empathy, and the Delivery Filter System; we fully describe these components in the following sections.

Insight

We propose the Collective Statistic Panel (CSP), a feature collector that delivers statistics as input to the different components of the system. It is also the main element in transforming current platforms into a personalized, adaptive environment that learns from previous statistics.
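As a concrete illustration, here is a minimal Python sketch of the kind of record the CSP might collect and the aggregate statistics it could serve to other modules; all field and class names are our own assumptions, not part of the design.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CSPRecord:
    """One CSP entry; the fields shown are illustrative assumptions."""
    task_id: str
    worker_id: str
    requester_id: str
    completion_time_sec: float      # how long the worker spent on the task
    approval: bool                  # final approve/reject decision
    survey_feedback: Dict[str, int] = field(default_factory=dict)  # survey answers

class CSPDatabase:
    """Collects records and serves aggregate statistics to other modules."""
    def __init__(self) -> None:
        self.records: List[CSPRecord] = []

    def add(self, record: CSPRecord) -> None:
        self.records.append(record)

    def approval_rate(self, worker_id: str) -> float:
        """Share of a worker's tasks that were approved (0.0 if none recorded)."""
        mine = [r for r in self.records if r.worker_id == worker_id]
        return sum(r.approval for r in mine) / len(mine) if mine else 0.0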

The Validator Templates Wizard Studio provides an easy-to-use, predefined task-validator environment based on various validation strategies, converging toward highly descriptive, validatable tasks over time. The Multilevel Result Delivery Filter System is an automatic, multilevel result-feedback delivery component that can generate justifiable feedback at different checkpoints of the system. These filters guarantee quality control and enforce the validation criteria.

Building on the Multilevel Result Delivery System to requesters [3], we propose pre-defined filters to increase the quality of the work. We have integrated some of these filters as widgets in the Wizard Studio, and we define a validatable task as one that includes these filters, making quality control and result filtering more efficient.

PQM is a mutual price-offering system that balances the power of price placement between worker and requester. It leverages the quality-price ratio based on negotiation between both sides. PQM also addresses both the underestimation of new, inexperienced workers during selection and the undervaluing of overqualified workers' skills.

The Empathy and Reward System can provide excellent workers with four kinds of rewards: an Unexpected Reward that surprises the worker and motivates further contribution; a Certificate Reward that appreciates the worker's contribution; an Achievement Share that shows the worker how important his or her participation is to a bigger goal, giving a sense of usefulness and connectedness ("I'm part of this"); and a Final Result Benefit that shows how the worker contributed to the accomplishment of a goal.
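A minimal sketch of how the four reward kinds could be represented and dispatched; the policy thresholds below are invented purely for illustration.

from enum import Enum, auto

class RewardKind(Enum):
    UNEXPECTED = auto()            # surprise reward to motivate further contribution
    CERTIFICATE = auto()           # formal appreciation of the worker's contribution
    ACHIEVEMENT_SHARE = auto()     # shows the worker's role in a bigger goal
    FINAL_RESULT_BENEFIT = auto()  # shows how the work benefited the final outcome

def pick_reward(approved_tasks: int, milestone_reached: bool,
                goal_completed: bool) -> RewardKind:
    """Toy policy for choosing among the four reward kinds; thresholds invented."""
    if goal_completed:
        return RewardKind.FINAL_RESULT_BENEFIT
    if milestone_reached:
        return RewardKind.ACHIEVEMENT_SHARE
    if approved_tasks > 0 and approved_tasks % 50 == 0:
        return RewardKind.CERTIFICATE
    return RewardKind.UNEXPECTED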

System

In this part, we describe how the different modules come into play, interacting with each other and with workers and requesters. In the following synopsis we describe the activation of each module and how each serves, or is served by, the others.

(Figure: system workflow diagram)


Flow-Synopsis:

In the flow, we have interface modules, activity modules and decision modules. We will walk through the mechanisms by which these modules communicate and how they are activated throughout the lifetime of the system. Each module may fall into one or more chronological phases and interacts with several other modules.

The very first activity is the Requester creating a task. In this phase, workers may or may not yet exist in the system. The system offers a Task Design Interface (TDI). The interface provides the user with a menu of options: 1. basic element selector, 2. add resources package, 3. add validator widgets, 4. bind related widgets, 5. simulate with test workers, 6. review test results, 7. assign expert moderators, 8. select dispatch strategy, and 9. licensing and confirmation. Each option in the menu has a role in task creation and facilitates the process by breaking it down one step at a time. A requester may or may not use all of these options while creating a task. The TDI communicates with and activates the Validator Template Wizard Studio, a module for creating different validation strategies for a task; it creates a VALIDATABLE task using the requester's preferred validation methods. Validator scripts are available from a pool: golden questions, arbitrator questions, condition validators and connector widgets, to name a few. The created task is called a VALIDATABLE task because it has all the widgets required to be validated with a reasonable flow. Also, while creating a task, the requester can define certain quality-agreement rules that are exposed to the worker later.
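To make the idea of a VALIDATABLE task concrete, here is a minimal Python sketch of how the TDI might compose validator widgets (such as a golden question) into a task; every name here is illustrative, not a specification.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ValidatorWidget:
    """A single validation strategy attached to a task (e.g. a golden question)."""
    name: str
    check: Callable[[str], bool]   # returns True if the submitted answer passes

@dataclass
class ValidatableTask:
    title: str
    instructions: str
    widgets: List[ValidatorWidget] = field(default_factory=list)
    quality_rules: List[str] = field(default_factory=list)  # shown to the worker

    def validate(self, answer: str) -> bool:
        """A submission passes only if every attached widget accepts it."""
        return all(w.check(answer) for w in self.widgets)

# Example: a golden question whose correct answer is known in advance.
golden = ValidatorWidget("golden_question", lambda ans: ans.strip().lower() == "paris")
task = ValidatableTask(
    title="Capital cities",
    instructions="Name the capital of France.",
    widgets=[golden],
    quality_rules=["Answers must be a single word."],
)
assert task.validate("Paris")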


Now that a VALIDATABLE task is created, the requester goes to the "waiting for worker" phase.

The workers enter the system. The first thing they should do is introduce themselves to the system. How do they do it? The Task Tagger and Subscriber module takes care of that: the worker identifies his or her interests and subscribes to them, facilitating future task finding.

We have a CSP database, which keeps records of all tasks, workers and requesters over time. The data from CSP and the interest parameters coming from the subscriber go to a Recommender module, which recommends the best matches of HITs/requesters to workers.
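A toy sketch of how the Recommender might rank open tasks by combining the worker's subscribed interest tags with an approval-rate statistic from CSP; the scoring rule and field names are our assumptions.

from typing import Dict, List, Set, Tuple

def recommend_tasks(
    worker_tags: Set[str],
    worker_approval_rate: float,
    open_tasks: List[Dict],
    top_k: int = 5,
) -> List[Tuple[float, str]]:
    """Rank open tasks for a worker by interest-tag overlap, weighted by the
    worker's CSP approval rate. Task dicts carry 'task_id' and 'tags'."""
    scored = []
    for task in open_tasks:
        overlap = len(worker_tags & set(task["tags"]))
        if overlap == 0:
            continue  # never recommend tasks outside the worker's interests
        scored.append((overlap * (0.5 + worker_approval_rate), task["task_id"]))
    return sorted(scored, reverse=True)[:top_k]

# Example: a transcription-oriented worker with a 0.9 approval rate.
feed = recommend_tasks(
    {"audio", "transcription"}, 0.9,
    [{"task_id": "t1", "tags": ["transcription"]},
     {"task_id": "t2", "tags": ["image", "labeling"]}],
)
print(feed)  # only t1 is recommended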

The worker has a task feed interface, which shows a list of possible HITs/tasks produced by the recommender. The worker selects one of them. Now it is time for the worker to consider the price and requirements of the task and decide whether he or she wants it. After selecting the task, a Price-Quality-Method (PQM) interface decides whether the worker is qualified. If the worker is qualified, he or she can accept the task by agreeing to certain quality agreements. The quality agreements are created by the Quality-Agreement Generator, whose base parameters come from the validation strategies and rules that the requester selected while creating the task. If the worker is "qualified" and agrees to those requirements, he or she can accept the validatable task; otherwise, the worker can go to the Micro-forum to discuss his or her concerns about the requirements and, based on that, start again from the task feed to select the HIT or simply ignore it.

If the worker is not qualified, he or she is either "unqualified" or "overqualified" according to the PQM module. A three-price offer system is fed with the worker's qualification (unqualified/overqualified), a price to negotiate, and the worker's justification for the quality-price ratio, and then acts as a negotiator between worker and requester. The worker waits for the requester's response to this price-quality negotiation; if the requester accepts, the worker can accept the task, go through the same quality-agreement step, and start the task.
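A minimal sketch of this negotiation step, under our reading of the three-price offer idea [6]; the acceptance rule is a stand-in for whatever policy the requester actually applies.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PriceOffer:
    """A counter-offer sent from worker to requester through the negotiator."""
    worker_id: str
    qualification: str        # "unqualified" or "overqualified" per PQM
    proposed_price: float     # price the worker asks for
    justification: str        # worker's argument for the quality-price ratio

def negotiate(offer: PriceOffer, base_price: float,
              requester_max: float) -> Optional[float]:
    """Toy requester-side rule: accept any proposal within budget; an
    overqualified worker may ask above base price, an unqualified one may not."""
    if offer.qualification == "unqualified" and offer.proposed_price > base_price:
        return None                       # rejected; worker returns to the task feed
    if offer.proposed_price <= requester_max:
        return offer.proposed_price       # accepted; worker proceeds to agreements
    return None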

The worker starts completing a VALIDATABLE task. Once he or she submits it, two modules are activated. The first is the Template Validator Parser. This module is responsible for providing all the provisions dictated by the requester in the task-creation phase and reserving all the resources needed to validate the task. If the task needs extra moderators, the parser obtains them by consulting the recommender; if it needs extra test workers to do the same task again, it obtains workers the same way. The recommender, as always, decides based on the profiles of the workers/moderators as well as their subscriptions. Once everything needed to validate the task is ready, the task can be validated by the Multiple Delivery Result Filtering System, or MDRFS for short. This module decides the validity of the task and can report its validation at different phases of task completion. The MDRFS reports to both the Task Feedback Generator and the Result Preview Generator. After doing the task, the worker completes a survey about it. This survey is created by the CSP Collector module to probe different aspects of the task with the worker's opinion. Once the survey is done, the worker's comments and feedback are written to the CSP database and go to the Task Validator Modifier (TVM). The TVM now has feedback on the task and can use it to update the task's validation design, leading to better and better task design over time.
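A toy sketch of the MDRFS as an ordered pipeline of filters producing a checkpoint report for the feedback and preview generators; the two filters shown are invented examples.

from typing import Callable, List, Tuple

Filter = Callable[[str], Tuple[bool, str]]  # returns (passed, message)

def run_mdrfs(submission: str, filters: List[Filter]) -> List[Tuple[str, bool, str]]:
    """Run a submission through an ordered list of filters, collecting a
    per-checkpoint report (filter name, pass/fail, message)."""
    report = []
    for f in filters:
        passed, message = f(submission)
        report.append((f.__name__, passed, message))
        if not passed:
            break   # later checkpoints are skipped once a filter rejects
    return report

def not_empty(s: str) -> Tuple[bool, str]:
    return (bool(s.strip()), "submission must not be empty")

def min_length(s: str) -> Tuple[bool, str]:
    return (len(s.split()) >= 5, "answer must be at least 5 words")

print(run_mdrfs("A short answer about the task result.", [not_empty, min_length]))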

The task feedback is also written to CSP. The Profile Builder continually consults the CSP database to update the profiles of workers and requesters, and then updates the profile database.

Once the survey is done, the worker receives very quick feedback about the task he or she has done, and any questions or problems can be raised in the Micro-forum. This feedback is generated by the Task Feedback Generator module. The requester is also given a preview of the results; he or she can review it visually, though only part of the result is previewed. The requester can now decide to approve or reject the task: if the worker's submission has passed through all the filters and the requester's review, the task can be approved. A module called the Result Feedback Generator creates an approval/rejection letter along with the payment invoice. Based on this result feedback, the Empathy/Reward module can be activated to reward the worker from a pool of different rewards. The reward can be a report of how the worker has contributed to some big goal, or of how they have led a requester to a goal such as graduation, and much more.

The main idea behind the Empathy/Reward system is to give the worker a sense of contribution and participation; in this way the worker is motivated to return to the system and feels a sense of fulfillment.


PQM for combining quality and price
Motivation

One of the biggest concerns in current crowdsourcing platforms is how to present a fair price-offering system that eliminates unfair pricing and weights the task price based on the quality of the worker and the task. We already offered a three-price offer system in Milestone 3 [6], and now we want to integrate that idea with PQM.

Related Work
Insight

The novelty of the system is based on the fact that PQM is an automatic model: it converts all qualities into quantities and produces a price that is fair. However, the original method involves no communication or negotiation. By using a tender system based on PQM for fair pricing, workers can negotiate prices and the communication becomes two-sided. In plain PQM, the worker interacts with the module, gets a fair price and is rewarded with a contract; in our approach, we open interaction channels between worker and requester. We believe that this initial negotiation over the price leads to fewer conflicts, less feeling of underpayment for the worker, a sense of trust for both worker and requester, and a fair, quality-based distribution of power between worker and requester.

System

PQM translates qualitative attributes into quantitative scores which, when combined with price scores, enable the most suitable firm providing the best offer to be selected for award [1]. The role of PQM in our system is to interpret all quality attributes as quantitative ones, turning all of a worker's qualifications into a vector of quantities and rewarding the worker based on that quality [1]. PQM should provide transparent pricing to both the worker and the requester. The worker can apply for a qualification check; based on those qualifications, a fair price is computed by the system, and the worker can tender a price based on it.
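Following the general scheme in [1], here is a small sketch of how quality attributes and price might be combined into a single score; the 60/40 weighting and the lowest-price normalization are our assumptions, not fixed by the source.

def pqm_score(quality_attributes: dict, weights: dict,
              price: float, lowest_price: float,
              quality_weight: float = 0.6, price_weight: float = 0.4) -> float:
    """Combine weighted quality attributes with a normalized price score,
    after the general PQM scheme in [1]; the split and normalization are
    illustrative assumptions."""
    # Quality: each attribute is scored 0..100 and weighted; weights sum to 1.
    quality_score = sum(weights[a] * quality_attributes[a] for a in quality_attributes)
    # Price: the lowest offer gets the full score, dearer offers proportionally less.
    price_score = 100.0 * lowest_price / price
    return quality_weight * quality_score + price_weight * price_score

# Example: a worker with strong experience asking slightly above the lowest offer.
attrs = {"experience": 80, "track_record": 90}
w = {"experience": 0.5, "track_record": 0.5}
print(pqm_score(attrs, w, price=12.0, lowest_price=10.0))  # about 84.3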

We are still working on the details of the PQM algorithm for our usage, but the terminology and major steps are shown below; we will update this section with more details.

(Figure: PQM overview)

These are the major steps of PQM; the details of each step are in progress.

(Figure: major steps of PQM)


References

1. http://www.bca.gov.sg/PQM/others/PQMv1_public.pdf

2. http://crowdresearch.stanford.edu/w/index.php?title=Milestone_4

3. http://crowdresearch.stanford.edu/w/index.php?title=Milestone_3_Sky_TrustIdea_1:_Multilevel_Result_Delivery_system_to_requesters

4. http://crowdresearch.stanford.edu/w/index.php?title=Milestone_3_Sky_PowerIdea_2:_Three_complementary_part_HIT_Tracer_system

5. http://crowdresearch.stanford.edu/w/index.php?title=Milestone_3_Sky_TrustIdea_1:_Collective_Statistics_Panel_%28CSP%29

6. http://crowdresearch.stanford.edu/w/index.php?title=Milestone_3_Sky_PowerIdea_1:_A_Three-priced_Offer_System_with_Negotiation_Mechanism_%28TBA%29