MileStone 9 Sky

From crowdresearch

Revision as of 23:22, 30 April 2015

Foundation 1: Unifying macrotasks with microtasks, and quality control by PQM and Quality Milestones


How does it work? Is it negotiation, oDesk style? Or job boards anyone can check, like Mechanical Turk? How do we address quality and make sure that people don't just grab a task, work for ten hours, and then submit low quality work? How do we unify macro with microtasking?


Three factors are involved in a macrotask's quality: the human resources who start the macrotask; quality control (QC) in microtasks and quality assurance, as described in the North Region Quality Management Plan; and, last but not least, the way the macrotask is broken down.

PQM translates qualitative attributes into quantitative scores which, when combined with the price scores, enable the most suitable firm providing the best offer to be selected for award [1]. The role of PQM in our system is to interpret every quality attribute as a quantity, turning all the worker's qualifications into a vector of quantities, and to award the worker based on that quality [1]. PQM should provide transparent pricing to both the worker and the requester: the worker can apply for a qualification check, a fair price is derived in the system from those qualifications, and the worker can then tender a price based on it.
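The idea of turning a worker's qualifications into a vector of quantities can be written compactly. This is our own formalization, not notation from [1]; the symbols q, w, and s are illustrative:

```latex
% Worker i's qualifications as a vector of quantified attributes
% q_i = (q_{i,1}, \dots, q_{i,n}), with attribute weightages w_j.
s_i = \sum_{j=1}^{n} w_j \, q_{i,j}
% The award then combines the quality score s_i with the tendered price.
```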

PQM System overview

These are the major steps of PQM. The details within each rectangle are still in progress.

PQM step.png

The Price Quality Method is embedded in the very first interfaces the worker interacts with. At the beginning of the worker's journey, while looking for a HIT and surfing the newsfeed, the PQM interface directs the worker to a price-quality module to achieve a win-win price-quality negotiation between worker and requester. Our adapted PQM module gives both sides the flexibility to personalize and prioritize their price and quality concerns as they want them to be. The module replaces and updates the part of the system indicated in the figure below.

Flow milestone9 sky PQM part2.png

First we explain the flow within the PQM component of our system. As shown in the following figure, the worker interacts with the PQM interface and decides whether to enter the system by tendering a price or to accept the base price. If the worker accepts the base price, the process is very straightforward: the worker is directed to a base price recommender and asked for further agreements. If the worker is eager to submit a tender, the interface provides a tender proposal in which the worker can specify their preferences as non-price attributes. The tender proposal contains as many non-price attributes as the worker wants, each with a corresponding rating parameter. Once the proposal is ready, the other sources of quality scores and price-collecting factors are activated: HIT real-time data, worker real-time data, requester real-time data, and data from the profiles. Each of these has its own non-price attributes and weightages, and in the end they can be converted into a price indicator. The PQM evaluator accepts the tenders from different workers, collects their individual non-price preferences and weightages, looks up the base price, and computes a combined score from all of these inputs. The base price is the main component for computing the tenderer ranking results. The module notifies the winning tenderer for further agreement.

Below you can see the non-price-to-price conversion process in the spreadsheet. As shown in the spreadsheet, there are seven main components active in the system:

*Worker/Tenderer evaluator
*Base price recommender
*Proposal attribute grade calculator
*Real-time worker data grade calculator
*Real-time HIT data grade calculator
*Real-time requester data grade calculator
*Profile data grade calculator

The first component is the worker/tenderer evaluator; all the other components are embedded within it. The base price recommender is explained in detail further below, but for now we focus on these components to get an overall understanding of the proposed system. The worker/tenderer evaluator receives its data from components 2 through 7 and gives each of them a weightage. The importance score of each resource is computed from its grade and its weightage; the grade is computed in the corresponding grading component and sent to the first component as an input. All resource scores are mapped to a value between 0 and 5, and an index is computed for each non-price attribute resource.
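As a minimal sketch of the grading step, assuming (as the text states) grades mapped to the 0-5 range and a per-resource weightage, the index for one non-price attribute resource could be computed as follows. The function name and the clamping are our own assumptions:

```python
def attribute_index(grade, weightage):
    """Index of one non-price attribute resource (hypothetical helper).

    Grades are mapped to the 0-5 range described in the text; the index
    is the grade scaled by the resource's weightage.
    """
    grade = max(0.0, min(5.0, grade))  # clamp to the 0-5 scale
    return grade * weightage
```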


The Weighted Sum (WS) of non-price attributes is computed by summing all of the indices.


Then we introduce the concept of the Weighted Sum Margin, obtained by subtracting the minimum WS among all tenderers from each tenderer's WS:


Now we convert this WS margin into a price value called the WQP, or Worker Quality Price:


The price schema parameter is based on the quality-price schema that we select for our PQM model. It can be 80:20, 70:30, 60:40, and so on; the price schema indicates what portion of the award decision is monetary and what portion is quality. Each tenderer i submits a tender price Tender i, and the combined quality price score is calculated as below:


The tenderer with the lowest CQPS wins and is directed to the further-agreement components.
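The whole tender-evaluation chain (WS, WS margin, WQP, CQPS) can be sketched in code. Since the original formula images are unavailable, the WQP conversion below, scaling the margin by the quality share of the price schema and the mean tender price, is our own assumption; all names are illustrative:

```python
def pqm_winner(tenders, schema_quality=0.3):
    """Pick the winning tenderer by Combined Quality Price Score (CQPS).

    tenders: list of dicts with 'price' (tender price) and 'grades'
             (attribute name -> (grade 0-5, weightage)).
    schema_quality: quality share of a price schema such as 70:30 (0.3).
    The WQP conversion here is an assumption, not the original formula.
    """
    # Weighted Sum (WS) of non-price attributes for each tenderer
    ws = [sum(g * w for g, w in t['grades'].values()) for t in tenders]
    # WS Margin: distance above the weakest tenderer's WS
    min_ws = min(ws)
    margins = [s - min_ws for s in ws]
    # Worker Quality Price (WQP): margin converted to money via the
    # quality share of the schema and the mean tender price (assumed form)
    mean_price = sum(t['price'] for t in tenders) / len(tenders)
    wqp = [schema_quality * mean_price * m / 5.0 for m in margins]
    # CQPS: tender price discounted by the quality premium; lowest wins
    cqps = [t['price'] - q for t, q in zip(tenders, wqp)]
    winner = min(range(len(tenders)), key=lambda i: cqps[i])
    return winner, cqps
```

A higher-quality tender earns a larger WQP discount, so it can win even against an equal or slightly lower price.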

PQM flowchart.png

Screenshot of spreadsheet Prototype

  • Evaluation of Workers' (Tenderers') Non-Price Attributes / Determination of Quality Premium / Determination of Preferred Tender
Evaluate PQM farzadbita.png
  • Grade Calculator

Proposal grade cal farzadbita.png
Worker Grade calculator farzadbita.png

  • Base Price and points Estimator (Default values)

The fair base price estimator inherits and combines many of the methods previously used for price estimation. We refer to [Pricing Crowdsourcing Based Software Development Tasks] for the idea of categorizing the tasks and their drivers and plugging them into a logistic regression method. We also refer to [Measuring Crowdsourcing Effort with Error-Time Area] to extract more drivers and categories.

One starting suggestion is to find cost drivers in our system based on task type and nature. We are going to have a table like this:

Estimate table.png
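Following the logistic-regression idea referenced above, a toy version of the estimator might map driver scores through a logistic curve into a price band. The driver table, weights, and price band below are invented placeholders, not calibrated values:

```python
import math

# Illustrative cost-driver table per task type (assumed weights).
DRIVER_WEIGHTS = {
    'transcribe': {'length': 0.8, 'audio_quality': -0.5},
    'tag':        {'image_count': 0.6, 'label_set_size': 0.3},
}

def base_price(task_type, drivers, floor=0.05, ceiling=2.0):
    """Estimate a fair base price from cost drivers (hypothetical sketch).

    A weighted driver score is squashed through a logistic curve into
    (0, 1), then mapped onto an assumed price band [floor, ceiling].
    """
    weights = DRIVER_WEIGHTS[task_type]
    score = sum(weights.get(k, 0.0) * v for k, v in drivers.items())
    p = 1.0 / (1.0 + math.exp(-score))  # logistic squashing to (0, 1)
    return floor + p * (ceiling - floor)
```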

Unify Macro with Microtasking

We propose the PQM method in the task assignment stage (before task execution begins), via PQM and the price offer system. The entrance to the PQM phase is shown in this picture.

PQM flowchart.png

Having a quality agreement and involving quality parameters from different sources is a good way to choose qualified workers and, consequently, to improve the chance of securing better human resources, which is critically important to the quality of the work.

Breaking Down Macrotasks

Based on [11], there are ten task primitives on MTurk, and every task is either one of these or a combination of them. The primitives are: Binary (Yes/No), Scale (Likert), Categorization, Tag, Describe, Transcribe, Find, Fix, Search, and Math. Each of these primitives has its own factors that should be considered when estimating a fair price.

Based on the primitive tasks in [11], we can categorize macrotasks into batch, procedural, fine-grained, arithmetic, sorting, and transcription.

  • Batch: tasks that are presented as a whole in nature, such as giving the worker 100 images and a bag of tags and asking them to tag the images. “Tag” macrotasks can be categorized in this group.
  • Procedural: tasks that must be done step by step.
  • Fine-grained: tasks that are microtask-grained in nature, like surveys, where each question can be separated out as a microtask. “Find” and “Describe” tasks are inherently fine-grained because we can break the task into sentences.
  • Arithmetic: mathematical operations that can be separated, like adding a list of 10 items. “Math” macrotasks can be placed in this group.
  • Sorting: tasks like sorting a list, or ordering labels by degree of importance. “Tag” and “Scale” macrotasks can be further categorized here.
  • Transcription: transcriptions can easily be converted to microtasks by selecting a part of the task.
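The mapping from primitives to these categories can be captured in a small lookup table. The assignments below just restate the categorization above; the table structure itself is our own illustration:

```python
# Assumed mapping from MTurk task primitives [11] to the macrotask
# categories described above (illustrative, not from the source system).
PRIMITIVE_CATEGORY = {
    'tag': {'batch', 'sorting'},
    'scale': {'sorting'},
    'find': {'fine-grained'},
    'describe': {'fine-grained'},
    'math': {'arithmetic'},
    'transcribe': {'transcription'},
}

def categorize(primitives):
    """Return every macrotask category touched by a list of primitives."""
    categories = set()
    for p in primitives:
        categories |= PRIMITIVE_CATEGORY.get(p, set())
    return categories
```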

By distinguishing the type of macrotask, we can further break it down with the approach in [Break it Down 12] and send each microtask iteratively to our QM module.

Macro and Microtask Quality control

For quality control in the development phases of macrotasks or microtasks, we modify the proposed method (PQM) and implement an iterative version with different quality parameters at each milestone, specified by collective statistics, requester ratings, or a moderator committee. In this quality control method, after each phase or microtask we have one Quality Milestone (QM), as described in [9], and we run QM with the quality parameters (non-score criteria) associated with that milestone.

Each QM offers the option to add a new worker or change the price according to the workers' performance in the previous phase. At each round or iteration of the QM module, one of the microtasks enters QM, and the quality-wise winner of that microtask proceeds to the next round to compete in the next PQM iteration. At each PQM level we can admit any number of tenderers and send the best quality-price tenders for that microtask to the next level. The workers who can still compete on quality at each round of the QM iteration are those who have not taken the system for granted, and they are therefore expected to uphold the quality standards.
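The iterative rounds described above can be sketched as a small tournament loop. The function name and the hypothetical evaluate(tenderer, microtask) callback, assumed to return a 0-5 quality grade, are our own illustration:

```python
def run_quality_milestones(microtasks, tenderers, evaluate):
    """Iterate QM rounds over microtasks (hypothetical sketch).

    After each microtask, only the tenderers matching the round's best
    quality grade advance to the next round, mirroring the winner-moves-on
    rule described in the text.
    """
    active = list(tenderers)
    for task in microtasks:
        grades = {t: evaluate(t, task) for t in active}
        bar = max(grades.values())  # could instead be a fixed threshold
        active = [t for t in active if grades[t] >= bar]
    return active
```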

The QM flow can be as in the following picture.