Winter Milestone 3 (seko-kamilamananova): Further thoughts on Dynamic Pricing, Clustering, Reputation
Clustering of Tasks and Workers, Internal and External Rating and Dynamic Pricing Strategy
Image 1: Overview: clustering of tasks, internal and external rating factors, pricing strategy based on rating, scoring, and task-specific performance

The process flow (Image 1) describes a new approach to rating workers, distributing wages dynamically, and clustering tasks.
Data is king
The more data we collect, the more we can analyze and the more improvements we can derive. Therefore, this paper suggests gathering more relevant data about workers and their activities. The following map describes some of the relevant data sources.
The collected data from all five areas forms the basis for an internal rating (1.2). The data from the internal rating can also be used for further segmentation of workers:
Country, device, gender, age, etc. A requester could pay higher wages to workers who fulfill specific criteria: fast workers, good reputation (recommendations by other requesters?), a low rejection rate, etc. (see 1.3 on the general map).
Cluster it, baby!
Before publishing a task, the requester should go through a multi-step process to tag the job as precisely as possible.
An example of a (sub-)cluster could be “Audio Transcription of Japanese Tapes”. This would be a sub-cluster of the primary cluster “Audio Transcriptions”. Clusters are thus built from sub-clusters to achieve higher accuracy in the automatic matching.

Come on, rate me! Rate me good!

During our conversations in Slack and during the presentations by @michaelbernstein, we found that subjective rating can lead to rating inflation and other problems. Therefore, an objective rating system based purely on pre-defined KPI patterns could prevent most of these problems. A requester would simply enter parameters or goals for a set of pre-defined KPIs which a worker has to reach to earn a specific grade or number of points (0-10, where 10 is best).
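The primary-cluster/sub-cluster idea above can be sketched as a small tree structure. This is a minimal illustration, not Daemo's actual data model; the `Cluster` class and `path()` helper are assumptions made for this example.

```python
# Sketch of hierarchical task clusters; the class and its fields are
# illustrative assumptions, not part of any existing Daemo code.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    parent: "Cluster | None" = None
    children: list = field(default_factory=list)

    def path(self):
        """Return the full path from the primary cluster down to this one."""
        node, parts = self, []
        while node:
            parts.append(node.name)
            node = node.parent
        return " > ".join(reversed(parts))

# The "Audio Transcription" example from the text: a primary cluster
# with one more specific sub-cluster underneath it.
audio = Cluster("Audio Transcriptions")
japanese = Cluster("Audio Transcription of Japanese Tapes", parent=audio)
audio.children.append(japanese)

print(japanese.path())  # Audio Transcriptions > Audio Transcription of Japanese Tapes
```

Matching can then be run against the most specific sub-cluster first and fall back to the parent cluster when too few specialized workers are available.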
Image 5: Rating process by a requester: Objective semi automatic rating based on KPIs and a batch process
Image 5 describes the rating process for workers. Instead of rating each worker one by one, batch rating lets the requester define performance goals which workers have to reach to earn a specific rating/number of points. The requester determines the minimum parameters for each rating level. A range from 0 to 10 is used for grading.
The table can be continued to cover all specific performance goals for all possible grading points (0-10 points).
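The batch-rating idea can be sketched as a lookup from KPI thresholds to grades. The KPI names and threshold values below are invented for illustration; the text only specifies that a requester sets minimum parameters per grade on a 0-10 scale.

```python
# Minimal sketch of batch rating: the requester defines minimum KPI values
# per grade; each worker receives the highest grade whose thresholds they
# meet. All KPI names and numbers here are hypothetical.

def batch_grade(worker_kpis, grade_thresholds):
    """Return the highest grade (0-10) whose minimum KPIs the worker meets."""
    best = 0
    for grade, minimums in grade_thresholds.items():
        if all(worker_kpis.get(kpi, 0) >= value for kpi, value in minimums.items()):
            best = max(best, grade)
    return best

# Hypothetical thresholds: accuracy in percent, throughput in tasks per hour
thresholds = {
    10: {"accuracy": 98, "tasks_per_hour": 30},
    8:  {"accuracy": 95, "tasks_per_hour": 20},
    5:  {"accuracy": 90, "tasks_per_hour": 10},
}

print(batch_grade({"accuracy": 96, "tasks_per_hour": 25}, thresholds))  # 8
```

Because the grade follows purely from measured KPIs, two requesters with the same thresholds would rate the same worker identically, which is the point of removing subjective judgment.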
The above-mentioned performance indicators form the basis of a worker's reputation. The KPIs answer the question: “How qualified is/was the worker?”
Additional segmentation criteria help the requester a) to target a specific group of workers and b) to “cherry-pick” the best workers (for a higher price).
Money, money – did I mention money?!
Most workers on crowdsourcing platforms are extrinsically motivated. It is therefore important a) to avoid price-dumping wars and b) to distribute wages based on the individual performance of each worker. The following map addresses the distribution of wages. A worker can earn up to 100% of the average wage per task if his TMP (Total Matching Points) is high enough. Even without a high TMP, he can still get 100% of the wage per task through outstanding performance, i.e. by being within the best 10% (or x%) of the workers on the job.
Image 3: Calculation of TMP (Total Matching Points)
The average rating per cluster is weighted by 0.7; additional segmentation criteria and “premium segmentation criteria” are weighted by 0.3. Together, both factors form the total weighted rating of a worker (the “Total Matching Points”, or TMP), which has a direct impact on the wage per task:

TMP = 0.7 × (average rating of a worker's performance within a cluster, on a 0-10 scale) + 0.3 × (rating of segmentation criteria: age, location, gender, past overall performance, etc.)

The result is a number between 0 and 10.
Example (impact of TMP on wages): A requester pays $1 per task. Daemo matches workers with tasks based on the requester's cluster ratings and segmentation criteria. Daemo identifies a worker who has, e.g., an average rating of 7.3 in cluster “x” and a rating of 10 for the specific segmentation criteria (country, age, rejection rate, etc.).
Result: the worker earns $1 per task because his TMP of 0.7 × 7.3 + 0.3 × 10 = 8.11 is within the 8-10 point range.
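The TMP formula and its effect on the wage can be written out as a short sketch. The 0.7/0.3 weights and the 7.3/10 example come from the text; the lower wage tiers (70% below 8 points, 50% below 5) are assumptions, since the text only fixes the 8-10 range at 100% and mentions a 70% example.

```python
# Sketch of the TMP calculation and an ASSUMED tier mapping to wages.
# Only the 8-10 -> 100% tier and the 70% figure appear in the text;
# the tier boundaries below are illustrative.

def total_matching_points(cluster_rating, segmentation_rating):
    """TMP = 0.7 * average cluster rating + 0.3 * segmentation rating (both 0-10)."""
    return 0.7 * cluster_rating + 0.3 * segmentation_rating

def wage_share(tmp):
    """Assumed mapping from TMP to the share of the full task price."""
    if tmp >= 8:
        return 1.0   # 8-10 points: full wage, as in the example
    if tmp >= 5:
        return 0.7   # matches the 70% example for lower-TMP workers
    return 0.5       # hypothetical floor tier

tmp = total_matching_points(7.3, 10)      # 0.7 * 7.3 + 0.3 * 10 = 8.11
print(round(tmp, 2), wage_share(tmp))     # 8.11 1.0
```

With a $1 task price, this worker would be paid the full $1, reproducing the example above.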
An additional option to stimulate extrinsic motivation could be a bonus system. The bonus depends entirely on the worker's performance on the current job: if a worker performs well enough to be among the best x% (e.g. 10%) of all workers, he is paid 100% of the possible wage per task/HIT even with a low historic TMP (based on his past work in the cluster and the segmentation criteria).
This stimulates his motivation to deliver good work in order to receive higher payments. If he keeps delivering low quality, his TMP will drop further, and in the long term he will receive less work because his matching to clusters and tasks has deteriorated.
- The worker has a low TMP, so he cannot earn 100% of the wages for task X
- E.g. he would earn just 70% of the wages
If the requester rates the worker's performance as being within the top 10% (or 20%, or x%) of the best workers, then he should earn 100% of the wage per task instead of only 70%, because of his outstanding performance.
So even workers with a lower TMP have the chance to earn 100% of the wages if they deliver high-quality results.
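The bonus rule described above can be sketched as an override on the TMP-based wage share. The top-10% cutoff logic here is an assumption about how "best x% of workers" would be computed; the text does not specify a method.

```python
# Sketch of the bonus override: a worker in the top x% of performers on the
# current job is paid the full wage even if his TMP-based share is lower.
# The percentile computation is an assumed implementation detail.

def final_wage_share(base_share, worker_score, all_scores, top_fraction=0.10):
    """Return 1.0 (full wage) if worker_score is within the top fraction
    of all_scores on this job; otherwise keep the TMP-based base_share."""
    ranked = sorted(all_scores, reverse=True)
    cutoff_index = max(1, int(len(ranked) * top_fraction))
    cutoff = ranked[cutoff_index - 1]
    return 1.0 if worker_score >= cutoff else base_share

# Hypothetical performance scores of ten workers on the same job
scores = [9.5, 9.0, 8.0, 7.0, 6.5, 6.0, 5.5, 5.0, 4.0, 3.0]

print(final_wage_share(0.7, 9.5, scores))  # in the top 10% -> 1.0
print(final_wage_share(0.7, 6.0, scores))  # outside the top 10% -> 0.7
```

So a low-TMP worker who is normally capped at 70% of the wage still takes home the full amount whenever his current-job performance lands in the top decile.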
Improving quality by steepening workers' learning curves through clustering
Clustering workers helps increase their specialization by accelerating learning: workers learn faster when they are given tasks similar to work they have already done. This raises overall quality and also has an impact on wages.
Image 6: Clustering of workers