Milestone 4: A New Pricing Model: Distribution of Wages, Performance-Based Ranking, and Teams (by Team 1)


Increasing reliability and quality through dynamic price distribution, team building, and performance-based ranking

In the last few years there has been growing interest in identifying key factors that increase task quality on crowdsourcing platforms. The unreliable task performance of workers forces requesters to build confidence in the results, e.g. by aggregating redundant submissions through majority voting. This increases requesters' total costs and leads to market inefficiencies.

Therefore, this paper proposes (a) smart pricing, (b) team building, and (c) performance-based ranking to reduce requesters' total costs (e.g. by reducing redundancy) and to increase reliability and overall task quality.

Previous studies indicate that a reliable reputation system affects the matching of tasks to workers (Karger et al., 2014). However, it is difficult for requesters to rate the results of a single worker because analyzing worker-specific submissions is time consuming (Karger et al., 2014).

Mason and Watts (2010) found that higher task prices correlate with the quantity of work performed; increasing prices, however, yields more output rather than higher quality.

Workers incur opportunity costs when participating in the system, so they strategically choose and execute tasks to optimize their returns relative to their investment (Singer, 2014). This sometimes leads to low performance and classic principal-agent behavior.

This paper examines the interdependencies between wages, reputation/performance, and worker quality, and suggests theoretical concepts to address these problems.

TEAM BUILDING

Increasing task quality by internal assessment of tasks within a group.

Hypothesis 1: Internal assessment of tasks will lead to higher-quality task submissions.

External assessment of tasks leads to more tasks completed per wage unit as well as higher-quality results (Dow et al., 2012). However, external assessment also increases transaction costs, because a third party (an "assessment team") would require additional wages.

Therefore this paper proposes an automatic team-building algorithm in which workers are organized into small groups. The workers within a team review and assess each member's tasks before that member submits the work to the requester. To motivate these assessment activities, the framework includes an incentive system in which a group with excellent ratings earns a bonus.

[Image: Team-feedback-cycle.png]

Image 1: Internal assessment of tasks within a team of workers

Image 1 illustrates how a team internally assesses tasks before a team member submits them to the requester.

The process:

  • Workers are clustered into small teams based on their preferences (history of similar tasks)
  • Before submitting work to the requester, the team assesses it internally through a review cycle (see the sketch below)
  • After revising the work, the worker submits it to the requester
  • Top teams earn a bonus, which is deducted from the total budget

The bonus for good performance stimulates workers' extrinsic motivation. At the same time, it saves transaction costs for requesters and increases task quality.
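
To make the cycle concrete, the following minimal Python sketch clusters workers by task-history similarity and runs the internal review loop until enough teammates approve a draft. The names (build_teams, internal_review, the review/revise callbacks) and the threshold values are illustrative assumptions, not part of Daemo's actual implementation:

    TEAM_SIZE = 3              # assumed small-group size
    APPROVAL_THRESHOLD = 0.8   # assumed fraction of teammates that must approve

    def build_teams(workers, similarity):
        """Greedily group workers with similar task histories into small teams."""
        teams, pool = [], list(workers)
        while pool:
            seed = pool.pop(0)
            # rank the remaining workers by similarity to the seed worker
            pool.sort(key=lambda w: similarity(seed, w), reverse=True)
            teams.append([seed] + pool[:TEAM_SIZE - 1])
            pool = pool[TEAM_SIZE - 1:]
        return teams

    def internal_review(team, author, draft, review, revise, max_rounds=3):
        """Cycle a draft through teammate review until enough peers approve it."""
        for _ in range(max_rounds):
            votes = [review(peer, draft) for peer in team if peer is not author]
            if votes and sum(votes) / len(votes) >= APPROVAL_THRESHOLD:
                return draft                      # approved: submit to the requester
            draft = revise(author, draft, votes)  # edit the work based on peer feedback
        return draft                              # submit best effort after max rounds

The review and revise steps are passed in as callbacks because the proposal leaves open how feedback is actually given; only the cycle structure (review, revise, resubmit) is taken from the process above.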

SMART PRICING

Distributing wages based on performance and additional worker attributes.

Hypothesis 2: Good performance leads to higher wages and higher quality.

We propose a smart pricing model that distributes wages based on worker performance.

Vickrey-Clarke-Groves (VCG) mechanisms "allocate resources optimally assuming agents (workers) bid truthfully, and enforce prices that support truthful reporting" (Singer, 2014). As mentioned above, opportunity costs can lead workers to bid strategically on tasks; in practice this is a vulnerable point of a VCG mechanism and can lead to market inefficiencies. Therefore this paper proposes a budget distribution system in which DAEMO calculates workers' wages based on the following parameters (a sketch of one possible combination follows the list):

  • Qualification (e.g. academic degrees, schooling)
  • Geography (e.g. USA, Europe, India, China) -> regional average income
  • Rank/reputation -> past performance on similar tasks (of the same cluster/category)
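
The sketch below folds these three parameters into a single wage weight per worker. All field names (education_level, regional_income_index, rank_by_category) and baseline values are hypothetical; the proposal leaves the exact weighting open:

    def wage_weight(worker, task_category):
        """Fold qualification, geography, and reputation into one wage weight.

        All field names and default values here are illustrative assumptions.
        """
        qualification = worker.get("education_level", 1.0)       # e.g. degree level
        income_index = worker.get("regional_income_index", 1.0)  # local average income
        reputation = worker.get("rank_by_category", {}).get(task_category, 1.0)
        return qualification * income_index * reputation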


[Image: Smart-pricing.png]

Image 2: Price distribution based on specific parameters and performance of workers

Image 2 illustrates the process flow of the wage distribution. Wages are distributed ex post based on workers' performance and their preferences. Workers with good performance earn higher wages than workers with lower performance, and requesters can segment workers with further filter criteria (e.g. location, education). From the requester's perspective, DAEMO estimates a price per task: the requester enters a total budget, and DAEMO distributes it across a specific number of workers, each of whom earns according to past performance. This results in smart pricing, where each worker sees a different price for the same task.
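
Under those assumptions, the ex-post budget split could look like the following sketch: the requester's total budget is divided in proportion to each worker's weight, so each worker sees a different price for the same task. The function and worker ids are hypothetical:

    def distribute_budget(total_budget, weights):
        """Split a requester's total budget across workers in proportion to weight.

        weights: dict mapping worker id -> wage weight (e.g. from wage_weight above).
        Returns a dict mapping worker id -> individual price for the same task.
        """
        total_weight = sum(weights.values())
        return {worker_id: total_budget * w / total_weight
                for worker_id, w in weights.items()}

    # Example: a 90-unit budget split across three workers with different past performance
    prices = distribute_budget(90.0, {"w1": 1.0, "w2": 1.5, "w3": 0.5})
    # -> {'w1': 30.0, 'w2': 45.0, 'w3': 15.0}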

PERFORMANCE BASED RANKING

Anonymous rating of performance goals instead of single workers.

Hypothesis 3: Anonymous rating will not lead to reputation inflation.

The dynamics of a rating system are also linked to social behavior (Dellarocas, 2003). When requesters rate workers directly, fake or overly subjective ratings can lead to rating inflation (Horton, Golden, 2015). Therefore this paper recommends a batch rating system in which a requester rates specific performance goals rather than a single worker.

Horton and Golden (2015) show that anonymous ratings are substantially more candid.

[Image: Rating of workers.png]

Image 3: Prototype form: rating workers anonymously based on performance goals

Image 3 shows a prototype form for rating workers based on objective performance goals. DAEMO tracks workers' KPIs, so the requester does not have to invest time and resources in analyzing worker submissions. The requester rates the key performance indicators on a scale of 1 to 10 (10 = best), and the rating applies to all workers whose performance falls within the range of the respective goal.
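
A sketch of that batch rating step: the requester scores a performance goal once, and the score is applied to every worker whose tracked KPIs fall inside the goal's range, so no individual worker is rated by name. The KPI names and ranges in the example are hypothetical:

    def batch_rate(workers, goal_ratings):
        """Apply anonymous goal ratings to every worker inside each goal's range.

        workers:      list of dicts with KPI values tracked by the platform,
                      e.g. {"id": "w1", "accuracy": 0.95}
        goal_ratings: list of (kpi, low, high, rating) tuples set by the requester,
                      where rating is on a 1-10 scale (10 = best).
        Returns a dict mapping worker id -> list of ratings earned.
        """
        earned = {w["id"]: [] for w in workers}
        for kpi, low, high, rating in goal_ratings:
            for w in workers:
                if low <= w.get(kpi, 0.0) <= high:   # worker meets this goal's range
                    earned[w["id"]].append(rating)
        return earned

    # Example: the goal "accuracy between 0.9 and 1.0" is rated 9; every worker
    # in that range receives the rating, without anyone being rated by name.
    ratings = batch_rate(
        [{"id": "w1", "accuracy": 0.95}, {"id": "w2", "accuracy": 0.70}],
        [("accuracy", 0.9, 1.0, 9)],
    )
    # -> {'w1': [9], 'w2': []}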


References

Balkanski, Eric; Hartline, Jason (2015): "Bayesian Budget Feasibility with Posted Pricing".

Dellarocas, Chrysanthos (2003): "The Digitization of Word of Mouth: Promise and Challenges of Online Feedback Mechanisms". In: Management Science 49 (10), pp. 1407-1424. DOI: 10.1287/mnsc.49.10.1407.17308.

Dow, Steven P.; Kulkarni, Anand; Klemmer, Scott R. et al. (2012): "Shepherding the Crowd Yields Better Work". In: CSCW.

Horton, John J.; Golden, Joseph M. (2015): "Reputation Inflation: Evidence from an Online Labor Market".

Karger, David R.; Oh, Sewoong; Shah, Devavrat (2014): "Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems". In: Operations Research 62 (1), pp. 1-24. DOI: 10.1287/opre.2013.1235.

Mason, Winter; Watts, Duncan J. (2010): "Financial incentives and the 'performance of crowds'". In: SIGKDD Explorations Newsletter 11 (2), p. 100. DOI: 10.1145/1809400.1809422.

Singer, Yaron (2014): "Budget feasible mechanism design". In: SIGecom Exchanges 12 (2), pp. 24-31. DOI: 10.1145/2692359.2692366.


Contributions:

Team 1: @seko - Sekandar Matin & @purynova - Victoria Purynova & @kamila - Kamila Mananova