WinterMilestone 3 Team1 EU ReputationIdea: Gamified Narrative

 



== DARKHORSE: Gamified Narrative ==

Base Narrative: think of requesters and workers as two big brothers, or two sides of an island, or two parts that need each other, and set up group dynamics that may trigger internal group monitoring (workers keep other workers in check and requesters keep other requesters in check). If jobs have no intrinsic motivation (they are done only for the money), a narrative that links job quality to a story (as in WoW) will improve job quality, the worker/requester relationship and the overall power distribution.


== Classes ==

Borrowing a bit of game design, you could build a reputation system with both levels and perhaps classes. In terms of levelling, an initial level can be determined after your first few tasks (perhaps 10-20) so that early inexperience with the platform and its tasks is ruled out. Starting from that generated level (say, 52), you move up or down according to an algorithm that combines an automated rating (probably based on time) with the ratings you receive from requesters/workers, as sketched below. This rating could be used to place you in categories that workers/requesters can filter by when looking for jobs or posting tasks, and could support different pay rates based on the skill required, matching workers to requesters' needs (saving them either time or budget). In terms of classes, when setting up your profile (again, as either worker or requester), you could opt in (in exchange for some benefits) to a class that either helps mediate the dispute-resolution process between workers and requesters, helps requesters design their tasks better, or, on the flip side, helps workers who have previously been reported as failing on certain tasks with whatever they may need (provided the issue is not one of ability and the helper shares a similar range of interests).
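A minimal sketch of what such a level update could look like, assuming the automated rating and the peer ratings are combined as a weighted score; the weights, thresholds and function name are illustrative assumptions, not part of the proposal:

<syntaxhighlight lang="python">
def update_level(current_level, auto_score, peer_ratings,
                 auto_weight=0.4, peer_weight=0.6, step=1):
    """Nudge a user's level up or down after a task.

    auto_score   -- automated rating in [0, 1], e.g. derived from task time
    peer_ratings -- list of ratings in [0, 1] from requesters/workers
    """
    peer_score = sum(peer_ratings) / len(peer_ratings) if peer_ratings else 0.5
    combined = auto_weight * auto_score + peer_weight * peer_score

    # Above 0.6 the user gains a level, below 0.4 they lose one,
    # otherwise the level stays put (thresholds are placeholders).
    if combined > 0.6:
        return current_level + step
    if combined < 0.4:
        return max(current_level - step, 0)
    return current_level


# Example: a level-52 worker with a good automated score and mixed peer ratings
print(update_level(52, auto_score=0.9, peer_ratings=[0.8, 0.5, 0.7]))  # 53
</syntaxhighlight>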

Workers' and requesters' reputation is linked directly to each class or area, rather than being an aggregate over all tasks with various skills bundled into one score. A worker may produce fantastic results on logo work but only so-so results on Python coding, for example. To ensure skill type and level are identified appropriately when onboarding new workers, they self-certify their level for each class in which they want to accept work. Users who choose self-certification are given a test microtask to complete for free.
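A rough sketch of how per-class reputation could be stored, assuming one score per (user, class) pair instead of a single aggregate; the class names and method names are illustrative:

<syntaxhighlight lang="python">
from collections import defaultdict

class ClassReputation:
    """Tracks a separate reputation score per skill class for each user."""

    def __init__(self):
        # {user_id: {class_name: [ratings]}}
        self._ratings = defaultdict(lambda: defaultdict(list))

    def add_rating(self, user_id, skill_class, rating):
        self._ratings[user_id][skill_class].append(rating)

    def score(self, user_id, skill_class):
        ratings = self._ratings[user_id][skill_class]
        return sum(ratings) / len(ratings) if ratings else None


rep = ClassReputation()
rep.add_rating("worker_42", "logo_design", 10)
rep.add_rating("worker_42", "python_coding", 4)
print(rep.score("worker_42", "logo_design"))    # 10.0 - strong in one class
print(rep.score("worker_42", "python_coding"))  # 4.0  - weaker in another
</syntaxhighlight>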

== Feedback: The Mighty Purgatory ==

If transaction audits were brought to every task-completion screen, both requesters and workers could easily rate each other and suggest improvements. This can be done through a Likert scale from -2 to +2 (e.g. -2 for gruesomely dissatisfied, +2 for extremely pleased). This way the system is notified quickly and acts fast on "bad behaviours" (e.g. 5 gruesomely dissatisfied users would produce an accumulated rating of -10, which would also trigger the system). Once a user reaches the limit of reports, you either congratulate them if they earned 10 points or place them in The Purgatory, as sketched below.

For requesters, if they are put on probation because of poor-quality tasks or badly defined terms for how the work should be carried out or delivered, probation takes them to the sandbox, where they must read up on "posting guidelines". Next, they post 10 types of HITs, twice (or something that takes 10 minutes of their time, since they are there to work and, more importantly, to give work), which will be verified by the LB (or see the reputation idea for classes-mediators); if the tasks are good, they are posted automatically. Afterwards, they are assessed in a similar fashion to workers to make sure the desired behaviour has been consolidated (see below).

For workers on probation, the user in question will be assessed (for 10-20 tasks) on a separate rating system alongside the original one at the end of every task. This needs to be designed to focus on the bad behaviour, asking requesters to rate whether or not the worker has improved on whatever they got wrong in the past.
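A minimal sketch of how the -2..+2 audit ratings could accumulate and trigger either a congratulation or The Purgatory; the ±10 threshold follows the example above, while the function and variable names are illustrative assumptions:

<syntaxhighlight lang="python">
REPORT_LIMIT = 10  # accumulated score at which the system reacts (from the +/-10 example)

def record_audit(score_by_user, user_id, rating):
    """Add one post-task Likert rating (-2..+2) and decide what, if anything, to do."""
    if not -2 <= rating <= 2:
        raise ValueError("rating must be on the -2..+2 Likert scale")

    score_by_user[user_id] = score_by_user.get(user_id, 0) + rating

    if score_by_user[user_id] >= REPORT_LIMIT:
        return "congratulate"          # consistently positive feedback
    if score_by_user[user_id] <= -REPORT_LIMIT:
        return "send_to_purgatory"     # e.g. five "gruesomely dissatisfied" ratings
    return "no_action"


scores = {}
for _ in range(5):                      # five -2 ratings accumulate to -10
    outcome = record_audit(scores, "requester_7", -2)
print(outcome)                          # send_to_purgatory
</syntaxhighlight>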

Any Category or Guild

== Character Stats in the Game: KPIs ==

To give each character stats, in other words performance skills, reputation is based on key performance indicators (KPIs) such as rejection rate, duration per task, etc. Every class has its Experts (people with 10 points; see the Purgatory). Daemo (or the game) should also auto-evaluate other KPIs from standard data, for example:

Give all workers 10 points (grading 0-10, where 10 is the highest positive rating) who have:
* Rejection rate < 10%
* Task duration < 10 minutes
* Error rate < 5%
* etc.

Give all workers 9 points who have:
* Rejection rate > 10% and < 15%
* Task duration > 1 minute and < 3 minutes
* Error rate > 5% and < 10%
* etc.

Give all workers XYZ points who have:
* Rejection rate > x and < y
* Task duration > x and < y minutes
* Error rate > x and < y
* etc.
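One possible way to encode these threshold bands as data and auto-assign points from them; the two bands below mirror the worked examples above, the remaining bands (the "XYZ" cases) would be filled in the same way, and the field and function names are illustrative:

<syntaxhighlight lang="python">
# Each band: (points awarded, max rejection rate, max task duration in minutes, max error rate)
# Values follow the examples above; intermediate bands would be added the same way.
KPI_BANDS = [
    (10, 0.10, 10, 0.05),   # top band: rejection < 10%, duration < 10 min, errors < 5%
    (9,  0.15, 3, 0.10),    # next band: rejection < 15%, duration < 3 min, errors < 10%
]

def kpi_points(rejection_rate, task_duration_min, error_rate, default=0):
    """Return the first (highest) band whose thresholds the worker satisfies."""
    for points, max_rejection, max_duration, max_error in KPI_BANDS:
        if (rejection_rate < max_rejection
                and task_duration_min < max_duration
                and error_rate < max_error):
            return points
    return default  # worker falls outside all defined bands


print(kpi_points(rejection_rate=0.08, task_duration_min=6, error_rate=0.03))  # 10
print(kpi_points(rejection_rate=0.12, task_duration_min=2, error_rate=0.07))  # 9
</syntaxhighlight>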




== Semi-automatic reputation system with KPIs ==