Jsilver Reputation ideas

From crowdresearch
Revision as of 04:10, 13 August 2015 by Jsilver (Talk | contribs) (PEAK and Z index)


Reputation Ideas by Jsilver

I will post here a collection of my "wish-list" reputation ideas (among other ideas), many of which have been in my mind and notes since January 2015 and even earlier, based mostly on my experience as an oDesk worker, team manager, and client since 2012. I have shared many of them countless times on Slack (publicly and privately), on Meteor, and on the wikis. This repository is a work in progress.

Acronyms used:

RS = reputation system

End-of-task Itemized Rating (Applicable to worker and client)

A client would rate a worker on the skills used in a particular task and on other criteria (domains: quality of work; communication/responsiveness; cooperation and work ethic/professionalism; deadline/turnaround; efficiency, e.g., cost and time; and accuracy/consistency).

This RS is not perfect, but it would reflect a rating for each skill used in a task or project, rather than the typical "limited-view" 5-star rating. This is essential so that future clients would not bet blindly on such a worker; combined with interviewing the worker and checking the worker's job history, it would give the client a very good idea of the worker's skills and experience.

On the other hand, a worker could rate a client on the six domains above plus one more criterion: knowledge of the task. (The criteria mentioned here may not be accurate or final.) As the platform evolves, it would accumulate a substantial collection of rating data from workers and clients. By then, each worker would have worked with many clients and vice versa. If necessary, the platform could put more weight on ratings given by skilled clients/workers -- those who rate workers/clients more accurately and honestly than others.
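The itemized rating idea above can be sketched in code. This is a minimal illustration only: the domain names and the rater-skill weighting scheme are my assumptions, not a specification from this page.

```python
# Hypothetical sketch of end-of-task itemized rating aggregation.
# Domain names and the rater-skill weights are illustrative assumptions.

DOMAINS = [
    "quality_of_work", "communication", "cooperation",
    "deadline", "efficiency", "accuracy",
]

def aggregate_itemized_ratings(ratings):
    """ratings: list of (rater_skill_weight, {domain: 1-5 score}) pairs.
    Returns a per-domain weighted average, so skilled raters count more."""
    totals = {d: 0.0 for d in DOMAINS}
    weights = {d: 0.0 for d in DOMAINS}
    for weight, scores in ratings:
        for domain, score in scores.items():
            totals[domain] += weight * score
            weights[domain] += weight
    return {d: totals[d] / weights[d] for d in DOMAINS if weights[d] > 0}

ratings = [
    (2.0, {"quality_of_work": 5, "communication": 4}),  # skilled rater
    (1.0, {"quality_of_work": 3, "communication": 4}),  # less skilled rater
]
result = aggregate_itemized_ratings(ratings)
# quality_of_work = (2*5 + 1*3) / 3 ≈ 4.33, communication = 4.0
```

A future client would then see one score per domain rather than a single 5-star number.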

Implicit signals (Applicable to worker and client)

Rating is done during the job-application evaluation phase. This is a great way of providing reputation coverage across the worker (and client) population, and I believe it could be implemented much more easily than other reputation systems. See [1]

Tiered service fees (Maybe applicable to workers only, or clients too)

Meteor link: Worker performance (based on rating) would be tied to a varying service fee.

Proposed worker tiered service fee matrix (proposal may not be final): 5% to 15%

Proposed worker tiered service fee matrix (proposal may not be final or implemented): 1% to 5%

Top-performing workers would be eligible for decreasing service fees, down to as low as 5% or 6%. Poor-performing workers, on the other hand, would be subject to increasing fees of up to 12% to 15% before being banned from the platform. The same policy would apply to clients.
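The tiered-fee mapping could look something like the sketch below. The rating thresholds are purely illustrative; only the 5%-15% range comes from the proposal above.

```python
# Hypothetical tiered service fee: better-rated workers pay lower fees
# (down to 5%), poorly rated ones pay more (up to 15%) before a ban.
# The rating thresholds here are assumptions, not part of the proposal.

def service_fee(avg_rating):
    """Map an average rating (1.0-5.0) to a platform fee percentage."""
    if avg_rating >= 4.5:
        return 5    # top performers
    if avg_rating >= 4.0:
        return 8
    if avg_rating >= 3.0:
        return 10
    if avg_rating >= 2.0:
        return 12
    return 15       # candidate for removal from the platform
```

For example, a worker averaging 4.8 stars would pay a 5% fee, while one averaging 1.5 stars would pay 15%.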

Access to more tasks and better-paying tasks

A worker's access to more tasks and to better-paying tasks would be subject to his/her performance (rating). We could also flip this and apply it to clients: a client's access to the top workers could be contingent on workers' historical ratings of that client.

PEAK and Z index

PEAK: http://crowdresearch.stanford.edu/w/index.php?title=Milestone_8_sanjoseSpartans_Foundation2

Z index: (public and private rating + Task category A rating + Task category B rating + ...) / 100? (calculation unsure)
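Since the Z index calculation above is explicitly marked as unsure, the sketch below is just one possible reading: average the public/private rating together with each task-category rating. Normalizing by the number of components (rather than the fixed 100 in the draft formula) is my assumption.

```python
# One possible (assumed) reading of the Z index: a simple mean of the
# public/private rating and the per-task-category ratings. The draft
# formula divides by 100; dividing by the component count is a guess.

def z_index(public_private_rating, category_ratings):
    components = [public_private_rating] + list(category_ratings)
    return sum(components) / len(components)

z = z_index(80, [90, 70])  # → 80.0
```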

Summer Milestone 11 Submissions

Complete (End-to-End) Reputation Cycle

This is a combination of Implicit Signals and End-of-task Itemized Rating; please read the details above. This reputation cycle would yield reputation from both sides of the task (at the start and at the end) and help increase reputation coverage regardless of the number of workers hired (non-hired workers still receive reputation). To incentivize clients to create implicit signals, the platform would offer them lower fees based on implicit-signal quantity (this idea is subject to refinement).
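The end-to-end cycle can be sketched as follows: a worker accrues reputation both when an application is evaluated (implicit signal) and when a finished task is rated. The class, field names, and the simple running-average update are illustrative assumptions.

```python
# Minimal sketch of the complete (end-to-end) reputation cycle:
# reputation flows in at application time and again at task completion.
# The running-average update rule is an assumption for illustration.

from dataclasses import dataclass, field

@dataclass
class WorkerReputation:
    scores: list = field(default_factory=list)

    def add_implicit_signal(self, score):
        """Signal from the client's evaluation of a job application;
        non-hired applicants still receive this."""
        self.scores.append(score)

    def add_end_of_task_rating(self, itemized):
        """End-of-task itemized rating: fold the per-domain scores
        into one number."""
        self.scores.append(sum(itemized.values()) / len(itemized))

    @property
    def reputation(self):
        return sum(self.scores) / len(self.scores) if self.scores else None

rep = WorkerReputation()
rep.add_implicit_signal(4.0)                              # application phase
rep.add_end_of_task_rating({"quality": 5, "deadline": 3})  # task completion
# rep.reputation → 4.0
```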

Peer Feedback/Rating

Workers would assess their peers' work and give them reputation. To incentivize workers, the platform would offer them lower fees based on feedback quantity (this idea is subject to refinement). The disadvantage is that not all work output can be assessed (e.g., output covered by non-disclosure agreements).


Spring Milestone Submissions [2] [3] [4]