Milestone 8 sanjoseSpartans Foundation2
Foundation 2: Input and output transducers
Challenge Question 1: Cost: Who pays for this? In other words, can this be done without hugely increasing the cost of crowdsourcing?
There are two ways to address this. Implementing #2 below would create the very first (fully or partially) crowd-powered platform.
1. With Additional Cost
Both input and output moderation could warrant additional costs.
Input moderation costs may be shouldered by the platform. In the case of output moderation:
a. Whoever requests the procedure can elect to pay for the cost.
b. If the client wants a 50-50 cost split with the worker, the worker can agree to this, or he/she can invoke a “client pays all” clause. This clause can be invoked when the worker deems that his/her work is of good quality and passes all requirements, so it does not need to go through moderation, yet the client insists that moderation take place.
2. Without Additional Cost
2a. The platform can generate revenue from ads (work-related or otherwise) posted across the entire platform. This revenue could help pay for the moderating team and other expenses. This can be combined with 2b below.
2b. Neither clients nor workers would need to bear additional costs if we implement the Platform Enrichment features first mentioned in System #14 of our Milestone 6 proposal, Power to the Workers: Building a Worker-centric Job Platform: http://crowdresearch.stanford.edu/w/index.php?title=Milestone_6_sanjosespartans
The Platform Enrichment features consist of the Platform Enricher position and Performance Badge recognition.
A. A worker can help build a better platform by becoming a volunteer Platform Enricher (PE). A PE is a highly skilled and impartial platform worker (some or all are the best of the best) who takes a big step further in upholding a worker-centric platform that seeks client satisfaction.
Functions: A PE is a facilitator and contributor. A PE can help improve task clarity (task details, design, and pricing), address worker concerns (forum and chat support; flagging scammers and spammers; and task mediation, among others), and even check worker submissions (moderation).
Benefits: A PE will receive a variety of benefits: learning more about various tasks related to their skills, getting a chance to work on the tasks themselves, access to advanced or better-paying tasks, and getting points that accumulate in favor of a Performance Badge. A Platform Enricher will have a distinct badge viewable in his/her profile and throughout the platform.
B. A Performance Badge (PB) will motivate workers toward excellence and, in turn, client satisfaction, and help build a high-empathy, interactive platform. A PB comprises an umbrella of features: public and private feedback from clients and task-specific performance ratings (star ratings). A PB will have different levels that are activity- and metric-based (performance, among other metrics) and offer benefits to high-performing workers (discounts, lower platform or withdrawal fees, faster payment processing, etc.). Being an Enricher is one of several criteria for receiving a PB. A PB may or may not comprise the several badges described below. A PB is another distinct badge viewable throughout the platform.
Performance Badge Levels and Criteria (Note: preliminary list; subject to change): A PB will have 10 levels, 10 being the highest and most coveted badge. Criteria may include platform membership tenure; job rating; public and private client task feedback; the quantity of completed jobs, forum posts, and dispute moderation tasks; and platform violations.
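To make the level criteria concrete, here is a minimal sketch of how the listed criteria could be combined into one of the 10 badge levels. The weights, caps, and penalty values below are purely illustrative assumptions, not final platform parameters.

```python
# Hypothetical mapping of PB criteria to one of the 10 badge levels.
# All weights, caps, and the per-violation penalty are illustrative only.

def badge_level(tenure_months, avg_rating, completed_jobs, violations):
    """Combine criteria into a 0-100 score, then bucket into levels 1-10."""
    score = (
        min(tenure_months, 24) / 24 * 20        # tenure, capped at 2 years
        + avg_rating / 5 * 40                   # job rating out of 5 stars
        + min(completed_jobs, 200) / 200 * 40   # completed-job volume, capped
    )
    score -= 10 * violations                    # each platform violation costs 10 points
    score = max(0.0, min(100.0, score))         # clamp to the 0-100 range
    return max(1, min(10, 1 + int(score // 10)))  # bucket into levels 1-10
```

A real scoring function would also fold in public/private client feedback and forum activity; the point here is only that each criterion can be normalized, weighted, and bucketed into a level.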
Dispute Moderator (DM) badge: A Dispute Moderator, a kind of PE, is fair and transparent with his/her decisions. A DM is fair to both clients and workers and upholds platform policies 100%. Disputes are moderated privately and blindly (the moderator does not know who the client and workers are). This new platform would be renowned for its worker-centric design, with a very strong focus on dispute resolution and worker performance (which translates to higher client satisfaction).
Poor-performing workers (PPW) badge: workers with a low performance rating (for example, 3.00–3.50 or below; the highest rating is 5). This is a private badge visible only to the worker him/herself and relevant clients (i.e., clients whose jobs the PPW applied for). A PPW may be penalized in terms of fees and the quantity of available and eligible tasks. Assuming there is a starting platform fee of 10% per project, PPWs will be charged 11–12% or more per project, and/or earn less (clients can pay them less than the actual project price). This is another motivation for all workers to perform well.
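The fee penalty above can be sketched as a simple tiered schedule. This sketch assumes the 10% baseline fee and the example rating thresholds from the text; the exact tiers would be set by the platform.

```python
# Illustrative tiered fee schedule for poor-performing workers (PPW),
# assuming a 10% baseline platform fee and the example rating thresholds.

BASE_FEE = 0.10  # starting platform fee per project

def platform_fee(rating):
    """Return the per-project fee rate for a worker's rating (max 5)."""
    if rating <= 3.00:
        return 0.12   # deeper penalty tier for the lowest ratings
    if rating <= 3.50:
        return 0.11   # mild penalty tier for borderline PPWs
    return BASE_FEE   # workers above the PPW threshold pay the base fee

def worker_payout(project_price, rating):
    """Worker's take-home amount after the platform fee is deducted."""
    return project_price * (1 - platform_fee(rating))
```

Keeping the penalty in a single function makes it easy to adjust the tiers later without touching payout logic elsewhere.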
Platform Enrichers and Performance Badge awardees can be included in the platform’s regularly-updated list of top workers and top clients. This list would be publicly viewable, and give workers ideas about good worker and client profiles (individual profiles can be set to private, public, entire platform-only, workers-only, clients-only, or specific client/worker-only). The platform could host regular announcements or recognition ceremonies for these top performers.
Human+Algo-based reputation system: I propose PEAK (PErformance Alpha Kaizen), a comprehensive human and algo-based reputation system that is tied to the overall improvement and sustainability of the ultimate jobs platform.
PEAK will comprise badges for high-performing and poor-performing workers and clients. PEAK will be comprehensive and promote platform sustainability, since its criteria will include job ratings (star ratings: public/human and private/algo feedback), forum posting, and dispute moderation, among possibly others. This is for workers. PEAK will likely have a different set of criteria for clients (focused on the tasks they offer), such as accuracy/consistency, communication, skills, ethics, and cooperation.
PEAK is the formal term for the Performance Badge (PB) mentioned above, and is part of the Platform Enrichment features proposed above.
Advantages of PEAK:
1. Volunteers and “reduced pay/expenses” will be inherent to the platform. Clients always prefer to reduce cost when dealing with workers, and at the same time, workers want to earn more, not less.
2. It is a combination of human and algo feedback. It could be robust enough to prevent “reputation inflation”.
3. It inherently helps promote improvement and sustainability for the platform.
4. It is flexible. The algorithm can accommodate revisions or the addition of more criteria or features.
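As a minimal sketch of advantages 2 and 4, PEAK's human and algo signals could be blended through a weighted sum with adjustable weights. The criteria names and weights below are assumptions for illustration, not a fixed PEAK specification.

```python
# Sketch of PEAK as a weighted blend of human feedback (star ratings) and
# algorithmic metrics. Criteria names and weights are illustrative only.

PEAK_WEIGHTS = {                 # flexible: criteria can be added or re-weighted
    "job_rating": 0.5,           # human: public/private star ratings, normalized to 0-1
    "forum_posting": 0.2,        # algo: normalized forum activity (0-1)
    "dispute_moderation": 0.3,   # algo: normalized moderation activity (0-1)
}

def peak_score(metrics):
    """Combine normalized criteria (each 0-1) into a single 0-1 score."""
    total = sum(PEAK_WEIGHTS.values())
    return sum(PEAK_WEIGHTS[k] * metrics.get(k, 0.0) for k in PEAK_WEIGHTS) / total

# Example: peak_score({"job_rating": 4.6 / 5, "forum_posting": 0.8,
#                      "dispute_moderation": 0.5})
```

Because the weights live in one table, adding a client-side criterion set or re-tuning against reputation inflation only requires editing the table, which is the flexibility advantage #4 describes.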
Challenge Question 2: Speed: Is it possible to do this quickly enough to give near-immediate feedback to requesters? Like, 2–4 minutes?
A new platform would naturally start with a small pool of clients and workers (including moderators). But as the platform gains popularity, it is definitely possible to improve the speed of the feedback process.
It is important to note that not all tasks are equal:
1. Some tasks are simple enough that very little or no moderation would be required.
2. Many tasks would require moderation to make sure the majority of task details and requirements are laid out clearly.
3. Some tasks are difficult to moderate, since their outputs are subjective (e.g., graphic design).
4. Speed vs efficiency: Clients may elect to opt out of moderation with a caveat and explicit understanding that work output in this case may not be efficient enough, money- and time-wise. Moderation is meant to improve the quality of task design and therefore the task output.
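The four cases above amount to a routing rule: each incoming task maps to a moderation tier. The sketch below assumes hypothetical task fields (`client_opted_out`, `is_simple`, `is_subjective`) and tier names; a real platform would derive these from richer task metadata.

```python
# Hypothetical routing of a submitted task to a moderation tier,
# following the four cases above. Field and tier names are placeholders.

def moderation_tier(task):
    """Map a task dict to 'none', 'standard', or 'expert' moderation."""
    if task.get("client_opted_out"):
        return "none"      # case 4: client accepted the speed-vs-quality caveat
    if task.get("is_simple"):
        return "none"      # case 1: simple tasks need little or no moderation
    if task.get("is_subjective"):
        return "expert"    # case 3: e.g. graphic design needs human judgment
    return "standard"      # case 2: check that details and requirements are clear
```

Routing simple and opted-out tasks past moderation entirely is what keeps median feedback time low as the platform grows.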
Challenge Question 3: From Edwin: What happens when I have a task that I know is hard but I want workers to just try their best and submit? I’m OK with it being subjective, but the panel would just reject my task, which would have been frustrating.
As mentioned by JSilver in the Slack channel, if a client is certain that he is OK with being subjective, he should tick an “I willingly want to opt out of moderation” checkbox, effectively waiving his right to high-quality work. Such an option would trigger certain mechanisms wherein the client cannot reject work or avoid payment. Slack channel member acossette provided an excellent insight: adding this “moderation opt-out” option as a “searchable variable” that workers can look for.
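A minimal sketch of acossette's “searchable variable” idea: the opt-out checkbox becomes a filterable field in task search. The field and function names here are assumptions for illustration.

```python
# Sketch: expose the moderation opt-out flag as a filterable field in task
# search, so workers can find (or avoid) opted-out tasks. Names are assumed.

tasks = [
    {"id": 1, "title": "Label images", "moderation_opt_out": False},
    {"id": 2, "title": "Best-effort translation", "moderation_opt_out": True},
]

def search_tasks(tasks, moderation_opt_out=None):
    """Return tasks, optionally filtered by the moderation opt-out flag."""
    if moderation_opt_out is None:
        return list(tasks)   # no filter: return everything
    return [t for t in tasks if t["moderation_opt_out"] == moderation_opt_out]

# Workers who want tasks that cannot be rejected can filter directly:
# search_tasks(tasks, moderation_opt_out=True)
```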
Challenge Question 4: From Edwin: Could this help deal with people feeling bad when rejecting work? Maybe we need a new metaphor, like revision.
Revisions of task output should be allowed and encouraged. Notably, workers and clients on oDesk already employ task revisions and regular feedback. Clients are encouraged to provide regular feedback (feedback at every milestone).
Workers should not feel bad about rejected work. Workers should always aim for high-quality output and client satisfaction (subject to fair compensation).