Milestone 3 Mustang
- Trust will come from greater transparency.
- It’s easier to regulate power once both parties know that the other won’t misuse it, which again points towards the need for mutual trust.
- Operators of new/future crowdsourcing platforms must make sure they are actively involved, in some capacity, as mediators between Rs and Ws – even if that means appointing people just to act as a second layer between Rs, Ws and the platform itself.
Please note: For brevity's sake, Requester/Requesters and Worker/Workers are henceforth referred to as R/Rs and W/Ws, respectively.
Building trust between Ws and Rs goes a long way towards making sure that Ws and Rs continue to collaborate and ultimately come out of their engagements satisfied.
- Allow tagging of HITs by skill name/job category so that Ws can easily find relevant HITs based on skill level and HIT requirements. This will help increase trust on both ends – Ws can be assured that they will find relevant HITs, and Rs can be assured that Ws with the right skills and adequate experience are attempting their HITs.
- At-a-glance profile card for Ws – displays their username, skillset, experience and maybe a recent testimonial.
- Brownie point system/Karma/Kudos system for Ws. This is displayed against a Worker’s username everywhere on the platform, and signifies experience and builds trust. Hopefully, this feature will encourage Ws to help other users of the platform in order to build a solid reputation, as with users on various forums.
- Make a minimum payment (pro-rated based on time spent/effort made) mandatory even if a HIT submission is rejected by a Requester.
- Introduce a benchmark for calculating compensation based on a combination of time invested and quality of submission, and adjust it according to current payment/compensation trends in the marketplace. Use this benchmark to suggest pricing to first-time Rs and to Rs who have priced their task too low. If the compensation is still not fair, lower the HIT’s ranking in the HIT discovery process.
- If a W has a history of sub-par past submissions, push their submissions further down the queue of responses visible to the R.
- Zero-tolerance policy for cheaters. First-time offenders should have their access revoked for a certain amount of time, and repeat offenders should be banned permanently. Alternatively, first-time offenders could have their HIT visibility reduced for a while, depending on the seriousness of the transgression.
- Employ a middle layer – perhaps other Ws with the necessary experience and skills – to make sure answer quality is high/up to standard.
- Give Ws the ability to pause/abandon a task and ask the R for more information, and then unpause/restart the task.
- Reward Ws for reporting bugs/snags – almost like a mini bug bounty program, but for each HIT.
- For HITs that require a large time commitment, suggest that Rs make a down payment to the Ws involved, to show their commitment to pay once the task has been completed.
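A minimal sketch of how the pro-rated minimum payment and pricing-benchmark ideas above might fit together. The hourly rate, quality weight, and function names here are illustrative assumptions, not part of the proposal.

```python
# Hypothetical platform floor, in USD/hour (illustrative assumption).
MIN_HOURLY_RATE = 6.00

def minimum_payment(hours_worked: float) -> float:
    """Pro-rated minimum owed to a W even if the R rejects the submission."""
    return round(hours_worked * MIN_HOURLY_RATE, 2)

def benchmark_price(est_hours: float, quality_weight: float,
                    market_rate: float) -> float:
    """Suggested HIT price: time invested, scaled by expected quality and
    anchored to the current market hourly rate."""
    return round(est_hours * market_rate * quality_weight, 2)

def is_underpriced(offered: float, est_hours: float,
                   market_rate: float, tolerance: float = 0.8) -> bool:
    """Flag HITs priced well below the benchmark so their ranking can be
    lowered during HIT discovery."""
    return offered < tolerance * benchmark_price(est_hours, 1.0, market_rate)
```

For example, a rejected half-hour submission would still earn `minimum_payment(0.5)`, i.e. 3.00 at the assumed floor rate, and a two-hour task offered at 10.00 against a 10.00/hour market rate would be flagged as underpriced.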
Regulation of power is important – ideally, Rs and Ws are equal stakeholders who keep the platform running and growing. An imbalance of power can cause discord between Rs and Ws, which in turn can cause a platform such as this to crash and burn before it attains its latent potential.
- It is important that Ws feel they are treated as people and are given something of a personality on the platform. They would prefer not to be seen as ‘just a worker’; that only reinforces the imbalance of power on existing crowdsourcing platforms, especially AMT. One way to foster this is to let Ws choose how they appear to others on the platform – within specific guidelines, of course – perhaps by letting them choose and customize an avatar and/or pick a username. While this might not seem very important, we think it (coupled with the Kudos system) will enable and encourage Ws to take pride in their work, which is one step closer towards making them feel like actual people while remaining technically anonymous.
- A support page for each HIT, where Ws can report bugs, ask questions, suggest edits and participate in discussions with other Ws and Rs.
- Give Ws easy access to relevant tasks by letting them filter through available postings. Over time, as the platform learns more about the user, start populating their default prospective HIT list with tasks more relevant to their skillset and experience.
- Ws who have successfully completed a HIT and received compensation for it can suggest changes to the HIT, ask the R to clarify instructions, add more information and so on.
- Give Ws the power to contest a rejected work submission, at which point a platform support team can go over details and decide whether or not the rejection was for a legitimate reason.
- Similarly, Rs *must* provide a reason if they reject a work submission.
- If it comes down to it, allow Ws to mark/tag an R who is being unfair or misbehaving. This is probably a good point for the platform support team/admins to step in, ask questions and conduct an investigation.
- Allow Ws to choose their method of payment. For instance, AMT, a major player in the crowdsourcing platform industry, locks Ws into Amazon Payments, and Amazon restricts people in certain countries from using its payments service. Even when a W does have access to AMT’s payment gateway, the cash-out process is sometimes so complicated that their money stays “locked” in their AMT account. We’re not suggesting specific alternate payment methods here, since that decision hinges on many factors, and on rules and technicalities we cannot control – but locking Ws into a particular service, or forcing them through a complicated cash-out process, is bad for the ecosystem.
- Give Rs the power to publish HITs that don’t rely on 3rd-party software unless absolutely necessary. The platform should let Rs publish the most common kinds of tasks – surveys, proofreading, transcription tasks and so on – natively, and allow Rs to test them in a sandbox, with the ability to invite testers to said sandbox. This would also make consolidating errors and replicating bugs easier.
- Depending on the scale, importance and intensity of the HIT published, give Rs the ability to screen prospective Ws for those tasks. oDesk already has a mechanism for this, since it’s tailored towards tasks that take longer to complete than those on AMT, for instance. It’d be great if more platforms followed suit.
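The skill-tag filtering and personalized HIT discovery ideas above could work roughly as follows; the data shapes and the overlap-based ranking rule are assumptions for illustration only.

```python
# Illustrative sketch of skill-tag filtering for HIT discovery.
# A HIT lists required skill tags; a W is eligible only if they cover
# all of them, and eligible HITs are ranked by tag overlap.

def matching_hits(hits, worker_skills):
    """Return HITs whose required skill tags are all covered by the W,
    ordered by how closely the HIT's tags match the W's skillset."""
    skills = set(worker_skills)
    eligible = [h for h in hits if set(h["skills"]) <= skills]
    # Rank by overlap size so the most relevant tasks appear first.
    return sorted(eligible,
                  key=lambda h: len(set(h["skills"]) & skills),
                  reverse=True)

hits = [
    {"id": 1, "skills": ["transcription"]},
    {"id": 2, "skills": ["transcription", "spanish"]},
    {"id": 3, "skills": ["coding"]},
]

# A W with transcription + Spanish skills sees HITs 2 and 1, in that
# order, and never sees the coding HIT.
print(matching_hits(hits, ["transcription", "spanish"]))
```

Over time, the `worker_skills` input could be replaced by a learned profile (completed HITs, Kudos earned per tag), which would let the default prospective HIT list personalize itself as the platform learns about the W.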
Dive Deeper into Specific Ideas
- [Milestone 3 Trust Idea 1: Profile Cards] - http://crowdresearch.meteor.com/posts/yovXD67ZYSHbso9WW
- [Milestone 3 Trust Idea 2: Kudos – Reputation Point System] - http://crowdresearch.meteor.com/posts/gLP8kMCh79vZ5Botv
- [Milestone 3 Power Idea 1: HIT Support Page] - http://crowdresearch.meteor.com/posts/mqGSTRazAwhKQgSkg
- [Milestone 3 Power Idea 2: Improved HIT Discovery] - http://crowdresearch.meteor.com/posts/Fx9jg5TPNq7p8ctF9