Milestone 3 AltaMira
The ideas you brainstormed, (at least 10 ideas for trust, and at least 10 ideas for power). Provide them in whatever format you want - diagrams, sketches, descriptions, or a combination (the wiki supports images, see here for instructions on uploading them).
Trust ideas:
1. A short 5-second video introducing yourself and where you're working from. This goes a long way toward showing your working environment and establishing that you're a real person; a small looping video does a lot to build trust. You could explore a worker's profile further if you'd like, but at a glance you'd see a gallery of photos of all the workers on your task. It could look something like this:
2. Automatically build a trustworthiness score (per class of problems) for each Worker based on the difficulty of the tasks they’ve handled and their % error/rejection rate.
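One way such a per-class score might be computed is to weight each task by difficulty and penalize rejections; the specific weighting below is an assumption, just a minimal sketch of the idea:

```python
# Sketch of a per-problem-class trustworthiness score (hypothetical weighting):
# each accepted task contributes its difficulty weight; rejected tasks earn nothing.

def trust_score(tasks):
    """tasks: list of (difficulty, accepted) pairs, difficulty on a 1-5 scale.
    Returns the share of difficulty-weighted work accepted, as a 0-100 score."""
    if not tasks:
        return 0.0
    earned = sum(d for d, accepted in tasks if accepted)
    possible = sum(d for d, _ in tasks)
    return round(100.0 * earned / possible, 1)

# A worker with one rejected easy task and three accepted harder ones:
history = [(3, True), (5, True), (2, False), (4, True)]
print(trust_score(history))  # 85.7
```

The score would be kept separately per problem class (e.g. classification vs. transcription), so a strong record in one class doesn't inflate trust in another.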
4. Answer sample questions of each type; based on your answers, you're given a trustworthiness score, and the system determines which new questions to ask. Some questions will intentionally have no correct answer, to catch workers who are just trying to game the system.
5. A way to upload certifications and documents for review by MTurk. These would be stored in a TurkLocker and surface as a completion score. Review doesn't have to be thorough, since documents only need to pass basic requirements. A user who has entered a birth certificate and a high-school diploma would have a higher score than one with no documents. The score would be visible to Requesters and would serve as a better qualification system because it is grounded in real-life credentials and systems instead of arbitrary tests.
6. For certain classes of problems (e.g. classification): ensure each item is classified by two independent workers, and automatically assign it to a third if there is a dispute (or flag it for the Requester).
7. Requester groups - put all requesters you have worked with into a group, and assign a score representing the activities you have done with them. New activities and new requesters would be weighted as riskier, and each HIT would be color-coded risky/not risky based on this measurement. The system could also leverage your monetary goal for the month and show how much of that bucket a HIT would fill. This helps workers determine which requesters are riskier. It is TurkOpticon on steroids: it groups requesters and remembers which ones you have had good experiences with. Turkers said they remembered which requesters they liked working with and that they work with many requesters; this would be a simple way to protect workers without requiring third-party tools.
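A minimal sketch of the color coding, assuming simple risk rules (the thresholds and point values here are hypothetical, not part of the idea as stated):

```python
# Hypothetical risk coloring for a HIT: new requesters and unfamiliar
# activity types add risk, as does a low requester score (1-5 scale).

def hit_color(requester_score, is_new_requester, is_new_activity):
    risk = 0
    if is_new_requester:
        risk += 2          # no history with this requester
    if is_new_activity:
        risk += 1          # worker hasn't done this kind of task before
    if requester_score < 4:
        risk += 1          # below-average requester reputation
    return "green" if risk == 0 else ("yellow" if risk <= 2 else "red")

print(hit_color(5, False, False))  # trusted requester, familiar task -> green
print(hit_color(2, True, True))    # new requester, new activity -> red
```

In the interface, the color would appear next to each HIT in the worker's task list, alongside how much of the monthly goal the HIT would fill.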
8. Verify restricted email domains, e.g. college emails. That way you know you are dealing with someone who has a .edu, .mil, or .gov email address.
9. Encourage a video from Requester along with each posting.
10. A mandatory video chat with each worker. Similar to the video idea above, but using a live video stream with each worker, which a Requester can use to verify they are real and qualified for the posting. The drawback is that chatting with each worker takes too much time, and the formalities live conversation demands might deter workers in developing nations. Workers would also need a fast internet connection for live chat, which is not always available. It could look like this:
Power ideas:
1. Independent committees and voting. If a worker has a dispute today, they have no way of resolving it: Amazon is not very supportive of workers in rejection cases, and the volume of work does not allow for an efficient appeals process. In a better system, an independent committee of requesters and workers would determine whether the worker was treated fairly. The review itself could be treated as an MTurk HIT, with payment made by the party losing the appeal. This ensures that not everything is appealed and that a fair outcome is served, so that requesters don't hold all of the power in each transaction.
2. Secondary markets - request for requesters. What requesters want from MTurk is a solution, quality work that they are willing to pay for. As with all real world systems, this means that managing thousands of workers is not ideal. They cannot independently ensure that results are accurate and quality is delivered. Secondary markets solve this problem. Requesters put out a problem or something they want done. Workers can take up the task and hire it out to other workers. They would then be responsible for ensuring that the quality is delivered. Then the main requester can glance through the data to ensure it meets the standard of quality. It could look something like this:
3. If a job has a fixed number of tasks and a fixed cost per task, the Requester could be billed for the total amount right away. That money could be held in escrow by the system. That way, the workers can be confident that the Requester has the money.
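The escrow idea can be sketched in a few lines; the class and method names below are illustrative assumptions about how the platform might model it:

```python
# Minimal escrow sketch (assumed flow): the Requester is billed for the whole
# job up front, funds are held by the platform, and each approved task
# releases one payment to a worker.

class Escrow:
    def __init__(self, num_tasks, cost_per_task):
        self.held = num_tasks * cost_per_task   # billed immediately, held in full
        self.cost = cost_per_task

    def release(self):
        """Pay out one task's worth of held funds to a worker."""
        if self.held < self.cost:
            raise RuntimeError("escrow exhausted")
        self.held -= self.cost
        return self.cost

job = Escrow(num_tasks=100, cost_per_task=0.20)
job.release()                   # one task approved, one worker paid
print(round(job.held, 2))       # 19.8 still held for the remaining tasks
```

Because the full amount is held before any work starts, workers can see up front that the money exists, and a Requester cannot walk away mid-job without the remaining balance being refundable or disbursable.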
4. There could be multiple classes of Worker, e.g. subject-matter experts. Workers could enter this class either by demonstrating competence in a class of problems and/or by providing a credential (e.g. a college degree) that can be verified by the system. Subject-matter experts would be paid more and could act as intermediaries between entry-level Workers and Requesters.
5. Requester auto-reputation: on signup, automatically assign the requester a free task and gather feedback on it. This can be a simple task, and it becomes the first piece of reputation for the requester, given by the workers. Track every metric of the requester and fold it into their auto-reputation; requesters would have to strive to maintain it, and it would give workers a channel to feed feedback back to requesters. This wouldn't require workers to do additional work, since the system could track rejection rates, payout rates, the number of disputes, and how many the requester won. Since the Requester has much more power in this system, this would shift the balance slightly in the workers' direction. It would still have the problem of people withholding bad feedback simply because they don't want to be called out on requester forums.
6. Peer-reviewed work. Workers are the best graders of other workers' output. Tell requesters their total task cost by counting work done plus reviews, i.e. 10 surveys = 20 tasks (10 surveys taken + 10 peer reviews). This ensures that requesters don't have to review the work submitted to them. It also empowers workers, because it provides a fair method of delivering quality work and more than one way of vetting work that doesn't depend entirely on the requester.
7. For certain classes of problems (e.g. classification), the system could automatically route the appropriate tasks to the appropriate workers. The Requester could specify how many workers should complete each task, and what should happen in the case of a disagreement (e.g. automatically assign to a tiebreaker worker or flag to Requester).
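The disagreement rule above can be sketched as a small resolution function; the policy names (`"tiebreaker"` vs. flagging the Requester) are assumptions standing in for whatever the Requester configures:

```python
# Sketch of the disagreement policy: collect the assigned workers' labels
# for an item; if a majority agree, accept that label, otherwise either
# route the item to a tiebreaker worker or flag it for the Requester.

from collections import Counter

def resolve(labels, policy="tiebreaker"):
    counts = Counter(labels)
    top_label, votes = counts.most_common(1)[0]
    if votes > len(labels) / 2:
        return ("accept", top_label)
    if policy == "tiebreaker":
        return ("assign_tiebreaker", None)
    return ("flag_requester", None)

print(resolve(["cat", "cat"]))   # agreement -> ('accept', 'cat')
print(resolve(["cat", "dog"]))   # dispute   -> ('assign_tiebreaker', None)
```

With three workers the same function handles the tiebreak result: two matching labels out of three form a majority and the item is accepted automatically.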
8. Chop the platform down to verifiable tasks only. If a task cannot be verified through an automated system, it should not be allowed; some action must signal that a task is complete. This forces requesters to come up with inventive ways to verify tasks and removes the system's core problems. It severely limits the platform, but it is a surefire method of guaranteeing value for work done. With improved verification, most tasks could still be done, and only the riskiest HITs that workers dislike would be taken away. There would be no further need for disputes, and the burden falls on the requester to use the system inventively rather than on workers to deal with approvals/rejections.
9. Turn each task into an auction. Requesters set a budget for what they want to spend in total and/or a max cost per task (CPT). Workers set the fee they would like to charge per task. Each worker is also assigned a bid modifier based on their skill in the problem class and their reputation (or lack thereof). For example, if we know from prior data that workers of this type have a 20% error rate, we can apply a 1.2x multiplier to their bid. Each task is then auctioned off, taking into account the worker's skills, their desired fee, and the Requester's budget/max CPT. Example: assume a Requester sets the max CPT to $0.20. A new or low-skilled worker bids $0.10 per task, but in the auction their lack of reputation adds a 1.6x multiplier, turning that into a $0.16 effective bid. Meanwhile, a higher-skilled, more reputable worker bids $0.15 and gets a 1x multiplier, keeping that bid at $0.15. The task goes to the higher-skilled worker, who also earns more per task than their lower-skilled colleagues.
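The worked example above can be reproduced directly in code; this is a sketch of the auction rule only (function and field names are assumptions), not a full market mechanism:

```python
# Auction sketch: each worker's fee is multiplied by their reputation-based
# bid modifier, and the lowest effective bid at or under the Requester's
# max cost-per-task wins. The winner is still paid their own stated fee.

def run_auction(max_cpt, bids):
    """bids: list of (worker, fee, modifier) tuples.
    Returns the winning (worker, fee), or None if no bid qualifies."""
    effective = [(fee * mod, worker, fee) for worker, fee, mod in bids]
    eligible = [bid for bid in effective if bid[0] <= max_cpt]
    if not eligible:
        return None
    _, worker, fee = min(eligible)
    return worker, fee

bids = [("newcomer", 0.10, 1.6),   # effective bid: 0.16
        ("veteran",  0.15, 1.0)]   # effective bid: 0.15
print(run_auction(0.20, bids))     # ('veteran', 0.15)
```

Note the modifier only handicaps the comparison between bids; the veteran's payout is their own $0.15 fee, so skilled workers both win more auctions and earn more per task, as the example describes.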
10. Eliminate low-skilled workers for some problem-classes (e.g. classification). Allow teams to develop machine learning models which compete to do the best job on the set of tasks.
- Milestone 3 AltaMira TrustIdea 1: Video introductions - http://crowdresearch.meteor.com/posts/4Dfo3cT7RSwXWXpmX
- Milestone 3 AltaMira TrustIdea 2: Requestor Stats and Reviews - http://crowdresearch.meteor.com/posts/MtMsspiqtSxyFTnbY
- Milestone 3 AltaMira PowerIdea 1: Secondary markets - http://crowdresearch.meteor.com/posts/mKC7XH2SbCXyZq3Eg
- Milestone 3 AltaMira PowerIdea 2: Task Auction - http://crowdresearch.meteor.com/posts/A9SjNivsP3PCMxCat