Milestone 8 sanjoseSpartans Foundation 1
Foundation 1: Micro+macrotask market
Our team member, Jsilver, has advocated from the beginning for the creation of an all-encompassing platform that caters to both microtasks and macrotasks. It seems fair to conclude that the majority of current and future real-world tasks are macrotasks, or sometimes a combination of the two. oDesk and Elance are examples of platforms where people can quickly find jobs of both kinds.
Wikipedia offers definitions for microtasks and macrotasks. It describes microtasks as tasks that are “repetitive but not so simple that they can be automated”. It further notes that microtasks usually “are large volume tasks”, “can be broken down into tasks that are done independently”, and “require human judgement”.
Macrotasks, by contrast, are “done independently”, take a “fixed amount of time”, and require “special skills”.
Challenge Question 1: What would such a marketplace look like? Is there a way to adapt a microtasking model so it feels natural and useful for macrotasks?
The marketplace would look largely or entirely the same as today's platforms. The idea is to keep the experience familiar and user-friendly.
Challenge Question 2: What would the tasks look like? How are they submitted? Does this look like AMT where any expert certified in an area can accept the task? Or like an oDesk negotiation?
The ideal approach is to have tasks follow a task design template and/or undergo moderation before publication. Both the template and the moderation step may be human- and algorithm-powered.
For crowd-powered (soft) moderation, a pool of volunteer workers would help create a task design template that clients can use to make a complete, detailed task posting. Workers would focus on the tasks they know best and list the requirements and details that are critical to a successful job posting and completion. Task details could include criteria such as a fair and realistic pay range and a realistic deadline. Once the crowd has submitted its ideas, a separate team (or the platform's own team) would collect, collate, and turn the input into a task design template for each type of task (e.g., graphic design, website creation, blog writing, proofreading, transcription). This template can then be implemented in the platform with the help of an algorithm. Its success depends on the participation of many workers with diverse skills, and the Performance Badge (PB), described in Foundation 2, is one way to incentivize skilled workers to take part.
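As a rough sketch, the template-plus-algorithmic-moderation idea above could be represented as a small data structure with an automated validation pass. All names here (`TaskTemplate`, `required_fields`, `min_pay`, and so on) are hypothetical placeholders for illustration, not part of any real platform's API, and the validation rules are only the examples mentioned in the text (fair pay range, realistic deadline).

```python
from dataclasses import dataclass

@dataclass
class TaskTemplate:
    """A per-category template clients fill in to create a complete posting."""
    category: str              # e.g. "graphic design", "transcription"
    required_fields: list[str] # details critical to a successful posting
    min_pay: float             # lower bound of a fair pay range
    max_pay: float             # upper bound of a fair pay range
    min_deadline_days: int     # guard against unrealistic deadlines

def validate_posting(template: TaskTemplate, posting: dict) -> list[str]:
    """Return a list of problems; an empty list means the posting passes
    the automated part of moderation."""
    problems = [f"missing field: {f}"
                for f in template.required_fields if f not in posting]
    pay = posting.get("pay")
    if pay is not None and not (template.min_pay <= pay <= template.max_pay):
        problems.append("pay outside the fair range")
    if posting.get("deadline_days", 0) < template.min_deadline_days:
        problems.append("deadline unrealistic")
    return problems
```

A posting that fills every required field, offers pay inside the crowd-sourced range, and sets a realistic deadline would pass; anything else comes back with a list of problems for the client (or a human moderator) to resolve.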
Challenge Question 3: How do we ensure high-quality results? Do you let an expert work for hours and submit? That seems risky. Should there be intermediate feedback mechanisms?
High-quality results can be ensured by matching tasks with the most qualified workers. An additional approach is moderation before the task starts, during the task (feedback at each milestone), and/or after completion. Milestone feedback can happen directly between the worker and the client, or with the help of a qualified, skill-relevant moderator.
A skill-relevant moderator is someone qualified for a specific type of task. For example, graphic design tasks could only be moderated by skilled graphic designers, transcription tasks only by qualified transcribers, and so on.
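The matching step described above can be sketched as a simple filter-and-rank: keep only workers (or moderators) who hold the task's required skill, then rank them by rating. The worker records and field names below are made up for illustration; a real platform would draw on test scores and the Performance Badge as well.

```python
def match_workers(task_skill: str, workers: list[dict], top_n: int = 3) -> list[dict]:
    """Return the top-rated workers who hold the skill the task requires."""
    qualified = [w for w in workers if task_skill in w["skills"]]
    return sorted(qualified, key=lambda w: w["rating"], reverse=True)[:top_n]

# Illustrative worker pool (hypothetical data).
workers = [
    {"name": "A", "skills": {"graphic design"}, "rating": 4.8},
    {"name": "B", "skills": {"transcription"}, "rating": 4.9},
    {"name": "C", "skills": {"graphic design", "transcription"}, "rating": 4.5},
]
```

With this pool, a graphic design task would be offered (or its moderation assigned) only to A and C, in that order, since B lacks the relevant skill.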
Challenge Question 4: How do you trust that someone is an expert?
The platform would require workers to pass qualification/skills tests relevant to their desired tasks. Each test would comprise several sets of questions, and each set could be taken by a worker only once or twice a year. This reduces or eliminates the cheating seen on current job platforms, where tests can be retaken monthly and test answers are readily available on the internet.
Top scores on these tests, together with job/performance ratings (including the Performance Badge), would significantly improve trust in the platform and among its workers.
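The retake limit described above is straightforward to enforce in code. A minimal sketch, assuming the platform logs the date of each attempt at a question set and caps attempts at two per rolling year (the "once or twice a year" policy in the text; the cap and function names are illustrative):

```python
from datetime import date, timedelta

MAX_ATTEMPTS_PER_YEAR = 2  # assumed cap: each question set, at most twice a year

def may_take_test(attempt_dates: list[date], today: date) -> bool:
    """Allow a new attempt only if fewer than MAX_ATTEMPTS_PER_YEAR
    previous attempts fall within the past 365 days."""
    window_start = today - timedelta(days=365)
    recent = [d for d in attempt_dates if d >= window_start]
    return len(recent) < MAX_ATTEMPTS_PER_YEAR
```

A rolling window (rather than a calendar year) prevents a worker from taking a set in December and again in January, which would defeat the anti-cheating intent.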