Milestone 9

This week, pick one of these four foundations and prototype it. Rapid prototyping as before: it can be a paper prototype, a quick code hack (front-end only), or a video. People interested in contributing to infrastructure should check the #infra Slack channel and the Infrastructure wiki page.

  • YouTube link of today's meeting: watch
  • Meeting 8 slideshow: pdf

Foundation 1: Macro+micro

Questions

How does it work? Is it negotiation, oDesk-style? Or job boards anyone can check, like Mechanical Turk? How do we address quality and make sure that people don't just grab a task, work on it for ten hours, and then submit low-quality work? How do we unify macrotasking with microtasking?

Suggestions

A unified model! All tasks can be taken up without negotiation by anyone who qualifies, and worked on immediately. For every task posted on our marketplace, we require at least one milestone. That milestone serves as a checkpoint: if it's a microtask, it can come after 5% of the tasks are complete; if it's a macrotask, it might be a description of what the worker will do first (e.g., "Submit an architecture diagram of the code you will write"). The requester can set the maximum number of workers they will pay to do each task in the milestone. The results of that milestone can be used either to select specific workers who qualify to work on the rest of the task, or simply to launch the rest of the tasks with no qualification. The requester can add as many milestones along the way as they want; we suggest one every couple of days.
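
Here's a rough sketch, in plain JavaScript with invented field names (nothing here is an agreed-upon schema), of how a unified macro+micro task with milestones might be represented in a front-end prototype:

  // Hypothetical mock-up of a unified macro+micro task with milestones.
  // All field names are invented for illustration, not a real platform schema.
  var task = {
    title: "Tag 10,000 product photos",
    type: "micro",                      // "micro" or "macro"
    milestones: [
      {
        description: "Tag the first 5% of photos",  // the checkpoint
        maxPaidWorkers: 20,             // requester-set cap on paid workers
        selectsQualifiedWorkers: true   // use results to qualify workers for the rest
      },
      {
        description: "Tag the remaining photos",
        maxPaidWorkers: 10
      }
    ]
  };

  // A front-end prototype could render this mock object directly, e.g. with jQuery:
  // $("#task-title").text(task.title);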

Foundation 2: Input/output transducers

Questions

Who pays for it? How does it work?

Comments

Michael doesn't buy the algorithmic transformations here yet; they'd be too noisy.

Suggestions for input transducer

While the task is in its first milestone stage, workers can leave feedback on the design of the task publicly. That feedback is returned to the requester when the milestone completes. The requester can use that feedback to iterate on the task before launching it.
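
If it helps your prototype, here's a minimal sketch of how that feedback collection could work; the function and field names (leaveFeedback, requester.notify, etc.) are invented for illustration only:

  // Hypothetical sketch: collect public worker feedback during the first milestone
  // and hand it back to the requester when the milestone completes.
  var taskFeedback = [];

  function leaveFeedback(workerId, comment) {
    // Feedback is public, so it is stored with the task rather than sent privately.
    taskFeedback.push({ workerId: workerId, comment: comment, at: new Date() });
  }

  function onMilestoneComplete(requester) {
    // Return all collected feedback so the requester can iterate on the task
    // design before launching the rest of the work.
    requester.notify({ type: "milestone-feedback", feedback: taskFeedback });
  }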

Suggestions for output transducer

By default, the platform checks a box that sends all work for review to a worker who is one (or two) levels more advanced than the original worker, before publishing it back to the requester. This is done by publishing a new task back to the marketplace with the right qualifications. This default adds cost and time, but addresses quality control. To address speed, we can give the worker feedback (something as simple as a progress bar) showing that their submission is making progress even before it has been reviewed, so the process doesn't feel slow.
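
A hypothetical sketch of that default review chain, with postTask and publishToRequester standing in for whatever the real platform calls would end up being:

  // Hypothetical output transducer: when a worker submits, republish the work as a
  // review task that only a worker one (or two) levels above the submitter can take.
  function onWorkSubmitted(submission, options) {
    options = options || { reviewLevelGap: 1 };    // one (or two) levels above
    if (submission.task.reviewBeforePublish) {     // the default checkbox
      postTask({
        title: "Review: " + submission.task.title,
        payload: submission.work,
        skill: submission.task.skill,
        minLevel: submission.worker.level + options.reviewLevelGap,
        onComplete: function (review) {
          // Only after review does the work go back to the requester.
          publishToRequester(submission, review);
        }
      });
    } else {
      publishToRequester(submission, null);
    }
  }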

Foundation 3: External quality ratings

Questions

Is it algorithmic or human? Who exactly rates whom, and on what dimensions?

Comments

Michael's note is that we can't base it entirely on accepts/rejects or 1-5 stars, since there's a major positive bias in these scores on oDesk and AMT. An interesting suggestion was to make it like Airbnb, where feedback cannot later be linked to a single job.

Suggestions

As written up previously, we have multiple promotion tiers for each skill area (e.g., Photoshop Levels 1-6). After each task, we ask the requester to optionally provide feedback: for example, if the worker is Photoshop Level 3, we can ask, "Given what you've seen, is this 1) below, 2) at, or 3) above the level of a Photoshop Level 3?" The results are delayed and aggregated so they are shown in batches of, say, 5 jobs. Once you get enough upvotes toward the next level from trusted requesters, Photoshop Level 4s (or 5s?) can look at your portfolio of past work and vote on whether to promote you. I suggest that we start by asking people to do these reviews as volunteers, but am open to workers paying to get reviewed (like taking the SAT). We have the same ranking levels for requesters.
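
A rough sketch of the delayed, batched rating idea, assuming invented helpers (showBatchToWorker, requestPortfolioReview) and a made-up promotion threshold:

  // Hypothetical sketch: requester feedback is 1 (below), 2 (at), or 3 (above) the
  // worker's current level, and is only revealed once a batch of 5 jobs accumulates.
  var BATCH_SIZE = 5;
  var PROMOTION_THRESHOLD = 10;   // invented number of trusted "above" votes

  function addRating(worker, rating, requester) {
    worker.pendingRatings.push({ rating: rating, trusted: requester.trusted });
    if (worker.pendingRatings.length >= BATCH_SIZE) {
      var batch = worker.pendingRatings.splice(0, BATCH_SIZE);
      showBatchToWorker(worker, batch);   // shown aggregated, not linked to single jobs

      // Count "above level" votes from trusted requesters toward a promotion review.
      var upvotes = batch.filter(function (r) { return r.trusted && r.rating === 3; }).length;
      worker.promotionUpvotes += upvotes;
      if (worker.promotionUpvotes >= PROMOTION_THRESHOLD) {
        requestPortfolioReview(worker);   // higher-level workers vote on promotion
      }
    }
  }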

Foundation 4: Open Governance

Questions

How exactly will this work?

Comments

Most people seemed to be suggesting that we elect representatives, with the ability to put issues to a vote of everybody when necessary.

Suggestions

Participants elect three worker representatives and three requester representatives each year to make decisions for the platform. Passing a rule requires four of the six votes.
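
As a tiny illustration of the proposed rule (the names are made up, and the real governance flow would involve much more than this), a rule passes only when at least four of the six representatives vote yes:

  // Minimal sketch of the proposed voting rule: six elected reps
  // (three workers, three requesters); a rule passes with at least four "yes" votes.
  function rulePasses(votes) {          // votes: array of six booleans, one per rep
    var yes = votes.filter(function (v) { return v === true; }).length;
    return yes >= 4;
  }

  // Example: three workers and one requester vote yes, so the rule passes.
  console.log(rulePasses([true, true, true, true, false, false])); // true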

Let's name our system!

This is your opportunity to suggest names for our system. What do we call the system we're working so hard on? Be creative! :-) You can post an idea or upvote existing ones here.

Deliverable

If you've worked on a lo-fi prototype, create a wiki page for your submission. If you've worked on a front-end prototype of your chosen foundation, host it on GitHub Pages and add a link so we can visit your site and play with it.

Quick tip for people working on the front-end: populate it with some mock data (fake data you invented, but which looks realistic) so that we can see what it would look like if it were actually used. For example, if it's a job market, there should be some fake job postings; if it's an interface for moderation, there should be a fake task to review, fake comments, etc.
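
For instance, a job-market prototype could hard-code something like the following (the data and the #job-list element are entirely invented) and render it with jQuery:

  // Example of invented-but-realistic mock data for a job-market prototype.
  var mockJobs = [
    { title: "Transcribe a 10-minute interview", pay: "$8.00",  skill: "Transcription Level 2" },
    { title: "Design a logo for a coffee shop",  pay: "$45.00", skill: "Photoshop Level 4" },
    { title: "Label 500 street-sign photos",     pay: "$12.50", skill: "Image Tagging Level 1" }
  ];

  // Render the mock jobs into a (hypothetical) <ul id="job-list"> on page load.
  $(function () {
    mockJobs.forEach(function (job) {
      $("#job-list").append(
        "<li><strong>" + job.title + "</strong> - " + job.pay + " (" + job.skill + ")</li>"
      );
    });
  });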

Please remember that this is a collective effort. Use the Slack channels to communicate your thoughts with the group. Also make sure you visit the Google document often, read what others have written, and help extend or refine those ideas further.

Help regarding Web Technologies

We're building a web service, so we'll be using web technologies for your prototype! This will give you some practice with front-end development (if you aren't already comfortable with it), and will help out the implementation teams (since they can reuse your HTML/CSS/JavaScript code).

If you aren't familiar with front-end development, please read some lessons on HTML, CSS, JavaScript, and jQuery (in that order) online! Here are some good resources for teaching yourself:

Codecademy (covers the basics)

Codeschool (covers additional advanced topics)

Submitting

Create Wiki Pages for your Team's Submission

Please create a wiki page for your team's submission at http://crowdresearch.stanford.edu/w/index.php?title=Milestone_9_YourTeamName&action=edit (substituting YourTeamName with your team name). Copy over the template at Milestone 9 Template.

[Team Leaders] Post the links to your prototypes by 30th April, 11:59 pm

We have a service on which you can post prototypes, comment on them, and upvote ones you like.

http://crowdresearch.meteor.com/category/milestone-9

Post links to your prototypes only once they're finished. Give your posts the same title as your submission. Do not include words like "Milestone", "Prototype", or your team name in the title.

Please submit your finished prototypes by 11:59 pm, 30th April 2015, and DO NOT vote/comment until 1st May, 12:05 am

[Everyone] Peer-evaluation (upvote ones you like, comment on them) from 12:05 am 1st May until 9 am 2nd May

After the submission phase, you are welcome to browse through, upvote, and comment on others' prototypes. We especially encourage you to look at and comment on submissions that haven't yet gotten feedback, to make sure everybody's submission gets feedback.

Step 1: Please use http://crowdresearch.meteor.com/needcomments to find submissions that haven't yet gotten feedback, and http://crowdresearch.meteor.com/needclicks to find submissions that haven't yet been viewed many times.

Step 2: Once you find an idea that interests you or that has received little attention, please vote and comment on it. Please do this for 3 to 5 submissions; this will help us balance the comments and votes. Please do not vote for your own team's submission. Once again, everyone is expected to vote and comment, whether you're the team leader or not.

COMMENT BEST PRACTICES: As on Crowdgrader, everybody reviews at least 3 submissions, each supported by a comment. The comment should provide constructive feedback. Negative comments are discouraged: if you disliked some aspect of a submission, make a suggestion for improvement.

[Team Leaders] Milestone 9 Submissions

To help us track all submissions and make it easier to browse through them, once you have finished your Milestone 9 submission, go to the link below and post the link:

Milestone 9 Submissions

Weekly Survey

Please fill out the weekly survey so we can continue to improve your crowd research experience.