Milestone 8

This week you will be doing something new!

We synthesized our first foundations from all of our recent ideas; as we make progress on these, we will move on to the others. Our next step is to address the challenges we face with each of these four foundations. How could we design the system to solve these problems?

Take each of these four foundations and brainstorm a specific design solution that addresses its challenges. We'll start prototyping these next.

Note: Infrastructure teams should keep working on the infrastructure design and implementation. They are welcome to participate in the research process as well.

  • YouTube recording of today's meeting: watch
  • Meeting 8 slideshow: pdf (contains all foundations and challenges).

Foundations

Foundation 1: Micro+macrotask market

Create a marketplace that spans micro- and macrotasks. Could the same marketplace scale from 2 to N people, not just labeling images but also Photoshopping my vacation photos or mixing my new song? The main idea is to keep the submission approach from microtask markets, which focuses on tasks replicated two to hundreds of times, while finding ways to make it accessible to both microtask and expert work.

As Adam Marcus mentioned last week: "Why would a programmer want to join a crowd work platform? The draw of AMT right now is that you can find jobs quickly. Can we enable that same benefit for programmers?"

Challenges

Now we need to figure out: what would such a marketplace look like? Is there a way to adapt a microtasking model so it feels natural and useful for macrotasks?

  • What would the tasks look like, and how would they be submitted? Would this look like AMT, where any expert certified in an area can accept the task, or more like an oDesk negotiation? (One possible task representation is sketched after this list.)
  • How do we ensure high-quality results? Do we let an expert work for hours and then submit? That seems risky. Should there be intermediate feedback mechanisms?
  • How do you trust that someone is an expert?
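
To make the first question concrete, here is a minimal sketch of a single task record that could describe both a microtask and a macrotask. This is only a thought experiment under assumed field names (a replication count, skill requirements, milestones for intermediate feedback); it is not a committed design for the platform.

    # Hypothetical sketch only: one task schema spanning micro- and macrotasks.
    # Field names are illustrative assumptions, not part of any existing platform.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Milestone:
        description: str          # e.g. "rough mix for review"
        feedback_required: bool   # requester reviews before work continues

    @dataclass
    class Task:
        title: str
        instructions: str
        reward_usd: float
        replication: int = 1                  # microtasks: two to hundreds; macrotasks: usually 1
        required_skills: List[str] = field(default_factory=list)   # empty => open to anyone
        min_skill_rating: Optional[str] = None                      # e.g. "B" from an external rating
        milestones: List[Milestone] = field(default_factory=list)   # empty => single final submission

    # A classic microtask: many replications, no skill gate, no milestones.
    label_images = Task("Label 500 images", "Pick the best tag for each image.",
                        reward_usd=0.05, replication=3)

    # A macrotask: one assignee, skill-gated, with intermediate feedback points.
    mix_song = Task("Mix my new song", "Mix and master the attached stems.",
                    reward_usd=150.0, replication=1,
                    required_skills=["audio-mixing"], min_skill_rating="B",
                    milestones=[Milestone("Rough mix for review", True),
                                Milestone("Final master", True)])

The single replication field is what would let one submission flow cover both ends: hundreds of replications for image labeling, or one skill-gated assignee with milestone check-ins for expert work.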


Foundation 2: Input and output transducers

Tasks get vetted or improved by people on the platform immediately after they are submitted, and before workers are exposed to them. Results are likewise vetted and tweaked before they reach the requester. For example, peer review.
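
Here is a minimal sketch of how such a pipeline might be wired, assuming two hypothetical review steps: an input transducer that clears (or returns) tasks before workers see them, and an output transducer that reviews results before the requester does. Function names like vet_task and vet_result are placeholders, not an existing API.

    # Hypothetical pipeline sketch: review steps around the worker contribution.
    # All function names are placeholders for illustration only.

    def vet_task(task: dict) -> dict:
        """Input transducer: peers flag unclear tasks before workers see them."""
        if len(task["instructions"]) < 20:
            task["needs_clarification"] = True   # bounce back to the requester
        return task

    def do_work(task: dict) -> dict:
        """Placeholder for the actual worker contribution."""
        return {"task": task["title"], "answer": "..."}

    def vet_result(result: dict) -> dict:
        """Output transducer: peers review and tweak results before delivery."""
        result["reviewed"] = True
        return result

    def run(task: dict) -> dict:
        task = vet_task(task)
        if task.get("needs_clarification"):
            return {"status": "returned to requester", "task": task["title"]}
        return vet_result(do_work(task))

    print(run({"title": "Transcribe receipt", "instructions": "Type the totals from the attached photo."}))

The cost and speed challenges below are precisely about whether these two extra review hops can be made cheap and fast enough.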

Challenges

  • Cost: who pays for this? In other words, can this be done without hugely increasing the cost of crowdsourcing?
  • Speed: is it possible to do this quickly enough to give near-immediate feedback to requesters, say within 2 to 4 minutes? As spamgirl reports from her recent survey of requesters, "The #1 thing that requesters love about AMT is that the moment I post tasks, they start getting done."
  • From Edwin: What happens when I have a task that I know is hard, but I want workers to just try their best and submit? I'm OK with it being subjective, but the panel would just reject my task, which would be frustrating.
  • From Edwin: Could this help deal with people feeling bad about rejected work? Maybe we need a new metaphor, like revision.

Foundation 3: External quality ratings

Metaphor of credit ratings: rather than just having people rate each other, have an (external?) authority or algorithm be responsible for the ratings (A, B, C, etc.).


Benefit: this reduces the incentive to chase positively biased five-star ratings on everything, since those ratings become meaningless.
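
As a concrete illustration of the algorithmic option only (what the criteria should actually be is exactly what this foundation's challenges ask), here is a sketch that maps a worker's recent approval history to a credit-style letter grade. The thresholds and the choice of approval rate as the input are assumptions, not something the group has settled on.

    # Hypothetical sketch: map recent approval history to a credit-style grade.
    # Thresholds and inputs are illustrative assumptions, not an agreed design.

    def letter_grade(approved: int, rejected: int, min_history: int = 20) -> str:
        """Return a letter grade (A/B/C/D), or "Unrated" if history is too thin."""
        total = approved + rejected
        if total < min_history:
            return "Unrated"          # too little history to judge fairly
        rate = approved / total
        if rate >= 0.98:
            return "A"
        if rate >= 0.95:
            return "B"
        if rate >= 0.90:
            return "C"
        return "D"

    print(letter_grade(approved=198, rejected=2))   # -> "A"
    print(letter_grade(approved=15, rejected=3))    # -> "Unrated"

An algorithm like this makes the rating cheap to compute, but it inherits whatever biases the underlying approval decisions already carry, which feeds directly into the challenges below.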

Challenges

  • Is this a group/authority? For example, Wikipedia reviews are subjective and based on voting. Or is it an algorithm?
  • If it's a group, who pays for their time to review you?
  • From Anand: "How do you do skills-based ratings, etc., without hindering tasks with a requirement to categorize them?"

Foundation 4: Open governance

  • Leadership shared by requesters, workers, (researchers?)
  • Policy changes can be worked out by this group

Challenges

  • Is it direct voting on everything? Or representative democracy?
  • How exactly will this work?
  • Can the research group have a hand here?
  • If there are changes that require engineering effort, who executes that? Us? Other volunteers?

Others we will move on to next

  • Open governance
  • Empathy
  • Mentorship
  • Engaging with mobile users
  • Price+quality joint model
  • Features:
      • Recommendation
      • Wikified bug reports on tasks

Submitting

Create Wiki Pages for your Team's Submission

Please create a separate page for each foundation idea (there are four of them) for your team's submission at http://crowdresearch.stanford.edu/w/index.php?title=Milestone_8_YourTeamName_FoundationN&action=edit (substituting YourTeamName with your team name, and replacing the N in "FoundationN" with 1, 2, 3, or 4). For example, for "Foundation 3: External quality ratings", replace N with 3. Copy over the template at Milestone 8 Template.

[Team Leaders] Post the links to your research proposals by 11:59 pm on 22nd April

We have a service on which you can post the research proposals you generated, comment on them, and upvote the ones you like.

http://crowdresearch.meteor.com/category/milestone-8-foundation-1

http://crowdresearch.meteor.com/category/milestone-8-foundation-2

http://crowdresearch.meteor.com/category/milestone-8-foundation-3

http://crowdresearch.meteor.com/category/milestone-8-foundation-4


Post links to your research proposals only once they're finished. Give your posts the same title as your submission. Do not include words like "Milestone", "Research Proposal", or your team name in the title.

Please submit your finished research proposals by 11:59 pm on 22nd April 2015, and DO NOT vote or comment until 12:05 am on 23rd April.

[Everyone] Peer-evaluation (upvote the ones you like, comment on them) from 12:05 am on 23rd April until 9 am on 24th April

After the submission phase, you are welcome to browse through, upvote, and comment on others' research proposals. We especially encourage you to look at and comment on submissions that haven't yet gotten feedback, so that everybody's submission gets feedback.

Step 1: Please use http://crowdresearch.meteor.com/needcomments to find submissions that haven't yet gotten feedback, and http://crowdresearch.meteor.com/needclicks to find submissions that haven't yet been viewed many times.

Step 2: Once you find an idea that interests you or that has received little attention, please vote and comment on it. Please do this for 3 to 5 submissions; this will help us balance the comments and votes. Please do not vote on your own team's research proposals. Once again, everyone is supposed to vote and comment, whether you're a team leader or not.

COMMENT BEST PRACTICES: As on Crowdgrader, everybody reviews at least 3 submissions, supported by a comment. The comment should provide constructive feedback. Negative comments are discouraged; if you disliked some aspect of a submission, make a suggestion for improvement.

[Team Leaders] Milestone 8 Submissions

To help us track all submissions and browse through them, once you have finished your Milestone 8 submission, go to the link below and post the link:

Milestone 8 Submissions