Winter Milestone 2


Due dates (PST): 8:00 pm, 24th Jan 2016 for submission; 12:00 pm, 25th Jan 2016 for peer-evaluation.

The goals for this week are to:

  • Learn about needfinding
  • Determine the needs of workers and requesters (from panels and from readings)
  • YouTube link of this week's meeting: watch
  • YouTube link of the worker-requester panel: watch
  • Winter Meeting 2 slideshow: slides pdf
  • Panel discussion notes: by @willtchiu and by @dcthompson
  • Archive of Milestone 1 submissions

Learn about Needfinding

We talked about some highlights of needfinding in this week's meeting. We suggest that you watch Scott Klemmer's Coursera HCI lectures on needfinding, especially the first one.

Another good resource is Dev Patnaik's book on needfinding (optional reading).

Attend a Tuesday Panel to Talk with Expert Workers and Requesters

There will be a panel on Hangouts on Air where experienced workers and requesters will discuss their experiences on crowdsourcing platforms (MTurk, oDesk, etc.). You should attend it to better understand the needs of workers and requesters. Please add to and vote on the set of questions in this Google spreadsheet.

  • Panel: 6:00 pm PST (Pacific Time), Tuesday, Jan 19th / 7:30 am IST (Indian Standard Time), Wednesday, Jan 20th

If you cannot attend the panel live, the recording is archived on YouTube and you can watch it here.

Deliverable

When talking about needfinding, it is best practice to organize your thoughts into three stages:

  • Observations: What you see and hear
  • Interpretations: Why you think you are hearing and seeing those things. What is driving those behaviors? This is the "recursive why" we talked about in team meeting.
  • Needs: These are the deeper, more fundamental driving motivators for people. As we talked about in team meeting, needs must be verbs, not nouns.

For more detail about these, see this week's meeting slides.

The deliverable for the panel subsection: report on some of the observations you gathered during the panel. You can hold back on interpretations and needs until you finish the rest of the observation-gathering in the next steps.

Reading Others' Insights

An hour in the library can save a year of fieldwork. Please read the following materials; these papers will also build out our foundation of related work. The papers are copyrighted, so please don't redistribute them. You need to be signed in to view them.

Worker perspective: Being a Turker

Martin D, Hanrahan B V, O'Neill J, et al. Being a turker. Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing. ACM, 2014: 224-235.

Worker perspective: Turkopticon

Irani L C, Silberman M. Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2013: 611-620.

If you are able to access Mechanical Turk, you can go try Turkopticon yourself here.

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

Kittur A, Chi E H, Suh B. Crowdsourcing user studies with Mechanical Turk. Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 2008: 453-456.

Requester perspective: The Need for Standardization in Crowdsourcing


Both perspectives: A Plea to Amazon: Fix Mechanical Turk


Soylent: A Word Processor with a Crowd Inside

This paper is by Michael Bernstein et al.; look out for the "techniques for programming crowds" section.

Deliverables

Just as in the previous deliverable, we will focus on observations. As you do these readings, lay out observations of raw behaviors and issues. Try to avoid including interpretations and needs right now, even though the authors likely included many of them. Focus just on behaviors. What is *happening* on these systems? Remember, we'll get to interpretations and needs next, so hold off.

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

2) What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

Recommended (but optional) Materials

If you are interested in doing more reading, these optional materials will help you better understand workers and requesters:


The People Inside Your Machine - a 22-minute NPR radio program about crowd workers

Helpful Blog Posts To Help You Design Your HITs - links to several resources aimed towards helping requesters create better HITs

Marshall C C, Shipman F M. Experiences surveying the crowd: Reflections on methods, participation, and reliability. Proceedings of the 5th Annual ACM Web Science Conference. ACM, 2013: 234-243. - discusses using MTurk for surveys (requesters' needs)

Stolee K T, Elbaum S. Exploring the use of crowdsourcing to support empirical studies in software engineering. Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement. ACM, 2010: 35. - discusses using MTurk for software engineering studies, and challenges recruiting workers with specialized skills (requesters' needs)

Little G, Chilton L B, Goldman M, et al. TurKit: human computation algorithms on Mechanical Turk. Proceedings of the 23rd annual ACM symposium on User interface software and technology. ACM, 2010: 57-66. - discusses using MTurk programmatically for iterative tasks (requesters' needs)

Salehi N, Irani L C, Bernstein M S, et al. We Are Dynamo: Overcoming stalling and friction in collective action for crowd workers. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2015. - a system for organizing workers towards collective action (workers' needs)

Guidelines for Academic Requesters - A set of guidelines written by Dynamo participants which discusses how to avoid common mistakes made by requesters

OPTIONAL: Do Needfinding by Browsing MTurk-related forums, blogs, Reddit, etc

We have the opportunity to do a little bit of fieldwork as well. There are a number of forums dedicated to Mechanical Turk workers and requesters, such as Turker Nation, MTurk Forum, MTurk Grind, etc. Spamgirl has made a great list of them here. If you are interested in speaking with workers there, DM @spamgirl on Slack. Then, go introduce yourself, and if folks are willing, engage with them in their chat rooms and threads. Be thoughtful! Researchers have a bit of a reputation for just marching in and telling Turkers what they really need, which isn't appreciated.

Requesters also often ask for help and advice on Reddit - you can Search Requester Help on Reddit. There are lots of issues implicit in what's posted there. Other resources on Reddit include r/mturk and r/HITsWorthTurkingFor.

You should browse these resources (you are welcome to find other crowd-work related resources as well), aiming to discover needs that workers and requesters have.
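
If you'd rather skim recent thread titles in bulk before diving into manual browsing, a small script can pull the newest posts from one of these subreddits through Reddit's public JSON listing. This is a minimal, optional sketch in Python: the subreddit name, result limit, and User-Agent string are placeholder values, and it only surfaces threads that you still need to read and interpret yourself.

  # Optional aid for fieldwork: list the newest threads in an MTurk-related
  # subreddit so you can choose which ones to read in full.
  import requests

  def recent_threads(subreddit="mturk", limit=25):
      # Reddit's public listing endpoint; it expects a descriptive User-Agent.
      url = "https://www.reddit.com/r/{}/new.json".format(subreddit)
      headers = {"User-Agent": "crowdresearch-needfinding-sketch/0.1"}
      resp = requests.get(url, headers=headers, params={"limit": limit}, timeout=10)
      resp.raise_for_status()
      for child in resp.json()["data"]["children"]:
          post = child["data"]
          # Keep the permalink so you can cite the thread as evidence later.
          yield post["title"], "https://www.reddit.com" + post["permalink"]

  if __name__ == "__main__":
      for title, link in recent_threads():
          print("- " + title)
          print("  " + link)

However you browse, keep links to the specific posts and threads you read; those become the evidence for the deliverable below.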

Deliverable

List out the observations you made while doing your fieldwork. Links to examples (posts / threads) would be extremely helpful.

Synthesize the Needs You Found

Now it's time to synthesize your results. This may be the most intense part of this week's milestone, and should involve your whole team (or just your whole brain, if you're participating on a solo team).

First, synthesize your raw observations into interpretations. Your eventual goal is to produce needs. Ask yourself *why* you think you saw certain things in your observations. Suggest a reason. Ask yourself why that reason matters. This should lead you to another reason. Eventually these reasons will resolve into needs.

Remember, needs are verbs, not nouns. Not "Workers need money" or "Workers need independence"; those are nouns. More like "Workers need to trust that they'll get paid later for the work they're doing now."

The method we recommend: if you're in a room together, put your observations on stickies and organize, reorganize, and reorganize them. Keep discussing what you've found, what patterns are in the data, contradictions, things that people say are fine but are clearly too cumbersome, and so on. Set aside a large block of time to do this if you can. It's tough to rush it. If your team is remote, we recommend getting on a Google Hangout or Skype call and using a Google Doc as your scratch space.
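
If your team is working remotely and wants a bit more structure than free-form notes, here is a purely optional sketch of one way to keep each observation, its source, its interpretation, and the need it supports together, so that every need in your deliverable stays attached to its evidence. The entry shown is adapted from the worker-panel example in the deliverables below; the field names are only for illustration.

  # A purely optional, structured scratch space for remote synthesis.
  # Any shared doc or spreadsheet with the same columns works just as well.
  from collections import namedtuple

  Entry = namedtuple("Entry", ["observation", "source", "interpretation", "need"])

  entries = [
      Entry(
          observation="A worker wrote an angry email to a requester who mass-rejected his work",
          source="worker panel",
          interpretation="The anger was about the disregard for his work ethic, not just the money",
          need="Workers need to be respected by their employers",
      ),
  ]

  # Group entries by need so each bullet in the deliverable carries its evidence.
  by_need = {}
  for entry in entries:
      by_need.setdefault(entry.need, []).append(entry)

  for need, evidence in by_need.items():
      print("* " + need)
      for entry in evidence:
          print("  Evidence: {} ({})".format(entry.observation, entry.source))
          print("  Interpretation: {}".format(entry.interpretation))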

Deliverables

List out your most salient and interesting needs for workers, and for requesters. Please back up each one with evidence: at least one observation, and ideally an interpretation as well.

A set of bullet points summarizing the needs of workers.

  • Example: Workers need to be respected by their employers. Evidence: Sanjay said in the worker panel that he wrote an angry email to a requester who mass-rejected his work. Interpretation: this wasn't actually about the money; it was about the disregard for Sanjay's work ethic.

A set of bullet points summarizing the needs of requesters.

  • Example: Requesters need to trust the results they get from workers. Evidence: In this thread on Reddit (linked), a requester is struggling to know which results to use and which ones to reject or re-post for more data. Interpretation: it's actually quite difficult for requesters to know whether 1) a worker tried hard but the question was unclear, very difficult, or an edge case, or 2) a worker wasn't really putting in their best effort.

Research Engineering (Test Flight)

Congrats on getting set up locally!

Tutorials

By the end of this week, we expect everyone in the test flight group to be well acquainted with our codebase and familiar with Angular, Django, and Material Design. Check out Introduction to Research Engineering for documentation and tutorials. If you choose to brush up on some of your skills with a tutorial (this is completely optional), please share a screenshot indicating the completion of your tutorial in #engineering-deliver on Slack by Sunday at 8pm PST.

Issues

This week we will also start tackling issues! Woohoo! We will be working specifically on the following issues on GitHub: 500 (profile), 511 (task), 558 (CSV), 637 (forms). A few notes about how to start working on these issues:

  • Join the appropriate channel on Slack: #_engineering-{issue name}.
  • Post your GitHub ID in the channel so we can give you access to the issue's branch.
  • Pull the appropriate branch (see the channel for more info). See Working with GitHub.
  • Before you start coding, post an @channel message stating what portion of the issue you intend to work on for the next period of time, e.g. "I'm going to work on allowing the user to upload a picture to their profile for the next hour."
  • Before you @channel, make sure you have read what's going on in the channel so you don't start working on the same thing as someone else.
  • When you have finished a module and made sure you haven't broken anything, push to the branch and then post an @channel message saying what feature you just added or what problem you solved.
  • Pair programming is highly encouraged (i.e. have a hangout with someone and work together on solving some problem).
  • DO NOT push to the branch anything that breaks other code.

Do not hesitate to reach out to the channel of the issue you are working on, or any of the DRIs for this milestone: @dmorina, @shirishgoyal, @aginzberg. As this is the first week we are digging into issues, we expect questions to come up and problems to arise, so please do ask for help. The deliverable for solving an issue will be a PR made by the members of the channel who worked on solving the particular issue.

PLEASE NOTE: This model of collaborating on a single branch is just for this week (and maybe one more if necessary). Very soon, each contributor will be creating a pull request (individually, or maybe in small groups) for whatever issue/feature they are working on, which will then be reviewed by the code review team and merged.

Submitting

Please create a page for your team's submission at http://crowdresearch.stanford.edu/w/index.php?title=WinterMilestone_2_YourTeamName&action=edit (substituting YourTeamName with your team name), and copy over the template at WinterMilestone 2 Template. If you have never created a wiki page before, please see this or watch this video on YouTube.

[Team Representative] Submission: post the links to your ideas by 8:00 pm PST, 24th Jan

We have a [Reddit-like service] on which you can post the links to the wiki pages for your submissions, explore them, and upvote them.

Sign-up Instructions: Log in with either Twitter or Facebook on the [website]. When it asks you to pick your username, pick the same username as your Slack username; this will help us identify and track your contributions better.

Link to the website: Meteor site. Post links to your ideas only once they're finished. Give your posts titles matching your team name this week.

Please submit your finished ideas by 8:00 pm PST on Sunday, 24th Jan, and DO NOT vote/comment until then.

[Everyone] Peer-evaluation from 8:05 pm PST on Sunday, 24th Jan until 12:00 pm PST on Monday, 25th Jan

After the submission phase, you are welcome to browse through, upvote, and comment on others' ideas. We especially encourage you to look at and comment on ideas that haven't yet gotten feedback, to make sure everybody's ideas get feedback. You can use http://crowdresearch.meteor.com/needcomments to find ideas that haven't yet gotten feedback, and http://crowdresearch.meteor.com/needclicks to find ideas that haven't yet been viewed many times.

COMMENT BEST PRACTICES: Everybody in the team should review at least 3 ideas, each supported by a comment. The comment has to justify your reason for the upvote. The comment should be constructive and should mention a positive aspect of the idea worth sharing. Negative comments are discouraged; instead, make your comment in the form of a suggestion - if you disliked an idea, try to suggest improvements (do not just criticize it; no idea is bad, and every idea has room for improvement).