Milestone 2


Due date (PST): 11:59 pm on March 11, 2015 for submission; 9 am on March 13, 2015 for peer-evaluation. Please note that Daylight Saving Time (United States) begins at 2:00 AM on Sunday, March 8, 2015; people outside the US should double-check the time difference.

The goals for this week are to:

  • Learn about needfinding
  • Determine the needs of workers and requesters (from panels and from readings)

Resources:

  • YouTube link of this week's meeting: watch
  • Meeting 2 slideshow: pdf
  • YouTube link of Panel 1: watch
  • YouTube link of Panel 2: watch


Learn about Needfinding

We talked about some highlights of needfinding in this week's meeting. We suggest that you watch Scott Klemmer's Coursera HCI lectures on needfinding, especially the first one.

Another good resource is Dev Patnaik's book on needfinding (optional reading).

Attend a Monday Panel to Talk with Expert Workers and Requesters

There will be two panels on Hangouts on Air where experienced workers and requesters will discuss their experiences on crowdsourcing platforms (MTurk, oDesk, etc.). You should attend one to better understand the needs of workers and requesters:

  • Panel 1: 8:30 am PST (Pacific Time) / 10 pm IST (Indian Time) on Monday March 9th
  • Panel 2: 6 pm PST (Pacific Time) / 9 pm EST (Eastern Time) on Monday March 9th

Please note that Daylight Saving Time (United States) begins at 2:00 AM on Sunday, March 8, 2015. If you are outside the US, please check your time accordingly and match it against PST/California time.

Deliverable

When talking about needfinding, it is best practice to organize your thoughts into three stages:

  • Observations: What you see and hear
  • Interpretations: Why you think you are hearing and seeing those things. What is driving those behaviors? This is the "recursive why" we talked about in the team meeting.
  • Needs: These are the deeper, more fundamental motivators driving people. As we discussed in the team meeting, needs must be verbs, not nouns.

For more detail about these, see this week's meeting slides.

The deliverable for the panel subsection: report on some of the observations you gathered during the panel. You can hold back on interpretations and needs until you finish the rest of the observation-gathering in the next steps.

Reading Others' Insights

An hour in the library can save a year of fieldwork. Please read the following materials; these papers will also build out our foundation of related work. The papers are copyrighted, so please don't redistribute them. You need to be signed in to view them.

Worker perspective: Being a Turker

Martin D, Hanrahan B V, O'Neill J, et al. Being a Turker. Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing. ACM, 2014: 224-235.

Worker perspective: Turkopticon

Irani L C, Silberman M S. Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2013: 611-620.

If you are able to access Mechanical Turk, you can go try Turkopticon yourself here.

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

Kittur A, Chi E H, Suh B. Crowdsourcing user studies with Mechanical Turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2008: 453-456.

Requester perspective: The Need for Standardization in Crowdsourcing


Both perspectives: A Plea to Amazon: Fix Mechanical Turk



Deliverables

Just as in the previous deliverable, we will focus on observations. As you do these readings, lay out observations of raw behaviors and issues. Try to avoid including interpretations and needs right now, even though the authors likely included many of them. Focus just on behaviors. What is *happening* on these systems? Remember, we'll get to interpretations and needs next, so hold off.

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

2) What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

Recommended (but optional) Materials

If you are interested in doing more reading, these optional materials will help you better understand workers and requesters:


The People Inside Your Machine - a 22-minute NPR radio program about crowd workers

Helpful Blog Posts To Help You Design Your HITs - links to several resources aimed towards helping requesters create better HITs

Marshall C C, Shipman F M. Experiences surveying the crowd: Reflections on methods, participation, and reliability. Proceedings of the 5th Annual ACM Web Science Conference. ACM, 2013: 234-243. - discusses using MTurk for surveys (requesters' needs)

Stolee K T, Elbaum S. Exploring the use of crowdsourcing to support empirical studies in software engineering. Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement. ACM, 2010: 35. - discusses using MTurk for software engineering studies, and challenges recruiting workers with specialized skills (requesters' needs)

Little G, Chilton L B, Goldman M, et al. TurKit: Human computation algorithms on Mechanical Turk. Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology. ACM, 2010: 57-66. - discusses using MTurk programmatically for iterative tasks (requesters' needs)

Salehi N, et al. We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2015. - a system for organizing workers towards collective action (workers' needs)

Guidelines for Academic Requesters - A set of guidelines written by Dynamo participants which discusses how to avoid common mistakes made by requesters

Do Needfinding by Browsing MTurk-Related Forums, Blogs, Reddit, etc.

We have the opportunity to do a little bit of fieldwork as well.

There are a number of forums dedicated to Mechanical Turk workers and requesters, such as Turker Nation, MTurk Forum, MTurk Grind, etc. Spamgirl has made a great list of them here. Go introduce yourself, and if folks are willing, engage with them in their chat rooms and threads. Be thoughtful! Researchers have a bit of a reputation for just marching in and telling Turkers what they really need, which isn't appreciated.

Requesters also often ask for help and advice on Reddit - you can Search Requester Help on Reddit. There are lots of issues implicit in what's posted there. Other resources on Reddit include r/mturk and r/HITsWorthTurkingFor.

You should browse these resources (you are welcome to find other crowd-work related resources as well), aiming to discover needs that workers and requesters have.

Deliverable

List out the observations you made while doing your fieldwork. Links to examples (posts / threads) would be extremely helpful.

Synthesize the Needs You Found

Now it's time to synthesize your results. This may be the most intense part of this week's milestone, and should involve your whole team.

First, synthesize your raw observations into interpretations; your eventual goal is to produce needs. Ask yourself *why* you think you saw certain things in your observations and suggest a reason. Then ask yourself why that reason matters, which should surface another, deeper reason. Eventually these reasons will resolve into needs. For instance: a worker constantly refreshes the task list (observation), because good HITs disappear within seconds (interpretation), because their income depends on catching them (deeper interpretation), so workers need to find good work without constant vigilance (need).

Remember, needs are verbs, not nouns. Not "Workers need money" or "Workers need independence" --- those are nouns. More like "Workers need to trust that they'll get paid later for the work they're doing now."

The method we recommend: if you're in a room together, put your observations on stickies and organize, reorganize, and reorganize them. Keep discussing what you've found, what patterns are in the data, contradictions, things that people say are fine but are clearly too cumbersome, and so on. Set aside a large block of time to do this if you can. It's tough to rush it. If your team is remote, we recommend getting on a Google Hangout or Skype call and using a Google Doc as your scratch space.

Deliverables

List out your most salient and interesting needs for workers, and for requesters. Please back up each one with evidence: at least one observation, and ideally an interpretation as well.

A set of bullet points summarizing the needs of workers.

  • Example: Workers need to be respected by their employers. Evidence: Sanjay said in the worker panel that he wrote an angry email to a requester who mass-rejected his work. Interpretation: this wasn't actually about the money; it was about the disregard for Sanjay's work ethic.

A set of bullet points summarizing the needs of requesters.

  • Example: requesters need to trust the results they get from workers. Evidence: In this thread on Reddit (linked), a requester is struggling to know which results to use and which ones to reject or re-post for more data. Interpretation: it's actually quite difficult for requesters to know whether 1) a worker tried hard but the question was unclear or very difficult or an edge case, or 2) a worker wasn't really putting in a best effort.

Submitting

Please start early so you have time to ask questions or figure out the CrowdGrader system. Thanks.

Create a Wiki Page for your Team's Submission

Please create a page for your team's submission at http://crowdresearch.stanford.edu/w/index.php?title=Milestone_2_YourTeamName&action=edit (substituting YourTeamName with your team's name), and copy over the template at Milestone 2 Template. If you have never created a wiki page before, please see this or watch this.

Submit on CrowdGrader and do Peer Evaluations

After you have put your team's submission on the wiki, post the link to that wiki page on CrowdGrader!

Step 1 for everyone: Most of you don't have to enroll; I have done it for you. You can go directly to http://www.crowdgrader.org/crowdgrader/venues/view_venue/879 . However, if you cannot access it, please self-enroll using this link: http://www.crowdgrader.org/crowdgrader/venues/join/879/dufipo_fivuvy_tunyge_qedumy

Step 2 for team leaders (repeats every week): Make sure all of your team members have enrolled in the system (I have done it for you, but please double-check). Then add them to your group/team - there's an option to add collaborators (your team members may get an email and have to confirm before they show up as your collaborators). Yes, you have to repeat this process each week for now; we are working with CrowdGrader so that you won't have to in the future. Please give yourself enough time to add collaborators to your team: you CANNOT add collaborators after you make your submission.

Step 3 for team leaders: Make the submission on behalf of your team. Only team leaders should make the submission (unless it's not possible for them).

Step 4 for everyone: Begin peer-evaluation. We will NOT send any email notification for this, so please check back on CrowdGrader to find submissions to evaluate. Everyone will be randomly assigned 5 submissions and must grade at least 3 of them (you can skip 2); 25% of your grade depends on your duty to peer-grade others, so check CrowdGrader to find and grade the submissions.

Please comment and justify why you gave each score, and point out the good and bad points of the submission. For this week, look for the most interesting and insightful needs; see if you can find or infer some yourself, and synthesize them into feedback that can be shared with all of us. Team leaders, please make sure that every member of your team grades their assigned submissions.

Milestone 2 Submissions

To help us track and browse all submissions, once you have finished your Milestone 2, go to the page below and post your link:

Milestone 2 Submissions

Fill out this week's survey

Please fill out this survey to provide feedback on this week's meeting and milestone so we can improve them.