Winter Milestone 2

Due date (PST): 8:00 pm 14th Jan 2016 for submission, 12 pm 25th Jan 2016 for peer-evaluation.

The goals for this week are to:

  • Learn about needfinding
  • Determine the needs of workers and requesters (from panels and from readings)
  • YouTube link of the meeting today: watch
  • Meeting 2 slideshow: pdf
  • YouTube link of Panel 1: watch
  • YouTube link of Panel 2: watch


Learn about Needfinding

We talked about some highlights of needfinding in this week's meeting. We suggest that you watch Scott Klemmer's Coursera HCI lectures on needfinding, especially the first one.

Another good resource is Dev Patnaik's book on needfinding (optional reading).

Attend a Monday Panel to Talk with Expert Workers and Requesters

There will be two panels on Hangouts on Air where experienced workers and requesters will discuss their experiences on crowdsourcing platforms (MTurk, oDesk, etc.). You should attend one to better understand the needs of workers and requesters:

  • Panel 1: 8:30 am PST (Pacific Time) / 10 pm IST (Indian Time) on Monday March 9th
  • Panel 2: 6 pm PST (Pacific Time) / 9 pm EST (Eastern Time) on Monday March 9th

Please note that Daylight Saving Time (United States) 2015 begins at 2:00 AM on Sunday, March 8. If you are outside of the US, please check your local time accordingly and match it against PST/California time.

Deliverable

When talking about needfinding, it is best practice to organize your thoughts into three stages:

  • Observations: What you see and hear
  • Interpretations: Why you think you are hearing and seeing those things. What is driving those behaviors? This is the "recursive why" we talked about in team meeting.
  • Needs: These are the deeper, more fundamental driving motivators for people. As we talked about in team meeting, needs must be verbs, not nouns.

For more detail about these, see this week's meeting slides.
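If it helps your team stay disciplined about the three stages, here is a minimal sketch (in Python; the structure and field names are our own illustration, not part of the milestone requirements) of one way to record each entry so observations stay separate from interpretations and needs:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class NeedfindingEntry:
        """One needfinding note; illustrative structure only."""
        observation: str  # what you saw or heard, as raw as possible
        interpretations: List[str] = field(default_factory=list)  # "recursive why" answers
        need: str = ""  # a verb phrase, filled in during synthesis

    # Record the observation first; add interpretations and the need later.
    entry = NeedfindingEntry(
        observation="A panelist said he keeps a browser tab open all day watching for new HITs.",
    )
    entry.interpretations.append("Good HITs disappear within minutes of being posted.")
    entry.need = "Workers need to find worthwhile tasks without constant vigilance."
    print(entry)

Keeping the observation in its own field makes it easier to resist jumping straight to interpretations, which the deliverables below ask you to hold off on.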

The deliverable for the panel subsection: report on some of the observations you gathered during the panel. You can hold back on interpretations and needs until you finish the rest of the observation-gathering in the next steps.

Reading Others' Insights

An hour in the library can save a year of fieldwork. Please read the following materials. These papers will also build out our foundation of related work. The papers are copyrighted, so please don't redistribute them. You need to be signed in to view them.

Worker perspective: Being a Turker

Martin D, Hanrahan B V, O'Neill J, et al. Being a turker. Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing. ACM, 2014: 224-235.

Worker perspective: Turkopticon

Irani L C, Silberman M. Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2013: 611-620.

If you are able to access Mechanical Turk, you can go try Turkopticon yourself here.

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

Kittur A, Chi E H, Suh B. Crowdsourcing user studies with Mechanical Turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2008: 453-456.

Requester perspective: The Need for Standardization in Crowdsourcing

The Need for Standardization in Crowdsourcing

Both perspectives: A Plea to Amazon: Fix Mechanical Turk

A Plea to Amazon: Fix Mechanical Turk


Deliverables

Just as in the previous deliverable, we will focus on observations. As you do these readings, lay out observations of raw behaviors and issues. Try to avoid including interpretations and needs right now, even though the authors likely included many of them. Focus just on behaviors. What is *happening* on these systems? Remember, we'll get to interpretations and needs next, so hold off.

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

2) What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

Recommended (but optional) Materials

If you are interested in doing more reading, these optional materials will help you better understand workers and requesters:

The People Inside Your Machine - a 22-minute NPR radio program about crowd workers

Helpful Blog Posts To Help You Design Your HITs - links to several resources aimed towards helping requesters create better HITs

Marshall C C, Shipman F M. Experiences surveying the crowd: Reflections on methods, participation, and reliability. Proceedings of the 5th Annual ACM Web Science Conference. ACM, 2013: 234-243. - discusses using MTurk for surveys (requesters' needs)

Stolee K T, Elbaum S. Exploring the use of crowdsourcing to support empirical studies in software engineering. Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement. ACM, 2010: 35. - discusses using MTurk for software engineering studies, and challenges recruiting workers with specialized skills (requesters' needs)

Little G, Chilton L B, Goldman M, et al. TurKit: Human computation algorithms on Mechanical Turk. Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology. ACM, 2010: 57-66. - discusses using MTurk programmatically for iterative tasks (requesters' needs)

Salehi N, et al. We Are Dynamo: Overcoming stalling and friction in collective action for crowd workers. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2015. - a system for organizing workers towards collective action (workers' needs)

Guidelines for Academic Requesters - A set of guidelines written by Dynamo participants which discusses how to avoid common mistakes made by requesters

Do Needfinding by Browsing MTurk-related Forums, Blogs, Reddit, etc.

We have the opportunity to do a little bit of fieldwork as well.

There are a number of forums dedicated to Mechanical Turk workers and requesters, such as Turker Nation, MTurk Forum, MTurk Grind, etc. Spamgirl has made a great list of them here. Go introduce yourself, and if folks are willing, engage with them in their chat rooms and threads. Be thoughtful! Researchers have a bit of a reputation for just marching in and telling Turkers what they really need, which isn't appreciated.

Requesters also often ask for help and advice on Reddit - you can Search Requester Help on Reddit. There are lots of issues implicit in what's posted there. Other resources on Reddit include r/mturk and r/HITsWorthTurkingFor.

You should browse these resources (you are welcome to find other crowd-work related resources as well), aiming to discover needs that workers and requesters have.
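If you want to skim these forums more systematically, a small script can surface recent threads to read. Here is a minimal sketch (in Python, using the third-party praw library; the credentials are placeholders you would need to register for, and the subreddit and limit are arbitrary choices):

    # Sketch: list recent r/mturk threads so you can paste links into your notes.
    # praw is a third-party library (pip install praw); register an app at
    # https://www.reddit.com/prefs/apps to obtain real credentials.
    import praw

    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",          # placeholder
        client_secret="YOUR_CLIENT_SECRET",  # placeholder
        user_agent="crowdresearch needfinding script",
    )

    # Print the title and permalink of the newest 25 submissions.
    for submission in reddit.subreddit("mturk").new(limit=25):
        print(submission.title)
        print("https://www.reddit.com" + submission.permalink)
        print()

A script like this only surfaces candidate threads; the observations themselves still have to come from actually reading the discussions.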

Deliverable

List out the observations you made while doing your fieldwork. Links to examples (posts / threads) would be extremely helpful.

Synthesize the Needs You Found

Now it's time to synthesize your results. This may be the most intense part of this week's milestone, and should involve your whole team.

First, synthesize your raw observations into interpretations. Your eventual goal is to produce needs. Ask yourself *why* you think you saw certain things in your observations. Suggest a reason. Ask yourself why that reason matters. This should lead you to another reason. Eventually these reasons will resolve into needs.

Remember, needs are verbs, not nouns. Not "Workers need money" or "Workers need independence" --- those are nouns. More like "Workers need to trust that they'll get paid later for the work they're doing now."
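
To make the chain concrete, here is one worked "recursive why" example (the scenario is invented for illustration, not drawn from the panels or readings):

    # An invented "recursive why" chain, for illustration only.
    chain = {
        "observation": "A requester reposts the same HIT three times with small wording tweaks.",
        "why 1": "Workers kept returning results the requester could not use.",
        "why 2": "The task instructions left key edge cases ambiguous.",
        "need": "Requesters need to learn how workers will read their instructions before launching a task.",
    }
    for stage, text in chain.items():
        print(stage + ": " + text)

Note that the need at the end of the chain is a verb phrase, as required above.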

The method we recommend: if you're in a room together, put your observations on stickies and organize, reorganize, and reorganize them. Keep discussing what you've found, what patterns are in the data, contradictions, things that people say are fine but are clearly too cumbersome, and so on. Set aside a large block of time to do this if you can. It's tough to rush it. If your team is remote, we recommend getting on a Google Hangout or Skype call and using a Google Doc as your scratch space.

Deliverables

List out your most salient and interesting needs for workers, and for requesters. Please back up each one with evidence: at least one observation, and ideally an interpretation as well.

A set of bullet points summarizing the needs of workers.

  • Example: Workers need to be respected by their employers. Evidence: Sanjay said in the worker panel that he wrote an angry email to a requester who mass-rejected his work. Interpretation: this wasn't actually about the money; it was about the disregard for Sanjay's work ethic.

A set of bullet points summarizing the needs of requesters.

  • Example: requesters need to trust the results they get from workers. Evidence: In this thread on Reddit (linked), a requester is struggling to know which results to use and which ones to reject or re-post for more data. Interpretation: it's actually quite difficult for requesters to know whether 1) a worker tried hard but the question was unclear or very difficult or an edge case, or 2) a worker wasn't really putting in a best effort.

Submitting

Please create a page for your team's submission at http://crowdresearch.stanford.edu/w/index.php?title=WinterMilestone_1_YourTeamName&action=edit (substituting YourTeamName with your team name), and copy over the template at WinterMilestone 1 Template. If you have never created a wiki page before, please see this or watch this video on YouTube.

[Team Representative] Submission: post the links to your ideas by 8:00 pm PST 17th Jan

We have a [Reddit-like service] on which you can post the links to the wiki pages for the submissions, explore them, and upvote them.

Sign-up Instructions: Log in with either Twitter or Facebook on the [website]. When it asks you to pick your username, pick the same username as your Slack; this will help us identify and track your contributions better.

Link to the website: Meteor site. Post links to your ideas only once they're finished. Give your posts titles matching your team name this week.

Please submit your finished ideas by 8:00 pm PST Sunday 17th Jan, and DO NOT vote or comment until then.

[Everyone] Peer-evaluation from 8:05 pm PST Sunday 17th Jan until 12 pm PST Monday 18th Jan

After the submission phase, you are welcome to browse through, upvote, and comment on others' ideas. We especially encourage you to look at and comment on ideas that haven't yet gotten feedback, to make sure everybody's ideas get feedback. You can use http://crowdresearch.meteor.com/needcomments to find ideas that haven't yet gotten feedback, and http://crowdresearch.meteor.com/needclicks to find ideas that haven't yet been viewed many times.

COMMENT BEST PRACTICES: Everybody on the team reviews at least 3 ideas, each supported by a comment. The comment has to justify your reason for the upvote. It should be constructive, and should mention a positive aspect of the idea worth sharing. Negative comments are discouraged; instead, phrase your comment as a suggestion - if you disliked an idea, try to suggest improvements (do not criticize an idea; no idea is bad, and every idea has room for improvement).