Winter Milestone 2

From crowdresearch
Revision as of 23:23, 17 January 2016

Due date (PST): 11:59 pm 11th March 2015 for submission, 9 am 13th March 2015 for peer-evaluation. Everyone, please note that Daylight Saving Time in the United States begins at 2:00 AM on Sunday, March 8, 2015; if you are outside the US, double-check your local time against the new US times.

The goals for this week are to:

  • Learn about needfinding
  • Determine the needs of workers and requesters (from panels and from readings)
  • Youtube link of the meeting today: [http://www.youtube.com/watch?v=p88j4AmEOec watch]
  • Meeting 2 slideshow: [[:Media:Week2 presentation.pdf | pdf]]
  • Youtube link of Panel 1: [http://www.youtube.com/watch?v=7qNE1GgkoPE watch]
  • Youtube link of Panel 2: [http://www.youtube.com/watch?v=EdlzAwIvOUU watch]


Learn about Needfinding

We talked about some highlights of needfinding in this week's meeting. We suggest that you watch Scott Klemmer's Coursera HCI lectures on needfinding, especially the first one:

  • [https://www.youtube.com/watch?v=_jlgOJNxE-Q&list=PL4SrvqsowBdJ28eZW2mla-b7pEO7HukyY&index=4 Participant Observation]
  • [https://www.youtube.com/watch?v=ESPhIoAinaM&index=5&list=PL4SrvqsowBdJ28eZW2mla-b7pEO7HukyY Interviewing]
  • [https://www.youtube.com/watch?v=YpYiygICKvI&index=6&list=PL4SrvqsowBdJ28eZW2mla-b7pEO7HukyY Additional Needfinding Strategies]

Another good resource is [[:Media:Needfinding patnaik (private).pdf | Dev Patnaik's book on needfinding]] (optional reading).

Attend a Monday Panel to Talk with Expert Workers and Requesters

There will be two panels on Hangouts on Air where experienced workers and requesters will discuss their experiences on crowdsourcing platforms (MTurk, oDesk, etc.). You should attend one to better understand the needs of workers and requesters:

  • Panel 1: 8:30 am PST (Pacific Time) / 10 pm IST (Indian Time) on Monday March 9th
  • Panel 2: 6 pm PST (Pacific Time) / 9 pm EST (Eastern Time) on Monday March 9th

Please note that Daylight Saving Time in the United States begins at 2:00 AM on Sunday, March 8, 2015. If you are outside the US, check your time accordingly and match it against PST/California time.

Deliverable

When talking about needfinding, it is best practice to organize your thoughts into three stages:

  • Observations: What you see and hear
  • Interpretations: Why you think you are hearing and seeing those things. What is driving those behaviors? This is the "recursive why" we talked about in team meeting.
  • Needs: These are the deeper, more fundamental driving motivators for people. As we talked about in team meeting, needs must be verbs, not nouns.

For more detail about these, see this week's meeting slides.
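
If your team keeps its needfinding notes digitally, one lightweight way to enforce the three-stage discipline is to record each entry with explicit fields for observation, interpretation, and need. A minimal sketch, purely illustrative (the `Note` structure and the sample entries are our own, not part of the milestone):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Note:
    """One needfinding entry, organized by the three stages."""
    observation: str                      # what you saw or heard (raw)
    interpretation: Optional[str] = None  # why you think it happened
    need: Optional[str] = None            # a verb phrase, filled in last

notes = [
    Note("A worker posted an angry reply to a mass-rejection thread."),
    Note("A requester asked which results to trust before approving.",
         interpretation="Result quality is hard to judge from output alone."),
]

# During synthesis, flag entries that still lack a stated need.
todo = [n.observation for n in notes if n.need is None]
print(len(todo))  # prints 2 -- both entries above still lack a need
```

Keeping the stages as separate fields makes it obvious which observations have not yet been pushed through the "recursive why" to an interpretation and a need.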

The deliverable for the panel subsection: report on some of the observations you gathered during the panel. You can hold back on interpretations and needs until you finish the rest of the observation-gathering in the next steps.

Reading Others' Insights

An hour in the library can save a year of fieldwork. Please read the following materials; these papers will also help grow our foundation of related work. The papers are copyrighted, so please don't redistribute them. You need to be signed in to view them.

Worker perspective: Being a Turker

[[:Media:Being a Turker (private).pdf | Martin D, Hanrahan B V, O'Neill J, et al. Being a turker. Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing. ACM, 2014: 224-235.]]

Worker perspective: Turkopticon

[[:Media:turkopticon (private).pdf | Irani L C, Silberman M. Turkopticon: Interrupting worker invisibility in amazon mechanical turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2013: 611-620.]]

If you are able to access Mechanical Turk, you can go try Turkopticon yourself [https://turkopticon.ucsd.edu/ here].

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

[[:Media:Crowdsourcing User Studies with Mechanical Turk (private).pdf | Kittur A, Chi E H, Suh B. Crowdsourcing user studies with Mechanical Turk. Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 2008: 453-456.]]

Requester perspective: The Need for Standardization in Crowdsourcing

[http://www.behind-the-enemy-lines.com/2012/02/need-for-standardization-in.html The Need for Standardization in Crowdsourcing]

Both perspectives: A Plea to Amazon: Fix Mechanical Turk

[http://www.behind-the-enemy-lines.com/2010/10/plea-to-amazon-fix-mechanical-turk.html A Plea to Amazon: Fix Mechanical Turk]


Deliverables

Just as in the previous deliverable, we will focus on observations. As you do these readings, lay out observations of raw behaviors and issues. Try to avoid including interpretations and needs right now, even though the authors likely included many of them. Focus just on behaviors. What is *happening* on these systems? Remember, we'll get to interpretations and needs next, so hold off.

1) What observations about workers can you draw from the readings? Include any that are strongly implied but not explicit.

2) What observations about requesters can you draw from the readings? Include any that are strongly implied but not explicit.

Recommended (but optional) Materials

If you are interested in doing more reading, these optional materials will help you better understand workers and requesters:

[http://www.npr.org/blogs/money/2015/01/30/382657657/episode-600-the-people-inside-your-machine The People Inside Your Machine] - a 22-minute NPR radio program about crowd workers

[http://mechanicalturk.typepad.com/blog/2014/11/helpful-blog-posts-to-help-you-design-your-hits.html Helpful Blog Posts To Help You Design Your HITs] - links to several resources aimed at helping requesters create better HITs

[[:Media:Experiences Surveying the Crowd (private).pdf | Marshall C C, Shipman F M. Experiences surveying the crowd: Reflections on methods, participation, and reliability. Proceedings of the 5th Annual ACM Web Science Conference. ACM, 2013: 234-243.]] - discusses using MTurk for surveys (requesters' needs)

[[:Media:Crowdsourcing to support Software Engineering (private).pdf | Stolee K T, Elbaum S. Exploring the use of crowdsourcing to support empirical studies in software engineering. Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement. ACM, 2010: 35.]] - discusses using MTurk for software engineering studies, and the challenges of recruiting workers with specialized skills (requesters' needs)

[[:Media:Turkit (private).pdf | Little G, Chilton L B, Goldman M, et al. Turkit: human computation algorithms on mechanical turk. Proceedings of the 23rd annual ACM symposium on User interface software and technology. ACM, 2010: 57-66.]] - discusses using MTurk programmatically for iterative tasks (requesters' needs)

[[:Media:Dynamo (private).pdf | Salehi N, et al. We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers. 2015.]] - a system for organizing workers towards collective action (workers' needs)

[http://wiki.wearedynamo.org/index.php/Guidelines_for_Academic_Requesters Guidelines for Academic Requesters] - a set of guidelines written by Dynamo participants that discusses how to avoid common mistakes made by requesters

Do Needfinding by Browsing MTurk-related forums, blogs, Reddit, etc

We have the opportunity to do a little bit of fieldwork as well.

There are a number of forums dedicated to Mechanical Turk workers and requesters, such as [http://www.turkernation.com/ Turker Nation], [http://mturkforum.com/ MTurk Forum], [http://www.mturkgrind.com/ MTurk Grind], etc. Spamgirl has made a great list of them [[Forums_and_Other_Resources | here]]. Go introduce yourself, and if folks are willing, engage with them in their chat rooms and threads. Be thoughtful! Researchers have a bit of a reputation for just marching in and telling Turkers what they really need, which isn't appreciated.

Requesters also often ask for help and advice on Reddit - you can [http://rh.reddit.com/r/mturk/search?q=flair%3ARequester%2BHelp&sort=top&restrict_sr=on&t=all search Requester Help on Reddit]. There are lots of issues implicit in what's posted there. Other resources on Reddit include [http://www.reddit.com/r/mturk r/mturk] and [http://www.reddit.com/r/HITsWorthTurkingFor r/HITsWorthTurkingFor].

You should browse these resources (you are welcome to find other crowd-work related resources as well), aiming to discover needs that workers and requesters have.
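
One low-effort way to digest what you collect while browsing: paste post titles or excerpts into a list and tally recurring terms, so frequent complaints stand out before you start interpreting. A rough sketch (the sample posts and the stop-word list are made up for illustration):

```python
from collections import Counter
import re

# Hypothetical excerpts you might have copied from forum threads.
posts = [
    "Requester rejected all my HITs with no explanation",
    "How long does payment take after approval?",
    "Mass rejection again -- is there any way to appeal?",
    "Payment pending for two weeks now",
]

# Throwaway stop-word list; extend it as noise words surface.
STOP = {"the", "my", "all", "with", "no", "how", "does", "is", "there",
        "any", "way", "to", "for", "two", "now", "after", "again"}

# Count remaining lowercase words across all posts.
words = Counter(
    w for post in posts
    for w in re.findall(r"[a-z]+", post.lower())
    if w not in STOP
)

# The most common terms hint at recurring pain points worth observing.
print(words.most_common(3))
```

This only surfaces candidate observations; deciding what each recurring term actually means belongs to the interpretation stage later.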

Deliverable

List out the observations you made while doing your fieldwork. Links to examples (posts / threads) would be extremely helpful.

Synthesize the Needs You Found

Now it's time to synthesize your results. This may be the most intense part of this week's milestone, and should involve your whole team.

First, synthesize your raw observations into interpretations. Your eventual goal is to produce needs. Ask yourself *why* you think you saw certain things in your observations. Suggest a reason. Ask yourself why that reason matters. This should let you come up with another reason. Eventually these will pop into needs.

Remember, needs are verbs, not nouns. Not "Workers need money" or "Workers need independence" --- those are nouns. More like "Workers need to trust that they'll get paid later for the work they're doing now."

The method we recommend: if you're in a room together, put your observations on stickies and organize, reorganize, and reorganize them. Keep discussing what you've found, what patterns are in the data, contradictions, things that people say are fine but are clearly too cumbersome, and so on. Set aside a large block of time to do this if you can. It's tough to rush it. If your team is remote, we recommend getting on a Google Hangout or Skype call and using a Google Doc as your scratch space.
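
For remote teams working in a shared doc, the sticky-note exercise can be approximated by tagging each observation with a candidate theme and regrouping as the discussion evolves. A small sketch of that regrouping step (the theme names and observations are invented examples):

```python
from collections import defaultdict

# (observation, candidate theme) pairs -- retag freely as themes shift.
tagged = [
    ("Worker wrote an angry email after a mass rejection", "respect"),
    ("Workers share lists of trustworthy requesters", "trust"),
    ("Requester unsure which answers to approve", "trust"),
    ("Forum thread tracks requesters who pay late", "payment"),
]

# Group observations under each theme, like clustering stickies on a wall.
clusters = defaultdict(list)
for observation, theme in tagged:
    clusters[theme].append(observation)

for theme, observations in clusters.items():
    print(f"{theme}: {len(observations)} observation(s)")
```

Themes with several observations behind them are the ones most likely to survive the "recursive why" and resolve into well-evidenced needs.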

Deliverables

List out your most salient and interesting needs for workers, and for requesters. Please back up each one with evidence: at least one observation, and ideally an interpretation as well.

A set of bullet points summarizing the needs of workers.

  • Example: Workers need to be respected by their employers. Evidence: Sanjay said in the worker panel that he wrote an angry email to a requester who mass-rejected his work. Interpretation: this wasn't actually about the money; it was about the disregard for Sanjay's work ethic.

A set of bullet points summarizing the needs of requesters.

  • Example: requesters need to trust the results they get from workers. Evidence: In this thread on Reddit (linked), a requester is struggling to know which results to use and which ones to reject or re-post for more data. Interpretation: it's actually quite difficult for requesters to know whether 1) a worker tried hard but the question was unclear or very difficult or an edge case, or 2) a worker wasn't really putting in a best effort.

Submitting

Please create a page for your team's submission at http://crowdresearch.stanford.edu/w/index.php?title=WinterMilestone_1_YourTeamName&action=edit (substituting YourTeamName with your team name), and copy over the template at WinterMilestone 1 Template. If you have never created a wiki page before, please see this or watch this video on YouTube.

[Team Representative] Submission: post the links to your ideas by 8:00 pm PST 17th Jan.

We have a Reddit-like service on which you can post the links to the wiki pages for the submissions, explore them, and upvote them.

Sign-up instructions: Log in with either Twitter or Facebook on the website. When it asks you to pick your username, pick the same username as on Slack; this will help us identify and track your contributions better.

Link to the website: Meteor site. Post links to your ideas only once they're finished. Give your posts titles matching your team name this week.

Please submit your finished ideas by 8:00 pm PST on Sunday, 17th Jan, and DO NOT vote/comment until then.

[Everyone] Peer-evaluation: from 8:05 pm PST on Sunday, 17th Jan until 12 pm PST on Monday, 18th Jan.

After the submission phase, you are welcome to browse through, upvote, and comment on others' ideas. We especially encourage you to look at and comment on ideas that haven't yet gotten feedback, to make sure everybody's ideas get feedback. You can use http://crowdresearch.meteor.com/needcomments to find ideas that haven't yet gotten feedback, and http://crowdresearch.meteor.com/needclicks to find ideas that haven't yet been viewed many times.

COMMENT BEST PRACTICES: Everybody on the team reviews at least 3 ideas, each supported by a comment. The comment has to justify your reason for upvoting, should be constructive, and should mention a positive aspect of the idea worth sharing. Negative comments are discouraged; instead, frame your comment as a suggestion. If you disliked an idea, suggest improvements rather than criticizing it (no idea is bad; every idea has room for improvement).