Welcome to the wiki for the Crowd Research project!
NEW: The Stanford Crowd Research Summer Program is accepting applications now; apply or spread the word: hci.st/crowdresearch
Some research projects are too big and too important to tackle alone. Sometimes, we need to team up.
At Stanford’s Computer Science department, we've observed that people who are aiming to get research experience or launch their research career will often fall into an expertise valley. Undergraduates are assigned extremely tightly scoped activities within research projects, getting little room for creativity. Then these folks get into PhD programs, have literally the entire space of human knowledge to explore, and don’t have enough scaffolding to make quick progress.
Here in the Stanford HCI Group, we’re going to create a crowdsourced research team to tackle both of these challenges together. We’ll gather as many talented folks as we can and work to build the bridge between tightly-scoped work and open-ended exploration. It will have far more flexibility than a typical research experience, but with a focused goal, so we can bring each other back on track each week and nobody gets lost.
About the project
Whether you need help gathering data, labeling machine learning training examples, running experiments, or transcribing audio, today we use crowdsourcing platforms such as Amazon Mechanical Turk. However, these platforms are notoriously bad at ensuring high-quality results, treating workers with respect and paying them fair wages, and making it easy to author effective tasks. It’s not hard to imagine that we could do better.
This research will be a complete design, implementation, launch, and evaluation of a new crowdsourcing platform. What would it take to create an effective marketplace? One where workers have more power in the employment relationship, or could take additional responsibility for the result quality? How might we design such a market? Could we launch it and become the new standard? This research in human-computer interaction will involve a combination of design thinking, web development, and experimental design. This is far more ambitious than your typical project. It’s an entire marketplace design question. Thus, we’re banding together to solve it.
Why participate?
First, there’s creating a crowdsourcing market that becomes the new standard. This could lead to a far better future for crowdsourcing and crowd work, and millions of people could eventually use it. It’s research, of course, so there’s always a risk it might not work out — but if we knew it would work, it wouldn’t be research!
Second, we’ll be submitting papers to top-tier conferences based on our work. If you are considering an MS or especially a PhD program, being a heavily contributing author on a paper can greatly improve your chances. How much you contribute to the project will determine author order. Last, I really do hope to build relationships with a diverse range of researchers.
Meetings and slides
Meeting 1: Why do we need a new crowd platform? Introducing the crowd research program.
Meeting 2: Worker and requester reflections + need finding techniques
Meeting 3: From needs to ideas, two factors at play: trust and power
- YouTube link of the meeting: watch
- Open gov mid-week meeting: watch
- Reputation system meeting 1: watch
- Reputation system meeting 2: watch
- Getting started with infrastructure efforts: watch
All times are in Pacific Time (California).
Milestone 1 - 11:59 pm 4th March 2015 for submission, 9 am 6th March 2015 for peer-evaluation.
Milestone 2 - 11:59 pm 11th March 2015 for submission, 9 am 13th March 2015 for peer-evaluation. Note that Daylight Saving Time in the United States begins at 2:00 am on Sunday, March 8, 2015; participants outside the US may want to double-check their local times.
Milestone 3 - 11:59 pm 18th March 2015 for submission, 9 am 20th March 2015 for voting and commenting on others' ideas.
Milestone 4 - 11:59 pm 25th March 2015 for submission, 9 am 27th March 2015 for voting and commenting on others' ideas.
Milestone 5 - 11:59 pm 1st April 2015 for submission, 9 am 3rd April 2015 for voting and commenting on others' prototypes.
Milestone 6 - 11:59 pm 8th April 2015 for submission, 9 am 10th April 2015 for voting and commenting on others' prototypes.
Milestone 7 - 11:59 pm 15th April 2015 for submission, 9 am 17th April 2015 for voting and commenting on others' prototypes.
Milestone 8 - 11:59 pm 22nd April 2015 for submission, 9 am 24th April 2015 for voting and commenting on others' prototypes.
Milestone 9 - 11:59 pm 29th April 2015 for submission, 9 am 1st May 2015 for voting and commenting on others' prototypes.
Milestone 10 - 11:59 pm 6th May 2015 for submission, 9 am 8th May 2015 for voting and commenting on others' prototypes.
Milestone 11 - 11:59 pm 13th May 2015 for submission, 9 am 15th May 2015 for voting and commenting on others' prototypes.
Milestone 12 - 11:59 pm 20th May 2015 for submission, 9 am 22nd May 2015 for getting user-study done.
Infrastructure - homepage for infrastructure-related efforts; contains separate milestones.
General weekly plan
We’ll meet weekly over videochat and lay out our goals for the next week. At the end of the week, you’ll submit what you’ve been working on. Your peers and a Ph.D. student here at Stanford will critique the work, and we’ll talk about the best submissions each week in our meeting. The sky’s the limit.
I’m sure we’ll adjust this as we go; after all, this entire crowdsourced research idea is a bit of a research project in itself.
- Saturday morning 9 am PST: Prof meeting with participants and milestone set for the next week (over Google Hangout on Air)
- Saturday after meeting - Wednesday midnight PST: participants work on their milestones (~5 days)
- Post Wednesday midnight PST - Friday morning 9 am PST: peer-evaluation by the participants (1+ days)
- Friday post 9 am PST - Friday evening PST: Research Assistants (RAs) review the top submissions and meet with the Prof in the evening. Pre-meeting discussions happen, and milestones are designed for the upcoming week.
- Saturday morning 9 am PST: Prof leads the meeting based on input from the RAs and the top submissions. Participants receive their next milestone and a feedback survey after every meeting.
Required background
In research? None. Anybody who is smart and dedicated can help us envision the future of crowdsourcing and articulate how it might play out.
In terms of skills, there are many different ways that you can participate. If you want to contribute design skills, having a portfolio of past work would be helpful. If you are a CS major or enjoy programming, you’d likely need to have completed an introductory programming course sequence to succeed. We’re currently building infrastructure and implementing foundations using Django, AngularJS, PostgreSQL, and Django REST Framework, so knowledge or experience in these areas would be extremely helpful. If you have experience in social science methods (e.g., surveys, qualitative work, designing controlled experiments), there will be lots to do as well, helping us make sure we’re creating the right thing.
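To give a taste of the kind of data modeling involved, here is a minimal sketch, in plain Python rather than our actual Django stack, of the core entities a crowdsourcing marketplace might track. All names and fields here are illustrative assumptions, not the project's real schema:

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    """A worker's answer to a task, awaiting quality review (illustrative)."""
    worker_id: str
    answer: str
    approved: bool = False

@dataclass
class Task:
    """A unit of work posted by a requester (illustrative)."""
    title: str
    reward_cents: int  # fair pay for workers is a core design goal
    submissions: list = field(default_factory=list)

# A requester posts a task and a worker submits to it.
task = Task(title="Transcribe a 30-second audio clip", reward_cents=50)
task.submissions.append(Submission(worker_id="w42", answer="hello world"))

# In a real platform, approving a submission would trigger payment.
task.submissions[0].approved = True
```

In the actual infrastructure, entities like these would become Django models backed by PostgreSQL and exposed over a Django REST Framework API, with the AngularJS frontend consuming that API.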
For everyone, a class in human-computer interaction (such as Scott Klemmer’s HCI Online, which you can complete as prep) will be a huge leg up.
- Slack - used for chat and discussion
- Meteor - used for voting, evaluation and comments
- GitHub - used for collaborative development
- Relevant Work - used to save relevant articles, links and papers
- Forums - discussions about this project on external sites. Note that all official announcements and communications will occur via Slack and email.
- Resources - used to index all platforms and resources as we evolve
Want to share some other resources? Create a wiki page, and post it at Resources.
About this wiki
You need to be logged in to edit this wiki and to view private pages. If you already have an account but cannot log in, try resetting your password. If you need an account, please contact Geza (@geza on Slack) and you will be emailed your login details.
All pages on this wiki are public by default - so anyone visiting this site can see them.
Any page whose name ends with (private) will be private - so you need to be logged in to see it. See example.