Crowd Research Initiative: Introduction and Recap
Welcome, summer program researchers!
Congratulations! You've been accepted to participate in one of the most ambitious projects in human-computer interaction research. Together, all 300+ of us are building the next generation of crowdsourcing platforms. As you know, we've been exploring this space for a while, so this page will get you on board quickly and up to speed with the rest of us. Let's start, shall we?
- 1 What is crowdsourcing?
- 2 What is a crowdsourcing platform?
- 3 Why do we need a new platform?
- 4 What do workers and requesters have to say?
- 5 What are the factors at play?
- 6 What are the emerging foundation and feature ideas?
- 7 What is the plan of action?
What is crowdsourcing?
Crowdsourcing is the process of obtaining needed services, ideas, or content by soliciting contributions from a large group of people, especially an online community, rather than from traditional employees or suppliers. A great example of crowdsourcing is Wikipedia - a website we all use to learn about specific topics. Wikipedia is built by people like us (the crowd), who have contributed their knowledge and time to create the world's largest encyclopedia. None of these contributors was traditionally hired; the entire effort was crowdsourced. Remember, the crowd is us: it can be anyone willing to contribute toward a given task or goal. Watch this quick video to learn more, or read about research in this space from slide 33 onward.
What is a crowdsourcing platform?
In order to harness the crowd's potential, we need a way to reach the crowd. A crowdsourcing platform is an internet marketplace that enables individuals and businesses (known as requesters) to coordinate the use of human intelligence to perform tasks. On such platforms, the crowd members are referred to as workers. Platforms like UpWork cater to larger projects such as website development, while platforms like Amazon Mechanical Turk help get microtasks done, such as labeling images or filling out surveys. Other platforms include the following (a code sketch after this list shows how a requester might post a microtask programmatically):
- Taskrabbit - to get the crowd to help you with physical-world needs, like shopping and delivery.
- Zooniverse - to get the crowd to help explore space in ways not possible even with the world's supercomputers.
- Gigwalk - to get the crowd to help you find local information from anywhere in the world.
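To make the requester side concrete, here is a minimal sketch of how a requester might post a microtask (a HIT) to Amazon Mechanical Turk's sandbox using Python and the boto3 client. The task content, reward, and form are hypothetical placeholders, not a prescribed workflow:

```python
# Hypothetical example: posting an image-labeling HIT to the MTurk sandbox.
# Assumes AWS credentials are already configured for boto3.
import boto3

mturk = boto3.client(
    'mturk',
    region_name='us-east-1',
    endpoint_url='https://mturk-requester-sandbox.us-east-1.amazonaws.com',
)

# A minimal HTMLQuestion: the form that workers see and submit.
# (In a real HIT, JavaScript fills in assignmentId from the page URL.)
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <form action="https://workersandbox.mturk.com/mturk/externalSubmit" method="post">
        <input type="hidden" name="assignmentId" value="">
        <p>Which category best describes this image?</p>
        <input type="text" name="label">
        <input type="submit" value="Submit">
      </form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title='Label an image',
    Description='Choose the category that best describes the image.',
    Keywords='image, labeling',
    Reward='0.05',                    # USD per assignment
    MaxAssignments=3,                 # collect three independent labels
    LifetimeInSeconds=3600,           # HIT stays visible for one hour
    AssignmentDurationInSeconds=300,  # five minutes per assignment
    Question=question_xml,
)
print('Posted HIT:', hit['HIT']['HITId'])
```

This is the requester's side of the market in a few lines: describe the task, set a price, ask for redundant responses, and publish it to the crowd.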
Why do we need a new platform?
Today’s platforms are notoriously bad at ensuring high-quality results, producing fair wages and respect for workers, and making it easy to author effective tasks. What might we create if we knew that our children would become crowd workers? Our goal is to reconsider the design of crowdsourcing platforms (and to train you to become awesome researchers while we do it). Most current research focuses on improving the output of a variety of crowdsourcing platforms; we want to change the platforms themselves so that better output follows by design.
What do workers and requesters have to say?
During the first few weeks, we asked participants to step into the shoes of both a worker and a requester and share their experiences. Participants were encouraged to explore a variety of platforms, such as Mechanical Turk and Clickworker, and then do some needfinding. This exercise helped us identify problems and areas for improvement.
As a worker
- Very difficult to find work: we kept seeing the same tasks that nobody was doing, over and over. It becomes super repetitive.
- Concerns about wages.
- Need to be treated fairly and respectfully, and have a voice in the platform
- People with no experience earned much less than people with Turking experience; on AMT, the high-paying tasks were gated behind qualifications. Workers need to be able to expose their skills so they can get work they are qualified for and advance their skills
- Scams are quite common.
- Suggestion: Check out Turkopticon, Reddit’s /r/mturk and /r/HITsworthturkingfor
As a requester
- For people who were new to the platform, signing up was a bear, and wrangling CSVs was not much better
- It’s hard to trust the results: requesters had to go back and inspect every single response
- Need to get their HITs completed (quickly / correctly)
- Need workers with the appropriate skills and demographics to do their tasks, and need to be able to trust them
- Need to be able to easily generate good tasks
- Need to be able to price their tasks appropriately
- Seemed like AMT was designed to be most friendly to requesters
- Suggestion: Check out Panos' blog and Reddit /r/mturk requester issues
What are the factors at play?
There are two main factors at play: trust and power.
- How do I trust who you say you are? How do I trust that the results I get are results that will be good? How do I trust that you’ll respect me as a worker, and pay me accordingly?
- Who has the power to post work? To edit other people’s posted work? To return results to the requester? Can I, as a worker, send it back myself, or does someone else need to vet it?
These factors inspired a bunch of research questions and ideas to address them, some of which follow (please see this slideshow for details):
- Task clarity: How might workers+requesters work together to produce higher-quality task descriptions?
- Data and Results: How might workers+requesters work together to produce higher-quality results?
- Disputes: How might workers+requesters resolve disputes over work fairly?
- Empathy: How might we build more empathy between workers and requesters?
- Transparency: How might we make payment clear and transparent?
- Reputation: How might we build reputation signals that workers and requesters can trust?
Suggestion: Learn about how to prototype and do storytelling here.
What are the emerging foundation and feature ideas?
First, a reminder: we aim to emphasize a conceptual point of view. A string of features is not a research contribution; a point of view gives us a single angle that informs our decisions and features. Before we talk about the emerging foundation and feature ideas, let's try to understand the difference between them in this project's context:
- Foundation: A new high-level approach to organizing our crowd platform to improve trust and power. For example: Workers organize themselves into collectives
- Features: Ideas which improve the strength of any platform but aren’t holistic or don’t give it a high-level purpose. For example: Task recommender systems
Some of the emerging ideas we've been exploring are (please see this slideshow for more details):
- [Active] Input + output moderation: Before tasks get posted to the system, they get looked at by a panel of reviewers and either edited or passed
- [Active] Import finance concepts/External quality ratings: Reputation, like a credit score: worker ratings of requesters place them into grades A, B (good), C (fair), and D (poor); see the sketch after this list
- Mobile crowd tasks: mClerk: tasks are designed so that people can complete them on their phones
- Tiers and mentors: Categorize workers into tiers based on experience; for example, entry-level workers receive aid to establish and familiarize themselves with the platform
- Price+quality mechanisms: Tie task pricing and result quality together in a single mechanism
- Empathy and community: Deploy people within the system to help react to conflicts by creating a sense of community
- [Active] Open governance: Workers and requesters share power over the platform through annual votes (representative democracy?)
- [Active] Micro+macrotask market: Maintain the submission approach from microtask markets, which is focused on two to hundreds of replications, but find ways to make it accessible to both microtask and expert work
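To make the credit-score idea above concrete, here is a minimal sketch of how worker ratings of a requester might map onto letter grades. The 1-5 rating scale and the thresholds are our illustrative assumptions, not part of the proposal:

```python
def requester_grade(ratings):
    """Map workers' ratings of a requester (assumed 1-5 stars) to a
    credit-score-style letter grade. Thresholds are illustrative assumptions."""
    if not ratings:
        return None  # unrated requesters carry no grade yet
    avg = sum(ratings) / len(ratings)
    if avg >= 4.5:
        return 'A'  # excellent
    if avg >= 3.5:
        return 'B'  # good
    if avg >= 2.5:
        return 'C'  # fair
    return 'D'      # poor

# Example: a requester with mostly positive ratings earns a B.
print(requester_grade([5, 4, 4, 3]))  # -> 'B'
```

Like a credit score, the grade compresses a requester's history into a signal workers can check before accepting a task; the open design question is who computes it and how it resists manipulation.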
Suggestion: See this to learn more about challenges in the active foundation ideas.
Some of the feature ideas under consideration:
- Recommendation systems
- How do we pay? Bitcoin?
- Language transducers to simplify complex writing for a local (international) audience as well as check that it’s fair
- Culturally-specific adaptations
- Link issues + bug reports directly to HITs so everyone can see
What is the plan of action?
As part of this program, we want to create a new marketplace that we’re all proud to support and use. It’s a chance for you to learn with us as we experiment with new forms of research at scale. Later, we plan to submit research papers to top-tier academic venues, with you as a coauthor. You can also request a recommendation letter from Prof. Michael Bernstein of Stanford CS. In short, we want you to own this project and work together to achieve this ambitious goal. We want to design a new future, following a user-centered research trajectory:
- Empathy and needfinding
- Brainstorming and ideation
- Rapid prototyping and implementation
We will work in teams toward weekly milestones of your choice, give feedback on each other’s milestones, and take the best ideas forward. Each week, we will use the results from our efforts so far to decide on a milestone that we’ll pursue for the next week. Collaborate with your team or form new ones to execute the milestone.
After submitting your team’s milestone, you’ll have about 12 hours to give feedback on a few peers’ submissions. We will use this feedback to highlight the highest-rated submissions, invite teams to join the Hangout on Air, and guide our next steps.
- Saturday 9am PST: team meeting + milestone opens
- Thursday 11:59pm PST: milestone closes
- Friday 12:00am PST: peer feedback on milestones
- Occasional meetings to brainstorm, held on an as-needed basis
Research and engineering
As part of this project, we will address some open-ended and challenging research questions. To test or evaluate many of these questions, we might have to engineer prototypes. Prototypes can range from lo-fi (like a paper drawing) to fully developed ones. You are encouraged to form goal-oriented teams with people of varied skills and accomplish goals together. In human-computer interaction, research and engineering go hand in hand with usability, interface, and interaction design. We're lucky to have participants with a wide range of skill sets.