Milestone 2 mathias

From crowdresearch

Task Description /

Learn about needfinding. Determine the needs of workers and requesters (from panels and from readings).

Deliverable / When talking about needfinding, it is best practice to organize your thoughts into three stages:

- Observations: What you see and hear.

- Interpretations: Why you think you are hearing and seeing those things. What is driving those behaviors? This is the "recursive why" we talked about in team meeting.

- Needs: These are the deeper, more fundamental driving motivators for people. As we talked about in team meeting, needs must be verbs, not nouns.

Report on some of the observations you gathered during the panel. You can hold back on interpretations and needs until you finish the rest of the observation-gathering in the next steps.

Notes /

User 1 - Teacher and Requestor

Requested nearly $250,000

Class available:

Another site led by U1:

User 2 - Admin and moderator

Primarily a worker since 2007/08

Bounces between worker and advocate/helper

Moderates lots of forums and is active on Reddit

"To me it's a balance between doing the work myself and how much I enjoy the work and what kind of work I enjoy doing and interacting with requesters and also helping with workers who are new." Focused largely on the community.

Runs a site:

User 3 -

Known on the forums

"trifecta of mechanical turk"

Has been there for 10 years

Runs multiple groups

Sees workers as often the "forgotten perspective"

Desires to see a better platform, excited by Daemo

User 4 - Asst. Professor at University w/ Business school

Role is primarily as a requestor, doing research on crowdsource workers

Collecting objective and survey data to study the behavior of workers

Desires to bridge disparate communities (workers, researchers, and platforms)

Found that the turnout rate on platforms is very high for workers

User 5 -

Started in 2009, didn't do much at first

After 2011 w/ son, started working more

Disabled, so hard to keep other jobs

Left job in fast food industry b/c of disability

Liked the pace and found it a modest means of income.

Started at $10, then finally got to $50. Then $100 and could hardly believe it

"My options are extremely limited."

O = Observation

I = Interpretation

N = Need

Q1 - Daily Rhythm U5 /

O - Starts with kids, turns on scripts that search for work

O - Goes through the daily thread on Turker Nation

O - Found that to be the place people actually get together

I - Found a community to be involved in, to make the work more amenable

O - People post threads worth working on

O - That communal workflow helped make the work more efficient, too

N - Finding HITs of value is, or can be done as, a task in itself, like a division of labor

O - Provides feedback to Requestors to improve workflow (Reminds me of Adam Smith's DOL description)

O - (Muddy description) Appears to be describing an evaluation process for a set of tasks

O - "I work through the thread that way" (19:40)

O - That builds a queue, which is worked through before returning to the daily thread

Q2 - Workflow Length

U2 /

O - Don't really schedule breaks

O - Depends on the HITs

O - Higher value: might stop and do that work

O - "Sometimes you can kind of sense patterns"

O - But patterns are not concrete or scheduled out

I - Analyzing tasks one by one by skimming their content

O - 24h a day 7d a week, always stuff being posted

I - People build systems to do their work.

N - Those are living somewhere... Collect and share them? (build a toolbox, build your own tools)

U5 / O - Kids nap and such, but work affects and changes that

O - Sets a daily goal

N - Returns to my gamification concept (get specific value out of time spent)

Q3 - What works and what does not?

U1 /

O - Some tasks are free and loose

O - On the academic side, tasks are more on the creative side

O - Noted an adjective-noun composition HIT from a PhD student

O - It allows testing a hypothesis in a day, and seeing if it makes sense, by deploying it

// Thought you were conveying it clearly, but it's not coming out the other end?

O - Seems to show up in the results: not getting the right "type" of result

O - Worker feedback helps to make things more clear

O - Asks for some better worker engagement

// Yes, conversation seems to play a key role. Workers, how are you evaluating?

U5 /

O - Has to do with "how" it will affect my worker rating

O - Won't risk losing the 99% approval rating

O - If a requestor is completely new, I'll work it, but only until the 99% is in danger

U2 /

O - Agree about approval rating

O - I generally select based on past reputation

O - If I haven't heard of them before, I may send a check-in email to ensure communication will work well

O - When they don't respond, likely uninterested in developing the question

// Hypothesis that this is going to grow and thus likely to get more messy...

U6 (New user jumped in) /

O - Posts HITs, just productivity HITs; needs to get books labeled

O - Puts a lot of effort into design to ensure the outcome on the other side

O - We do automatic accepting of HITs

O - We accept to keep up worker performance

O - If they are messing up on tasks, they

U1 /

O - The engagement of workers/requestors could be more frictionless

O - I would get a ton of emails from non-native English speakers saying "sir... I want to work on your HIT"

O - Those non-native emails were easy to skim through

O - When I scaled my HITs, I found my time to interact didn't scale.

O - To monitor those interactions, sent an asst. to do that

I - "Outsourcing my crowdsourcing" seems to be another division of labor

O - Reputation of requestors is built in as well even if not explicit

O - Doing right by workers makes the good workers gravitate to you

O - I'd like to do right by the workers... not rejecting people unfairly

O - "In my ideal world, the platform would have an automatic adjudication" (an extra 20% of my est. price)

O - Anytime I reject someone, it can be disputed

O - Fairness to both w/r is very important

O - Don't get the opportunity to hear from them personally

O - Translating workers into serial numbers leads to treating them less fairly, since they're not treated like people
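U1's "automatic adjudication" idea comes with one concrete number: an extra 20% of the estimated price. A minimal sketch of how such an escrow pool might be computed; the 20% figure is from the interview, while the function name and rounding are my own assumptions, not an actual platform API:

```python
# Sketch of the escrow-style adjudication pool U1 imagines: the requester
# sets aside an extra 20% of the estimated price, and disputed rejections
# are paid out of that pool. Illustrative only.

ADJUDICATION_RATE = 0.20  # the "extra 20%" mentioned in the interview

def escrow_amount(estimated_price):
    """Extra dollars set aside to fund dispute adjudication."""
    return round(estimated_price * ADJUDICATION_RATE, 2)

print(escrow_amount(100.00))  # adjudication pool for a $100 batch
```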

Q3 - What types of HITs do you want? >> What can't you do?

U4 /

O - The batch requestors provide a lot of work

O - My role as a requestor is generally with surveys

O - Surveys are easy to implement. We use Qualtrics as an external source, and I post links

O - Separately, I study specific groups who also participate in online communities, to take advantage of both

O - They seem to follow highly routine schedules

// Any struggles with the surveys?

O - One thing you'll have to grapple with is the attention span

O - How do you validate that they are giving real thought?

O - B/c I went to Christie and Turker Nation first, I could trust them better

O - That Turker Nation conversation cleared that problem

O - No other real problems

U6 /

O - We don't use the built in templates

O - Need to have HIT inside an iFrame << had to use mturk in that way

// NOTE need support for workflows..

Q4 - How do you track your general wages?

U2 / O - I learned estimating over time. I give 1-2 a try to see if it's worth my time

O - Time estimates are often really off. Requestors totally overestimate, not realizing our speed, or have no idea how complicated a task is and underestimate

O - In that way, workers are better at setting time estimates than requestors.

N - Workers giving estimates, or splitting estimation off into a task in and of itself, before the work begins

    • What if we created work groups where a delegator could coordinate tasks into a specific "worker segment"
    • That would reduce the junk you don't want to do and give feedback on a task before finding out it was poorly described

O - Participating and giving your own time estimates would be valuable

U3 / O - Try 2m HITs and see how you're doing

O - Kept spreadsheets to see

O - If a requestor I knew came along, it was easier to estimate

O - I'm really really quick, so if it's someone new I have to get a feel for them

// There's very large individual variation between workers (typing speed, comprehension, etc.)

// What if we ranked tasks to get your range

U5 /

O - The variation between workers is key

O - I type 75-80 wpm, but clicking on images, not so much

N - Tasks need categorization by activity type to match my skills

Q5 - How variable are your earnings?

U2 /

O - It's all over the place

O - Depends on your willingness

O - "penny hits" tasks that pay one penny

O - It's huge, $2 to a couple hundred

O - Makes it difficult to budget for it

U1 /

O - Shameless plug: Crowdworkers helps track the hourly rate

O - The tool gathers the time it took people to complete a task and the earned value, captured into an est. hourly wage

O - Look at what other people are submitting to get an idea of its value
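The hourly-rate tracking described here boils down to simple arithmetic over per-HIT records. A minimal sketch, assuming each record is a (seconds spent, dollars earned) pair; the function name and sample numbers are hypothetical, not the actual Crowdworkers tool:

```python
# Estimate an effective hourly wage from per-HIT time/earnings records,
# in the spirit of the tracking tool described above. Illustrative only.

def estimated_hourly_wage(records):
    """records: iterable of (seconds_spent, dollars_earned) per completed HIT."""
    total_seconds = sum(sec for sec, _ in records)
    total_dollars = sum(usd for _, usd in records)
    if total_seconds == 0:
        return 0.0
    return total_dollars / (total_seconds / 3600.0)

# Three hypothetical HITs: 120 s for $0.25, 300 s for $0.50, 180 s for $0.40
hits = [(120, 0.25), (300, 0.50), (180, 0.40)]
print(round(estimated_hourly_wage(hits), 2))  # effective $/hour across the batch
```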

Q6 - Rejection: What does this look like in practice?

// Feels like a heavy-handed issue, early requestors tend to be too brash on rejection

U1 /

O - I have a top and bottom threshold, and use that to decide if they are filling it in randomly

I - He's building in evaluation into the tasks

O - "It's a tricky subject for sure"
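U1's top-and-bottom threshold is described only loosely; one plausible reading is thresholds on completion time, with very fast submissions suggesting random filling. A sketch under that assumption only; the cutoff values and names are invented for illustration:

```python
# Flag submissions whose completion time falls outside a plausible window,
# one guess at the kind of top/bottom threshold check U1 describes.
# The 10 s / 900 s bounds are made-up illustrative values.

FAST_CUTOFF_S = 10    # below this, likely random clicking
SLOW_CUTOFF_S = 900   # above this, likely left idle in a browser tab

def flag_for_review(seconds_spent):
    """Return True when a submission warrants manual review before any rejection."""
    return seconds_spent < FAST_CUTOFF_S or seconds_spent > SLOW_CUTOFF_S
```

A check like this only flags candidates for review; as the notes stress, outright automatic rejection would be unfair to workers.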

U5 /

O - Constantly being aware of rejection

O - "For a new worker, it's death"

O - If you get a bad reputation, hard to get any well paying tasks thereafter

O - Every batch, I'm making sure I have a contact link

O - If rejected, I go to the requestor with a screenshot

O - There's reasons, but you always want to have backup to prove you did the work

Q7 - If you could use 5 words to describe w/r experiences, what would they be?

U1 / "Incredible enablers of scientific research"

U5 / "Anonymous, frustrating, complicated, tricky, and I'm good there"

U2 / "Absolutely unpredictable, great and terrible."

U3 / "Power, frustrating, empowering, lonely/isolated, tiring"

U4 / "Passionate, futuristic, dynamic, human."

--- Critical Reflection on the interviews as a whole.

Conversation seems to be a key element of the workspace. Workers evaluate the requestor over the task. Requestors build methods to evaluate honesty in effort from the workers. Regardless, both have skin in the game and needs that involve being able to exit the mechanical world and deal with complications socially.

The knowledge that both sides have seems to build tacitly and through experience. Workers seem to get better at removing the junk and getting to the tasks they want, and requestors refine their methods for task creation to get the results they are looking for. Neither comes equipped with that; both have to improve their practice to get value out of it.

Equity in rejection, wages, and reliability are all captured in the responses and seem to be a common theme. Sometimes things seem great, other times completely illogical and terrible. The 5 words in the last question seem very polarizing, but within the confines of the individual. In other words, they have amazing and awful experiences, and may have some in between, but aren't identifying those.

The process is work, no doubt there, but many of the things brought up reminded me of my sociology studies of Adam Smith, describing how the division of labor led to incredible levels of productivity. That seems to be under a microscope here, where tasks are broken down into something one person sees as perfect, which another person actually doing the work can evaluate more appropriately. Smith noted that one of the large benefits of the division of labor is that workers can find better solutions than the original job itself, and thus much of their work is not in accomplishing the task but instead in improving the process. This model and these descriptions are eerily aligned with his sociological perspectives from the pin factory 300 years ago, now in a digital space. << I will return to that and investigate in correlation to the interviews if people find that interesting...