Winter Milestone 1 - David Thompson

Experience the life of a Worker on Mechanical Turk

Signing up for Mechanical Turk (mTurk) was quite easy, although I was surprised at how long it took before I could use the system (almost 2 days). The user interface is sparse and looks like a surprisingly neglected piece of Amazon real estate, with quite a dated design.

I was surprised there was no simple tutorial describing what to expect and how to navigate the different elements of looking for work. This would have been quite useful, as I realised through clicking about that I wasn't eligible for most of the tasks. Given that I was looking for just $1, I set the initial filter at this amount. The first task I selected wanted me to transcribe text from images of menus (about 10 of them) into a Google Spreadsheet. For every mistake I would be docked a 'point', and once I accumulated 5 points I wouldn't receive my money. Mistakes could be as simple as a typographical error or missed punctuation. This did not seem like a good way to earn a dollar, so I passed on this one.

I then found a HIT looking to test an email API, again for $1. I accepted this, but it seemed to be little more than a spammy way to collect email and address information. The conditions for submitting the HIT were not clear, and after a couple of minutes I declined it and moved on.

I was about 20 minutes into the hour I'd decided to spend on this task, and I realised I'd probably have to look at lower-paying HITs. I lowered my target to 50 cents and found a survey to complete. It was from a university (one I'd heard of), had IRB-sounding text, and was generally professional. It took about 10 minutes to complete. I searched about for more surveys like this, but the remainder seemed to be limited to people who'd previously completed 'part ones' of ongoing research and were not open to me.

I lowered my pay range again, this time to 10 cents. Here's where the money started rolling in ...

I found a simple task to Google three words and record the number of paid ads on the top and side of each page. Easy, 10 cents in the bank ... and the requester approved immediately (as they'd promised).

I then found a survey for 20 cents looking at my thoughts on Twitter and how I felt about a (possibly fictional) Twitter user based on their content. This took another 10 minutes. For my final 20 cents I found yet another survey, this time looking at how I think through decisions. Both of these surveys also came from universities I'd heard of, and were run through Qualtrics (so I had some faith they were legitimate requests for information rather than spammy phishing).

While I've done $1 worth of work, I have only received 10 cents to date. The experience was super interesting, although it left me feeling I needed to be on guard, as I got the impression requesters were looking to rip me off (the email API experience didn't help at all). The marketplace has the feel of a one-sided bazaar; as a consumer of the HITs I felt like I needed to be vigilant throughout.
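
For what it's worth, here is the arithmetic behind that dollar as a small Python tally (the task labels are my own shorthand, not the actual HIT titles; amounts are in US cents):

 # Tally of the HITs described above; purely illustrative.
 # Approval status reflects the situation at the time of writing.
 hits = [
     ("university survey", 50, "pending"),
     ("Google paid-ads count", 10, "approved"),
     ("Twitter impressions survey", 20, "pending"),
     ("decision-making survey", 20, "pending"),
 ]
 
 earned = sum(cents for _, cents, _ in hits)
 paid = sum(cents for _, cents, status in hits if status == "approved")
 print(f"Work completed: {earned} cents; paid out so far: {paid} cents")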

MTurk Worker Details Winter Milestone 1 dcthompson.png

Experience the life of a Requester on Mechanical Turk

The Requester interface was more what I might have imagined from Amazon. It was clear, clean, and easily navigable, with a well-thought-out flow. It was straightforward to set myself up with a project (I had planned on collecting data from a survey, which I had pre-built in Google Forms). Including the link to the survey, along with text explaining its purpose, was very easily accomplished with wizard-like functionality.

I had only a few questions (< 10) and chose to pay 5 cents for the 1–2 minutes' worth of work. In retrospect this seems quite generous, but given my own brief experience with mTurk as a worker, I am okay with that. I paid for $2 worth of completions, and ended up paying $2.80 after the surprisingly large mTurk fee. I required US-based participants as my only qualification, and had 9 completed submissions within 10 minutes. I had all 40 completed by the time I logged back into mTurk the following day.
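
As a rough sanity check on those numbers, here is a minimal Python sketch of how the batch cost breaks down; the 40% fee rate is simply what the $2.80 total implies on $2.00 of rewards, not a figure taken from Amazon's published fee schedule:

 # Illustrative arithmetic only; the fee rate is inferred from the figures above.
 reward_per_hit = 0.05    # 5 cents per completed survey
 num_assignments = 40     # $2.00 of completions at 5 cents each
 fee_rate = 0.40          # implied by $2.80 total on $2.00 of rewards
 
 rewards = reward_per_hit * num_assignments   # $2.00 paid to workers
 total_cost = rewards * (1 + fee_rate)        # $2.80 charged to the requester
 print(f"Worker rewards: ${rewards:.2f}, total cost incl. fee: ${total_cost:.2f}")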

I haven't reviewed the data in detail, but it looks fine at first glance. I had set the HITs to auto-pay after 3 days, but ended up approving them all when I checked back in. All in all, a super simple way to collect survey data, through a slick interface. It would have been good to have better guidance around what to pay per task, but other than that it was extremely easy to use.

Media:MTurk Requester Details Winter Milestone 1 dcthmpson.csv

Readings

MobileWorks

I thought the system was well thought out and empathic in its design (simple UI, customized for the expected infrastructure). It would be very straightforward to further customize and iterate on this design to include other types of common tasks.

Regarding the quality score, I think there is an opportunity to provide richer feedback that would be of more use to the worker. At present the worker is "dinged" for getting a wrong answer. It might be worth considering richer guidance or feedback around why the worker got the answer wrong. This could then become more of a learning platform that consistently raises workers' quality rankings.

Daemo

A well-written paper that I found to be consistent with my understanding of this project and my onboarding to date.

Following my reading, I got curious about the effect of the Boomerang score on a more general population over a long time. Are the design features currently employed enough to avoid creating an uneven playing field for new workers, or for those who consistently just "meet expectations"?

Regarding feedback: As is mentioned, as a Requester I have no real incentive to provide feedback all the time. How can this be made something I want to do? Could providing feedback be a component of my ranking as a Requester?

Regarding the use of prototype tasks: This is an interesting example of the explicit instantiation of a 'negotiation' phase, as might be described in Fernando Flores' work. See, e.g., http://conversationsforaction.com/history/basic-action-workflow

Regarding open governance: Daemo represents the creation of a digital commons; how can it leverage Elinor Ostrom's work and findings? [A question for me to ask and subsequently look into!]

Flash Teams

Fascinating stuff. I loved the broader, more abstract framing and the explicit use of research from the field of organizational behavior.

Curious about the use of the model with a less select team (fewer star performers and more average contributors, or a more average type of oDesk user).

Interesting to think about what this model looks like when connected to a larger organization (or when the input, context, or conditions are not as fixed). One might imagine comparing the conditions: a self-organizing team within the organization, a self-organizing team from oDesk, and a team from oDesk using Foundry.

Could an organization, with a purpose and a set of products/services, be run exclusively using Foundry?

Milestone Contributors

Slack usernames of all who helped create this wiki page submission: @dcthompson