Wintermilestone 2 yoni.dayan


Hello everyone :)

I will continue my reflections on my experience over the past 3 years as a crowdsourced worker at Wikistrat (http://crowdresearch.stanford.edu/w/index.php?title=WinterMilestone_1_yoni.dayan#Experience_the_life_of_a_Requester_on_Wikistrat), and blend them with the material and references from Week 2.

Observations from my own experience at Wikistrat, the panel with specialists, and the scientific articles

- For both workers and requesters, there is a problem of very high turnover. I saw this as a supervisor at Wikistrat: workers (including myself) tend to consider such crowdwork a "bonus activity" and are therefore not fully engaged or committed.

- For workers, crowdsourcing allows people who have difficulty holding conventional/physical jobs to stay active. This is something I've witnessed at Wikistrat too, for example retired personnel who are no longer employed but can continue to contribute.

- Something I haven't seen at Wikistrat, and that seems pervasive on Mechanical Turk, is that as a worker you can't have a fixed schedule: requests are created at any time, which is very unpredictable. Our crowdsourcing efforts, by contrast, are planned ahead of time and announced, and our crowd-analysts can choose whether to participate as well as their level of commitment.

- Good communication and a drive for engagement matter for both roles. As a requester, writing sufficiently clear instructions and answering workers' queries leads to better crowdwork, while workers are more engaged and feel more valued when the requester acknowledges them.

- Another problem I haven't had at Wikistrat is that on mturk, as a worker, you never take a job if it endangers your approval rating. I could feel real pressure from the rating: with several disapprovals, it takes time before you can again aspire to good missions (in terms of hourly wage). In our crowdsourced company, approval wasn't a score, but recommendations from peers, points (a gamification system), etc. That makes the crowdsourcing addictive, rather than a pain/trauma of sticking to 99% approval.

- A very common issue we had at Wikistrat, and one that also exists on mturk, is for the requester to know the quality of the expected crowdsourced simulations: how to make sure analysts' output is good, original, not copied, etc. The same imperative exists on mturk, with the fear of workers cheating the system.

- In terms of concrete organization, as a requester it is very difficult to scale engagement as the number of workers grows, and it becomes impossible to answer every email. This has an impact on the work, because workers feel like numbers and disengage.

- There's an issue pervasive in both roles: estimating the amount of time/effort needed to complete a task. At Wikistrat, as requesters, we try to gauge how many hours of analysis per week a participant would need to put in to qualify for rewards, while analysts need to be careful to make sure they have the time before committing to a task, since committing and then not delivering hurts their image.