Winter Milestone 1 - yoni.dayan

From crowdresearch

I've been a crowdworker for 3 years now, at a company called Wikistrat, and I thought it would be an original addition to our mix to reflect on this experience while I'm also discovering Amazon Mechanical Turk. This will be an honest recap of that experience.


What is Wikistrat

A crowdsourced consultancy. Think of Deloitte or McKinsey, but instead of doing "black box consulting" (a client requests an audit or a piece of research, then a handful of specialists at the traditional consultancy works for days, weeks, or months in a manner opaque to the client before delivering a report), it leverages the wisdom of the crowd.

The topics usually concern international affairs, with many also on technological trends.

Clients like NATO task us with forecasts, such as "what could be the situation of this resource 20 years from now?" or "what could the private space industry look like in 2050?"

Then we assemble a team of lead analysts (experts in the relevant fields) and supervisors (a mix of analysis and managing a group-wide effort, which is what I do), who design a framework for the crowdwork. The framework is usually a simulation run on Atlassian's Confluence (a bit like a wiki with collaborative project-management features), lasting from a few days to three weeks, in which up to hundreds of analysts contribute by creating scenarios, editing them, creating competing scenarios, commenting, voting on "master narratives" (patterns emerging from the scenarios), making recommendations, and sometimes crowd-writing the report. I've seen up to two dozen analysts collaboratively editing a single scenario within a simulation.

Experience the life of a Crowd-Worker

This is one of the best experiences I've had in my professional life. There are several reasons for that; I'll try to sum them up concisely:

  • The "human factor". Being able to get to know up to a hundred of analysts in just a few days, with varied experience/seniority, age range, culture, and harnessing that for a deep, multi-faceted, analysis, has tremendous value both for me intellectually, but for the clients too.
  • Wisdom of the crowd is real: with a proper framework, the output is much more diverse, creative, and outside-the-box than under traditional working conditions.
  • Flexibility/agility/mobility: I can work from wherever I want, whenever (mostly) I want, however I want.
  • "Together we are strong", "everyone knows something, no ones knows everything". Even if i'm researching the topics i'm working on, i can't be an expert on every topic. Being able to have general ideas (let's say, on a future internet application, without being a computer scientist, network specialist, etc.), express them, then having other more knowledgeable than me coming and improving this idea, is awesome.
  • The gamification and reward aspect is incredibly addictive. The more I contribute, the more points and badges the platform gives me; the more I have, the more I'm paid (micro-transactions) and invited to further simulations purchased by clients (with real money that is redistributed to analysts).

What I dislike:

  • The wiki format and globally distributed workers make synchronous interaction rather difficult; it's more "I add something to the platform, then a few minutes or hours later, someone else kicks in". The direct human interaction we have in offices, universities, etc., is important (the feeling of belonging in Maslow's hierarchy of needs).
  • Honestly, when too many of us contribute, the client's money has to be redistributed among a lot of people, diluting my share. In the end, it can feel like my effort isn't recognized and compensated enough.
  • Related to the previous point, many tasks aren't tracked by Confluence. I'm not only creating, editing, and commenting on pages; I'm preparing the simulation, researching, reading, chatting on third-party platforms, thinking. None of this is tracked, and it is only compensated by human guessing/gauging. A crowdsourcing platform needs to acknowledge that much work isn't algorithmically traceable.
  • Related to the first point, this is still a distant, immaterial job. I'm a social kind of guy; I need to see people at some point. This could be a feature of Daemo: a concept of "blended crowdsourcing", mixing virtual crowdwork with real-life interaction and work (a map could show "hot spots" of crowdworking, enticing you to meet your peers once a week in a Paris café).
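The pay-dilution point above is simple arithmetic, and a small sketch makes it concrete. This is purely illustrative: Wikistrat's actual payout formula isn't public, so I'm assuming a hypothetical scheme where a fixed client budget is split proportionally to the points each analyst earns.

```python
def payout_shares(budget, points):
    """Split a fixed client budget proportionally to each analyst's points.

    Hypothetical scheme, not Wikistrat's real formula: each analyst gets
    budget * (own points / total points).
    """
    total = sum(points.values())
    return {analyst: budget * p / total for analyst, p in points.items()}

# Same individual effort (50 points), different crowd sizes:
small_crowd = payout_shares(1000.0, {"a": 50, "b": 50})
large_crowd = payout_shares(1000.0, {f"analyst{i}": 50 for i in range(20)})

print(small_crowd["a"])         # 500.0 with 2 contributors
print(large_crowd["analyst0"])  # 50.0 with 20 contributors: same work, diluted pay
```

Under any scheme of this shape, an individual's pay shrinks as the crowd grows unless the budget grows with it, which is exactly the tension described above.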

Experience the life of a Requester on Wikistrat


  • Clients get consulting far more cheaply than with conventional services
  • They tap into a diversity of analysts, and therefore of output, that Deloitte and McKinsey can't even dream of
  • They can directly participate in the simulation, on a permanent basis, like any other participant, and choose to be visible in order to frame what is happening, or to be anonymous or masquerade as someone else so as not to interfere with the crowdwork. This real-time, permanent monitoring of the crowdwork is difficult to get with traditional services (at best, clients receive intermediary reports)


  • Most of the time the output is good, but sometimes the quality is lower than expected, for several reasons: a high percentage of the 90 analysts enlisted for a simulation end up not being very active, analysts have last-minute obligations, or the simulation wasn't designed well. How do you ensure 99% quality with crowdsourcing involving hundreds of individuals scattered around the world?
  • Most previous client simulations are under NDA, so it's difficult to acquire new clients: they are wary of such novelty and want assurances that the process works.

Explore alternative crowd-labor markets

From what I've understood and been able to explore, Mechanical Turk goes a step beyond Wikistrat in the number of crowdworkers: the work is far more widely distributed. But that limits the depth of the tasks, which are rather "simple" and lack deep thinking.

How could we mix Wikistrat (crowdsourcing for humanities, intelligence, strategy, and technological trends) with Amazon Mechanical Turk (dividing gigantic projects in those domains into manageable tasks)?



  • What do you like about the system / what are its strengths: making an impact in the developing world; the Minimum Viable Product approach (testing on a small but real sample of end users, etc.)
  • What do you think can be improved about the system: the tasks are, for the moment, thankless, with little possibility for real empowerment (through learning new skills, for example); some parts of the study could be refined, e.g. the survey indicates high usability of the system, but was it compared to usability on smartphones, computers, etc.?


  • What do you like about the system / what are its strengths: the prototyping phase for the requester is quite fascinating, pushing requesters to refine their demands; the emphasis on trust
  • What do you think can be improved about the system: from experience, I think crowdsourcing needs gamification, playful features, fun, and the "human factor". For the moment this part is a bit lacking in Daemo; the intent is to build a more efficient and more trustworthy crowdsourcing platform, but how about also making it "a platform I want to stick with, learn from, and make new acquaintances on"?

Flash Teams

  • What do you like about the system / what are its strengths: I loved the concept of "flash teams", tools to assemble efficient teams to tackle tasks collectively.
  • What do you think can be improved about the system: principles from the social sciences, social physics, and the science of collaboration (how to make great teams, idea-flow modeling, etc.) could be applied. Especially lessons from hackathons, game jams, and startup weekends: what makes a great team, one you want to stick with, and how do we reproduce that in a crowdsourcing environment?