Organizational Design and Market Friction

From crowdresearch
Revision as of 05:37, 8 February 2016 by Angelarichmondfuller (Talk | contribs) (Experiment)



As we (Dilrukshi, Angela, Yoni, AMDP, Pierre, Mohammed, Alison) have been discussing Organizational Design, we resisted the urge to debate macro Daemo governance solutions and instead took a more purposeful, transactional approach: examining how organization affects the balance of trust and power between workers and requesters. We've been batting around the larger ideas of platform governance for a while, but until we establish Daemo and have a regular cadre of workers and requesters, experimenting with macro organizational impact is difficult. There are, however, many documented points of market friction between workers and requesters, and one wonders: can organizational design re-balance asymmetrical relations, or does the pendulum simply swing to the other extreme? In a more practical vein, these transactional problems and their potential remedies are easier to review, examine, and experiment against, and they align well with our focus on micro work, helping us further define the crowdsourcing space. Of the many problems we could address, three drew the most attention: pricing, grievance management, and learning.


Using A/B or multivariate testing, we'd like to stress-test various Organizational Design approaches to assess the extent to which balance/harmony can be achieved.

Pricing: Which approach is best at correcting the MTurk pricing/quality imbalance? We'd compare guilds, a system driven by a dynamic pricing engine, and an unstructured group. We were spit-balling an experiment in which we'd recruit three worker groups of 15 people each, along with a slew of requesters, run a couple of cycles of HITs through Daemo, and collect a variety of data points to determine which system achieved a level of equilibrium.
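The recruit-assign-and-log loop above can be sketched in a few lines; a minimal sketch in Python, where the names (`CONDITIONS`, `assign_workers`, `record_hit`) and the group size of 15 are illustrative assumptions, not part of any actual Daemo API:

```python
import random
from collections import defaultdict

# Hypothetical condition labels for the three worker groups proposed above.
CONDITIONS = ["guild", "dynamic_pricing", "unstructured"]
GROUP_SIZE = 15  # 15 workers per condition, per the sketch above

def assign_workers(worker_ids, seed=0):
    """Randomly split recruited workers into the three conditions."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = worker_ids[:]
    rng.shuffle(shuffled)
    return {
        cond: shuffled[i * GROUP_SIZE:(i + 1) * GROUP_SIZE]
        for i, cond in enumerate(CONDITIONS)
    }

def record_hit(log, condition, price, quality):
    """Collect one per-HIT price/quality data point for later comparison."""
    log[condition].append({"price": price, "quality": quality})

# Usage: 45 recruited workers -> three disjoint groups of 15.
groups = assign_workers([f"w{i}" for i in range(45)])
log = defaultdict(list)
record_hit(log, "guild", price=0.12, quality=0.85)
```

The point of the fixed seed is that the same recruitment pool always produces the same assignment, which keeps repeated cycles of HITs comparable across the month-long run.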

Grievance: Which system is best at adjudicating grievances and facilitating communication between parties? This could either be run in parallel with the pricing experiment or stand alone.

Learning: Suggested experiments to try.

  Training interventions for task authorship
  Worker learning

Metrics/What we want to measure

  • 1. At this point we need to do a deeper dive into the concept of equilibrium: is it a perception, a system metric, an economic measure, or a combination of variables?
  • 2. In the A/B testing, we think pairing a negotiation-based system (Guilds/MTurk individual system) against a systems approach (MTurk requester-based pricing/dynamic pricing engine) is a nice pairing.
  • 3. We'd like to run the experiment through a series of HITs over a period of a month and measure change over time.
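Measuring change over time from the collected data points could look like the following; a minimal sketch in Python, where the record fields (`condition`, `week`, `price`, `quality`) are an assumed schema, not a real Daemo data model:

```python
from statistics import mean

def weekly_means(hits):
    """Group per-HIT data points by (condition, week) and average them,
    so price/quality trends can be compared across conditions over the month."""
    buckets = {}
    for h in hits:
        buckets.setdefault((h["condition"], h["week"]), []).append(h)
    return {
        key: {"price": mean(x["price"] for x in group),
              "quality": mean(x["quality"] for x in group)}
        for key, group in buckets.items()
    }

# Usage with made-up numbers: two week-1 HITs and one week-2 HIT for one group.
hits = [
    {"condition": "guild", "week": 1, "price": 0.10, "quality": 0.7},
    {"condition": "guild", "week": 1, "price": 0.14, "quality": 0.9},
    {"condition": "guild", "week": 2, "price": 0.12, "quality": 0.8},
]
trend = weekly_means(hits)
```

A flattening gap between the weekly means of the three conditions is one concrete way to operationalize "a level of equilibrium," though that choice would need the deeper dive flagged in point 1.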

Additional Problems

Can Various Organizational designs address the following problems:

  • 1. Cold Start for workers
  • 2. Expertise and competency expectation/standardization (reputation/trust). Can guilds solve the ironic relationship between pricing and skill exhibited in MTurk?
  • 3. Voice for workers: grievance and programmatic. (Balance of power)
  • 4. Universal pricing
  • 5. Upward mobility/skills acquisition/professional development.
  • 6. Improve communication amongst workers
  • 7. Improve the depth of the crowd. Right now 10% of the workers in the crowd do 90% of the work. Can Guilds flatten out the crowd?
  • 8. Can Guilds be dropped into MTurk to fix problems, or are they part of an integrated solution? Or do Guilds only work in new, holistic open systems?
  • 9. Creating a sense of belongingness/relatedness for the workers.



  • @Dilrukshi
  • @AMDP
  • @PierreF
  • @yoni.dayan
  • @m.Kambal
  • @Ferlin87
  • @Anotherhuman
  • @acossette