Guilds and Computational Compatibility


Why Guilds

  • Do crowdsourcing systems need guilds?
  • What benefits do guilds bring?
  • What cost savings do guilds deliver? Perhaps it's not the benefits the guild brings to individuals, but the edge cost savings to the requester, that matter most.
  • Of all the organizational designs to choose from, why guilds?
  • How do guilds impact quality of work?
  • Why would a requester choose a guild?
  • How does a guild work/function?
  • Are guilds the domain of the elite, or can/will they support the entire skills tree?
  • How do people join guilds, and what happens if the work is poor quality? (MB Question)
  • Walk me through it. I'm a new worker on Daemo. What do I do? Am I already part of a guild? How do I get into one? How do I get work once I'm in one? What if the requester doesn't like what I do? And how does all this solve the reputation problem? (MB Question)
  • Points of human intervention

Problem/Solution

Current crowdsourcing platforms have a well-documented reputation inflation problem that stymies worker/requester alignment. A natural response to this reputation asymmetry is to subject workers to testing or assessment, which studies have shown is a poor way to verify worker claims of competency. So how can we introduce organizations and/or tools into the crowdsourcing process that restore balance to the reputation challenge? Short of removing the reputation inflation problem entirely by hand-picking every worker and building a closed network, we propose a socio-technical solution in the form of crowd collectives. Collectives are organizational structures that deploy computational evaluation tools to assess a worker's compatibility with tasks and align them with tasks they can complete with a high degree of reliability. With this clustering of workers, collectives can provide requesters with a single point of contact: an organization that can leverage collective intelligence or deploy a volume of workers to complete a task. Regardless of the method, collectives, because of their computational compatibility tools, can ensure high reliability in task completion.

In some regards it is more difficult to design complex systems to solve task matching/reputation challenges at the micro level, because the attributes required are by definition skill-neutral or even skill-negative. One needs only basic soft skills: attention to detail, timeliness, and the like; concepts that can be captured in a heuristic. In practice, the computational compatibility we seek at the micro level becomes far more valuable as we move into more complicated macro tasks and projects. Nevertheless, we must abide by the hierarchy of needs and develop a technical process that, while focusing on our target audience, can also scale dynamically.

So why collectives or guilds to address this problem; why not deploy a system-wide computational compatibility model? We feel that the intimacy of clustering workers around a task or a skill ultimately provides community: a cross-cultural, cross-generational collection of like-minded individuals who share common professional/technical interests, professional development goals, and a sense of reasonableness that allows individuals to act as a network, to leverage collective intelligence, to collaborate and conspire. These are bonds that, when scaled, can create universal skills/compatibility norms and rational dynamic pricing. Again, the need for these community elements at the bottom of the digital pyramid is arguably not as critical as in macro-task environments; nevertheless, such cultural components are critical to the efficiency of the guild.

In summary, guilds address the reputation challenge in two ways: by clustering like-minded individuals who have been computationally vetted against tasks, and by collectively taking on risk in exchange for trust.

Additional points

Individuals are not obligated to join a guild; we don't want to squash that individualistic spirit. However, those individuals will operate at a disadvantage, not having the benefit of scale. The type of guild we envisioned (though not the only type) is a guild with high barriers to entry. The exclusivity exudes pedigree and prestige that can easily be marketed as competency and expertise. Marketing is a key phrase here: guilds can reach out and solicit work. Workers are either selected to join or are culled from applications. Upon registering with Daemo, workers will have a menu of guild options to choose from, as we can easily see an ecosystem (or network) of guilds forming around tasks, geography, interests, or skills. Some guilds will specialize in specific subtasks (Teo's idea); others may have low barriers to entry and build a collective reputation on something other than quality. Each guild will establish a unique identity based on its raison d'être, which brings diversity and choice to the platform, which in turn affords more opportunity for competition and other market forces to come into play.

Regardless of the channel to entry, the vetting process in the guild we propose is similar to the compatibility model deployed to match work to worker: computationally rigorous. Since we propose a guild with high barriers to entry, the rigor of entry establishes a quality quotient or level. Regardless of how the new guild member levels in, we will want to indoctrinate them into our ways: our cultural determinants, best practices, and any proprietary tools we deploy. We could integrate workers by having them process tasks in parallel with a senior member/peer, by mentoring through QC, and by other techniques that reflect the guild's norms and standards. This consistency speaks to reliability and dependability, and these normative foundations of a guild are important to ensuring high reputation. Guilds, as a social construction, will have a core group of workers at various levels who maintain historical knowledge, mentor and nurture newcomers, and attract work. There will also be a fluidity to guilds as workers follow a development path that takes them to other guilds and other tasks. The computational compatibility model will get better and more accurate over time, but there will be mistakes, and if a worker ends up being a mismatch, the worker should be free to depart of their own free will.

The initial engagement of requesters and guilds is a cold-start challenge in its own right. Are individual requesters more inclined to trust an individual or an organization? Through mechanisms of managed risk, requesters are probably more disposed to using an organization to complete tasks than an individual. Guilds will be able to leverage their community and their individual approaches to vetting and ensuring quality, perhaps in the form of guarantees or rebates. However, once we move beyond that initial engagement, it will come down to how a guild positions itself in the market.

How does a collective work (Social)

While the intention is to keep the org design as flat as possible, there are oligarchical elements that must be catered to. The origin story of guilds with regard to this milestone is to complete micro tasks, and while some guilds will stay in this world and develop an org matrix to maximize utility in the micro space, we envision guilds addressing micro work as an entrée into a larger world of macro tasks. That said, the social component of a guild is as follows. As much as we'd like to say that guilds are organic, they are formed by individuals who see opportunity in larger organizations. These people will have a mental model of network processing and collective intelligence, and a technical background/proficiency in the task/skill their guild will represent; they are entrepreneurs (potentially a new class in the crowd?) who will use business process outsourcing (BPO) models as their inspiration. As such, they will have the acumen to build a computational compatibility tool, which they will use to identify new members, thereby validating the model and establishing the internal skills hierarchy. As this momentum builds, these leaders will be able to market to and negotiate with requesters whose micro tasks carry the complexity of volume. Much as a drug dealer provides free samples in exchange for an addicted customer, guilds will attract work and retain it in much the same fashion; the drug is the computational compatibility model.

How does a collective work (Technical)

While a skill may not be necessary to perform micro tasks, many attributes are required to reliably complete them. Attention to detail, timeliness, task focus, immunity to repetition fatigue, and pride in work (maybe :) are all elements that can be heuristically measured from resumes/CVs, GitHub, MOOCs, a guild application, prior experience as a crowd worker, etc. Using graph databases, we would extract these attributes so as to define a worker's compatibility proximity to a task. The challenge may be that, when implementing in a microtask-only platform, there will not be enough discernible variation among workers and tasks to cluster meaningfully and/or to have a statistically significant distance metric. Without this meaningful variation, the model is less likely to impact the task feed in a positive way. Only through development of the algorithm and underlying data model (graph vs. relational) will we be able to find the variance thresholds required to determine the effectiveness and appropriate implementation of this design. Until a certain density is achieved in the collective, human intervention is required to validate core competencies, and spot checking will occur until the system has learned enough to automate. Not only does computational compatibility give proximity to tasks; near misses can also be identified and converted into learning opportunities (associations with MOOCs or internal training) for workers.
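To make the proximity idea above concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that each heuristic attribute has already been scored on a 0-1 scale; the attribute names and thresholds are placeholders, not a committed design.

# Minimal sketch of the compatibility-proximity idea described above.
# Attribute names, scoring scale, and thresholds are illustrative assumptions.
import math

ATTRIBUTES = ["attention_to_detail", "timeliness", "task_focus",
              "repetition_tolerance", "pride_in_work"]

def to_vector(profile):
    """Turn a heuristic attribute profile (0.0-1.0 per attribute) into a vector."""
    return [profile.get(a, 0.0) for a in ATTRIBUTES]

def cosine_similarity(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0

def compatibility(worker_profile, task_requirements,
                  match_threshold=0.85, near_miss_threshold=0.70):
    """Score a worker against a task and flag near misses as learning opportunities."""
    score = cosine_similarity(to_vector(worker_profile), to_vector(task_requirements))
    if score >= match_threshold:
        status = "match"
    elif score >= near_miss_threshold:
        status = "near_miss"   # candidate for MOOC / internal training referral
    else:
        status = "no_match"
    return score, status

# Example: a worker strong on detail but weak on repetition tolerance,
# scored against an image-labeling task that is heavy on repetition.
worker = {"attention_to_detail": 0.9, "timeliness": 0.8, "task_focus": 0.7,
          "repetition_tolerance": 0.3, "pride_in_work": 0.8}
task = {"attention_to_detail": 0.8, "timeliness": 0.6, "task_focus": 0.7,
        "repetition_tolerance": 0.9, "pride_in_work": 0.4}
print(compatibility(worker, task))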

System:

  • 1. Requesters will use a task authoring tool to atomize work. This could be the current task authoring tool deployed by Daemo, but ultimately the compatibility logic we use to vet and align workers will be integrated into the task authorship process. We need vectors.
  • 2. Until that time, however, a requester will have the ability to request a guild to work with, or, if they so desire, they can work with an individual. As this is a guild conversation, assume that because of the size of the task (say, processing 25,000 images) the requester comes to a guild. Instead of having to give instructions and answer emails from a multitude of individuals, they have one point of contact. In exchange for trusting the guild and for the convenience of the transaction, they pay a premium, though moving forward we would like to establish relationships with requesters that lead to retained-services pricing structures or subscription-based services. Once terms and conditions have been established, the task is released into the guild.
  • 3. Pricing becomes a dynamic activity (subject to further spitballing).
  • 4. When we say released, it's more like distributed. A guild will have transparency into current work, future work, and allied work (closely related work that may reside in another guild), with workers able not only to access this data in real time but also to collaborate and connect with fellow guild members (Slack). Guild managers will have a dashboard showing who is available and when, their proximity to the task, and their current and future workload. This real-time accounting of availability allows for more efficient task completion: much like developing with the sun, or, if the task requires heavy lifting, workers can be stacked on a specific element of the task (see the sketch after this list).
  • 5. Minimizing the risk, and enhancing the responsibility for returning the task on time and in the manner requested, is tethered to (1) the appropriate matching of the work to the worker going in, and (2) QC checking as the product is completed and delivered. This duality of task in, task out, is a dynamic method of training and of adding measurable moments to a worker's profile. It gives cold starters/newbies an opportunity to review work and/or participate in live work with a safety net. In a high-frequency, low-latency environment this double-blind process is standard practice.
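A minimal sketch of how step 4's distribution logic might look, assuming a guild keeps a simple availability and compatibility record per member; the data structures, the greedy policy, and the numbers are hypothetical.

# Hypothetical sketch of step 4: assign units of a released task to available
# guild members by compatibility proximity. Field names, the greedy policy,
# and the 0.7 cutoff are illustrative only.
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    available_minutes: int   # time free in the current window
    compatibility: float     # 0-1 proximity score for this task type

def distribute(task_units, minutes_per_unit, members, min_compat=0.7):
    """Greedily hand task units to the most compatible members with capacity."""
    assignments, remaining = {}, task_units
    pool = sorted((m for m in members if m.compatibility >= min_compat),
                  key=lambda m: m.compatibility, reverse=True)
    for m in pool:
        if remaining == 0:
            break
        take = min(m.available_minutes // minutes_per_unit, remaining)
        if take:
            assignments[m.name] = take
            remaining -= take
    return assignments, remaining   # remaining > 0 means the guild needs more hands

members = [Member("ana", 360, 0.92), Member("raj", 120, 0.81), Member("li", 480, 0.55)]
print(distribute(task_units=100, minutes_per_unit=6, members=members))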

Experiment 1.

We do an A/B test: A being 1-to-N and B being N-to-N. The objective is to measure the edge costs associated with executing micro tasks in two different network models. The hypothesis is that when both workers and requesters interact with a single node, the maintenance cost (administrative time associated with getting work completed) is less than when maintaining edges with multiple worker/requester nodes. Further work could determine the effects and effectiveness of this central node when manifested as an individual (work broker/agent), a collective (guild structure), or an algorithm (purely technical central node).
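As a back-of-the-envelope illustration of the quantity this experiment would measure, the toy model below assumes a fixed per-edge administrative cost; in the actual experiment that cost would be measured, not assumed.

# Toy model of the edge-maintenance cost compared in Experiment 1.
# The per-edge cost is an assumed constant purely for illustration;
# in the experiment it would be measured (e.g., minutes of admin time).

def edge_cost_one_to_n(requesters, workers, cost_per_edge=1.0):
    """Condition A: everyone maintains a single edge to a central node (guild/broker)."""
    return (requesters + workers) * cost_per_edge

def edge_cost_n_to_n(requesters, workers, cost_per_edge=1.0):
    """Condition B: every requester maintains an edge to every worker."""
    return requesters * workers * cost_per_edge

for r, w in [(5, 20), (10, 100), (50, 500)]:
    print(r, w, edge_cost_one_to_n(r, w), edge_cost_n_to_n(r, w))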

Experiment 2.

Build a model to determine whether there is enough variability in micro tasks to merit such a computational compatibility system. The challenge may be that, when implementing a compatibility model in a microtask-only platform, there will not be enough discernible variation among workers and tasks to cluster meaningfully and/or to have a statistically significant distance metric. Without this meaningful variation, the model is less likely to impact the task feed in a positive way. Only through development of the algorithm and underlying data model (graph vs. relational) will we be able to find the variance thresholds required to determine the effectiveness and appropriate implementation of this design. Further, at what point does a micro task acquire enough variation?
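One way this check could be prototyped, shown as a rough sketch: cluster worker attribute vectors and use a silhouette score as a proxy for whether the variation is meaningful. The synthetic data, the range of k, and the 0.25 threshold are assumptions for illustration only (uses scikit-learn).

# Sketch of the variability check in Experiment 2: do worker attribute
# vectors separate into meaningful clusters, or is everyone roughly alike?
# Synthetic data and the 0.25 threshold are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Pretend profiles: 200 workers x 5 heuristic attributes scored 0-1.
workers = rng.uniform(0, 1, size=(200, 5))

score, k = max(
    (silhouette_score(workers,
                      KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(workers)), k)
    for k in range(2, 7)
)
print(f"best silhouette {score:.2f} at k={k}")
print("enough variation to cluster meaningfully" if score > 0.25
      else "too little variation; compatibility model may not help the task feed")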

Experiment 3

We create three guilds: (1) workers selected with a rules-based skills algorithm, (2) workers selected through a computational compatibility model, and (3) workers who self-sort. Each has a different barrier to entry and a supposed quality quotient. We observe and measure the guild creation process and then the accuracy of the delivery/completion of tasks, asking which guild approach works best.
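A skeleton of how the three conditions might be wired up for comparison; the selection rules, the synthetic applicants, and the accuracy proxy are placeholders, not the actual policies or results.

# Skeleton for Experiment 3: three guild-formation policies compared on
# downstream task accuracy. Selection rules and the accuracy proxy are
# placeholder assumptions; real applicant data would replace the synthetic pool.
import random

random.seed(0)
applicants = [{"skill_test": random.random(), "compatibility": random.random(),
               "true_quality": random.random()} for _ in range(300)]

policies = {
    "rules_based": lambda a: a["skill_test"] > 0.7,          # fixed pass mark
    "compatibility": lambda a: a["compatibility"] > 0.7,     # model-driven cutoff
    "self_sort": lambda a: random.random() < 0.5,            # workers opt in freely
}

for name, accept in policies.items():
    guild = [a for a in applicants if accept(a)]
    accuracy = sum(a["true_quality"] for a in guild) / len(guild) if guild else 0.0
    print(f"{name}: {len(guild)} members, mean task accuracy proxy {accuracy:.2f}")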

Research/Articles

  • Arpita Ghosh. Game Theory and Incentives in Human Computation Systems. Book chapter, Handbook of Human Computation, Springer, 2013. http://www.arpitaghosh.com/papers/games_hcomp.pdf
  • (PierreF) David Easley, Arpita Ghosh. Behavioral Mechanism Design: Optimal Crowdsourcing Contracts and Prospect Theory. Proc. 16th ACM Conference on Economics and Computation (EC), 2015. http://www.arpitaghosh.com/papers/EC15-full.pdf
  • (PierreF) Galen Pickard, Wei Pan, Iyad Rahwan, Manuel Cebrian, Riley Crane, Anmol Madan, and Alex Pentland. Time-critical social mobilization. Science, 334:509–512, 2011. DOI: http://dx.doi.org/10.1126/science.1205869
  • (PierreF) Eleni Koutrouli and Aphrodite Tsalgatidou. Reputation Systems Evaluation Survey. ACM Comput. Surv. 48, 3, Article 35, December 2015. DOI: http://dx.doi.org/10.1145/2835373 - File:Reputation evaluation survey A35-koutrouli.pdf
  • (PierreF) S.R. Epstein. Craft Guilds, Apprenticeship, and Technological Change in Preindustrial Europe. The Journal of Economic History, Vol. 58, No. 3, September 1998. File:Craft guilds apprenticeship and technological change.pdf

Contributors:

@teomoura @pierref @trygve @lucasbamidele @yashovardhan @gbayomi @horefice @scorpione @markwhiting @dilrukshi @acossette @m.kambal @yoni.dayan @rcompton @arichmondfuller

To connect with this guild in development, visit https://crowdresearch.slack.com/messages/ogov-and-operations/details/ or #ogov-and-operations.