Leveling Brainstorm


Overview

Leveling, the process by which the platform defines skill competency and expertise, is the keystone process for the platform. On its own, leveling is a challenging undertaking, but once you account for the impact those decisions have on pricing, reputation, and training/learning, the scope can be overwhelming. Hence the need to establish the process, methodology, algorithm, and scalability of leveling before moving into development. In the equation of trust and power, leveling is more than a peer-to-peer dialogue; it affects clients and the platform's ability to deliver on micro and macro tasks. So leveling must be discussed through the following lenses:


Leveling:

  • How many levels are there per skill?
  • Should there be a standardized template for all skills?
  • What competencies are required for each stepping stone?
  • How is competency determined and validated (exams, interviews, real-world experience)?
  • How does one move from level to level?
  • Are there grades within a level (Beginner, Knowledgeable, Expert)? (A rough template sketch follows this list.)
  • How do we apply the standards across different cultural/educational systems?
  • How dynamic should the assessment of skills be?
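
As one concrete way to frame the questions above, here is a minimal Python sketch of what a standardized per-skill template could look like. The names (Skill, SkillLevel, Grade) and the fields are illustrative assumptions, not decisions the team has made.

  from dataclasses import dataclass, field
  from enum import Enum

  class Grade(Enum):
      # One possible answer to the "grades within a level" question.
      BEGINNER = "beginner"
      KNOWLEDGEABLE = "knowledgeable"
      EXPERT = "expert"

  @dataclass
  class SkillLevel:
      # One stepping stone in a skill ladder.
      level: int = 1                                      # e.g. 1..5 within a skill
      competencies: list = field(default_factory=list)    # what must be demonstrated
      validation: list = field(default_factory=list)      # exams, interviews, real-world experience
      grade: Grade = Grade.BEGINNER

  @dataclass
  class Skill:
      # A standardized template that could apply to every skill on the platform.
      name: str = ""
      levels: list = field(default_factory=list)           # ordered list of SkillLevel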

Training/Learning:

  • How does one acquire the skills, education and technique to advance through the system?
  • Do we create a learning library from open source/free content? Partnership with Coursera?
  • How important are mentoring, real-time reviews, and peer assessments in validating skill?

Pricing:

  • How do we set a price based on skill and task?
  • Does the individual or the platform set the price?
  • How responsive is the pricing engine to demand, region, and requester? (A rough pricing sketch follows this list.)
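
As a strawman for the pricing questions above, here is a minimal Python sketch of a price suggestion that scales a base task rate by skill level, demand, and region. Every factor, including the 25%-per-level step, is an assumption for illustration only, not a proposed policy.

  def suggest_price(base_rate, skill_level, demand_factor=1.0, region_factor=1.0):
      # Illustrative only: scale a base task rate by the worker's skill level,
      # current demand, and region.
      level_multiplier = 1.0 + 0.25 * (skill_level - 1)   # assumed 25% step per level
      return round(base_rate * level_multiplier * demand_factor * region_factor, 2)

  # Example: a $5.00 base task, a level-3 skill, and elevated demand
  print(suggest_price(base_rate=5.00, skill_level=3, demand_factor=1.2))  # -> 9.0

Whether the platform or the individual sets base_rate, and how demand_factor is computed, are exactly the open questions listed above.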

Reputation:

Foundational Impact

Foundational Effect Table

         | Workers                   | Requesters
Trust    | Trust/Workers Text Here   | Trust/Requester Text Here
Power    | Power/Worker Text Here    | Power/Requester Text Here

Further Solutioning

So, to dig deeper into a leveling system for our platform, there are four silos (Data Sets/Collection, Methodology/Algorithm, Distribution/impact on other elements within the enterprise, Output/UI) that merit deeper investigation.

Data Sets/Collection: It's not just what we collect but how we collect and treat it, relative to the algorithm and desired output. We will need to drill down into the observable (skills, education, certification, etc.) and latent (expertise, abilities, timeliness, etc.) data sets upon which Skill Utility, Worker Quality, and Marketplace Context rest. Are the fields required to fuel the algorithm(s) baked into the Data Model?
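
To make the observable/latent split concrete, here is a minimal Python sketch of how those fields might be represented in the Data Model. The class and field names are assumptions for discussion, not the actual schema.

  from dataclasses import dataclass, field

  @dataclass
  class ObservableData:
      # Directly collected, verifiable fields.
      skills: list = field(default_factory=list)
      education: list = field(default_factory=list)
      certifications: list = field(default_factory=list)

  @dataclass
  class LatentData:
      # Inferred fields, estimated from platform behavior rather than stated by the worker.
      expertise: float = 0.0
      abilities: float = 0.0
      timeliness: float = 0.0    # e.g. fraction of tasks delivered on time

  @dataclass
  class WorkerProfile:
      # The combined record that Skill Utility, Worker Quality and Marketplace Context
      # computations would read from.
      worker_id: str = ""
      observable: ObservableData = field(default_factory=ObservableData)
      latent: LatentData = field(default_factory=LatentData)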

Methodology/Algorithm: A literature review and a review of Slack have identified two approaches: the Hidden Markov Model (HMM) and a PageRank/SkillRank methodology. Both aggregate multiple data points to create an output/score/ranking. Yet how scalable and dynamic is each approach? And how compatible are they with feeding other systems?
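
As an illustration of the PageRank/SkillRank direction (not the HMM), here is a minimal Python sketch that ranks workers within a single skill from a peer-endorsement graph using power iteration. The graph structure, damping factor, and iteration count are assumptions; a production version would need convergence checks and incremental updates to stay scalable and dynamic.

  def skill_rank(endorsements, damping=0.85, iterations=50):
      # PageRank-style ranking over a peer-endorsement graph for one skill.
      # `endorsements` maps each worker to the workers they endorse; every worker
      # must appear as a key. Dangling workers spread their weight uniformly.
      workers = list(endorsements)
      n = len(workers)
      rank = {w: 1.0 / n for w in workers}
      for _ in range(iterations):
          new_rank = {w: (1.0 - damping) / n for w in workers}
          for w, endorsed in endorsements.items():
              if endorsed:
                  share = damping * rank[w] / len(endorsed)
                  for e in endorsed:
                      new_rank[e] += share
              else:
                  for e in workers:
                      new_rank[e] += damping * rank[w] / n
          rank = new_rank
      return rank

  # Toy example: three workers endorsing each other within one skill
  print(skill_rank({"ann": ["bo"], "bo": ["ann", "cy"], "cy": ["ann"]}))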

Distribution: How integrated are pricing, leveling, and reputation? How integrated should they be, especially when looking at scaling up and adopting new technologies and new algorithms?

Output/UI: How do we present the data to requesters and workers alike? How does the manner in which we collect data impact its presentation? Are we interested in dashboards, predictive analytics, a menu of options?

Considerations

  • The higher up the levels you go, the more important highly accurate leveling becomes to Requester Trust.
  • Is leveling static or dynamic?
  • Needs/Goals of the Leveling Mechanism
  1. Timely
  2. Achievable
  3. Scalable
  4. Accurate/Representative

Skills Matrix

Skill Matrix (Example):

Interconnection of Leveling, Reputation and Pricing:

DB Fields:

Task Matching:

Literature:

http://people.stern.nyu.edu/mk3539/papers/ICIS2014.pdf

http://www2007.org/papers/paper516.pdf

http://john-joseph-horton.com/papers/labor_allocation_in_paid_crowdsourcing_nudges_prices.pdf

http://ijcai.org/papers07/Papers/IJCAI07-427.pdf

https://www.lri.fr/~mbl/ENS/CSCW/2012/papers/Kittur-CSCW13.pdf

Crowd Research (internal) Content (Milestones, etc.)

http://crowdresearch.stanford.edu/w/img_auth.php/4/40/Crowdgaikwad.png

http://crowdresearch.stanford.edu/w/index.php?title=MileStone_4_GeekyGirls

http://crowdresearch.stanford.edu/w/index.php?title=MileStone_4_Team_Nike_New

http://crowdresearch.stanford.edu/w/index.php?title=Milestone_3_PixelPerfect_TrustIdea_2:_User_Rating_System

Meeting Notes

Brainstorming Meeting on 6/25/15

  • Reviewed the task process as an anchor for our conversation. There was some consensus that the process, while rudimentary, might actually work. Forwarded recommendations to Neal.
  • Discussed the leveling brainstorming Wiki and how to use it.
  • Segued into a conversation about creating a learning environment and how we can develop tools to help workers acquire skills, share skills and teach skills. Once we've completed the milestones of delivering data (Skills Matrix and Categories) to the dev team, we'll move to the learning platform.
  • Discussed balancing reputation with skills, and how stressing skills over reputation would be a better long-term goal for the platform (tangible product: skilled workers to message out to requesters, and a strong appeal to workers that we'll help grow their skills and make them more marketable).
  • Matching workers with tasks, and the mechanism for doing so, was also discussed. Niki also asked about the pricing process and why there was no bidding; we discussed maintaining wage integrity and aligning the value of work to reward.
  • DELIVERABLES: We have 2 deliverables due to the Dev team July 3rd: 1. identify and define the high-level categories of tasks/skills (Development, Marketing, Design, etc.), and 2. define 5 skills per category for which we will perform a skills matrix.
  • Uwe offered up his dev skills to the process, and we left committed to staying aware of the UX for the product.

Contributors

@acossette, @arichmondfuller, @dmorina, @james, @neilthemathguy, @Trygve, @niki_ab, @ucerron, @claudia