Crowd-sourcing the crowd-sourced crowd-sourcing platform
As soon as you read the "dark horse idea" title, you probably start wondering what exactly I mean. Let me break it down for you.
Daemo is a crowd-sourcing platform built through the collaborative efforts of developers, designers, and researchers worldwide, which is to say that the making of the crowd-sourcing platform is itself crowd-sourced. Now, how about we take this to another level?
MOTIVATION - When we look at the larger picture of crowd work, we realize that the job security these platforms offer is extremely low. Is there a way to increase job security so that we can draw people to these platforms and compete with traditional labor systems? Requesters hold the authority to price tasks and decide qualifications for tasks or sub-tasks, and these decisions can't really be challenged openly.
WORKAROUND - Looking at this particular problem, we realize it is a complex one to handle, because there really is no straightforward approach to it. But I think one part of the solution to this tremendously overwhelming problem might be to crowd-source the "crowd-sourced crowd-sourcing platform"!
We all understand that the current generation of crowd-sourcing platforms falls short on fair wages, fair treatment of workers, and the assignment and breakdown of tasks. There are also several technical issues involved: the submit button for a task may not work, the saving or transmission of work may fail, or a link may be broken, all of which can result in workers not getting paid for work they have, for all practical purposes, completed. We need to overcome these issues in order to come up with the "next generation" crowd-sourcing platform.
How about we include an inbuilt (or third-party) verification and maintenance system, itself crowd-sourced, whose job is to ensure that links provided are not broken, instructions are clear, and so on? We could also integrate an auto-tagging system where tags are automatically added based on the nature of the task (which is pretty complicated to implement). Another feature we could crowd-source is the platform's scope: broaden its breadth and deepen its depth by extending it to other domains such as research projects, software testing, social surveys (social studies), history documentation, linguistics, and data analytics. This way we can get tasks in different domains, catering to a wider range of expertise and audiences, and opening up a whole new world of opportunities, including education and skill development. Since this process ripples through another stage, it should result in better pay, and hence improved job-security prospects, and it is going to be truly open governance, since qualifications are also going to be collectively decided.
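To make the verification and auto-tagging ideas concrete, here is a minimal sketch in Python. It is only an illustration under stated assumptions: the `TAG_KEYWORDS` map and both function names are hypothetical, the link check is purely syntactic (a real crowd verifier would also try fetching each link), and a production tagger would learn tags from worker feedback rather than hard-coded keywords.

```python
import re
from urllib.parse import urlparse

# Hypothetical keyword-to-tag map; a real deployment would learn these
# associations from worker behavior instead of hard-coding them.
TAG_KEYWORDS = {
    "transcription": ["transcribe", "audio", "recording"],
    "translation": ["translate", "language"],
    "image-labeling": ["image", "classify", "label"],
    "survey": ["survey", "questionnaire", "opinion"],
}

def check_links(task_description):
    """Flag URLs in a task description that are syntactically broken
    (missing scheme or host). A crowd verifier would additionally
    fetch each link to confirm it resolves."""
    urls = re.findall(r"https?://\S+|www\.\S+", task_description)
    broken = []
    for url in urls:
        parsed = urlparse(url if "://" in url else "http://" + url)
        if not parsed.netloc or "." not in parsed.netloc:
            broken.append(url)
    return broken

def auto_tags(task_description):
    """Suggest tags by simple keyword matching on the task text."""
    text = task_description.lower()
    return sorted(tag for tag, words in TAG_KEYWORDS.items()
                  if any(w in text for w in words))
```

For example, `auto_tags("Please transcribe this audio recording")` suggests the `transcription` tag, and `check_links` would flag a link like `http://broken` that has no valid host.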
Maybe we can have an efficient machine learning algorithm in the back end to perform pattern matching and prevent redundant tasks, where workers are required to repeatedly classify images, transcribe, translate, etc. If the computer learns the technique, the remaining similar tasks can be performed automatically. We can also introduce crowd-sourced quality filters to screen out counter-productive contributions.
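One small piece of such a back end could be flagging near-duplicate tasks before they are re-posted to workers. The sketch below uses token-set Jaccard similarity as a stand-in for the learned model the text envisions; the function names and the 0.8 threshold are assumptions for illustration only.

```python
import re

def tokens(text):
    """Lowercased word set for a task description."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def find_redundant(new_task, existing_tasks, threshold=0.8):
    """Return existing task descriptions that look like near-duplicates
    of new_task; their known answers could be reused instead of paying
    workers to redo the same work."""
    new = tokens(new_task)
    return [t for t in existing_tasks
            if jaccard(new, tokens(t)) >= threshold]
```

A real system would likely replace the word-overlap measure with a trained classifier or text embeddings, but even this crude filter catches tasks that differ by only a word or two.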