- 1 Introduction
- 2 CrowdResearch Work Dynamics
- 3 What to do Next?
- 4 Milestones
- 5 Milestone 1: Actions Required
- 6 Submission Milestone 1
- 7 System Architecture
- 8 Vision
- 9 Initial Brainstorming
We propose a fault-tolerant, extensible, and modular system that scales to the level of intended usage. Our main goal is to design and build the core infrastructure that supports basic interactions between workers and requesters. More specifically, we aim for a design that can automatically adapt to the power and trust structures that a given collective of requesters and/or workers defines.
CrowdResearch Work Dynamics
The CrowdResearch teams will collaborate to produce the Core Architecture collectively. A list of tasks needed to carry out the architecture has been identified. Teams will sign up for tasks and complete them together: each task will have a collective of teams assigned to it, and those team collectives will execute the task (teams collaborate to execute a task). We will then connect the tasks and have a finished architecture!
What to do Next?
- Take a look at the task division
- Choose a task that matches your expertise and own it
- Submit deliverables on GitHub and Wiki
- Sign up for tasks you want to do, have experience in, want to learn from, etc. here.
Below is the pathway to build the system:
Milestone 1: Design & Start Implementing the Core Modules; Timeline: Friday, 17th 9am PST
The Core Modules consist of:
- User Management
- Create Account
- User Roles
- User Profiles
- Dashboard for workers
- List of available jobs
- Selecting a desired job
- Executing & submitting the job
- Profile with a list of jobs completed and payment accumulated
- Publishing tasks
- Dashboard for requesters
- Create Project & design Tasks
- Create Qualification
- Publish Tasks
- Review Results
- Mechanisms for viewing the jobs that workers executed.
- Profile with list of jobs that were requested and the amount of money spent on each.
- Data models supporting the above functionalities
- Mockup designs supporting the above functionalities
- Unit test cases covering the above cases
- Documentation explaining the above system
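As a starting point for the data-models bullet, here is a minimal sketch of the entities implied by the worker and requester dashboards above. All class and field names are assumptions for illustration, not a finalized schema:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical core entities implied by the worker/requester dashboards.
# Field names are illustrative assumptions, not a finalized schema.

@dataclass
class User:
    username: str
    role: str                      # "worker" or "requester"

@dataclass
class Task:
    title: str
    reward: float                  # payment per completed job
    qualification: str = ""        # e.g. "approval rate > 95%"

@dataclass
class Job:
    task: Task
    worker: User
    status: str = "available"      # available -> selected -> submitted -> reviewed

@dataclass
class Project:
    name: str
    requester: User
    tasks: List[Task] = field(default_factory=list)

# The worker-profile view ("payment accumulated") can then be derived
# from completed jobs rather than stored separately:
def payment_accumulated(jobs, worker):
    return sum(j.task.reward for j in jobs
               if j.worker is worker and j.status == "reviewed")
```

A relational version of these entities would map naturally onto Django models, with the derived profile fields computed by queries.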
Milestone 2: Finish Implementation & Testing of the Core Modules; Timeline: Friday, 24th 9am PST
Milestone 1: Actions Required
- Sign up with your team for the tasks (modules) you want to help execute: edit the wiki of the tasks you want to do by adding your team name and the names of its members.
- Each team needs to sign up for 1-3 tasks.
- The teams working on a particular task need to communicate with each other and agree on a work plan to execute it. (For each task, we recommend having one team lead the others.)
- For each task, you will need to collectively provide:
- Basic design of what you will implement (diagrams and short description)
- Expected input and output of what you will implement (technical diagrams and/or a short description)
- An explanation of how other components will communicate with your part. We recommend creating a list of the other team collectives (teams working on a given task) that your part needs to communicate with, and talking with them about how your parts will interoperate.
- Have a setup ready to execute your task.
- Start implementation.
Submission Milestone 1
1) Please choose one of the tasks according to your expertise
2) Submit the milestone deliverables using the links below.
3) There is one submission per task; you will collaborate with members from different teams.
4) Sign up here.
System Architecture
- Nginx is used as a reverse proxy and serves the static files.
- Gunicorn will handle the WSGI applications, in our case the Django apps.
- REST API: the Django app system is a great way to modularize. After completing the main web application, we will work on a REST API with OAuth2 authentication. This API will be used by mobile and desktop clients; other applications can be derived as the project progresses.
- WebSockets: we will need WebSockets for live communication between the client apps and between the users themselves. We will start with Tornado if it plays well with Django.
- Gunicorn can run multiple web workers, and we will use Redis to handle the sessions for WebSockets and so on.
- In this architecture it is easy to implement new features: group them into a module and integrate its URLs in the URLconf (urls.py). This way you can implement any feature and plug it into the existing application.
- Another way is to extend the current code, which can be done in three simple steps:
- Create your HTML templates
- Add the class-based views in views.py or another file
- Import the views in the URLconf and define your URL mappings there; this will not affect the existing features in any way.
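The plug-in idea in the steps above can be sketched framework-independently: new features register their URL mappings without touching existing ones. In the real application this would be expressed with Django class-based views and a URLconf; the names here (`JobListView`, `url_patterns`) are illustrative assumptions only:

```python
# Framework-agnostic sketch of "plug in a feature via URL mappings".
# In the actual app this would be Django views.py + urls.py.

class View:
    """Minimal stand-in for a class-based view."""
    def dispatch(self, request):
        raise NotImplementedError

class JobListView(View):
    def dispatch(self, request):
        return "rendering job list for %s" % request["user"]

# Existing URL mappings stay untouched; a new feature only appends
# its own mappings, so nothing else is affected.
url_patterns = {
    "/": lambda request: "home",
}
url_patterns["/jobs/"] = JobListView().dispatch  # plug in the new module

def route(path, request):
    handler = url_patterns.get(path)
    return handler(request) if handler else "404"

print(route("/jobs/", {"user": "alice"}))  # rendering job list for alice
```

The design point is the same as in the bullets: routing is data, so adding a feature is a registration step rather than a change to existing code.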
- Client makes a request via the web using AngularJS ngResource, or via a native app made using PhoneGap.
- The request makes a REST API call to the Heroku-hosted Django server.
- Requests prefixed with /api/<call> get routed via Gunicorn to the Django API server running the REST framework.
- Multiple instances of the API server will be provisioned on different nodes to scale with traffic; each request is routed round-robin until a free server is found and accepts it.
- Django talks with the database coordinator, which itself talks only to the master database.
- Reads are served from the slaves and writes go to the master and are synced; this is the job of the PG coordinator. In the future the data tier can be scaled using pgpool-II, a middleware that sits between PostgreSQL servers and PostgreSQL clients. Watchdog can be used to ensure high availability.
- Data is sent back up the chain via an HTTP response on the REST API and the client view is updated. No page refresh is required anywhere, which also allows for a smooth native mobile interface. This is provided natively by Heroku, but the setup can be used on AWS, GCE, Rackspace, or any cloud provider to allow for maximum scaling of the application.
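The round-robin provisioning step can be sketched as follows. The node names and the availability check are assumptions for illustration; a real deployment would use the load balancer's own health checks:

```python
from itertools import cycle

# Hypothetical API server pool; requests are handed out round-robin
# and skip past busy nodes until a free one accepts, as described above.
servers = ["api-node-1", "api-node-2", "api-node-3"]

def make_balancer(pool, is_free):
    ring = cycle(pool)
    def dispatch(request):
        for _ in range(len(pool)):           # try each node at most once
            node = next(ring)
            if is_free(node):
                return (node, request)
        raise RuntimeError("no free API server")
    return dispatch

busy = {"api-node-2"}
dispatch = make_balancer(servers, lambda n: n not in busy)
print(dispatch("/api/jobs"))   # ('api-node-1', '/api/jobs')
print(dispatch("/api/jobs"))   # busy node 2 is skipped: ('api-node-3', '/api/jobs')
```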
Data Model
For a detailed example of the data model, please see: Data Model.
Vision
The following analysis was made so that we could separate what all teams have in common (the basics) from the "additional" needs individual teams might have. Some teams focus on empathy, others on trust or a legal sense. The following diagrams exemplify this.
The core of our platform rests on two concepts: trust and power. Each team has different ways of assigning trust and power in its platform and proposal. Here we list some examples accordingly, so that everyone can grasp the idea. The important thing to remember is that TRUST and POWER need to be the core of our platform, as they are the core of, and motivation for, our proposals.
- Background checks
- Chat available to communicate with workers, requesters and moderators.
- Profiles for each will be separate but interlinked, i.e. everyone will have a personal and a work profile. Someone can excel as a worker but not be so good on the social aspect, or vice versa.
- Personal Profile
- personal information
- cultural information
- activities to get involved in and tracking of previous activities (a log of previously completed activities, and a blog with postings of current activities to get involved in).
- Work Profile
- Previous hits done
- Skill set
- Badges or endorsements (useful, in particular, for newbies). Similar to LinkedIn where users can be new but they can start polishing their profiles.
- Communication channels for both parties. Everyone can comment, question and review hits. Power is distributed among participants.
- Requesters and workers can volunteer as mentors to others.
- Issues & conflicts can be resolved through moderators.
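The separate-but-interlinked profiles idea above can be sketched as a small data model. Class and field names are assumptions, not a finalized design:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the separate-but-interlinked profiles idea.
# Names are assumptions, not a finalized design.

@dataclass
class PersonalProfile:
    personal_info: str = ""
    cultural_info: str = ""
    activity_log: List[str] = field(default_factory=list)

@dataclass
class WorkProfile:
    hits_done: int = 0
    skills: List[str] = field(default_factory=list)
    badges: List[str] = field(default_factory=list)   # endorsements help newbies

@dataclass
class Member:
    name: str
    personal: PersonalProfile = field(default_factory=PersonalProfile)
    work: WorkProfile = field(default_factory=WorkProfile)

    def endorse(self, badge):
        # Badges accumulate on the work profile without touching the
        # personal one, so someone can excel at one and not the other.
        self.work.badges.append(badge)
```

Keeping the two profiles as separate objects on one member mirrors the text: the link is the member, while the reputations evolve independently.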