WinterMilestone 4 Team-UXCrowd

From crowdresearch
Revision as of 07:02, 7 February 2016 by Senadhipathigesumindaniranga (Talk | contribs) (Introduction)



Ensuring quality in a crowdsourced platform by introducing a Platform Ready certification, sentiment analysis, and a standard gold test. By S.S.Niranga | Alka Mishra


A global phenomenon with a minimal barrier to entry, crowdsourcing has transformed the human workforce from mere consumers of products into active participants in value co-creation. In the crowdsourcing ecosystem, work is being redefined as an online meritocracy in which skilled work is rewarded in real time and job training is imparted immediately via feedback loops[1]. Under such working conditions, the diverse pool of untrained participants, workers and requesters alike, often finds itself mired in mistrust and ambiguity with respect to result quality and task authorship. This indicates a need for quality-control mechanisms that account for a wide range of behavior: bad task authorship, malicious workers, ethical workers, slow learners, etc.[2]. Although many crowdsourcing platforms offer clear guidelines, discussion forums, and tutorial sessions to overcome some of these issues, a large percentage of workers and requesters remain unfamiliar with how to use the platforms. In this paper, we assess how crowd workers can produce quality output via the three proposed methods below.

• Platform ready certifications

• Sentiment analysis system

• Gold Test


Crowdsourcing is used to complete tasks at low cost by publishing them to the general public. Many crowdsourcing platforms are available in the market, and most of them offer a decent service to their users[3]. Although many task requesters benefit from these systems, some question the quality of the work that workers produce[4]. Half-baked task creation and lack of attention are among the reasons for this, but many platforms have their own mechanisms to prevent it. Some crowdsourcing platforms provide tutorials, practice sessions, and general forums to mitigate the risk, and sometimes they offer interactive training sessions for users. However, the majority of workers and requesters hardly use these platform features, and sometimes they lack sufficient knowledge about the system.

Platform ready certifications

To address these issues, we introduce "platform ready certifications" for users. The certification will have multiple stages:

• Platform ready (Beginner)

• Intermediate

• Expert

Each certification stage defines the proficiency of the user, and each worker must obtain the Platform Ready certification (Beginner) before starting work on a task. To earn the certification, each user answers a series of basic questions about the platform, e.g., how to accept a task, how to communicate with a requester, and how to rate. The certification will encourage users to learn the platform thoroughly, which should lead to higher-quality output. Once a worker has completed an adequate number of projects, logged reasonable working hours, earned high ratings from requesters, and provided sufficient community support (such as writing articles), the worker can take the Intermediate or Expert level certification. The advanced certification levels will motivate workers to become professionals and help out the community.
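The advancement criteria above can be sketched as a simple eligibility check. This is a minimal illustration only: the thresholds and field names below are hypothetical, not part of any platform's actual specification.

```python
# Hypothetical thresholds; the real criteria would be set by the platform.
MIN_PROJECTS = 50
MIN_HOURS = 100
MIN_AVG_RATING = 4.0
MIN_COMMUNITY_CONTRIBUTIONS = 5


def eligible_for_advanced_certification(worker):
    """Return True if a worker may attempt the Intermediate/Expert exam.

    `worker` is a dict with the (assumed) keys below.
    """
    return (worker["projects_completed"] >= MIN_PROJECTS
            and worker["hours_worked"] >= MIN_HOURS
            and worker["avg_requester_rating"] >= MIN_AVG_RATING
            and worker["community_contributions"] >= MIN_COMMUNITY_CONTRIBUTIONS)


worker = {"projects_completed": 62, "hours_worked": 140,
          "avg_requester_rating": 4.3, "community_contributions": 7}
print(eligible_for_advanced_certification(worker))  # True
```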


Requesters may also take the certification, but it is not mandatory. Those who do will be identified as certified requesters, which adds value to their profiles and makes workers more willing to work with them.

Sentiment analysis system

To address the issue of inefficient task authorship, Daemo proposes a feedback iteration in which workers comment on a prototype task and requesters use that feedback to create a refined task. We propose that the feedback from the prototype task be presented to requesters in the form of a sentiment analysis. Sentiment analysis, or opinion mining, deduces and analyzes the emotions conveyed in text, and it is highly efficient for complex tasks with a large number of feedback comments. A mood board (a visual representation) generated by analyzing the feedback would be much easier for requesters to understand and would not require any language proficiency[5].
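As a rough sketch of the idea, worker feedback could be aggregated with a simple lexicon-based sentiment scorer. The word lists and comments below are purely illustrative; a production system would use a trained model or a full sentiment lexicon rather than these hand-picked words.

```python
# Minimal lexicon-based sentiment scorer for prototype-task feedback.
# The word sets are illustrative assumptions, not a real lexicon.
POSITIVE = {"clear", "easy", "good", "helpful", "fair"}
NEGATIVE = {"confusing", "unclear", "broken", "unfair", "slow"}


def sentiment_score(comment):
    """Score one comment: +1 per positive word, -1 per negative word."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)


def summarize(feedback):
    """Aggregate comment scores into counts a requester can read at a glance."""
    scores = [sentiment_score(c) for c in feedback]
    pos = sum(s > 0 for s in scores)
    neg = sum(s < 0 for s in scores)
    return {"positive": pos, "negative": neg, "neutral": len(scores) - pos - neg}


feedback = ["Instructions were clear and easy",
            "The payment step is confusing",
            "Task loaded fine"]
print(summarize(feedback))  # {'positive': 1, 'negative': 1, 'neutral': 1}
```

The resulting counts could then drive the visual mood board described above, so the requester sees the overall reaction without reading every comment.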

Figure 1: Three of the 14 worker feedback comments on Daemo's prototype task (right), which can be used to revise the task interface (left).

Milestone Contributors

S.S.Niranga @niranga,

Alka Mishra @alkamishra