Milestone 4 researchinprogress


Design axes for the 3 themes

THEME 1 : Results

How might workers and requesters work together to produce higher-quality results?


Similarities:

a) Both include a system to detect unreliable reviewers. For example, the Peer-Review-System filters out poor reviewers through some mechanism (the mechanism is not described on the wiki page, so we assume one exists, since otherwise the idea would be meaningless); the same goes for the Quality-Control-Managers.

b) Both have the benefit of identifying genuine workers and rewarding them for their work through payment.

c) Both distribute the tedious work that the requester, as a single entity, would otherwise do alone: deciding on the correctness of submitted work.

d) Both share the drawback that an incorrect HIT can be accepted if a majority of the workers/managers approve it while only a minority actually go through the work and judge it on its merits. This leads to false acceptance of both the submitted work and the reviewers themselves.

e) Both address only the requesters' concerns and not the workers', so many issues faced by workers may remain unhandled.

f) Since the work has to be reviewed by either peer workers or managers, it is open for others to view, which increases the chance of it being plagiarised.


Differences:

a) The Peer-Review-System has workers review their colleagues, while Quality-Control-Managers introduces a new set of employees, labelled managers, who judge the work done by workers.

b) In the Peer-Review-System, the requester itself rechecks the bad/rejected HITs; in Quality-Control-Managers, rechecking is done by majority rule: if most managers approve a piece of work, it is paid. The same votes also help filter out unwanted managers (a sketch of this majority-rule decision follows the Differences list).

c) Quality-Control-Managers requires an extra degree of trust, because the requesters never review the rejected HITs, whereas in the Peer-Review-System the rejected/invalid HITs are re-reviewed by the requester.
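
To make the majority-rule acceptance discussed above concrete, here is a minimal sketch in Python. Neither proposal specifies the exact voting or filtering procedure, so the function names, the tie-breaking rule, and the way disagreeing reviewers are flagged are our assumptions for illustration only.

```python
from collections import Counter

def decide_hit(votes):
    """Accept or reject a submitted HIT by simple majority of reviewer votes.

    `votes` maps a reviewer/manager id to True (approve) or False (reject).
    Ties count as rejection here; neither proposal specifies tie-breaking,
    so this is an assumption.
    """
    tally = Counter(votes.values())
    return tally[True] > tally[False]

def reviewers_against_majority(votes, decision):
    """Reviewers whose vote disagrees with the final decision.

    The same votes could be used to filter unreliable reviewers/managers,
    e.g. by lowering the standing of those who repeatedly disagree with
    the majority. This also exposes the drawback noted above: if most
    reviewers approve without actually reading the work, the careful
    minority is the one that gets flagged.
    """
    return [r for r, vote in votes.items() if vote != decision]

votes = {"mgr_1": True, "mgr_2": True, "mgr_3": False}
decision = decide_hit(votes)                      # True -> HIT accepted and paid
print(decision, reviewers_against_majority(votes, decision))  # True ['mgr_3']
```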


Design Axes

[Image: Design axes for the Results theme]

THEME 2 : Transparency

How might we make payment clear and transparent?


Similarities:

a) All of these ideas are based on making payment transparent, "from requesters, to workers".

b) All of them were written with a human perspective in mind and are quite descriptive of how a human/worker behaves in response to certain "<insert word here>".

c) None of these ideas has been implemented yet (except the Kickstarter model), and their authors really hope that they will be implemented in a new crowdsourcing platform. They all sound realistic to us.


Differences:

a) While the fourth idea, "The Kickstarter Model", is already present in the market, it still seems interesting; the comparison to "Uber Surge" is drawn in quite an interesting manner.

b) The fourth idea is the only one which seems close to being a “Dark Horse”.

c) Standardization of Task Pricing uses the "supply/demand principle" and improves on it quite well by introducing an equilibrium price and price ranges (see the sketch after this list).

d) TestSet uses history as the source of its idea and raises the question of how important satisfaction is in a job. That is an interesting but very subjective question, and the idea of "satisfaction" is left open-ended.
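
As a rough illustration of supply/demand pricing with an equilibrium and permitted ranges, here is a minimal sketch. The linear price formula and the 20% band are our own assumptions; the original idea does not give concrete numbers.

```python
def equilibrium_price(open_tasks, active_workers, base_rate=1.0):
    """Toy supply/demand price: more open tasks per active worker pushes the
    price up; more idle workers pushes it down. The linear form and the
    base rate are assumptions for illustration only."""
    if active_workers == 0:
        return base_rate
    return base_rate * (open_tasks / active_workers)

def allowed_range(eq_price, band=0.2):
    """Requesters would have to price a task within +/- `band` of the
    equilibrium, which is our reading of the proposed 'ranges'."""
    return (eq_price * (1 - band), eq_price * (1 + band))

eq = equilibrium_price(open_tasks=120, active_workers=100)   # 1.2
low, high = allowed_range(eq)                                # (0.96, 1.44)
print(f"equilibrium ${eq:.2f}, allowed ${low:.2f}-${high:.2f}")
```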


THEME 3 : Reputation

How might we design better reputation systems?


Similarities:

a) All the ideas talk about rating or categorising workers based on parameters like experience and skillset. This helps requesters and other workers know who produces higher-quality work, so they can contact those people when required.

b) In all the ideas, requesters would prefer a higher-rated worker for a task. Hence, less experienced workers might be in lower demand and have fewer work options.


Differences:

a) 'User Rating System' rates workers based on the number of tasks they do and how much experience they have, while 'Top Workers Closer to Requesters' and 'Expose worker skills' rate workers based on their skillset. 'Different Levels of workers' rates workers on various factors, such as achievements earned from previously completed tasks.

b) While 'Top Workers Closer to Requesters' and 'Different Levels of workers' rate the worker automatically based on the number of tasks and the kind of work they do, 'Top Workers Closer to Requesters' does not explain how top workers are rated on their skillset. 'Expose worker skills', however, says that the worker should rate themselves using predefined points allotted to each skill (a sketch of such a point-based rating follows this list).

c) 'User Rating System' rates both workers and requesters. Requesters are rated on factors such as the number of tasks they have posted and how regularly they work on the portal; a requester's rating tells workers about the requester's experience and authenticity, so workers (especially the top-rated ones) can choose whether or not to work for a requester with a given rating. 'Top Workers Closer to Requesters' and 'Expose worker skills' only rate workers. 'Different Levels of workers' also introduces moderators, high-level workers who help other workers and act as intermediaries between requesters and workers, and lets workers mark requesters as 'favourites' to separate them from other requesters.
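
To illustrate how a rating could combine task history (as in 'User Rating System') with self-reported skill points (as in 'Expose worker skills'), here is a minimal sketch. The skill names, their point values, and the 0.7/0.3 weighting are hypothetical and not taken from any of the proposals.

```python
from dataclasses import dataclass, field

# Predefined points per skill, our reading of 'Expose worker skills'.
# The skills and point values below are made up for illustration.
SKILL_POINTS = {"transcription": 5, "image_labelling": 3, "translation": 8}

@dataclass
class WorkerProfile:
    tasks_completed: int = 0
    approved_tasks: int = 0
    skills: list = field(default_factory=list)   # self-reported skills

    def skill_score(self):
        return sum(SKILL_POINTS.get(s, 0) for s in self.skills)

    def rating(self):
        """Blend task history with self-reported skill points.
        The 0.7/0.3 weighting is an assumption, not part of any proposal."""
        approval_rate = (self.approved_tasks / self.tasks_completed
                         if self.tasks_completed else 0.0)
        return 0.7 * approval_rate + 0.3 * min(self.skill_score() / 10, 1.0)

w = WorkerProfile(tasks_completed=50, approved_tasks=46,
                  skills=["transcription", "translation"])
print(round(w.rating(), 2))  # 0.94
```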


Design Axes

[Image: Design axes for the Reputation theme]


List of Ideas and the Theme they belong to

Milestone 4 researchinprogress Results: Managers for both workers and requesters

Milestone 4 researchinprogress Transparency: Standard TestSet Checkpoint Kickstart

Milestone 4 researchinprogress Reputation: Mutual Rating System