Milestone 3 sanjosespartans TrustIdea 2: Best Payment on Accuracy of Responses

Introduction:

Crowdsourcing has gained momentum in the QoE (Quality of Experience) research community as a means to both expedite and reduce the cost of conducting subjective user assessments, while allowing end users to perform tasks in their real-world settings. The idea is to outsource a job (in this case, subjective quality assessment) to an anonymous crowd of users in the form of an open call. On existing commercial Internet crowdsourcing platforms such as Amazon Mechanical Turk and Microworkers, “employers” submit tasks, while “workers” (widespread Internet users) may complete them for an announced payment. In the context of subjective user studies, such an approach may significantly reduce the time and cost of conducting lab tests and offer access to a large and internationally diverse panel of users. These benefits come at a price in terms of result quality and potential instrumentation difficulties, which make the approach unsuitable for certain types of assessment (e.g., cases when special equipment/devices or controlled end-user settings are needed). For Web QoE studies, however, crowdsourcing seems like a promising approach for assessing large numbers of test conditions.

We note that a reliable user is considered to be one who expresses their true impression of the perceived quality, while unreliable users may assign random or constant grades during quality assessment, rush to finish the assessment as quickly as possible, or fail to complete all steps of a given task.
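As a rough illustration, the behavioral patterns above lend themselves to simple automated screening. The following is a minimal sketch, assuming each worker submission is available as a list of numeric grades plus a completion time and a completed-steps flag; the function name, parameters, and thresholds are our own illustrative assumptions, not part of any particular platform's API. Constant grading can be caught by checking rating variance; truly random grading is harder to detect and in practice usually requires hidden gold-standard (known-answer) items.

    from statistics import pstdev

    def is_suspect(grades, seconds_taken, completed_all_steps,
                   min_seconds=30):
        """Flag submissions matching common unreliability patterns.
        All thresholds are illustrative and would need per-campaign tuning."""
        if not completed_all_steps:
            return True                    # skipped part of the task
        if seconds_taken < min_seconds:
            return True                    # rushed through the assessment
        if len(grades) > 1 and pstdev(grades) == 0:
            return True                    # constant grading (zero variance)
        return False

    # Example: a worker who rated everything 3 in 12 seconds is flagged.
    print(is_suspect([3, 3, 3, 3], seconds_taken=12, completed_all_steps=True))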


Effect of Incentives on Participation:

Related studies have found that while participation rates increase with higher payment, data quality (e.g., in terms of reliability and accuracy) appears to be virtually independent of payment levels.

Research has found that financial incentives may encourage improved quality, e.g., when a bonus is offered for accurate results. With regard to the quantity of work performed, studies have found that subjects generally worked less when the payment was lower. Other studies addressing worker motivation have found that in addition to extrinsic motivation (e.g., financial incentives), intrinsic motivation to complete a task (e.g., enjoyment, social contacts) often plays a key role. A simple accuracy-conditional bonus scheme is sketched below.
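To make the bonus idea concrete, a campaign could pay a base rate plus a bonus that is granted only when a worker's accuracy on check questions crosses a threshold. This is purely illustrative; the amounts, the threshold, and the idea of measuring accuracy on gold-standard items are our assumptions, not prescriptions from the cited studies.

    def payout_cents(base_cents, accuracy, bonus_cents=50, threshold=0.9):
        """Base pay plus an accuracy-conditional bonus.
        accuracy: fraction of gold-standard items answered correctly.
        All amounts and the threshold are illustrative assumptions."""
        return base_cents + (bonus_cents if accuracy >= threshold else 0)

    # A worker at 95% accuracy on the check questions earns the bonus.
    print(payout_cents(base_cents=100, accuracy=0.95))   # -> 150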

One of the advantages, and also one of the problems, of crowdsourcing is that the population of users is global, with a certain bias towards developing countries. This allows for a varied user base, but may also result in a test population that is not representative of the intended audience.


Conclusion:

Crowdsourcing provides a valuable mechanism for conducting experiments and related studies quickly and cheaply while still obtaining meaningful results.

Another apparent impact of the increased payments was the much faster completion of the test campaign. While this is in some cases desirable, it also results in a narrower variety of users in terms of demographics (due, for example, to the influence of time zones). It might be worth taking this into account when proposing campaigns, and possibly throttling their execution in order to obtain more representative population samples, as sketched below.
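One simple throttling strategy, assuming the platform lets the requester control when tasks are published, is to release them in small batches spread across the day so that workers in different time zones get a chance to participate. The release_batch callable below is a hypothetical stand-in for whatever publish API the platform actually offers.

    import time

    def throttled_release(task_ids, release_batch,
                          batch_size=20, interval_hours=4):
        """Publish tasks in small batches spread over the day so that
        workers across time zones can participate.
        release_batch is a hypothetical placeholder for the platform's
        real publish call; batch size and interval are illustrative."""
        for i in range(0, len(task_ids), batch_size):
            release_batch(task_ids[i:i + batch_size])
            if i + batch_size < len(task_ids):
                time.sleep(interval_hours * 3600)  # wait before next batch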

We note that at this point we are not able to make generalized claims based on these results, as they strongly depend on the actual crowdsourcing platform. However, it seems clear that payment aspects need to be considered when setting up crowdsourcing campaigns in the context of QoE assessment and modeling. It would also be interesting to investigate the effects of decreasing payment levels, both with regard to campaign completion times and result quality, as well as to compare them to ways of crowdsourcing “for free” (e.g., with student groups, or via social platforms such as Facebook).