Milestone 6 munichkindl


Gaining Trust in Crowds through Personal Interaction

Abstract

Current crowdsourcing platforms do little to balance power or build trust between workers and requesters. We present a new crowdsourcing platform that focuses on creating trust among all members and on distributing power equally. It is built on two core elements: workathons and a peer-based mentoring system. Complementary features include a mutual rating system with achievement badges, promotion of top performers, and automatic payment when a task review is delayed too long. By adding these interaction elements, we make the platform's dynamics more natural and prevent workers from being treated simply as a 'human API'.

Motivation

Based on our group's recent discussions, we see two main problems in today's crowdsourcing platforms: a lack of trust and an unbalanced distribution of power. How can workers and requesters trust that the other party acts in their interest? How can requesters trust the quality of the results they receive? How can workers trust that they will be paid and respected? How can we create a balance of power between workers and requesters?

Since, in a professional sense, trust is the channel between a worker and a requester [1], we believe answering these questions is crucial to obtaining better quality of work on crowdsourcing platforms, which in turn improves the reputation of such systems. To create trust, workers and requesters need to build personal relationships, which arise naturally in any community where people work together in a physical environment. On the currently most prominent crowdsourcing platform, Amazon Mechanical Turk, little interaction between requesters and workers is encouraged.

Related Work

One of the key issues any online platform faces is trust and reliability. To succeed with a crowd-based task, a requester takes considerable risk by trusting nearly unknown people to complete the work. The platform should help bridge this gap by taking a more human-centered approach to the task [1].

Trust is a vital aspect of any relationship between people. In the professional sense, trust is the channel between a worker and a provider [2]. This channel is strengthened by more interaction, which allows both parties to understand each other better and develop more trust. By arranging more interaction, we give them a better chance to understand each other's temperament and the required work.

Another perspective is to think of the humans in a crowdsourcing environment as 'Human Devices' [3]. This implies that while humans will eventually finish the task, they require tuning to perform at their best. One solution is providing a mentorship program, as we propose in this paper. Hirth et al. [5] have indicated that the best approaches to detecting task replies of insufficient quality are majority decisions for simple tasks and peer review for more complex jobs. Having received personal support from the beginning, workers will familiarize themselves with the platform faster and are expected to be more likely to become regular users.

However, we should not forget that, unlike machines, humans have complex ways of building trust. This must be kept in mind during workathons, as humans generally trust persistent, recurrent behavior [4]. This pattern can also be used to assess a worker's efficiency, by assigning a selected number of tasks to be solved consistently within a given time. This would improve the relationship between worker and requester.

Insight

We propose a new crowdsourcing platform built around two elements: regular workathons, which are 12- or 24-hour periods in which a group of requesters and pre-selected workers work closely together, and a mentoring program that helps new workers get to know the system and some of the other workers and requesters. Both make each party appear more human to the others, improving trust and the quality of the final results.

Furthermore, smaller novelties of our platform include promoting reliable members with achievement badges in their profiles, which supports the search for skilled workers by highlighting them to requesters. The last important feature of our system is that payment is guaranteed if a review is delayed, shifting some of the requesters' power to the workers and further improving trust in our platform.

System

Our main feature, the so-called workathons, are always organized by a closed group of requesters who invite a defined number of workers beforehand. For 12 or 24 hours, all participants communicate via messages or video chat, or meet in person, to get as much work done as possible. Every workathon ends with an evaluation phase. In this way, requesters and workers get to know each other and each other's skills and reliability.
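As a minimal sketch of how a workathon could be represented in such a system (the class, field names, and the mutual-rating record are illustrative assumptions, not a fixed design):

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class Workathon:
    # Hypothetical record for one workathon; names are illustrative only.
    organizers: List[str]            # the closed group of requesters
    invited_workers: List[str]       # workers invited beforehand
    start: datetime
    duration_hours: int = 12         # 12 or 24 hours
    evaluations: List[dict] = field(default_factory=list)

    @property
    def end(self) -> datetime:
        return self.start + timedelta(hours=self.duration_hours)

    def is_running(self, now: datetime) -> bool:
        return self.start <= now < self.end

    def add_evaluation(self, requester: str, worker: str, rating: int) -> None:
        # Mutual ratings are collected in the evaluation phase at the end of the event.
        self.evaluations.append({"requester": requester, "worker": worker, "rating": rating})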

The second core element of our platform is that every new worker is assigned a mentor, matched based on skills and profile data. This experienced member helps the new worker get started. To ensure adequate work quality and compensate for the newcomer's lack of reputation, the first 50 tasks of each worker are peer-reviewed by their mentor, who receives a share of the task salary as compensation. In the unlikely case that a requester complains about the results, the mentor's reputation suffers. With this mentoring system, new workers have an easier start, trust can be built, and the quality of new members' work improves. A rough sketch of this settlement logic follows below.
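The following sketch shows how the mentor review and payment split might work. The 50-task threshold comes from the text above; the mentor's share, the reputation penalty, and all function names are hypothetical assumptions:

MENTORED_TASKS = 50     # first 50 tasks are peer-reviewed by the mentor (from the text)
MENTOR_SHARE = 0.10     # hypothetical share of the task salary paid to the mentor

def settle_task(task_salary, tasks_completed, mentor_approved):
    """Split payment between a new worker and their mentor during the mentoring phase.

    Returns (worker_payout, mentor_payout); values are illustrative.
    """
    if tasks_completed < MENTORED_TASKS:
        if not mentor_approved:
            # The mentor sends the work back for revision before any payout.
            return 0.0, 0.0
        mentor_payout = task_salary * MENTOR_SHARE
        return task_salary - mentor_payout, mentor_payout
    # After the mentoring phase the worker keeps the full salary.
    return task_salary, 0.0

def on_requester_complaint(mentor_reputation, penalty=0.05):
    # A requester complaint about mentored work lowers the mentor's reputation
    # (the penalty value is purely illustrative).
    return max(0.0, mentor_reputation - penalty)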

Additionally, requesters have the option to personally invite workers to do their tasks. Such invitations can be based on a worker's skills or on a personal relationship with the worker, making the dynamics of our platform more similar to those of a real-world company. The same goal is served by letting requesters send workers gifts at a special price, in cooperation with a third party, and by the option to rate a worker's skills after he or she submits a task.

However, in order not to give all power to the requesters, submitted work must be reviewed within a time frame that worker and requester agree on beforehand. If the requester fails to do so, the worker is paid automatically. In this way, workers can feel safer, especially when doing tasks for new requesters.
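A minimal sketch of this guaranteed-payment rule, assuming each submission stores the agreed review window (all names below are illustrative):

from datetime import datetime, timedelta

def resolve_submission(submitted_at, review_window_hours, reviewed, now=None):
    """Return 'reviewed' if the requester reviewed in time, 'paid_automatically'
    once the agreed review window has elapsed without a review, and
    'awaiting_review' while the requester can still review."""
    now = now or datetime.utcnow()
    deadline = submitted_at + timedelta(hours=review_window_hours)
    if reviewed:
        return "reviewed"
    if now >= deadline:
        # The requester failed to review in time: the worker is paid automatically.
        return "paid_automatically"
    return "awaiting_review"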

Evaluation

A number of common metrics can be used to measure success: overall average answer quality, average salary in relation to time since registration, number of members with certain achievement badges, and task-related measures (increase in task offers for workers, time until all offered jobs are completed for requesters). We hope to enable trust by adding a level of personal interaction, so that the party on the other side of a task becomes more relatable, encouraging everyone to engage in more task interactions than before.
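For illustration, the salary-over-tenure metric could be computed roughly as follows; the record layout and field names are assumptions:

from datetime import datetime

def average_salary_per_month(payments, registered_at, now=None):
    """Average earnings per month since registration for one worker.

    `payments` is a list of amounts the worker has already received; the
    30-day month approximation is purely illustrative.
    """
    now = now or datetime.utcnow()
    months_active = max((now - registered_at).days / 30.0, 1.0)
    return sum(payments) / months_active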

References

[1] Saxton, Gregory D., Onook Oh, and Rajiv Kishore. "Rules of crowdsourcing: Models, issues, and systems of control." Information Systems Management 30.1 (2013): 2-20.

[2] Mislove, Alan, et al. "Measurement and analysis of online social networks." Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement. ACM, 2007.

[3] Zambonelli, Franco. "Pervasive urban crowdsourcing: Visions and challenges." Pervasive Computing and Communications Workshops (PERCOM Workshops), 2011 IEEE International Conference on. IEEE, 2011.

[4] Muir, Bonnie M. "Trust between humans and machines, and the design of decision aids." International Journal of Man-Machine Studies 27.5 (1987): 527-539.

[5] Hirth et al. "Cost-Optimal Validation Mechanisms and Cheat-Detection for Crowdsourcing Platforms." Proceedings of the Workshop on Future Internet and Next Generation Networks (FINGNet), Seoul, Korea, 2011.