Milestone 7 munichkindl


Title

Make humans, not robots

Abstract

Current crowdsourcing platforms fail to provide a balance of power and trust between their workers and requesters. We present a new crowdsourcing platform focused on creating trust between all members and on distributing power equally. It is built on two core elements: workathons and a peer-based mentoring system. Complementary features include a mutual rating system with achievement badges, promotion of top performers, a chat, tutorials, and a mechanism that enforces payment if a task review is delayed too long. By adding these interaction elements, we make the platform's dynamics more natural and prevent workers from being treated as a mere 'human API'.

Motivation

Our group has identified two main problems in today's crowdsourcing platforms: a lack of trust and an imbalance of power. How can workers and requesters trust that the other party's intentions align with their own interests? How can requesters trust the quality of the results they receive? How can workers trust that they will be paid and respected? How can we create a balance of power between workers and requesters?

Since, in a professional sense, trust is the channel between a worker and a requester [1], we consider it crucial to answer these questions in order to obtain better-quality work on crowdsourcing platforms and to enhance the reputation of such systems. To create trust, workers and requesters need to build the personal relationships that arise naturally in any community where people work together in a physical environment. However, the currently most prominent crowdsourcing platform, Amazon Mechanical Turk, encourages little interaction between requesters and workers.

Related Work

One of the key issues any online platform faces is trust and reliability. Crowdsourcing workers can be considered part of more or less collaborative virtual teams, depending on the type of task they work on. In traditional, office-based work settings, the degree of independence given to virtual workers and the lack of communication between task assigner and worker would be unusual.

On the other hand, to succeed with a crowd-based task, a requester has to take considerable risk by trusting largely unknown people to complete the work well. The platform should help bridge this gap by adding a more human-centered approach to the task [1].

The level of self-management skill needed to succeed in a virtual work environment cannot be expected automatically from every new entrant to the platform. Self-set goals have proven to be sometimes insufficiently motivating for people to finish a task [2], and workers tackling a self-assigned task that is not intrinsically motivating will likely struggle to complete it.

By including a mentoring program in the proposed platform, we draw on prior research in leadership and organizational science. Even in a virtual workplace, intensified leader-member exchange (LMX), including socio-emotional interactions such as mentoring, has been shown to have positive effects on worker commitment, job satisfaction, and performance [3, 4].

Another perspective is to think of the humans in a crowdsourcing environment as 'human devices' [5]. This implies that while humans will eventually finish the task, they require tuning to perform at their best; one such tuning mechanism is the mentorship program we propose. Hirth et al. [6] have indicated that the best approaches to detecting task replies of insufficient quality are majority decisions for simple tasks and peer review for more complex jobs. Having experienced personal support from the beginning, workers familiarize themselves with the platform faster and are more likely to become regular users.

Insight

We propose a new crowdsourcing platform built on two core elements: regular workathons, which are 12- or 24-hour periods in which a group of requesters and pre-selected workers work closely together, and a mentoring program that helps new workers get to know the system and some of the other workers and requesters. Both make each party appear more human to the others, improving interpersonal trust and final results.

For the same reason, we implemented a chat that all requesters and workers can use at any time to contact each other. Among the smaller novelties of our platform is the promotion of reliable members through achievement badges on their profiles, which supports the search for skilled workers by surfacing them to requesters. For example, every worker who has mentored a new platform participant receives the corresponding badge. Besides informing requesters, such badges provide additional extrinsic motivation for workers. Furthermore, we developed tutorials for requesters that explain best practices for designing new tasks in an easily understandable way. The last important feature of our system is that payment is guaranteed if a review is delayed, shifting some of the requesters' power to the workers and further improving trust in our platform.

System

Our main feature, the so-called workathons, will always be organized by a closed group of requesters who invite a defined number of workers beforehand. For 12 or 24 hours, all participants communicate via messages or video chat, or meet in person, to get as much work done as possible. Every workathon ends with an evaluation phase. In this way, requesters and workers get to know each other and each other's skills and reliability.
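
To illustrate, a minimal Python sketch of how a workathon could be represented follows; only the 12- or 24-hour duration and the closing evaluation phase come from our design, while all names (Workathon, invite, close_with_evaluation) are hypothetical.

 from dataclasses import dataclass, field
 
 @dataclass
 class Workathon:
     """A time-boxed collaboration between invited requesters and workers."""
     duration_hours: int                      # 12 or 24, per the platform rules
     requesters: set = field(default_factory=set)
     workers: set = field(default_factory=set)
     ratings: dict = field(default_factory=dict)
 
     def invite(self, worker_id: str) -> None:
         # Workers are pre-selected and invited by the organizing requesters.
         self.workers.add(worker_id)
 
     def close_with_evaluation(self, ratings: dict) -> None:
         # Every workathon ends with a mutual evaluation phase.
         self.ratings.update(ratings)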

The second core element of our platform is that every new worker is assigned a mentor, matched on the basis of skills and profile data. This experienced person helps the new worker get started. To ensure adequate work quality and compensate for the newcomer's lack of reputation, the first 50 tasks of each worker are peer-reviewed by their mentor, who receives a share of the task salary as compensation. In the unlikely case that requesters complain about the results, this negatively affects the mentor's reputation. With this mentoring system, new workers have an easier start, trust can be built, and the quality of new members' work improves.
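
The review routing of this mentoring system could look roughly like the following sketch; the 50-task threshold comes from our design, while the mentor's salary share (MENTOR_SHARE) and all class and function names are illustrative assumptions.

 MENTOR_REVIEW_THRESHOLD = 50   # first 50 tasks are mentor-reviewed
 MENTOR_SHARE = 0.10            # assumed share of the task salary
 
 class Mentor:
     def __init__(self):
         self.reputation = 0
         self.balance = 0.0
 
     def review(self, submission) -> bool:
         return True            # placeholder for the mentor's manual check
 
 class Worker:
     def __init__(self, mentor):
         self.mentor = mentor
         self.completed_tasks = 0
 
 def route_submission(worker, submission, salary, requester_complained=False):
     """Route a new worker's early submissions through mentor peer review."""
     if worker.completed_tasks < MENTOR_REVIEW_THRESHOLD:
         accepted = worker.mentor.review(submission)
         # The mentor receives a share of the task salary as compensation.
         worker.mentor.balance += salary * MENTOR_SHARE
         if requester_complained:
             # Requester complaints reflect back on the mentor's reputation.
             worker.mentor.reputation -= 1
         return accepted
     return True                # past the threshold, the normal review applies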

The chat we implemented can be used by all requesters and workers whenever they need to contact each other. It is more direct and therefore has a higher response rate than email. Additionally, requesters have the option to invite workers personally to do their tasks. Such an invitation can be based on a worker's skills or on a personal relationship with the worker, making the dynamics of our platform more similar to those of a real-world company. The same goal is served by letting requesters send workers gifts at a special price through a third-party cooperation, and by the option to rate a worker's skills after a task is submitted.
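
Skill-based invitations could be driven by a simple ranking such as the sketch below; the skill-set representation and the function name invite_candidates are assumptions for illustration.

 def invite_candidates(workers, required_skills, limit=10):
     """Rank workers by overlap with the task's required skills."""
     required = set(required_skills)
     ranked = sorted(workers,
                     key=lambda w: len(required & set(w["skills"])),
                     reverse=True)
     # Invite the top matches, skipping workers with no relevant skills.
     return [w["id"] for w in ranked[:limit] if required & set(w["skills"])]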

Furthermore, we aim to prevent misunderstandings between requesters and workers before they arise by providing several tutorials that explain how to design different types of tasks well. This makes it easier for requesters to post new tasks and clearer to workers what is expected of them.

Also, in order not to give all power to the requesters, submitted work must be reviewed within a time frame that worker and requester agree upon beforehand. If the requester fails to do so, the worker is paid automatically. In this way, workers can feel safer, especially when doing tasks for new requesters.
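
This payment guarantee amounts to a deadline check at settlement time, as in the following sketch; the function and parameter names are illustrative, and the review window itself is whatever worker and requester agreed upon.

 from datetime import datetime, timedelta
 
 def settle_submission(submitted_at, review_window_hours, reviewed, pay_worker):
     """Release payment automatically if the agreed review window has passed."""
     deadline = submitted_at + timedelta(hours=review_window_hours)
     if reviewed:
         return "reviewed"      # requester reviewed in time; normal payout flow
     if datetime.now() > deadline:
         pay_worker()           # requester missed the window: pay automatically
         return "auto-paid"
     return "pending"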

Evaluation

There are a number of common metrics that can be used to measure success: overall average answer quality, average salary relative to time since registration, the number of members holding certain achievement badges, and task-related measures (increase in offers for workers; time until all offered jobs are completed for requesters). We hope to enable trust by adding a level of personal interaction, so that the party on the other side of a task becomes more relatable, encouraging everyone to engage in more task interactions than before.
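
For concreteness, these metrics could be aggregated from platform records roughly as sketched below; the record fields (quality, total_salary, days_registered, badges) are assumptions about the logging schema.

 def success_metrics(workers, tasks):
     """Aggregate the evaluation metrics named above from platform records."""
     rated = [t["quality"] for t in tasks if t.get("quality") is not None]
     avg_quality = sum(rated) / len(rated) if rated else None
     # Average salary normalized by time since registration, per worker.
     rates = [w["total_salary"] / max(w["days_registered"], 1) for w in workers]
     avg_salary_rate = sum(rates) / len(rates) if rates else None
     badge_holders = sum(1 for w in workers if w.get("badges"))
     return {"avg_quality": avg_quality,
             "avg_salary_rate": avg_salary_rate,
             "badge_holders": badge_holders}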

However, we should not forget that, unlike machines, humans have complex ways of building trust. This must be kept in mind during workathons, as humans generally come to trust behavior that recurs consistently [7]. The same pattern can be used to assess a worker's efficiency by assigning a select number of tasks to be solved consistently within a given time, which would improve relations between worker and requester.

References


[1] Saxton, Gregory D., Onook Oh, and Rajiv Kishore. "Rules of crowdsourcing: Models, issues, and systems of control." Information Systems Management 30.1 (2013): 2-20.

[2] Rawolle, M., Glaser, J. & Kehr, H. M. (2007). Why self-set goals may sometimes be non-motivating. In C. Wankel (Ed.), The Handbook of 21st Century Management (pp. 203-210). Thousand Oaks, CA: Sage

[3] Timothy D. Golden, John F. Veiga, The impact of superior–subordinate relationships on the commitment, job satisfaction, and performance of virtual workers, The Leadership Quarterly, Volume 19, Issue 1, February 2008, Pages 77-88

[4] Ng Siew Fong, Wan Fara Adlina Wan Mansor, Mohamad Hassan Zakaria, Nur Hidayah Mohd Sharif, Norul Alima Nordin, The Roles of Mentors in a Collaborative Virtual Learning Environment (CVLE) Project, Procedia - Social and Behavioral Sciences, Volume 66, 7 December 2012, Pages 302-311

[5] Zambonelli, Franco. "Pervasive urban crowdsourcing: Visions and challenges." Pervasive Computing and Communications Workshops (PERCOM Workshops), 2011 IEEE International Conference on. IEEE, 2011.

[6] Hirth et al. "Cost-Optimal Validation Mechanisms and Cheat-Detection for Crowdsourcing Platforms." In: Proceedings of the Workshop on Future Internet and Next Generation Networks (FINGNet). Seoul, Korea, 2011.

[7] Muir, Bonnie M. "Trust between humans and machines, and the design of decision aids." International Journal of Man-Machine Studies 27.5 (1987): 527-539.