
A Feed-forward Expectation Visualization System

Sensemaking surfaces at moments of uncertainty, which arise when workers and requesters seek information about the other party to decide whether that party can help fulfill their motivations. These motivations can be both intrinsic (learning, curiosity, mastery) and extrinsic (badges and points, money, competition), and are often interdependent. Many of the physical cues that individuals rely on to judge whether another person can meet those motivations disappear in online environments. Closing the informational gap in making sense of the other person therefore becomes more complex and time consuming. Crowdsourcing platforms have been searching for effective aids to other-sensemaking for task partner selection in the form of trust systems, reputation systems, and qualification systems.

At the heart of crowdsourcing systems are the roles that users, the requester and the worker, play. A requester provides the demand, which manifests as opportunity in the form of tasks or higher-level projects, while workers’ services supply the requester’s demand.

Researchers indicate that requester motivations to use these platforms include influencing better products and services, a sense of efficiency, taking advantage of workers’ skill variety, analyzability, and ultimately lower cost and near-immediate response.

Researchers have identified that workers on social platforms use these services to gain recognition, access an open and constructive atmosphere, receive payment and rewards, compete against other workers, achieve a sense of efficiency, access freelance opportunities, improve skills and gather knowledge, hold task autonomy, work on enjoyment- and community-based tasks, and receive direct feedback from the requester. They are also motivated to contribute to tasks created by friends.

Studies have shown how platforms can be misused and lead to negative experiences. Even when requester and worker goals align, intentional or unintentional acts may hinder workers’ ability to perform a task and result in negative experiences for both. For example, researchers have identified layout design, task instructions, visual design, coordination difficulties, and task decomposition as areas where unintentional acts hinder worker performance. In response, requesters have used such findings to improve their task design, which leads to a better understanding of the crowdsourcing process and better prediction of completion times when crowdsourcing various tasks. A service such as CrowdForge has reported that creative workarounds can have a positive impact on worker products.

Such approaches, however, are not intended to address intentional acts of abuse by requesters. Requesters sometimes abuse workers by drawing them into pyramid recruiting schemes, or by exploiting the qualifications of workers with strong social network profiles to generate content for their own services. In these latter cases workers are intentionally paid below the known value of their work. In another, more market-savvy strategy, requesters were found to drive down worker compensation for a particular task after seeing its popularity increase. Additionally, a requester might wait until work is complete to perform a mass rejection, without giving a reason, of all the workers who completed the requester’s task. Such acts become sources of caution for experienced workers.

Both intentional and unintentional acts result in unpleasant experiences and have pushed workers and requesters alike to develop systems against recurrence. To date, efforts to improve sensemaking about the user behind the role have focused on trust systems, reputation systems, and quality systems.

Trust systems capture the subjective view of another’s trustworthiness and tend to be difficult to describe in practice. Jøsang et al. (2007) identified that trust seems identifiable through trust transitivity, whereby one party incorporates the reliable information and decision quality of another into its own decision making. Trust appears as provisional trust, access trust, delegation trust, identity trust, and contextual trust (p. 11). Connecting these concepts is the overall goal that motivated action in the first place. In practice, trust systems make use of “assurances, references, third party certifications, and guarantees.”
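The transitivity idea above can be made concrete with a small sketch. This is an illustrative assumption, not Jøsang’s full subjective-logic calculus: a common simplification discounts trust multiplicatively along a referral chain, so derived trust shrinks with every hop.

```python
# Illustrative sketch only: the paper does not specify an algorithm.
# Multiplicative discounting along a referral chain is one simple
# (assumed) approximation of trust transitivity.

def derived_trust(chain):
    """Discount trust along a referral chain of scores in [0, 1].

    chain: trust scores, e.g. A trusts B = 0.9, B trusts C = 0.8.
    The trust A derives in C shrinks with every additional hop.
    """
    trust = 1.0
    for score in chain:
        trust *= score
    return trust

# A -> B -> C: A's derived trust in C is 0.9 * 0.8 = 0.72
print(derived_trust([0.9, 0.8]))
```

Under this simplification, long referral chains quickly dilute trust, which matches the intuition that second- and third-hand recommendations carry less weight.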

Reputation systems provide a score from which a community understands an individual user. This score separates low-quality market members from high-quality ones and enables pricing differentiation based on historical interactions. Such systems require long-term service users, up-to-date distribution of feedback on interactions, and reliance on that feedback for future decision making. However, these systems fail when users do not provide feedback, when feedback shows positivity bias, and when users manipulate the system.
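A minimal sketch of how such a score might be computed, assuming a simple beta-reputation model (one well-known family of reputation functions; the feedback counts and requester names below are invented for the example, not drawn from any platform):

```python
# Illustrative sketch only: a minimal beta-reputation score of the kind
# reputation systems use to separate low- from high-quality members.
# The (positive, negative) feedback counts are hypothetical.

def reputation_score(positive, negative):
    """Expected reliability under a Beta(positive+1, negative+1) prior."""
    return (positive + 1) / (positive + negative + 2)

history = {"requester_a": (48, 2), "requester_b": (3, 9)}
for name, (pos, neg) in history.items():
    print(name, round(reputation_score(pos, neg), 3))
```

A requester with 48 positive and 2 negative reviews scores about 0.94, while one with 3 positive and 9 negative scores about 0.29, giving the community the separation and pricing differentiation the text describes. The model also shows the failure modes above: with no feedback at all, every user sits at the uninformative prior of 0.5.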

Quality systems attempt to solve these problems with more empirical, statistically grounded methods. Khazankin et al. (2012) explore quality of service from a requester’s perspective, whereby tasks are assigned to workers according to their reported availability and skills. Further development of quality in crowdsourcing has produced a taxonomy that identifies workers’ reputation and expertise as well as elements of task design. Missing from quality systems research in crowdsourcing, however, are issues tied to the voice of the customer, or in this case the voice of the worker.

Voice of the customer is a concept originating in Total Quality Management as applied within a quality function deployment framework. The concept holds that the voice of the customer is explicitly expressed, as it is in this paper, through captured statements. Turkopticon, where workers regularly write descriptions of their experiences with individual requesters’ tasks, provides a forum from which crowd platforms can identify what workers are most concerned about. For example, on this site the voice of the worker regularly records how long a task took to complete, what the requester said about the task, how prompt the payment was, and details of that sort. The voice of the worker becomes strongest when work is rejected or another violation of trust occurs. Such violations might be expressed as: “Rejected an already underpaid task on the basis of something not covered in the instructions.” (Turkopticon, 2006)
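The availability-and-skill scheduling mentioned above can be sketched in a few lines. This is an assumed, greedy simplification in the spirit of the quality-of-service scheduling Khazankin et al. (2012) describe, not their actual prediction model; the worker and task fields are invented for the example.

```python
# Illustrative sketch only: greedy task assignment by reported skills and
# availability. Worker and task fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    skills: set
    available: bool = True

def assign(tasks, workers):
    """Map each (task, required_skill) pair to an available, skilled worker."""
    assignment = {}
    for task, skill in tasks:
        for w in workers:
            if w.available and skill in w.skills:
                assignment[task] = w.name
                w.available = False  # a worker takes one task at a time
                break
    return assignment

pool = [Worker("ann", {"ocr", "label"}), Worker("bo", {"translate"})]
print(assign([("t1", "translate"), ("t2", "ocr")], pool))
# {'t1': 'bo', 't2': 'ann'}
```

A production scheduler would also weigh predicted completion time and historical quality, which is precisely where the taxonomy of worker reputation and expertise mentioned above would plug in.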

Instructions are one approach through which requesters implement a quality control system. Other such methods include style guides, contracts, qualification tests inside Mechanical Turk, and qualification tests outside it. When such violations, as well as their positive counterparts, are publicly shared, the voice of the worker translates into word of mouth, which has itself been a subject of study for researchers.

On Turkopticon, this word of mouth has become a direct channel of engagement between requester and worker: complaints by worker Jon Brelig, for example, drew an intervention from requester InfoScout, Inc. (Brelig, 2016).


This paper seeks to introduce a feed-forward expectation visualization system based upon captured voice-of-the-worker comments from Turkopticon. The system creates a mechanism that:

  • Makes explicit what requesters personally guarantee workers before the start of a task.
  • Creates a framework for requesters to know the “Voice of the Worker.”
  • Prevents requesters from becoming inundated with potentially hundreds of demands to resolve an otherwise known and preventable issue.
  • Establishes a quality check for workers before dealing with a requester that avoids subjective and independent quality systems.
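One possible data shape for the feed-forward mechanism described above is sketched below. All field names (`pay_per_task_usd`, `payment_within_days`, `rejection_policy`) and the requester name are hypothetical placeholders, not part of the proposed system’s specification.

```python
# Illustrative sketch only: requester guarantees published up front, and a
# worker-side check run before accepting a task. Names are hypothetical.

GUARANTEES = {
    "acme_labs": {
        "pay_per_task_usd": 0.50,
        "payment_within_days": 2,
        "rejection_policy": "only for reasons stated in the instructions",
    }
}

def expectation_check(requester, min_pay, max_wait_days):
    """Screen a requester against their stated guarantees before task start."""
    g = GUARANTEES.get(requester)
    if g is None:
        return "no guarantees published"
    ok = (g["pay_per_task_usd"] >= min_pay
          and g["payment_within_days"] <= max_wait_days)
    return "meets expectations" if ok else "below expectations"

print(expectation_check("acme_labs", min_pay=0.25, max_wait_days=7))
# meets expectations
```

Because the guarantees are explicit and machine-readable, the same record can feed both the worker-facing quality check and an aggregate view that spares requesters from answering the same preventable question hundreds of times.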

Previous work on visualization systems has demonstrated that they can shape and confirm understanding of communities and individuals, and can affect the length of participation within a crowdsourcing community.


  • Allahbakhsh, M., Benatallah, B., Ignjatovic, A., Motahari-Nezhad, H. R., Bertino, E., & Dustdar, S. (2013). Quality control in crowdsourcing systems: Issues and directions. IEEE Internet Computing, (2), 76-81.
  • Brelig, J. (2016).
  • Chen, J. J., Menezes, N. J., Bradley, A. D., & North, T. A. (2011). Opportunities for crowdsourcing research on Amazon Mechanical Turk. Interfaces, 5(3).
  • Gilbert, E., & Karahalios, K. (2009). Using social visualization to motivate social production. Multimedia, IEEE Transactions on, 11(3), 413-421.
  • Griffin, A., & Hauser, J. R. (1993). The voice of the customer. Marketing science, 12(1), 1-27.
  • Heo, M., & Toomey, N. (2015). Motivating continued knowledge sharing in crowdsourcing: The impact of different types of visual feedback. Online Information Review, 39(6), 795-811.
  • Jøsang, A., Ismail, R., & Boyd, C. (2007). A survey of trust and reputation systems for online service provision. Decision support systems, 43(2), 618-644.
  • Khazankin, R., Schall, D., & Dustdar, S. (2012, June). Predicting qos in scheduled crowdsourcing. In Advanced Information Systems Engineering (pp. 460-472). Springer Berlin Heidelberg.
  • Kittur, A., Smus, B., Khamkar, S., & Kraut, R. E. (2011, October). Crowdforge: Crowdsourcing complex work. In Proceedings of the 24th annual ACM symposium on User interface software and technology (pp. 43-52). ACM.
  • Kulkarni, A., Can, M., & Hartmann, B. (2012, February). Collaboratively crowdsourcing workflows with turkomatic. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work (pp. 1003-1012). ACM.
  • Amintoosi, H., Allahbakhsh, M., & Kanhere, S. (2013, October). Trust assessment in social participatory networks. In 3rd International Conference on Computer and Knowledge Engineering, ICCKE 2013.
  • Irani, L. C., & Silberman, M. (2013). Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM.
  • Ma, L., Sun, B., & Kekre, S. (2015). The squeaky wheel gets the grease: An empirical analysis of customer voice and firm intervention on Twitter. Marketing Science, 34(5), 627-645.
  • Martin, D., Hanrahan, B. V., O’Neil, J., Gupta, N. (2014, February). Being A Turker. In Performing Crowd Work, CSCW 2014.
  • Naumer, C., Fisher, K., & Dervin, B. (2008, April). Sense-Making: a methodological perspective. In Sensemaking Workshop, CHI'08.

  • NPR, 2015.

  • Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68-78.
  • Resnick, P., Kuwabara, K., Zeckhauser, R., & Friedman, E. (2000). Reputation systems. Communications of the ACM, 43(12), 45-48.
  • Shneiderman, B. (2000). Designing trust into online experiences. Communications of the ACM, 43(12), 57-59.
  • Turkopticon. (2006).
  • Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., & Zhao, B. Y. (2012, April). Serf and turf: crowdturfing for fun and profit. In Proceedings of the 21st international conference on World Wide Web (pp. 679-688). ACM.

Contributors to this page, in alphabetical order:

@anotherhuman @mahsa @reneelin @vlado