Winter Milestone 4 @ System Proposal - A feed-forward visualization system

From crowdresearch


This proposal introduces the idea of a feed-forward expectation system. Using factors identified as important to Turkers, we establish what the "voice of the worker" is. That input becomes a series of questions posed to requesters using the exact words of Turkers. Once a requester responds, workers have immediate access to those responses in a display located next to the requester's name in the task stream.

Data capturing the "Voice of the Worker"

Voice of the Worker.png

A quick sampling (n=16) was collected from Mechanical Turk to identify one factor important to Turkers. It was found that the time a requester takes to pay for, or accept, a task is a key issue. When describing their expectations of payment, Turkers expressed time as the major dimension of judgment tied to satisfaction. Notably, Turkers in this small sample did not express weeks as a measure of task acceptance; they expressed minutes, hours, days, a month, and never (also used here to mean infinity). This gap was not explored. It was also found that Amazon policy establishes the one-month expectation: all tasks are expected to be accepted within 30 days of completion.

Please note that exact timings are not yet established in this proposal, because we are still trying to gain an initial understanding of what the voice of the customer means. "Minutes," for example, might mean 5 minutes to one Turker yet 37 minutes to another. Time is currently a subjective quality to be explored more deeply if this system is further elaborated and developed.
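If exact timings were later calibrated, the categorical labels could be mapped to hour bounds. The bounds sketched below are placeholder assumptions, not calibrated values (the proposal deliberately leaves them undefined), apart from the 30-day window taken from Amazon's policy:

```javascript
// Placeholder upper bounds (in hours) for each category; values are
// illustrative assumptions, except "Month", which reflects Amazon's
// 30-day auto-acceptance window.
var timeframeBoundsHours = {
  Minutes: { max: 1 },
  Hours:   { max: 24 },
  Days:    { max: 168 },      // up to one week (the "weeks" gap is unexplored)
  Month:   { max: 720 },      // 30 days
  Never:   { max: Infinity }
};

// Classify an observed payment delay (in hours) into a category.
function classifyDelay(hours) {
  var order = ['Minutes', 'Hours', 'Days', 'Month', 'Never'];
  for (var i = 0; i < order.length; i++) {
    if (hours <= timeframeBoundsHours[order[i]].max) {
      return order[i];
    }
  }
  return 'Never';
}
```

Any such mapping would need to be validated against worker expectations before use, given the subjectivity noted above.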

Visualizing how feed-forward might work

A Feed-forward Expectation Visualization System.png

What do requesters see?

What requestor sees.jpg

A Start: The Javascript (AngularJS) in the Machine

var responseTimeFramesPay = [
  { timeframe: 'Minutes', state: 1, hex: '#00FF00', description: 'The requester can respond in a couple of minutes', canDisplay: true },
  { timeframe: 'Hours',   state: 2, hex: '#32FF32', description: 'The requester can respond in a few hours',         canDisplay: true },
  { timeframe: 'Days',    state: 3, hex: '#66FF66', description: 'The requester can respond in a couple of days',    canDisplay: true },
  { timeframe: 'Month',   state: 4, hex: '#99FF99', description: 'The requester can respond in a month',             canDisplay: true },
  { timeframe: 'Never',   state: 5, hex: '#E5FFE5', description: 'The requester is unresponsive',                    canDisplay: true }
];
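As a sketch of how a worker-facing view might consume this data, the hypothetical helper below (not part of the proposal's code; the function name is an assumption) looks up the entry for a requester's recorded state, so the UI can color the square and show the description on mouse-over:

```javascript
// Abbreviated copy of the timeframe table, kept small so this
// example is self-contained (three of the five states shown).
var responseTimeFramesPay = [
  { timeframe: 'Minutes', state: 1, hex: '#00FF00', canDisplay: true },
  { timeframe: 'Hours',   state: 2, hex: '#32FF32', canDisplay: true },
  { timeframe: 'Never',   state: 5, hex: '#E5FFE5', canDisplay: true }
];

// Hypothetical helper: return the timeframe entry for a given state,
// or null when the state is unknown or flagged as not displayable.
function getTimeframeByState(frames, state) {
  for (var i = 0; i < frames.length; i++) {
    if (frames[i].state === state && frames[i].canDisplay) {
      return frames[i];
    }
  }
  return null;
}
```

The `canDisplay` flag in the original data suggests entries can be hidden per requester, so the helper treats a hidden or unknown state as "nothing to show."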

What might the worker see?

Worker sees 1.jpg

Upon mousing over the little green square...

Worker sees 2.jpg


H1: A feed-forward expectation system will decrease worker complaints related to acceptance/response time to pay, compared to no system.

H2: A feed-forward expectation system will encourage requesters to pay for or accept tasks faster than they would without the system.

The control group will not receive the system information. The experimental groups will receive the system.
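One way to realize the group split described above is a simple random assignment over worker IDs (the function name, the 50/50 split, and the use of `Math.random()` are illustrative assumptions; a real study would use a seeded, auditable method):

```javascript
// Randomly split worker IDs into a control group (no system
// information shown) and an experimental group (system shown).
function assignGroups(workerIds) {
  var groups = { control: [], experimental: [] };
  workerIds.forEach(function (id) {
    if (Math.random() < 0.5) {
      groups.control.push(id);
    } else {
      groups.experimental.push(id);
    }
  });
  return groups;
}
```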

More Details: A Feed-forward Expectation Visualization System


Sensemaking can be identified at moments of uncertainty (p. 3, Naumer, Fisher & Dervin, 2008), such as the moments when workers and requesters seek information about the potential other (Irani & Silberman, 2013) and whether the other can help them fulfill their motivations. These motivations can be both intrinsic (learning, curiosity, mastery) and extrinsic (badges and points, money, competition), to varying degrees (Ryan & Deci, 2000). Many of the physical dynamics that individuals rely on to decide whether another person can meet those motivations disappear in online environments. Thus, the informational gaps in making sense of the other person are more complex and time consuming to fill. Crowdsourcing platforms have been searching for effective aids to other-sensemaking for task partner selection, in the forms of trust systems, reputation systems, and qualification systems.

At the heart of crowdsourcing systems are the roles that users play--the requester and the worker. A requester provides the demand, which manifests as opportunity in the form of tasks or higher-level projects, while workers supply the services that meet that demand.

Researchers indicate that requester motivations to use these platforms include influencing better products/services and a sense of efficiency (Antikaninen, et al., 2010); taking advantage of workers' skill variety and analyzability (Zheng, et al., 2010); and, ultimately, lower cost and almost immediate response (Smith, et al., 2013).

Researchers have identified that workers on social platforms use these services to gain recognition (Zheng, et al., 2011); access an open and constructive atmosphere and attain a sense of efficiency (Antikaninen, et al., 2010); receive payment and rewards, compete, and work efficiently (Muhdi & Boutellier, 2011); pursue freelance opportunities, improve skills, and gather knowledge (Brabham, 2010); and for social contact, task autonomy, enjoyment, community, direct feedback from the job, and pastime (Kaufmann, et al., 2011), as well as to contribute to tasks created by friends (Amintoosi, et al., 2013).

Researchers have also identified how platforms might be misused, leading to negative experiences. As the goals of requesters and workers interact, intentional or unintentional acts may hinder workers' ability to perform a task and result in negative experiences for both (Chen, et al., 2011; Kittur, et al., 2011; Kulkarni, et al., 2012; Wang, et al., 2012). For example, researchers have identified layout design, task instructions, and visual design (Chen, et al., 2011; Kulkarni, et al., 2012), as well as coordination difficulties and task decomposition (Kittur, et al., 2011), as areas where unintentional acts hinder worker performance. However, requesters have used such findings to improve their task design (Kittur, et al., 2011; Martin, et al., 2014; XXX). This leads to a better understanding of the crowdsourcing process and better prediction of completion times when crowdsourcing various tasks. A system such as CrowdForge has shown that creative workarounds can have a positive impact on worker products (Kittur, et al., 2011; XXX).

Such approaches, however, are not intended to address intentional acts of abuse by requesters. Wang, et al. (2012) identified that requesters abuse workers by drawing them into pyramid recruiting schemes, cloning profiles of attractive members on dating sites, and exploiting the specific qualifications of workers with powerful social network profiles to generate content for their own services. In these latter cases, workers are intentionally paid below the known value of their work. Another, more advanced market-based strategy requesters were found to apply is driving down worker compensation for a particular task after seeing its popularity increase (NPR, 2015).

Additionally, a requester might wait until the end of the work to perform a mass rejection, without giving a reason, of all the workers who completed the requester's task (Martin, et al., 2014). Such acts become sources of caution for experienced workers.

Both intentional and unintentional acts result in unpleasant experiences and have forced workers and requesters alike to develop systems against recurrence. To date, efforts to improve sensemaking about the user behind the role have focused on trust systems, reputation systems, and quality systems.

Trust systems detail the subjective view of another's trustworthiness and tend to be difficult to describe in practice (p. 7, Jøsang, Ismail, & Boyd, 2007). Jøsang (2007) identified that trust seems identifiable through trust transitivity, a belief by which one party incorporates the reliable information and decision quality of another into its own decision making. These appear as provisional trust, access trust, delegation trust, identity trust, and contextual trust (p. 11). Connecting these concepts is the overall goal that motivated action in the first place. In practice, trust systems make use of "assurances, references, third party certifications, and guarantees" (Shneiderman, 2000).

Reputation systems provide a score from which a community understands an individual user (p. 7, Jøsang, Ismail, & Boyd, 2007). This score separates low-quality market members from high-quality ones and enables price differentiation based upon historical interactions. Such systems require long-term service users, current distribution of interaction feedback, and reliance on feedback for future decision making (Resnick, et al., 2000). However, these systems fail when users do not provide feedback, when feedback exhibits positivity bias, or when users manipulate the scores.

Quality systems attempt to solve these problems through more empirical, statistical methodologies. Khazankin, et al. (2012) explore quality of service from a requester's perspective, whereby tasks are assigned to workers by their reported availability and skills. Further development of quality in crowdsourcing has produced a taxonomy that identifies workers' reputation and expertise, as well as elements of task design, as key factors (Allahbakhsh, 2013). However, missing from quality-systems research in crowdsourcing are issues tied to the voice of the customer, or in this case the voice of the worker.

Voice of the customer is a concept originating in Total Quality Management, applied within a quality function deployment framework (Griffin & Hauser, 1993). The concept holds that the voice of the customer is explicitly expressed, as it is in this paper, through captured statements. Turkopticon, where workers regularly write descriptions of their experiences with individual tasks and requesters, provides a forum from which crowd platforms can identify the issues workers are most concerned about. For example, on that site the voice of the worker regularly notes how long it took to complete a task, what the requester said about the task, how prompt the payment was, and details of that sort. The voice of the worker becomes strongest when work is rejected or another violation of trust occurs. Such a violation might be expressed as: "Rejected an already underpaid task on the basis of something not covered in the instructions." (Turkopticon, 2006)

Instructions are one approach by which requesters implement a quality control system. Other methods include style guides, contracts, qualification tests inside Turk, and qualification tests outside Turk. When such violations, as well as their positive counterparts, are publicly shared, the voice of the worker translates into word of mouth, which has been a source of study for researchers (p. 628, Ma, et al., 2015).

On Turkopticon, this word of mouth has become a direct source of engagement between requester and worker; for example, complaints by worker Jon Brelig drew an intervention from requester InfoScout, Inc. (Brelig, 2016).

Turker-Requestor Exchange Pay Response Time.png Turker-Requestor Exchange Pay Response Time2.png

The full exchange can be accessed by clicking on the Forum discussions hyperlink.

This paper seeks to introduce a feed-forward expectation visualization system based upon voice-of-the-worker comments captured from Turkopticon. The system creates a mechanism that:

  1. makes explicit what requesters personally guarantee workers before the start of a task;
  2. creates a framework for requesters to know the "Voice of the Worker";
  3. prevents requesters from becoming inundated with potentially hundreds of demands to resolve an otherwise known and preventable issue;
  4. and establishes a quality check for workers before dealing with a requester that avoids subjective and independent quality systems.

Previous work in visualization systems has demonstrated that they can shape and confirm understanding of communities and individuals (Gilbert & Karahalios, 2009) and affect the length of participation within a crowdsourcing community (Heo & Toomey, 2015).



  • Allahbakhsh, M., Benatallah, B., Ignjatovic, A., Motahari-Nezhad, H. R., Bertino, E., & Dustdar, S. (2013). Quality control in crowdsourcing systems: Issues and directions. IEEE Internet Computing, (2), 76-81.
  • Brelig, J. 2016.
  • Chen, J. J., Menezes, N. J., Bradley, A. D., & North, T. A. (2011). Opportunities for crowdsourcing research on Amazon Mechanical Turk. Interfaces, 5(3).
  • Gilbert, E., & Karahalios, K. (2009). Using social visualization to motivate social production. Multimedia, IEEE Transactions on, 11(3), 413-421.
  • Griffin, A., & Hauser, J. R. (1993). The voice of the customer. Marketing science, 12(1), 1-27.
  • Heo, M., & Toomey, N. (2015). Motivating continued knowledge sharing in crowdsourcing: The impact of different types of visual feedback. Online Information Review, 39(6), 795-811.
  • Jøsang, A., Ismail, R., & Boyd, C. (2007). A survey of trust and reputation systems for online service provision. Decision support systems, 43(2), 618-644.
  • Khazankin, R., Schall, D., & Dustdar, S. (2012, June). Predicting QoS in scheduled crowdsourcing. In Advanced Information Systems Engineering (pp. 460-472). Springer Berlin Heidelberg.
  • Kittur, A., Smus, B., Khamkar, S., & Kraut, R. E. (2011, October). Crowdforge: Crowdsourcing complex work. In Proceedings of the 24th annual ACM symposium on User interface software and technology (pp. 43-52). ACM.
  • Kulkarni, A., Can, M., & Hartmann, B. (2012, February). Collaboratively crowdsourcing workflows with turkomatic. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work (pp. 1003-1012). ACM.
  • Amintoosi, H., Allahbakhsh, M., & Kanhere, S. (2013, October). Trust assessment in social participatory networks. In 3rd International Conference on Computer and Knowledge Engineering (ICCKE 2013).
  • Irani, L. C., & Silberman, M. (2013). Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM.
  • Ma, L., Sun, B., & Kekre, S. (2015). The squeaky wheel gets the grease--An empirical analysis of customer voice and firm intervention on Twitter. Marketing Science, 34(5), 627-645.
  • Martin, D., Hanrahan, B. V., O'Neill, J., & Gupta, N. (2014, February). Being a Turker. In Proceedings of CSCW 2014. ACM.
  • Naumer, C., Fisher, K., & Dervin, B. (2008, April). Sense-Making: a methodological perspective. In Sensemaking Workshop, CHI'08.
  • NPR, 2015
  • Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being.
  • Resnick, P., Kuwabara, K., Zeckhauser, R., & Friedman, E. (2000). Reputation systems. Communications of the ACM, 43(12), 45-48.
  • Shneiderman, B. (2000). Designing trust into online experiences. Communications of the ACM, 43(12), 57-59.
  • Turkopticon, 2006
  • Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., & Zhao, B. Y. (2012, April). Serf and turf: crowdturfing for fun and profit. In Proceedings of the 21st international conference on World Wide Web (pp. 679-688). ACM.

Contributors to this page, in alphabetical order:

@anotherhuman @mahsa @reneelin @vlado