Crowdsourcing Success

A small literature review examining previous evaluation metrics for crowdsourcing platforms, as well as exploring what is expected of successful crowdsourcing communities.


Good Lit Review

Evaluation on crowdsourcing research: Current status and future direction.

Zhao, Y., & Zhu, Q. (2014). Evaluation on crowdsourcing research: Current status and future direction. Information Systems Frontiers, 16(3), 417-434.

User Motivations

Users' motivation to participate in online crowdsourcing platforms

Hossain, M. (2012, May). Users' motivation to participate in online crowdsourcing platforms. In 2012 International Conference on Innovation Management and Technology Research (ICIMTR) (pp. 310-315). IEEE.

This paper provides an outline of participant behavior in crowdsourcing. It gives us a list of goals that users wish to achieve; if we can narrow this down to the goals that bring users to Daemo, we can justify how Daemo increases satisfaction of those goals.

Systems Evaluation

Collaboratively crowdsourcing workflows with turkomatic

Kulkarni, A., Can, M., & Hartmann, B. (2012, February). Collaboratively crowdsourcing workflows with turkomatic. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work (pp. 1003-1012). ACM.

This paper serves mainly as an example of how we can use an A/B-testing-style framework to test the effect of a feature being present versus absent on the platform.
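As a rough illustration of that approach, here is a minimal Python sketch of such an experiment: users are deterministically bucketed into a feature-on or feature-off condition, and a success metric is compared between the two groups. The assignment scheme, the completion-rate metric, and all data below are assumptions for illustration, not taken from the paper.

import hashlib
from statistics import mean

def assign_condition(user_id: str) -> str:
    """Deterministically bucket a user into one of the two experimental arms."""
    digest = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return "feature_on" if digest % 2 == 0 else "feature_off"

def mean_lift(results: dict) -> float:
    """Difference in mean task-completion rate: feature on minus feature off."""
    return mean(results["feature_on"]) - mean(results["feature_off"])

# Invented completion rates, one value per project in each condition.
observed = {
    "feature_on":  [0.82, 0.91, 0.77, 0.88],
    "feature_off": [0.70, 0.65, 0.74, 0.69],
}

print(assign_condition("worker-1187"))  # same user always lands in the same arm
print(f"Mean lift from the feature: {mean_lift(observed):+.2f}")

A real evaluation would add a significance test (e.g. a two-sample t-test) before drawing conclusions from the lift.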

Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms

Vakharia, D., & Lease, M. (2015). Beyond Mechanical Turk: An analysis of paid crowd work platforms. Newport Beach, California, USA.

This paper outlines the failings of AMT, listing aspects such as inadequate quality control, inadequate management tools, missing support for fraud prevention, and a lack of automated tools. This already gives us aspects to focus on for an evaluation: if our tools improve these aspects, that is a first step in the right direction.

Furthermore, this paper provides criteria for platform assessments:

- Distinguishing Features: What platform aspects particularly merit attention?
- Whose Crowd?: Does the platform maintain its own workforce, does it rely on other vendor “channels” to provide its workers, or is some hybrid combination of both labor sources adopted?
- Demographics & Worker Identities: What demographic information is provided about the workforce?
- Qualifications & Reputation: Is some form of reputation tracking and/or skills listing associated with individual workers so that Requesters may better recruit, assess, and/or manage workers?
- Task Assignments & Recommendations: Is support provided for routing tasks or examples to the most appropriate workers?
- Hierarchy & Collaboration: What support allows effective organization and coordination of workers, e.g. for traditional, hierarchical management structures (Kochhar et al., 2010; Nellapati et al., 2013), or into teams for collaborative projects (Anagnostopoulos, Becchetti, Castillo, Gionis, & Leonardi, 2012)? If peer review or assessment is utilized (Horton, 2010), how is it implemented?
- Incentive Mechanisms: What incentive mechanisms are offered to promote worker participation (recruitment and retention) and effective work practices?
- Quality Assurance & Control: What quality assurance (QA) support is provided to ensure quality task design?
- Self-service, Enterprise, and API Offerings: Enterprise “white glove” offerings are expected to provide high quality and may account for 50-90% of platform revenue today.
- Specialized & Complex Task Support: Are one or more vertical or horizontal niches of specialization offered as a particular strength, e.g. real-time transcription?
- Automated Task Algorithms: What, if any, automated algorithms are provided to complement/supplement human workers (Hu, Bederson, Resnik, & Kronrod, 2011)?
- Ethics & Sustainability: How is an ethical and sustainable environment promoted for crowd work (Fort, Adda, & Cohen, 2011; Irani & Silberman, 2013)?
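These criteria could also double as an evaluation rubric for Daemo itself. Below is a minimal Python sketch that encodes the criteria so platforms can be scored and compared side by side; the 0-2 scoring scale and the example scores are assumptions for illustration, not part of the paper.

from dataclasses import dataclass, field

# Criterion names follow the Vakharia & Lease list above.
CRITERIA = [
    "Distinguishing Features",
    "Whose Crowd?",
    "Demographics & Worker Identities",
    "Qualifications & Reputation",
    "Task Assignments & Recommendations",
    "Hierarchy & Collaboration",
    "Incentive Mechanisms",
    "Quality Assurance & Control",
    "Self-service, Enterprise, and API Offerings",
    "Specialized & Complex Task Support",
    "Automated Task Algorithms",
    "Ethics & Sustainability",
]

@dataclass
class PlatformAssessment:
    """Scores one platform per criterion: 0 = absent, 1 = partial, 2 = strong."""
    name: str
    scores: dict = field(default_factory=dict)

    def total(self) -> int:
        # Unscored criteria default to 0 so partial assessments still compare.
        return sum(self.scores.get(c, 0) for c in CRITERIA)

# Invented example scores, for illustration only.
amt = PlatformAssessment("AMT", {"Quality Assurance & Control": 0,
                                 "Automated Task Algorithms": 1})
daemo = PlatformAssessment("Daemo", {"Quality Assurance & Control": 2,
                                     "Ethics & Sustainability": 2})
print(f"{amt.name}: {amt.total()}  {daemo.name}: {daemo.total()}")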

Community Literature

Exploring online social behavior in crowdsourcing communities: A relationship management perspective.

Shen, X. L., Lee, M. K., & Cheung, C. M. (2014). Exploring online social behavior in crowdsourcing communities: A relationship management perspective. Computers in Human Behavior, 40, 144-151.

This paper uses a survey method to test its hypotheses about crowdsourcing communities. This method is an option we can use if we wish to go the survey route.
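If we do take the survey route, the analysis can start as simply as correlating two survey constructs. A minimal sketch, assuming invented Likert-scale (1-5) responses and Python 3.10+ (for statistics.correlation); the constructs and data are illustrative, not taken from the paper:

from statistics import correlation  # requires Python 3.10+

# One value per respondent; both constructs and all data are invented.
relationship_quality = [4, 5, 3, 4, 2, 5, 3, 4]  # perceived relationship management
participation_intent = [4, 5, 2, 4, 2, 5, 3, 5]  # intention to keep contributing

# A positive Pearson r would be consistent with the hypothesis that better
# relationship management accompanies stronger participation.
r = correlation(relationship_quality, participation_intent)
print(f"Pearson r = {r:.2f}")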

Behaviors contributing to crowdsourcing success

All of these papers are good references for the behaviors we want to encourage if we are going to work on creating guilds within Daemo. We need to build an environment that promotes good crowdsourcing behavior within a community.

Leveraging crowdsourcing: activation-supporting components for IT-based ideas competition

Leimeister, J. M., Huber, M., Bretschneider, U., & Krcmar, H. (2009). Leveraging crowdsourcing: Activation-supporting components for IT-based ideas competition. Journal of Management Information Systems, 26(1), 197-224.

Task Division for Team Success in Crowdsourcing Contests: Resource Allocation and Alignment Effects

Dissanayake, I., Zhang, J., & Gu, B. (2015). Task Division for Team Success in Crowdsourcing Contests: Resource Allocation and Alignment Effects. Journal of Management Information Systems, 32(2), 8-39.

Determinants of success in crowdsourcing software development.

Tajedin, H., & Nevo, D. (2013, May). Determinants of success in crowdsourcing software development. In Proceedings of the 2013 annual conference on Computers and people research (pp. 173-178). ACM.