Qualitative Analysis RQDA

From crowdresearch
Revision as of 00:45, 23 March 2016 by Aarongilbee (Talk | contribs) (Method)



This page presents findings from worker responses collected from TurkOpticon. These responses suggest a payment expectation for survey tasks and show how workers and requesters relate within and outside of Mechanical Turk.


TurkOpticon Reviews

Data Sampling Procedure

Data were collected through convenience sampling. The researcher collected two full pages of responses that were present at the time of the visit. TurkOpticon complicates data collection because reviews stream onto the site in real time and pages update as new data arrive. To handle this challenge, the researcher kept the page open without refreshing in order to maintain a consistent data set. The data are effectively randomly presented to the researcher, as TurkOpticon workers from all over the world enter their experiences at times of their choosing.


RQDA is the R Qualitative Data Analysis package. The package enables researchers to enter data into a database and code the data at four levels: codes, code categories, cases, and annotations. The most basic level is the codes assigned to data sources. To give structure to the codes, code categories can be composed of several lower-level codes. Only codes and code categories were used for the purposes of this study.
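The two levels used in this study can be sketched as a small data structure. This is an illustrative Python sketch of the concept, not the RQDA API; all review and code names below are hypothetical.

```python
# Illustrative sketch (not the RQDA API): codes attached to review
# excerpts, and code categories composed of several lower-level codes.
# All names here are hypothetical examples.

# Codings: each worker review is tagged with one or more codes.
codings = {
    "review_1": ["pay_expectation", "survey_task"],
    "review_2": ["email_blocked"],
    "review_3": ["pay_expectation"],
}

# Code categories group lower-level codes to give them structure.
categories = {
    "Payment": ["pay_expectation", "survey_task"],
    "Communication": ["email_blocked"],
}

def reviews_in_category(category):
    """Return reviews carrying at least one code in the category."""
    codes = set(categories[category])
    return sorted(r for r, cs in codings.items() if codes & set(cs))

print(reviews_in_category("Payment"))  # → ['review_1', 'review_3']
```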


Turker thoughts about pay on Survey Tasks

Pasted image at 2016 03 19 05 12 AM.png

Data: TurkOpticon 5 Votes v. All Others for Survey Tasks
Welch Two-Sample t-Test: p = 0.005428
Student's t-Test: p = 0.003642

Data CSV from March 19

data:  dat$Tasks.w..5s and dat$All.Others
t = 3.3403, df = 12.791, p-value = 0.005428
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 1.976024 9.246208
sample estimates:
mean of x mean of y
10.863968  5.252852
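The reported t-statistics can be recomputed directly from the summary statistics on this page (means, SDs, and n = 10 per group). A minimal Python sketch, using only the numbers reported here:

```python
import math

# Summary statistics reported on this page (n = 10 per group).
m1, s1, n1 = 10.86396774, 4.807581499, 10   # Tasks w/ 5s
m2, s2, n2 = 5.252851782, 2.259549664, 10   # All Others

# Welch's two-sample t-test (unequal variances, Welch-Satterthwaite df).
se_w = math.sqrt(s1**2 / n1 + s2**2 / n2)
t_welch = (m1 - m2) / se_w
df_welch = (s1**2 / n1 + s2**2 / n2) ** 2 / (
    (s1**2 / n1) ** 2 / (n1 - 1) + (s2**2 / n2) ** 2 / (n2 - 1)
)

# Student's t-test (pooled variance). With equal group sizes the
# t value matches Welch's, but df = n1 + n2 - 2 = 18, which is why
# the Student's p-value (0.003642) is smaller than Welch's (0.005428).
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
t_student = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

print(round(t_welch, 4), round(df_welch, 3))
# t = 3.3403 on df = 12.791 -- matches the R output above.
```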

Hairball Map: What might happen outside of Turk?


Scenario Development Example

---Begin Task Submission--- ---Evidence---
1. Worker submits work GENERIC
2.1 Requester mass rejection parameter kicks in GENERIC
3.1 Requester team screens rejected tasks [Account 46]
4.1 Requester team submits results report to Worker [Account 46]
5.1 Requester team posts to worker review page [Account 46]
2.2 Requester sends verification email (UNKNOWN) [Account 56]
2.3 Requester sends automated email [Account 62]
2.3.1 includes a task ticket confirmation [Account 17]
---Begin Generic Email Response---
1. Worker writes email to requester GENERIC
2.1 Requester responds to email quickly GENERIC
---something happens---
2.2 Requester does not receive email GENERIC
2.3 Requester marks worker's email as "spam" [Account 17]
NOTE: Account 17 is a vengeful worker: "make sure I was paid my 20 cents".

The worker may have acted in a way that pushed the requester to mark the emails as "spam".
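The task-submission branch of the scenario outline above can be sketched as a branch table, with each step tagged by its evidence source. A minimal Python sketch (the table structure and helper are illustrative, not part of the study's method):

```python
# Illustrative sketch: the task-submission scenario branch as a
# table of (step id, description, evidence source) entries, taken
# from the outline above.
steps = [
    ("1",     "Worker submits work",                             "GENERIC"),
    ("2.1",   "Requester mass rejection parameter kicks in",     "GENERIC"),
    ("3.1",   "Requester team screens rejected tasks",           "Account 46"),
    ("4.1",   "Requester team submits results report to Worker", "Account 46"),
    ("5.1",   "Requester team posts to worker review page",      "Account 46"),
    ("2.2",   "Requester sends verification email (UNKNOWN)",    "Account 56"),
    ("2.3",   "Requester sends automated email",                 "Account 62"),
    ("2.3.1", "includes a task ticket confirmation",             "Account 17"),
]

def steps_for(evidence):
    """Return the step ids supported by a given evidence source."""
    return [sid for sid, _, ev in steps if ev == evidence]

print(steps_for("Account 46"))  # → ['3.1', '4.1', '5.1']
```

Grouping steps by evidence this way makes it easy to see which branches rest on a single worker account versus generic behavior.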

How might Requesters manipulate tasks as a response?

These strategies are levers of control that a requester can apply, toward goals unknown to workers, across similar tasks posted sequentially. Workers monitor requesters for these changes.

1. Increase/Decrease Pay 17
2. Introduce Test Screeners before task 30
2.1 Announced/Unannounced
2.2 Paid/Unpaid
3. Increase/Decrease Task Qualification Constraints GENERIC
4. New Task Attempt Recreation 27
5. Control/Block Emails 17
5.1 Mark all email communications as spam
5.2 Mark partial emails as spam
5.3 Mark none
6. Avoid posting more tasks GENERIC
7. Partition Task Quantities 27

      Tasks w/ 5s    All Others
Mean  10.86396774    5.252851782
SD    4.807581499    2.259549664
N     10             10
P     0.003642357



TurkOpticon RDQA 3.21.1714 database

TurkOpticon RDQA 3.19.1700 database

Data CSV from March 19