Milestone 9 taskforce

From crowdresearch

Give the Crowd What they Understand Best (extended ++)

In previous submissions we proposed our ideas on "Give the Crowd What they Understand Best" (see Milestone 7_taskforce).

We believe that this idea is related to Foundation 2: Input/output transducers.

While it is true that purely automatic methods will introduce some noise (e.g., automatic language translations might need a human review, and dealing with cultural taboos might be challenging), we think we could have a hybrid system combining machine and human computation. We imagine that the human inputs could come from the crowd, who would be paid for helping with the clarity and cultural adaptations. These workers could also earn additional points in their status, or badges: recognition, after all.
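The hybrid workflow above could be sketched as a review queue: a machine step produces a draft adaptation, and crowd workers correct it in exchange for points and badges. This is a minimal illustration; all class and function names (`AdaptationJob`, `Worker`, `submit_review`) and the point/badge values are our own hypothetical choices, not an existing system.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AdaptationJob:
    """A machine-produced adaptation (e.g. a machine translation) awaiting human review."""
    task_id: str
    machine_text: str
    reviewed_text: Optional[str] = None
    reviewer: Optional[str] = None

@dataclass
class Worker:
    name: str
    points: int = 0
    badges: list = field(default_factory=list)

def submit_review(job: AdaptationJob, worker: Worker, corrected_text: str,
                  points_per_review: int = 5, badge_threshold: int = 20) -> None:
    """Record a human correction and reward the worker with status points and badges."""
    job.reviewed_text = corrected_text
    job.reviewer = worker.name
    worker.points += points_per_review
    # Recognition: award a badge once the worker has contributed enough reviews.
    if worker.points >= badge_threshold and "Cultural Adapter" not in worker.badges:
        worker.badges.append("Cultural Adapter")
```

For example, a worker who reviews four machine translations at 5 points each would reach the (assumed) 20-point threshold and receive the badge.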


The first thing we would like to do is conduct a study on crowdsourcing platforms in order to understand the current status of the problem. We want to know to what extent current microtasks could be adapted in terms of task clarity and cultural dimensions, and to what extent workers could benefit from this. We have created a first draft of a survey:

We have also created two examples of CrowdFlower tasks to showcase how they could be adapted:

Task clarity adaptation

This is how a task with "difficult language" could look:

We aim to have a simplified version of the task, for example:
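One way the system could support this simplification step is by automatically flagging instructions that are likely to be unclear, so a requester (or a crowd reviewer) knows what to rewrite. The sketch below is only an illustration of the idea; the heuristics (sentence length and word length thresholds) and the function name `clarity_flags` are assumptions of ours, not a validated readability measure.

```python
import re

def clarity_flags(instructions: str, max_sentence_words: int = 20,
                  max_word_length: int = 12) -> list:
    """Return warnings about sentences and words that may make task instructions unclear."""
    flags = []
    # Naive sentence split on terminal punctuation; enough for a sketch.
    sentences = [s.strip() for s in re.split(r'[.!?]+', instructions) if s.strip()]
    for sentence in sentences:
        words = sentence.split()
        if len(words) > max_sentence_words:
            flags.append(f"Long sentence ({len(words)} words): '{words[0]} ...'")
        for word in words:
            if len(word) > max_word_length:
                flags.append(f"Possibly difficult word: '{word}'")
    return flags
```

A short, plain instruction such as "Pick the happier tweet." would produce no flags, while a long sentence full of jargon would be flagged for rewriting.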

Cultural adaptation

Imagine a microtask to assess the sentiment of tweets. We have created an original microtask designed in English:

This task could be adapted to the Spanish culture by (1) translating the instructions into Spanish and showing the tweets in Spanish, and (2) introducing a more colourful design in the instructions (let's assume that Spaniards find grey boring). The culturally adapted version of the task (for Spaniards) could look like this:
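The two adaptation steps above (language plus visual style) could be driven by a per-culture profile that the system applies to a task template. The profiles, colour values, and function name `adapt_task` below are illustrative assumptions only; the accent colour for the Spanish profile merely encodes the "colourful design" assumption from the example.

```python
# Hypothetical culture profiles: target language plus styling preferences.
CULTURE_PROFILES = {
    "en-US": {"language": "en", "accent_color": "#777777"},  # plain grey baseline
    "es-ES": {"language": "es", "accent_color": "#e63946"},  # assumed preference for colour
}

def adapt_task(instructions_by_lang: dict, items_by_lang: dict, culture: str) -> dict:
    """Assemble a culturally adapted task version from translated content and a style profile."""
    profile = CULTURE_PROFILES[culture]
    lang = profile["language"]
    return {
        "instructions": instructions_by_lang[lang],
        "items": items_by_lang[lang],
        "style": {"accent_color": profile["accent_color"]},
    }
```

For the tweet-sentiment example, the same task definition would be deployed once per target crowd, each time with the matching translation and styling.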

The cultural and clarity adaptation is done to deploy tasks in different versions, addressing different target crowds. However, the system would also have a design-support feature that alerts the requester to aspects she might have overlooked. For example, given a microtask to assess the sentiment of tweets, if the requester plans to publish this task in China, the system would show an alert saying that Twitter is blocked in China, and that it might therefore be more convenient to target other crowds. Or the requester might plan to publish the microtasks in Germany, but forget that a task about airline tweets might be inappropriate due to the recent disaster in which a plane crashed and many German citizens died. Along the same lines, a tweet may contain comments about alcohol, something that is taboo in some cultures. All of these factors affect the microtask settings.
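The design-support feature described above could be backed by a simple rule base mapping target countries to blocked platforms and sensitive topics, checked when the requester configures a deployment. This is only a sketch of the idea: the rule entries below just encode the two examples from this section (Twitter in China, airline tweets in Germany), and the function name `design_alerts` is our own hypothetical choice.

```python
# Illustrative rule base derived from the examples in the text; a real system
# would need a curated, regularly updated knowledge source.
COUNTRY_RULES = {
    "China": {"blocked_platforms": {"Twitter"}, "sensitive_topics": set()},
    "Germany": {"blocked_platforms": set(), "sensitive_topics": {"airlines"}},
}

def design_alerts(platform: str, topics: set, target_country: str) -> list:
    """Return design-support alerts for deploying a microtask in a target country."""
    rules = COUNTRY_RULES.get(
        target_country, {"blocked_platforms": set(), "sensitive_topics": set()}
    )
    alerts = []
    if platform in rules["blocked_platforms"]:
        alerts.append(f"{platform} is not accessible in {target_country}; "
                      f"consider targeting another crowd.")
    for topic in sorted(topics & rules["sensitive_topics"]):
        alerts.append(f"Topic '{topic}' may be inappropriate for workers in {target_country}.")
    return alerts
```

A taboo topic such as alcohol would be handled the same way, by adding it to the `sensitive_topics` set of the relevant countries.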