Milestone 1 munichkindl



Experience the life of a Worker on Mechanical Turk

We could not get experience as real workers because we are not allowed to register from Germany, so we did the tasks in the sandbox.

The first thing to notice is that working on the provided tasks is not much fun and quite monotonous. Some tasks can be considered "something different" (like drawing or giving feedback on visual material, which can be a more creative exercise), but in general we suppose that the main motivation for doing them is the money.

However, the money, even though it can certainly serve as a motivation for workers, is nowhere near enough to substitute for an average salary in a developed country. It could be enough for people in some poorer regions, but the pool of potential full-time workers is limited by how little they can earn.

So there are some questions that we would like to explore in the coming weeks:

  • Is this model of crowdsourcing creating an asymmetric labour market?
  • Is getting cheap labour the main value provided by the current crowdsourcing platforms?
  • Is the low quality of the results the reason to offer low payments? If not, what is the reason?
  • What can we do to change that?
  • In this experiment we are trying to write a research paper through crowdsourcing. Can we create a tool that allows everyone to get high-quality workers and results good enough to write research papers?

Experience the life of a Requester on Mechanical Turk

In the past, we used Mechanical Turk to label handwritten letters. For that we had to build our own template, because the tools offered by Mechanical Turk are enough for basic things like text inputs, but not for more advanced interactions.

Our starting point was an image like this: [1].

So the final output of our tasks had to be every letter, together with the coordinates of each letter in the image.

We had to split the work into smaller tasks because workers were complaining about long tasks. It seems they prefer small tasks over more complex ones (a hypothesis to check).


Task 1: Select with a square every word that appears in this image

Instructions

Segment all the words that appear in the following image.

  • Use the square to select a single word at a time.
  • Do not leave any part of the word outside of the square; it must be entirely inside the selection square.
  • To be accepted, all the words must be correctly selected.

Instructions image:

Result CSV

We built a tool to check the results. You can test it here using the CSV above.

Check tool (Chrome only)


After checking the results of the job and selecting the correct words, the generated CSV is used to create Task 2.
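The handoff from Task 1 to Task 2 can be sketched as a small script that turns each approved word selection into one Task 2 input row. This is a minimal sketch, not our actual tool: the CSV column names (`image_url`, `word_id`, `x`, `y`, `width`, `height`) are assumptions for illustration.

```python
import csv
import io

def task2_rows(task1_csv_text):
    """Turn approved Task 1 word selections into Task 2 input rows.

    Each Task 1 row is assumed to hold the source image plus the
    selection square of one word (column names are hypothetical).
    Each output row describes one word crop for letter labeling.
    """
    rows = []
    for rec in csv.DictReader(io.StringIO(task1_csv_text)):
        x, y = int(rec["x"]), int(rec["y"])
        w, h = int(rec["width"]), int(rec["height"])
        rows.append({
            "image_url": rec["image_url"],   # original page image
            "crop": f"{x},{y},{w},{h}",      # word bounding square
            "word_id": rec["word_id"],       # links letters back to the word
        })
    return rows

task1 = """image_url,word_id,x,y,width,height
page1.png,w1,10,20,80,30
page1.png,w2,100,20,60,30
"""
print(task2_rows(task1)[0]["crop"])  # -> 10,20,80,30
```

Keeping the `word_id` in every Task 2 row is what lets the letter polygons be mapped back to their word, and ultimately to coordinates in the original page image.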

Task 2: Select and label every letter that appears in the word

Instructions

Segment all the letters that appear in the following image.

  • We are showing you a large image, so you may need to scroll over it to find the word in the white space.
  • Draw a polygon point by point with your mouse to select each letter.
  • To place polygon points, click the left mouse button. Once the polygon is closed (see the example GIF), click the right mouse button to finish it.
  • After the first click, a window will appear prompting you to enter which letter you are selecting.
  • When you write what you are selecting, respect lowercase, uppercase, and umlauts (special German letters that can appear in the image: ä, ö, or ü).
  • Avoid selecting parts of other letters inside the polygon.
  • To be accepted, all the letters must be correctly labeled, except in the cases mentioned in the previous point.

Instructions gif:

Result CSV

We built a tool to check the results. You can test it here using the CSV above.

Check tool (Chrome only)




What I liked about this experience is that we were able to access a lot of workers simultaneously. I think the value is not in the money they receive, but in being able to multiply the volume: before starting this experiment we had two people working on this task, and after moving to Mechanical Turk we could easily multiply that number.

What I didn't like: some workers did the tasks very fast, because the model of these platforms rewards volume instead of quality. Instead of trying to understand the instructions and investing a few more seconds in delivering good results, they preferred to keep completing tasks quickly. That is why we had to build the result checker tool, to manually verify everything and quickly reject whatever is not correct.
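Part of the reviewing load can be taken off the manual checker by rejecting obviously invalid submissions automatically, before a human ever looks at the image. The following is a minimal sketch of that idea, not our real checker tool: the result fields (`label`, `polygon`) and the accepted alphabet are assumptions for illustration.

```python
# Sketch of an automatic pre-check before manual review.
# Accepted letters, including the German umlauts from the instructions.
ALPHABET = set("abcdefghijklmnopqrstuvwxyz"
               "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
               "äöüÄÖÜß")

def prevalidate(results):
    """Split worker results into plausible and auto-rejectable ones.

    `results` is a list of dicts with a 'label' (the letter the worker
    typed) and a 'polygon' (list of (x, y) points). Returns (ok, rejected).
    """
    ok, rejected = [], []
    for r in results:
        label_bad = len(r["label"]) != 1 or r["label"] not in ALPHABET
        polygon_bad = len(r["polygon"]) < 3  # a closed shape needs >= 3 points
        (rejected if label_bad or polygon_bad else ok).append(r)
    return ok, rejected

good = {"label": "ä", "polygon": [(0, 0), (5, 0), (5, 8)]}
bad = {"label": "ab", "polygon": [(0, 0), (5, 0)]}
ok, rej = prevalidate([good, bad])
print(len(ok), len(rej))  # -> 1 1
```

Submissions that pass this filter still need a visual check against the image, but fast, careless work (multi-character labels, degenerate polygons) can be rejected immediately.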

Explore alternative crowd-labor markets

Task Rabbit

Task Rabbit is an online service that deals mostly with manual labor rather than intelligence-based tasks. These tasks include help with moving, cleaning, setting things up, and repair and maintenance. The service is available in various American cities; in Europe it is only available in London.

Workers can register as "taskers" with Task Rabbit. Those looking for help post their request to Task Rabbit, whose staff then takes care of connecting the requester to a suitable worker closest to the requester's locality. It behaves more like an online service provider that offers the best available service nearest to the requester.

A task requester simply fills out a web form detailing the required tasks. The data is then processed internally by Task Rabbit, and a suitable tasker is assigned or connected to the requester.

oDesk

oDesk is an online freelancing platform. It allows both job posters and job seekers to sign up and create a profile. It covers a broad range of fields such as graphic design, software development, and data entry. Once a job poster publishes a job, job seekers apply via a posting stating their strengths and sharing previous work samples. The employer specifies a budget for the project and decides which job seeker's profile and samples fit best. The job seeker can either agree on a lump-sum payment, due at the mutually decided end of the project, or use a time-based payment system in which special software from oDesk tracks the worker's time and uses methods such as periodic screenshots and keyboard strokes to determine whether the worker is spending the time on the project.

Hourly payments are automatically credited to the user's account.

Galaxy Zoo

Galaxy Zoo is an open-source platform where users classify galaxies using a comparison palette: a classifier views a picture of a galaxy, and the platform asks step-by-step questions about its observable shape and pattern in order to classify it.

The classification is free of payments and is intended to help astronomers classify all observable galaxies.

Readings

MobileWorks

The system presented in this paper is MobileWorks, a mobile web-based crowdsourcing platform used to conduct human OCR (optical character recognition) tasks.

The motivation for the system is that many people in developing regions don't have access to desktop PCs, while mobile phones and mobile internet are comparatively cheap. The platform is therefore adapted to mobile internet (for example, by using simple user interfaces) in order to be accessible even to very poor people. Another advantage of MobileWorks is that it can be used nearly anywhere and at any time.

However, besides limited internet and PC access, the authors mention several other problems that keep people from poor regions from using crowdsourcing platforms: language barriers and cultural difficulties in understanding the user interfaces and tasks. It is not clear whether they consider those solved, given that MobileWorks is a quite simple (that is, monotonous) application. If not, at least the language problem could be solved easily. Furthermore, they could use simple gamification elements, such as bars showing how much experience a worker still needs before earning higher wages, to motivate people.

mClerk

In this paper, the authors describe mClerk, a crowdsourcing platform that distributes OCR tasks via a little-known protocol for sending small images via SMS. In this way, they aim to digitize handwritten text in Kannada, a local Indian language. The system especially addresses people from poor regions where even mobile internet access is usually unavailable; mClerk makes use of the fact that most people in the considered regions have SMS flat rates.

The developers of the system apply many gamification elements, such as leaderboards, motivational messages, and rewards, both for working and for referring new workers. Furthermore, the supporting infrastructure is very well planned: if users need help, they can place a missed call to the number they receive the SMS from and will be called back by a person. If users stop answering the SMS, they are sent a reminder message.

mClerk achieves an accuracy of 90.1% on the given digitization tasks, which is below the leading market alternatives, but the authors already mention several possible ways to improve it.

Every payment and reward is paid to the users in mobile airtime, which makes the payment mechanism relatively simple. However, partly due to this, the system should not be seen as a potential full-time job, as MTurk can be. Instead, it aims to give users the possibility to earn some additional money in time they could not otherwise use well.

In their study, the authors explicitly look at how, starting from ten users, the system spreads to new people, and how the amount of payment or rewards influences the number of active users. According to their findings, higher wages motivate more workers to participate, as do additional rewards. However, even though payment was reduced, mClerk grew from 10 to 239 users within the 5-week period, which surprised even the authors.

As a possible improvement, every new user could be sent an explanation of how the system works and what it is used for, since the quality of word-of-mouth descriptions degrades as more people get involved. Also, several potential users voiced doubts about whether the system might be illegal, as there seem to have been similar illegal activities in India lately.

Flash Teams

Flash Teams is a structural approach to managing remote teams sourced from online crowdsourcing or labor networks. Teamwork is coordinated by defining interlinked tasks represented in interactive systems as so-called blocks. If necessary, flash team sizes can be elastically altered according to arising task needs. A platform named 'Foundry' provides the basis for end users to create, assign, manage, and update tasks as well as teams at runtime.

As opposed to more common uses of crowdsourced labor, which often aim for redundant, independent viewpoints from untrained microlaborers, flash teams target high-quality experts with different expertise who collaborate on creative tasks. By benefiting from fast online recruitment, flash teams are highly elastic in size, allowing them to be expanded or scaled down at runtime as task requirements change. Each task is represented by a block containing details and tags that show its associations with other tasks, team members, input, output, and its duration. By analyzing those tags and comparing them to those of former projects conducted by other flash teams, team recruitment has the potential to become automated.

Furthermore, the project flow can be pipelined by streaming results before task completion, enabling subsequent tasks to start early. This, in combination with the pre-defined task structure, prevents team members from 'busy waiting', since they are not required to possess or apply any project management skills, and suggests the use of flash teams especially for rapid, parallel prototyping. The paper indicates that, compared to self-managed teams, flash teams can cut the required number of working hours for a project roughly in half.

One aspect not yet considered for task building and visualization is the location of team members in multiple time zones. While this often extends the calendar time needed to finish a project due to different work/sleep patterns (albeit without increasing the total number of working hours), the effect could be leveraged in the opposite direction: by giving the option to intentionally select distributed team members and synchronizing their tasks carefully, constant, uninterrupted work on projects becomes possible.

On the other hand, a contrary approach could solve the issue of lacking team spirit: by allowing workers to autonomously form groups with their preferred colleagues that can be booked as a whole, tasks or projects would benefit from the effect that teams who have already worked together in the past perform better than newly founded ones.