Milestone 1 singularity

From crowdresearch

Experience the life of a Worker on Mechanical Turk

Unfortunately, our accounts were rejected by Mechanical Turk. As a result, we explored other avenues in order to earn our one dollar. I will describe my experience with www.microworkers.com, the site I worked on. Microworkers is a site where people post very small jobs for others to complete in exchange for very small payments. I completed two basic tasks on that website -

  • one of them was following Bentley.cool.cat on Instagram and liking and sharing any three posts for $0.55
  • the other was reviewing an app called Crown It that is available on the Google Play Store for $0.35

On completion of these two tasks, I had a total of $0.90 in my account.

Another team member did a couple of tasks on microworkers.

  • Signing up for iPoll which earned him $0.10
  • Searching for a product on Amazon and sharing the screenshot for $0.25.

One of the team members completed a survey on iPoll that took around 8 minutes and earned him $1. However, iPoll requires a minimum balance of $20 before earnings can be redeemed, and he was unable to find any other tasks there.

The tasks didn't require a lot of time. The majority of the time was spent finding work and navigating the websites. We also spent some time on crowdflower.com, but it assigns work based on a worker's level and experience on the site, so finding work as a new worker was difficult.

As one teammate put it - we had to complete a lot of menial tasks to earn a very nominal amount of money.

Experience the life of a Requester on Mechanical Turk

Reflect on your experience as a requester on Mechanical Turk. What did you like? What did you dislike? Also attach the CSV file generated when you download the HIT results.


Unfortunately, our request to become a requester on MTurk was denied. We couldn't even use the sandbox because our country of residence isn't the US. Instead, we created a Google Docs form and sent it to our batchmates, asking them to transcribe the text in an image. We got a few responses because it wasn't a lot of work, and some people didn't mind spending a minute or two to complete the job.

Another problem we faced was uploading our task to crowdsourcing sites like CrowdFlower and Microworkers. They require a minimum deposit of $10 to post any job as a requester, and paying it was difficult because they don't accept PayPal, one of the most popular ways to transfer money.

We realized that workers rarely volunteer to do a job, and since we weren't paying any fee, it was even tougher to get responses.

As a requester, we liked how we got the responses with minimal cost and effort.

Responses - [1]

Explore alternative crowd-labor markets

I joined the site GalaxyZoo. GalaxyZoo uses workers to classify galaxies based on their visual appearance. In contrast to Microworkers, workers on GalaxyZoo volunteer to do the jobs and aren't paid. Microworkers generally offers a mixed bag of tasks, like signing up for a site, doing a specific Google search, reviewing a product, retweeting, etc.

GalaxyZoo relies on the fact that workers with an interest in astronomy will volunteer to work.

Readings

MobileWorks

Strengths of the system

1. The system makes crowdsourcing available to a massive part of the Indian population due to its ability to run on simple mobile phones. It exploits the high mobile penetration in India while taking into account the fact that most of these phones are cheap, simple, and low-resolution.

2. The reported overall accuracy of the workers seems to be quite high, which is incredibly important for a crowdsourcing job.

3. It overcomes some major barriers through its simple design: (a) limited English literacy - it makes crowdsourcing available to people from rural areas who have a limited understanding of English; (b) a simple interface - one might otherwise be deterred from using a platform by a convoluted design, which seems to be avoided here.

4. Using historical accuracy as a reinforcement metric seems to be a good idea, as it rewards workers for consistently doing well and helps ensure good overall accuracy.
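A toy sketch of how historical accuracy might drive such a scheme - the worker names, the neutral prior, and the moving-average update are my own assumptions for illustration, not MobileWorks' actual method:

```python
from collections import defaultdict

def weighted_vote(answers, accuracy):
    """Pick the answer with the highest total weight, where each
    worker's vote counts in proportion to their historical accuracy."""
    scores = defaultdict(float)
    for worker, answer in answers.items():
        scores[answer] += accuracy.get(worker, 0.5)  # neutral prior for new workers
    return max(scores, key=scores.get)

def update_accuracy(accuracy, worker, was_correct, alpha=0.1):
    """Update a worker's historical accuracy as an exponential moving average."""
    prev = accuracy.get(worker, 0.5)
    accuracy[worker] = (1 - alpha) * prev + alpha * (1.0 if was_correct else 0.0)

# Three workers transcribe the same OCR fragment; one strong worker
# outweighs two weak ones.
accuracy = {"w1": 0.95, "w2": 0.55, "w3": 0.30}
answers = {"w1": "hello", "w2": "hallo", "w3": "hallo"}
print(weighted_vote(answers, accuracy))  # → hello
```

Bonuses could then be tied to each worker's running accuracy, which is exactly the reward-for-doing-well loop described above.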

Improvements

1. It could be enabled to do tasks other than OCR. This might seem difficult since MobileWorks needs to work on simple mobile phones. However, surveys or multiple choice questions would be easy to answer even on such simple phones. Tasks like audio transcription and local language translation could also be viable (which they intend to work on in the future).

2. The problem of limited English literacy could be overcome by using reasonably sophisticated translation tools. This would make more tasks available for workers to choose from.

3. Since Hindi is the most widely spoken language in India, it would be incredibly convenient if the interface text language could be switched. Many rural workers might find it difficult to use even a simple interface such as this if they're unable to understand the text on it. This should not be difficult to implement as the interface is simple and contains few words.

mClerk

Strengths of the system

  • mClerk uses text messages to communicate with its workers. In India, far fewer people have access to the internet or a PC than to a mobile phone, so text messages can reach a far greater population.
  • Using bitmapped images as in earlier Nokia messaging is a nice idea.
  • The bonuses offered to the workers, the leaderboard to increase competitiveness, and the lower rate charged to requesters are strengths of mClerk.
  • Focusing on a local language, Kannada, instead of English is an interesting idea. Considering that India has a diverse culture and a lot of languages, such a system would be extremely beneficial for translation accuracy.


Improvements

  • The accuracy of the system was around 90%, which is lower than comparable systems. I think they should raise the bar for accepting a translation to agreement among 3 or 5 workers, to increase accuracy.
  • They should increase the bonuses for workers with higher accuracy, and the weight of an answer could grow with a worker's experience. This would incentivise workers to be accurate.
  • As mentioned in the paper, a collaboration with the phone companies could prove beneficial.
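The agreement-threshold idea in the first point could be sketched like this - a toy illustration of the acceptance rule, not mClerk's actual pipeline:

```python
from collections import Counter

def accept_translation(submissions, required_agreement=3):
    """Accept a crowd translation only once `required_agreement` workers
    have independently submitted the same string; otherwise keep waiting
    for more submissions."""
    if not submissions:
        return None
    counts = Counter(s.strip().lower() for s in submissions)
    best, n = counts.most_common(1)[0]
    return best if n >= required_agreement else None

print(accept_translation(["namma", "namma", "nemma"]))           # → None (only 2 agree)
print(accept_translation(["namma", "namma", "nemma", "namma"]))  # → namma
```

Raising `required_agreement` from 2 to 3 or 5 trades throughput (each word needs more workers) for accuracy, which is the trade-off argued for above.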

Flash Teams

Strengths of the system

One concept that seemed innovative to me is that a requester can feed in 'input material' (e.g. a script) and 'output material' (e.g. a video), and Foundry would use AI planning techniques to construct a modularised workflow that connects the input to the output at minimal time or monetary cost. What I find particularly exciting is the possibility that not only the workflow but also the design of an entire megascale organisation could potentially be computationally managed.
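As a rough illustration, such planning can be framed as a cheapest-path search over artifact types. The block library, roles, and costs below are entirely made up for the sketch, not taken from the paper:

```python
import heapq

# Hypothetical block library: each block turns one artifact type into
# another, staffed by some role, at some (time or monetary) cost.
BLOCKS = [
    ("script", "storyboard", "storyboard artist", 3),
    ("storyboard", "animation", "animator", 5),
    ("script", "voiceover", "voice actor", 2),
    ("animation", "video", "video editor", 2),
    ("voiceover", "video", "video editor", 6),
]

def plan_workflow(start, goal):
    """Dijkstra over artifact types: find the cheapest chain of blocks
    that transforms `start` into `goal`."""
    frontier = [(0, start, [])]
    seen = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in seen:
            continue
        seen.add(state)
        for src, dst, role, c in BLOCKS:
            if src == state:
                heapq.heappush(frontier, (cost + c, dst, path + [role]))
    return None  # no chain of blocks connects the two types

print(plan_workflow("script", "video"))  # → (8, ['voice actor', 'video editor'])
```

The real system would plan over richer block descriptions (deadlines, team sizes, handoff formats), but the core idea - search a library of composable blocks for a minimal-cost path from input to output - is the same.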

Foundry is more scalable than a traditional human-managed organisation, and also more robust, as it allows replacements to be recruited on the fly from a vast talent pool. The results of the experiment in the paper also show that it is more efficient than a self-managed project.

Improvements

1. As has already been pointed out in the paper, lower completion times could be achieved if time zones for the different phases of the work are specified when requesting jobs on oDesk.

2. There could also be added intrinsic support for parallel human management hierarchies which would become crucial for larger projects.

3. The system could also be enhanced to allow for creating persistent organisations that work on much longer term projects. This is a harder challenge as tasks cannot be as elegantly modularised and 'handoffs' not as strictly defined when there are multiple feedback and iteration processes.

4. The input and output types for workflow blocks could be extended to support iterative improvement. For example, consider a user-centred design process where we finally want >90% user satisfaction. There could be a "design" block and a "testing" block that loop into each other: the "design" block outputs a product, and the "testing" block outputs a satisfaction rating. The loop exits when the output of the "testing" block crosses 90% in value. An additional real-valued output of design blocks (which may be obtained from a testing process) could also be used to decide the bonuses given to the team.

5. I did not see any explanation of how to review the work of individual workers or decide their individual bonuses when their final output comes from a team. There will surely be more contention over ratings and evaluation if these workers do not personally know each other. Designing an effective incentive scheme is another challenge.