Milestone 1 RATH

Experience the life of a Worker on Mechanical Turk

Reflect on your experience as a worker on Mechanical Turk. What did you like? What did you dislike?

Experience the life of a Requester on Mechanical Turk

Reflect on your experience as a requester on Mechanical Turk. What did you like? What did you dislike? Also attach the CSV file generated when you download the HIT results.

Explore alternative crowd-labor markets

The following table outlines various aspects of existing crowdsourcing platforms.

| Criteria | Amazon Mechanical Turk | TaskRabbit | Galaxy Zoo | oDesk |
|---|---|---|---|---|
| Type of Task | "Artificial Artificial Intelligence" | | | Activities requiring mental effort: translation, writing, transcription, surveys, etc. |
| Number of Tasks/Day | Limited number of tasks per day | | | No limitation |
| Profit/Non-Profit | For profit | | | For profit |
| Worker Payment | Payment after the task is finished and approved | | | Invoice and payment after completing the task |
| Worker Concerns | | | | |
| Requester Concerns | | | | |

Readings

MobileWorks

* What do you like about the system / what are its strengths? MobileWorks looks to broaden micro-task markets to include marginalized workers. It recognized that in India, for example, desktop computer penetration was only 0.09%, while mobile phone penetration was much higher at 50%. The team designed a minimal interface that would be usable across a variety of cell phone grades and efficient even on a low-end mobile phone, which significantly increased the opportunity for acceptance and participation in the target market. The success of this particular solution lies in its simplicity: by keeping the UI at its most basic, MobileWorks achieved widespread usability across local devices. The other great strength of the MobileWorks solution is its accuracy. The system used only single entry (one worker submission per task) and started at 89% accuracy; the authors speculate that double entry could increase that accuracy to 98.79% and triple entry to 99.89% (see the sketch below).
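
Those redundancy figures follow from simple arithmetic; here is a minimal check, assuming worker errors are independent and a task is answered wrongly only when every one of its submissions is wrong:

```python
# Expected task accuracy with n-fold redundancy, assuming each submission
# is independently correct with probability p and a task fails only when
# all n submissions fail.
def redundant_accuracy(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(redundant_accuracy(0.89, 1))  # 0.89     -> the reported 89%
print(redundant_accuracy(0.89, 2))  # 0.9879   -> the reported 98.79%
print(redundant_accuracy(0.89, 3))  # 0.998669 -> roughly the reported 99.89%
```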

* What do you think can be improved about the system? One of the goals of the project was to "create an interface efficient enough so as to provide livable wages to workers." The pilot project sought to reverse-engineer compensation: the average worker had an efficiency of 120 tasks per hour and earned 20-25 Indian Rupees (0.32-0.40 USD) per hour in their regular work, so tasks would have to average 0.18-0.20 Indian Rupees each to provide a living wage (see the check below). As this was only a pilot project, a large number of requesters would be needed to give workers enough task volume to attain and maintain the required tasks-per-hour rates. Also not addressed is current market compensation for similar work on more traditional desktop platforms. Additional questions include: What is the tasks-per-hour efficiency on a traditional desktop computer platform? Does the speed limitation of a low-end phone compromise this efficiency? If efficiency rates differ between platforms, how does this inform compensation from the requester's standpoint? Does this discrepancy compromise the ability to achieve sufficient efficiency to attain a "living wage"?
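
The compensation reverse-engineering reduces to a single division: the per-task rate that reproduces the prevailing hourly wage at the observed throughput. A quick check, using the figures quoted above (taken from the paragraph, not recomputed from the paper):

```python
# Back out the per-task rate that reproduces a target hourly wage at the
# observed throughput of 120 tasks per hour.
def per_task_rate(hourly_wage_inr: float, tasks_per_hour: float) -> float:
    return hourly_wage_inr / tasks_per_hour

low = per_task_rate(20, 120)   # ~0.17 INR per task
high = per_task_rate(25, 120)  # ~0.21 INR per task
print(f"{low:.2f}-{high:.2f} INR per task")  # brackets the quoted 0.18-0.20 target
```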

mClerk

* What do you like about the system / what are its strengths?

mClerk's most innovative strengths are:

1. The ability to bring image-based tasks to low-end mobile platforms by sending small bitmapped images over SMS.

2. Digitization of local-language text with a high accuracy rate (90.1%).

3. Non-monetary compensation.

mClerk has a unique approach to digitizing local-language documents. A document is scanned and divided into individual word images; these images are converted to binary picture messages and distributed to workers via SMS. Word images range from 64x16 to 74x28 pixels depending on the phone. Workers then text back the word. Because many phones do not support local-language fonts, mClerk asks workers to respond with the best equivalent English spelling. The system collects two worker responses per word, and the matching algorithm was amended to mark two responses as equivalent if they transliterate back to the same word in the local language, mitigating the challenge of working in a non-native script and yielding an accuracy rate of 90.1%. mClerk also takes on the compensation challenge by introducing a non-monetary structure: workers are paid in mobile phone airtime rather than traditional monetary compensation.
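
The agreement rule can be sketched as below. This is a hypothetical reconstruction rather than mClerk's published code: the transliterate() helper and its toy lookup table are stand-ins for whatever English-to-local-script mapping the system actually used.

```python
# Hypothetical sketch of mClerk's two-response agreement rule. The toy
# lookup stands in for the real English -> local-script transliteration.
TOY_TRANSLITERATIONS = {
    "namaste": "नमस्ते",    # illustrative entries only
    "namaskar": "नमस्कार",
}

def transliterate(english_word: str) -> str:
    # Stand-in for mClerk's English -> local-language mapping (assumed).
    word = english_word.lower()
    return TOY_TRANSLITERATIONS.get(word, word)

def responses_agree(a: str, b: str) -> bool:
    # Two responses count as equivalent if they match outright, or if both
    # transliterate back to the same local-language word. Note the failure
    # mode raised below: agreement does not imply correctness, since two
    # workers can make the same mistake.
    return a == b or transliterate(a) == transliterate(b)

print(responses_agree("Namaste", "namaste"))   # True: same word
print(responses_agree("namaste", "namaskar"))  # False: different words
```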

One other appealing aspect of the system is the commitment to keeping workers engaged and motivated: the system responds with timely feedback after every 10 correct messages. The ease of use, unique compensation, and committed engagement made for a successful viral launch of the pilot program.

* What do you think can be improved about the system? While the round trip to English and back to the local language does solve some of the SMS-based font challenges, it creates new ones, including the case where two English answers agree but are both incorrect. It would be interesting to see how the platform would perform in regions with other language scripts, to see whether its translation rates would remain competitive with agency translators.

Flash Teams

* What do you like about the system / what are its strengths?
* What do you think can be improved about the system?