Milestone 1 RATH
Revision as of 22:16, 4 March 2015
Experience the life of a Worker on Mechanical Turk
Reflect on your experience as a worker on Mechanical Turk. What did you like? What did you dislike?
The biggest challenge we faced was actually signing up for the program. Only two members were fully engaged this week, and neither of us was able to gain clearance to accept a HIT. The sign-up process was very disappointing: the site displayed a screen saying "This may take 48 hours," but no confirmation e-mail was sent to acknowledge the request, and there was no way to tell whether approval was even close. We are STILL waiting to be approved, with no information about the status of our requests.
Experience the life of a Requester on Mechanical Turk
Reflect on your experience as a requester on Mechanical Turk. What did you like? What did you dislike? Also attach the CSV file generated when you download the HIT results.
Explore alternative crowd-labor markets
The following table outlines various aspects of existing crowdsourcing platforms.
{| class="wikitable"
! Criteria !! Amazon Mechanical Turk !! TaskRabbit !! Galaxy Zoo !! oDesk
|-
| Type of Task || "Artificial Artificial Intelligence": activities requiring mental effort, e.g., translation, writing, transcription, surveys || Physical labor, e.g., house cleaning, repairs, rides; prides itself on personal, thoughtful customer service || Scientific and research activities || Graphic design, software development, testing, etc.
|-
| Number of Tasks/Day || Limited number of tasks per day || No limitation || Unknown || Unknown
|-
| Profit/Non-Profit || For profit || For profit || Not for profit || For profit
|-
| Worker Payment || Payment after the task is finished and approved || Invoice and payment after completing the task || Unpaid (volunteer-based) || Highly dependent: per hour/week/project, etc.
|}
MobileWorks
* What do you like about the system / what are its strengths? MobileWorks works to broaden microtask markets to include marginalized workers. It recognized that in India, for example, desktop computer penetration was only 0.09%, while mobile phone penetration was much higher at roughly 50%. The team designed a minimal interface usable across a variety of handset grades, efficient even on a low-end mobile phone, which significantly increased the opportunity for acceptance and participation in the target market. The success of this particular solution lies in its simplicity: by keeping the UI at its most basic, MobileWorks achieved widespread usability across local devices. The other great strength of the MobileWorks solution is its accuracy. Using only single entry (one worker submission per task), the system started at 89% accuracy; the authors speculate that double entry could increase accuracy to 98.79% and triple entry to 99.89%.
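The single/double/triple-entry accuracy figures above can be reproduced with a simple redundancy model. This is a hedged back-of-the-envelope sketch, not MobileWorks' actual method: it assumes each worker's submission is independently correct with probability 0.89, and that a task fails only if every submission is wrong. Under that assumption, double entry gives exactly the 98.79% quoted, and triple entry gives about 99.87%, very close to the 99.89% figure.

```python
# Hedged sketch of the multi-entry accuracy speculation above.
# Assumption (not from the source): submissions are independent, and a
# task is answered correctly if at least one submission is correct.

def redundant_accuracy(p_single: float, entries: int) -> float:
    """Probability that at least one of `entries` independent
    submissions is correct, given per-submission accuracy p_single."""
    return 1 - (1 - p_single) ** entries

for k in (1, 2, 3):
    print(f"{k} submission(s): {redundant_accuracy(0.89, k):.2%}")
```

Under this independence assumption the model slightly undershoots the quoted triple-entry figure (99.87% vs. 99.89%), which suggests the original speculation used a similar but not identical model.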
* What do you think can be improved about the system? One of the project's goals was to "create an interface efficient enough so as to provide livable wages to workers." The pilot sought to reverse-engineer compensation: the average worker completed about 120 tasks per hour and earned 20-25 Indian Rupees (0.32-0.40 USD) per hour in regular work, so tasks would have to pay roughly 0.18-0.20 Indian Rupees each to provide a living wage. Since this was only a pilot, a large number of requesters would be needed to give workers enough task volume to attain and maintain those tasks-per-hour rates. Also unaddressed is current market compensation for similar work on more traditional desktop platforms. Additional questions include: What is the tasks-per-hour efficiency on a traditional desktop platform? Does the speed limitation of a low-end phone compromise this efficiency? If efficiency rates differ between platforms, how does this inform compensation from the requester's standpoint? Does this discrepancy compromise the ability to achieve sufficient efficiency to attain a "living wage"?
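The reverse-engineered wage arithmetic above checks out with a quick calculation. This is only a sketch of the division implied by the figures in the text: at 120 tasks per hour, an hourly target of 20-25 INR works out to roughly 0.17-0.21 INR per task, consistent with the 0.18-0.20 INR range quoted.

```python
# Back-of-the-envelope check of the living-wage arithmetic above.
# Figures are taken from the text; the per-task rates are derived.

TASKS_PER_HOUR = 120              # average worker efficiency cited
HOURLY_WAGE_RANGE_INR = (20, 25)  # typical hourly earnings cited

for hourly_wage in HOURLY_WAGE_RANGE_INR:
    per_task = hourly_wage / TASKS_PER_HOUR
    print(f"{hourly_wage} INR/hour -> {per_task:.3f} INR/task")
```

This prints 0.167 and 0.208 INR/task for the two endpoints, so the 0.18-0.20 figure in the text appears to be a rounded midpoint of the derived range.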
mClerk
* What do you like about the system / what are its strengths?
mClerk's most innovative strengths are:
1. The ability to bring image-based tasks to low-end mobile platforms by sending small bitmapped images over SMS.
2. Management of the digitization of local-language text with a high accuracy rate (90.1%).
3. Non-monetary compensation.
mClerk takes a unique approach to digitizing local-language documents. A document is scanned and divided into individual word images; these images are converted to binary picture messages and distributed to workers via SMS. Word images range from 64x16 to 74x28 pixels depending on the system. Workers then text back the word. Because many phones do not support local-language fonts, the mClerk system asks workers to type the best equivalent English spelling. mClerk uses a two-worker response, and the team amended their algorithm to mark two responses as equivalent if they transliterate back to the same word in the local language, mitigating the challenge of answering in a non-native script and yielding an accuracy rate of 90.1%. mClerk has also taken on the compensation challenge by introducing a non-monetary compensation structure: workers are paid with mobile phone airtime rather than traditional monetary compensation.
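The transliteration-equivalence rule described above can be sketched in a few lines. This is purely illustrative: mClerk's actual transliteration engine is not reproduced here, and `to_local` below is a hypothetical stand-in lookup table, included only to show the shape of the agreement check.

```python
# Hedged sketch of the two-worker agreement rule described above.
# `to_local` is a HYPOTHETICAL stand-in for a real transliteration
# engine; the tiny lookup table exists only for illustration.

def to_local(romanized: str) -> str:
    """Map a Romanized answer back to a local-language word
    (hypothetical lookup table, not mClerk's real engine)."""
    table = {"ghar": "घर", "ghur": "घर", "pani": "पानी"}
    return table.get(romanized.lower(), romanized.lower())

def answers_agree(a: str, b: str) -> bool:
    # Accept an exact match, or two different Romanized spellings
    # that transliterate back to the same local-language word.
    return a.lower() == b.lower() or to_local(a) == to_local(b)

print(answers_agree("Ghar", "ghur"))  # different spellings, same word
print(answers_agree("ghar", "pani"))  # different words
```

The design choice worth noting is that agreement is tested in the local-language space rather than on the raw English strings, which is exactly what lets mClerk tolerate inconsistent Romanized spellings.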
One other appealing aspect of the system is its commitment to keeping workers engaged and motivated, with timely feedback after every 10 correct messages. The ease of use, unique compensation, and committed engagement made for a successful viral launch of the pilot program.
* What do you think can be improved about the system? While translating to English and back to the local language solves some of the SMS font challenges, it creates new ones, including the possibility that two English answers agree with each other but are both incorrect. It would also be interesting to see how the platform would fare in regions with other language scripts, to test whether its translation rates would remain as competitive with those of agency translators.
Flash Teams
* What do you like about the system / what are its strengths?
The Flash Teams framework solves a number of unique challenges. It significantly reduces barriers to entry in product development for inventors: you no longer need a staff of developers to create a prototype, and from a single napkin sketch you can quickly move through the basics of development to a sample product. Another significant advantage is that the total project timeline shrinks, because you avoid the resource bottlenecks that plague many technology companies today. Rather than a "burn 'n churn" environment, there is always a ready, willing, and able technologist available to execute the next phase. It also offers insight into the just-in-time recruiting challenges of larger technology and consulting companies: what would be possible if highly skilled workers were in the "typing pool" for well-paying remote temp design/development work?
Flash Teams is Agile at its best. Always be developing!
* What do you think can be improved about the system?
The system has a solid theoretical foundation, but a requester with a napkin sketch and limited technical expertise would have minimal insight into whether the technical aspects of what is delivered are solid and scalable. Would a novice necessarily know whether the code was tidy? Maintainable? Secure? An experienced technologist or project manager, on the other hand, can see great possibilities here. It is not a catch-all for technology companies' talent crises, for obvious reasons including proprietary information, platform consistency, quality of work, and coding guidelines. However, the idea of taking the agile approach to software development and applying it to staffing is intriguing.