Milestone 1 teamtrojan

From crowdresearch
Revision as of 14:11, 4 March 2015 by Rashmiputtur (Talk | contribs) (Experience the life of a Requester on Mechanical Turk)


Experience the life of a Worker on Mechanical Turk

Signing up as a Mechanical Turk worker was not easy; we ran into issues with our Amazon Payments account. Having finally resolved the sign-up issues, we were excited to experience the life of a Turk worker. As neophytes, it took us time to grasp the concept and get the workflow right.

Since we needed to start in training mode, we began with a task that required us to count the number of comments at a given URL. We had ten minutes to provide the correct number for each URL, and were paid $0.02 for every correct answer.

Having obtained a fair idea of the concept, we tried a task that required producing a drawing for a given description. Some of the descriptions we tried included a camp, diversity, and a barbecue.

The platform provides an easy and inexpensive way to collaborate with different participants. However, the number of tasks available to a beginner is low, and it is difficult for novices to search for suitable tasks. It would help if tasks were classified according to a worker's level.

Overall, we had an amazing experience in learning a new concept.

Experience the life of a Requester on Mechanical Turk

Mechanical Turk is a great platform for conducting research inexpensively and quickly. It is well suited to running online surveys and recording responses. However, responses cannot always be relied upon, as the platform is not representative of any particular segment of the population. Whether Mechanical Turk is a good option for a given piece of research also remains an open question: requesters must decide if the platform suits their needs. There is also the problem of validating responses to filter out bots and workers who are not attending to the purpose of the task.

Creating HITs as a requester was a challenging experience. After a lot of pondering, we decided to conduct a survey on the usage of laptops and tablets, with a reward of $0.06 and an allotted time of 10 minutes.

Alternate crowd-labor markets - GalaxyZoo

While GalaxyZoo is an astronomical crowdsourcing platform that solicits volunteers to help morphologically classify galaxies, leveraging the concept of citizen science, Amazon Mechanical Turk is a crowdsourcing platform aimed at individuals and businesses, who use collaboration to perform tasks that computers are unable to perform.

GalaxyZoo restricts itself to supporting scientific research, whereas tasks such as classifying images or writing product descriptions can be performed on Amazon Mechanical Turk.

Mechanical Turk allows requesters to post HITs (Human Intelligence Tasks) with deadlines, rewards, and allotted completion times. The HITs range from classifying images to writing detailed descriptions. GalaxyZoo does not let users define deadlines or rewards; instead, it gathers users' opinions by asking them a series of questions about each galaxy.

Apart from being an astronomical crowdsourcing platform, GalaxyZoo also provides access to a rich collection of papers and data related to astronomy and galaxies. The site also allows users to engage in constructive discussions through discussion boards.

While Mechanical Turk caters to the broader section of people and businesses, GalaxyZoo aims at helping scientists with galaxy classification.



MobileWorks is a mobile-phone-based crowdsourcing platform that provides human OCR tasks, which workers can complete on low-end mobile phones through web browsers.

Features of the system include:

Accessibility: The system uses the mobile Internet, which, thanks to the ubiquity of inexpensive cell phones, is now a cost-effective way to send micro-tasks to people at the bottom of the economic pyramid.

Quality: Quality is maintained through multiple entries. Each task is distributed to additional workers until two of the answers match. If a worker provides an incorrect answer, her quality rating decreases; conversely, a worker who provides a correct answer sees an increase in her quality score.

Flexibility: The system is time-flexible, i.e., it can be accessed at any time, from anywhere, at the user's convenience.
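The quality mechanism described above (replicate a task until two answers agree, then adjust the ratings of the workers involved) can be sketched roughly as follows. This is a minimal illustration, not MobileWorks' actual implementation; the rating values and increments are hypothetical.

```python
from collections import Counter

def resolve_task(answers):
    """Accept a task's answer once two independent workers agree.

    `answers` is a list of (worker_id, answer) pairs in arrival order.
    Returns (accepted_answer, agreeing_worker_ids), or None if no two
    answers match yet (in which case the task is sent to another worker).
    """
    counts = Counter()
    by_answer = {}
    for worker, answer in answers:
        counts[answer] += 1
        by_answer.setdefault(answer, []).append(worker)
        if counts[answer] == 2:  # two matching entries -> accept
            return answer, by_answer[answer]
    return None

def update_ratings(ratings, answers, accepted):
    """Raise ratings of workers whose answer matched the accepted one;
    lower the ratings of the rest. Increment size is a made-up value."""
    for worker, answer in answers:
        ratings[worker] = ratings.get(worker, 1.0) + (0.1 if answer == accepted else -0.1)
    return ratings
```

Under this scheme a disagreement simply triggers another replication, which is exactly the latency cost the "Improving time" suggestion below tries to reduce.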

Improvements that can be made to the system include:

Tasks Performed: Since the system makes use of the web, it could explore a wide range of tasks that are easily performed over the Internet, including audio transcription, same-language subtitling, and local-language translation.

Minimum Required Qualification: The system could associate a minimum level of experience or quality rating with high-priority tasks.

Tagging of Tasks: Categorizing tasks by difficulty level would help users quickly find suitable tasks.

Improving time: Rather than replicating all tasks, a second, experienced user could verify or reject a prior worker's response.


mClerk is a mobile crowdsourcing platform that focuses on low-income workers in developing countries, who perform tasks via SMS.

The system has the following good features:

1. Accessibility: The system takes into account the lack of access to computers and the Internet for a significant percentage of the population in developing countries, and thus uses SMS for sending and receiving tasks, making the system accessible to anyone with a low-end mobile phone.

2. Graphical Tasks: Though SMS is ordinarily limited to sending and receiving text, mClerk uses a protocol to send small images via ordinary SMS. This enables various real-world tasks that require access to images.

3. Users’ Requirements/Qualifications: To enable contributions from low-income and less-educated workers with limited knowledge of English, mClerk provides them with local-language documents to digitize. This also partly addresses the large-scale problem of digitizing local-language text.

4. Novelty: This system is the first demonstration of large-scale crowdsourced digitization for a language that lacks font support on workers’ devices.

5. Interaction with users: Daily leaderboard messages acknowledging the contributions of top users, along with reminder messages in case of inactivity, improve the system's interaction with users and help increase user contribution.

Improvements that can be made to the system include:

1. Accuracy of results: The system could limit the influence of less reliable workers by matching their responses against those of a trusted worker and by requiring a minimum match rate for a worker to qualify for payment.

2. Improving digitization latency (the total time taken to digitize a word, from when it is first sent to when a second verified response is received): Rather than replicating all tasks, a second user could verify or reject a prior worker's response.

3. Increasing the contribution of lead users: To ensure that a lead user does not stop contributing and simply rely on referral income, a threshold contribution amount could be fixed. The lead user would receive his referrals' share of earnings only if his personal contribution meets the threshold.

4. Competitive Environment: We could design a system that pits two users against each other in a timed word-solving game. Competition might improve participation and encourage accuracy.
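The referral-threshold rule proposed in point 3 can be made concrete with a small sketch. The function, its name, the 10% referral share, and the threshold value are all hypothetical choices for illustration, not part of mClerk itself.

```python
def referral_payout(own_earnings, referral_earnings, threshold, share=0.1):
    """Compute a lead user's total payout under the proposed rule.

    The lead user receives a share of their referrals' earnings only
    once their own contribution meets the threshold; otherwise the
    referral share is withheld. All parameter values are illustrative.
    """
    if own_earnings >= threshold:
        return own_earnings + share * referral_earnings
    return own_earnings  # referral share withheld until threshold is met
```

For example, with a threshold of $3.00 and a 10% share, a lead user who earned $5.00 personally while referrals earned $20.00 would receive $7.00; one who earned only $1.00 personally would receive just that $1.00.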

Flash Teams

  • What do you like about the system / what are its strengths?
  • What do you think can be improved about the system?