Milestone 1 DesignDelight

From crowdresearch
Revision as of 23:28, 4 March 2015 by Jashanjitkaur (Talk | contribs)


Hi All!

We are a team of graduate students from University of Michigan School of Information, specializing in Human-Computer Interaction.

Experience the life of a Worker on Mechanical Turk

Reflect on your experience as a worker on Mechanical Turk. What did you like? What did you dislike?

We completed 3 tasks on AMT: one text-entry task transcribing an image of a business card for $0.02, and two surveys for $1.00 each. Some of our insights about the problems a worker faces include:

  • User Experience of the Platform: Certain design issues with the website include:
    * There was a lot of confusion about how to navigate the website as a novice user: for instance, how to find your worker ID, or what terms like HITs and Groups mean.
    * We feel workers may waste significant time finding the right HIT, instead of spending that time actually working.
    * Descriptions of certain tasks are not very detailed or informative. Users often need to open a HIT to decide whether they want to do it, or whether they are even qualified for it, because not all relevant information is available upfront. Some tasks we found were invitation-only, but there was no way to filter them out. Instructions provided for completing tasks are not concise or user-friendly.
  • Time given may be too short: For the text-entry task, the time given was just 10 minutes, while it took us around 5 minutes just to (quickly) read through the instructions, which appeared in 2 pop-ups. We ended up getting an alert that we had run out of time to submit the task. We later discovered that there is a timer at the top of the page, but it does not blink or highlight to draw the user's attention when time is running out. Further, a user may be completing a survey in another tab while time runs out, so showing the timer in the current way is not very useful. We wonder whether there is a better way to make sure workers are not idling away while completing tasks, instead of keeping the timer like this.
  • Legitimacy and usefulness of a task is not clear: It is often not clear how useful or important a task is to the requester. One task we encountered looked like spam, whereas it may not have been. We find that little aid is provided to help requesters standardize their tasks.
  • No easy way to clarify confusion about tasks: We also encountered situations where we were confused about what the task was, or felt we needed more guidance. This ranges from something as simple as not being sure how to pick the first and last names on a business card (like “Eric Tang Wai Kit”) to more complex situations where a user is asked to evaluate something and may not know how to give good, comprehensive feedback.
  • Amount of payment can be very low: Certain tasks paid much less than they should have, given the time taken and minimum wages. We thought there should be ways for workers to flag such tasks within the system. The system could also compute the effective hourly payment from the time taken to complete a task, and provide warnings or alerts before a user (both requesters and workers) proceeds further. Even though there are forums where workers discuss these issues, and the problem is gaining visibility in the academic community, we feel this is not enough to discourage requesters from posting sub-standard tasks. It would be better if such reputation were integrated into the system in a more transparent way. For one task, the amount listed on MTurk differed from the amount listed in the introduction of the survey ($1.00 vs. $0.75 on the survey). There are also cases where you may complete a task and not get paid, which seemed unfair. We feel requesters should value the time spent on tasks, as anyone would for other forms of employment, and workers should be motivated to provide quality results; however, the fear of not getting paid and wasting time should not be the mechanism used to ensure quality.
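The hourly-payment warning suggested above could be sketched roughly as follows. This is purely illustrative; the function names, the wage threshold, and the idea of AMT exposing such a check are all our assumptions, not part of the actual platform:

```python
# Hypothetical sketch of the hourly-pay warning suggested above.
# The threshold and all function names are assumptions, not AMT features.

MIN_HOURLY_WAGE = 7.25  # example threshold: US federal minimum wage in 2015


def effective_hourly_rate(reward_dollars, est_minutes):
    """Convert a per-HIT reward and estimated completion time to an hourly rate."""
    return reward_dollars * 60.0 / est_minutes


def pay_warning(reward_dollars, est_minutes, threshold=MIN_HOURLY_WAGE):
    """Return a warning string if the effective rate falls below the threshold."""
    rate = effective_hourly_rate(reward_dollars, est_minutes)
    if rate < threshold:
        return ("Warning: this HIT pays about $%.2f/hour, "
                "below the $%.2f/hour threshold." % (rate, threshold))
    return None


# The $0.02 business-card task took us roughly 10 minutes including instructions:
print(pay_warning(0.02, 10))  # flags an effective rate of about $0.12/hour
```

A check like this could run when a requester publishes a HIT, and again for workers before they accept one, using the requester's own time estimate.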

Experience the life of a Requester on Mechanical Turk

Reflect on your experience as a requester on Mechanical Turk. What did you like? What did you dislike? Also attach the CSV file generated when you download the HIT results.

As a requester on Mechanical Turk, I submitted an image-tagging project asking fifteen workers to help me tag an image, paying $0.05 for each HIT. Overall, I think it was an efficient process to publish tasks and collect data, though it took some time for me to understand how the platform works, and certain steps confused me.


  • Quick Signup Process as an Amazon User: The signup process is very efficient for Amazon users. They can create an account directly from their Amazon account, and pay through the payment methods already set up there.
  • Easy to Pick Up Using the Tool: Listing the different categories of tasks that can be done on Mechanical Turk, along with corresponding examples, helps users understand the purpose of the platform and get started easily. Also, the project templates provide a very efficient way to start a new project: by choosing a preset category, a lot of content is filled in by default and an example layout is provided, so users do not have to start from scratch.


  • Usability Issues: I experienced certain usability issues while using the site. First is the confusing term “HIT”. This term is used all over the system, yet the explanation of what it stands for is not prominent at all; it is buried in a sub-tab called “How it Works”. The second problem I encountered is that there is no direct link to view my published task. This countered my assumption that I could access my post to confirm it looks exactly the way I want it to; instead, I had to search for my post in AMT. Finally, I think a dashboard page is necessary for requesters. The introductory information on the current home page is not useful for users who have used the system before, and more important information, like project results, is buried.
  • Lack of Requester/Worker Qualifications: I believe business users are an important part of AMT’s target audience. The overall signup process does not feel strict enough to verify the qualifications of either requesters or workers, and it is hard to confirm the qualifications of the workers who complete my assignments. The ways requesters and workers can interact within the system are also very limited. This may make it difficult for business users to trust their workers, as well as the data they collect, which may be a big problem for the system to solve.
  • Lack of Ways to Promote Your HIT: From a requester’s standpoint, I want my posts to be listed at the very top of the list so I can reach as many workers as possible. However, the AMT system offers few promotion methods. I also cannot access data on how the reward I set compares to similar tasks/requesters, or how I could make it more competitive. I think these potential requester needs have not been fulfilled by AMT.

Explore alternative crowd-labor markets

Compare and contrast the crowd-labor market you just explored (TaskRabbit/oDesk/GalaxyZoo) to Mechanical Turk.

We looked at TaskRabbit and oDesk to compare crowdsourcing markets. We feel both websites do a better job of ensuring quality work through ratings of taskers/freelancers. However, we found that it may be difficult for a new person to gain reputation: a person may be on TaskRabbit for years and have hundreds of reviews, whereas a newcomer has not had time to build a similar reputation. On TaskRabbit, the highest-rated taskers are called TaskRabbit Elite and have a consistently good record; these people probably use TaskRabbit as their full-time job. Coming back to the question discussed in lecture, about whether you would want your child to work on this platform, we are not sure whether TaskRabbit meets the criteria. It is possible that its users feel they work too hard compared to other people doing similar jobs and are not compensated equally. We read that taskers have complained that they must undergo a stringent security check to use the platform whereas clients do not, which can be unfair. It is also possible that TaskRabbit is relied on in urgent situations, and hence the availability of taskers is highly valued. In a nutshell, we feel that, just like freelancing, using TaskRabbit gives taskers the potential to be very successful, but requires a lot of effort on their part.

oDesk provides the ability to “track” work being done to ensure quality, using features like the “work diary”, which takes screenshots of workers’ screens every 10 minutes while they are being billed. This seems like an interesting way for requesters to be assured of the work. Perhaps on AMT, requesters have assumed that workers will try to finish the work as effortlessly and quickly as possible; this may lead them to expect low- or medium-quality work and make them reluctant to pay enough.



MobileWorks

  • What do you like about the system / what are its strengths?

We liked the fact that tasks can be completed on low-end phones using a web browser. The idea of using a user’s history to assess quality of work and determine future payments also seemed intriguing; this can prevent the need for multiple entries to assess accuracy. We also liked that MobileWorks divides tasks among different users, making the work easier and more efficient.

  • What do you think can be improved about the system?

Given the nature of the tasks, there should be a way for users to filter tasks by difficulty level. For instance, people with poor eyesight may want to transcribe only readable images. Images not selected by any users could also carry a higher reward.



Flash Teams

  • What do you like about the system / what are its strengths?
  • What do you think can be improved about the system?