Uwi

Revision as of 15:13, 4 March 2015

Template for your submission for Milestone 1. Do not edit this directly - instead, make a new page at Milestone 1 YourTeamName (or whatever your team name is) and copy this template over.

Experience the life of a Worker on Mechanical Turk

Reflect on your experience as a worker on Mechanical Turk. What did you like? What did you dislike?

Experience the life of a Requester on Mechanical Turk

We used the Mechanical Turk requester sandbox (https://requestersandbox.mturk.com/) to post a request for workers. The task involved categorising fashion items into their appropriate categories, i.e. tops, bottoms, footwear, one piece, and accessories. We chose the categorization request type because it seemed to be the only one available when we first got started. Once we had chosen it, creating the actual task was relatively simple. Since the requester sandbox didn't require any payment, the request was published quite quickly. However, we found that once it was published, it was surprisingly hard to get back to the request and view its status.

After we confirmed that the request was published, we started searching for it under HITs but couldn't find it anywhere. Ten minutes later, we tried searching for "categorise" under HITs and finally found the request. We then asked a team member to try to work on the request, but the system simply wouldn't allow her to accept it. It seemed as though some sort of qualification restriction had been applied to the request, which bewildered us because we hadn't set any qualifications at all.
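
In hindsight, the Requester API offers a more direct way to find a published HIT and check its status than hunting through the web interface. The sketch below is added purely for illustration (it is not something we ran for this milestone) and assumes boto3 with sandbox AWS credentials already configured:

    # Minimal sketch: list this requester's HITs in the sandbox and print their status.
    import boto3

    client = boto3.client(
        "mturk",
        region_name="us-east-1",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    for hit in client.list_hits()["HITs"]:
        print(hit["HITId"], hit["Title"], hit["HITStatus"],
              "available:", hit["NumberOfAssignmentsAvailable"],
              "pending:", hit["NumberOfAssignmentsPending"])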

We have been trying to get the categorization request to work without qualifications but cannot seem to figure it out, so unfortunately we do not have results to show. We did learn, however, that Amazon takes a 10% commission on top of the reward amount that we set for Workers.
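
Posting the HIT through the API would also have made the qualification settings explicit, which could have helped us rule out an accidental restriction. The sketch below is illustrative only (we did not actually run it); the example item and reward are made up. As for the commission, at the 10% rate we observed, a $0.05 reward across 100 assignments would cost $5.00 in rewards plus $0.50 in fees, i.e. $5.50 in total.

    # Illustrative sketch: create a categorisation HIT in the sandbox with an
    # explicitly empty qualification list, so no worker is blocked from accepting it.
    import boto3

    client = boto3.client(
        "mturk",
        region_name="us-east-1",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    QUESTION_XML = """<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
      <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html><body>
      <form action="https://workersandbox.mturk.com/mturk/externalSubmit" method="post">
        <p>Which category does "denim jacket" belong to?</p>
        <select name="category">
          <option>tops</option><option>bottoms</option><option>footwear</option>
          <option>one piece</option><option>accessories</option>
        </select>
        <!-- assignmentId is normally filled in from the URL by a short script, omitted here -->
        <input type="hidden" name="assignmentId" value="">
        <p><input type="submit" value="Submit"></p>
      </form>
    </body></html>
      ]]></HTMLContent>
      <FrameHeight>450</FrameHeight>
    </HTMLQuestion>"""

    hit = client.create_hit(
        Title="Categorise a fashion item",
        Description="Pick the category that best fits the item shown.",
        Keywords="categorise, fashion",
        Reward="0.05",                 # in dollars, passed as a string
        MaxAssignments=3,
        LifetimeInSeconds=3 * 24 * 3600,
        AssignmentDurationInSeconds=300,
        QualificationRequirements=[],  # explicitly no qualification restrictions
        Question=QUESTION_XML,
    )
    print("HIT created:", hit["HIT"]["HITId"])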

We then tried to create a data collection task. On the first page, it was easy to enter the title and instructions for our task. On the second page, we were allowed to edit the layout of the task pages. This request type follows a strict format, with the instructions on top, a table in the middle, and an input box at the bottom, so we had to re-type the instructions. In addition, we had to edit the page's source code to change the format, which is really difficult for people with no coding knowledge.
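
Since the only effective way we found to change the data collection layout was to edit its source, one workaround (sketched below with simplified markup that is not MTurk's actual generated source) is to generate that layout HTML from a small script, so the fixed structure of instructions, table, and input box only has to be touched in one place:

    # Rough sketch: generate the data-collection layout HTML from data, keeping
    # the fixed structure (instructions on top, table in the middle, input box below).
    from string import Template

    LAYOUT = Template("""
    <p>$instructions</p>
    <table border="1">
      <tr><th>Item</th><th>Where to look</th></tr>
    $rows
    </table>
    <p>Your answer: <input type="text" name="answer"></p>
    """)

    ROW = Template("  <tr><td>$item</td><td>$hint</td></tr>")

    # Made-up example data, purely for illustration.
    items = [
        ("Population of Helsinki", "official city website"),
        ("Year the Eiffel Tower opened", "any encyclopedia"),
    ]

    rows = "\n".join(ROW.substitute(item=i, hint=h) for i, h in items)
    html = LAYOUT.substitute(
        instructions="Find the requested fact and paste it into the box below.",
        rows=rows,
    )
    print(html)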

Likes

  • Getting started with creating a request wasn't difficult.
  • Actually creating the categorization request was relatively simple.
  • The overall process was quite straightforward.

Dislikes

  • There was no clear indication of the type of request being created. The text on the button kept changing, and it linked to a different place every time we clicked it. Some links are orange in color and some are blue.
  • It wasn't very obvious that you could create a request that wasn't category-based.
  • Once the request was created, it was extremely hard to find again.
  • The layout of the data collection task is hard to change. The only effective way is to edit the source code, which is not convenient for people without coding knowledge.
  • Managing requests was hard. Certain concepts, such as batches and qualification types, are not explained clearly (a small sketch of what a qualification type looks like through the API follows this list).
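
As promised above, here is a small sketch of what a qualification type amounts to in API terms: a named requirement a requester defines once and can then attach to HITs. This is illustrative only; we never created one, and the name and description below are made up.

    # Illustrative: define a qualification type in the sandbox and show how it
    # would be attached to a HIT. Not something we actually ran.
    import boto3

    client = boto3.client(
        "mturk",
        region_name="us-east-1",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    qual = client.create_qualification_type(
        Name="Fashion categorisation practice (example)",
        Description="Granted to workers who completed our practice task.",
        QualificationTypeStatus="Active",
    )
    qual_id = qual["QualificationType"]["QualificationTypeId"]
    print("Qualification type created:", qual_id)

    # A HIT restricted to workers holding this qualification would pass:
    # QualificationRequirements=[{
    #     "QualificationTypeId": qual_id,
    #     "Comparator": "Exists",
    #     "ActionsGuarded": "Accept",
    # }]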


Explore alternative crowd-labor markets

Compare and contrast the crowd-labor market you just explored (TaskRabbit/oDesk/GalaxyZoo) to Mechanical Turk.

Readings

MobileWorks

  • What do you like about the system / what are its strengths?
  • What do you think can be improved about the system?

mClerk

mClerk is a mobile application that was built specifically to target users in semi-urban areas of India and introduce them to crowdsourcing.

Likes & Strengths:

  • The most interesting thing about the system was how it targeted the perfect users, i.e. people in semi-urban areas with strong social circles.
  • We found the use of leaderboards to gamify the crowdsourcing process very fascinating. It was great to see how this encouraged users to work harder toward completing tasks.
  • Another thing we found key to mClerk's success was its use of reminders to refresh workers' memory about their pending tasks.
  • We found that dividing the project into two phases was a very smart move. We particularly liked how the team used bonuses in phase 2 to reveal changes in users' behavior.

Dislikes

  • SMS messages were not free for all of mClerk's potential users. This could have prevented new users from joining, since they were highly price-sensitive.
  • Although we realise that providing a mobile refill might've been the easiest and most convenient way to compensate users, it could have been better to pay users by another method, such as hard cash or some sort of medical coverage. This might have been more meaningful for the users and might even have prevented them from misunderstanding the system.


Flash Teams

Likes & Strengths of the system

  • Flash Teams enable teamwork. This means that more complicated, professional tasks can be assigned through this platform, and relatively high-quality results can be expected.
  • The team is modular and combines several blocks. One or more people take charge of each block, and a manager is assigned within each block. The team is managed by the system: the user sets up the blocks and the time limit for each block needed to complete the task. The structure is similar to that of a traditional organization, which makes it easy to understand and allows everyone on the team to focus on the work related to his or her expertise.
  • The team is very elastic. If an earlier group finishes its work ahead of schedule, the system recalculates the start times of the later groups and sends them a notification (a toy sketch of this rescheduling appears after this list).
  • The team uses a pipelined workflow. As long as a later group receives enough input from the earlier group, its work can begin sooner, saving a lot of time.
  • Workers in two adjacent groups can communicate with each other, which lets workers ask questions about the previous group's work. Workers are also connected to the end users: users can give feedback, and workers can ask questions about the users' needs.
  • Since the work plan is set by the system in advance, as long as all the workers in each block follow the plan, the task can be completed effectively and efficiently.
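
To make the "elastic" point above concrete, here is a toy sketch of the rescheduling idea as we understand it. It is entirely our own simplification, not code from the Flash Teams platform: each block has a planned duration, and when an earlier block finishes ahead of schedule, the start times of all later blocks are recomputed (and, in the real system, those workers would be notified).

    # Toy sketch of elastic, pipelined scheduling: recompute downstream start
    # times when an upstream block finishes early. Names and numbers are made up.
    from dataclasses import dataclass

    @dataclass
    class Block:
        name: str
        planned_hours: float

    pipeline = [Block("user research", 8), Block("design", 6), Block("development", 12)]

    def recompute_start_times(pipeline, actual_hours):
        """actual_hours maps already-finished blocks to their real durations."""
        start, schedule = 0.0, {}
        for block in pipeline:
            schedule[block.name] = start
            start += actual_hours.get(block.name, block.planned_hours)
        return schedule

    # Original plan: design starts at hour 8, development at hour 14.
    print(recompute_start_times(pipeline, {}))
    # User research finishes in 5 hours, so later blocks are pulled earlier.
    print(recompute_start_times(pipeline, {"user research": 5}))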

What do you think can be improved about the system?

  • Since the team is modular, it is hard for people to communicate and discuss the work as a whole team. Team members cannot brainstorm together, inspire each other, or find problems in the project from different perspectives, and they cannot talk through misunderstandings, disagreements, and conflicts.
  • The platform allows workers in later blocks to contact workers in the earlier block, but workers in the earlier block can't change their block's results even if they find problems after talking to those in the later block. In addition, only people in adjacent blocks can communicate with each other. Moreover, because it is an online platform, someone may finish his or her work and simply leave the group.
  • It is hard for workers to find and solve problems in pipelined workflows. For example, the user research team might not know the limitations the back-end developers are facing and may come up with something the developers cannot implement at all. In addition, problems from the early stages of the project cannot easily be iterated on if nobody notices them.
  • To offer a task on this platform, users need to know the inputs and outputs of the project. Most users just have an idea or a goal for what they want to do; they need experts to translate their ideas and goals into specific inputs and outputs.
  • Though the platform allows users to give feedback to workers during the process, it is hard for users to do this since they can't see explicit results. In addition, users may not have time to monitor the team and give feedback at all times, and the feedback could be biased.
  • There is no evaluation mechanism for the work done on this platform.