Winter Milestone 1 - xthavy
- Thavy Thach
Experience the life of a Worker on Mechanical Turk
Process: I signed up to be a worker Monday night after the first meeting of the Winter Batch 2016. It took two days to be accepted, then one additional day for my Amazon Payments account to be accepted. I dislike that. Signing up as a worker shouldn't take that long. Simply put, we can improve this by lumping all of these registration details together instead of handling them separately. This would enhance the experience and allow users to start making money as quickly as possible, even though uncertainty may still arise from requesters' tasks.
Experience 1: The day I got approved, I began working on tasks. These tasks are surprisingly fun yet time consuming. For example, the first task that interested me was a LinkedIn information-gathering task. I'm always interested in LinkedIn, but when I looked at the task it was very unorganized. This task design CAUSES uncertainty to a huge degree. There were two "website" spots to fill in, and the instructions were too unclear to be certain about what to do. Instructions of "NA" were provided, but the information they communicated was confusing. I made 11 cents off this task and left it at that, as it was poorly designed.
Outcomes of Experience 1:
- Task design must be improved
- Instructions must be improved - Prototype Tasks will help with that
- Liked how you can earn money easily by doing such tasks
- Usually surveys are complete scams, but their use on crowdsourcing sites makes them more reliable and usable to the online/research community
- Disliked the task design
- Disliked the instructions
- Registration must be improved - can Daemo reach that stage?
- Disliked the time consumed by tasks
Experience 2: Today I also attempted a task that involved labeling 3D objects. It consumed a lot of time and was only worth 25 cents. Amid this frustration, I moved on to the next task, as this labeling work directed me to a different site: a credible academic site, but not a very efficient one. It was very buggy. Certainly, this didn't bode well. So I went on to a task that took less time and paid significantly (or at least decently) better.
Outcomes of Experience 2:
- Task design must be improved
- Instructions were decent, but require improvement
- Disliked that these tasks are not paid well
- You could perhaps make a living this way in a developing country, but not so much in a developed country; there it can only amount to a side hobby. Daemo could change how we earn online, but wages need to improve to attract workers from a variety of professions
- I predict that if Daemo is successful, almost "everyone" will balance a full-time job with work on a crowdsourcing platform like Daemo or Mechanical Turk
Experience the life of a Requester on Mechanical Turk
First day as a requester: I am completely new to crowdsourcing, so I got lost in the process after submitting my first task. I submitted it with only 1 HIT. I didn't know what a HIT was at all. That's when I thought, "Okay, I have until Sunday to figure out how to get this to work." Throughout the whole day I found that nothing was happening. I felt frustrated because I'm someone who usually gets results as soon as possible (not on Mechanical Turk, but in general). I struggled as a requester. I had no idea what any of the terms meant; Mechanical Turk was very confusing to me, and I had to use a search engine to discover what most of the terms were. Still confused about what to do, I managed to create my first request. It was a disaster.
Three days after that batch was published: It seemed as if Mechanical Turk hated me or that I wasn't creating these HITs properly. The second option was correct. My first attempt at creating a HIT was nonsense: it created only 1 HIT when the desired amount was at least 15. My original task design fit only ONE HIT and would have amounted to 30 minutes of work, whereas my improved task design took approximately 2 minutes of work and fit up to 16 HITs. Now I'm truly happy that I got this to work, as there were many flaws and limitations in creating a HIT on Mechanical Turk.
All HITs completed: I created my HITs the first day and paid $0.15 per HIT. There were 16 HITs total; I aimed for 15 completions from 15 different people. I created the task at 1PM, and all HITs were completed by 10PM. Everything went smoothly. My HIT was pretty simple, since it was just gathering information from an image. I didn't expect it to be completed so easily, in a mere 8 hours. This was tremendously beautiful. I was in shock, since the first task I created was not designed correctly and therefore led to no results in three days. A better task design was necessary to take advantage of Mechanical Turk's system of working and requesting. I was satisfied that 8 hours was enough for others to complete this task, and I'm truly happy to understand what requesting actually is. I rejected many workers who tried to keep doing the HIT, as it was specified to do it only ONCE. I believe it's pretty easy to reject and approve someone, which, combined with anonymity, invites scamming. How do we alleviate that pressure? We did a good job, but we probably didn't get our money's worth. This raises a big question: who's at fault, the requester or the worker? Either way, Boomerang will fit nicely into this boat as a research direction.
Requester Outcomes:
- HTML knowledge is necessary to be effective in creating HITs
- Improvement: cheat sheets in a place that is easy to reach, not something you have to hunt for on a search engine, as that was confusing
- Variables are everything; an Excel/CSV input file is a must for this --> it determines effectiveness
- Task DESIGN is everything
- I like how the Excel/input file is implemented
- I like how HTML is implemented, but it could be improved with a system that is less confusing for the general layman
- I dislike Mechanical Turk's structure for helping a new user
- There are documents, but some seem outdated and misleading, as the instructions are very unclear
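The "variables" point above can be sketched concretely. Mechanical Turk's HTML templates contain ${column} placeholders that get filled in from the uploaded spreadsheet, one HIT per data row. Here is a minimal Python simulation of that expansion, not MTurk's actual code; the column name and URLs are made up for illustration:

```python
# Simulate MTurk's template expansion: each ${column} placeholder in
# the HTML template is filled from one spreadsheet row, and each row
# becomes one HIT. string.Template conveniently uses the same ${name}
# placeholder syntax as MTurk templates.
import csv
import io
from string import Template

template = Template(
    '<p>Look at <a href="${image_url}">this image</a> '
    'and type the text you see.</p>'
)

# Stand-in for the uploaded Excel/CSV file: a header row naming the
# variable, then one row per HIT.
csv_data = io.StringIO(
    "image_url\n"
    "http://example.com/a.png\n"
    "http://example.com/b.png\n"
)

# One rendered HIT per data row.
hits = [template.substitute(row) for row in csv.DictReader(csv_data)]
print(len(hits))  # 2
```

This is also why my first attempt produced only 1 HIT: with no input file (or a single-row one), the template expands exactly once, no matter how much work the page describes.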
Explore alternative crowd-labor markets
oDesk (Upwork): There are two sign-up options, similar to Mechanical Turk: "I want to hire a freelancer" or "I'm looking for online work". Judging by the front page, this platform is totally different from what Mechanical Turk offers. Even though it offers incentives to its workers and requesters, the audience is on a totally different level: technology-savvy individuals who code or design something of the sort. This is totally different from transcribing and performing microtasks.
- MobileWorks takes advantage of how cell phones are used in developing countries, such as India. This system allows users in developing countries who have a cell phone handy to get paid for performing simple tasks based on Optical Character Recognition (OCR).
- HUMAN OCR emphasizes a preprocessing stage that opens up several possibilities for further expansion. One is dividing those "small tasks" into even smaller tasks. Then again, there's the possibility of recombining those tasks and attaching a larger payment. There's a lot to improve in this system in terms of the future.
- HUMAN OCR is also the biggest strength of this system, because without it the MobileWorks system would not operate. It enables workers, especially marginalized workers, to use their cell phones to earn a wage. It also allows these marginalized workers to work on tasks alongside their full-time jobs. Portability is best utilized when access to a phone and the internet are readily available.
- Whatever screen size a phone may have, the documents served through the web application's user interface adapt to it accordingly.
- Quality is assessed beautifully: two individuals are assigned the same task until both of their answers match. A worker's quality score goes up if the answer is correct and down if it is incorrect. The improvement: how accurate can this be? Could we assign quality scores in a different manner? Maybe I entered the text wrong, and it will still decrease my quality score. Is there a more accurate representation of this quality system?
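The agreement rule above can be sketched in a few lines. This is a hypothetical Python sketch of the idea as I understood it, not MobileWorks' actual code; the function name, starting score, and step size are all my assumptions:

```python
# Agreement-based quality scoring, as described in the text: two
# workers answer the same item; on a match both scores rise and the
# answer is accepted, on a mismatch both scores fall and the task
# would be re-issued. Scores are kept as integers, floored at 0.
def update_scores(scores, worker_a, worker_b, answer_a, answer_b):
    """Adjust both workers' quality scores based on whether they agree."""
    delta = 1 if answer_a == answer_b else -1
    for w in (worker_a, worker_b):
        # unseen workers start at a neutral score of 5
        scores[w] = max(0, scores.get(w, 5) + delta)
    # accept the answer only when both workers agree
    return answer_a if answer_a == answer_b else None

scores = {}
accepted = update_scores(scores, "w1", "w2", "cat", "cat")
print(accepted, scores)  # cat {'w1': 6, 'w2': 6}
rejected = update_scores(scores, "w1", "w2", "dog", "bog")
print(rejected, scores)  # None {'w1': 5, 'w2': 5}
```

One answer to my own "maybe I entered the text wrong" concern: the hard equality check could be relaxed to treat near-matches (say, one character of edit distance) more leniently, so a single typo doesn't cost a worker the same penalty as a careless answer.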
Boomerang proposes a reputation system that differs from traditional rate-and-leave systems by impacting both users: the receiver and the giver. I love this system, as ratings will not be given merely out of obligation. This rating system pushes users to rate fairly, because if they don't, they'll end up seeing that mediocre worker again and again, leaving them with poor matches.
Prototype tasks seem more helpful than Mechanical Turk's system, where you don't get to preview your own project before it is published and don't know whether it'll be good or not. I like prototype tasks because a task goes through a revision loop before it gets published. The prototyping also pays, which creates more opportunity for earnings. This revision helps produce the best results, ones that match the requester's expectations.
IMPROVEMENT for Prototype Tasks: make this optional. For example, say I'm a user who's on the run (going to work). I post my request, but I don't want to go through the prototyping process; I want my request up immediately, so I can check later whether it has been worked on and finally pay accordingly for my results. It would be a useful option for users with that intent.
The PROCESS is brilliant. Napkin sketch, to low-fidelity mockup, to heuristic evaluation, to revised low-fidelity mockup, to high-fidelity mockup with user study report, to revised high-fidelity mockup: an interesting system. It seems simple on paper, but in action it's a totally different beast.
Foundry is the platform for all of this. I don't like crowd-managed teams, because the manager might be selfish or might not actually know what to do. Therefore, I propose that the "manager" (or leader) role rotates throughout the process if it's a long task.
Crowdsourcing with Experts was a neat topic to read about. While reading it, an idea came to me: what if workflows, task design, etc. were designed by the workers/requesters themselves? That opens up opportunities, but it probably increases wait time, i.e., it lengthens how long prototypes take to get through.
IDEA: let's say we had a platform for something and wanted help from online volunteers [expert volunteers, specialized volunteers, or the like]. Couldn't we use this Flash Teams / Crowdsourcing with Experts idea for collaboration? This would open up job opportunities quite a bit. It would also be revolutionary if this world had individuals who held full-time jobs and also kept an online specialization that could be pinged at any time.
IMPROVEMENT/CONCERN: there is no reference to time in this paper. HOW does time play into this context? How do people from different time zones play a role? The study you conducted probably consisted of individuals in one country rather than from different countries, which would make the results less VALUABLE for worldwide research efforts. Time must be accounted for across different regions of the world, as managing teams is difficult. We have that problem in our Slack, because many of us have teammates in different time zones. It's something to think about.
BLOCKS are the basic building blocks of flash teams. I like the INPUTS AND OUTPUTS idea of compatibility, as you can get different outputs from one simple input. Blocks also have many different functionalities, which seems promising. This is the strength of Flash Teams: it has so many possibilities to go places.
Elasticity is the growing and shrinking of a block. I like this idea of adding and removing individuals from a team. However, my doubt and concern is that newly added individuals might feel left behind, as the other experts won't help them catch up and the one who quit is simply gone for good.
Pipelining is interesting, as you don't need to wait for the entire upstream set of tasks to finish before beginning downstream tasks.
Slack usernames of all who helped create this wiki page submission: @thavythach