Amdp's Milestone

I had a very hard time participating in this research project. As an example, a lot of people overwrote this milestone page due to inexperience with the wiki and a lack of instructions, on the wiki itself, on how to deliver the milestones. EDIT: The instructions were there, but neither I nor other people had time to read the whole wiki carefully. Maybe it is not a problem of missing information but a problem of information access. Instead of reporting my experience with Mechanical Turk and/or other crowdsourcing platforms, I'd like to focus on this crowdsourced project itself.

EDIT: the more I go on, the more the crowd itself comes up with suggestions. I am summarizing them on this page: http://crowdresearch.stanford.edu/w/index.php?title=Orientation , in order to balance my critiques with an effort to solve the same problems I am noticing.


There is a real effort at human interaction to help people understand what is going on, but it is still very hard to follow, due to three main points:


1. The tools you have to use in order to participate are many, and they fragment your effort. Getting approved on Mechanical Turk seems to be difficult, and there is no evident notice that you can try other services too. EDIT: the alternative services are listed in the milestone wiki page linked from the Orientation wiki page.

2. The goal of the whole project is not that clear: it seems like you can pursue any research idea you want, but at the same time it seems like there is a specific platform to develop that should work better than MTurk (Daemo).

3. The setup of the project is clearly "American", and there is no description of how the American process works. For example, it is standard for an American to think in terms of milestones, but in other parts of the world that can be different. The main time zone of the project is also California-centric, so if you want to take part in the peer review of the milestone submissions and you are not in PST, you may have to do it at a time that is not easy to manage. The current delivery time is, for example, 4/5 AM in Eurafrica. I suggested moving it to 7 AM PST, which is 2 AM in Sydney, so all of "Pangea" is outside sleeping hours and only the Pacific is in night time. That arc should be considered when deciding worldwide timings (see the sketch after this list).
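
To make the time-zone point concrete, here is a minimal sketch of how a hypothetical 7:00 AM Pacific deadline maps onto a few other zones. It assumes Python 3.9+ with the standard zoneinfo module; the date and the list of zones are illustrative, not actual project settings:

 # Hypothetical example: convert a 7:00 AM Pacific deadline into other zones.
 # The date and zone list are illustrative, not the project's real schedule.
 from datetime import datetime
 from zoneinfo import ZoneInfo  # standard library in Python 3.9+
 
 deadline = datetime(2015, 1, 15, 7, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
 
 for zone in ["America/Los_Angeles", "Europe/Rome", "Africa/Lagos",
              "Asia/Kolkata", "Australia/Sydney"]:
     local = deadline.astimezone(ZoneInfo(zone))
     print("{:22s} {:%a %H:%M}".format(zone, local))

Under these assumptions the deadline lands in the afternoon for Europe and Africa, in the evening for India, and at 2 AM for Sydney, which is the arc described above.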


Comparing this to the description of Mechanical Turk provided by this research group/project, "However, these platforms are notoriously bad at ensuring high quality results, producing respect and fair wages for workers, and making it easy to author effective tasks. It’s not hard to imagine that we could do better.", we can see three things that are not currently working well in this research group either:


1. Ensuring good quality

2. Ensuring respect and fair wages

3. Ensuring easy task authoring


About the quality of this research project, I am currently skeptical, but I tend to be optimistic about the future growth of the project itself.

About respect, I see little care in the way the project helps people get involved.

About fair wages, I see a low pay-off when comparing the effort you have to put in to participate and to stand the chaos, the lack of human relations, and the lack of physical presence of the other peers, with the possible outcomes: after all that effort, there seems to be a low chance of getting something useful done if you want to get your name on a publication. On the other hand, if you want to study crowd and group dynamics, the setting is very useful and "rewarding".

About task authoring, I have no experience of it at a higher level, but I can personally report that individual efforts are rewarded through one of the many apps used in the project, Bonusly. Nobody told me I could receive a "badge" as a reward, nor who is in charge of awarding them, but I received one. I do not know its value, whether it is an acknowledgement of a task, or whether there is a database of all of them.


I suggest thinking about more efficient usability, a more international approach, and more careful handling of human resources for the future of the project.

Conclusion: the project seems to improve its performance once the complicated initial approach is past. Out of around one thousand participants, I have seen around 50 names actively participating, and probably 100 will deliver milestones. I personally think an easier initial walkthrough could raise the current 10% participation rate to a much more satisfying percentage. It seems to be an amazing project, but there is still room for better transparency and clearer evidence of what the whole plan is about.


The need for an evaluation step in the crowdsourcing workflow (after a talk with the European team)

Mechanical Turk seems to be a platform with a strangely difficult admission process and a set of repetitive, low-profile tasks awarded with wages that are very low and unfair compared to "western countries". It seems like a service for people who live in an area where one or two dollars more can make a difference, even though sometimes the job is not paid at all, because there is no technical binding between completion of the work and the flow of money.

Evaluating the work done could be a very difficult point in the worker/client flow. If the job is formally complete but wrong in its contents, it will be hard for the worker to prove the job is fine and get paid, or for the client to prove the job is not well done at all and get refunded. It would be interesting to create a channel for result evaluation in the crowdsourcing process.
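
As one possible way to picture such an evaluation channel, here is a minimal sketch in Python. Everything in it (the states, the majority vote among peer reviewers, the names) is a hypothetical illustration of the idea, not a feature of any existing platform:

 # Hypothetical sketch of a result-evaluation step: payment is released only
 # after an explicit evaluation, and a disputed result goes to peer review.
 from enum import Enum, auto
 
 class Status(Enum):
     SUBMITTED = auto()   # worker has delivered the result
     ACCEPTED = auto()    # requester (or reviewers) approved it; pay the worker
     DISPUTED = auto()    # requester rejected it; peer reviewers must decide
     REJECTED = auto()    # reviewers confirmed the rejection; refund the client
 
 class Submission:
     def __init__(self, task_id, worker, result):
         self.task_id = task_id
         self.worker = worker
         self.result = result
         self.status = Status.SUBMITTED
 
     def evaluate(self, requester_accepts):
         """Requester's evaluation: accepting releases payment, rejecting opens a dispute."""
         self.status = Status.ACCEPTED if requester_accepts else Status.DISPUTED
 
     def resolve_dispute(self, reviewer_votes):
         """Peer reviewers settle a dispute by majority vote."""
         if self.status is Status.DISPUTED:
             accepted = sum(reviewer_votes) > len(reviewer_votes) / 2
             self.status = Status.ACCEPTED if accepted else Status.REJECTED
 
 # Example: a formally complete but wrong result gets disputed and rejected.
 s = Submission("task-42", "worker-7", "result payload")
 s.evaluate(requester_accepts=False)
 s.resolve_dispute([False, False, True])
 print(s.status)  # Status.REJECTED

The only point of the sketch is that acceptance, payment, and refund become explicit, recorded transitions instead of implicit outcomes.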