WinterMilestone 2 Jinesh

From crowdresearch
Revision as of 09:34, 25 January 2016 by Jineshmehta (Talk | contribs)


Learn about Need Finding

I discovered the field of Human-Computer Interaction with the help of Scott Klemmer's Coursera HCI lectures. Below is a brief summary of these lectures and other lessons in needfinding.

  • Identify and solve existing problems.
  • Find the environment in which the device or product is going to be used.
  • Finding a need is a much easier task if one can participate in the task itself (field work).
  • Hearing what people say about their work is one piece of the puzzle, but to get the complete picture we have to go beyond the surface and find out what they actually do.
  • Starting to find people's needs in any direction can broaden the path at a later stage by revealing hidden needs.
  • Need finders rank informants differently than usual: a person observed to have the best skill set for a task can be a better source than a person who simply has more experience.
  • Understand where you can get your questions properly answered for your needs; a higher rank does not always provide the best answer.
  • Problems are the source of supply for need finders.
  • Finding inefficiencies in a nearby workspace can really reduce the climb up the hill.
  • The less familiar you are with the technology, the higher the chances of finding novel needs.
  • Conclusions based on an old survey, or just on what people say, can lead your needfinding task to disaster.

Attend a Panel to Hear from Workers and Requesters

Deductions

  • Requesters feel a lot of pressure not to reject bad work, but they also can't accept everything.
  • Some requesters circumvent this by temporarily removing qualifications from bad workers, preventing them from submitting bad work without lowering the workers' approval ratings.
  • Occasionally, language can be a barrier to proper communication (this doesn't seem too common).
  • Requesters feel that they don't have enough time to respond to worker messages.
  • There seems to be no proper process to protest an unfair HIT rejection.
  • It is hard for requesters to be sure workers maintain their attention and submit good work.
  • Requesters are willing to pay more for good workers whom they trust (often found on forums).
  • Sometimes workers provide feedback to help requesters improve their HITs.
  • Most workers have no fixed working hours; their schedule changes from day to day.

Here are some excerpts from the discussion

Reading Others' Insights

Worker perspective: Being a Turker

  • Requesters should share the inner motivation for why a piece of work should be done, and show what the job has helped them accomplish, so as to make workers part of a common adventure.
  • Improving rapport in the turker-requester relationship may lead to better-designed HITs and support cooperation, giving both sides more control over how the market functions.
  • Workers should be open-minded.
  • Admitting a mistake may strengthen the bond between a worker and a requester.
  • Workers tend to look at compensation as a package: the pay combined with the ‘benefits’ of being able to work from home, whenever they want, etc.
  • Searching for good HITs is critical to them; time means money.
  • They are sceptical of change to the platform (brought by academics and journalists interfering).
  • They have self-esteem and do not want charity; they do not want to be ‘rescued’ by people whom they see as better off.

Worker perspective: Turkopticon

  • Quick payment is their highest priority.
  • If a worker's rating falls drastically, it is hard for him/her to pick it back up.
  • Workers also take a risk when they accept a job, because their work may be evaluated by an algorithm that may or may not be functioning correctly; if the algorithm is wrong, they won't get paid.
  • A valid argument must be given for rejecting quality work.
  • Much of the work submitted by workers is treated unfairly.

Requester perspective: Crowdsourcing User Studies with Mechanical Turk

  • The skill set and geographical location of a worker are not known to requesters.
  • Faster feedback on completed work leads to faster skill growth for workers.
  • Standardization would mean less guesswork on the side of both requesters and workers.
  • Worker can choose to perform tasks of varying difficulty.
  • Workers must learn the intricacies of each requester, including the different interfaces for their tasks and the various quality requirements for each requester.
  • Workers are forced to constantly adapt due to the lack of standardization of tasks.

Both perspectives: A Plea to Amazon: Fix Mechanical Turk

  • People are demanding that the platform evolve.
  • The core structure is weak.
  • People’s problems have never been addressed properly.
  • Amazon’s hands-off approach does not serve the market well.
  • The interface is poor and the reputation system is unreliable.
  • It is difficult to find full-time workers.
  • Every new requester has to prove their authenticity from scratch.
  • There is a lack of interface standardization.
  • Common tooling has to be built from scratch.
  • Workers’ profiles are easy to manipulate.
  • No one is trusted, due to the bad behaviour of a few.
  • There is a lot of redundancy.
  • As long as the work is done properly, the reputation of the worker simply does not matter.
  • Requesters can do whatever they want!
  • Workers follow a trial-and-error approach with each new requester.
  • A new requester may post a big task but get poor results, because the workers who take it are spammers or inexperienced; this discourages new requesters and reduces their number.
  • Workers cannot see the progress of their payment.
  • Rejections are not verified.


Synthesize the Needs You Found

List out your most salient and interesting needs for workers, and for requesters. Please back up each one with evidence: at least one observation, and ideally an interpretation as well.

Worker Needs

A set of bullet points summarizing the needs of workers.

  • Workers should be paid neither less nor more than they are owed; the decision should be based on the amount of work they did.
  • Workers need the ability to search HITs according to the requesters who posted them.
  • The complete payment schedule should be provided to the worker prior to the task.
  • A base rating should be established.
  • More attention should be paid to the time a task requires.
  • Workers need some leverage over requesters. In the current set-up requesters hold all the power in the relationship, and many of the papers noted the lack of a way to rate requesters meaningfully.
  • They need data-dashboard tools at their disposal that surface information which is not readily available (e.g., how many rejections they can sustain and still maintain their rating, or alerts when certain kinds of HITs become available). The data for which veteran turkers write scripts should be easily accessible, even to newcomers.
  • Workers need a mechanism shielding them from rejections: a redress mechanism where they can understand why their work was rejected and contest wrongful rejections, so that the whole process is not so opaque and their dignity and self-worth are upheld.
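As a concrete illustration of the dashboard idea above, here is a minimal sketch of the "how many rejections can I sustain?" calculation. The function name and the approval-rate model (approved HITs divided by submitted HITs, as a percentage) are my assumptions, not taken from any of the readings:

```python
def rejection_budget(approved, submitted, threshold_pct):
    """Return how many additional rejections a worker can absorb while
    keeping their approval rate (approved / submitted) at or above
    threshold_pct percent. Assumes the worst case: every future HIT is
    rejected, so `approved` stays fixed while `submitted` grows."""
    # Solve approved / (submitted + r) >= threshold_pct / 100 for r,
    # using integer arithmetic to avoid floating-point edge cases.
    budget = (100 * approved - threshold_pct * submitted) // threshold_pct
    return max(budget, 0)

# A worker with 960 approvals out of 1000 submissions who wants to
# stay at or above the common 95% approval bar:
print(rejection_budget(960, 1000, 95))  # -> 10
```

A dashboard could recompute this after every result, which is exactly the kind of bookkeeping veteran turkers currently script for themselves.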

Requester Needs

A set of bullet points summarizing the needs of requesters.

  • Requesters need access to workers who consistently produce good results. One of the requesters mentioned in the panel that he would actively reach out to communities like TurkerNation to discuss work. Requesters seem willing to pay for good results but feel distrustful due to the relative anonymity of the current set-up.
  • The authenticity of the work should be verified properly in order to gain the attention of good workers.
  • Requesters need a better system to handle untrustworthy answers.
  • Requesters need to reduce the amount of work they do in order to get good results. Evidence: in the paper "A Plea to Amazon: Fix Mechanical Turk", the author reported requesters building their own quality assurance systems, requiring qualifications from workers, ranking workers according to quality, etc. Interpretation: requesters have to invest a lot of time into maintaining the quality of results they get back from workers.
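The home-grown quality assurance mentioned in the last point often amounts to posting each HIT redundantly and aggregating the answers. A minimal sketch of such aggregation, assuming a simple plurality-vote rule (the function name and the agreement heuristic are my assumptions, not from the paper):

```python
from collections import Counter

def aggregate_answers(answers):
    """Plurality vote over redundant answers for one HIT.
    Returns (winning_answer, agreement), where agreement is the
    fraction of workers who gave the winning answer; low agreement
    signals the HIT may need manual review or more assignments."""
    counts = Counter(answers)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(answers)

# Four workers answered the same labeling HIT:
print(aggregate_answers(["cat", "cat", "dog", "cat"]))  # -> ('cat', 0.75)
```

Even this small amount of per-task machinery is something every requester currently rebuilds on their own, which supports the point that the platform offloads quality control onto requesters.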

Contributors

This milestone was completed by @jinesh