Milestone 3 ZSpace DarkHorseIdea: TurkBook: Taskfeed for you

One of the major observations we have made is that all workers (barring qualifications and requester-imposed limits) can participate in any HIT. The workers who end up working on a HIT are not necessarily the best workers for the job.

In real life this is not a problem because organizations typically screen potential employees through interviews. Even when the number of prospective applicants is large, organizations can screen candidates effectively by looking through their resumes and applying simple filters (rejecting all candidates with a GPA below some threshold, only accepting candidates from a given set of institutions, etc.). Analogous information is not available to requesters on MTurk, and even if it were, filtering candidates would place an additional burden on the requester.

We therefore believe a possible solution is an automated recommender system that suggests HITs to workers based on their previous work record; such a system would also increase the trust requesters place in workers. The system would operate on a principle similar to Facebook’s news feed: it computes a score for each HIT for each worker based on a number of criteria, such as the time elapsed since posting, similarity to work done in the past, and the worker’s qualifications, and then displays the HITs in descending order of score. The role of the score is to steer HITs toward the workers who are likely to do the best job on them.
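
As a rough illustration, the sketch below shows how such a per-worker score might be computed and used to rank HITs. The field names, weights, and the keyword-overlap similarity measure are hypothetical choices made for illustration only; a real system would need to tune these against worker and requester outcomes.

```python
import math
import time

# Hypothetical weights for each criterion; these would need tuning.
W_RECENCY = 1.0        # prefer recently posted HITs
W_SIMILARITY = 2.0     # prefer HITs similar to the worker's past work
W_QUALIFICATION = 1.5  # prefer HITs the worker is qualified for


def keyword_similarity(a, b):
    """Jaccard overlap between two keyword sets (a simple stand-in for a
    real similarity measure over HIT descriptions and work history)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


def score_hit(worker, hit, now=None):
    """Compute a feed score for one (worker, HIT) pair.

    `worker` and `hit` are plain dicts with hypothetical fields:
      worker = {"history_keywords": set[str], "qualifications": set[str]}
      hit    = {"keywords": set[str], "required_qualifications": set[str],
                "posted_at": float unix timestamp}
    """
    now = now or time.time()

    # Time elapsed since posting: newer HITs score higher.
    hours_old = max(0.0, (now - hit["posted_at"]) / 3600.0)
    recency = math.exp(-hours_old / 24.0)

    # Similarity to work done in the past.
    similarity = keyword_similarity(worker["history_keywords"], hit["keywords"])

    # Fraction of the required qualifications the worker holds.
    required = hit["required_qualifications"]
    qualification = (len(worker["qualifications"] & required) / len(required)
                     if required else 1.0)

    return (W_RECENCY * recency
            + W_SIMILARITY * similarity
            + W_QUALIFICATION * qualification)


def build_feed(worker, hits):
    """Return the worker's HITs in descending order of score."""
    return sorted(hits, key=lambda h: score_hit(worker, h), reverse=True)
```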

We believe this idea offers the following advantages:

  • Workers are assigned/recommended tasks that they are good at, so the quality of work increases.
  • Requesters can be more confident about the quality of the work they will receive.
  • MTurk is typically preferred for simple tasks only; more complicated tasks like précis writing and design are reserved for specialized platforms. By promoting the few workers who have the qualifications/experience to perform such tasks (and discouraging those who may do a poor job), MTurk can diversify the types of work it offers.

We identified the following disadvantages during our brainstorming sessions:

  • Workers who have the skills/qualifications to carry out specialized tasks might find it harder to get recommended at the beginning because they lack a track record on the platform.
    • This can be offset by adding a random component to the assigned score. By “stirring the pot” occasionally, the system can strike a balance between new workers and experienced ones (a minimal sketch of this appears after the list).
  • Finding the exact proportion that ensures quality while still giving a worker a realistic chance of gaining credentials is a matter of trial and error.
  • Workers who want to work in a specific category of HITs might find it difficult to “migrate” to it, whether because of a lack of experience or a negative experience in a previous HIT.
    • We believe a possible solution is to allow some amount of search in the news feed.
  • Care must be taken not to grant so much freedom that the guarantees afforded by the recommender system are overridden.
  • It might be difficult to guarantee algorithmic fairness. The fairness of algorithms is a matter of increasing concern as they come to govern large parts of our lives, both virtual and physical.
    • Today, algorithms decide traffic signal timings, flag who is likely to be a tax evader, and determine who gets rescued first in the event of a disaster.
  • While it is easy to imagine perfect algorithms that behave optimally, rigorously ensuring their fairness is hard in practice because they may rely on heuristics or statistical observations.
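
The random component suggested in the list above could be blended into the feed score along the following lines. This is a minimal sketch under the assumption of a uniform random term and a fixed mixing weight; the function names, the `EXPLORATION` constant, and its value are hypothetical, and finding the right proportion is exactly the trial-and-error question raised above.

```python
import random

# Hypothetical mixing weight: how much random noise to blend into the
# deterministic score. Tuning this value is the trial-and-error step
# mentioned in the list above.
EXPLORATION = 0.15


def stir_the_pot(base_score, rng=random, exploration=EXPLORATION):
    """Blend a deterministic feed score (e.g. from the earlier sketch) with a
    uniform random component, so that workers without a track record
    occasionally surface near the top of the feed."""
    return (1.0 - exploration) * base_score + exploration * rng.random()


def build_feed_with_exploration(worker, hits, score_fn, rng=random):
    """Rank HITs by the stirred score instead of the raw score.

    `score_fn` is any (worker, hit) -> float scoring function, such as the
    score_hit sketch shown earlier.
    """
    return sorted(hits,
                  key=lambda h: stir_the_pot(score_fn(worker, h), rng),
                  reverse=True)
```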