Winter Milestone 5 Tractricoid

From crowdresearch


System (for task feed and open gov write up)

Brief introduction of the system

We are introducing four features that can improve both the requester and worker experience on a crowdsourcing platform. Each can improve the system individually, but together they can make a significant difference in how workers find the tasks that fit them best. The first is tagging, which allows requesters to label tasks with the skills they require. The second is rating and endorsing workers who complete a task, both to give them feedback and to match them with work that suits their skills. Third, we hope to surface critical information for workers, such as the time required to complete a task. Lastly, we hope to give workers tools that help them choose which tasks would be most beneficial for them, considering their time, rating, skills, interests, and financial plans.

Introducing modules of the system

Below, we introduce the four modules of the system. The first, Tagging, lets requesters label tasks with the skills they require. The second, Endorsement/Rating, lets requesters rate workers on those skills after a task is completed. The third, Timing, estimates how long a task takes to complete. The fourth, Finding the best task, helps workers compare tasks and choose the ones that benefit them most.

Module 1: Tagging

Problem/Limitations

One of the major problems for requesters is that it is hard to tell which candidate is the best fit for a task they post, since the task itself does not show which skills are required. Similarly, workers find it hard to discover tasks that match their skills. This lack of communication in current crowdsourcing systems makes it hard for both workers and requesters to be satisfied with the result. Tagging skills is a potential solution: different tags clearly state the skills a specific task requires, and they give requesters and workers an efficient way to communicate.

Module preview

We are considering adding tags to each task, so that the task name or description is followed by the skills required to succeed at it. Tags also give workers a fast way to skim or search lists of tasks for the ones they are interested in. (Figure: Prototype2.png)

System details

When a requester posts a task online, he or she is required to type in or choose the skills needed to finish it. For example, if the task is front-end programming, the skills might be HTML, CSS, JavaScript, Ajax, and Bootstrap. Workers who want to take on the task click an "interested" button. Under the task, visible only to the requester, appears a list of these candidates, ordered from highest to lowest score on the skills the task requires; the top candidates should be the better fits. There are certain limitations, however: 1) the tags will probably be user generated, so it is hard to check whether they are specific enough; 2) it is hard to rank the top candidates when, say, one worker has a 4 on every skill and another has a 5 on one skill but a 3 on another; 3) it is unclear what a requester should do if he or she is not sure which skills the task requires.
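The ranking described above can be sketched in a few lines. This is a hypothetical illustration, not Daemo's implementation: the names (`rank_candidates`, the worker tuples) are ours, and scoring by the mean rating over required skills is one simple answer to limitation 2, since it trades off depth against breadth (a missing skill counts as 0).

```python
def rank_candidates(required_skills, candidates):
    """Sort candidates by mean rating across the task's required skills.

    candidates: list of (name, {skill: rating}) pairs.
    A skill absent from a worker's profile counts as 0.
    """
    def score(ratings):
        return sum(ratings.get(s, 0) for s in required_skills) / len(required_skills)
    return sorted(candidates, key=lambda c: score(c[1]), reverse=True)

task_skills = ["HTML", "CSS", "JavaScript"]
workers = [
    ("alice", {"HTML": 4, "CSS": 4, "JavaScript": 4}),  # 4 on every skill
    ("bob",   {"HTML": 5, "CSS": 3}),                    # uneven, one skill missing
]
ranking = rank_candidates(task_skills, workers)
# alice averages 4.0; bob averages (5+3+0)/3 ≈ 2.67, so alice ranks first
```

Other aggregation rules (minimum rating, weighted skills) would resolve limitation 2 differently; the mean is just one defensible choice.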


Module 2: Endorsement/Rating

Problem/Limitations

The major problem for workers is that they are not able to find jobs that match their skills. Every worker has unique talents and wants to make good use of them. However, current crowdsourcing systems make it difficult for workers to differentiate themselves from other workers who have the same rating but completed different tasks. Requesters, in turn, have difficulty recruiting the right talent, since they can only differentiate workers by task completion rather than by skill; the reliability or reputation of a worker's specific abilities is often unknown.

Module preview

Endorsing and rating people on their skills differentiates them from the crowd and builds their reputation and self-confidence. Websites such as StackOverflow use reputation to motivate people to answer questions, so on a crowdsourcing platform like Daemo, reputation can fundamentally change how the system works. Having requesters rate workers on the skills their task required is a great way to give workers feedback. (Figure: Userratingprototype.png)

System details

Workers can create their own skill sets and start doing related tasks. After completing work, they receive honest feedback and appreciation from requesters; whether their work was accepted or rejected, they can tell which parts the requesters liked or disliked. Workers also learn which requesters gave them credit and can follow those requesters' new task postings, which helps build positive relationships between high-quality workers and requesters. Requesters, on the other hand, can only rate workers on the task's tags. For example, if the task was tagged "HTML" and "CSS", the requester could only endorse or rate those skills. Requesters can also add a new skill to a worker's profile if it is missing: if a worker perfectly completed a task that needed "HTML", "CSS", and "JavaScript" but lists only "HTML" and "CSS", the requester can create and endorse or rate "JavaScript" for that worker. This rating system makes it easy for requesters to give credit, because they only need to agree or disagree that a worker has a specific skill, which motivates requesters to endorse workers and promotes a positive cycle. Requesters also come to understand workers' abilities and can recruit preferred workers based on their own interests and needs.
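The endorsement rule above has two parts: a requester may only rate skills that appear in the task's tags, and may add a tagged skill the worker has not yet listed. A minimal sketch, with hypothetical names (`endorse`, the dict-of-lists profile shape) that are ours rather than Daemo's:

```python
def endorse(worker_skills, task_tags, skill, rating):
    """Record `rating` for `skill` on a worker's profile.

    worker_skills: dict mapping skill name -> list of ratings received.
    Only skills among the task's tags may be endorsed; an endorsed
    skill missing from the profile is created, per the rule above.
    """
    if skill not in task_tags:
        raise ValueError(f"{skill!r} is not a required skill of this task")
    worker_skills.setdefault(skill, []).append(rating)
    return worker_skills

profile = {"HTML": [5], "CSS": [4]}
endorse(profile, {"HTML", "CSS", "JavaScript"}, "JavaScript", 5)
# "JavaScript" is now part of the worker's skill set with rating [5]
```

Keeping the full list of ratings per skill (rather than a single number) would let the system show both an average and how many requesters vouched for the skill.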


Module 3: Timing

Problem/Limitations

When workers decide whether to take on a task, one important piece of information is how long the task takes to complete, or, better, how long it takes to complete well. The first approach that comes to mind is to have requesters submit an estimate of the required time, but their estimates are often biased by their long exposure to the question, and potentially to the correct answers. They may therefore underestimate the time a worker needs to read and understand the task.

Module preview

The first approach, then, is to let requesters report an estimate of how long completion takes, yet what they report is often subjective. In this system, we see two different ways to improve the accuracy of the completion-time estimate, described below.

Module details

Workers have different habits in the tasks they choose: some prefer tasks that require less time, so they can complete more of them, while others prefer more challenging tasks that take longer. We can introduce NLP and ML algorithms to learn about workers and the time required by the tasks they are interested in, and use this both to improve suggestions and to show them a better estimate of their time. One way to calculate the required time is to measure the interval between a worker accepting a job and submitting it, then refine that figure by comparing it with similar tasks and how long they take, and finally by taking the requester's estimate into account.
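One simple way to combine the two signals above, observed accept-to-submit durations and the requester's estimate, is a weighted blend that trusts the observations more as they accumulate. This is a sketch under our own assumptions (the function name, the median, and the shrinkage constant 3 are all illustrative, not part of the system):

```python
from statistics import median

def estimate_minutes(observed_durations, requester_estimate):
    """Estimated completion time in minutes.

    observed_durations: accept-to-submit times (minutes) from prior workers.
    With no observations, fall back entirely on the requester's estimate;
    otherwise shrink the observed median toward it, less so as data grows.
    """
    if not observed_durations:
        return requester_estimate
    n = len(observed_durations)
    w = n / (n + 3)  # weight on observed data; 3 is an arbitrary prior strength
    return w * median(observed_durations) + (1 - w) * requester_estimate

print(estimate_minutes([], 10))                    # no data yet: 10
print(estimate_minutes([18, 22, 20, 19, 25], 10))  # data pulls the estimate up
```

The median resists outliers such as a worker who accepted a task and walked away; a real system would also need to filter out abandoned or rejected submissions before measuring durations.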

Module 4: Finding the best task

Problem/Limitations

Workers on crowdsourcing platforms do not always know which task is best for them to perform. Task A pays a lot but does not quite match their skills; Task B pays less but is a better match. Can they choose between the two quickly?

Module preview

We would also like to consider factors such as the time required to complete each task, each requester's rejection ratio, and the probability that the worker can finish the task given their abilities. As this list grows, it becomes harder for the worker to choose the work that benefits them most.

Module details

This view can be presented in multiple ways. One is to add "best value" to the list of options for sorting tasks. Another is to add a comparison tool to the worker view of Daemo, so workers can compare Task A, Task B, and so on, and see analytics on which tasks they are most likely to get accepted for, which task is fastest to accomplish, or which combination of tasks the tool suggests for making the most money and earning the highest skill ratings in the time available.
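A "best value" sort key could, for instance, be expected hourly earnings: pay discounted by the chance of acceptance, divided by the estimated time. The sketch below is one hypothetical way to combine the factors named above (pay, time, acceptance probability); the function and the numbers are illustrative, not Daemo's.

```python
def expected_hourly_value(pay_usd, est_minutes, p_accept):
    """Expected earnings per hour for one task:
    pay weighted by acceptance probability, per hour of estimated work."""
    return (pay_usd * p_accept) / (est_minutes / 60)

tasks = [
    # (name, pay in USD, estimated minutes, acceptance probability)
    ("Task A", 6.00, 45, 0.60),  # pays more, weaker skill match
    ("Task B", 4.00, 30, 0.95),  # pays less, better match
]
best = max(tasks, key=lambda t: expected_hourly_value(t[1], t[2], t[3]))
# Task A ≈ 4.8 $/hr expected; Task B ≈ 7.6 $/hr, so Task B wins here
```

A fuller version could fold in the worker's skill ratings and each requester's rejection ratio as additional multipliers on `p_accept`, which is exactly the kind of trade-off the comparison tool would surface.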

Milestone Contributors

@parsis @sophiesong @hizai