
Winter Milestone 3: Dark Horse Idea

Removing the Boomerang Rating System Based on a Positive Correlation Between User Ratings and Boomerang Ratings

What I took away from the Boomerang white paper is that user ratings tend to set high expectations: requesters end up disappointed because a worker's task authorship seems high-quality when it is actually mediocre. But what if user ratings did end up being accurate?

Here's what came to mind: what if we ran some sort of analysis in the background to see how user ratings match up with Boomerang ratings? Users could still rate workers, but requesters would be matched more often with high-quality workers based on Boomerang.
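
As a rough sketch of what that background check might look like (nothing below comes from the Boomerang paper; the function, data format, and the 0.7 cutoff are all my own assumptions), you could correlate each worker's user-given rating with their Boomerang rating:

```python
# Hypothetical sketch: check how well user ratings track Boomerang ratings.
from statistics import correlation  # Python 3.10+

def ratings_agree(user_ratings, boomerang_ratings, threshold=0.7):
    """Return True if the two rating lists are strongly positively correlated.

    user_ratings, boomerang_ratings: parallel per-worker score lists.
    threshold: assumed cutoff for "ratings match up"; purely illustrative.
    """
    r = correlation(user_ratings, boomerang_ratings)  # Pearson's r
    return r >= threshold

# Example: user ratings for five workers mostly track Boomerang ratings.
print(ratings_agree([4.5, 3.0, 5.0, 2.0, 4.0],
                    [4.0, 3.5, 4.8, 2.5, 3.9]))  # True (r is about 0.97)
```

If the correlation stays high over time, user ratings are "accurate" in the sense above, which is exactly the situation where Boomerang's extra machinery would be doing little work.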

There would be two groups: the Boomerang group and the Platform group. The Boomerang group is a one-to-one pairing of a single worker with a single requester; it tracks the rating that worker receives from that requester. If the worker receives good ratings, they are more likely to be matched with that requester again, and vice versa (this is how Boomerang already works, hence the name).

The second group would be the Platform group: a many-to-one relationship of multiple requesters to one worker. There would be one rating that is the average of all the ratings the many requesters give the one worker. (I'm not sure what the ratio would be; maybe 100:1 or 50:1, something significant but not crazy large.)
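
To make the two groups concrete, here is a minimal sketch of how one worker's records could be stored (the class and field names, and treating the Platform group as a flat list of scores, are assumptions of mine, not part of the proposal):

```python
from dataclasses import dataclass, field

@dataclass
class WorkerRatings:
    """Hypothetical per-worker record for the two groups described above."""
    # Boomerang group: one-to-one ratings, keyed by requester id.
    pair_ratings: dict = field(default_factory=dict)
    # Platform group: many-to-one pool of ratings from other requesters.
    platform_ratings: list = field(default_factory=list)

    def platform_average(self):
        """Average rating across the many requesters (the Platform group)."""
        return sum(self.platform_ratings) / len(self.platform_ratings)

# Example: 100 requesters rate a worker 4.5 while one pair rating is low.
w = WorkerRatings(pair_ratings={"req_1": 1.0},
                  platform_ratings=[4.5] * 100)
print(w.platform_average())  # 4.5
```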

[Image: Darkhorseidea freddievargus.jpg]


If the one requester gives a negative rating, but the Platform group (say, 100 requesters) all give positive ratings, then the one negative rating would be deemed negligible, and the likelihood of that worker and that requester being matched would not change (in effect, negating the Boomerang reputation system's process).
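
In code, that negligibility check might look something like this sketch (the 3.0 positive/negative cutoff and all the names are hypothetical):

```python
def should_lower_match_likelihood(pair_rating, platform_ratings,
                                  positive_cutoff=3.0):
    """Hypothetical rule: only honor a negative pair rating when the
    Platform group agrees it's deserved.

    pair_rating: the single requester's rating of the worker.
    platform_ratings: ratings from the many other requesters.
    positive_cutoff: assumed threshold separating positive from negative.
    """
    platform_avg = sum(platform_ratings) / len(platform_ratings)
    pair_is_negative = pair_rating < positive_cutoff
    platform_is_positive = platform_avg >= positive_cutoff
    # One negative rating against a positive platform consensus is ignored,
    # so the worker-requester match likelihood stays unchanged.
    return pair_is_negative and not platform_is_positive

print(should_lower_match_likelihood(1.0, [4.5] * 100))  # False: negligible
```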

Now, what if this happens every time, and the requester's likelihood of being matched to the worker never changes? To handle that, there could be a "three-strike" rule: if the pair gets matched three times and the requester gives three negative ratings, then the worker would never be matched to that requester again (because, clearly, even though the worker shares the same "mental model" with 100 other requesters, it just doesn't work out with the one requester who kept giving negative ratings).
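
The three-strike rule could sit on top of the same idea. Here is a sketch, assuming a per-requester strike counter and modeling "never matched again" as a boolean block (the strike limit and names are mine, not from the proposal):

```python
STRIKE_LIMIT = 3  # assumed: three negative ratings end the pairing

def record_pair_rating(strikes, requester_id, rating, positive_cutoff=3.0):
    """Hypothetical three-strike tracker for one worker.

    strikes: dict mapping requester_id -> count of negative ratings so far.
    Returns True if this worker-requester pair should never match again.
    """
    if rating < positive_cutoff:
        strikes[requester_id] = strikes.get(requester_id, 0) + 1
    # After three negative ratings from the same requester, stop matching
    # the pair, even though the platform consensus stays positive.
    return strikes.get(requester_id, 0) >= STRIKE_LIMIT

strikes = {}
for rating in [1.0, 2.0, 1.5]:  # same requester gives three negatives
    blocked = record_pair_rating(strikes, "req_1", rating)
print(blocked)  # True: the pair is never matched again
```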

I think the end goal would be to figure out why people don't share the same mental model, especially when a many-to-one relationship works but the one-to-one relationship does not.


If any of this doesn't make sense, I blame the 30g of sugar that sparked this idea ¯\_(ツ)_/¯

[Image: Thatsmyjamcore.jpg]