IceLearngroup Usability Evaluation

We conducted usability testing of task creation on our platform, covering both micro and macro tasks. Here we present the test procedure, details of the participants, and the results of the usability test.

Micro and Macro Tasks

For the micro task, each user was instructed to create a job to crop 10 images to passport size, and we measured the time spent creating this task. For the macro task, each user was instructed to create a job to build a company webpage, and we again measured the time spent creating the task.

Test Subjects

Here are the 5 test subjects who performed both the micro and macro job creation tasks.

  • User 1 - Previous experience with the design of the platform, female, advanced computer skills
  • User 2 - No experience with crowdsourced jobs, university student, advanced computer skills
  • User 3 - Experience with crowdsourced jobs, male, freelancer
  • User 4 - Freelancing experience, and also part of the design of this platform (hi Karolina!)
  • User 5 - Software engineer, graduate, tech geek

Test Results

User 1 Results

Please follow this link for better rendering of the images: http://www.dilrukshigamage.com/projects---ux-stanford-corwdsource-research.html

Screenshots:

  • Create project - choose category page
  • Feedback between project category and project details
  • Project details
  • Payment

The benchmark is that each user should be able to complete each job creation task in less than 6 minutes.

User   Time taken for micro task (min)   Time taken for macro task (min)
1      4                                 _
2      8                                 6
3      6                                 5
4      10                                5
5      6                                 6
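
As a rough aid for reading the table, the short Python sketch below tabulates each recorded time against the 6-minute benchmark. It is only an illustrative summary of the data above: the variable names, the use of None for User 1's unrecorded macro-task time, and the strict reading of "less than 6 minutes" (so a time of exactly 6 minutes counts as missing the benchmark) are assumptions, not part of the study itself.

  # Illustrative only: compare each recorded task-creation time against the
  # 6-minute benchmark from the evaluation above.
  BENCHMARK_MINUTES = 6

  # (user, micro-task minutes, macro-task minutes); None = not recorded
  times = [
      (1, 4, None),
      (2, 8, 6),
      (3, 6, 5),
      (4, 10, 5),
      (5, 6, 6),
  ]

  def status(minutes):
      """Pass only if strictly under the benchmark ('less than 6 minutes')."""
      if minutes is None:
          return "n/a"
      return "pass" if minutes < BENCHMARK_MINUTES else "fail"

  for user, micro, macro in times:
      print(f"User {user}: micro {status(micro)}, macro {status(macro)}")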

Evaluation Methods

Heuristic Evaluations

Performed by 2 evaluators. The heuristic evaluation produced a list of usability problems with severity ratings.

Observations

Tasks were performed by 5 users. Each user had to perform 2 tasks, 1 micro and 1 macro. We observed:

  • the time spent performing each task
  • whether the task was complicated
  • the difficulty felt by the user, as judged by the observer
  • any other concerns the user wanted to express

The observations and short informal interviews produced:

  • usability problems
  • interesting insights
  • functionality problems

Overall Results

Responses were quantified on a 1-to-5 Likert scale (1 = frustrating, 5 = gratifying).

Users were asked to rate several aspects on this scale (from completely difficult to easiest):

  • Easy to learn - 1 2 3 4 5
  • Response time is reasonable - 1 2 3 4 5

Results:

  • The overall rating was below 3.
  • "Response time is reasonable" was rated 4.

Links

Video

Images