Thursday, October 6, 2011

Week 4: Oct 3rd, 2011

Usability Evaluation Methods and Tools

This week, I spent my time investigating methods and tools used for evaluating e-learning and teaching applications. To my surprise, I discovered that the evaluation of e-learning or educational applications is rarely performed because of its complexity and the multidisciplinary team of evaluators it requires. This complexity and need for resources can make evaluation a difficult, time-consuming and costly process.
Ssemugabi and de Villiers identify four categories of usability evaluation methods (UEMs) to be taken into consideration when evaluating educational applications:



Inspection Methods
These methods are often fast, inexpensive and easy to perform. They include heuristic evaluation by experts, cognitive walkthroughs, feature inspection, and guideline or standards checklists. They can result in major improvements, as is evident in the comparative study by Ssemugabi and de Villiers. Since they are often cheap and easy, they can be performed during development, so changes can be made without any real cost to the project. In examining cost vs. benefit, Nielsen claims that 3-5 evaluators can identify 65-75% of the usability issues (depending on the expertise of the evaluators). In evaluating educational software, experts can range from UI designers to teachers or professors.
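Nielsen's cost-benefit claim comes from his problem-discovery model, which estimates the share of usability problems found by i evaluators as 1 - (1 - L)^i, where L is the average proportion of problems a single evaluator finds (Nielsen reports roughly 0.31 across his studies). A minimal sketch of that model, with the 0.31 default as an assumption taken from Nielsen's published average:

```python
def proportion_found(evaluators: int, per_evaluator_rate: float = 0.31) -> float:
    """Expected share of usability problems found by a group of evaluators,
    per Nielsen's model: 1 - (1 - L)^i. The 0.31 default is Nielsen's
    reported average discovery rate for a single evaluator."""
    return 1.0 - (1.0 - per_evaluator_rate) ** evaluators

# With the default rate, three evaluators find roughly two thirds of the
# problems, consistent with the 65-75% figure for 3-5 evaluators.
for i in (1, 3, 5):
    print(i, round(proportion_found(i), 2))
```

The model also makes the diminishing returns visible: each additional evaluator mostly rediscovers problems already found, which is why Nielsen recommends several small evaluations over one large one.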

User Testing Methods
These methods usually consist of a 'learner' carrying out assigned tasks in an organized lab setting. They can include surveys or the think-aloud protocol, which can be executed in a co-discovery or question-asking style. At its simplest, this approach asks the end users directly for feedback on the system, often gathered through a criterion-based survey, questionnaire, focus group or interview. Generating a questionnaire can be difficult, and it should be piloted to eliminate any possibility of misunderstanding or confusion.

Exploratory Methods
These methods are executed by observing users in their natural setting; field observation is one example. Exploratory methods are usually coupled with a formal interview, questionnaire, survey or focus group in order to more thoroughly gauge users' reactions to the system.

Analytical Evaluation Methods
This is the process of generating predictive or descriptive models of activities that a typical user would execute during an interaction with the system.

In their research, Tselios, Avouris and Komis identified the different levels of educational applications and generated a mapping from the different types of teaching/learning software to the usability evaluation methods mentioned above (see below).



Nielsen claims that end users are not as good as experts at identifying usability problems.
Supporting this, Tselios, Avouris and Komis found that 'learners' who had been using the target system for some time identified a slightly lower proportion of problems than the experts did. However, in a comparative study of two analytical evaluation methods for educational computer games, Bekker, Baauw and Barendregt concluded that a predictive approach (heuristic evaluation) combined with user testing provides the best results: end users discover problems that experts alone would not uncover (due to bias from technical knowledge, etc.). They found that combining an inspection-based method with user-based methods (a hybrid approach) uncovered the highest number of issues.

A comparison of two analytical evaluation methods for educational computer games for young children
by Mathilde M. Bekker, Ester Baauw and Wolmet Barendregt

The effective combination of hybrid usability methods in evaluating education applications of ICT: Issues and challenges
by Nikolaos Tselios, Nikolaos Avouris and Vassilis Komis

A comparative study of two usability evaluation methods using a web-based e-learning application
by Samuel Ssemugabi and Ruth de Villiers
