e-Learning and HCI
Tuesday, November 8, 2011
Week 8: Oct 31st, 2011
Evaluation: Round 1
Early this week, Build 683 was released and a notification was sent out to the evaluators.
Throughout the week, I went on a Caribbean Quest looking for bugs. Not many unknown bugs were found, to my pleasant surprise.
However, the Cog2 meeting on Friday revealed that there has not been much effort on the Psych department side. Because of this, we decided to delay the close date for bug reports until Tuesday, which allows them the weekend to investigate further.
Apart from this setback, things are moving forward with the planning of the upcoming evaluations (round 2 - heuristic evaluation). We will be arranging a meeting between Angela, myself and the 2 members of the psychology department to review their objectives and target audience. The goal of this meeting is to home in on the 'goal' of the games and to schedule a time when these professionals can devote a few hours to a structured evaluation of the educational content of the game. My hope is to enable communication so that simple/cheap usability evaluations can be performed often throughout the iterative development process.
Upcoming evaluations
After reading some further work from Hasiah Mohamed @ Omar and Azizah Jaafar, I have decided to develop a portal that all expert evaluators can use to conduct their evaluations. Hasiah Mohamed @ Omar and Azizah Jaafar describe an extensive system in which the surrogate and real users enter their evaluations as input and, through an AHP (Analytic Hierarchy Process) algorithm, the administrators can derive issue ratings and statistics and export multiple reports.
In our case, there are not enough resources to develop such a complex and robust system. However, it may be possible to build a similar system by utilizing the tools that are already being used by the team (Google tools). This type of 'portal' would be a cheap way to enable communication between both departments and facilitate iterative usability evaluations.

I will begin analysis of the feasibility and design of such a system over the weekend.
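To make that feasibility analysis more concrete, below is a minimal sketch (not the system described by Mohamed @ Omar and Jaafar) of how AHP-style priority weights could be derived from pairwise comparisons of evaluation criteria, using the common geometric-mean approximation. The criteria names and comparison values are placeholders chosen for illustration.

```python
# Hypothetical sketch: deriving AHP priority weights for evaluation criteria
# from a pairwise comparison matrix, using the geometric-mean approximation.
# Criteria names and comparison values are placeholders, not project data.
from math import prod

criteria = ["interface", "educational content", "playability"]

# pairwise[i][j] = how much more important criteria[i] is than criteria[j]
# on Saaty's 1-9 scale; the matrix is reciprocal (pairwise[j][i] = 1/pairwise[i][j]).
pairwise = [
    [1.0, 1 / 2, 1 / 3],
    [2.0, 1.0, 1 / 2],
    [3.0, 2.0, 1.0],
]

# Geometric mean of each row, normalized, approximates the priority vector.
geo_means = [prod(row) ** (1 / len(row)) for row in pairwise]
total = sum(geo_means)
weights = [g / total for g in geo_means]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```

Even a lightweight spreadsheet version of this calculation would fit the "reuse the Google tools we already have" constraint, which is the direction I plan to explore.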
Monday, October 31, 2011
Week 7: Oct 24th, 2011
Evaluation: Round 1
As mentioned before, round 1 of the evaluation phase consists mainly of bug catching. Due to some last-minute fixes to the game, there will be a slight delay in the release of the published build. The plan is to have the build and bug-report form sent out to the psychology department by Monday the 31st. The close date for the bug reports will be Friday, Nov 4th. Our reasoning is that if evaluators are not given a strict end date, they will often keep putting the task off, and it is important for the Cog2 team to be able to move forward in their iterative development process.
The bugs found during this testing phase will be crucial for executing a more effective usability evaluation later on. We do not want a user to get distracted by a bug in the game and lose track of the purpose of the evaluation.
In developing an evaluation methodology, one issue that I have been struggling with is the following:
Evaluating the educational content of Caribbean Quest is an integral part of evaluating educational computer games as a whole. Initially, the scope of my project was simply to organize and evaluate usability from a game-development and HCI perspective. However, the more background research I go through, the less effective this approach appears to be. To evaluate Caribbean Quest as a regular computer game would mean looking at only half of the potential issues.
Due to the lack of communication between the Cog2 team and the Psychology department, it will be challenging to arrange for experts in the field of education to be involved in the evaluation. However, to conduct a successful evaluation, it will be necessary to secure at least some participation from them.
Wednesday, October 19, 2011
Week 6: Oct 17th, 2011
Usability Evaluation Framework for Caribbean Quest:
As with all software evaluation, it is impossible to simultaneously maximize all 3 desirable attributes of a UEM for educational applications:
- Generalizability
- Precision
- Realism
In their research, Omar and Jaafar identify 3 important challenges to overcome when evaluating learning/teaching software:
- Evaluation Criteria
- The Evaluators
- The Evaluation process

As for selecting the proper evaluators, a 'hybrid' method of evaluation requires a range of end (real) users and surrogate (expert) users. For the scope of this project, as well as the scope of the CSC department, we will focus on gathering HCI experts and game developers as the expert users. This will allow us to evaluate the technical aspects of the game, while the content and education experts will be provided by the Psychology department. This will be discussed in more detail when we approach that phase of evaluation.

Omar and Jaafar argue that the evaluation process for an educational computer game should be formative. An online evaluation questionnaire, derived from our set of heuristics as well as the Playability Heuristic Evaluation for Educational Games (PHEG), will be made available to our evaluators.
Challenges in the Evaluation of Educational Computer Games
By Hasiah Mohamed @ Omar and Azizah Jaafar
Friday, October 14, 2011
Week 5: Oct 10th, 2011
Caribbean Quest (Cog2)
The next step in my research is to put the knowledge base of HCI, UEMs and Education Application Design that I have accumulated to the test!
I will be working with the Cog2 group to conduct usability evaluations, in order to produce a polished copy for the Psychology group.
Because of the development stage the project is currently in, we will be taking an iterative approach to the evaluation process (integrating the process of finding problems with the process of developing solutions!).
Meeting with Cog2 group
The project is currently in the development stages of its last 2 games. The developers are simultaneously conducting a code review and updating the code to reflect their established coding standards. There are 3/4 games currently ready to be evaluated, following thorough bug-testing.
After reviewing the benefits/pitfalls of different evaluation methods, I have decided to use a 'hybrid' approach, combining inspection and user testing methods to evaluate the system. As a group, we identified the different evaluation stages necessary to accomplish this task:
Round 1: Debugging. Dave requires a small group of experts to execute several use cases, to reduce the chance of a UI expert or an end user discovering a bug in upcoming evaluations. The system will be given to a specified group of people in the lab and/or on the 3rd floor of the ECS building.
Round 2: Heuristic evaluation with experts. This evaluation phase will be executed with a group of 3-5 users with a technical background and knowledge in UI and/or game design. The evaluators will be asked to take 2 passes at the system:
Pass 1: The evaluators should get a feel for the flow and general scope of the system.
Pass 2: The evaluators will examine the system and focus on the individual elements outlined in a predefined checklist of usability/educational-game heuristics.
Round 3: User Testing. Using an appropriate mix of 'learners', end-user testing will help uncover issues in the system overlooked by the inspection methods. The size of our user group will depend on the resources available. A sketch of how the findings from these rounds might be tallied follows below.
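As a rough sketch of the tallying step mentioned above (the issue names and severity ratings are invented placeholders, not actual findings), results from the heuristic evaluation and the user testing could be pooled per issue and ranked by average severity:

```python
# Hypothetical sketch: pooling findings from the heuristic evaluation (round 2)
# and user testing (round 3), then ranking issues by their average severity.
from collections import defaultdict
from statistics import mean

# Each finding: (issue id, heuristic violated, severity 0-4, source round)
findings = [
    ("pause-menu-hidden", "Visibility of system status", 3, "round 2"),
    ("pause-menu-hidden", "Visibility of system status", 4, "round 3"),
    ("font-too-small", "Aesthetic and minimalist design", 2, "round 2"),
    ("level-goal-unclear", "Help and documentation", 3, "round 3"),
]

ratings = defaultdict(list)
heuristic_for = {}
for issue, heuristic, severity, _source in findings:
    ratings[issue].append(severity)
    heuristic_for[issue] = heuristic

# Highest mean severity first: the issues to fix before the next iteration.
for issue, sevs in sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{issue} [{heuristic_for[issue]}]: "
          f"mean severity {mean(sevs):.1f} from {len(sevs)} report(s)")
```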
I will be spending the next week reviewing the background information on the project and some of the requirements/design documents shared between group members.
I have posted a tentative evaluation schedule, and it is to be presented to the Cog2 group in upcoming weeks.
Thursday, October 6, 2011
Week 4: Oct 3rd, 2011
Usability Evaluation Methods and Tools
This week, I spent my time investigating methods and tools used for evaluating e-learning and teaching applications. To my surprise, I discovered that the evaluation of e-learning or educational applications is not often performed because of the complexity involved and the multidisciplinary evaluators required. This complexity and need for resources can result in a difficult, time-consuming and costly process. Ssemugabi and de Villiers identify 4 categories of usability evaluation methods (UEM) to be taken into consideration when evaluating educational applications:

Inspection Methods
These methods are often fast, inexpensive and easy to perform. They include heuristic evaluation by experts, cognitive walkthroughs, feature inspection, and guideline or standards checklists. They can result in major improvements, as evident in the study comparison by Ssemugabi and de Villiers. Since they are often cheap and easy, they can be performed during development, so changes can be made without any real cost to the project. In examining the cost vs benefit, Nielsen claims that 3-5 evaluators can be used to identify 65-75% of the usability issues (depending on the expertise of the evaluators). In evaluating educational software, experts can range from UI designers to teachers or professors.
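As an illustration of the cost/benefit trade-off Nielsen describes, the proportion of problems found by n independent evaluators is commonly modelled as 1 - (1 - L)^n, where L is the probability that a single evaluator finds a given problem. The L = 0.31 used below is a typical value quoted by Nielsen, not something measured in this project, and the exact percentages shift with evaluator expertise.

```python
# Sketch of Nielsen's problem-discovery model: the share of usability problems
# found by n independent evaluators is 1 - (1 - L)**n, where L is the chance
# that a single evaluator spots a given problem. L = 0.31 is an assumed value.
def proportion_found(n_evaluators: int, L: float = 0.31) -> float:
    return 1.0 - (1.0 - L) ** n_evaluators

for n in range(1, 8):
    print(f"{n} evaluator(s): ~{proportion_found(n):.0%} of problems found")
```

The curve flattens quickly, which is why a handful of expert evaluators is usually considered enough for an inspection pass.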
User Testing Methods
These methods usually consist of a 'learner' carrying out assigned tasks in an organized lab setting. They can include surveys or the think-aloud protocol, which can be executed in a co-discovery or question-asking format. At its core, this approach simply consists of asking the end users directly for feedback on the system, often gathered through a criterion-based survey, questionnaire, focus group or interview. Generating a questionnaire can be difficult, and it should be piloted to eliminate any possibility of misunderstanding or confusion.
Exploratory Methods
These are executed by observing users in their natural setting; an example of this would be a field observation. Exploratory methods are usually coupled with a formal interview, questionnaire, survey or focus group in order to more thoroughly gauge users' reactions to the system.
Analytical Evaluation Methods
This is the process of generating predictive or descriptive models of activities that a typical user would execute during an interaction with the system.
In their research, Tselios, Avouris and Komis identified the different levels of educational applications and generated a mapping from the different types of teaching/learning software to the usability evaluation methods mentioned above.

Nielsen claims that end users are not as good as experts at identifying usability problems. Supporting this, Tselios, Avouris and Komis found that the 'learners' who had been using the target system for some time identified a slightly lower proportion of problems than the experts. However, in a comparative study of 2 analytical evaluation methods for educational computer games, Bekker, Baauw and Barendregt concluded that a predictive approach (heuristic evaluation) combined with user testing provides the best results. The end users discover problems that would not otherwise be uncovered by experts alone (who are biased by their technical knowledge, etc.). They found that combining an inspection-based method with user-based methods (a hybrid approach) uncovered the highest number of issues.
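A toy example of why the hybrid approach pays off (the issue identifiers are made up for illustration): the problem sets uncovered by experts and by end users typically only partially overlap, so their union is larger than either set alone.

```python
# Toy illustration with invented issue identifiers: expert inspection and user
# testing surface partially overlapping problem sets, so the hybrid approach
# (their union) covers more ground than either method on its own.
expert_findings = {"inconsistent-icons", "small-hit-targets", "jargon-labels"}
user_findings = {"small-hit-targets", "unclear-level-goal", "audio-too-quiet"}

hybrid = expert_findings | user_findings
print(f"experts: {len(expert_findings)}, users: {len(user_findings)}, "
      f"hybrid: {len(hybrid)} distinct issues")
```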
A comparison of two analytical evaluation methods for educational computer games for young children
by Mathilde M. Bekker, Ester Baauw and Wolmet Barendregt
The effective combination of hybrid usability methods in evaluating education applications of ICT: Issues and challenges
by Nikolaos Tselios, Nikolaos Avouris and Vassilis Komis
A comparative study of two usability evaluation methods using a web-based e-learning application
by Samuel Ssemugabi and Ruth de Villiers
Wednesday, September 28, 2011
Week 3: Sept 26th, 2011
Heuristic Evaluation:
This week I performed a heuristic evaluation on an application developed for the iPhone and iPad, in order to discover any usability issues in the interface design.
Some of the key differences between a heuristic evaluation and a standard user test:
- There is no need for the observer to interpret the user's actions, since the user evaluating the interface is a domain or HCI expert.
- The user's willingness/ability to answer questions and explain their actions
- The extent to which the observer can provide hints on how to use the interface, which allows for a better assessment of its usability.
The evaluation was performed without any further tutorial, in hopes of remaining as independent and unbiased as possible.
A written record was kept of all issues discovered and any additional comments. Following each evaluation, I discussed these issues with the developer and we reviewed the potential fixes.
iPhone
Length of evaluation: 20 mins
Some principles that were reviewed:
- Aesthetic and minimalist design
- Visibility of system status
- Efficiency of the user: Keywords first
- Consistency and standards
iPad
Length of evaluation: 1 hour
Some principles that were reviewed:
- Aesthetic and minimalist design
- Consistency and standards
- Recognition rather than recall
- Documentation and Help text
The first iteration of changes to the iPhone version was implemented mid-week. The second iteration of changes is in progress and will be scheduled for review late this week/early next week.