Evaluation Method


When evaluating educational software, learning and usability need to be considered as interacting in order to avoid superficial evaluation (Jones et al., 1999). Given the interaction between learning and usability, usability evaluation methods should be well suited for evaluating edutainment artifacts in the case presented here, since the methods would capture design implications (Karat, 1997) and potentially also the interaction between usability and learning. Therefore, an approach based on evaluation methods from the usability discipline was used for the purpose of identifying empirical design implications for edutainment games. This approach would then potentially address the learning aspects and, most importantly for the focus of this case, yield implications for design.

Picture 3. The character has touched a spinning question mark, and a question box is shown. (The question is multiple-choice and deals with the topic of lasers; a correct answer gives the user a large number of points.)

Picture 4. The character, numbers representing the points scored, the CDs to collect in order to achieve the game objectives, and the antagonist of this game level (the Skateboarder)

Previous findings in the related area of interactive entertainment evaluation (Wiberg, 2001a) reveal that evaluation of entertainment websites based on methods from the usability discipline, and user testing in particular, tends to provide findings focused on basic usability problems concerning navigation, the design of menu buttons, and so on. This implies that more subtle factors

such as immersion, absorption and engagement, all potentially important to

both entertainment and education, are difficult to grasp with the user testing

method (Wiberg, 2001b). Several studies reveal that usability inspection methods, such as Design Walkthrough (e.g., Karat, 1997), Cognitive Walkthrough (e.g., Lewis et al., 1994), and Heuristic Evaluation (e.g., Nielsen, 1993, 1994), in many cases identify problems overlooked by user testing, but also that user testing may identify problems overlooked in an inspection (Nielsen, 1994). In this study, we therefore used a combination of evaluation

methods including both user testing and inspection methods. A combination of

user testing and inspection would provide a broad picture of the important

aspects and issues at hand, and seems to be a fruitful approach when generating

a foundation for deriving design implications. In order to refine the results

provided by the user testing and inspection method and to generate a set of

empirical design implications, the focus group method was used. In practical

terms, a focus group is a collection of people gathered together at one time to

discuss a topic of interest for the researcher. The explicit use of the group

interaction provides the researcher with data and insights that would be less

accessible without the interaction (Sullivan, 1994). By collating the results from the user testing and the inspection method in a focus group session, the intention was to create a set of design implications of importance for edutainment games, which is the major purpose of this chapter.
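The rationale for combining methods can be illustrated with a small sketch. The problem descriptions below are purely hypothetical (they are not findings from this study); the point is Nielsen's (1994) observation that each method catches problems the other misses, so the union of the two result sets gives the broader picture the combination aims for.

```python
# Hypothetical problem sets; only the set logic, not the content, is the point.
user_testing = {"unclear menu buttons", "confusing navigation", "missed question boxes"}
inspection = {"unclear menu buttons", "inconsistent scoring feedback", "no undo"}

overlap = user_testing & inspection          # found by both methods
only_testing = user_testing - inspection     # overlooked by inspection
only_inspection = inspection - user_testing  # overlooked by user testing
combined = user_testing | inspection         # the broad picture the combination provides

print(len(overlap), len(only_testing), len(only_inspection), len(combined))
# → 1 2 2 5
```

In this toy example, each method contributes two problems the other overlooks, so the combined set is nearly twice the size of either alone.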

A total of five subjects were invited to participate in the user testing, of whom four actually participated.

The subjects performed the test one at a time, and each test took about 30

minutes in all. The user tests consisted of three parts:

• 10 minutes of free play with Think Aloud

• 10 minutes of Walkthrough, performed by the test subject in collaboration

with the test leader (collaborative evaluation)

• 10 minutes of post-interaction interview

In the first part of the session, the subjects played the game without any specific

task to solve or instructions to be carried out. They were asked to verbalize

their thoughts throughout the interaction, and they finished the session when

they wished to do so. In the second part, the subjects performed a Walkthrough

of the whole game prototype in collaboration with the test leader. Different

aspects of the game were discussed, and the subjects were asked to give their

opinions about specific features and parts of the design. They were also able

to express any thoughts and comments they wanted to share. The post-interaction interview gave the subjects an opportunity to give comments and

thoughts on general aspects of the game, the interaction and the performed test

procedure. Here, the subjects could develop or refine their opinions and ideas

from the previous parts of the test, and the test leader could follow up on issues

that needed to be clarified.
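The three-part protocol described above can be written down as a simple schedule. The part names and durations come from the text; the data structure itself is just an illustrative sketch.

```python
# Session protocol as described in the text: three parts of 10 minutes each.
protocol = [
    ("free play with Think Aloud", 10),
    ("collaborative Walkthrough with the test leader", 10),
    ("post-interaction interview", 10),
]

total_minutes = sum(minutes for _, minutes in protocol)
print(total_minutes)  # → 30, matching the ~30-minute session length
```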

Participants

Subject | Age   | Gender | Computer literacy (1=Novice, 5=Expert) | Computer gaming literacy (1=Novice, 5=Expert) | Comment
1       | 25-30 | Female | 3                                      | 1                                             | Researcher, HCI
2       | 25-30 | Female | 5                                      | 5                                             | Researcher, HCI
3       | 50-60 | Male   | 3                                      | 1                                             | Engineer
4       | 20-25 | Male   | 4                                      | 4                                             | HCI analyst and lecturer