# 7.3.2 Procedure


The procedure is similar to the bootstrapping one outlined in Section 7.2.1, and looks as follows. You may find it helpful to glance at Figure 7.1 as you go through the steps below: it shows a grid completed by one of our sales staff in the publisher's example of Tables 7.2 to 7.6.

(1) Obtain ratings on a supplied ‘overall’ construct. Make sure, when you

elicit each grid, that you supply a construct which serves to sum up the

interviewee’s overall stance on the topic. Ask the interviewee to rate all the

elements on this supplied construct, as well as on the elicited constructs.

(2) Compute sums of differences for each construct against the ‘overall’

construct. Use the procedure defined in Section 6.1.2, ‘simple relationships

between constructs’. Note: you’re doing this for every construct against the

‘overall’ construct only. For the purposes of our present analysis, you don’t

have to work it out between each construct and each other construct. Bearing

in mind that you have to check for reversals (that you might get a smaller sum

of differences between a given construct and the ‘overall’ construct if one of

the two is reversed), the quickest way of completing this step is to compute the

sum of differences

(a) between ‘overall’ construct and the first construct

(b) between ‘overall’ construct (reversed) and the first construct

(unreversed, of course)

(c) note the smaller of the two sums of differences.

Repeat for the ‘overall’ construct and each of the other constructs.

Doing it this way makes sure that you only have to reverse one set of ratings,

those for the ‘overall’ construct; and, in fact, this part of the procedure is much

quicker to do than to describe! (Glance at the very bottom row of ratings in

Figure 7.1).
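As an illustrative sketch of step 2 (our own code, not from the source), assuming ratings on a 1-to-5 scale so that a reversed rating is simply 6 minus the original:

```python
def sum_of_differences(a, b):
    """Sum of absolute differences between two sets of element ratings."""
    return sum(abs(x - y) for x, y in zip(a, b))

def reversed_ratings(ratings, max_rating=5, min_rating=1):
    """Reverse a construct's ratings (1 <-> 5, 2 <-> 4 on a 1-to-5 scale)."""
    return [max_rating + min_rating - r for r in ratings]

def best_sum_of_differences(overall, construct):
    """Step 2: the smaller of the unreversed and reversed sums of differences.
    Only the 'overall' construct's ratings are ever reversed."""
    unreversed = sum_of_differences(overall, construct)
    reversed_sd = sum_of_differences(reversed_ratings(overall), construct)
    return min(unreversed, reversed_sd)

# Hypothetical ratings for illustration only:
overall = [1, 2, 5, 4, 3, 1, 2]    # ratings on the supplied 'overall' construct
construct = [5, 4, 1, 2, 3, 5, 4]  # ratings on one elicited construct
print(best_sum_of_differences(overall, construct))  # → 0 (perfect reversed match)
```

Note that the reversal check matters: here the unreversed sum of differences is 18, yet the construct aligns exactly with the reversed 'overall' ratings.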

(3) Ensure comparability with other grids. In other words, turn these sums of

differences into % similarity scores.
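The exact conversion formula belongs to Section 6.1.2 and is not reproduced here; the sketch below assumes one common convention, scaling the sum of differences by the largest sum possible and subtracting from 100%:

```python
def percent_similarity(sd, n_elements, max_rating=5, min_rating=1):
    """Step 3: turn a sum of differences into a % similarity score.

    Assumes the largest possible sum of differences is
    (max_rating - min_rating) * n_elements; check your own formula
    against Section 6.1.2 before relying on this one.
    """
    max_sd = (max_rating - min_rating) * n_elements
    return 100.0 * (1.0 - sd / max_sd)

print(percent_similarity(7, 7))  # sum of differences of 7 over 7 elements → 75.0
```

Because every grid's score now lies on the same 0-100% scale, constructs from grids with different numbers of elements become comparable.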

(4) Take the individual’s personal metric into account. Look at these %

similarity scores. Within each grid, divide the constructs as best you can into

the highest third, intermediate third, and lowest third. ('As best you can': in Figure 7.1, 67%, 33% reversed, 67% reversed, 75%, and 65% don't divide into three equal groups!)

Following step 2 of the procedure, the sums of differences are shown below each construct, on the left. The reversed sums of differences are shown in bold on the right; you only need to reverse one set of ratings, for the 'overall' construct, in order to calculate them. The reversed ratings are shown in bold at the bottom. For each construct, the lower of the two values (unreversed, reversed) has been chosen and circled. Following step 3, these sums of differences have been turned into % similarity scores. Following step 4, the constructs have been divided into three sets, high, intermediate, and low, as evenly as possible. Following step 5, the constructs have been labelled (2.1, 2.2, 2.3, etc.) to ensure subsequent identification. The grid sheet can now be cut up into strips, ready for categorisation.

Figure 7.1 Using Honey's technique
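Splitting a grid's constructs into thirds 'as best you can' could be sketched as follows (our own illustration; how you resolve ties and non-divisible counts is a judgment call, and here we simply rank the scores and split the ranking as evenly as possible, giving any extra constructs to the higher bands):

```python
def hil_indices(scores):
    """Step 4: assign H, I, or L to each construct by ranking its
    % similarity score within the grid and splitting into thirds."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n = len(scores)
    third = -(-n // 3)  # ceiling division: top band absorbs any remainder
    labels = [None] * n
    for rank, i in enumerate(order):
        if rank < third:
            labels[i] = 'H'
        elif rank < 2 * third:
            labels[i] = 'I'
        else:
            labels[i] = 'L'
    return labels

# The five scores mentioned in the text above:
scores = [67.0, 33.0, 67.0, 75.0, 65.0]
print(hil_indices(scores))  # → ['H', 'L', 'I', 'H', 'I']
```

The indices are relative to each interviewee's own grid, which is the point of step 4: they express the individual's personal metric, not an absolute standard.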

(5) Label each construct with both indices. At this stage, as with the

bootstrapping technique, it’s convenient if you transfer each construct onto a

separate file card, and note which interviewee it came from. Or simply cut the

grid sheet into strips, each construct on a separate strip; but be sure to indicate

which interviewee’s construct it is! Give each construct its unique number, as

suggested in Section 7.2.1 (construct 16.4 would be the fourth construct from

interviewee number 16, for example). Now check that the % similarity scores

have been written in below each construct. Next, mark it H, I, or L depending

on its % similarity score value in comparison with the other constructs which

that particular interviewee used. Now, back to the content-analysis procedure.

(6) Identify the categories.

(7) Allocate the constructs to the categories, following the core-categorisation

procedure (see Section 7.2.1).

(8) Tabulate the result.

(9) Establish the reliability of the category system, following exactly the same

procedures you used in steps 4.1 to 4.7 in Section 7.2.1. You get your

colleague’s help in going through steps 6, 7, and 8, working with the

constructs after they’ve been labelled with their % similarity scores and their

H-I-L indices, but before doing anything else. There’s no point in summarising

the table and doing differential analyses if the category system isn’t reliable.

(10) Summarise the table: first, the meaning of the category headings; that is,

define the category headings.

(11) Summarise the table: find examples of each category heading. Here’s

where the power of Honey’s approach reveals itself.

(11.1) Within each category, order the constructs from top to bottom with

respect to their % similarity scores. Those at the top have identical or near-identical

scores (that is, they represent ‘what the interviewee particularly

had in mind in thinking about the topic’, even though, within their

category, of course, they may be different in meaning, covering some

different aspect of that category). Those at the bottom, those with the low %

similarity scores, will be less salient (bulking less, as it were, in that

particular interviewee’s thinking so far as the topic is concerned).

(11.2) Looking at all the constructs within a category, identify personally

salient constructs on which there is consensus in the group.

. If the H-I-L indices are high, the idea behind that particular construct is

important for the people in your sample as individuals. And if many

individuals in your sample have that construct, then certainly, hang on

to it, since it’s saying something about the thinking of your sample as a

whole as well as each individual member!

- If the H-I-L indices are mixed, the idea behind that particular construct reveals no particular consensus. In the sample as a whole, there's a certain ambivalence about the construct's relevance or importance to the topic. In particular, if you notice that two or more constructs which express essentially the same meaning, obtained from the same person, have mixed H-I-L indices, set the constructs aside. There's no point in preserving ambiguity or ambivalence here.

- If the H-I-L indices are low, it looks as though the sample as a whole agrees that the construct doesn't relate particularly well to the topic in general. Note this.

Table 7.9 shows the result of this step for our running example.
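The three rules in step 11.2 amount to a simple classification of the H-I-L indices attached to a set of similar constructs. A hypothetical sketch (the function and its return labels are ours, not the author's):

```python
def consensus(hil_labels):
    """Step 11.2: classify the H-I-L indices of a set of similar constructs."""
    distinct = set(hil_labels)
    if distinct == {'H'}:
        return 'high-consensus'  # personally salient, shared across the sample
    if distinct == {'L'}:
        return 'low-relevance'   # sample agrees it relates poorly to the topic
    return 'mixed'               # no particular consensus; consider setting aside

print(consensus(['H', 'H', 'H']))  # → high-consensus
print(consensus(['H', 'I', 'L']))  # → mixed
```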

(11.3) If there are subthemes within a category, group them according to

the meaning being expressed. The final result will reflect your overall

purposes in doing a content analysis; but, typically, you would aim to end

up with 40% to 80% of the original number of constructs which show the

consensus you’re looking for. (Of course, if there was little consensus,

you’d say so!) The result will be a table in which the columns are

categories, each one divided into two subcolumns, in the first of which the

chosen constructs appear, and, in the other, the % similarity scores and the

H-I-L index.

(12) Summarise the table: state the frequency under the category headings.

How many constructs are there in each category (and subcategory, if relevant)

at this stage? Report the number, and if this varies markedly from the original

number of constructs in each category at the start of step 10, assess and discuss

the significance. On what kinds of issues is there a consensus, and on which

ones don’t people agree?

This is a good point at which to calculate any sums, averages, and so on. You

might, for example, use the % similarity scores to provide a ‘mean importance

score’ of each category for the sample as a whole, if this makes sense in your

analysis.
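If a 'mean importance score' does make sense in your analysis, it is simply the average of the % similarity scores within each category. A minimal sketch, assuming constructs are held as (category, score) pairs (our own representation, chosen for illustration):

```python
from collections import defaultdict

def mean_importance(constructs):
    """Step 12 (optional): mean % similarity score per category,
    given an iterable of (category, score) pairs."""
    by_category = defaultdict(list)
    for category, score in constructs:
        by_category[category].append(score)
    return {cat: sum(s) / len(s) for cat, s in by_category.items()}

# Hypothetical constructs for illustration only:
constructs = [('pricing decisions', 75.0), ('pricing decisions', 65.0),
              ('customer relations', 67.0)]
print(mean_importance(constructs))
```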

(13) Complete any differential analysis which your investigation requires, as

before, when you bootstrapped. If you tag each construct according to the

different subsamples of interviewee, or cross-tabulate them in different rows

of your table, you may see differences among the subsamples in terms of:

- the number of constructs in different categories

- the relative importance (the H-I-L indices and/or the mean % similarity

scores).

So, for instance, suppose in our running example, you knew that interviewees

nos 5, 6, and 7 had only had sales experience, whereas interviewees nos 1, 2, 3,

and 4 had worked in the office before joining the sales force. Would the fact

that there are more constructs from interviewees 5, 6, and 7 in the ‘pricing

decisions’ category be meaningful?

But be careful! This example has only 50 constructs, since it is invented for illustrative purposes; a real data set would have at least 200 items. In point

of fact, in this example, it would be dangerous to come to any differential

conclusions because the subsamples of sales staff with office experience, and

without, are rather small. For instance, four of the 10 constructs under the ‘pricing

decisions’ category which we are using to make a point in our differential analysis

come from just one interviewee, no. 7. We have to avoid a conclusion that the bee in

one interviewee’s bonnet does indeed typify the views of the other interviewees in

the subgroup!

As with any form of sampling, idiosyncrasies that do not reflect population characteristics

are more probable in a small sample than a large one. If you want to conduct a

differential analysis, make sure that you have sufficient constructs to represent each

subgroup.

Okay. And, while on this statistical theme, note the following step.

(14) Complete any statistical tests on this differential analysis, as before.

Before moving on, make an attempt at Exercise 7.3, which gets you to practise steps 1 to 8 of the Honey procedure.