6.1.1 Simple Relationships Between Elements
You would carry out analyses of element relationships if you:
. didn’t have access to a computer for more detailed analyses
. didn’t understand heavy statistics but were happy to do some simple counting
. wanted to continue the analysis in a collaborative style with your
interviewee
. were using the grid as part of a counselling or personal development
interview
. were using the grid as a simple decision-making device.
When you carried out your eyeball analysis (Section 5.3.2), you examined the
ratings of elements on constructs as a way of identifying what the interviewee
thinks. It is a natural next step to ask whether the
interviewee thinks of one element in the same way as s/he thinks of another.
How would you tell? Glance for a moment at Table 6.1. It summarises the
ways in which a training officer thinks of other trainers he has known. Do an
instant eyeball analysis to familiarise yourself with the content of this grid.
Now, focusing on the elements, which two look as though they’re construed in
much the same way by the interviewee?
It looks like trainer 1 and trainer 3 (T1 and T3). Now, how did you arrive at
that answer? Think about it: what did you actually do? Another way of putting
it: if you had to tell someone else how to arrive at that answer, what is the
procedure you would ask him or her to follow?
‘What did I do? Well, I could see that the ratings for T1 and T3, reading down
those two columns, were practically identical, which wasn’t the case with the
other elements. There was practically no difference between them.’
Exactly so: you focused on differences, and the procedure involved in simple
element relationship analysis is straightforward. It’s a matter of summing
differences and comparing the outcomes, as follows.
(1) Calculate differences in ratings on the first pair of elements on the first
construct. Take element 1 and element 2 (column 1 and column 2). Find the
absolute difference between the two ratings on the first construct (that is, take
the smaller rating from the larger regardless of which element has the larger,
and which the smaller, rating).
(2) Summing down the page. Do the same on the second, and subsequent,
constructs, systematically down the page, summing the differences as you go.
Jot this total down when you’ve finished.
(3) Repeat for all pairs of elements. Now repeat for columns 1 and 3, 1 and
4 . . . 2 and 3, 2 and 4 . . . etc., noting down the sums of differences as you go.
(4) Compare these sums of differences. The smallest difference, indicating the
two elements which are construed most similarly, and the largest difference,
indicating the two elements which are construed as most dissimilar, are
particularly useful to examine.
Glance again at Table 6.1. Trainers 1 and 3 are construed the most similarly:
both ‘seat-of-pants’ rather than careful preparers (5–5); both ‘energetic’ (1–1);
both halfway between having an ‘intellectual’ rather than ‘pedestrian’ style
(3–3); trainer 1 being ‘shambolic’ in his presentation, a little more so than
trainer 3 (5–4); trainer 3 being slightly more, but not extremely, ‘clear and
obvious’ compared with trainer 1 (2–3); both inclined to ‘tell jokes’ but T1
more than T3 (1–2); and both receiving similar ratings on the ‘overall’ supplied
construct (1–2). These differences (and remember, we’re taking the absolute
value each time, subtracting the smaller from the larger) sum to 4.
Repeat this for all the other pairings, and you’ll see that no other elements are as
close to one another as those two. T3 and Self are the next most alike (a sum of
differences of 7), while T1 and T4 are the least alike, with a sum of differences
of 20. (A short computational sketch of this counting follows Table 6.1.)
Table 6.1 An extract from a grid interview with a young training officer on ‘trainers
I have known’
1                                             T1    T2    T3    T4    Self     5
Prepares thoroughly                            5     2     5     3     2       Seat-of-pants speaker
Energetic, moves about                         1     2     1     5     1       Just stands there stolidly
Intellectual                                   3     1     3     5     2       Pedestrian
Language articulate, precise, and concise      5     1     4     2     3       Language shambolic, appeals to intuition
Makes it seem so obvious and clear             3     1     2     5     3       You have to work to understand his point
Tells jokes                                    1     5     2     4     3       Takes it all very seriously
Overall, enjoyed his courses                   1     3     2     5     2       Overall, didn’t enjoy his courses
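If you find it helpful to see the counting written out as a computation, here is a minimal sketch in Python (purely illustrative: the ratings are transcribed from Table 6.1, and the function name sum_of_differences is my own label, not part of any grid-analysis package) which carries out steps 1 to 4 for every pair of elements.

```python
from itertools import combinations

# Ratings transcribed from Table 6.1: one row per construct, one column per element.
elements = ["T1", "T2", "T3", "T4", "Self"]
ratings = [
    [5, 2, 5, 3, 2],  # Prepares thoroughly -- Seat-of-pants speaker
    [1, 2, 1, 5, 1],  # Energetic, moves about -- Just stands there stolidly
    [3, 1, 3, 5, 2],  # Intellectual -- Pedestrian
    [5, 1, 4, 2, 3],  # Language articulate -- Language shambolic
    [3, 1, 2, 5, 3],  # Makes it seem obvious -- You have to work to understand
    [1, 5, 2, 4, 3],  # Tells jokes -- Takes it all very seriously
    [1, 3, 2, 5, 2],  # Overall, enjoyed his courses -- didn't enjoy his courses
]

def sum_of_differences(grid, i, j):
    """Steps 1 and 2: sum the absolute rating differences between
    elements i and j, working down all the constructs."""
    return sum(abs(row[i] - row[j]) for row in grid)

# Steps 3 and 4: repeat for every pair of elements, then compare the totals.
totals = {
    (elements[i], elements[j]): sum_of_differences(ratings, i, j)
    for i, j in combinations(range(len(elements)), 2)
}
for pair, total in sorted(totals.items(), key=lambda item: item[1]):
    print(pair, total)
# Smallest: ('T1', 'T3') 4  -- construed most similarly
# Largest:  ('T1', 'T4') 20 -- construed most differently
```

Sorting the ten totals puts T1 and T3 (a sum of differences of 4) at the top and T1 and T4 (a sum of 20) at the bottom, exactly as in the worked example above.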
Go on: check it for yourself by doing Exercise 6.1.
The next step is highly recommended if you’re working collaboratively with
the interviewee.
(5) Discuss these relationships with the interviewee. The grid used in this
example was a very simple one. The interviewee would easily be able to see
what s/he said as you fed it back, pointing to the two columns of the grid. If
the relationship isn’t obvious in a larger and more complicated grid, simply
repeating the rationale about finding the smallest sum of differences to your
interviewee, and running through an example as under step 4 above, should
be sufficient.
At that point, your conversation with the interviewee will depend on your
purpose in eliciting the grid, but will also depend on the extent to which the
interviewee is interested, intrigued, and possibly surprised by the information
about the relationships among the elements. It shouldn’t come as an enormous
surprise, by the way: at the most, an ‘ooh yes’ response, ‘I hadn’t noticed that
before, but now that you point it out I can see that’, or words to that effect.
Much of the time you’ll be confirming what’s known already.
The interviewee should have a sense of ownership of what you’re pointing
out. And if s/he doesn’t – if s/he doesn’t recognise, or disowns, the
relationship – then you may wish to explore the apparent disparity between
your analysis and the interviewee’s own view, in greater detail. (The chances
are that you haven’t yet elicited some fairly important constructs, on which the
interviewee would rate the two elements very differently.) Next,
(6) Examine relationships with supplied elements, if any. These will most
commonly be
. relationships between any ‘self’ element and the other elements. This helps to
answer the question, ‘Who do you see as most similar to yourself?’
. relationships between any ‘self’ element and any ‘ideal self’ element. How close is
the interviewee to his or her ideal? This approach is often used when
measuring change, or assisting the interviewee in clarifying his or her
thoughts about some possible change. The applicability to counselling is
obvious.
These are two much-researched fields, and if your work in construct elicitation has an
advisory, guidance, or counselling element, you may want to familiarise yourself with
some of this research. Probably the best place to start is Winter (1992). If you look at
page 42, for example, you’ll see nine different measures of self-construing listed,
some of which require you to do more complex structural analyses (see Section 6.2
below). Others, however, like the Self–Other Score, the Death Threat Score, and the
extent of polarised self-construing, can be derived by simple counting of the kind
described earlier (see Winter, 1992: 42–43).
. relationships between any ‘ideal’ element and the other elements. This helps to
answer the question, ‘which element comes closest to your Ideal?’, and is the
basis on which grids are used in the choice situations which arise in many
knowledge-management applications. The rationale here would be that, if
the interviewee is helped to compare all the courses of action which s/he
feels it is possible to undertake, with the way in which s/he views the ‘Ideal
course of action’, the one which matches best with the Ideal should be the
one to put into effect.
This isn’t an exercise: real choices can be made in this way, particularly if some of the
constructs summarise the results of empirical work done using other, more ‘objective’,
techniques! However, the value of the grid as a decision-making device often lies in
the stimulus it gives to a discussion about:
. the things the decision-maker is taking for granted
. his or her strategy in identifying alternatives
. choosing among them
. putting the chosen one into action
. revising his/her views in the light of the outcomes
rather than in some total figure that you have calculated.
The simple rationale you mention above provides a useful procedure in situations in
which the attributes expressed in the constructs carry equal weight for the interviewee.
There will be times when more complex procedures are required, though.
(See Humphreys & McFadden, 1980.) Also, there is a debate (see the short overview
in Jankowicz, 1990) about the extent to which the outcomes of a grid interview,
however rich in complexity, can be used to make decisions in an automated way.
There again, some quite simple grid techniques based on the above procedure have
been used in developing quite complex expert systems (Boose, 1985) by focusing on
the ways in which experts make inferences.
Okay. Thank you for that. This would be a good point at which to practise
doing a simple element-relationships analysis in a choice situation.
Enjoy Exercise 6.2 before continuing.
And, finally, one tiny last step:
(7) Ensure comparability with other grids. There may be occasions on which
you want to compare the element-relationship scores across different grids. As
your measure of the relationship depends on the sum of differences over all
the constructs, comparison is only possible if each grid has the same number
of constructs. You need to use a different form of relationship score where the
grids being compared have different numbers of constructs! (Skip the next two
paragraphs and Table 6.2 if this is obvious.)
For example, you may have interviewed five people in your firm’s new
product development department about how they feel about their team leader,
asking them to construe the bosses for whom they’ve worked in their career so
far, and have discovered, for each one, which of their bosses came closest to a
personal ideal of what the manager of a technical development department
should be.
Table 6.2 shows the comparison for just one element with the Ideal, in two
different cases. Assume this is the most similar element to the Ideal in both
cases. You can see that the element-relationship score is sensitive to the sheer
number of constructs in the grid, rather than being a simple measure of the
differences in the grid. (Of course: it’s a sum of differences, and if you sum
over more items – six constructs in the second grid, but only four constructs in
the first – you’re bound to get a different value anyway.)
Table 6.2 Sums of differences vary depending on the number of constructs
Interviewee 1                               Boss X   Ideal
He always gives clear job instructions        1        1      Sometimes unsure of what he wants
Has a sense of humour                         1        1      Lacks the light touch
Approachable when I need help                 2        1      Doesn’t like it when I ask for help
Good at building a team                       1        3      Treats us all as individuals
Sum of differences, Boss X against Ideal:          3

Interviewee 2                               Boss Y   Ideal
He always gives clear job instructions        1        1      Sometimes unsure of what he wants
Has a sense of humour                         1        1      Lacks the light touch
Approachable when I need help                 2        1      Doesn’t like it when I ask for help
Good at building a team                       1        3      Treats us all as individuals
Can’t delegate                                2        5      Good at delegation
Handles stress well                           3        1      Loses his cool
Sum of differences, Boss Y against Ideal:          8
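If you want to see that sensitivity written out, here is a small sketch (Python again, purely illustrative) which reproduces the two sums of differences in Table 6.2 and, for each, the largest sum that grid could possibly produce on a 5-point scale: 3 out of a possible 16 for interviewee 1, but 8 out of a possible 24 for interviewee 2, so the raw figures aren’t on a common footing.

```python
def sum_of_differences(grid, i, j):
    """Sum of absolute rating differences between elements i and j."""
    return sum(abs(row[i] - row[j]) for row in grid)

# Each row is (Boss rating, Ideal rating), transcribed from Table 6.2.
interviewee_1 = [(1, 1), (1, 1), (2, 1), (1, 3)]                  # 4 constructs
interviewee_2 = [(1, 1), (1, 1), (2, 1), (1, 3), (2, 5), (3, 1)]  # 6 constructs

for label, grid in [("Boss X vs Ideal:", interviewee_1),
                    ("Boss Y vs Ideal:", interviewee_2)]:
    sd = sum_of_differences(grid, 0, 1)
    largest_possible = (5 - 1) * len(grid)  # largest sum on a 5-point scale
    print(label, sd, "out of a possible", largest_possible)
# Boss X vs Ideal: 3 out of a possible 16
# Boss Y vs Ideal: 8 out of a possible 24
```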
I suppose you could take an average difference score, dividing each sum of
differences by the number of constructs in each case. As it happens, the usual
practice is to turn each of the sums of differences into a percentage. (In doing
so, the opportunity is also taken to turn it into a % similarity score on the
rationale that it’s easier to think of the extent to which things are the same than
the extent to which they’re different.)
The procedure is the same as for any percentage calculation. You express the
value you’re interested in as a proportion of the largest possible value. A half
is a half is a half regardless of the size of the total value. You then multiply the
proportion by 100 to stretch the result onto a neat little 100-point scale. I’ll
repeat that, with an example. Five as a percentage of 10 is 5 divided by 10 (in
other words, a proportion of five-tenths, or a half); the answer multiplied by
100 gives 50. Please bear with the triviality; but, in analysis, every step needs to
be understood; otherwise, why bother?
The value you’re working with is the sum of differences you calculated earlier.
And what’s that as a percentage? Let’s take it step by step.
. The largest difference on any single construct is the largest rating that’s
possible on the scale (5 on a 5-point scale; 7 on a 7-point scale) minus 1. Call
this (LR − 1).
. This will happen as many times as you have constructs in the grid (the sum
of differences accumulates as you add them up, down the grid). Call the
number of constructs C. So the largest possible sum of differences in the
whole grid is given by (LR − 1) times C.
. Now take the value you’re interested in, that is, the particular sum of
differences you want to turn into a percentage. Call this SD.
. Divide SD by the largest sum of differences to get the proportion; then
multiply the outcome by 100 to get the percentage. And that’s it. In other
words:
          SD
    ──────────────── × 100
     (LR − 1) × C

Or, if you prefer it all on one line, {SD / [(LR − 1) × C]} × 100.
. Finally, turn this percentage sum of differences into a percentage similarity
score by subtracting it from 100.
              SD
    100 − ──────────────── × 100
           (LR − 1) × C

On one line, that’s 100 − ({SD / [(LR − 1) × C]} × 100).
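As a check on the arithmetic, here is the same formula expressed as a small Python function (a sketch only; the name percent_similarity and its arguments are my own choices, not standard grid terminology). Applied to the two sums of differences in Table 6.2, Boss X scores 100 − (3/16 × 100) = 81.25% similarity to the Ideal and Boss Y scores 100 − (8/24 × 100) ≈ 66.7%, figures which can now be compared directly despite the differing numbers of constructs.

```python
def percent_similarity(sd, largest_rating, n_constructs):
    """100 - ({SD / [(LR - 1) x C]} x 100): turn a sum of differences into
    a % similarity score that allows for the number of constructs."""
    largest_possible_sd = (largest_rating - 1) * n_constructs
    return 100 - (sd / largest_possible_sd) * 100

# The two sums of differences from Table 6.2, both on a 5-point scale:
print(percent_similarity(3, largest_rating=5, n_constructs=4))  # 81.25
print(percent_similarity(8, largest_rating=5, n_constructs=6))  # 66.66...
```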
Now you can compare similarities across grids made up of differing numbers
of constructs. This can be useful when your interviewees have each given you
different numbers of constructs; or when you’re interviewing the same person
twice to see how his/her construing might have changed. (Chapter 9 provides
you with alternative procedures for examining change, by the way. Changed
construing may mean different ratings than before; but it can also mean that
the person has added more constructs, or even dropped some, the second time
round!)
And now that you know how % similarity is computed, let me save you the
tedium of doing so! For a 5-point scale, at any rate. Take a look at Appendix 3.
If you use a scale which isn’t a 5-point scale, or if you’re working with a
different number of constructs, you’ll need to do the calculation yourself.
Table 6.3 shows the grid on ‘trainers I have known’ which you worked with
when you did Exercise 6.1. The sums of differences which you calculated (and
checked for correctness against Appendix 1.6) have been replaced by %
similarity scores. As you can see by comparing this table with your answers in
Table 6.16 (or preferably with the correct answers given in Appendix 1.6!), the
element % similarity scores are as you’d expect: trainer 1 and trainer 3 are seen
as the most alike (a similarity of 85.7%), while trainer 1 and trainer 4 are seen
as the least alike (a similarity of 28.6%).

Table 6.3 An extract from a grid interview with a young training officer on ‘trainers I
have known’, together with element % similarity scores

1                                             T1    T2    T3    T4    Self     5
Prepares thoroughly                            5     2     5     3     2       Seat-of-pants speaker
Energetic, moves about                         1     2     1     5     1       Just stands there stolidly
Intellectual                                   3     1     3     5     2       Pedestrian
Language articulate, precise, and concise      5     1     4     2     3       Language shambolic, appeals to intuition
Makes it seem so obvious and clear             3     1     2     5     3       You have to work to understand his point
Tells jokes                                    1     5     2     4     3       Takes it all very seriously
Overall, enjoyed his courses                   1     3     2     5     2       Overall, didn’t enjoy his courses

Simple Element Analysis:                      T1      T2      T3      T4      Self
% similarity scores
T1 against                                     –     35.7    85.7    28.6    67.9
T2 against                                             –     50.0    42.9    67.9
T3 against                                                     –     35.7    75.0
T4 against                                                             –     46.4
Self against                                                                   –
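For completeness, the two illustrative functions from the earlier sketches will reproduce the whole of the % similarity matrix in Table 6.3 (again, a sketch under the assumption of a 5-point scale, not a prescribed implementation).

```python
from itertools import combinations

elements = ["T1", "T2", "T3", "T4", "Self"]
ratings = [  # Table 6.3 (the same ratings as Table 6.1)
    [5, 2, 5, 3, 2], [1, 2, 1, 5, 1], [3, 1, 3, 5, 2], [5, 1, 4, 2, 3],
    [3, 1, 2, 5, 3], [1, 5, 2, 4, 3], [1, 3, 2, 5, 2],
]

def sum_of_differences(grid, i, j):
    return sum(abs(row[i] - row[j]) for row in grid)

def percent_similarity(sd, largest_rating, n_constructs):
    return 100 - (sd / ((largest_rating - 1) * n_constructs)) * 100

for i, j in combinations(range(len(elements)), 2):
    sd = sum_of_differences(ratings, i, j)
    similarity = percent_similarity(sd, largest_rating=5, n_constructs=len(ratings))
    print(f"{elements[i]} vs {elements[j]}: {similarity:.1f}%")
# T1 vs T3 comes out at 85.7% and T1 vs T4 at 28.6%, matching Table 6.3.
```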
A little practice in working out % similarity scores: do Exercise 6.3 right now!