7.2.2 A Design Example


I’d really like to set you an exercise with a realistically sized sample, with

answers presented in Appendix 1, as I’ve done for the other procedures

outlined in the previous chapters! However, it just isn’t possible to provide

you with the data from 20 grids, and all the associated paraphernalia. (Exercise

7.1 will have to do. At least it focuses attention on what’s involved in reliability

checking.) Hence the level of pedantic detail I’ve gone into in Section 7.2.1, to

try to ensure that the procedure is readily understandable. The best way to

learn a procedure is to do it, and when you do, you’ll find that everything falls

into place.

Instead, let me provide you with a case example which addressed the same

problem of how best to analyse a large number of grids, using a somewhat

different approach and with slightly different design decisions being adopted.

An examination of the slightly different answers they adopted to the questions

we’re addressing should help to establish the principles of what we’re doing.

It’s worth examining in detail, as a further example of the sampling and design

options involved in content analysis.

Watson et al. (1995) were interested in the tacit, as well as the more obvious,

knowledge held by managers about entrepreneurial success and failure. This

suggested the use of a personal construct approach, since constructs, being

bipolar, are capable of saying something about both success and failure. (In

fact, this issue was so important to the researchers that they took a design

decision to work with two distinct sets of constructs, those dealing with

successful entrepreneurship, and those dealing with unsuccessful entrepreneurship,

doing a separate content analysis of each.)

Theirs was a large study. They identified 27 different categories for 570

constructs relating to success, and 20 categories for 346 constructs relating to

failure, in a sample of 63 small-business owner-managers.

The top five categories assigned to successful entrepreneurship were

‘commitment to business’, ‘leadership qualities’, ‘self-motivation’, ‘customer

service’, and ‘business planning and organising’, between them accounting for

42% of the success-related constructs. The top five categories assigned to

unsuccessful entrepreneurship were ‘ineffective planning and organisation’,

‘lack of commitment’, ‘poor ethics’, ‘poor money management’, and ‘lack of

business knowledge and skills’, accounting for 32% of the constructs which

related to business failure.

Other variants on the approach outlined in Section 7.2.1 above are as follows.

Their constructs were obtained, not through grid interviews, but by asking

their respondents to write short character sketches of the successful

entrepreneur, and the unsuccessful entrepreneur, together with their

circumstances, market, etc. The content analysis followed the classic Holsti

approach towards the identification of content units and context units. Notice that,

since constructs had to be identified from connected narrative, rather than

being separately elicited by repertory grid technique, there wasn’t a preexisting

and obvious content unit. You’ll recall that grid-derived constructs

provide you with a predefined unit of meaning. So, in that sense, their analysis

was more difficult and much more time-consuming than the one I’ve outlined

above.

Their categorisation procedure was carried out twice, on what were two

distinct sets of data: the constructs about successful entrepreneurs, and the

constructs about unsuccessful entrepreneurs. Thus, the differential analysis I

have indicated as step 8 is, in their case, not an analysis of one single table

divided up into groups of constructs, immediately amenable to statistical

testing, but a looser analysis ‘by inspection’ of differences between the two

complete data sets. It’s clear, however (see Watson et al., 1995: 45), that the two

solid and comprehensive sets of constructs constitute a database which is

available for further analysis in a variety of ways. For example, they have

carried out cluster analyses of both their data sets, although the details of the

procedure they followed are not reported.
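To make the idea of a single table ‘immediately amenable to statistical testing’ concrete: the usual test for whether category frequencies differ between groups of constructs is a chi-square test on the group-by-category frequency table. Here is a minimal sketch in Python; the counts are invented for illustration, not taken from the study.

```python
# Hypothetical sketch: Pearson chi-square statistic for a frequency table
# of construct categories (columns) by groups of interviewees (rows).
# The counts below are invented, purely to illustrate the computation.

def chi_square(table):
    """Pearson chi-square statistic for a 2-D frequency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # Expected count under the hypothesis of no group difference.
            exp = row_totals[i] * col_totals[j] / grand
            stat += (obs - exp) ** 2 / exp
    return stat

# Rows: two groups of interviewees; columns: three construct categories.
table = [[20, 15, 5],
         [10, 25, 15]]
print(round(chi_square(table), 2))  # → 9.84
```

The statistic is then compared with the chi-square critical value for (rows − 1) × (columns − 1) degrees of freedom; here, with 2 degrees of freedom, 9.84 exceeds the 5% critical value of 5.99, so the groups would be judged to differ.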

Their reliability check was a careful and detailed six-step procedure in which

two independent raters:

• sorted constructs written onto cards into categories
• privately inspected and adjusted their category definitions
• negotiated over the meanings
• privately adjusted their category definitions again
• renegotiated
• finally agreed the definitive category set.

These categories were ‘approximately the same’ as a result, although no

computation of a reliability index or coefficient was reported.
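Had they wished to report such a coefficient, a common choice for two raters sorting items into categories is Cohen’s kappa, which corrects raw percentage agreement for the agreement expected by chance. A minimal sketch in Python follows; the category labels and assignments are invented for illustration.

```python
# Hypothetical illustration: Cohen's kappa for two raters' category
# assignments. All data below are invented; the study itself reported
# no such coefficient.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of category labels."""
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement: probability that both raters independently
    # assign an item to the same category. (Undefined if this is 1.)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters each assign ten constructs to categories.
a = ["commitment", "leadership", "planning", "commitment", "ethics",
     "planning", "leadership", "commitment", "ethics", "planning"]
b = ["commitment", "leadership", "planning", "leadership", "ethics",
     "planning", "leadership", "commitment", "money", "planning"]
print(round(cohens_kappa(a, b), 2))  # → 0.74
```

Raw agreement here is 80%, but kappa discounts the 23% agreement the two raters would achieve by chance, giving a more conservative 0.74.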

In Conclusion

If we stand back from the details of this generic bootstrapping technique for a

moment, one characteristic, in particular, is worth noting. The generic

technique as I’ve described it, and as applied in their own way by Watson

et al. (1995), emphasises the meanings present in the constructs, but discards

information about the ways in which interviewees use those constructs. There

were no ratings available in Watson et al.’s study, since the constructs were obtained

from written character sketches rather than from grids.

But where ratings are available, as in the procedure outlined in Section 7.2.1, it

seems a shame not to use them to capture more about the personal meanings

being aggregated in the analysis. What a pity to disregard the individual

provenance which leads one person to rate an element ‘1’ on a given construct,

and another person to rate the same element ‘5’ on a construct which the

content analysis has demonstrated means the same to both people!

But how can this be done? It’s rather a tall order, you might argue. Though

there’s an overlap in meaning for different interviewees, they don’t each use

the same set of constructs, so how can one make use of the ratings in a regular

and ordered way? Well, the content-analysis technique presented in Section

7.3 does just that. In the meanwhile, we need to consider a further generic

approach.

At this point, you may feel you’d like to get away from my artificial
examples, and case examples like the one above, and essay a content analysis
of your own. Exercise 7.2 gives you the opportunity to do so.