Developing the Isolation Strategy


When it comes to deciding which of the three isolation techniques to use, the more the merrier. Ideally, all three techniques will be used. Practically, this is often not possible. It is recommended that expert estimation always be included in the evaluation strategy. In the case of OptiCom, this was the only technique used (e.g., Table 11.2), and expert estimation can be a powerful stand-alone isolation technique.

Coaching does present three unique challenges for evaluation in contrast to more traditional leadership development activities:

1. Each coaching experience is unique. More traditional learning programs practice “sheep dipping,” where all leaders go through the same experiences.

2. Coaching initiatives may lack a “quorum for impact.” When coaching participants are drawn from across the organization, too few leaders in any one unit are coached for it to be reasonable to expect coaching to move that unit’s business measures.

3. The benefits of coaching may require a gestation period. The higher up the leader is in the organization, the longer it may take for the benefits of his or her actions to be realized.

Despite these challenges, initiative managers and evaluators can think creatively about how to include pre/post and comparison group techniques in an evaluation strategy. For example, coaching deployment can be phased in over a period of time so that natural comparison groups are created. The productivity of those groups that experience coaching early in the deployment can be compared to that of those who have yet to experience coaching. Preexisting performance data collection and reporting procedures can be utilized to provide high-quality data at little additional cost. In the case of productivity, the output measures of groups that experience coaching can be compared to those of groups that have not. Measures such as units produced or units sold can be included in the evaluation.
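As a rough sketch of the phased-deployment comparison described above, the following Python snippet computes the difference in average output between groups coached early in the rollout and groups not yet coached. All group figures here are invented for illustration and are not drawn from the OptiCom case.

```python
# Sketch: comparing an output measure (e.g., monthly units sold) between
# groups that experienced coaching early in a phased deployment and groups
# that have yet to experience coaching. Figures are hypothetical.

def mean(values):
    """Average of a list of numeric output measures."""
    return sum(values) / len(values)

# Monthly units sold per group (invented data)
coached_groups = [112, 108, 121, 117]   # coached in the first deployment phase
not_yet_coached = [101, 99, 105, 103]   # awaiting coaching (natural comparison group)

difference = mean(coached_groups) - mean(not_yet_coached)
print(f"Average units sold, coached groups:  {mean(coached_groups):.1f}")
print(f"Average units sold, not yet coached: {mean(not_yet_coached):.1f}")
print(f"Raw difference: {difference:.1f}")
```

The raw difference is only a starting point; in practice it would still be adjusted by the isolation techniques discussed above (such as expert estimation) before being attributed to coaching.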

In this chapter we covered a lot of ground. In setting the stage for the postprogram ROI evaluation, important decisions were made about setting objectives in the context of an evaluation strategy. In the next chapter we’ll see how such a strategy came to life in the evaluation of coaching at OptiCom.