
The Essentials of Coaching Program Evaluation: Formative, Summative and Four Ds


While this experimental design is classic in evaluation research, it is difficult to achieve in practice. First, people often cannot be assigned randomly to alternative programs. Second, a control group may not provide an adequate comparison for an experimental group. If members of a control group know that they are “controls,” this will influence their attitudes about, and subsequently their participation in, the program that serves as the control. Conversely, an experimental group is likely to put forth extra effort if it knows its designation. This is what is often called “The Hawthorne Effect.” It may be difficult to keep information about involvement in an experiment from participants in either the experimental or control group, particularly in small organizations. Some people even consider the withholding of this type of information to be unethical.

Third, test and retest procedures are often problematic. One cannot always be certain that the two assessment procedures are actually comparable in assessing a coaching client’s performance, behavior, attitudes, knowledge or skills before and after a program. Furthermore, if there is no significant change in pre- and post-program outcome measurements, one can never confidently conclude that the program had no impact. The measuring instruments simply may be insensitive to changes that have occurred. On the other hand, the coaching clients may already be operating at a high level when the pre-test is taken, leaving little room for improvement in retest results. This is the so-called “Ceiling Effect.”
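For readers who want to see the Ceiling Effect in concrete terms, the short sketch below (written in Python, with a hypothetical 5-point scale and invented scores that are not drawn from this article) shows how a bounded rating instrument hides most of the improvement made by a client who already scores near the top of the scale.

```python
# Hypothetical illustration of the "Ceiling Effect" on a bounded rating scale.
# The scale maximum and all scores are invented for this sketch.

SCALE_MAX = 5.0  # assumed 5-point competency rating scale


def observed_gain(pre_score: float, true_improvement: float) -> float:
    """Return the gain the instrument can actually record, capped at the top of the scale."""
    post_score = min(pre_score + true_improvement, SCALE_MAX)
    return post_score - pre_score


# Two clients who improve by the same "true" amount (1.0 point):
print(f"{observed_gain(pre_score=2.5, true_improvement=1.0):.1f}")  # 1.0 -- the change is fully visible
print(f"{observed_gain(pre_score=4.8, true_improvement=1.0):.1f}")  # 0.2 -- most of the change is hidden
```

The same logic explains why an absence of measured change cannot simply be read as an absence of impact: the instrument may have no room left in which to register it.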

A control group can solve some of these test/retest problems, because if the problems are methodological, they should show up in the assessment of both groups. However, one must realize that the pre-test can itself influence the effectiveness of both the experimental and control group programs and thus influence the two groups in different ways.

Fourth, several logistical problems often are encountered when a classic experimental design is employed. In all but the largest organizations there may not be a sufficient number of people for a control group. There also may not be enough time or money to conduct two assessments with both an experimental and control group.
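To make the control-group reasoning above concrete, here is a minimal sketch (again in Python, with invented group means rather than data from this article) of the basic pre/post comparison: if a retest artifact, such as growing familiarity with the instrument, inflates both groups’ post-test scores equally, subtracting the control group’s change from the experimental group’s change removes that artifact from the estimate of the program’s effect.

```python
# Hypothetical pre/post group means on the same rating scale; all numbers are invented.
experimental = {"pre": 3.1, "post": 4.0}  # clients who received the coaching program
control = {"pre": 3.0, "post": 3.3}       # comparable clients who did not

experimental_change = experimental["post"] - experimental["pre"]  # ~0.9
control_change = control["post"] - control["pre"]                 # ~0.3 (retest artifact, etc.)

# The difference between the two changes is attributed to the program itself.
program_effect_estimate = experimental_change - control_change
print(f"Estimated program effect: {program_effect_estimate:.1f} scale points")  # 0.6
```

As the passage notes, this comparison is only trustworthy when the two groups are genuinely comparable and the pre-test has not shaped the two groups in different ways.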
