Library of Professional Coaching

The Coaching Research Agenda: Pitfalls, Potholes and Potentials

The field of professional coaching is in need of evidence—evidence that demonstrates the effectiveness of skillful coaching, evidence that provides guidance regarding the specific coaching strategies that most successfully address certain client issues, and evidence that points the way to further developments and improvements in this emerging human service endeavor. Much as evidence-based practices are now prominent in the fields of medicine and psychotherapy, so they are much needed in the field of professional coaching. Given this imperative and precedent for evidence-based coaching, it is appropriate and timely that this issue (and the next issue of The Future of Coaching) be devoted to the topic of coaching research.

It is also appropriate and timely to point out the many pitfalls and potholes associated with any evidence-based initiatives. The backlash against both evidence-based medicine and evidence-based psychotherapy is something more than just knee-jerk responses of reactionaries and recalcitrants who oppose any intrusion into their professional autonomy or any challenge to their deeply-entrenched practices. The backlash also uncovers some very important cautionary notes regarding the collection of data about complex human service practices. In this article and one I will be offering in Issue Three of The Future of Coaching, I will identify some of these cautionary notes and suggest ways in which the pitfalls and potholes associated with this type of research can best be addressed.

The Challenge: Research on a Nested Problem

I begin by exploring the general challenge: professional coaching operates in complex systems that are highly dynamic and not easy to assess or analyze (particularly with regard to causal relationships). I propose that the assessment of coaching effectiveness is what I describe as a nested problem. A problem, first of all, is an issue that does not have a simple or single answer (as is the case with a puzzle). It is multi-disciplinary in nature: many different perspectives can be taken in viewing and seeking to analyze a problem. Furthermore, there are often competing and even contradictory goals associated with a problem. Polarities are prevalent and paradox is found in abundance when seeking to understand and successfully address a problem. Nested problems are even more challenging, for there are typically several problems embedded in a nested problem that contribute to the “bigger problem.” For instance, in the field of medicine, there are economic issues (problems) with regard to the trade-off (polarity) of costs and quality of care that are nested in the broader issue (problem) of formulating an equitable and sustainable public policy regarding the provision of health care. There are additional issues (problems) nested inside the public policy problem that concern the acceptance of risk regarding new medical procedures: a polarity existing between the value of being sure a new medical procedure is safe and the value of accelerating approval of new procedures so that afflicted patients can receive the most advanced medical care.

I will begin to explore the various layers of the nesting that occurs when seeking to conduct research on professional coaching practices. I begin with the nested problem of defining terms. What is “coaching” and what does it mean to be “professional”? I then turn to a companion problem in the nest—the challenging problem of identifying the players at the table. Who determines what coaching is and what it means to be professional? Furthermore, what are the criteria to be used in assessing the effectiveness of professional coaching practices? When we talk about evidence, how do we know what is and is not evidence? Who is allowed to answer these questions and how do we know they are credible sources? At an even deeper level, what do we know about the agendas that these “credible” decision-makers bring to the table? What about their own personal (and collective) biases, hopes and fears regarding the field of professional coaching?

I will turn secondly to the problem of sampling and sample size, which is interwoven with the issue of who sits at the table and what definitions and criteria are employed. We can’t study everyone who receives coaching services. Who do we focus on (in terms of both the coaching-provider population and the client population)? Do we only study “professional” coaching practices? How big of a sample size do we need in order to offer any definitive decisions regarding coaching outcomes? How diverse must this sample be for us to reach any general conclusions or suggest with confidence that certain client populations or certain types of client issues are amenable to professional coaching in general or amenable to specific coaching strategies?

Third, I will look at the fundamental issue of measurement—which once again is a nested element of the broader problem regarding conducting research on professional coaching. What tools should be used to measure coaching effectiveness? The tools being used might have more impact on the outcome of the assessment of effectiveness than the actual coaching processes being studied. A finely calibrated measuring stick that is applied to micro-events often yields quite different results from a less finely calibrated measuring stick that is applied to macro-events.

I will save my identification and analysis of a fourth nested problem for the third issue of The Future of Coaching. This problem concerns the ways in which evidence regarding coaching practices actually gets used and the ways this evidence can influence the quality of professional coaching being provided. I will devote quite a bit of space to this fourth problem because I believe it is the real elephant in the room. I will suggest ways in which to talk about and potentially influence this elephant, focusing on a fundamental question: does this research really make a difference with regard to the way professional coaching is conducted? Is it really worth the time and effort (funding) to build a strong foundation for evidence-based professional coaching if this foundation is being ignored? If many of those in the coaching business are going their merry way in providing services based on hunches and their own tried-and-true experiences, then why do the research? Maybe it is all a matter of marketing and networking in the field of professional coaching—at least at the present time. We may still be operating in a frontier village where snake oil sells better than prescription medications.

In this future essay I will also be balancing the challenges of coaching research with some optimism regarding the kind of research that can be done to build an evidence-based foundation for professional coaching. I will encourage the use of multiple strategies, the triangulation of research methods and sample populations, and the establishment of collaborative initiatives based on the principle of “reflective practice.”

Conducting Research in a “Messy” and Rugged Environment

The world in which professional coaches work is quite “messy.” Another term that is sometimes used to describe this world is “wicked.” What does it mean for an environment to be messy or wicked? It means that this world is filled with the nested problems I identified earlier in this essay. It also means that everything in this world is interconnected with everything else in this world. John Miller and Scott Page (2007) describe this world as a complex system and they contrast it with a complicated system. A complicated system is one which has many parts—but the parts all work in isolation from one another. A complex system is one in which all or most of the parts are interrelated and inter-dependent. Scott Page (2011) also uses the metaphor of landscape when describing complex systems. He would suggest that complex systems closely resemble rugged landscapes (such as those found in the Appalachian Mountains) where there are many peaks and valleys.

Coaches work in a rugged landscape: it is not clear when one is at the highest peak or whether one is moving in the correct direction toward some goal (given the many hills and valleys that must be traversed). In this type of environment, there are no simple solutions and it is not even clear when one has been successful, given that there are multiple goals (peaks) and many ways to get from the current position to one of the desired goals (peaks). Those who work in a messy and wicked world suggest that it is very hard to assess progress in such a world. I think Scott Page would agree.

Not only are the clients with whom most coaches work likely to be operating in this type of messy, wicked and rugged environment, they are likely during their coaching sessions to ask their coaches to focus in particular on the challenges of complexity inherent in this type of environment. Coaches aren’t brought in to help a client solve simple puzzles that have clear goals achieved through the application of existing skills (though both the client and coach might hope this is the case). Rather coaches are brought in to help clients prioritize multiple (and often conflicting) goals, navigate through rugged terrains and acquire new skills needed to meet the often shifting challenges (Page refers to landscapes that are not only rugged but also dancing!).

Given the prevalence of these messy environments, how does one assess the extent to which a specific coaching intervention has been successful? If everything is linked with everything else in a complex environment, how does one determine whether or not a specific coaching intervention has made a difference? Many other factors may have contributed to the success (or failure) either independently or (more often) in connection with the coaching. What constitutes “evidence” in a messy environment? How does one sort out relevant data from the “noise” of a rugged and dancing landscape?

What is “Coaching”?

There is an even more disturbing evidence-based challenge to take into account. What exactly is “coaching”? How do we know if coaching has had an impact when we are not even sure what coaching is (and is not)? Even if we have a fairly clear idea of what professional coaching is and is not (for instance, using the ICF definitions), at another level we must ask: what are the different types of coaching and how do we know when one kind is operating—either because the coach has espoused this specific type or because we can observe this type being enacted? (Argyris and Schön, 1974)

I would propose that a coaching taxonomy is one of the building blocks needed for the foundation of evidence-based coaching. This taxonomy must include not only a set of distinctions between different types of coaching, but also a consistent way of framing these distinctions. The taxonomy can’t simply be an assemblage of coaching schools and philosophies that seem to differ in some important way from one another (or at least purport to differ from one another). It must be based on some underlying model of human behavior that brings coherence to the field.

I worked with several of my colleagues (Suzi Pomerantz and John Lazar) on a taxonomy several years ago that was based on the traditional psychological framework of affect (emotions), cognition (thoughts) and conation (behavior). (see copy of taxonomy in Appendix A) While this taxonomy does not adequately address all of the many forms of coaching now operating, it does offer some coherence. Furthermore, in this taxonomy we tried to identify the kind of coaching issues most often effectively addressed by each type of coaching. This taxonomy—or one that vastly improves on this one—could provide the blueprint for building a foundation for evidence-based coaching practices. At the very least, assessments could be done to determine in a preliminary manner if each type of coaching really is most effective in working with a specific set of coaching issues—provided, of course, that we recognize the challenge of conducting assessment in a messy environment.

What about the term “professional”? What does this term mean and who is and is not a professional coach? Many years ago, an observant social analyst and historian, Burton Bledstein (1976), noted that the professions emerged in many societies as a substitute for social class. The term “professional” suggests higher social status as well as the acquisition of technical knowledge in a specific field or discipline and some form of certification (supposedly to ensure quality control and certainly to imbue the user of professional services with a sense of confidence). Typically, a specific field has been professionalized when training programs and academic degree programs are established for preparation of practitioners in this field. Emulating in many ways the guild structure of early European trades, the professions have typically evolved through the establishment of professional associations, codes of conduct and (most importantly) attempts (successful or unsuccessful) to restrict the number of people allowed to provide services in this field. This form of quality control usually takes place through the establishment of government- or association-run examination programs, as well as requirements regarding supervised internships.

While an emphasis on quality control is to be applauded by anyone who is interested in the improvement of services being provided by professionals, it is also important to note that professionalization can be engaged for less worthy causes—namely, the restraint of trade, the imposition of specific professional practices, and even the reinforcement of societal discrimination. It is worth noting that the professionalization of American medicine in the early 20th Century resulted in the dissolution of most nontraditional (and often innovative) medical practices (such as the emphasis on prevention rather than just amelioration). It also meant the closure of virtually all medical schools that admitted women or minorities, or were founded to increase the quality and extent of medical services to the underserved. These draconian measures, in turn, reduced the number of physicians entering the field, which in turn drove up medical expenses and led to the establishment during the 1920s of large scale medical insurance plans to cover these costs. (Starr, 1982) We can certainly applaud the effort to improve American medical practices, but we must also be aware of the attendant costs. We must similarly be attentive to the costs as well as benefits associated with the professionalization of any other field or practice.

We find many of these same dynamics operating in the professionalization of coaching. As the field of coaching has emerged over the past 20-30 years and as attempts have been made to regulate entrance into the field, the term “professional” has been used with increased frequency to distinguish those who are doing coaching without much training or any certification from those who have received approved training and have obtained certification. In recent years, the professionalization of coaching has been undertaken, with some controversy, by the International Coaching Federation. Associations have also been formed to certify and monitor coach training programs. Some observers of the coaching field have even suggested that we may soon find that coaching certification will require not only graduation from an approved coach training program, but also a Masters-level degree from an approved graduate program that focuses on coaching theory and practices. Thus, when we explore the challenges associated with research about professional coaching, we have to address the rather knotty issue of determining who is being studied. Do we include coaching services that are being provided by practitioners without certification? Should the research findings be used to further refine the criteria for determining certification? Who do we exclude and which perspectives on coaching do we disallow? And, as I am about to explore, who gets to sit at the decision-making table and who is excluded?

Who is Sitting at the Table?

There is a revolution (or at least a readjustment) going on in the field of applied economics, especially as it begins to interact with the fields of cognitive psychology and neurobiology. This revolution often goes by the name “behavioral economics,” and it is based in part on the recognition that traditional economic theory, with its emphasis on rational decision-making and self-correcting economic dynamics, is to be challenged. (Ariely, 2008; Kahneman, 2011) One of the key points made by the behavioral economists is that the criteria for assessing outcomes may be more important than the actual assessment that is being done. They write about the processes being engaged to determine the criteria (often focusing on seemingly irrational processes such as the use of irrelevant “anchor points” to determine judgmental criteria). These behavioral economists also note, at an even more basic level, that it is important to identify the participants in any decision-making group that is formed to determine the criteria. For instance, who determines criteria for identifying the economic health of a country or the level of social equity or prosperity in the country (leading some behavioral economists to challenge the use of GNP as a primary criterion)? Who is left out of the discussion and decision-making process, and, in turn, what values and perspectives are ignored during this process?

We can probe even deeper: what is the rationale for the decisions that are made and what biases operate in establishing this rationale? What are the vested interests that operate among those establishing the criteria? Doesn’t the rationale being used have a major influence on the criteria being established and determination of a program’s success based on these criteria? Do not the vested interests, values and perspectives of those at the table have a major impact on the assessment of outcomes? Is evidence ever gathered and interpreted in a neutral manner?

What about the challenges faced by those doing research on professional coaching? Are not some people and perspectives absent from the table? If return-on-investment is identified as a key criterion for determining the success of coaching programs, then how is “investment” defined and what does “return” mean? Are both terms defined primarily in financial terms? If this is the case, then are some outcomes being devalued or even ignored? Are there important investments other than money that must be taken into account? It is not just a matter of expanding “investment” to include time spent and facilities engaged, it is also the investment of hope and the price paid by loss or regret. How do we take these into account?

If we reframe the criteria and speak of “return-on-expectations” we may be bringing more people to the table, but at the same time we may be making assessment even more difficult and increasing the intrusion of biases and preconceptions. The world gets messier, or at least the mess that is already there becomes more apparent. What are we going to do about this challenge, and what would a process of determining criteria for establishing evidence look like when many people are invited to the table—bringing with them diverse perspectives and values? Many behavioral economists propose that this diversity brings more creativity to the table (Page, 2011; Johansson, 2004; Kahneman, 2011). The key question is: do we need creativity when we are trying to build the foundation for evidence-based coaching? Are clarity and consensus more important?

Who Participates in the Study?

If there is clarity regarding the criteria to be used in conducting research on professional coaching then the next central question concerns the people who will be studied—including (potentially) the coaches, the clients and other people impacted by the coaching process. This question, in turn, breaks down into two parts. First, how many people will be studied? Second, who specifically will participate in this study?

The issue of quantity is very important, for a researcher can’t have it both ways. If the study is to be quantitative in nature then the sample size has to be large; if the study is to be qualitative in nature, then the sample size can be much smaller, but the research itself must be intensive and in-depth with regard to each person being studied. All too often, the sample size is small even though quantitative measures are being used. We see many coaching studies that yield conclusions based on much too small a sample size (under 50), even though the measures being taken are quantitative, superficial and often one-dimensional (for example, based only on self-ratings of satisfaction with the coaching process or supervisor’s ratings of the coaching client’s improvement in performance).
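The sample-size concern raised above can be made concrete with the conventional power calculation for comparing two group means. The sketch below (Python, standard library only) uses the normal approximation with Cohen’s d as the effect size; the function name and default settings (α = 0.05, power = 0.80) are illustrative conventions, not drawn from any specific coaching study.

```python
from math import ceil
from statistics import NormalDist

def required_n_per_group(effect_size: float,
                         alpha: float = 0.05,
                         power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sample comparison of
    means, using the standard normal approximation and Cohen's d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = z.inv_cdf(power)           # value giving the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) already demands about 63 people per group;
# a "small" effect (d = 0.2) demands nearly 400 per group.
print(required_n_per_group(0.5))  # → 63
print(required_n_per_group(0.2))  # → 393
```

Even under these generous assumptions, a study of fewer than 50 participants in total can reliably detect only very large effects—which is precisely why small quantitative coaching studies so rarely support definitive conclusions.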

Later in this essay I will identify multiple methods of data collection and propose that three or more different methods should be used in any broad-based study of coaching practices. At the very least, larger sample sizes should be required – pointing to the value of collaborative research strategies involving multiple coaches, coaching firms and organizations that use coaching services (I will have more to say about this in my Issue Three essay).

The small-scale quantitative research projects on professional coaching will rarely yield credible data. Without major funding, isolated projects are usually a waste of time. On the other hand, the small and highly focused qualitative study is feasible—even without major funding. This type of research project, often framed as a case study, can be quite valuable, though it is important (and should be obvious) that definitive conclusions regarding evidence of coaching effectiveness can’t be generated from these studies. The focus of qualitative studies should be placed on trying to understand the nature of specific coaching practices, rather than on trying to demonstrate that specific coaching practices are effective. Both research goals are very important. It is not enough to know that coaching does work. It is also important to understand why certain kinds of coaching work when addressing specific kinds of coaching issues. No one qualitative study will successfully address the second of these two questions, but each study helps—particularly if framed by a shared coaching taxonomy (or at least shared language regarding coaching strategies and practices).

The other big challenge is to identify participants in the research project. Do we study both the coaches and the clients—as well as others impacted by the coaching process? Many years ago, the famous psychotherapist Irvin Yalom conducted a study in which both he, as psychotherapist, and his patient wrote in their journals after each session regarding their shared psychotherapy experiences. Yalom (1991) discovered that the accounts written by therapist and patient were quite different. I suspect that the same holds true for the coach and client. They are likely to identify quite different points in the coaching session as being important and may even convey quite different stories about what happened during a specific session. Perhaps most importantly, they are likely to use quite different criteria in determining the level of success and the outcomes of any one coaching session—unless the coaching strategy is one in which considerable attention is given to specific outcomes that can be and are assessed at the end of each coaching session. Even when specific (often behavioral) outcomes are the focus of the coaching, I suspect in many instances that both the behavioral coach and her client (if candid in their appraisals) will reveal quite a bit about what occurred in each session that goes beyond the scope of the identified behavioral outcomes. This is only a suspicion on my part: several rich research questions can be derived to address the validity of this suspicion.

If one is at all interested in a broader assessment of coaching impact, then the data sources must be expanded to include those who indirectly benefit from the coaching process. I return to the concept of the complex environment in which most coaching takes place. The environment is complex (and not just complicated) because everything is connected to everything else. Thus, coaching research must eventually address these broader, systemic issues. Evidence of coaching impact must extend beyond the boundaries of coach and client. I will propose several ways in which systemic studies might be conducted in my Issue Three essay.

In determining who is to be studied, we also must return to the issue of professionalization. Do we study only those coaching practices that are being provided by certified coaches? Does this exclude the exploration of nontraditional practices? If the evidence is being collected only from “certified” sources then are we likely to find that existing paradigms of practice are being reinforced and alternative paradigms are being ignored or evaluated through very biased lenses? The behavioral economists push even deeper into this issue. (e.g. Kahneman, 2011). They suggest that we often change the question we are posing when we either don’t like the answer to our original question or can’t find an adequate answer. What about the question regarding whether or not specific coaching practices are effective when addressing specific issues being brought forward by the coaching client?

If we restrict our study population only to those coaching practices being provided by certified coaches (or being offered only in conjunction with specific training programs or graduate programs at major universities), then have we changed the question? Are we now asking: “which of the currently approved coaching strategies are most effective in working with a circumscribed set of issues?” Or are we now really asking a much more politically and economically-charged question: “how can we justify the restriction of professional coaching practices to those who are certified or hold an advanced degree in a field related to coaching?” How do we continue to promote innovation and improvement in the field of coaching without opening up the sample populations? How do we ensure that specific populations of coaches and clients aren’t being excluded because of social-economic status, position on an organization chart, or (even worse) gender, race, ethnicity or abilities?

Gathering the Data

What measurement tools do we use? This is the next major challenge that we face in establishing a program to study the professional coaching process. Admittedly, the tools are often predetermined. The research question is often framed or reframed in a manner that presupposes the use of specific tools. We may even change the research question on occasion to conform to the constraints of a specific measurement tool. At other times, a specific tool is quickly selected, setting aside the question of which tools might be most appropriate and, even more importantly, what occurs when a single tool is employed. I am not alone in suggesting that effective research dealing with a complex phenomenon such as professional coaching should deploy more than one measurement tool—preferably at least three tools. This three-fold approach—often called triangulation—is a classic in the annals of research methodology. (e.g. Merriam) In fact, this three-fold approach is often identified not just with the use of three or more measurement tools, but also with the use of three or more sources of information.

This multi-source/multi-method approach is clearly quite demanding with regard to both resource requirements (time, money, etc.) and the need for careful planning. This demanding approach, however, is worth the effort given the valuable outcomes that can be obtained. When only one measurement tool is used, the method of information collection itself can influence the system being studied. If two sources are used, the researcher risks obtaining contradictory information based in part on differing methodological biases. There is no clear-cut way to resolve these differences. Three or more sources of information allow for constructive resolution of these discrepancies. Typically, at least two of the three (or more) sources will yield similar information, or, at least, common themes. If all three information sources yield discrepant data, it is evident that the system being studied is complex, contradictory and in need of broader investigation.
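The two-of-three decision rule just described can be expressed as a small sketch. The source names and themes below are hypothetical, and real triangulation of course involves interpretive judgment rather than mechanical counting; the sketch only makes the logic of the rule explicit.

```python
from collections import Counter

def triangulate(findings: dict) -> str:
    """Classify agreement across three or more information sources.

    `findings` maps a source name (e.g. "interview", "questionnaire",
    "observation") to the theme that source supports. A theme backed by
    at least two sources is treated as a common theme; total disagreement
    signals a complex system in need of broader investigation.
    """
    counts = Counter(findings.values())
    theme, support = counts.most_common(1)[0]
    if support == len(findings):
        return f"convergent: all sources support '{theme}'"
    if support >= 2:
        return f"common theme: '{theme}' ({support} of {len(findings)} sources)"
    return "discrepant: complex system; broaden the investigation"

print(triangulate({
    "interview": "improved delegation",
    "questionnaire": "improved delegation",
    "observation": "no visible change",
}))
```

Here two of the three hypothetical sources converge on a theme, so the rule reports a common theme while flagging the dissenting observation for follow-up.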

Specifically with regard to research on professional coaching, if we don’t use a multi-source/multi-method approach then we don’t know if we are generating unbiased information from our single source (coach, client or supervisor), and we don’t know if we are obtaining information about the impact of our single measurement tool on the people we are studying or are obtaining information about the people themselves and the coaching process. For instance, if we only conduct interviews then we might be discovering something about the fear factor: how much do the subjects of our study want to reveal about what is really happening in the coaching process? A study about organizational fear is important, but it doesn’t tell us much about what is really happening in the coaching sessions. Similarly, a questionnaire, when used exclusively, may tell us quite a bit about the way respondents assess the coaching process when given a chance to rate the process using categories and criteria formulated by someone else, but these results tell us very little about how the coach or client actually perceives the coaching engagement (using their own criteria) or about the actual experience of the coach or client (using their own categories). A so-called “phenomenological” perspective cannot be gained from the use of questionnaires, just as generalized conclusions can rarely be generated from the exclusive use of interviews.

While a multi-method study can be logistically challenging, numerous methods are available to those conducting research about coaching. I propose that at least ten different methods for collecting information are available to most coaching researchers: (1) interviews, (2) observation, (3) participant-observation, (4) archival (document) review, (5) unobtrusive measurement, (6) obtrusive measurement (participant-observation of reactivity), (7) performance reviews, (8) questionnaires, (9) critical incident checklists and (10) general information about comparable problems and programs. A researcher is limited only by time and creativity in her use of these information collection tools.

Interviews

Interviewing is one of the most widely used and generally appropriate methods of information collection for coaching researchers. Interviews can be conducted individually or in small groups. Sometimes they are open ended: the interviewees’ responses to initial questions (which usually are determined ahead of time) dictate the nature and scope of later questions. At other times, the questions all might be specified prior to the interview. Interviews can be conducted in person, by telephone, by email or even via social media. A random sampling of attitudes and perceptions about coaching can also be conducted with limited time and personnel through the use of polling techniques.

Observation

An effective research initiative will often make extensive use of observation when the opportunity is available. Though observations are time consuming and often bump up against the issue of confidentiality, they provide the researcher with rich insights into the real workings of the coaching process. Even if researchers can’t sit in on the actual coaching session, they might be allowed to watch the coaching client in action—exhibiting (or not exhibiting) some of the behavioral outcomes that are sought as evidence of coaching effectiveness.  If nothing else, a researcher might observe continuing projects having to do with the coaching program that don’t violate confidentiality (for example, training sessions or planning meetings) or events that reflect on the milieu of the organization in which the coaching is taking place (for example, spontaneous activities, award celebrations or special events).

Participant-Observation

In some instances, a researcher might deem it useful to assume a participant-observer role by becoming actively involved in some event related to the coaching process (for example, participating in several coaching sessions). The participant-observer records not only what she has observed but also her personal reactions to participation in the event.

Archival (Document) Review

A researcher usually can request copies of pertinent documents regarding the coaching process. Some documents should be read carefully, especially those concerning goals, policies and outcomes related to the problem or need that precipitated the coaching program (whether the coaching was requested by the individual client or initiated by the organization). Additional documents can be reviewed quickly for broad themes and for particularly unique or contradictory perceptions or recommendations.

Unobtrusive Measures

Other types of information found in activity records (for example, the schedule of coaching appointments), budgets, evaluation forms and related archival sources are of value, even if they do not bear directly on the convening problem—for archival sources of information tend to be nonreactive or unobtrusive. The collection of this information will not disrupt or influence the continuing coaching events. Coaching contracts, minutes from important planning meetings, and even the informal stories (and jokes) about the coaching process are of comparable value. These unobtrusive measures tend to be descriptively rich and persuasive. They reveal something about the “real life” of the coaching process.

Obtrusive Measures

The reactions to certain data collection procedures are also of some benefit, for these reactions tell much not only about the coaching process, but also about broader environmental dynamics (the nature of the rugged or dancing landscape in which the coaching is taking place). For instance, the way in which a researcher is introduced and provided with an orientation and appropriate support services may be indicative of the level of support for professional coaching in the organization. Similarly, to the extent that a researcher disrupts the flow of work when observing it, one can infer (with confirming evidence from other sources) that there is no precedent for peer observation and that attitudes supporting professional autonomy are probably strong. Such information has value when interpreting the apparent success or failure of a coaching program to influence professional behavior. The response of people to the current research initiative or to previous research initiatives is indicative of attitudes, goals and receptivity to change. The obtrusive event serves as a litmus test. It helps in preparing a map of the landscape.

Performance Reviews

Various psychometric devices are also available to someone conducting research on professional coaching processes. Performance reviews by supervisors, peers or others in the organization (often in the form of 360-degree feedback processes) can be used to determine relative levels of achievement in a specific area. Similarly, the performance of the coaches can be rated by the users of the coaching services as well as by other key stakeholders. It is often particularly valuable for this rating to be done by accessing multiple sources: the clients, the coaches and the stakeholders.

Questionnaires

A second psychometric device, the questionnaire, is used by researchers almost as frequently as the interview. Sometimes the researcher will design and distribute a questionnaire that focuses specifically on the coaching process being used in a particular setting. At other times, a standard questionnaire is used to cut down on design time or to compare one institution or program with others. A questionnaire can take many different forms: multiple choice, checklist, true-false, matching, scalar, short answer or open-ended.

In recent years, situational-descriptive questionnaires have become more popular. The respondent is presented with a specific description of a situation and asked to indicate which of several responses is most (and least) likely. For example, a study regarding specific coaching strategies can make use of a questionnaire that identifies a specific coaching issue (such as responding to a difficult subordinate) and then lists several different ways this issue can be addressed by the coach and client. The questionnaire respondents rank-order and/or rate each of the alternative responses as to their frequency of use (or desirability) as a coaching strategy. Rich insights can be gained if both coach and client complete the questionnaire. Respondents also can be asked to predict how they think other people will respond (coaches, clients, supervisors, organizational leaders, etc.). Used in this way, the questionnaire reveals the respondent’s expectations regarding the coaching process. The respondent is not being asked to evaluate probable responses, but only to predict what they will be.
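One way to make use of paired coach and client rank-orderings is to summarize their agreement with a simple rank-correlation statistic. The sketch below is a minimal illustration of this idea, not a prescribed scoring procedure; the strategy labels and rankings are hypothetical, invented for the example.

```python
# Minimal sketch: comparing a coach's and a client's rank-ordering of five
# hypothetical coaching strategies with Spearman's rank correlation,
# rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), valid when there are no ties.

def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two equal-length rankings without ties."""
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical data: rank 1 = strategy used most frequently.
strategies = ["listen", "reframe", "confront", "advise", "role-play"]
coach_ranks = [1, 2, 3, 4, 5]   # how the coach ranks the five strategies
client_ranks = [2, 1, 4, 3, 5]  # how the client ranks the same strategies

rho = spearman_rho(coach_ranks, client_ranks)
print(f"Coach/client rank agreement: rho = {rho:.2f}")  # rho = 0.80
```

A rho near 1 suggests that coach and client perceive the coaching strategies similarly; a rho near -1 suggests sharply divergent perceptions, which is itself a rich finding for the researcher.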

Critical Incident Checklist

A third psychometric device is closely related to the situational-descriptive questionnaire. Through the use of critical incident checklists, a respondent is asked to indicate how frequently a specific coaching activity is found to be critical to a specific coaching outcome or, more broadly, to the success or failure of a client or organization in addressing a specific type of (messy) problem. An indication of the activity’s relative frequency of occurrence can be of considerable value to a researcher in coming to a fuller understanding of the dynamic interplay between a specific coaching strategy and the needs of a coaching client or organization.
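A simple tally across respondents can turn completed checklists into the kind of frequency data described above. The following sketch is illustrative only, assuming hypothetical checklist responses; the activity labels are invented for the example.

```python
# Minimal sketch: tallying how often respondents mark each coaching activity
# as critical to a successful outcome. Labels and responses are hypothetical.
from collections import Counter

# Each respondent's checklist: the activities they marked as critical.
responses = [
    ["goal-setting", "feedback"],
    ["feedback", "active listening"],
    ["goal-setting", "feedback", "active listening"],
]

tally = Counter(activity for checklist in responses for activity in checklist)
total = len(responses)

# Report relative frequency of each activity, most frequent first.
for activity, count in tally.most_common():
    print(f"{activity}: marked critical by {count}/{total} respondents")
```

Even this crude tally lets the researcher see which activities respondents most consistently associate with success, which can then be probed further in interviews.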

General Information

The tenth source of information resides in the memory of the researcher, as well as in the memories of others participating in the research initiative. This is the general knowledge one has acquired about the professional coaching process (and about human service and organizational dynamics in general). One need not direct this knowledge only to the specific coaching process being studied. Information about nationwide or regional trends, new funding priorities, different coaching models, and related human service practices (such as career counseling and organizational consultation) can be thrown into the research hopper along with information about the client system. This general knowledge can be used as a signal: when information-gathering methods generate data that are discrepant not only with information from other sources in the life of the individual client or organization, but also with general trends regarding this type of client or organization, the data should be viewed skeptically, though not necessarily dismissed.

Conclusions

The challenges regarding the use of evidence, and the opportunities afforded by effective research strategies in the study of professional coaching, will be addressed in Issue Three of The Future of Coaching. Before moving to these challenges and opportunities, I have addressed in the current essay several nested elements of the research problem. These nested problems concern: (1) definitions, (2) who is sitting at the table, (3) the nature and size of the sample being studied, and (4) the nature of the research methodologies being used. These fundamental issues must be addressed prior to any consideration regarding the use of the evidence—for the evidence will only be influential if it is credible.
The analysis of these preliminary nested problems can point the way to effective research strategies and provide us with an opportunity to be optimistic about the potential for evidence-based coaching. I hope that you exit this essay with a clearer sense of the challenges inherent in this type of research and with some ideas about how these nested problems can be effectively addressed.


_____________

 

Appendix A

The Organizational Coaching Taxonomy

William Bergquist, John Lazar and Suzi Pomerantz

 

Performance Coaching

1) Focus of Coaching  

Behavior

2) Nature of Issue Being Addressed  

Puzzle: Uni-dimensional, quantifiable, internal locus of control

3) Examples of Issues Being Addressed  

Providing a subordinate with feedback

Building the agenda for a meeting

Preparing presentation for board meeting

4) Sub-Varieties of Coaching Strategies

Engagement: Preparing for difficult and important interaction with one other person

Empowerment: Preparing for difficult and important work in a group setting

Opportunity: Preparing for major event in one’s life

 

Executive Coaching

1) Focus of Coaching

Decision-Making: Cognition/Thought and Affect/Feelings

2) Nature of Issue Being Addressed

Problem: Multi-dimensional, complex, mixture of internal and external locus of control

3) Examples of Issues Being Addressed

Determining when to give specific feedback

Identifying primary purpose for specific group’s existence

Understanding the leadership style one prefers in group settings

4) Sub-Varieties of Coaching Strategies

Reflective: Deliberating about options, assumptions, beliefs

Instrumented: Gaining clear sense of personal preferences and strengths

Observational: Gaining greater insight regarding one’s own actions and the impact of these actions

 

Alignment Coaching

1) Focus of Coaching

Fundamental Beliefs, Values, Purposes

2) Nature of Issue Being Addressed

Mystery: Unfathomable, unpredictable, external locus of control

3) Examples of Issues Being Addressed

Determining whether or not to remain employed in an organization that places a low value on human welfare

Identifying the ethical and appropriate action to take in a particular setting

Clarifying values and perspectives with regard to career advancement and personal autonomy

4) Sub-Varieties of Coaching Strategies

Spiritual: Discerning spiritual directions

Philosophical: Critically examining fundamental frames of reference

Ethics: Identifying and consistently acting upon one’s own values and ethics

Life and Career: Identifying and acting upon broad life and career preference patterns

 

Business Coaching

1) Focus of Coaching

Overall Business Performance

2) Nature of Issue Being Addressed

Challenge: Specific and focused, yet involving multiple dimensions and multiple stakeholders, internal but broad-based locus of control

3) Examples of Issues Being Addressed

Determining the best procedure for engaging in a strategic planning process

Designing a specific job in the organization for high level performance

Creating an effective marketing plan to launch a new product

4) Sub-Varieties of Coaching Strategies

Strategic: Creating and executing a strategic planning process

Tactical: Implementing various components of a strategic plan

Design: Reviewing and modifying job and organizational structures to create high performance

Operational: Reviewing and modifying policies and procedures of organization to maximize effectiveness and efficiency, and ensure equity

 

 
