Library of Professional Coaching

The Women in Assessments

The power-packed field of assessments has never been more prevalent as we try to understand leadership and its effects on the world. Leadership in crisis and leadership under scrutiny are two phrases that represent our times.

There are over 250 instruments designed to help leaders perform more strongly and more authentically, with the end result of strengthening their organization’s bottom line, increasing employee engagement, and creating a positive, tribe-like corporate culture. Some are simple self-assessments requiring only 90 seconds to take, and some are more complex, requiring more time to complete and analyze; however, all validated instruments can be beneficial provided five things have taken place:

  1. The interpreter has been properly trained, including having the ability to eliminate self-bias.
  2. The instrument has statistical data to prove validity.
  3. The receiver is open to accepting the results and has a willingness to act positively on them.
  4. A support team is in place to help effect change.
  5. Goals have been pre-established.

This article presents interviews covering five instruments, with a twist: all are developed or owned by women, lending support to women being recognized for the work they are doing in the field.

I asked these accomplished women, Tricia Nadoff, CEO of Management Research Group (MRG); Sharon Birkman, CEO of Birkman International; Barbara Singer, CEO of ExecutiveCore; Shreya Sarkar-Barney, Ph.D., CEO of Human Capital Growth; and Cheri Tree, CEO of BANKCODE, the same questions. As expected, each had different and exciting responses, proving there are many approaches to the goal of recognizing and supporting different styles, personalities, temperaments, and goals.

Tricia Nadoff, CEO of Management Research Group (MRG) ∙ Management Research Group (MRG) Leadership Effectiveness Analysis™ (LEA360™)

The LEA360 is a multi-rater leadership assessment designed to provide insights on an individual through measurement of 22 leadership behaviors and 30 leadership competencies.  The purpose of the LEA360 is to significantly increase the leader’s self-awareness, provide clear choices for increasing effectiveness, and clear coaching suggestions and recommendations for moving forward successfully.

What makes the LEA360 effective is the clarity and insight of the feedback; the combination of both behavior pattern feedback and competency effectiveness feedback; the insights provided through the open-ended questions; the detailed coaching suggestions included in the companion resource guide; and the guided action planning process, also included in the resource guide.  In addition to what the individual leader receives, the coach has an extensive research library available to her/him to gain further insights into leadership effectiveness – this research includes, but is not limited to, leadership effectiveness by gender, age, management level, function, industry, leadership competencies and leadership potential. The tool is over 30 years old.

Sharon Birkman, CEO of Birkman International ∙ Birkman 360™

The Birkman Assessment is a behavioral and occupational assessment used for executive coaching and building teams. It views people in a 3-dimensional way within one 30-minute assessment. The Birkman 360 is unique in that the results can be mapped to the Birkman Method results. This mapping makes it easier for the coach to understand the reasoning behind the 360 results and can provide a place to start when developing the action plan for the subject. The tool is 68 years old.

Shreya Sarkar-Barney, Ph.D., CEO of Human Capital Growth ∙ HCG Leadership Effectiveness Profile™; HCG Team Effectiveness Assessment™

The HCG Leadership Effectiveness Profile assesses attributes that are hard-wired and those that can be developed for success in a leadership role. It is an ideal tool for making hiring decisions and for gaining insight into development needs of a leader.

The HCG Team Effectiveness Assessment is based on meta-analytic findings of team effectiveness across a variety of job types, including production teams, airline teams, and paramedic teams. The online survey can be conducted with an intact team or with multiple collaborative teams. Parts of the assessment can be customized to gain a deeper insight into specific areas. When conducted across teams, the comparison of ratings highlights differences and similarities in perceptions among team members or between teams.

Barbara Singer, CEO of ExecutiveCore ∙ Awareness 2020™

A newcomer in the assessment space, Awareness 2020™ is an Awareness IQ™ for presence and self-awareness. This assessment is used to explore how well a person inspires employee engagement and obtains results. The Awareness Index measures how well a leader communicates the value of belonging to groups in a way that energizes others to be optimistic, hopeful, and resilient. It helps leaders distill complex ideas into a clear plan of action and extend others’ ideas to synthesize or combine normally unrelated thoughts, ideas, and actions. It can help a leader operate with an entrepreneurial mindset and make decisions as if they were the “owner of a business.” The tool was introduced in 2018.

Cheri Tree, CEO of BANKCODE ∙ BankCode™

BankCode™ is a self-report assessment that predicts buying behavior based on four personality types rooted in the four temperaments. The temperaments identify four discrete factors tied to unique motivators: Blueprint, Action, Nurturing, and Knowledge (B.A.N.K.™). The short version takes up to 90 seconds and the long version takes two to five minutes. It can be administered and scored by anyone. The tool has been in use for over 10 years.

♦♦♦

Q: Who is able to administer your instrument?

Tricia Nadoff ∙ The LEA360 requires experience in both development and prior assessment usage. Certification includes pre-work conducted through a learning management system, completing one’s own assessment, participating in either a two-day in-person training session or a five-module remote session, and completing one LEA360 assessment feedback session with an individual outside the practitioner’s certification cohort.

Sharon Birkman ∙ Eight hours of online pre-work, three days of onsite training in a small classroom setting, and a follow-up are required to pass the conversation exam.

Shreya Sarkar-Barney ∙ A background in psychology and prior experience in HR and coaching are preferred. A half-day training is required for each instrument.

Barbara Singer ∙ Awareness 2020 requires that a coach be certified in other 360-degree assessments and/or have completed a graduate-level tests-and-measures course.

Cheri Tree ∙ Training and certification are broken down into various components and are based on levels of expertise. Level 1 involves 1.5 days of training plus three days of certification training; Level 2 requires 2.5 days of training plus five days of certification training; Level 3 requires 5.5 days of training plus seven days of certification training.

Q: When do 360 assessments work best, and what are best practices?

Tricia Nadoff ∙ Because 360 assessments can consume a fair amount of time and money and create a certain level of vulnerability in both the participants and their observers, 360s work best when the organization is prepared to provide the time and resources needed to gather the data, deliver effective feedback, and support the development that will be both needed and desired once the 360 feedback has been delivered. 360 feedback should not be attempted if the organization is not prepared to provide that time and those resources.

360 feedback should also not be attempted if there is a high degree of perceived threat and a lack of trust in the environment. The best 360 assessments measure behavior, skills, and/or competencies. 360 assessments should only be used to measure aspects of an individual that can be readily observed by others. Therefore, attributes such as motivations, values, beliefs, and personality should only be measured in self-assessments.

When choosing observers for a 360 assessment, individuals should be coached to choose wisely. This means choosing observers who have the opportunity to work with the leader on a regular enough basis to have a full view of the individual’s approach to her or his work. When choosing direct reports, it is important for the leader, wherever possible, to choose all of her/his direct reports. The best 360 assessments will allow the ability to divide observer groups within a single observer category.  In situations where a leader may have two bosses, or peers from two different subgroups, or two or more sets of direct reports the data can be separated appropriately to get the most accurate and actionable feedback.

Sharon Birkman ∙        360s work best as a developmental tool when there is a plan to conduct an initial 360 and a follow-up 360 once the agreed upon action plan has been completed.  This provides the “subject” the opportunity to determine if their hard work has paid off.

Shreya Sarkar-Barney ∙ As documented by Kluger and DeNisi (1996), feedback fails about a third of the time. Whether 1:1 or multi-source, not all feedback is the gift one believes it to be. The context matters! If the goal of the 360 feedback is to be developmental, the receiver must see the sources as trustworthy, and the feedback must be constructive (it does not have to be positive) and behavioral.

Q: How does your instrument support the role of a master-level executive coach regarding being an “enterprise-wide business partner™”?

Tricia Nadoff ∙ The LEA360 provides behavior feedback (such as strategic, persuasive, and production); competency feedback (such as the ability to see the big picture, the ability to deliver results, effective decision-making, and the ability to develop others); and research, all of which support the development of leaders to be more effective at bringing about enterprise-wide success.

Sharon Birkman ∙ We can help the coach in an individual or a systemic way, as the coach aligns teams and works with leaders at all levels.

Shreya Sarkar-Barney ∙ Assessments allow a leader to gain self-awareness and, with the guidance of a coach, to develop a behavioral repertoire that is relevant to their work setting. Coaches benefit from these instruments because they quickly help them narrow down the areas where their clients need help and allow their clients to make visible improvements.

Barbara Singer ∙ Instruments can provide executive coaches with a focused and multi-faceted way to assess a client’s current self-awareness and to support that client in achieving greater self-awareness for better performance and more successful outcomes. Self-awareness and awareness management have been linked to leadership excellence, but “awareness” has been difficult to reliably assess and quantify. Awareness 2020™ is designed to meet the need for a quantitative method of assessing leaders’ awareness skills and practices and to provide ROI data for the coaching engagement.

Cheri Tree  ∙       BANK™ can be used to provide a simple, quick, common language for people to better understand each other and communicate more effectively. Leaders can understand their direct reports and communicate in a more effective manner.

Q: Define qualitative versus quantitative assessment. Which is best?

Tricia Nadoff  ∙       Both qualitative and quantitative assessments provide important insights that support the development of an individual. A well-designed quantitative assessment helps reduce bias in the feedback, while qualitative assessments often provide examples and rich detail. The best quantitative assessments provide the opportunity for both the participant and the observer to respond to open-ended questions, allowing a degree of qualitative feedback in addition to the quantitative measures.

Aristotle famously said, “Self-awareness is the beginning of all wisdom.” If we believe there is some element of truth in this statement then certainly helping leaders have deeper self-awareness is the entrée into a more effective way of leading, and certainly, assessments are effective and efficient vehicles for bringing about increased self-awareness.

As neuroscience tells us, the more overwhelmed and stressed we become, the more our actions become habitual and even more biased. Because of this, leaders can relatively quickly get entrenched in being overly reactive and shortsighted. With a 360 assessment, leaders are required to pause, understand, and reflect in ways that are both broader and deeper than they would in their business-as-usual responses to their role.

Shreya Sarkar-Barney ∙ Irrespective of the type of assessment, what is more important is the evidence basis of what is being assessed. Does it predict anything of value, or is it simply descriptive? A skilled coach relying on an evidence-based foundation can derive value from a qualitative assessment (assuming one gathered through an interview without using a scale). Similarly, a quantitative assessment that is evidence-based can provide comparisons to a normative group and surface information that needs attention.

Cheri Tree ∙       There is no best; it depends on what the researchers are exploring or trying to prove.  Typically, a mixed-methods approach provides quantitative data for analysis of known characteristics along with qualitative data to capture not-yet-known characteristics.

Q: What do assessments have to do with leadership efficacy?

Shreya Sarkar-Barney ∙ The science in this space is extensive. Personality (which has elements of character, e.g., integrity) relates to about 19% of leadership effectiveness and about 30% of leader emergence.

Q: Should high achievers be identified only through assessments?

Tricia Nadoff ∙ Selecting high achievers is a very complex endeavor subject to a great deal of bias and subjectivity. While assessments can help reduce bias and subjectivity, they cannot fully replace objective observations of past performance, cultural fit, and an individual’s interest in future growth. No assessment, no matter how well constructed, is infallible, and so while a well-constructed assessment can, and should be, a vital part of identifying individuals with high potential and/or high achievement, it should not be the sole factor in the determination.

Sharon Birkman ∙        No, an assessment – any assessment – is only that.  There are too many other pieces of necessary information to consider for a determination of who is or will be a high achiever.

 Shreya Sarkar-Barney ∙    Ideally, one should use what is called a multi-trait, multi-method approach to minimize errors in measurement and maximize prediction.

Barbara Singer ∙ High-potential leaders can be identified more fairly using assessment data, performance ratings, financial results, third-party satisfaction measures, and employee engagement. No one measure is enough, but coaching without data or an evidence-based approach can be dangerous. Great succession management employs more data and works hard to level the playing field for all people.

A leader’s ability to positively impact revenue is often measured in OIBTDA (operating income before taxes, depreciation, and amortization). We can use pre- and post-testing of valid 360° survey reports to see if leaders improve their behavior. We can gather anecdotal examples from stakeholders (bosses, peers, direct reports, and those outside the organization) to demonstrate that the person has made positive change. We can also begin to measure increased self-awareness when we see a person’s own evaluation matching how others evaluate them. A Gestalt way of thinking helps a person take action faster in a way that can be seen and reported on by others around them.

 Cheri Tree ∙       High achievers should be assessed at the very least with a triangular approach, meaning at least three different perspectives.  The personal interview is still known scientifically as the best assessment; however, self-report assessments, peer and leader assessments, and KSA tests should also be employed based on the job description.

Q: What is the validity of managing/confronting blind spots through assessment?

Tricia Nadoff ∙ 360 assessments are highly effective at managing and confronting blind spots. Leaders are often surprised to see the variation among their observer ratings as well as between observer ratings and their self-ratings. Initial reactions often include a statement such as “they don’t know the real me.” The wisest response is that the questionnaire did not ask observers to describe “the real you”; rather, it asked them to describe their perceptions of you as a leader.

We often tell leaders that it doesn’t matter whether you are a creative thinker or feel empathy; when it comes to being a leader, if your observers don’t experience these things from you, then you are not seen as an innovative leader or as an empathetic leader. For leadership to be realized, it must be seen, heard, and experienced by the leader’s constituents. So blind spots are a beautiful opportunity for gaining greater self-awareness and building stronger intentionality in one’s approach to leadership.

Shreya Sarkar-Barney ∙ If a leader disagrees with the scores, explaining the evidence-based origin of the assessment helps shift the focus from accuracy to what the results mean for the individual.

Barbara Singer ∙ Some of the many negative factors that result from blind spots include: isolating/withdrawing; not asking for feedback from those who can tell you the difficult truth; making decisions based on fears; pleasing people (wanting to be liked/accepted at your own expense); being impatient; demonstrating anger that is more than the situation calls for; loss of humor and playfulness; distorting reality; not synthesizing all the available data/facts into a good course of action; missing key themes/patterns when making a decision; forgetting to assess how people will be impacted and to adjust accordingly; failing to act because you fear failing; winning at all costs (others get hurt); not clearly understanding how others perceive you; and/or not being clear in your intentions.

Q: How do we measure assessments for reliability, and how do we determine which is best?

Tricia Nadoff ∙ The reliability measures of an assessment essentially answer the question, “If nothing else changes in this individual, will her results on this assessment remain consistent?” The measure of the validity of an assessment essentially answers the question, “Does this assessment actually measure what it says it is measuring?” The tests for reliability are fairly simple and straightforward: depending on the type of assessment, you can either compare time 1 and time 2 results (test-retest reliability) or compare the responses from the first half of the assessment to the responses in the second half (split-half reliability).
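For readers who want to see what those two reliability checks look like in practice, here is a minimal Python sketch using simulated responses; the participant count, item count, and scores are invented for illustration and do not come from the LEA360 or any other instrument discussed in this article.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate 20 participants with an underlying trait level, plus 10 items
    # that each reflect that trait with some noise (so the items hang together).
    trait = rng.normal(0, 1, size=(20, 1))
    time_1 = trait + rng.normal(0, 0.5, size=(20, 10))
    # A hypothetical retest of the same people several weeks later.
    time_2 = trait + rng.normal(0, 0.5, size=(20, 10))

    # Test-retest reliability: correlate each person's total score at time 1
    # with their total score at time 2.
    test_retest_r = np.corrcoef(time_1.sum(axis=1), time_2.sum(axis=1))[0, 1]

    # Split-half reliability: correlate totals from the odd-numbered items with
    # totals from the even-numbered items, then apply the Spearman-Brown
    # correction to estimate reliability at full test length.
    odd_half = time_1[:, ::2].sum(axis=1)
    even_half = time_1[:, 1::2].sum(axis=1)
    half_r = np.corrcoef(odd_half, even_half)[0, 1]
    split_half = 2 * half_r / (1 + half_r)

    print(f"test-retest r = {test_retest_r:.2f}")
    print(f"split-half (Spearman-Brown) = {split_half:.2f}")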

On the other hand, the pursuit of demonstrating the validity of any given assessment never actually ends. To keep an assessment current, an assessment organization must constantly measure how the assessment performs against current, modern expectations of leadership. To do this, assessment providers gather various types of concurrent measures and examine correlations between their assessment and the outside measures.

In this way, validity is determined based on the degree to which the assessment performs in relation to the concurrent measures. For example, you would expect measures of empathy to correlate positively with concurrent measures such as sensitivity to others’ feelings and willingness to listen, while you would expect a negative correlation with a concurrent measure such as aggressiveness. Additional measures of validity include face validity, which essentially means that those receiving feedback believe the feedback is accurate, and expert peer review, in which those respected in the field review the structure of the questionnaire, the scoring mechanism, and the output and make an expert assessment of validity.
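To make the concurrent-validity idea concrete, here is a small, purely illustrative Python sketch: it simulates an empathy scale alongside hypothetical “sensitivity to others’ feelings” and “aggressiveness” measures and checks that the correlations point in the expected directions. All scale names and numbers are invented, not drawn from any instrument in this article.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100  # hypothetical number of leaders with scores on all three measures

    empathy = rng.normal(0, 1, n)
    # A concurrent measure expected to correlate positively with empathy.
    sensitivity = 0.7 * empathy + rng.normal(0, 0.7, n)
    # A concurrent measure expected to correlate negatively with empathy.
    aggressiveness = -0.6 * empathy + rng.normal(0, 0.8, n)

    print(f"empathy vs. sensitivity:    r = {np.corrcoef(empathy, sensitivity)[0, 1]:+.2f}")
    print(f"empathy vs. aggressiveness: r = {np.corrcoef(empathy, aggressiveness)[0, 1]:+.2f}")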

Shreya Sarkar-Barney ∙    Our assessments are based on validated models that are backed by research evidence and global applicability.

Cheri Tree ∙ Assessment tools need to demonstrate the reliability of inferences from their scores by reporting Cronbach’s alpha and split-half testing. For new tools, > .70 is the threshold; more established tools, especially those with over 20 items, should be > .80. Sources of validity evidence should correlate appropriately: > .70 positively for convergent validity and negatively for divergent validity. Effect sizes and confidence intervals should always be reported. Many assessments do not have predictive validity and are therefore inappropriate for selection decisions such as hiring, firing, and promotion, but they are useful in other ways.
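As a hedged illustration of how a Cronbach’s alpha figure is computed and compared to the .70/.80 thresholds mentioned above, here is a short Python sketch on simulated data; the item count and sample size are invented and do not represent BANK™ or any other instrument discussed here.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a participants-by-items matrix of scored responses."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    rng = np.random.default_rng(2)
    trait = rng.normal(0, 1, size=(50, 1))                 # simulated underlying trait
    responses = trait + rng.normal(0, 0.7, size=(50, 12))  # 12 correlated items

    alpha = cronbach_alpha(responses)
    # Thresholds as described above: > .70 for new tools, > .80 for established
    # tools with more than 20 items.
    threshold = 0.80 if responses.shape[1] > 20 else 0.70
    print(f"alpha = {alpha:.2f} ({'meets' if alpha > threshold else 'below'} the {threshold:.2f} threshold)")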

Q: How do instruments deal with biases such as academic discrepancies, cultural differences, and generational differences?

Tricia Nadoff ∙ Biases are universally present in all human endeavors, and assessments are no different. However, there are things that assessment creators can do to minimize the possibility of biased responses both in the structure of the questionnaires and in the structure of the feedback. Further, assessment providers can attend to insights about biases through their research. To deal with academic discrepancies among participants, most assessment providers will standardize the questionnaire vocabulary at an eighth- or ninth-grade reading level (depending on the intended audience for the questionnaire, this may be standardized at a lower grade level).

Cultural differences are much more complex to attend to. First and foremost, language translation is critical since most assessment takers will be more accurate in their responses in their native tongue. To do an accurate translation the process requires several steps. First, the original questionnaire is translated into the desired language. Second, the translated questionnaire is back translated into the original language. Third, the original English text and the translated text are compared and discrepancies are worked through with the assessment creator and the translator in order to get a more accurate translation.

Once this is done, a native-speaking practitioner reviews the translated text and makes recommendations for cultural nuances that may not have been picked up in the back-translation reconciliation. Finally, you can expect a new translation to go through three or four minor iterations over time until you have a translation that is accurate in both language and cultural nuance. The degree to which the content of an assessment needs to be modified culturally is based on the type of assessment.

In the case of the LEA360, because we are not promoting an idealized model of leadership but rather a descriptive model of behavior, content adjustment for culture has not been needed because cultural variation is expressed in the pattern of usage across the 22 behaviors. MRG has been doing cultural research globally for the last 30 years and has a number of studies documenting the unique patterns of leadership by country.

Other demographic biases such as generation and gender are also rich areas for research exploration. MRG’s research shows the different approaches to leadership based on age and based on gender. To explore the dynamics of gender bias MRG recently conducted a study to examine the rating patterns in the LEA360 based both on the gender of the rater and the gender of the individual being rated. The results of that study show very specific patterns of gender bias based on both the gender of the rater and the gender of the person being rated. Studies like these and others conducted by MRG are readily shared with practitioners certified in our assessments to help them navigate through, and help their clients navigate through, the complexities of human bias in the workplace.

Shreya Sarkar-Barney ∙    These biases are less of an issue when one relies on meta-analyses and not findings from individual studies.

Most leadership assessment tools are descriptive, which means they help describe a facet of leadership but may not provide a forward-looking view of what a leader is likely to do. The latter requires building predictive tools. Such endeavors take time to build, as they require testing the validity of the instrument across samples and across time. The bias for leaning on science allows us to draw upon the thousands of studies that are continuously evolving our understanding of, and ability to more accurately measure, leadership. It ensures that the tools we build are backed by a rock-solid foundation in prediction and explanation. Further, instruments should undergo rigorous psychometric evaluations to ensure they meet four critical gold standards:

(a) Reliability – the instrument consistently measures the attributes of interest for the same individual and across individuals; (b) Differentiating – the instrument measures the full spectrum of the attribute on a yardstick, as opposed to only middle-range scores; (c) Validity – the instrument accurately measures the attributes of interest; and (d) Predictive – the instrument’s scores provide a forward-looking view of the phenomenon of interest with a high level of accuracy.

Barbara Singer ∙ Clients should use leadership assessments over time to level the playing field for women and other underrepresented groups at the top of the house. For example, 360 and 720 data show clearly in nearly every study that women are held to a higher performance standard: their skills are usually rated significantly higher than men’s in group reports, with the exception of strategic conversations. When a company includes 720 assessments in its succession management along with employee engagement and other metrics like financial performance, promotions tend to accelerate for more women and underrepresented groups.

Q: How do instruments work when there is resistance to reality?

Tricia Nadoff ∙ We all have some resistance to reality, and leaders undergoing 360 assessment processes are no different. Openness to 360 feedback depends on many factors. First and foremost, participants need to feel safe. If participants feel they are at risk of ridicule or of receiving any type of negative consequence, they are less likely to be open to feedback. Further, if the leader is overwhelmed and stressed, she/he is unlikely to have the bandwidth to take in the depth and breadth of 360 feedback. So when there is resistance, it’s important for the coach to explore these individual factors.

Each one of us is at a varying state of readiness to hear or not hear many of the messages that can come through 360 assessments. Fortunately, almost all leaders can benefit from development across any number of behaviors or competencies. So while a leader may not be ready to take in feedback in one area of their assessment, they will often indicate a lower level of resistance and a higher level of readiness for change in other aspects of their feedback.

Well-designed assessments can provide significant insights for an individual leader, and in the hands of a qualified, effective, motivated coach these insights can be revealed to the leader in ways that increase the leader’s self-awareness and stoke the fire of motivation for growth.

♦♦♦

Some of the above instruments require extensive training and certification to administer, while others require less time. Some of the above assessments have existed for over 60 years (for example, the Birkman 360), while others, like BankCode and Awareness 2020, are new to the field. However, all have been statistically validated.

Some assessments relate to personality, some to behavior, and some to team building; there is an instrument out there for every type of human capital inquiry. I encourage you to take a look at our e-book on assessments for master-level corporate executive coaches for a description of some of the more widely used tools, but I also encourage you to explore the new instruments being published as we learn more and more about human behavior and human science. The caution is to ensure that you engage a highly qualified individual for the interpretation and that you examine the reliability statistics for each instrument you select.
