
A crisis in the rejection of expert knowledge, and the acceptance of “Google-fueled, Wikipedia-based, Blog-sodden” information

The title “Google-fueled, Wikipedia-based, Blog-sodden” is taken from Tom Nichols’ book “The Death of Expertise”. While this description may be a little over the top, we are living through a period in our history in which people are dying as a result of the Covid pandemic, many of whom could be saved if they followed the advice of medical experts and adopted some fairly simple practices. And while we may think this problem was created by the pandemic, it was not – the distrust of experts has a long history and numerous causes. This backdrop also has implications for coaches and consultants of all kinds (leadership/executive, life coaching and even sports coaching), who are often considered experts, or at least deeply experienced. As I discuss in this essay, experts are fallible, and laypeople are often misinformed or, in some cases (as I describe later), blatantly ignorant of the topics they weigh in on. Our society (and our interactions in the business coaching and consulting world) requires rules and guidelines for more constructive engagement between experts and laypeople.

The Challenge: Where Do We Seek Our Information and What Do We Believe?

As a leadership coach and change management consultant, my advice and counsel are ignored from time to time. One case was that of a large utility company implementing new technologies that would ultimately affect every household across the state. My consulting advice was to engage, educate and prepare the field workers who interact with the public daily, so they could hold an honest and open dialogue with the public about the technology. This advice was largely ignored, and the eventual result was a huge public backlash against the technology – to some degree fostered by the field workers themselves – leading to long delays and massive cost increases. The project was complex, with many stakeholders and numerous unknowns, so my guidance was not absolute. It needed to be debated and various scenarios worked out. Unfortunately, this debate never occurred; senior leaders dictated the plan going forward, which ultimately failed badly.

Being challenged as a so-called expert is healthy and important – experts are not always right in their judgments. This kind of challenge, or rather debate, is common in academic environments as well as in some medical situations (for example, deciding on a treatment plan for a complex surgery). In these settings there are processes, rules and practices that allow debates to occur collaboratively and practical, useful outcomes to emerge. Elsewhere this is often not the case: differences of opinion result in combative discussions, with people retreating to their corners and becoming immovably defensive, leaving no opportunity for solutions.

A crisis in expertise – What are some of the causes?

Being cautious about the guidance of experts is often warranted. Let’s delve into a few examples.

Research demonstrates that the predictions many experts make about the future are often “devastatingly” wrong (Kahneman, 2011). Those with the most knowledge are often the most unreliable, because many experts develop an “illusion” of skill and become over-confident – the so-called “arrogance of over-confidence”. This illusion of expert knowledge is not only risky, offering false hope in challenging social situations, but it also makes coaching these expert leaders much more difficult – the hubris of over-confidence is a barrier to influencing their behavior and to change. Laypeople who blindly follow the directions of experts and leaders, without thought or debate, are potentially following the pied piper.

Experts are often Wrong!

The problem is not that expert leaders make mistakes – that is a given (Kahneman, 2011); errors (especially those involving prediction) are inevitable because the world is complex and unpredictable. The problem is that many people expect and want leaders to provide what is often not possible: total and absolute clarity on complex issues, and to be correct all the time. This expectation is unreasonable and in some cases dangerous (for example, taking medical or legal advice as absolute without doing some due diligence).

Nobel Prize winner Daniel Kahneman (Thinking Fast and Slow, Noise) describes, for example, variations and errors in judgment and decision-making in our court system. Studies show that similar cases of criminal extortion can receive massively different penalties, ranging from twenty years’ imprisonment and a $65,000 fine to a mere three years and no fine at all. Other statistical research shows that judges are more likely to grant parole at the beginning of the day or after a food break than before such breaks – in other words, when judges are hungry or tired, they are much tougher. Judges are also more lenient on their birthdays, and when the weather is hot, immigration judges are less likely to grant asylum. As Kahneman describes, such discrepancies in the judgments and decision-making of experts are not uncommon across a wide variety of specializations, including doctors, nurses, lawyers, engineers and many other professions.

Tom Nichols notes, “experts get things wrong all the time … and yet, experts regularly expect average people to trust their judgement and to have confidence not only that mistakes will be rare, but that experts will identify those mistakes and learn from them”. Kahneman (Thinking Fast and Slow) cites numerous examples of the excessive over-confidence of CEOs – for example, making costly acquisition decisions, many of which are unsuccessful. In fact, research shows that the most confident CEOs are more likely to make unsuccessful acquisition decisions.

What tends to exacerbate these situations is that the people with the most knowledge often emerge as leaders with immense influence on those around them. As Kahneman continues, psychologists have confirmed that most people (and especially senior leaders) genuinely believe they are superior to most others on desirable traits (including knowledge and expertise), developing an almost narcissistic perspective as those around them shower them with admiration and enable their hubris. This blind confidence can be dangerous.

The more absolutely confident a leader is, the LESS we should trust them?

This overly optimistic and inflated sense of expertise is referred to (in some contexts) as the “hubris hypothesis” (Aronson, 2008): a leader’s absolute optimism and surety is received more positively by followers than the optimism of a leader who frames it in comparative terms (which can provide some balance on a complex topic). While comparisons provide a more balanced understanding, psychological research shows that audiences tend to dismiss them as wishy-washy and more often believe the absolute, optimistic viewpoint – people tend to want absolute certainty. Herein lies the risk: knowledgeable and overly confident leaders make absolute statements about the future, which are then rarely challenged by those around them. The potential is that a culture of blind followership to a leader’s dictates emerges, and followers may be following the pied piper into danger. Leaders and experts must be challenged, and issues must be debated, if the best and most current information is to surface and well-informed decisions are to be made. But we need a process and guidelines for how to go about doing this.

Leaders and experts who are coachable. And Not!

Expert and leadership hubris is a barrier to learning and change. I recently attended a wonderfully informative webinar with Dr. Jack Zenger and Dr. Joe Folkman on the “coachability” of senior leaders. I am always impressed with the sound research foundation of the Zenger-Folkman analysis and the strong connection they draw between leadership capabilities and business performance. In this case, Zenger and Folkman referenced responses from almost fifty thousand leaders and linked this feedback to business success. Their findings are remarkable: as leaders become more senior, move up in their organizations and grow increasingly successful over time, they tend to become less coachable – they tend to ignore, or be resistant to, feedback about their leadership performance, behaviors and decision-making. On the other hand, leaders who remain open to coaching and feedback are significantly more successful in the long term than those who resist it.

In my consulting and coaching work with senior leaders – typically on large-scale transformation initiatives – I work closely with leaders on their roles in leading transformation, from culture change to technology and structural transformations. These are often highly successful leaders with many years of business success who now face challenges requiring a change in their leadership style and behaviors – what made them successful in the past will not achieve success in the future. With few exceptions, I find that many leaders are resistant to feedback from their team members, as well as to observations and advice from leadership coaches. Indeed, the more successful they have been over the years, the more resistant they tend to be. This lack of personal awareness has been labeled the Dunning-Kruger effect, after psychologists David Dunning and Justin Kruger, who found that uninformed or incompetent people (in a particular area of expertise) are less likely to recognize their own lack of knowledge or competence. In my anecdotal experience, it is quite common for successful leaders to express over-confidence in areas far beyond their experience and expertise. Tom Nichols provides an extensive overview of experts in one field behaving as if they are therefore experts in others – celebrities weighing in on complex political topics, doctors speaking as experts on exercise or nutrition, and experts in technological fields weighing in on psychological topics, amongst others.

But just how stupid are we?

But it is not just experts and experienced leaders who can give bad advice and make poor decisions. Laypeople who interact with experts are also part of the problem. It is difficult to engage in a constructive conversation with an expert when the other party has only limited or superficial knowledge – and this happens often in the “Google-fueled, Wikipedia-based, Blog-sodden” environment we live in, where many people think they know a lot but do not. As an immigrant to the US, I was initially shocked at comments from a friend and colleague along the lines of “never underestimate the stupidity of the American people”. The level of ignorance amongst Americans about basic civics, for example, is quite alarming. According to FindLaw, a website with legal information, two-thirds of Americans can’t name a single justice on the Supreme Court, while 35% of Americans can’t name a single branch of our government. Similarly, according to a survey by the Benenson Strategy Group in Washington D.C., 91% of the people surveyed said they would vote in the next presidential election even though 77% of them couldn’t name one of the senators in their home state (https://www.theodysseyonline.com/americans-educated-basic-politics).

On the other hand, almost one quarter of Americans can name all five members of the Simpson family from the sitcom The Simpsons (Shenkman, 2008). Historian Yuval Harari, a professor at the Hebrew University of Jerusalem, places a global spin on this problem. He is quoted as saying, “One thing that history teaches us is that we should never underestimate human stupidity … It’s one of the most powerful (and destructive) forces in the world.” In the CNBC news article below, Harari expresses concern about the ability of populist leaders – a group he described as “selling people nostalgic fantasies about the past instead of real visions for the future” – to solve today’s biggest global problems.

https://www.cnbc.com/2018/07/13/never-underestimate-human-stupidity-says-historian-and-author.html

But it’s not just ignorance; it’s also laziness. Tom Nichols cites Philip Tetlock’s research on the interaction of experts and non-experts, and notes that one of the biggest barriers to keeping experts honest and to having robust debates is the average person’s “laziness” – the unwillingness to make an effort to educate themselves, to some degree at least, on the topic being discussed. It is difficult to have robust debates on important topics between experts and everyday people who think they have knowledge but really do not. As Nichols notes, “the most poorly informed among us are those who seem to be the most dismissive of experts”.

The Response: New Perspective on Expertise

Experts make mistakes and are frequently wrong. The average layperson – and especially the “Google-fueled, Wikipedia-based, Blog-sodden” community – is wrong and misguided even more often. We need mechanisms that prevent this situation from polarizing people, and that allow constructive engagement to move difficult subjects forward and develop solutions collaboratively. Many processes, structures and systems are already used in business, academia, industry and the scientific community to limit expert failure and to overcome, to some degree at least, the general lack of knowledge among laypeople. Following are several areas in which improvements could be made:

The roles of experts and lay people in organizations (and society)

The role of experts, and their interaction with non-experts and the public, should be better defined and understood. Indeed, the process of research and acquiring knowledge should be better understood, from the national level down to one’s local milieu, such as a school district. Knowledge is not static – expertise is a “moving target” and is forever evolving. The average citizen (and employee) should understand that what is considered cutting-edge knowledge today may be replaced or refuted in the future. This does not mean that experts cannot be trusted and that “Google-fueled, Wikipedia-based, Blog-sodden” knowledge should take their place.

There is also value, and risk mitigation, in separating the roles of experts and “deciders” (Nichols, 2017): experts should advise, and leaders should decide. This structure worked very well in a technology company I consulted with, where leaders were often partnered with “fellows” – deeply experienced scientists who most often acted in advisory roles. In this way, senior leaders could listen to the advice and opinions of numerous stakeholders and then make better-informed decisions.

A process for engagement that includes mutual respect and courtesy

The lack of process and courtesy is best demonstrated by recent videos of parents screaming abuse at local school boards over mask mandates and vaccinations; police are occasionally called in given the level of vitriol. When questioned afterwards by local news journalists, these parents often cite vague, unsubstantiated claims they “read somewhere” but vehemently support. By contrast, I recently participated in a political forum run by an organization called Braver Angels, whose intent is to bring people of very different political views together to engage in constructive debates on tough issues. The process and rules that underpin these debates produce effective discussions, improved relationships and constructive outcomes.

Mechanisms for improved expert guidance and decision-making

In their book “Noise”, Daniel Kahneman, Olivier Sibony and Cass Sunstein describe how bias and “noise” can negatively impact the decision-making of leaders, experts and people in general. They also provide techniques and mechanisms – some quite simple – that can be applied to significantly improve understanding and decision-making by reducing noise. This is a fascinating and extensive topic, and I have included a summary (primarily based on Kahneman’s earlier book “Thinking Fast and Slow”) in the appendix to this article (see below).

Experts must be held accountable.

If experts work in business, for example, company culture (embedded in process) should encourage leadership-level experts to be challenged (courteously, of course). There are specific techniques and processes that organizations can introduce to encourage this kind of debate. I experienced how effectively these processes can operate when consulting with Chevron some years ago, and I saw the impact on the organization some years later when these processes were largely disbanded. Academic research has a sound model for ensuring that experts and senior leaders are held accountable and kept honest.

Lay people should be held accountable.

In a business context, employees who engage in debate with experts need to have done their homework, and specific guidelines must be in place for these kinds of debates. In the appendix to this essay, for example, I describe the impact of using simple checklists for people who engage in debates with experts and decision-makers; such a checklist could easily have been applied to the parent confronting the school board who ends up being escorted out by police. While checklists (that outline the need for preparation, rules and protocol) may not totally eliminate bad behavior, they could pre-empt much of the animus.

Conclusion

Sifting through the various perspectives of the authors quoted in this essay, it is apparent that there is a dangerous imbalance in the relationship between so-called experts, leaders and decision-makers, and laypeople. This imbalance urgently needs to be corrected, understanding that solutions, certainly at the level of society, take time to implement. The dynamics of this relationship, and the possible solutions, are numerous and complex.

References

Daniel Kahneman, Thinking Fast and Slow. Farrar, Straus and Giroux, New York, 2011.

Daniel Kahneman, Olivier Sibony, Cass R. Sunstein, Noise. Hachette Book Group, 2021.

Elliot Aronson with Joshua Aronson, The Social Animal. Worth Publishers, 2018.

Rick Shenkman, Just How Stupid Are We? Basic Books, 2009.

Tom Nichols, The Death of Expertise. Oxford University Press. 2017.

Appendix

A case for standardized checklists, algorithms and simple rules to reduce complexity and improve the understanding and decision-making of experts and laypeople alike

(This appendix is an excerpt from my book “The House of Culture”)

Daniel Kahneman (2011), the psychologist and Nobel Prize winner in economics, quotes Paul Meehl (whom Kahneman rates as “one of the most versatile psychologists of the twentieth century”) as saying that one reason experts are almost always outperformed in predictive capability by simple algorithms is that they believe they are quite capable of dealing with massive amounts of data and information – and they are almost always wrong. They know that they are very smart people, but they “try to be (too) clever, think outside the box and consider complex combinations of features in making predictions – complexity (most often) reduces validity”. Many studies have shown that human decision-makers are inferior to relatively simple formulae, statistics and checklists when assessing and making decisions about complex scenarios such as mergers and acquisitions, amongst others. In research studies, even when smart people are given the result produced by a formula, they tend to overrule or ignore it because they feel they have more knowledge and information than the formula. Kahneman notes that “they are most often wrong”.
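
To make the contrast concrete, here is a minimal sketch (in Python) of the kind of “simple formula” Meehl and Kahneman have in mind: an equal-weight score over a few pre-selected indicators, applied mechanically against a fixed cutoff. The indicators, scores and cutoff below are purely illustrative assumptions of mine, not values taken from any of the studies cited in this essay.

```python
# Illustrative only: an equal-weight "simple formula" of the kind Meehl and
# Kahneman describe. The indicators and cutoff are hypothetical examples,
# not drawn from any study cited in this essay.

def simple_score(candidate: dict) -> int:
    """Sum a few pre-selected indicators, each rated 0, 1 or 2."""
    indicators = ["preparation", "track_record", "peer_rating", "test_result"]
    return sum(candidate.get(name, 0) for name in indicators)

def predict_success(candidate: dict, cutoff: int = 5) -> bool:
    """Predict success whenever the equal-weight score clears the cutoff.

    No clever exceptions, no "complex combinations of features" - which is
    exactly why, in the research described above, rules like this tend to
    be more consistent than case-by-case expert judgment.
    """
    return simple_score(candidate) >= cutoff

if __name__ == "__main__":
    applicant = {"preparation": 2, "track_record": 1, "peer_rating": 2, "test_result": 1}
    print(simple_score(applicant), predict_success(applicant))  # 6 True
```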

Standardized approaches, simple algorithms and checklists can be very powerful tools. Atul Gawande (2013), a general surgeon in Boston and assistant professor at Harvard Medical School, defines the power of checklists in this way:

We (humans) have accumulated stupendous know-how. We have put it in the hands of some of the most highly skilled and hardworking people in our society. And with it they have accomplished extraordinary things. Nonetheless, that know-how is often unmanageable. Avoidable failures are common and persistent, not to mention demoralizing and frustrating across many fields – from finance, business to government. And the reason is increasingly evident: the volume and complexity of what we know has exceeded our individual ability to deliver its benefits correctly, safely and reliably. Knowledge has both saved us and burdened us … but there is such a strategy (to solve this problem) – though it is almost ridiculous in its simplicity, maybe even crazy to those who have spent years carefully developing ever more advanced skills and technologies (and indeed is resisted in many companies for this reason). It is a checklist!

Kahneman recounts his own judgment and predictive capabilities (or lack thereof) as a young military psychologist charged with assessing the leadership capabilities of aspiring officers; he was initially dismal at this task. He also highlights how poorly highly trained counselors predicted the success of college freshmen, despite having several aptitude tests and other extensive data, compared with a simple statistical algorithm using a fraction of the available information – the algorithm was by far the more successful. Kahneman goes on to reference cases of experienced medical doctors predicting the longevity of cancer patients, the prediction of babies’ susceptibility to sudden infant death syndrome, predictions of new business success and evaluations of credit risk, all the way to marital stability and the future value of fine Bordeaux wines. In all these cases, the accuracy of highly trained experts was most often exceeded by simple algorithms, much to the consternation, occasional anger and derision of the experts concerned.

Jonah Lehrer (2009) similarly references studies conducted at MIT in which students given access to large amounts of data performed poorly in predicting stock prices compared with a control group of students who had access to far less information. He notes that the prefrontal cortex has great difficulty NOT paying attention to large amounts of information, which can overwhelm the brain’s ability to estimate and predict. Access to excessive quantities of information can have “diminishing returns” when conducting assessments and predicting future outcomes, he says. Lehrer comments that corporations, in particular, often fall into this “excessive information” trap, investing huge amounts of resources in collecting data that then overwhelms and confuses the human brain rather than informing decision-making. He describes the remarkable situation of medical doctors diagnosing back pain several decades ago: with the introduction of MRI in the 1980s, and the far greater detail it made available, practitioners hoped that increasingly better predictions of the sources of back pain would follow. The converse happened – the massive amount of detail produced by the MRI actually worsened their assessments and predictions. Kahneman refers to scenarios that contain a high level of complexity, uncertainty and unpredictability as “low-validity environments”. Without doubt, assessing and predicting the outcome of cultural change initiatives falls into this category.

Experts are “less competent than they think they are”

Kahneman explains the state of our deep knowledge and experience in so many fields in the following way: “Psychologists have confirmed that most people genuinely believe they are superior to most others on most desirable traits…. (for example) Leaders of large businesses sometimes make huge bets on expensive mergers and acquisitions acting on the mistaken belief they can manage the assets of another company better than its current owners … (in many cases) they are simply less competent than they think they are”. Kahneman also notes, rather humorously, that humans are “incorrigibly inconsistent” in making judgments about complex situations – but the reality can be serious. He describes situations in which experienced radiologists evaluating the same chest X-rays contradict each other 20% of the time on whether they are “normal” or “abnormal”; sometimes they even contradict their own evaluations on a second assessment. Similarly, my personal experience driving culture change in large corporations is that many executive leaders (usually with very strong and dominant personalities) hold strong opinions about the future success of their culture change initiatives. With differing backgrounds and experience, and often relying on culture research studies that produce large quantities of statistics (which, in my view, can distract focus), their views of what needs to be done and their predictions of future outcomes are often all over the map. This inconsistency amongst leaders is “destructive of any predictive validity”, Kahneman says. He labels this sense of inflated competence the “hubris hypothesis”. These executives are most often “less competent than they think they are”, he says, and I agree.

Algorithms, statistics, checklists and simple rules

The power of something as simple as a checklist has been shown by Kahneman to have “saved hundreds of thousands of infants”. He gives the example of newborn infants a few decades ago: obstetricians had always known that an infant not breathing normally within a few minutes of birth is at high risk of brain damage or death. Through the 1950s, physicians and midwives typically used their varying levels of medical judgment to determine whether a baby was in distress, each relying on their own experience and on different signs and symptoms to gauge the level and extent of that distress. Because practitioners looked at different symptoms, danger signs were often overlooked or missed, and many newborn babies died. When Virginia Apgar, an American obstetrical anesthesiologist, was asked somewhat casually by a student how to make a systematic assessment of a newborn, she responded “that’s easy” and jotted down five variables (heart rate, respiration, reflex, muscle tone and color) and three scores (0, 1 or 2, depending on the robustness of each variable). Apgar began applying this assessment to every infant she handled, about sixty seconds after birth. A baby scoring eight or higher was likely to be in excellent condition; a baby scoring four or less was in trouble and needed immediate attention. What is now called the “Apgar Test” is used in every delivery room every day and is credited with saving thousands of infant lives.

Similarly, a report on CNN.com as recently as March 2014 (Hudson, 2014) indicated that about one in twenty-five patients who seek treatment in US hospitals contracts an infection from the hospital, and that patients acquired some 721,800 infections in 2011. That statistic is, however, significantly better than in previous years – an improvement of about 44% from 2008 to 2012 – a result that came from “requiring hospitals to follow a simple checklist of best practices”. Simple checklists focused on complex situations work!
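
To illustrate just how mechanical this kind of tool is, here is a short Python sketch of Apgar-style scoring as described above – five variables, each rated 0, 1 or 2, with the thresholds of eight (reassuring) and four (needs immediate attention) mentioned in the text. It is a simplified illustration of the idea only, not a clinical tool.

```python
# A sketch of Apgar-style checklist scoring, for illustration only.
# Each of the five variables is rated 0, 1 or 2 by the practitioner.

APGAR_VARIABLES = ["heart_rate", "respiration", "reflex", "muscle_tone", "color"]

def apgar_score(ratings: dict) -> int:
    """Total of the five ratings; each must be 0, 1 or 2."""
    total = 0
    for name in APGAR_VARIABLES:
        value = ratings[name]
        if value not in (0, 1, 2):
            raise ValueError(f"{name} must be rated 0, 1 or 2, got {value}")
        total += value
    return total

def interpret(score: int) -> str:
    """Apply the simple thresholds described in the text."""
    if score >= 8:
        return "likely in excellent condition"
    if score <= 4:
        return "in trouble - needs immediate attention"
    return "intermediate - monitor closely"

if __name__ == "__main__":
    ratings = {"heart_rate": 2, "respiration": 2, "reflex": 1, "muscle_tone": 2, "color": 1}
    score = apgar_score(ratings)
    print(score, interpret(score))  # 8 likely in excellent condition
```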

Resistance to assessment, prediction and tracking methods

Kahneman writes in detail about the level of resistance, even hostility, that he and other researchers have met when presenting the results of this research. From medical professionals to psychologists and wine producers, these experts either rejected or ignored the results, and in some cases responded with derision. Perhaps this is predictable: the results challenge the assessment and predictive capabilities of the very experts who have honed their skills over many years and have understandably developed high opinions of those capabilities.

Kahneman quotes Gawande, who writes in his book “The Checklist Manifesto”:

We don’t like checklists. They can be painstaking. They’re not much fun. But I don’t think the issue (people’s resistance) here is mere laziness. There’s something deeper, more visceral going on when people walk away, not only from saving lives, but from making money. It somehow feels beneath us to use a checklist, it’s an embarrassment. It runs counter to deeply held beliefs about how the truly great among us – those heroes we aspire to be – handle situations of high stakes and complexity. The truly great are daring. They improvise. They do not need protocols and checklists. Maybe our idea of heroism needs updating.

I agree with this sentiment. I have experienced this kind of response, verging on disdain, when presenting various checklists related to change and transformation. Somehow a checklist, algorithm or computation trivializes people’s personal sense of expertise, making them feel less expert. But I believe a key element of introducing assessments and checklists is missed in Kahneman’s discussion: these tools should be developed – as far as possible – together with the experts who will ultimately use them. This is a basic “behavioral change” principle, designed to overcome the “not invented here” syndrome. This principle has helped me introduce checklists into organizational change initiatives where many executives feel they “know it all”. Kahneman’s “close your eyes” rule is also valuable in these situations.
