
Technological Acceleration: The Crisis of Information, Reality and One’s Sense of Self

Kevin Weitz, Psy.D. and William Bergquist, Ph.D.

The negative impact of Fake News, especially within the political, economic, and social environments, is increasing, emphasizing the need to detect and identify these Fake News stories in near real-time. Furthermore, the latest trend of using Artificial Intelligence (AI) to create fake videos, known as “DeepFakes” or “FakeNews 2.0”, is a fast-growing phenomenon creating major concerns. AI technology enables basically anyone to create a fake video that shows a person or persons performing an action at an event that never occurred. Although DeepFakes are not as prevalent and widespread as Fake News articles, they are increasing in popularity and have a much greater effect on the general population. In addition, the sophistication behind the creation of DeepFake videos increases the difficulty of identifying and detecting them, making them a much more effective and destructive tool for perpetrators.
Counter Misinformation (DeepFake & Fake News) Solutions Market – 2020-2026 (reportlinker.com)

Two main characteristics of deepfakes make them uniquely suited for perpetuating disinformation. First, like other forms of visual disinformation, deepfakes utilize the “realism heuristic” (Sundar, 2008) where social media users are more likely to trust images and audio (rather than text) as a more reliable depiction of the real world. As technology progresses, the manipulated reality could be more convincing, amplifying the consequences of disinformation. The second characteristic is the potential to delegitimize factual content, usually referred to as exploiting “the liar’s dividend” (Chesney and Citron, 2019: 1758). People, and especially politicians, can now plausibly deny the authenticity of factual content.
Navigating the maze: Deepfakes, cognitive ability, and social media news skepticism (researchgate.net)

The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has. Deep neural networks (DNN)—made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains—often seem to mirror not just human intelligence but also human inexplicability.
Scientists Increasingly Can’t Explain How AI Works (vice.com)

As artificial intelligence and deepfake technology continue to advance, the risks they pose are becoming more evident. While these technologies have the potential to revolutionize various fields, including entertainment and communication, they also have the potential to do significant harm.

Deepfake technology has already been used to create convincing but entirely fabricated videos, audio, and images. While initially used for entertainment purposes, it is now being used for malicious intent, from creating revenge porn to impersonating high-profile individuals. As the technology becomes more sophisticated, it will become increasingly difficult to differentiate between what is real and what is not, causing significant trust issues in various sectors.

The dangers of artificial intelligence are equally concerning. The more advanced the technology becomes, the more it poses a risk to humanity. There is already evidence of AI being used for nefarious purposes, such as the development of autonomous weapons, which could trigger wars and conflicts, and the creation of social media bots, which can be used to spread misinformation and propaganda.

Moreover, as AI systems become more complex, it is difficult to predict how they will behave in certain situations. There is a risk that AI could become uncontrollable, leading to unintended consequences, or that it could be used to manipulate people and organizations, causing chaos and destabilizing society.

The risks of artificial intelligence and deepfake technology cannot be ignored. These technologies are still in their infancy, and there is a real danger that they could be used to cause significant harm. One specific example of an individual being duped by AI is the case of a CEO who transferred $243,000 to what he believed was the account of a Hungarian supplier. However, it turned out that the CEO had fallen victim to an AI-powered deepfake audio attack, where the attacker had used AI to replicate the Hungarian supplier’s voice and make it seem like he was requesting the funds transfer.

This incident highlights the potential dangers of deepfake technology and the importance of being cautious and verifying information. As such, it is essential to have strict regulations in place to govern their development and use to mitigate the potential risks. It is also vital that individuals and organizations remain vigilant and cautious about how they use these technologies and the information they provide, lest they fall prey to their dangers.

Were you vigilant or were you just duped?

As a reader, you may – though more likely not – have noticed something different about these introductory paragraphs: they were written entirely by the artificial intelligence system ChatGPT! They were generated by posing a couple of simple questions, and although these authors reviewed the text for applicability, we changed very little. And as ChatGPT itself noted, this technology is in its infancy.

The Risks (and Benefits) of Artificial Intelligence

Recently, a mentally unstable individual broke into the home of Nancy Pelosi, who was then Speaker of the House. The intruder viciously attacked her husband with a hammer. Current reporting suggests that the accused perpetrator was deeply immersed in conspiracy theories, devouring misinformation and participating in “dark-web” collaboration forums. While these current social media forums and their algorithms are adept at dragging people deeper and deeper into misinformation, technology experts suggest that what we currently experience is trivial compared to what we are likely to experience in coming years.

The advent of the metaverse, artificial intelligence and deepfake technologies is likely to greatly accelerate the opportunities for people with a propensity to believe misinformation and conspiracy theories to be indoctrinated into whatever cult or sinister movement is looking for naïve recruits. Moreover, it is almost assured that information provided virtually by experts and leaders will be disbelieved.

A Risky World I: Artificial Intelligence and Deepfake Technology

We are reaching a point where any information we read online needs to be questioned. Moreover, this is truly a “Liar’s Dividend” situation in which Machiavellian individuals can easily deny or undermine any information that contradicts their positions or viewpoints – in other words “everything I disagree with is fake news”.

…an allegation of a deepfake or fake news can provide rhetorical cover. To avoid cognitive dissonance, core supporters or strong co-partisans may be looking for an “out” or a motivated reason (Taber and Lodge 2006) to maintain support for their preferred politician in the face of a damaging news story.

In essence, the technologies described below can be used to undermine factual information and support lies and misinformation at every level – all of us will have to question everything we read, view or hear. Developing critical thinking (see the chapter on this topic) is therefore essential.

As these emerging technologies are new and evolving, they require some upfront description. However, these authors are psychologists and not technologists – so, the next section is largely referenced information (in italics) about some of these technologies and how they are likely to operate and impact us – our commentary and insight follows. To be clear, this is not some entirely futuristic fantasy: These technologies – while in their infancies – are operating today and rapidly growing in sophistication.

Artificial Intelligence

Artificial intelligence is an expansive branch of computer science that focuses on building smart machines. Thanks to AI, these machines can learn from experience, adjust to new inputs, and perform human-like tasks. For example, chess-playing computers and self-driving cars rely heavily on natural language processing and deep learning to function. [8 Examples of Artificial Intelligence in our Everyday Lives (edgy.app)]

There are four types of artificial intelligence: reactive machines, limited memory, theory of mind and self-awareness. [Understanding the Four Types of Artificial Intelligence (govtech.com)]

1. Reactive machines: The most basic types of AI systems are purely reactive and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of a Type I machine. Deep Blue can identify the pieces on a chess board and know how each piece moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the optimal moves from among the possibilities. But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world.

The current intelligent machines we marvel at either have no such concept of the world or have a very limited and specialized one for their particular duties. The innovation in Deep Blue’s design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcome. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov. Similarly, Google’s AlphaGo, which has beaten top human Go experts, can’t evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue’s, using a neural network to evaluate game developments.

These methods do improve the ability of AI systems to play specific games better, but they can’t be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world – meaning they can’t function beyond the specific tasks they’re assigned and are easily fooled. They can’t interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: You want your autonomous car to be a reliable driver. But it’s bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won’t ever be bored or interested or sad.
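To make this idea of a purely reactive, memoryless evaluation concrete, here is a minimal game-tree sketch in Python. It is our own illustration (not Deep Blue’s actual algorithm, and not part of the referenced article): the machine rates the positions reachable from the current one and prunes branches it judges not worth pursuing, but it retains nothing from game to game.

```python
# Minimal sketch of "reactive" game-tree search: rate possible future positions,
# prune branches that cannot change the decision, remember nothing afterwards.
# Illustration only, not Deep Blue's or AlphaGo's actual code.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    score: Optional[int] = None               # static evaluation at a leaf position
    children: Optional[List["Node"]] = None   # positions reachable by the next move

def minimax(node: Node, maximizing: bool,
            alpha: float = float("-inf"), beta: float = float("inf")) -> float:
    """Return the best achievable score, pruning branches that cannot matter."""
    if not node.children:                      # leaf: just rate the board as it stands
        return node.score
    best = float("-inf") if maximizing else float("inf")
    for child in node.children:
        value = minimax(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if beta <= alpha:                      # "narrow its view": stop exploring this branch
            break
    return best

# Toy tree: two candidate moves, each with two possible replies.
tree = Node(children=[
    Node(children=[Node(score=3), Node(score=5)]),
    Node(children=[Node(score=2), Node(score=9)]),
])
print(minimax(tree, maximizing=True))          # prints 3: the move whose worst outcome is best
```

Nothing is stored between calls; every decision starts from the board as it stands, which is exactly what makes such a system reliable but inflexible.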

2. Limited memory: This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment, but rather requires identifying specific objects and monitoring them over time. These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel. So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to make up for human shortcomings by letting the machines build their own representations.
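The “transient memory” described here can also be sketched in a few lines of Python. The class and numbers below are hypothetical (they come from no real autonomous-driving codebase): a short buffer of recent observations of another car is kept just long enough to estimate its speed and direction, then discarded rather than added to any long-term library of experience.

```python
# Hypothetical sketch of "limited memory": keep only the last few observations of
# another vehicle, use them to estimate its motion, and let them expire.
from collections import deque

class TrackedVehicle:
    def __init__(self, window: int = 5):
        # Only the most recent (time, x, y) observations are remembered.
        self.history = deque(maxlen=window)

    def observe(self, t: float, x: float, y: float) -> None:
        self.history.append((t, x, y))

    def velocity(self):
        """Estimate (vx, vy) from the oldest and newest retained observations."""
        if len(self.history) < 2:
            return None
        (t0, x0, y0), (t1, x1, y1) = self.history[0], self.history[-1]
        dt = t1 - t0
        return ((x1 - x0) / dt, (y1 - y0) / dt)

car_ahead = TrackedVehicle()
for t, x in [(0.0, 0.0), (0.1, 1.5), (0.2, 3.1)]:
    car_ahead.observe(t, x, y=0.0)
print(car_ahead.velocity())   # roughly (15.5, 0.0) metres per second
```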

3. Theory of mind: We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific to discuss the types of representations machines need to form, and what they need to be about. Machines in the next, more advanced, class form representations not only about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior. This was crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly. This will be quite a challenge—since many human beings lack this capacity and as a result are unable to experience empathy for other people or act in ways that benefit other people more than themselves. As we have noted in previous essays, those people who lack a clear and sustained theory of mind are among those most likely to be attracted to an authoritarian leader.

4. Self-awareness: The final step of AI development is to build systems that can form representations about themselves. Ultimately, AI researchers will have to not only understand consciousness but build machines that have it. In a sense, this is an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is also called “self-awareness” for a reason. (“I want that item” is a very different statement from “I know I want that item.”) Conscious beings are aware of themselves, know about their internal states, and are able to predict the feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step toward understanding human intelligence on its own. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.

I Understand You – I Know How You Feel!

As described above, while AI is not there yet, it’s getting there. In other words, it is predictable that in the future AI technology will be able to learn how we think and feel. For example, an individual with a personality profile that is susceptible to conspiracy theories can begin to explore online dialogue about, say, QAnon. Instead of a “simple” algorithm that currently channels the individual down a digital path providing more and more conspiracy theory material, the AI engine is able to digitally interact with human-like empathetic response – “I understand how you feel, and I get that you might be angry”. This is a powerful long-term implication.

In this “understanding and empathetic” dialogue, an AI terrorist recruiter (as described below) could lure unsuspecting and susceptible individuals into a web of misinformation, deceit and potentially acts of violence. This kind of indoctrination is happening today (as in the Pelosi attack). It is just that AI-enabled interaction is likely to be much more effective.

In defense against this negative influence, psychological research suggests that being “authentic” or “real” (realness defined as the tendency to act on the outside the way one feels on the inside) is measurable (Hopwood et al., 2021). As most of us know, we often feel and think one way but act in an entirely different way toward others around us (we moderate our behavior based on our environment and the response we expect from others) (Bergquist, 2023). It is possible that “authenticity”, or the lack of it (for example, lying), is also observable. In the future, it may be feasible for AI-powered facial recognition, for example, to assess authenticity or the lack of it. This could be extremely valuable for identifying Machiavellian liars in the moment. For example, if we are watching a deepfake video, a pop-up notification may warn that there is a high probability of deepfake manipulation.

However, this technology could also be dangerous, with the potential that AI may be able to identify vulnerable and susceptible individuals who could be easily influenced to join a cult movement or a terrorist organization, or simply be indoctrinated into becoming a “lone-wolf” murderer who believes that his family has “serpent DNA and must be killed” (California dad killed his kids over QAnon and ‘serpent DNA’ conspiracy theories, feds claim (nbcnews.com)).

Currently, there are a number of fact-checking apps and tools that are able to provide some guidance about the truth, or lack of truth, in reported news (the app “FactStream”, for example). Some of these require humans – usually journalists – to provide the analysis behind a truth-rating. Our guess is that AI technologies will increasingly enable this fact-checking/authenticity process – let’s hope!
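As a purely illustrative sketch of what automated claim matching might look like, the snippet below compares a new claim against a tiny, invented set of fact-checked claims using simple string similarity. Real tools such as FactStream rely on journalist-written verdicts and far more sophisticated matching; nothing here reflects any actual product’s code or data.

```python
# Toy claim-matching sketch: the fact-check entries and threshold are invented
# for illustration; a real system would query a curated fact-check database.
from difflib import SequenceMatcher

FACT_CHECKS = {
    "the moon landing was staged in a studio": "False",
    "drinking water helps prevent dehydration": "True",
}

def closest_verdict(claim: str, min_similarity: float = 0.6):
    """Return (matched claim, verdict) for the most similar fact-check, if any."""
    best_match, best_score = None, 0.0
    for known_claim, verdict in FACT_CHECKS.items():
        score = SequenceMatcher(None, claim.lower(), known_claim.lower()).ratio()
        if score > best_score:
            best_match, best_score = (known_claim, verdict), score
    return best_match if best_score >= min_similarity else None

print(closest_verdict("The moon landing was staged in a Hollywood studio"))
```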

A Risky World II: The Metaverse

A digital reality that combines aspects of social media, online gaming, augmented reality, cryptocurrencies and virtual reality is how Investopedia defines “metaverse.” Forbes magazine calls it the mirror world of alternate digital realities where people work, play and socialize.

Welcome to the metaverse–a collective virtual shared space created by the convergence of virtually enhanced physical reality and physically persistent virtual space. The term is used to describe Web 3.0, a new Internet era that will fundamentally change the way we interact in the digital world. [Convergence of AI and blockchain in the metaverse | Joseph Araneta Gamboa (businessmirror.com.ph)]

The metaverse, a term coined by science fiction writer Neal Stephenson, describes an internet-run universe where users can socialize and play in a virtual setting. For online video gamers, a metaverse may be the paradise they’re hoping for, as they can easily switch between gaming and virtual socializing. However, for many others, there are many questions and risks still needing to be addressed in the adoption of a metaverse.

The dark side of the metaverse: While companies like Facebook are now pouring millions into developing new metaverse technology, they are also the source of thousands of misinformation cases or fake news stories. Facebook in particular has been having fake news problems, and with the 2020 U.S. election specifically, was very purposeful about who could publish what.

As a metaverse will be an expansion of social sites like Facebook, it’s important to understand how misinformation might spread in a metaverse.  Despite the consumer-facing version of the metaverse being more of an idea than a real thing thus far, there have already been concerns about the potential for cybercriminals and others with malicious intent.

What are the Risks and Opportunities of the Metaverse? We might find that some people start using the metaverse for criminal activities, whether for fraud, disinformation, money laundering, or child exploitation. Indeed, as research shows, malevolent actors are often nothing if not creative in finding new ways to do harm, with researchers concerned that this could even extend into terrorist activity. [metaverseinsider.tech]

Suffice it to say, there has never been a technology invented to date that did not have the potential for both good and bad, and the metaverse will certainly not be any different. That should not prevent us, however, from exploring the possible harm the platform could cause to society. For instance, it could act as a fertile environment for recruitment, whether into criminal gangs or into terrorist cells.

The web has become the bedrock of recruitment for all forms of modern extremism in recent years, and the metaverse promises to exacerbate the problem by making it easier still to meet up. Indeed, with technologies like deep fake videos rapidly progressing, it’s feasible that Osama bin Laden could be resurrected in digital form to give talks and recruit the next generation of jihadists. [What are the cyber risks posed by the metaverse? | Cybernews]

Deepfake technology

Deepfakes are images or videos which have been altered to feature the face of someone else, like an advanced form of face-swapping.  Although there are some deepfake videos which are very clearly doctored and inauthentic, most look and sound convincingly real. The most common use of this so far has been deepfake pornography in which the face of an actor in a pornographic video is replaced with that of a celebrity. 

Deepfake technology has also been used to contribute to fake news, hoaxes, revenge porn, and other types of deception. For example, a deepfake video could feature a politician saying things they never stated through image and audio manipulation, like this video of President Obama (“You Won’t Believe What Obama Says In This Video!” – YouTube).

The name comes from a combination of two terms: “deep learning” and “fake.” Deep learning is a type of machine learning based on artificial neural networks (ANNs). The artificial intelligence algorithm used to create deepfakes is enhanced by generative adversarial networks (GANs), which in this case involve two neural networks working in tandem: one “generator” that creates the images, and another “discriminator” which determines how real or fake they appear. Once the data set has been analyzed on multiple layers (which is where the “deep” part of the name originates), it can then be extracted to be applied to other media.

For instance, certain types of deep learning software could analyze existing videos of a celebrity for 3D facial mapping, which aids in recreating realistic movements and familiar mannerisms and expressions in a deepfake video. Similarly, recorded audio of someone speaking could be processed for cadence, tone, and other speech patterns, allowing a deepfake creator to produce synthetic audio.  As artificial intelligence technology has become increasingly sophisticated, it has also become easier than ever to create deepfakes capable of deceiving viewers. [What Is a Deepfake? Everything You Need To Know | DeepFake.com]
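For readers who want to see the generator/discriminator pairing in code, here is a toy adversarial training loop written in PyTorch. It is a sketch under heavy simplifications (tiny fully connected networks and random stand-in data rather than face images); real deepfake pipelines add face alignment, convolutional architectures and vast amounts of data, but the adversarial logic is the same.

```python
# Toy GAN loop: a generator learns to produce samples the discriminator
# cannot distinguish from "real" ones. Sizes and data are illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64   # illustrative sizes, not from any real system

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                              nn.Linear(128, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.randn(32, data_dim)   # stand-in for features of real images

for step in range(100):
    # 1. Train the discriminator to tell real samples from generated ones.
    fake_batch = generator(torch.randn(32, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_batch), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator.
    fake_batch = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Because the two networks improve against each other, the generator’s output becomes progressively harder to distinguish from the real training material, which is one reason deepfakes keep getting more convincing.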

We are Vulnerable

It is feasible to imagine a future in which susceptible individuals become immersed in a fictitious virtual world and eventually believe that it is real. Indeed, this already occurs at some level. With the emerging metaverse, artificial intelligence technologies and deepfake capabilities, Machiavellian leaders will be capable of fabricating information and a worldview that fully convinces and indoctrinates susceptible individuals.

Critical thinking and an educated awareness of what these technologies are capable of will become essential if any of us are to trust what we see, hear and experience – a truly frightening scenario. Essentially, we will have to question all technology-based information we consume.

Susceptible, gullible individuals

Previously referenced research (Ahmed, 2021, Navigating the Maze) suggests that individuals with lower cognitive abilities are more likely to focus on and believe information that:
* Includes a message that is threatening (our “fast” brains tend to focus on danger)
* Involves multiple elements (video images, text and voice intermixed)
* Requires significant attention and cognitive effort to interpret
* Aligns with their in-group beliefs

These individuals will tend to ignore some aspects of the message and focus their limited cognitive resources on only a few select aspects of the mediated message. In other words, people with limited cognitive abilities and critical thinking are more likely to quickly believe messages that are specifically targeted at susceptible people to influence their thinking, beliefs and actions.

Critical thinking skills

Saifuddin Ahmed (2021) argues that since individuals have limited cognitive capacity, they seldom process all facets of a message and instead focus on only a few salient features to encode the information. Similarly, evaluations of credibility are judged based on only a few salient aspects. Nevertheless, some individuals may have higher cognitive abilities than others. Increasingly sophisticated media and mediated messaging designed to manipulate require higher levels of cognitive ability and critical thinking to navigate. [Navigating the maze: Deepfakes, cognitive ability, and social media news skepticism – Saifuddin Ahmed, 2021 (sagepub.com)]

The first direct exposure of both of us to deepfake technology was a video of Barack Obama (note: this is a YouTube video link that may include ads or other material – “You Won’t Believe What Obama Says In This Video!” – YouTube). At first viewing, we were both stunned and momentarily assumed it was real. Our “fast” thinking system was momentarily duped, but our “slow” cognitive capabilities kicked in quickly.

We both began to critically explore the authenticity of the video and eventually learned that it was fake. But this took cognitive effort. Indeed, neither of us wanted to believe that it was true, so we invested the cognitive work needed to find out the truth. But what about individuals who want to believe this kind of information? What if this kind of misinformation aligns with their in-group beliefs? It is very likely that these individuals will have no interest in investing cognitive effort (or will simply lack the cognitive problem-solving skills) to deeply consider these messages – the more real they appear, the more believable they will be.

The punchline here is that mediated messages developed via AI/metaverse/deepfake technologies are likely to greatly enhance the risks we have outlined in previous essays:
* In a highly polarized society where in/out-groups are highly distrustful of each other, the likelihood that mediated misinformation is believed without cognitive scrutiny is greatly enhanced.
* People with lower cognitive abilities and limited critical thinking skills are likely to be easily misled by messaging using the technologies discussed in this chapter.
* Machiavellian leaders and influencers are likely to adopt these technologies rapidly and expansively to manipulate susceptible people.

Advice about how to overcome this sinister use of AI and Metaverse technologies often leans on the need for effort and education for seeking the truth. [For example, the website (Overview ‹ Detect DeepFakes: How to counteract misinformation created by AI — MIT Media Lab) lists numerous techniques for identifying deepfake videos. ]

Mitigation strategies

The problem with all of these suggestions is that each of them requires that we “pay attention” to various aspects of the message and video. As we have described previously, many people have no interest in scrutinizing material that aligns with their in-group thinking and beliefs, and/or lack the cognitive endurance to do so.

For those interested, the Massachusetts Institute of Technology (MIT) (noted in the article above) maintains a research website where anyone can test their capability to detect deepfakes and misinformation. Even our limited technical knowledge suggests that deepfake sophistication has progressed significantly since this MIT project, to the point where some examples of this type – which we correctly identified – would now be undetectable.

As artificial intelligence technologies are used to create deepfakes and misinformation, new technologies are being developed to counter this risk. The report referenced earlier in this chapter noted measures to counter misinformation and deepfakes. This counter effort is referred to as “warfare” – an indication of how seriously people and organizations consider this threat to society’s values and beliefs. Experts in this field consider it a new and evolving threat to our democratic freedoms – manipulating how people think and believe in order to weaken western countries from the inside out. [Counter Misinformation (DeepFake & Fake News) Solutions Market – 2020-2026 (reportlinker.com)]

The above report notes that since most fake news and misinformation is disseminated via social media, “significant efforts are being made by technology companies Facebook, Twitter, Microsoft, Youtube and other leading content platforms to invest time and money to better understand and detect deepfakes to ensure their platforms are not misused by criminals and state-owned operators. However, these efforts alone will not be enough. These institutions will have to take a more prominent role by allocating larger budgets to purchase or develop capabilities to mitigate the risk. In addition to detecting these deepfakes, journalists and the social media platforms also need to figure out how best to warn people about deepfakes when they are detected to minimize the damage done”.

As previously noted, in the near future there may be highly visible pop-up notifications on content that scores high on a deepfake/misinformation algorithm, notifying people of the risk. But even this “in-your-face” notification is unlikely to convince the conspiracy theory believer. These are the kinds of people (as frequently portrayed in the media) who, when confronted with convincing evidence – for example, that there was no fraud in recent elections – simply won’t believe it; they don’t want to believe it. Other strategies and tactics are required here.
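For completeness, here is a minimal sketch of the labelling step such a notification system might use. The scoring function is hypothetical (a platform would plug in its own deepfake/misinformation classifier) and the thresholds are assumptions, but the logic of converting a score into a visible warning is straightforward.

```python
# Hypothetical warning-label logic; thresholds and item names are invented.
WARN_THRESHOLD = 0.7     # assumed cut-off; a real platform would tune this
BLOCK_THRESHOLD = 0.95

def label_content(item_id: str, fake_score: float) -> str:
    """Map a classifier's 0-1 'likely manipulated' score to a user-facing action."""
    if fake_score >= BLOCK_THRESHOLD:
        return f"{item_id}: withheld pending human review (score {fake_score:.2f})"
    if fake_score >= WARN_THRESHOLD:
        return f"{item_id}: shown with a prominent manipulation warning (score {fake_score:.2f})"
    return f"{item_id}: shown normally (score {fake_score:.2f})"

for item, score in [("video-001", 0.98), ("video-002", 0.81), ("video-003", 0.12)]:
    print(label_content(item, score))
```

As the surrounding text notes, however, even a prominent label of this kind may do little for viewers who are motivated to disbelieve it.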

Protecting Ourselves from AI Generated Misinformation

The Notre Dame Deloitte Center for Ethical Leadership suggests four ways to protect us all from misinformation.
Four Ways to Stop the Spread (of Misinformation) // News // Notre Dame Deloitte Center for Ethical Leadership // University of Notre Dame (nd.edu)

Inoculate.
As the saying goes, “an ounce of prevention is worth a pound of cure.” While it is difficult to counter misinformation directly, there are promising ways to prevent it from taking hold in the first place. For example, a recent study showed that it is possible to practice a kind of “inoculation” for fake news. Just as with medical inoculation, the idea is to give a small or weakened dosage of the harmful substance.

This allows the patient—or in this case the reader or viewer—to develop immunity before the true threat appears. You can get out ahead of conspiracy theories and fake news by showing examples of them and training employees or students to spot them by their look and feel. Schools, colleges, and other organizations are already using immersive games like Bad News to help users experience fake news through the process of creating it.

Foster a healthy news diet.
Another step you can take is to curate high quality fact-based information for your organization. By providing reliable information early and often, you can help members of your organization find the “signal” of truth within the “noise” of misinformation. This can also help information hobbits to step outside their comfort zone or even prevent epistemic bubbles from forming.

Nudge others toward a truth-focused mindset.
As strange as it may sound, we often interact with information without focusing on whether the information is truthful or not. We may like, share, post, or forward information because we find it entertaining, interesting, or emotionally resonant even if we have doubts about its truth. Thus, one surprisingly effective strategy for countering fake news is to simply redirect a reader’s attention toward the information’s accuracy.

Studies have shown that a simple “nudge” asking social media users to rate an article’s accuracy or asking users to pause and “explain how you know that the headline is true or false” can cause them to be more circumspect about the articles they are willing to share. And in another lab study, psychologists were able to arm study participants against repeated false claims by asking them to behave like “fact checkers.”

Keep ethics in view when sharing information.

Is it immoral to share fake news? Most of us would say so, but one study revealed that we can easily lose sight of this fact. Repeated exposure to a fake news headline caused study participants to rate it as less unethical to publish and share. While major societal changes may be required to restore trust and fix what is broken in our media landscape, we can each play a part in improving it by holding one another accountable and by recognizing that we are performing a moral act each time we participate in the spread of information, even if it involves just a few clicks of our mouse.

The Broader Perspective: Human Embedded Technologies

Our concerns regarding technological acceleration go even deeper and wider, for in recent years there has been a revolution in the presence of technologies inside or closely associated with human beings and their immediate environment. It is not just a matter of what technology has begun to offer us and how it might interact with us. It is a matter of “technological intimacy” – what we will refer to as technology embedment. We might even want to dust off an old word – “propinquity” – and invest this word with the special (and diverse) relationships that now exist and will increasingly exist between humans and technology.

Most importantly, there are many profound implications associated with this propinquity – which, in turn, point to the need not only for greater understanding of these implications but also for educating and training people to more fully understand and work with this technological propinquity. The challenges inherent in living with technologies that are deeply embedded in our head and heart move us beyond just critical thinking and the discernment of misinformation. As with anything that is intimate, it is hard to think rationally about our relationship with this intimate object. Intimacy is necessarily addressed in a subjective rather than objective manner – for we are studying a part of “us” rather than something “else” that resides outside our self.

The Varieties of Embedded Technologies

What are we talking about here? These embedded technologies range from health monitoring devices (which can, for instance, sample heart rate from a device taped to the body for a one-month period, or track sleep with a device that is taken home) to devices that continuously supply information to us visually while we are perceiving and navigating through the world. Human-embedded technologies range from the now universal hand-held devices that provide us with any information we need, and GPS systems that guide us through the city streets, to now-envisioned (but soon accessible) devices such as those we have already identified.

These “empathic” (theory of mind) technologies will meet many of our emotional as well as cognitive needs by tracking all of our decisions and preferences (as captured in the movie Her). Human-embedded technologies can translate our spoken word into written word and soon might even be able to translate our thoughts into words (Artificial intelligence translates thoughts into text using brain implant | The Independent). Our thoughts are suddenly available to our technologies. We can be assisted by an internal secretary and transcriber.

These technologies are serving even more importantly as lifesavers (such as digitally monitored heart valves). Building on DNA research and the technologies arising from this research, we now have access to a large amount of information about our ancestors and even the diseases we are likely to encounter in our lives. From neurobiological research we now have access to a large amount of information about how our brain works, how we react to traumatizing events and how what we eat impacts how we think, feel and behave. From high-storage watches to chips embedded in our skin and clothing, we are entering a world of remarkable propinquity between person and device.

Psychological Implications of Technological Propinquity

While there are many implications, we wish to focus on just four: (1) use of the information received, (2) privacy of the information we receive or share about ourselves, (3) blurring of lines between reality and fantasy, and (4) the fundamental nature of consciousness.

Information Overload and Decision-Making: The first and most obvious question is: what should we do with all of the information we are receiving about ourselves and our world? This question has already been addressed to some extent in what we have written about the new powerful technologies that may soon replicate human functions. We (and our technologies) already know more about our body and mind than ever before. We will also soon learn more about our buying habits and other propensities in our daily journey through life.

This new world of propinquity requires a tolerance of ambiguity along with comfort with the new technologies (most of us can no longer live comfortably as techno-peasants). There was a major field of research that flourished in psychology over several decades known as “human factors.” It concerned (among other things) the way in which people (such as pilots) gained access to and made effective use of complex information (such as altitude, speed and pitch). This human factor field is even more important today as we address the challenge of making decisions based on even more complex information as we navigate our own “airborne” journey. We are entering the world of “advanced human factors.”

Privacy and Our Exposure as Public Selves: Closely related to the issue of making use of the information we receive about ourselves is the accessibility of this information to other people. In many ways, we are more “naked” today than we have ever been—since Adam and Eve wandered through their garden. We now enjoy the services of “Alexa” who waits for requests from us to provide information, but we also know that Alexa might be providing our requests and related information to other people and institutions. We own computer-aided television sets that provide us with easy access to many channels of information, but also know (or at least suspect) that this television set is monitoring our own actions at home. And, of course, we are aware of the extensive information being collected by outside agencies from our hand-held devices and computers.

What do we do about this matter – a trade-off between access to information from outside ourselves and other people’s access to information from inside ourselves? This challenge is not just about law and ethics; it is also about what we want to disclose and what we want to keep private. It is about the multiplicity of selves we project onto and into the world – what Kenneth Gergen (2000) called our “saturated self”. Who are we and what does this mean for other people in the world with whom we wish to (or must) associate? Do we just create an avatar of ourselves for other people to see in a digital world? Is there such a thing anymore as a true and authentic self who is known intimately by a few people (often set at a limit of about 150 people)? There are many psychological challenges associated with this management of the private and public self.

The Merger of Reality and Fantasy: There is the related matter of somehow discerning between reality and image-production (“virtual reality”) and how we integrate the two when we are wandering through the world receiving both kinds of information at the same time. We see this merger dramatically (and often humorously) conveyed in the satirical movie, Being There. In this movie, Peter Sellers portrays a man who has grown up in isolation, having done nothing during the day but watch television. Suddenly he is forced to enter the “real world.” He brings his channel changer with him. When encountering several men who threaten him, Sellers clicks the channel changer, hoping that these men will go away. Imagine what it would be like if this movie were made today. Sellers’ world would be composed of Internet images, and he would be clicking on a mouse to escape threat.

This challenge of reality merging with fantasy clearly relates to what we have written about the metaverse and deepfake technology. Young people around the world are already finding it much more interesting to date an avatar (a person who is able to digitally transform themselves). They never have to actually meet their “date”. And what do we do about the building of relationships with a “machine” that knows more about what we want than anyone with whom we are affiliating? As in the movie Her, there might be more to gain from an intimate relationship with a machine than from an intimate relationship with a person. There is a perspective in psychodynamic psychology known as “object relations theory.” This theory might be taking on new relevance as related to the formation of relationships with technological “objects” (rather than real or fantasized people).

One of the dimensions of psychology that relates closely to this blurring of lines concerns the locus of control. We hope to control at least certain aspects of our life (a predisposition toward an internal locus of control) but know that much of the world around us remains out of our control (a recognition of external locus of control). We no longer live in a small village where we know everyone and have some say about what is happening in this village. We might not have had much control over the weather (hence must play nice with the gods), but we could at least influence our neighbors. Now, with little control over many matters in our lives, do we find ourselves pulled toward a world of fantasy that we can control? Are the digital games we (or at least our children) play becoming more relevant than the real world in which we live? Do we build communities in a fantastical world because we can’t build communities in the world we actually inhabit?

What do we do about this pull toward fantasy and about intimate relationships with machines rather than people? How do we deal with a real world that seems to be beyond our control – or even our influence?

The New Consciousness: This fourth challenge is a real dilly! Our fundamental assumptions about not just reality but also human consciousness might be on the chopping block. Most Western (and Eastern) philosophy has always assumed that there is some way in which we can reflect on our own thoughts and experiences. Human consciousness was assumed to be a unique (or at least highly developed) feature of human capacity – and it was a process that resided within each of us (rather than being shared by the entire community). While there are “intersubjectivity” perspectives in psychology and philosophy that suggest consciousness exists in the space (relationship) between two or more people, it was still a matter of human consciousness – not the consciousness of some machine. This might be changing. We might now be reasoning, deciding and reflecting with the aid of very high-powered machines. We are about to leave not only the driving to the machine (self-driving cars), but also some (or much) of our thinking and reasoning.

The human brain is much more complex and refined than any computer that now exists (or probably will exist in the near future). However, there are still some domains where we would like some assistance from our technologies. And this assistance could end up capturing some of the work for which we might not want assistance—such as our sense of self and our capacity to reason, reflect and make decisions. If some technology is telling us what ingredients appeal to our taste buds and what food contains these ingredients, then we don’t really have to become discerning in our purchases at the supermarket (or in ordering food on-line). One little bit of consciousness might be lost when our choice of food is mediated by a machine.

This notion about lost consciousness is closely related to the other three challenges mentioned herein. If we are overwhelmed with information, if our boundaries between private and public are invaded and if we are having a hard time discriminating between reality and fantasy (often preferring the latter), then we might be losing our sense of self and abandoning the hard work of making choices and reflecting on our own actions. We might be losing our unique consciousness (individual or collective). What are the psychological implications of this loss?

Conclusions

These multiple challenges are all of interest and worth writing about. We can serve as Paul Reveres racing through the streets declaring that “technological propinquity is coming.” We are likely to be ignored, or folks will be merely curious, asking: what in the world are we talking about? Most importantly, there might not be any work open to these Paul Reveres – especially if technological propinquity, human-embedded technology, advanced human factors and saturated selves are not in the human vocabulary and if there is no domain of professional psychology devoted to these matters.

Becoming A Rider

We would suggest that there is work available for these revolutionary riders—and each of us might become a rider. With knowledge about the kind of challenges addressed in this preliminary proposal, one might, as a psychologist, work with high tech firms: how do we help prepare people to deal with the new propinquity and where do we want to set the boundaries with regard to the human/machine interface? We might also find employment (or at least a consulting contract) in working with health-based institutions regarding how they help their patients and clients handle the information they have received. This is where a cutting-edge alliance between psychology and behavioral economics will yield inter-disciplinary expertise regarding the cognitive and emotional implications of human-embedded technologies.

There are other areas where knowledge (and wisdom) with regard to propinquity might be valued. Physical and mental health professionals will certainly be addressing issues of stress and overwhelm associated with this propinquity, as will professionals in the fields of executive coaching, leadership development and advanced human factors analysis. Of greatest importance is a more fundamental observation: we don’t yet know how this knowledge will be of greatest value—the human-embedded technologies are changing too fast to make any kind of confident predictions about the psychological impacts and remedies. What we do know with confidence is that human-embedded technologies are here to stay and are growing in number and variety. Propinquity might be an obscure word, but it represents a fundamental revolution in the intimate relationship between humans and their technologies.

The Onus is on Us!

Education and critical thinking skills are essential to prevent us from being duped. In the same way that many companies provide (and require) training on how to prevent cyber-attacks, phishing and insider threats, education on how to identify misinformation and stop its spread is available. Just one example is the work being done by UNESCO in training teachers to identify and mitigate the spread of conspiracy theories and misinformation:

The fight against conspiracy theories, and the antisemitic and racist ideologies they often convey, begins at school, yet teachers worldwide lack the adequate training. That is why today, UNESCO is launching a practical guide for educators, so they can better teach students how to identify and debunk conspiracy theories.  [Addressing conspiracy theories through education: UNESCO guidance for teachers | UNESCO]

Education – especially from an early age – is critical in protecting against nefarious attempts to manipulate and influence susceptible individuals. We have said much more about the role played by education in a previous essay. Numerous educational specialists have addressed this important remedy to digitally saturated perspectives: “The one thing we know that helps against them (conspiracy theories and misinformation) is education. The propensity to believe in conspiracy theories is highly correlated with the level of education”. [Education can help against conspiracy theories – EuroScientist journal].

Is formal learning passé? Perhaps not. It might still be of some value to read a book, enter a classroom, and debate an issue—all of this in the midst of a digital world that declares we no longer need books, classrooms or multiple perspectives on important issues.
___________________

References

Bergquist, William (2023) The New Johari Window. In press

Addressing conspiracy theories through education: UNESCO guidance for teachers | UNESCO

Ahmed, Saifuddin (2021). Navigating the maze: Deepfakes, cognitive ability, and social media news skepticism (sagepub.com)

Convergence of AI and blockchain in the metaverse | Joseph Araneta Gamboa (businessmirror.com.ph)

Counter Misinformation (DeepFake & Fake News) Solutions Market – 2020-2026 (reportlinker.com)

Education can help against conspiracy theories – EuroScientist journal

8 Examples of Artificial Intelligence in our Everyday Lives (edgy.app)

Four Ways to Stop the Spread (of Misinformation) // News // Notre Dame Deloitte Center for Ethical Leadership // University of Notre Dame (nd.edu)

Gergen, Kenneth (2000) The Saturated Self. New York: Basic Books, (rev ed)

Hopwood, Christopher et al. (2021). Realness is a core feature of authenticity. Journal of Research in Personality, Vol. 92, June 2021.


Understanding the Four Types of Artificial Intelligence (govtech.com)

What are the cyber risks posed by the metaverse? | Cybernews

