Technological Acceleration: The Crisis of Information, Reality and One’s Sense of Self

Kevin Weitz, Psy.D. and William Bergquist, Ph.D.

[Note: the content of this essay has been included in a recently published book called The Crises of Expertise and Belief. This paperback book can be purchased by clicking on this link.]

 

The negative impact of Fake News, especially within the political, economic, and social environments, is increasing, emphasizing the need to detect and identify these Fake News stories in near real-time. Furthermore, the latest trend of using Artificial Intelligence (AI) to create fake videos, known as “DeepFakes” or “FakeNews 2.0”, is a fast-growing phenomenon creating major concerns. AI technology enables basically anyone to create a fake video that shows a person or persons performing an action at an event that never occurred. Although DeepFakes are not as prevalent and widespread as Fake News articles, they are increasing in popularity and have a much greater effect on the general population. In addition, the sophistication behind the creation of DeepFake videos increases the difficulty of identifying and detecting them, making them a much more effective and destructive tool for perpetrators.
Counter Misinformation (DeepFake & Fake News) Solutions Market – 2020-2026 (reportlinker.com)

Two main characteristics of deepfakes make them uniquely suited for perpetuating disinformation. First, like other forms of visual disinformation, deepfakes exploit the “realism heuristic” (Sundar, 2008), whereby social media users are more likely to trust images and audio (rather than text) as a reliable depiction of the real world. As the technology progresses, the manipulated reality becomes more convincing, amplifying the consequences of disinformation. The second characteristic is the potential to delegitimize factual content, usually referred to as exploiting “the liar’s dividend” (Chesney and Citron, 2019: 1758). People, and especially politicians, can now plausibly deny the authenticity of factual content.
Navigating the maze: Deepfakes, cognitive ability, and social media news skepticism (researchgate.net)

The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has. Deep neural networks (DNN)—made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains—often seem to mirror not just human intelligence but also human inexplicability.
Scientists Increasingly Can’t Explain How AI Works (vice.com)
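The “layers and layers of processing” the excerpt describes can be illustrated with a minimal sketch (not any particular production system, and the layer sizes and weights here are arbitrary assumptions for illustration): even a toy network with a few stacked layers produces an output that depends on every weight in every layer at once, which is part of why such systems resist simple explanation.

```python
import numpy as np

# Illustrative sketch only: a "deep" network is just stacked layers,
# each a linear map (matrix multiply) followed by a nonlinearity.
rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied after each layer.
    return np.maximum(0.0, x)

# Three layers of random weights: 4 inputs -> 8 -> 8 -> 2 outputs.
# (Sizes are arbitrary; real networks have millions of weights.)
weights = [rng.standard_normal((4, 8)),
           rng.standard_normal((8, 8)),
           rng.standard_normal((8, 2))]

def forward(x, weights):
    """Pass an input vector through every layer in turn."""
    for w in weights:
        x = relu(x @ w)
    return x

x = np.array([1.0, -0.5, 0.3, 2.0])
output = forward(x, weights)
print(output.shape)  # a 2-element output vector
```

Changing any single number in any of the weight matrices can change the final output, so there is no one “reason” for a given result that can be read off from the code, which mirrors the inexplicability the article describes.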

