Generative AI Is Neither Our Gateway to Heaven Nor a Frankenstein Monster

In the past five years, the Techlash was centered on social media algorithms, primarily algorithmic recommendation. Now, all we talk about is Generative AI. Its potential impact has been the subject of many sensational headlines. Although this buzz feels novel, it’s simply a rehash of previous AI hype cycles.

We can trace the current hype to a Google engineer who described Google’s text generator LaMDA as “sentient” (a statement for which he was heavily criticized). Over the summer, the hype reached new highs as image generators like DALL-E, Stable Diffusion, and Midjourney let people type text prompts and get AI-generated illustrations in seconds.

Creative industries, including advertising, marketing, gaming, architecture, fashion, graphic design, and product design, found clear utility in these tools. Then came products like Astria AI, MyHeritage’s “AI Time Machine,” and Lensa, which let people create fake-looking profile pictures using AI and their own selfies. These products are easy to use, so they quickly moved from the world of early adopters and geeks to the mainstream.

Now, OpenAI’s new chatbot, ChatGPT, is causing a firestorm. This type of generative large language model (LLM) is trained to predict the next word for a given input, not to check whether a statement is factually correct. So, we quickly realized that it generates well-written explanations that mix facts with utter bullshit.
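To make that concrete, here is a minimal, hypothetical sketch of what “predicting the next word” looks like in code. It uses the small open-source GPT-2 model (an earlier relative of the models behind ChatGPT) via the Hugging Face transformers library; the prompt and the five-token loop are illustrative choices, not how ChatGPT itself is run. The point is that the loop simply scores candidate next tokens and picks a likely one; nothing in it checks whether the output is true.

```python
# Illustrative sketch only: greedy next-token prediction with GPT-2,
# a small open-source stand-in for ChatGPT's much larger model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the moon was"  # hypothetical example prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(5):  # extend the prompt by five tokens
        logits = model(input_ids).logits   # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()   # greedily pick the most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

The model will continue any prompt the same way, whether the continuation is a fact or fluent nonsense, which is exactly why its confident prose can blend the two.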

The responses to these new tools range from hype (this technology is mind-blowing and will improve our lives) to criti-hype (this technology is mind-blowing and will ruin our lives). There are also some anti-hype people claiming it’s all a bubble (this technology is not mind-blowing). As a whole, the current coverage sounds very familiar—because it is. It follows the same patterns as previous hype cycles, moving from utopian to more specific dystopian descriptions.

This pattern of AI media coverage is well documented in academic research. Studies on the framing of AI have found that the media discusses AI in terms of either “risks and challenges” or “benefits and chances.” Its potential impact on society is framed as either desirable or threatening.

Moreover, the risks of AI are discussed with greater specificity (loss of jobs, embedded bias, privacy, ethical concerns, and misuse) than the benefits (economic gains, improving human life and well-being, reducing human bias or social inequality). This means that dystopian frames are described with more examples and metaphors than utopian frames. The metaphors range from a “gate to heaven” (a win-win situation for humans) to the most alarming “Frankenstein monster” (loss of human control).

The two frames most commonly used by the media when discussing AI are “social progress” (a means of improving quality of life or solving problems) and “Frankenstein monster/Pandora’s box” (possible catastrophe and out-of-control consequences). The extreme ends of those narratives pose a problem: Over-promising progress can create unrealistic expectations and a hype bubble, and as news discourse becomes more critical, false fears can misdirect the public debate.

When I asked ChatGPT to explain how artificial intelligence is framed in media coverage, its response reflected the academic studies above:

“Artificial intelligence (AI) is often described in both utopian and dystopian terms in media coverage. On the one hand, AI is often portrayed as a technology that has the potential to greatly benefit society by solving complex problems, increasing efficiency, and improving people’s quality of life.

On the other hand, AI is also frequently portrayed in a more dystopian light, as a technology that has the potential to disrupt society, eliminate jobs, and pose a threat to humanity. In this view, AI is seen as a potential enemy of humanity, rather than a tool that can be used to improve people’s lives.”

Listening to the current Generative AI hype, you’d think it will benefit us by 1) making our work more efficient and helping with our tasks, and 2) generating content (text, images) so brilliant it amounts to masterpieces.

Listening to the Generative AI criti-hype, you’d think it will hurt us by 1) making our work expendable and threatening our jobs, and 2) generating convincing false information and images (AI-generated BS).

As expected, the introduction of Generative AI to the masses generated hyperbolic human-made headlines. “Will ChatGPT Kill the Student Essay?” The answer was “Yes.” AI-generated art was framed as another deadly conflict: “Will AI Image Generators Kill the Artists?” on the one hand, and “Angry Artists Try To Kill AI” on the other. Even thoughtful people can fall victim to sensationalism.

In response to the Generative AI hype and criti-hype, AI experts expressed frustration with how such headlines distort their nuanced scientific discussion. “Generative AI shouldn’t be framed as ‘Humans vs. Machines.’ What we actually see is humans and AI working together,” one AI scientist told me.

Grady Booch, chief scientist for software engineering at IBM Research, wrote that it’s a case study on “why we in the scientific space are wary if not fully untrusting of you in the media, because you come to us with a point of view for which you are seeking support rather than listening first.” Alfred Spector, a visiting scholar at the MIT Department of Electrical Engineering and Computer Science, wrote, “The press may desire to create a somewhat polarized debate when there should be a more thoughtful consideration of technology evolution and human adaptation.” He proposed a middle ground that reflects what most scientists think: Generative AI will achieve elements of both, “given that technology always has positive and negative impacts.”

Similarly, Roy Bahat, the head of Bloomberg Beta, commented that “We love to pick teams. Reality doesn’t have teams. The answer is [almost] always both/and,” instead of either/or. Neil Turkewitz, CEO of Turkewitz Consulting Group, summarized: “There aren’t two sides to the debate. The utopians and dystopians may consume much of the oxygen, but most of us involved are realists interested in ensuring that decisions about technology are made consciously, reflecting human decision-making.”

Nonetheless, Generative AI is being covered as if the machines rule us rather than us using them. This is due to the grip of technological determinism: If you believe that technology is deterministic, you will see every emerging technology as the determining factor of society. When it comes to Generative AI, people’s imagination runs wild, as if we are some hopeless Muppets in the hands of a mind-controlling Skynet.

The less Hollywood-like option is that social forces shape technology, so it is society that affects technology (rather than the opposite). It is still possible for humans to exercise control over their lives (human agency) and to influence the design and use cases of technology.

Unfortunately, current tech coverage is deterministic, and so is our perceived control (or lack thereof). While the technological advancements are impressive, these tools are built by people and used (and misused) by people, and they should be treated as such in a more realistic narrative.

The release of GPT-4 next year will probably intensify the AI debate. Now is the time to improve that debate, emphasizing the common ground in the scientific conversation more than the extreme edges. There’s a large greyscale between utopian dreams and dystopian nightmares.

The key is to cut through the hype, look at the complex reality, and see humans at the helm, not machines. Various social forces are at play here: researchers, policymakers, industry leaders, journalists, and users, all of whom will shape the technology further.

How should the media cover Generative AI?

As a technology in the process of being designed, with a set of choices still to be made and problems to be solved collectively. We can still put up guardrails and set norms, such as standard consent procedures and transparency about data sources (e.g., developing watermarking tools), policies for oversight and accountability, and better education for AI literacy. It’s going to be a long journey, and we have time to shape our partnership with AI.
