Artificial Intelligence is this Generation’s Jurassic Park

Janani Mohan

The day before November 30, 2022, was a normal day: artificial intelligence was a growing problem that many knew nothing about and most ignored. Then OpenAI released an early version of ChatGPT. Within five days, the chatbot garnered one million users. With ChatGPT ushering in the “AI revolution,” AI became better, faster, and more accessible than ever before. While GPT-1 was a bare-bones model that demonstrated what AI could be, the subsequent versions leading up to the current GPT-4 realized that potential through improved models, larger training datasets, and more capable, human-like performance. Yet amid all of this, one question is only beginning to be asked: is this revolution good for humanity, or could it be this generation’s “Jurassic Park”?

Soon after ChatGPT’s mainstream release, organizations became the first to regulate it. Companies barred staff from using AI in their work over security risks and inaccuracies, and universities declared that using AI on exams constituted cheating. These responses restricted the use of AI in specific circumstances but did not consider more broadly whether and where AI development should be constrained. In the United States, that became a question for the government to deliberate. Over the past year, the U.S. Congress has held multiple hearings on AI, but these hearings have proved challenging due to the government-technology gap. Members of Congress often struggle to meaningfully discuss the dangers posed by AI because they still need to learn what AI is before they can understand what it can do.

This government-technology gap exemplifies the fundamental paradox of AI governance: tech companies can but won’t; governments can’t but need to. Tech companies develop new AI products and expect the government to regulate them; governments do not yet understand AI well enough to set such policies. Tech companies are fast-paced and turn around updates every few months; governments are slow and need far more time to formulate and authorize policy. AI is a major opportunity for many tech companies; governments treat it as one of numerous emerging issues demanding a response. The problem with this paradox is that tech companies are the only ones currently capable of thinking through whether certain upgrades to AI benefit humanity, yet they are often too lost in their excitement over what can be done to consider what should be.

How best to govern AI must therefore be treated as a serious and compounding question. Recent AI development includes attempts to build stronger capabilities and greater sentience, and these two factors coupled together could prove dire. A common framework for understanding AI sentience comprises four levels: reactive machines, which simply respond to inputs; limited memory, which learns from limited feedback; theory of mind, which would understand human emotions; and self-awareness, which would have emotions of its own. While only the first two currently exist, as AI progresses and gains more autonomy, it could attain theory of mind and self-awareness, or at least processes that mimic them. That is a problem because an AI with theory of mind could “rationally” determine the best course of action while grounding its rationality in faulty ethics that lead it to cause harm.

Beyond this futuristic yet fast-approaching issue, AI can already cause problems when used incorrectly or without proper human oversight. Because it learns from limited feedback, today’s AI recognizes patterns and issues recommendations that treat those patterns as causal, ignoring broader nuance. And even when AI does its job correctly, it is only as good as the data fed into its algorithms, which means faulty datasets lead to faulty conclusions. For example, the use of AI in predictive policing algorithms has introduced racial bias into decisions by police forces and legal systems, because poor datasets led the models to treat demographics, wrongly, as a causal factor.

Although where AI goes next remains uncertain, and many decry sounding alarm bells too early, it is worth remembering that before November 2022, many thought what ChatGPT can do was only a future possibility, or even the plot of a sci-fi movie. And yet that future is here: AI is proving to be this generation’s Jurassic Park, with tech companies advancing AI without fully considering its potential downsides. AI that follows human controls is AI that can be regulated. But if AI stops responding to human controls, much like sentient dinosaurs, or relies on poor datasets and faulty logic, it could become a problem that is no longer solvable. To put it simply, the future of artificial intelligence could shape the future of our planet. Governance must respond to AI immediately, as if it were a pack of man-eating dinosaurs that could forever change the world, before it does.
