AI and Nationalism Are a Deadly Combination

If the new technology is as dangerous as its makers say, great power competition becomes suicidally reckless. Only international cooperation can ensure AI serves humanity instead of worsening war.

Dario Amodei, the CEO of leading AI company Anthropic, has written a 19,000-word warning that AI technology could spell disaster for humanity. While insisting that he and his company are developing AI responsibly, Amodei says that we are facing unprecedented risks, in part because AI is soon going to have a much greater capacity to help people and governments commit crimes against humanity. AI models, Amodei says, are getting smarter all the time, and it may soon be possible for nefarious actors to commit absolute mayhem with them, including releasing engineered pathogens, creating child sex abuse images on a massive scale, killing people with swarms of tiny drones, manipulating and blackmailing millions of people simultaneously, and more. We are, he says, at a crucial moment that will determine whether our species is capable of dealing with an exponential increase in our power to inflict cruelty and destruction, and because the technology is advancing faster than anyone expected, “we have no time to waste.”

For instance: I don’t know if you remember the COVID-19 pandemic, but a tiny virus that started out by infecting a single person soon spread across the entire world and killed seven million people. Well, Amodei says that thanks to AI products like the one he is developing, it may soon be possible for plenty of people to develop and release new deadly viruses. AI models are like having a “genius in everyone’s pocket,” “essentially making everyone a PhD virologist who can be walked through the process of designing, synthesizing, and releasing a biological weapon step-by-step.” AI might potentially tell deranged loners how to engineer weapons of mass destruction in their garages. Oh, and lest you think that powerful AI can simply be used to figure out how to stop the threat, Amodei informs us that “there is an asymmetry between attack and defense in biology, because agents spread rapidly on their own, while defenses require detection, vaccination, and treatment to be organized across large numbers of people very quickly in response.” Oh dear.

Easy access to biological weapons is only the beginning. Amodei says it’s the threat he’s most worried about, but he believes AI will confer “unimaginable power” in many domains, and some of the possibilities he outlines for our future include: massive cyberattacks of unprecedented effectiveness, governments and corporations addicting people to AI-generated propaganda and manipulating their behavior, swarms of billions of autonomous armed AI-powered drones that will decide who to kill, and a “global totalitarian dictatorship” that uses AI to create an absolute panopticon in which everything anyone ever does or says is completely accessible to the state.

Oh, and if that’s not bad enough already, we’re all going to lose our jobs, and the wealth gap is going to escalate exponentially because all of the value generated by these technologies will be captured by the oligarchs who own them while the rest of us will see ourselves become economically worthless as the skills we developed over a lifetime are automated. Oh, and while I’ve been talking about ways in which people might misuse AI in pursuit of their ends, the AI might also develop its own malicious goals and try to enslave humanity. Oh, and of course this will all consume a fantastic quantity of energy and probably exacerbate the climate crisis.

One should take these prophecies with a grain of salt, given that Amodei has a financial interest in hyping claims about his product’s usefulness and potential. The more people believe his company’s work is going to shape the human future, the more valuable that company becomes, and the richer he gets. But let us assume for the moment there is at least some chance he is right about what is around the corner. Given these horrendous possibilities, one might wonder why Amodei is working so hard to develop a technology that is “likely to continuously lower the barrier to destructive activity on a larger and larger scale,” which seems likely to usher in a hell on earth. A person might well ask why we are pushing forward at such speed to create something that carries such terrible risks, something that produces what he thinks could reasonably be called “the single most serious national security threat we’ve faced in a century, possibly ever.” Part of the answer is that Amodei thinks AI does have the potential to do us an immense amount of good, if it doesn’t kill millions of people in the process. But why move so fast? Shouldn’t we proceed very slowly and carefully if these are the stakes? Well, for Amodei, we can’t. One reason, discussed before in this magazine, is capitalism. Capitalist firms stand to make huge amounts of money from this technology, so they have an incentive to ignore safety risks, even when they themselves acknowledge that the risks are extreme. If we don’t build it first, someone else will, and whoever does so will make a vast fortune.

But an important additional reason for the haste and recklessness with which Amodei and his colleagues are pursuing a dystopian technology is their childlike, Manichean view of the world, in which “the democracies” are locked in battle against “the autocracies.” Internationally, the same “if we don’t build it, they will” logic necessitates an arms race. (The White House has even embraced an “AI Manhattan Project.”) The “threat of the CCP taking the lead in AI” creates an “existential imperative to prevent them from doing so.” Fully autonomous weapons may sound frightening, but we must build them, because they have “legitimate uses in defending democracy.” Because “the only way to respond to autocratic threats is to match and outclass them militarily,” Amodei says that “Anthropic considers it important to provide AI to the intelligence and defense communities in the US.” In other words, despite having just explained the terrible power these systems confer on their possessors, Amodei is actively working to refine that power and hand it to the world’s most lethal military machine, which is currently under the control of an unhinged imperialist billionaire who just invaded one sovereign country and is currently trying to starve another into submission.

Ironically, while Amodei’s essay is all about how to mitigate the risks from AI, I’m quite terrified of the risks of his own worldview. He reveals that even the most thoughtful and risk-aware of our Silicon Valley tech overlords have a shockingly simplistic view of international relations, in which We (the Good Democracies) must stop Them (the Bad Autocracies). Amodei recommends that the democracies form an “entente,” a strategic alliance that tries to develop powerful AI before the autocracies in order to keep them in check.

Amodei doesn’t seem to have much historical awareness, but those of us who do might shudder at the word “entente,” with its echoes of the pre-World War I alliance system that ended in such utter catastrophe. An important lesson of the “Great War” is that if the world separates into two alliance blocs, armed to the teeth and prepared to destroy one another, even a small incident can trigger a catastrophic conflict, because a local conflict will become a global one by drawing in all of the allies in the bloc. As I read about the dangers of AI-enhanced weaponry, my first thought is that we must, at all costs, keep a third world war from breaking out. Amodei does not spend any time discussing the possibility that his “entente” system could polarize the world and set the stage for the most disastrous war in human history. (One commentator calls his “crush China” approach to AI safety a recipe for a “suicide race.”)

Amodei is highly perturbed by the possibility of the Chinese Communist Party developing more advanced AI than the U.S., because China is “currently autocratic and operates a high-tech surveillance state.” This is part of why his company must help the U.S. government develop new weapons technology. But the argument is strange. Amodei concedes that “the Chinese people themselves” are the ones “most likely to suffer from the CCP’s AI-enabled repression.” That’s because being “authoritarian” is an internal characteristic of a country; it does not, by itself, mean the country poses a threat to others.

If we were serious about assessing the threat posed by certain countries being the first to develop powerful new AI-enabled weaponry, the question we would ask is not whether the country is “autocratic,” but whether it is “aggressive,” meaning that it poses threats beyond its own borders. Yes, it will be disturbing if the Chinese government is able to use AI to further entrench its power over the population, but it’s not clear why the U.S. would fear this possibility, unless China were an aggressive country that posed a threat to us.

I think one reason Amodei discusses the world in terms of “authoritarian” versus “democratic” countries is that if he instead discussed aggressive countries versus defensive countries, he’d have to acknowledge that the U.S. poses far more of a threat to other countries in the world than China does. The U.S. has a far worse track record in the 21st century of abusing international law. Donald Trump is currently extrajudicially executing foreign nationals in the Caribbean whenever he deems them to be trafficking substances that are “controlled” under domestic U.S. law. Trump claims the right to depose any government he deems to be undermining U.S. national security interests, and simply kidnapped the president of Venezuela on bogus charges in order to take the country’s oil. The U.S. is actively trying to achieve regime change in Iran and Cuba right now, has threatened to seize Greenland by force, is arming and funding the Israeli government as it ethnically cleanses Palestine, and is doing everything possible to undermine the effectiveness of international institutions like the United Nations and the International Criminal Court that try to curb misconduct and ensure states resolve conflicts with diplomacy rather than violence.

The U.S. record of aggression is obscene and horrifying, from the invasion of Vietnam in the 1960s, to the overthrow of governments around the world, to meddling in elections, to launching the massively destabilizing post-9/11 wars that caused the largest displacement of human beings since World War II. China has nothing remotely comparable to this record of lawless aggression. Is it really the case that the U.S., the only country that has ever dropped a nuclear weapon on a civilian population, can be trusted with the unfathomable destructive power that new AI technology confers?

Appreciating that the “democratic” U.S. is not more trustworthy than the “autocratic” China is important, because it undermines a central claim of Amodei’s case for an arms race. China has consistently been the country warning against a hostile Cold War mentality developing between the two nations, while American politicians conjure a baseless “China threat.” Despite its lack of internal political freedoms, China is a far more rational and trustworthy actor on the global stage than the U.S., which is currently run by people who can only be described as “maniacs.” (Stephen Miller, who probably thinks his neighborhood taco stand is an Al-Qaeda front; JD Vance, who “doesn’t give a shit” if state-sanctioned murder is labeled a war crime; Pete Hegseth, who thinks Muslims are at war with our civilization; and Mike Huckabee, who is preparing for the End Times.) Given the threats that AI poses, there is no reason why we should not try to work with China and other leading global powers to adopt a rational, careful framework for the implementation of this technology. It does not help when people like Amodei suggest that China’s internal repression means that we must treat it as an existential threat to the U.S., a conclusion that simply does not follow and for which he provides no evidence.

It worries me that a tech leader like Amodei, who is conscious of the risks his product is creating, is so ill-informed when it comes to international relations. For instance, he doesn’t talk at all about the important concept of the “security dilemma,” in which actions that one nation takes “defensively” are perceived as aggression by other nations, leading to the possibility of a spiraling arms race and an unnecessary conflict. But by emphasizing hostility toward China rather than cooperation with it, Amodei is making precisely this kind of situation more likely, and creating the very danger he says he wishes to avoid.

I have gone from someone skeptical of AI’s power to someone deeply worried about the very risks that Amodei discusses in his essay. But my fear is less of the technology itself than of the fact that it is being developed by people like Amodei, who believe in (1) free market capitalism and (2) nationalism, two incredibly dangerous ideologies. Amodei’s capitalistic instincts show in his skepticism about regulating the technology (too much government regulation, he warns, will “potentially destroy economic value”) and his suggestion that the inequality caused by AI can be addressed by rich people voluntarily giving their money away (“a large part of the way out of this economic dilemma,” he says, is the rich feeling “a strong obligation to society at large,” to which I say, fat chance). His nationalism shows in his view of China as an existential threat as opposed to simply another country made up of human beings with the same basic desires and fears as we have.

My worry about AI is that it is being introduced in a world divided into nation-states, where too many people think as Amodei does, and see their particular country as being locked in competition with others, instead of as part of a human family that must work together through international institutions to ensure our collective survival and prosperity. Amodei does not mention the United Nations once, even though ending U.S. hostility toward it and increasing the UN’s ability to effectively regulate technology is crucial to ensuring the worst risks he discusses will not come to pass. Unfortunately, Americans in particular seem to increasingly treat international law as a dead letter, and by treating it this way they make it so, even though robust American support for international law could give it teeth and keep it from slipping into obsolescence.

So, yes, Amodei is right that we are entering a terrifying and perilous moment for our species. But what he does not realize is that his own childish view of the world, in which The Good Guys must engage in an AI arms race with the Autocrats (an arms race that, incidentally, will be very profitable for his company), heightens the risk of the worst possible outcome of all: a world war in which both sides use AI to engineer destruction on a scale that will make World War II look like a petty skirmish.

 
