AI Safety or Security? What We Lose in Translation

The shift from AI safety to security is subtle but seismic. It changes who gets heard, what gets justified, and how power is exercised.


Laila Shaheen

8/1/2025 · 4 min read


Words matter.

Words hold power. Words shape discourse. And in the case of security, the word implies an act.

Between 2023 and 2025, discussions around artificial intelligence underwent a subtle yet profound shift in terminology from AI safety to AI security.

While many use these terms interchangeably, they are in fact fundamentally different. They trigger different emotions, advance different priorities, and serve different agendas.

To call something a matter of security is to securitize it. And to securitize an issue is to take it from the realm of normal politics, where citizens can debate, influence, and hold leaders accountable, to the realm of exceptional, emergency politics, where democratic processes are sidelined in favor of whatever the state deems necessary to defend its national interest.


In this shift, citizens cease to be participants and instead become objects of security. Their fears are weaponized, their access to critical information is curtailed, and their rights are quietly bargained away, all in the name of protection from some elusive threat.

To see this shift in action, walk with me down Senate Hearings lane, where the evolving language between American senators and OpenAI CEO Sam Altman lays it bare.

Sam Altman has appeared twice before Congress: once in May 2023 and again in May 2025.

In the 2023 testimony, “Security” was mentioned 5 times (none in reference to national security), “Safety” 29 times, and “China” 6 times.

In his 2025 testimony, “Security” appeared 37 times, “National Security” 15 times, “Safety” 6 times, and “China” 42 times.
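For readers who want to check or reproduce counts like these on other hearings, here is a minimal Python sketch, assuming the transcripts are saved as plain-text files (the filenames below are hypothetical placeholders):

```python
import re
from pathlib import Path

# Hypothetical filenames -- point these at your own saved transcripts.
TRANSCRIPTS = {
    "2023": "altman_testimony_2023.txt",
    "2025": "altman_testimony_2025.txt",
}
TERMS = ["security", "national security", "safety", "China"]

def count_term(text: str, term: str) -> int:
    # Whole-word, case-insensitive match; handles multi-word phrases too.
    pattern = r"\b" + re.escape(term) + r"\b"
    return len(re.findall(pattern, text, flags=re.IGNORECASE))

for year, filename in TRANSCRIPTS.items():
    text = Path(filename).read_text(encoding="utf-8")
    counts = {term: count_term(text, term) for term in TERMS}
    print(year, counts)
```

One caveat: this naive count of “security” also includes every occurrence embedded in “national security,” so separating the two, as in the figures above, requires subtracting the phrase count from the word count.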

In just two years, the conversation around AI, in both public and private spheres, shifted from keeping the technology safe to defending the nation against a security threat. The vocabulary shifted, and with it, the politics.

So why does this matter? Allow me to present three key concerns.

First, when we securitize an issue, we blur the lines between military and civic concerns, between domestic and international affairs. “National security” becomes a catch-all justification for state action. Once something is framed as a security issue, actions taken to address it no longer need public justification. The term itself becomes the legitimizer, so long as the audience, the object of security, accepts the framing.

Historically, a similar linguistic maneuver emerged during the Cold War. American officials deliberately reframed ‘security’ as national and state security, displacing the military’s previously favored term, ‘defense.’ This shift was strategic: the war effort required a fusion of military and civilian activities, making the blurring of these lines essential. Unlike ‘defense,’ which carried a strictly geopolitical and military connotation, ‘security’ could be invoked more broadly to rally the public against threats both foreign and domestic. As this example shows, the dismantling of such distinctions began with the seemingly benign act of choosing a different word.

Second, there is a real tension between security and individual liberty. History shows this pattern well: post‑9/11 policies like the Patriot Act, the surveillance creep of the COVID‑19 era, and the UK’s Investigatory Powers Act, the so-called “Snoopers’ Charter,” all expanded state monitoring in the name of safety and crime prevention.

These policies, like many others, were born in moments when a threat, or a threatening actor, endangered national security. In exceptional times, exceptional measures are tolerated. And that is understandable. The problem arises when the moment passes but the measures remain. What was once temporary and extraordinary slowly becomes normalized, and societies quietly adjust their expectations of freedom.

Third, some issues genuinely warrant securitization: nuclear weapons, military defense, and border protection. However, the securitization we are talking about here is a pre-emptive securitization of issues that pose no existential risk in the real sense of the word. Where we find ourselves today is in a time of over-securitization. The label of “national security” is deployed less as a shield against real threats and more as a tool to stoke fear, justify extraordinary measures, and rally public compliance.

What further complicates the securitization of AI is that, unlike nuclear technologies developed in government labs under strict state oversight, AI is largely controlled by a handful of private companies whose primary allegiance is to their shareholders, not the state, and certainly not its citizens.

Yet these corporations have become key securitizing actors, framing AI as a national security threat to consolidate their power, justify reckless innovation, and strengthen their competitive edge against both domestic and foreign rivals.

In its letter to the Office of Science and Technology Policy, OpenAI wrapped its commercial agenda in the language of national security. The opening line sets the tone:
“As America’s world-leading AI sector approaches artificial general intelligence (AGI), with a Chinese Communist Party (CCP) determined to overtake us by 2030, the Trump Administration’s new AI Action Plan can ensure that American-led AI, built on democratic principles, continues to prevail over CCP-built autocratic, authoritarian AI.”

From there, the letter reads less like a policy recommendation and more like a corporate wish list. OpenAI urges the government to:

  • Fast-track facility clearances for frontier AI labs ‘committed to supporting national security.’

  • Ease compliance with federal security regulations for AI tools.

  • Accelerate AI testing, deployment, and procurement across federal agencies.

  • Tighten export controls on OpenAI’s competitors while pushing for the global adoption of ‘American AI systems’—whatever that means.

  • And, unsurprisingly, relax regulations on privacy, data ownership, and intellectual property.

The subtext is unmistakable: if the government fails to relax regulations and fast-track the approval and deployment of AI tools, the U.S. will fall behind China, putting national interests at grave risk.

This kind of rhetoric, so bluntly designed to manipulate public perception and justify letting private AI companies ‘move fast and break things’ under the threat of the communist boogeyman, is deeply dangerous.

If the bar for what counts as a national security issue keeps dropping, our capacity to act as informed, engaged citizens will steadily erode.

So what can we do?


Well, we can start by talking about it. Naming it. Challenging it.

Getting familiar with the ways we are being manipulated, by both state and corporate actors who capitalize on fear, is the first step toward reclaiming our democratic agency.

When we recognize how security language is used to shut down debate, limit transparency, and fast-track harmful policies, we can begin to resist it.

Public awareness is the antidote to manufactured urgency.

Collective skepticism can be the beginning of accountability.

So the next time you hear the words “national security” invoked in any context, but especially around AI, pause and ask: whose security? And at what cost?