Original Title: Clawed
Original Author: Dean W. Ball
Translation: Peggy, BlockBeats
Editor's Note: When an individual's experience of life and death becomes intertwined with the rise and fall of a national system, political narrative ceases to be an abstract discussion of institutions and becomes a matter of deep emotional recognition. Taking the death of a father and the birth of a child as its starting point, this article extends the private realization that "death is a process" into a reflection on the current state of the American republic. In the author's view, the present conflict between artificial intelligence companies and the government is not an isolated event but a symptom of long-term institutional erosion and imbalance of power.
The article centers on the controversy between Anthropic and the U.S. defense establishment, discussing not only contractual terms, policy boundaries, and the threat of a "supply chain risk" designation, but also a more fundamental question: in the era of frontier artificial intelligence, who should hold control? Private companies, executive power, or some public mechanism not yet formed? As national security becomes a rationale for the expansion of power, and as policy increasingly relies on ad hoc and coercive arrangements, are the rule-boundedness and predictability of the republican system weakening?
Technological leaps and institutional change can occur simultaneously, and their intersection often shapes the direction of an era. The author questions the government's conduct, holds out hope for the reconstruction of future institutions, and reminds readers not to equate "democratic control" with "government control." Against the backdrop of rapid AI development and the ongoing reshaping of governance models, this dispute may be only the beginning. How to strike a new balance among security, efficiency, and freedom will be an important long-term question.
The following is the original text:
Over a decade ago, I sat by my father's side as he passed away. Just six months prior, he was a vibrant man, stronger than I am today, cycling faster and with more resilience than most twenty-year-olds. Then one day, he had heart surgery, and he was never the same again. It was as if his soul had been sucked out, the light in his eyes dimmed. Occasionally, he would regain some spirit, the familiar father briefly returning to his aging body, but those moments became increasingly scarce. His thoughts became disjointed, his voice growing fainter.
During those six months, he was in and out of hospitals. On his last day, he was placed in hospice care. He hardly said a word that day. In the final hours of his life, he had all but left this world. He lay in the hospital bed, his breathing slowing, his voice nearly gone. Almost nothing remained but a disconcerting "death rattle," the sound of a body no longer able to swallow. A body that cannot swallow cannot eat or drink; in a sense, it has given up the fight.
My mother and I locked eyes, understanding each other without stating the obvious, without voicing the questions in our hearts. We knew time was running out. Anything said or asked at this moment would not bring useful information; probing would only add to the pain.
I had spoken with him privately more than once. I held his hand, trying to bid him farewell. My mother returned to the room, the three of us hand in hand. In the end, a machine let out a long beep, signaling that he had crossed a certain threshold — an invisible line to those in the room. In the late afternoon of December 26, 2014, my father passed away.
Eleven years and a few days later, on December 30, 2025, my son was born. I had witnessed death, and now I witnessed life. What I learned is this: neither is a momentary event but an unfolding process. Birth is a series of awakenings, death a series of slumbers. It will take my son years to truly "be born," just as it took my father six months to truly "leave." Some people even take decades to slowly fade away.
At some point in my life, I cannot pinpoint exactly when, the America we knew began its decline. Like most natural deaths, its causes were complex and intertwined. No single event, crisis, attack, president, political party, law, idea, individual, corporation, technology, mistake, betrayal, failure, misjudgment, or foreign adversary by itself ushered in the beginning of the end, although all played a part. I do not know at which stage of this process we now find ourselves, but I know we are already in the hospice room. I have long known this, but sometimes, like all mourners, I would slip into denial. I refrained from discussing it, because talking about it often only brings pain.
However, if I didn't acknowledge that we are sitting bedside, I wouldn't be able to complete this writing with the analytical rigor you expect from me today. To honestly discuss the advancement of cutting-edge artificial intelligence and what kind of future we should build, we cannot ignore the fact that the Republic we knew is on its deathbed. Yet, no machine here will sound the final beep for us. We can only watch in silence.
In American history, our Republic has "died" and "been reborn" multiple times. The United States has been through more than one "founding." Perhaps we are now standing at the threshold of another rebirth, opening a new chapter in the country's continuous self-reinvention. I hope so. But it is also possible that we no longer possess enough virtue and wisdom to support a new founding, and the more realistic understanding is that we are slowly transitioning into a post-Republic era of American governance. I do not claim to know the answer.
What I am about to describe is a confrontation between an artificial intelligence company and the U.S. government. I do not wish to exaggerate. The kind of "death" I am describing has been under way for most of my life. The events in question took place last week, and may even be resolved, to some extent, within days.
I am not saying this event "caused" the death of the Republic, nor that it "ushered in a new era." If it means anything, it is that, from my personal vantage point, it made the ongoing decline more evident and harder to deny. I see last week's events as the Republic's "death rattle," the sound of a body that has already given up the struggle.
As far as I know, this is what happened: during the Biden administration, the artificial intelligence company Anthropic reached an agreement with the Department of Defense (now called the Department of War, hereinafter DoW) allowing its AI system Claude to be used in classified environments. The agreement was expanded in July 2025 under the Trump administration (full disclosure: I was serving in the Trump administration at the time but was not involved in this deal). While other language models could be used in unclassified settings, until recently classified work involving intelligence gathering, live combat operations, and the like could only use Claude.
The original agreement negotiated by the Biden team with Anthropic (noteworthy, since several key architects of the Biden administration's AI policy joined Anthropic immediately after leaving office) contained two usage restrictions. First, Claude was not to be used for mass surveillance of Americans. Second, it was not to be used to control lethal autonomous weapons, that is, weapons that operate entirely without human intervention throughout the process of identification, tracking, and engagement. When expanding the agreement, the Trump administration had the opportunity to review these clauses and ultimately accepted them.
Later, however, the administration changed its mind. Trump officials said the reversal was driven not by any rush to conduct mass surveillance or deploy lethal autonomous weapons, but by a rejection of the very idea of private companies imposing restrictions on military use of technology. This shift in attitude led the government to take policy measures aimed at undermining, even destroying, Anthropic (a company that may be among the fastest-growing in the history of capitalism and is regarded as a global leader in AI), even as the government repeatedly declared AI crucial to the nation's future. But we will come back to this later.
The viewpoint put forth by the Trump administration was not entirely without merit: private companies imposing restrictions on military technology use does sound somewhat off. However, in reality, thousands of private companies are doing just that. Every technology transaction between the military and private enterprise exists in the form of a contract (hence the term "defense contractor"), with contracts typically containing operational restrictions (e.g., "system X shall not be used in country Y," similar to common clauses in Musk's Starlink), technical limitations (e.g., "a certain fighter jet is certified for use under specific conditions"), and intellectual property constraints ("contractors own and can reuse relevant technology IP").
In some respects, Anthropic's terms resembled these traditional restrictions. For example, the company was not opposed to lethal autonomous weapons per se but believed that current cutting-edge AI systems were not yet capable of autonomously determining life and death. This is akin to "fighter jet certification restrictions."
However, the key difference is that the restrictions Anthropic imposed by contract are policy restrictions rather than technical ones: the difference between "this fighter jet is not certified to fly above a certain altitude" and "you are not allowed to fly above a certain altitude." Perhaps the military should not accept such terms, and perhaps private companies should not offer them. Yet the Biden administration accepted them, and so, initially, did the Trump administration, before later changing its mind.
This in itself suggests that such terms are not some absurd transgression. No law says contracts may contain only technical restrictions and never policy restrictions. The contract was not illegal; at worst it may look unwise in hindsight. Even if you oppose mass surveillance and lethal autonomous weapons, you may still believe that defense contracts are not the best tool for achieving those policy goals. Under the Republic's normal rules, the way to make new policy is through legislation.
However, "through legislation" is increasingly becoming a joke in contemporary America. If you genuinely wish to achieve a certain outcome, legislation is no longer the preferred path. Governance is becoming more informal and temporary, executive power is expanding, and policy tools are increasingly mismatched with their goals.
The Trump administration cited two concerns for its change of heart: first, Anthropic might withdraw its services at a crucial moment; second, as a subcontractor, Anthropic's terms might constrain other military contractors. Combined with the government's view of Anthropic as a political adversary (a view that may well be correct), the military suddenly realized it was relying on a company it did not trust.
The rational approach would have been to cancel the contract, state the reasons publicly, and prevent similar situations in the future through procurement terms. Instead, the Department of War insisted that the contract must allow "all lawful purposes" and threatened to designate Anthropic a "supply chain risk," a designation that typically applies only to enterprises controlled by foreign adversaries, such as Huawei. The Secretary of War went further, vowing to prevent all military contractors from having "any business dealings" with Anthropic.
This is almost equivalent to declaring "corporate murder" on a company. Even if the bullets are not necessarily lethal, it is enough to send a signal: do business on our terms, or your business will end.
This touches a core principle of the American Republic: private property. If the military told Google, "Sell us your global personalized search data, or be listed as a risk," it would be no different in principle from what is happening now. On this view, private property is merely a resource to be requisitioned in the name of national security.
This move will raise the capital costs of the entire AI industry, weaken the international credibility of American AI, and may even damage the profitability prospects of the AI industry itself.
With each presidential turnover, American policymaking becomes more unpredictable, rougher, and more arbitrary. It is hard to say at what point an order of freedom simply evaporates.
Even though the Secretary of War has withdrawn the threat, the damage is done. The government has made itself clear: refuse to comply, and you may be treated as an enemy. This erodes American political culture at a deeper level.
More importantly, this is the first genuine public debate over where control of AI should lie, and our public institutions have shown themselves disordered, malicious, and lacking in strategic clarity. The failure of political elites is not new; it is a theme that has intensified over the past two decades: the same as before, only notably worse.
Perhaps the next phase of reconstruction will be closely tied to advanced AI. In shaping the institutions of the future, please do not equate "democratic control" with "government control." The gap between the two has never been as stark as it is today.
Regardless of the future, we must ensure that mass surveillance and autonomous weapons do not erode freedom. I appreciate the AI lab holding the line. In the coming decades, our freedom may be more fragile than we think.
Everyone must choose the future they are willing to fight for or defend. When making that choice, please disregard the noise of the "deathbed rattle" and maintain independent thinking. You are entering a new era of institution-building.
But before that, take a moment to mourn for that once great republic.