BlockBeats News, March 1: according to a WSJ report, the United States recently used AI technology supplied by the AI company Anthropic in airstrikes against targets in the Middle East.
The report stated that just hours before the strike, US President Trump had signed an executive order targeting Anthropic. The timeline has raised concerns about how the policy was coordinated and executed. The relevant departments have not yet provided further details on how the technology was used or on the scope of the executive order.
Meanwhile, OpenAI founder Sam Altman held an AMA on X regarding the company's collaboration with the US Department of War. He revealed that Anthropic had once been "very close" to reaching an agreement with the Department of War, with both sides showing a strong willingness to cooperate through most stages of the negotiation. In a highly tense negotiating environment, however, the situation may have deteriorated rapidly, ultimately causing the deal to fall through.
On security governance, the two companies differ significantly in strategy. Altman said OpenAI takes a "layered approach": building a complete security technology stack, deploying Forward Deployed Engineers (FDEs), involving security researchers in projects, and interfacing directly with the Department of War through cloud delivery. Rather than writing numerous specific prohibitions into the contract, OpenAI prefers to rely on existing legal frameworks, with technical security measures as the core protection.
