According to monitoring by Omdena Beating, Chris McGuire, China and Emerging Tech Senior Fellow at the Council on Foreign Relations (CFR) and a former official on the White House National Security Council and at the Department of Defense, argued in an article that DeepSeek V4 has not altered the Sino-American AI competition landscape. He cited the V4 report itself, which acknowledges that DeepSeek's reasoning capability "lags behind state-of-the-art models by about 3 to 6 months," benchmarked against GPT-5.2 and Gemini 3.0 Pro, both released half a year earlier. He also questioned why the V4 report, while disclosing inference-side adaptations for NVIDIA GPUs and Huawei Ascend NPUs, did not reveal the specific GPU models and costs used for training (the V3 report had claimed 2,048 H800 GPUs at a cost of $5.576 million), suggesting undisclosed use of export-controlled NVIDIA Blackwell chips. Earlier in February, unnamed U.S. government officials had made similar claims, which NVIDIA dismissed as "strained"; DeepSeek denied using Blackwell, stating that the model was trained on NVIDIA H800 GPUs and Huawei Ascend 910C NPUs.
Replit CEO Amjad Masad, by contrast, countered that while American politicians and lobbyists were whipping up a "Chinese distillation" panic, Chinese scientists were openly sharing genuine AI breakthroughs. He pointed to the architectural innovations listed in DeepSeek's official tweet, including token-level attention compression (DeepSeek Sparse Attention) and markedly improved long-context computation efficiency, noting that V4-Pro's per-token inference compute and KV cache usage at a 1M-token context are far lower than V3.2's. Masad argued that such architectural innovations have nothing to do with training-data distillation, and that open-sourcing them benefits everyone, including U.S. labs of all sizes.
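For readers unfamiliar with the idea, the sketch below illustrates the general shape of token-level sparse attention: a cheap scoring pass picks a small top-k subset of the cached tokens, and the expensive softmax attention runs only over that subset, so per-token attention cost stays roughly flat as the context grows. This is a minimal illustration of the technique, not DeepSeek's implementation; the function name and the use of raw dot products as the selection score are assumptions made for clarity (DeepSeek's published description of DSA uses a small learned indexer for the selection stage).

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=64):
    """Hypothetical single-query sparse attention over a KV cache.

    q: (d,) query for the current token
    K: (L, d) cached keys
    V: (L, d) cached values
    k: number of tokens admitted into the softmax (k << L)
    """
    d = q.shape[0]
    # Selection stage: a cheap relevance score for every cached token.
    # (DSA uses a small learned indexer here; plain dot products stand
    # in for it in this sketch.)
    scores = K @ q                              # (L,)
    idx = np.argpartition(scores, -k)[-k:]      # top-k token indices, unordered
    # Attention stage: scaled dot-product attention, but only over the
    # k selected tokens instead of the entire cache.
    sel = K[idx] @ q / np.sqrt(d)               # (k,)
    w = np.exp(sel - sel.max())                 # numerically stable softmax
    w /= w.sum()
    return w @ V[idx]                           # (d,) attention output

# Toy usage: a 100k-token cache, but only 64 tokens enter the softmax.
rng = np.random.default_rng(0)
L, d = 100_000, 64
K = rng.standard_normal((L, d), dtype=np.float32)
V = rng.standard_normal((L, d), dtype=np.float32)
q = rng.standard_normal(d, dtype=np.float32)
print(topk_sparse_attention(q, K, V).shape)     # (64,)
```

The point Masad's argument turns on is visible in the shape of this computation: the savings come from restricting the softmax-weighted read to k cache entries, a purely architectural change that involves no other model's outputs.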
