Is Anthropic using a ByteDance-style architecture? Claude Mythos nearly quadruples GPT-5.4's baseline score.

According to 1M AI News monitoring, Anthropic has declined to publicly disclose the architecture of Mythos, but one anomalous test score has sparked community speculation. Anthropic's official system card shows that on the GraphWalks BFS test (which requires the model to perform breadth-first search over a complex graph structure), Mythos scored 80.0%, Opus 4.6 scored 38.7%, and GPT-5.4 only 21.4%. The gaps on other benchmarks are nowhere near this large.
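To make concrete what the benchmark asks of a model, here is a minimal reference breadth-first search in Python. This is an illustrative sketch of the traversal task, not the GraphWalks harness itself; the graph and function names are my own.

```python
from collections import deque

def bfs_order(graph, start):
    """Return nodes in breadth-first visit order over an adjacency-list graph.

    This is the kind of multi-step, stateful search that GraphWalks asks a
    model to carry out in-context over a graph described in the prompt.
    """
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

# Small example graph: A fans out to B and C, both of which reach D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_order(graph, "A"))  # → ['A', 'B', 'C', 'D']
```

Holding the visited set and frontier queue consistent across many steps is exactly the kind of long-horizon bookkeeping that standard autoregressive models tend to drop.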

Meta machine learning engineer Chris Hayduk was the first to point out that this anomalous spike points to a specific architecture: the looped language model. ByteDance's Seed team published a paper last October proposing LoopLM, whose core idea is to run the same set of Transformer layers over the input for multiple rounds inside the model itself, rather than "thinking" by generating large amounts of text, as the current paradigm does. Turing Award winner Yoshua Bengio is a co-author of the paper. The paper explicitly identifies graph search as a theoretical strength of this architecture, and among the open-sourced small Ouro models, the 1.4-billion-parameter version matches the performance of a standard model of roughly 4 billion parameters.
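The weight-tying idea behind a looped model can be shown in a minimal NumPy sketch. This is a deliberate simplification, assuming a "layer" is just a linear map plus a nonlinearity (no attention, no residuals); the function names are illustrative and not taken from the LoopLM paper. The point it demonstrates: looping one layer four times spends the compute of four layers while holding the parameter count of one.

```python
import numpy as np

def layer(x, W):
    # A toy stand-in for one Transformer layer: linear map + nonlinearity.
    return np.tanh(x @ W)

def standard_forward(x, weights):
    # Standard stack: a distinct weight matrix per layer.
    for W in weights:
        x = layer(x, W)
    return x

def looped_forward(x, W, n_loops):
    # Looped model: the SAME weights applied n_loops times, so effective
    # depth (compute) grows while the parameter count stays fixed.
    for _ in range(n_loops):
        x = layer(x, W)
    return x

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((1, d))

weights = [rng.standard_normal((d, d)) for _ in range(4)]  # 4 distinct layers
W = rng.standard_normal((d, d))                            # 1 shared layer

n_params_standard = sum(w.size for w in weights)  # 4 * d * d
n_params_looped = W.size                          # d * d
print(n_params_standard, n_params_looped)         # → 256 64
```

This parameter/compute decoupling is consistent with the reported Ouro result, where a 1.4B looped model tracks a ~4B standard model: extra inference rounds substitute for extra layers.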

A second clue: on SWE-bench, Mythos consumes only one fifth of the tokens that Opus 4.6 does, yet its inference is slower. Normally a model that outputs fewer tokens is also faster; but if the computation is hidden in the model's internal iterative passes, the contradiction resolves itself.

Anthropic has classified the architecture as "research-sensitive information" and declined to comment. All of this remains speculation, but if true, it would mean the architectural breakthrough behind the next generation of frontier models partly traces back to publicly available research from a Chinese team.
