According to 1M AI News monitoring, Cursor has released the Composer 2 Technical Report, disclosing its complete training recipe for the first time. The model is built on the Kimi K2.5 base, a Mixture-of-Experts (MoE) architecture with 1.04 trillion total parameters and 320 billion activated parameters. Training proceeds in two stages: continued pre-training on code data to strengthen coding knowledge, followed by large-scale reinforcement learning to improve end-to-end coding ability. The RL environment fully simulates real-world Cursor usage, including file editing, terminal operations, code search, and other tool invocations, so the model learns under conditions close to production.
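The report does not publish the environment's interface, but an agentic-RL setup of the kind described can be sketched as an environment exposing tool calls (file editing, terminal, code search) plus a task reward. All class, method, and reward details below are illustrative assumptions, not Cursor's actual API:

```python
# Hypothetical sketch of an RL coding environment with tool invocations.
# Names and the reward rule are illustrative only; the report does not
# disclose Cursor's real interface.
from dataclasses import dataclass, field

@dataclass
class CodingEnv:
    files: dict = field(default_factory=dict)   # path -> content
    log: list = field(default_factory=list)     # trace of tool calls

    def edit_file(self, path: str, content: str) -> str:
        # Tool 1: file editing.
        self.files[path] = content
        self.log.append(("edit", path))
        return "ok"

    def run_terminal(self, cmd: str) -> str:
        # Tool 2: terminal operation. A real environment would execute
        # this in a sandbox; here we only record the call.
        self.log.append(("terminal", cmd))
        return f"ran: {cmd}"

    def search(self, query: str) -> list:
        # Tool 3: code search over the workspace.
        self.log.append(("search", query))
        return [p for p, c in self.files.items() if query in c]

    def reward(self) -> float:
        # Placeholder reward: real setups typically score against tests;
        # here, 1.0 if any file contains "fixed", else 0.0.
        return 1.0 if any("fixed" in c for c in self.files.values()) else 0.0

env = CodingEnv()
env.edit_file("main.py", "bug fixed")
env.run_terminal("pytest -q")
print(env.search("fixed"), env.reward())  # ['main.py'] 1.0
```

In a production-grade version each episode would start from a real repository snapshot, and the reward would come from test suites or human-derived signals rather than a string match.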
The report also discloses how the in-house benchmark CursorBench was built: tasks are collected from the engineering team's real coding sessions rather than constructed artificially. The Kimi K2.5 base scored only 36.0 on this benchmark, while after the two-stage training Composer 2 reached 61.3, a roughly 70% improvement. Cursor claims that its inference cost is significantly lower than that of frontier-model APIs such as GPT-5.4 and Claude Opus 4.6, placing it on the Pareto frontier of accuracy versus cost.
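The ~70% figure follows directly from the two reported scores:

```python
# Relative improvement from the reported CursorBench scores.
base_score = 36.0      # Kimi K2.5 base
composer2_score = 61.3 # after two-stage training

improvement = (composer2_score - base_score) / base_score
print(f"{improvement:.0%}")  # 70%
```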
