According to monitoring by Dongcha Beating, Meituan has launched a new model, LongCat-2.0-Preview, on the LongCat API platform. The update log is dated April 20, but Meituan has not yet published an official announcement or technical report. Previously, each model in the LongCat series (Flash-Chat, Flash-Thinking, Flash-Lite, Flash-Omni, Next) shipped with an official blog post and technical report and was simultaneously open-sourced on Hugging Face and GitHub. The update log for 2.0-Preview includes no open-source links; the model is available only through the API.
The update log lists three main capabilities: an agent-focused design with native support for tool invocation, multi-step reasoning, and long-context tasks; proficiency in code generation, workflow automation, and complex command execution; and deep integration with Claude Code, OpenClaw, OpenCode, and Kilo Code.
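Since the model is served through an API and advertises native tool invocation, clients such as Claude Code would typically talk to it via an OpenAI-style function-calling request. The sketch below builds such a request body; the base URL, model identifier, and `run_shell` tool are illustrative assumptions, not values confirmed by the update log.

```python
import json

# Placeholder endpoint and model name: assumptions for illustration only.
BASE_URL = "https://api.longcat.example/v1/chat/completions"
MODEL = "LongCat-2.0-Preview"

# A tool definition in the OpenAI-style function-calling schema that
# agent clients commonly use; "run_shell" is a hypothetical tool.
tools = [{
    "type": "function",
    "function": {
        "name": "run_shell",
        "description": "Execute a shell command and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

# Request body that would be POSTed to the chat-completions endpoint.
payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "List files in the current directory."}
    ],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

body = json.dumps(payload)
```

A real client would send `body` with an API key header and then execute any `tool_calls` the model returns, feeding the results back as `role: "tool"` messages.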
On April 24, several media outlets, citing insiders, reported more details: the model's total parameter count exceeds one trillion, it uses a MoE architecture, it supports a 1M-token context window, and its parameter count is comparable to that of DeepSeek V4, released the same day. Insiders said that training and inference for LongCat-2.0-Preview were completed entirely on domestic computing power, using 50,000 to 60,000 domestic accelerator cards, making it the largest-scale training task completed on domestic compute to date. During the testing phase, a free daily quota of 10 million tokens is provided.
