According to DynaInsight monitoring, the Twitter account 'AnxietyOverload' claimed that Kimi K3 is planned for release in Q3 with a parameter count exceeding 2.5 trillion; internal experiments have reportedly already tested context lengths well beyond 1 million tokens, though it remains uncertain whether the 1-million-token context will be made available to users. The leak also suggested that the current bottleneck keeping Kimi from shipping 1 million context is not technical but a matter of computational resources.
The backdrop to this rumor is that DeepSeek V4 Flash/Pro has already made 1-million-token context a public selling point, raising the bar for long-context capability among open-source pre-trained models. The current public version of Kimi, K2.6, offers a 256K context window. If K3 does bring 1 million context to the public release, Moonshot would be positioned to compete with DeepSeek V4 on both model scale and long-context capability at once.
