According to 1M AI News monitoring, the DeepSeek web platform has added a mode-switching feature. Two new icons, a lightning bolt and a diamond, now appear above the input box, representing "Quick Mode" and "Expert Mode," respectively. Quick Mode is suited to everyday conversation, providing real-time responses and supporting image and file recognition. Expert Mode is designed for complex reasoning tasks but currently supports neither file uploads nor multimodal input. The update was rolled out quietly, without an official announcement.
A community teardown revealed that Quick Mode is powered by DeepSeek 3.2, with a knowledge cutoff of July 2024, while Expert Mode points to an updated model, likely an early version of V4. User tests showed that Expert Mode excels at deep reasoning tasks such as physics simulation and mathematical problem-solving, but differs little from Quick Mode on simple tasks like creative writing. Some testers believe Expert Mode currently routes to a V4 Lite variant, with the full V4 still pending. The frontend code also contains a yet-to-be-launched third option called "Vision Mode"; however, reverse analysis indicated that it is not a standalone model but simply enables a visual-understanding parameter on top of Quick Mode.
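The teardown's claim can be pictured as a small piece of frontend routing logic. The sketch below is purely illustrative: every identifier in it (the mode names, `resolveMode`, the model strings) is an assumption for exposition, not DeepSeek's actual code. It shows the key idea that "Vision Mode" would reuse the Quick Mode model and only flip a flag, rather than pointing at a third model.

```typescript
// Hypothetical sketch of a frontend mode picker. All names here are
// illustrative assumptions, not DeepSeek's real identifiers.

type ModeId = "quick" | "expert" | "vision";

interface ResolvedMode {
  model: string;          // backend model the request is routed to
  visionEnabled: boolean; // whether visual understanding is switched on
  fileUpload: boolean;    // whether file attachments are accepted
}

const QUICK_MODEL = "deepseek-3.2";   // assumed name, per the teardown
const EXPERT_MODEL = "deepseek-next"; // assumed placeholder for the V4-class model

function resolveMode(mode: ModeId): ResolvedMode {
  switch (mode) {
    case "quick":
      return { model: QUICK_MODEL, visionEnabled: false, fileUpload: true };
    case "expert":
      // Expert Mode: stronger reasoning, but no uploads or multimodal input.
      return { model: EXPERT_MODEL, visionEnabled: false, fileUpload: false };
    case "vision":
      // "Vision Mode" is not a separate model: it reuses the Quick Mode
      // model and merely enables the visual-understanding parameter.
      return { model: QUICK_MODEL, visionEnabled: true, fileUpload: true };
  }
}
```

Under this reading, shipping "Vision Mode" later would require no new model deployment, only exposing a UI entry for a capability the Quick Mode backend already has.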
This marks DeepSeek's first product stratification since its popularity surged early last year. Previously, the web platform was entirely free with no feature distinctions; the new mode split effectively routes users to different model endpoints based on their needs, serving as a compute-scheduling strategy. Once this architecture is in place, there is no longer any technical barrier to introducing a paid tier or mode-based usage limits.
