OpenClaw releases v2026.4.10, highlighting a proactive memory sub-agent and deep Codex integration

BlockBeats News, April 11: the open-source AI client framework OpenClaw has released its latest version, v2026.4.10. The update focuses on both feature expansion and security hardening, cumulatively merging fixes for more than a hundred issues.


Among the core new features, the most notable is the new Active Memory plugin, which runs a dedicated memory sub-agent before the main reply is generated. The sub-agent automatically retrieves user preferences, historical context, and other relevant details without requiring the user to manually issue a "remember this" command, and it supports multiple context modes and detailed debugging options.
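To make the flow concrete, here is a minimal sketch of a pre-reply memory pass in TypeScript. The announcement does not document OpenClaw's actual API, so every name below (fetchMemories, reply, the Memory shape, the relevance cutoff) is an illustrative assumption, not the plugin's real interface.

```ts
// Minimal sketch of a "memory sub-agent runs before the main reply" flow.
// All names here are illustrative, not OpenClaw's real API.

type Memory = { kind: string; value: string; score: number };

// Stand-in for the memory sub-agent: a real one would query a vector store
// or preference database; canned data keeps the sketch self-contained.
async function fetchMemories(userId: string, query: string): Promise<Memory[]> {
  return [
    { kind: "preference", value: "prefers concise answers", score: 0.92 },
    { kind: "history", value: "asked about Codex auth yesterday", score: 0.71 },
    { kind: "noise", value: "unrelated chit-chat", score: 0.18 },
  ];
}

async function reply(userId: string, prompt: string): Promise<string> {
  // 1. The memory pass fires on every turn, which is what removes the need
  //    for an explicit "remember this" trigger.
  const memories = await fetchMemories(userId, prompt);
  const context = memories
    .filter((m) => m.score > 0.5) // assumed relevance cutoff
    .map((m) => `[${m.kind}] ${m.value}`)
    .join("\n");

  // 2. The main agent answers with the recalled context prepended.
  return `Context:\n${context}\n\nAnswer to: ${prompt}`;
}

reply("user-42", "How do I enable streaming?").then(console.log);
```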


On the Codex integration side, the release adds a standalone Codex provider and a plugin hosting server, giving codex/gpt-* series models independent authentication, native threading, model discovery, and context compression, fully decoupled from the original OpenAI path. In addition, experimental MLX-based local speech synthesis is now available on macOS, and the video generation module integrates the Seedance 2.0 model.
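One plausible way to picture the decoupling is a separate provider entry that claims the codex/gpt-* models and carries its own credentials, as in the hedged sketch below. The config key names are assumptions; OpenClaw's real provider schema is not shown in the announcement.

```ts
// Hedged sketch: key names are assumptions, not OpenClaw's provider schema.
interface CodexProviderConfig {
  apiKey: string;        // independent authentication, separate from the OpenAI path
  modelPattern: RegExp;  // which models this provider claims
  nativeThreads: boolean;
  contextCompression: boolean;
}

const codexProvider: CodexProviderConfig = {
  apiKey: "sk-codex-placeholder", // stands in for a real credential
  modelPattern: /^(codex|gpt-)/,  // codex/gpt-* series per the release notes
  nativeThreads: true,
  contextCompression: true,
};

// Routing sketch: matching models go to the Codex provider; everything else
// falls through to the original OpenAI path, which stays untouched.
function resolveProvider(model: string): "codex" | "openai" {
  return codexProvider.modelPattern.test(model) ? "codex" : "openai";
}

console.log(resolveProvider("gpt-5-codex")); // "codex"
console.log(resolveProvider("claude-4.5"));  // "openai"
```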


On the platform and channel front, Microsoft Teams gains message pinning, reactions, and related operations; Matrix supports MSC4357 real-time typing animation; the QQ bot channel adds a configurable streaming output mode; and Feishu user-agent identification has been standardized.


Security hardening is another focus of this update, covering browser SSRF defenses, a sandbox navigation whitelist, stronger exec preflight checks, plugin dependency scanning, Gmail token redaction, and WebSocket frame-limit handling. The release also fixes long-standing channel issues, including media sent via WhatsApp being silently dropped, broken media downloads in Teams, and multi-account routing problems in Telegram and Matrix.
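Of the listed hardening items, browser SSRF defense is the most self-contained to illustrate. The sketch below shows the generic shape of such a pre-flight URL check (rejecting non-HTTP schemes and private or link-local hosts); it is not OpenClaw's actual implementation, and a production check would additionally resolve DNS to defeat rebinding tricks.

```ts
// Generic shape of a browser SSRF pre-flight check; not OpenClaw's code.
const PRIVATE_HOST_PATTERNS = [
  /^localhost$/i,
  /^127\./,
  /^10\./,
  /^192\.168\./,
  /^172\.(1[6-9]|2\d|3[01])\./,
  /^169\.254\./, // link-local, includes cloud metadata endpoints
  /^\[?::1\]?$/, // IPv6 loopback
];

function isUrlAllowed(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // unparseable input is rejected outright
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;
  return !PRIVATE_HOST_PATTERNS.some((p) => p.test(url.hostname));
}

console.log(isUrlAllowed("https://example.com"));     // true
console.log(isUrlAllowed("http://169.254.169.254/")); // false
console.log(isUrlAllowed("file:///etc/passwd"));      // false
```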


At the framework layer, the default LLM idle timeout has been extended to 120 seconds, sub-agent completion notifications are now deduplicated, and an infinite hang triggered by empty vLLM inference-model calls has been fixed.
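As a rough illustration of the first two items, the sketch below pairs an assumed config key carrying the 120-second idle timeout with a simple de-duplication set for completion notifications. The 120-second figure comes from the release notes; the key name and event shape are hypothetical.

```ts
// The 120 s figure is from the release notes; the key name is an assumption.
const frameworkDefaults = {
  llmIdleTimeoutMs: 120_000,
};
console.log(`idle timeout: ${frameworkDefaults.llmIdleTimeoutMs} ms`);

// De-duplication sketch: each sub-agent completion event carries an id,
// and repeats are dropped before the user is notified.
const seenCompletions = new Set<string>();

function notifyCompletion(subAgentId: string, runId: string): boolean {
  const key = `${subAgentId}:${runId}`;
  if (seenCompletions.has(key)) return false; // duplicate, suppress it
  seenCompletions.add(key);
  console.log(`sub-agent ${subAgentId} finished run ${runId}`);
  return true;
}

notifyCompletion("memory", "run-1"); // notifies
notifyCompletion("memory", "run-1"); // suppressed as a duplicate
```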
