According to 1M AI News monitoring, the open-source AI agent platform OpenClaw has released version 2026.4.2. The release includes two breaking changes, around 15 feature improvements, and over 30 fixes.
The two breaking changes continue the plugin-architecture externalization begun in 2026.3.31: xAI's `x_search` configuration and Firecrawl's `web_fetch` configuration have moved from core config paths to plugin-scoped paths. Old configurations can be migrated automatically with `openclaw doctor --fix`.
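For intuition, the migration can be sketched as moving legacy top-level keys under plugin-scoped paths, roughly what `openclaw doctor --fix` automates. The key names `x_search` and `web_fetch` come from the release notes; the target paths and config structure below are illustrative assumptions, not OpenClaw's actual schema.

```python
# Hypothetical sketch of the config migration. Target paths are assumptions.
LEGACY_TO_PLUGIN = {
    "x_search": "plugins.xai.x_search",         # xAI search config
    "web_fetch": "plugins.firecrawl.web_fetch",  # Firecrawl fetch config
}

def migrate(config: dict) -> dict:
    """Return a copy with legacy top-level keys moved under plugin paths."""
    out = {k: v for k, v in config.items() if k not in LEGACY_TO_PLUGIN}
    for legacy, dotted in LEGACY_TO_PLUGIN.items():
        if legacy not in config:
            continue
        node = out
        *parents, leaf = dotted.split(".")
        for part in parents:
            node = node.setdefault(part, {})  # create intermediate tables
        node[leaf] = config[legacy]           # relocate the legacy value
    return out

print(migrate({"x_search": {"api_key": "xai-..."}}))
```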
The densest theme of this release is the security hardening and centralization of vendor HTTP handling, with contributor vincentkoc submitting eight related fixes. Request authentication, proxy settings, TLS policies, and request-header handling for the shared HTTP, streaming, and WebSocket paths, previously scattered across vendor adapter code, have now been consolidated: native/proxy request strategies for GitHub Copilot, Anthropic, and OpenAI-compatible endpoints are pinned so they cannot be spoofed or silently inherit native defaults; media requests such as audio and images are routed through the shared HTTP path; image-generation endpoints no longer infer private-network access rights from the configured base URL; and webhook secrets are compared with timing-safe comparison functions consistently across channels. For users self-hosting or integrating multiple third-party vendors, these changes close a series of request-spoofing and policy-inheritance vulnerabilities.
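The timing-safe comparison mentioned above can be illustrated with Python's standard library; the function name below is illustrative, not OpenClaw's actual code.

```python
import hmac

def verify_webhook_token(received: str, expected: str) -> bool:
    # hmac.compare_digest runs in time independent of where the strings first
    # differ, so an attacker cannot recover the secret byte by byte from
    # response-latency differences, unlike a plain `==` on strings.
    return hmac.compare_digest(received.encode(), expected.encode())

print(verify_webhook_token("s3cret", "s3cret"))  # True
print(verify_webhook_token("s3creT", "s3cret"))  # False
```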
On the feature side, the Android client adds Google Assistant integration, letting users launch OpenClaw directly from the voice assistant and speak prompts into the conversation interface. Default behaviors have changed: gateway and node hosts now default to `security=full` with `ask=off`, enforcing a strict security policy without step-by-step confirmation prompts. The plugin system gains a `before_agent_reply` hook, which lets a plugin short-circuit the whole turn with a synthetic reply before the LLM responds. Task Flow continues to improve, adding managed subtask generation and sticky cancel intents, so external orchestrators can immediately halt further scheduling while letting active subtasks run to completion.
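The short-circuit pattern behind a `before_agent_reply`-style hook can be sketched as follows; the hook signature, context shape, and return convention are assumptions for illustration, not OpenClaw's actual plugin API.

```python
# Hypothetical hook: return a synthetic reply to skip the LLM, or None to proceed.
def before_agent_reply(ctx: dict):
    if ctx["prompt"].strip().lower() == "ping":
        return {"text": "pong"}  # canned reply, no model call
    return None                  # fall through to the normal LLM path

def call_llm(prompt: str) -> str:
    # Stand-in for the real model call.
    return f"LLM answer to: {prompt}"

def run_agent(prompt: str) -> str:
    hook_reply = before_agent_reply({"prompt": prompt})
    if hook_reply is not None:
        return hook_reply["text"]  # plugin short-circuited the turn
    return call_llm(prompt)        # normal path

print(run_agent("ping"))   # pong
print(run_agent("hello"))  # LLM answer to: hello
```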
Other fixes: Anthropic models' internal `antml:thinking` tags, which previously leaked into user-visible text, are now filtered on the output side; parameter loss in Kimi Coding tool invocations, caused by incompatibilities between the Anthropic and OpenAI formats, is addressed through normalization; and MS Teams streams that exceed the 4000-character limit no longer repeat previously transmitted content.
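The MS Teams fix amounts to tracking how much of the stream buffer has already been sent, so flushes past the limit emit only the new tail. The 4000-character figure is from the release notes; the flush function below is a minimal sketch, not OpenClaw's streaming code.

```python
LIMIT = 4000  # per-message character limit reported for MS Teams streaming

def flush_updates(buffer: str, sent_len: int, limit: int = LIMIT):
    """Return (chunks, new_sent_len) covering only not-yet-transmitted text."""
    chunks = []
    while sent_len < len(buffer):
        chunk = buffer[sent_len:sent_len + limit]
        sent_len += len(chunk)   # advance the high-water mark, no resends
        chunks.append(chunk)
    return chunks, sent_len

# Simulate a growing stream: the second flush must not resend the first part.
chunks1, sent = flush_updates("a" * 4500, 0)
chunks2, sent = flush_updates("a" * 4500 + "b" * 100, sent)
print([len(c) for c in chunks1], [len(c) for c in chunks2])  # [4000, 500] [100]
```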
