BlockBeats News, March 5th — Web3 security firm GoPlus published an article describing a recent "self-attack" security incident involving the AI development tool OpenClaw. While performing an automated task that shelled out to create a GitHub Issue, the system constructed a malformed Bash command and inadvertently triggered command injection, exposing a large number of sensitive environment variables.
In the incident, the AI-generated string contained the word "set" wrapped in backticks, which Bash interpreted as command substitution and executed automatically. Because "set" with no arguments prints all current shell and environment variables, more than 100 lines of sensitive information (including Telegram keys, authentication tokens, and other credentials) were written directly into the GitHub Issue and publicly disclosed.
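The failure mode described above can be reproduced in a few lines. The sketch below is a hypothetical reconstruction (the variable names and the demo secret are illustrative, not from the actual incident): an AI-generated title containing backticked "set" is interpolated into a shell command, and the shell's command substitution dumps every variable into the output.

```python
import os
import subprocess

# Illustrative secret planted in the environment (not a real token).
os.environ["DEMO_TG_TOKEN"] = "tg-bot-token-123"

# AI-generated text that happens to contain `set` in backticks.
title = "Fix `set` handling in config parser"

# Unsafe: the string is handed to a shell, which performs command
# substitution on the backticks. "set" with no arguments prints every
# shell variable, so the planted secret lands in the command's output.
unsafe = subprocess.run(f"echo {title}", shell=True,
                        capture_output=True, text=True)
print("tg-bot-token-123" in unsafe.stdout)

# Safe: pass the string as a single argv element with no shell involved;
# the backticks remain literal characters.
safe = subprocess.run(["echo", title], capture_output=True, text=True)
print("tg-bot-token-123" in safe.stdout)
```

The difference is entirely in how the command is built: an argument vector never passes through shell parsing, so no metacharacter in the AI-generated text can trigger execution.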
GoPlus recommends that AI automation in development and testing scenarios use API calls wherever possible instead of concatenating shell commands. It further advises following the principle of least privilege to isolate environment variables, disabling high-risk execution modes, and introducing a manual review step for critical operations.
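The environment-isolation advice can be sketched concretely. Assuming a Python-based automation harness (the variable names here are hypothetical), passing a minimal environment to the child process limits the blast radius: even if injection succeeds and "set" runs, only the variables that were deliberately handed over can leak.

```python
import os
import subprocess

# An unrelated secret that the child process has no business seeing.
os.environ["UNRELATED_SECRET"] = "should-not-leak"

# Least privilege: hand the child only what it needs (illustrative names).
minimal_env = {
    "PATH": os.environ["PATH"],      # needed to find binaries
    "GH_TOKEN": "scoped-demo-token", # a single, narrowly scoped credential
}

# Even if injection forces "set" to run, only the minimal environment
# is visible to the shell.
out = subprocess.run("set", shell=True, capture_output=True,
                     text=True, env=minimal_env)
print("should-not-leak" in out.stdout)
print("scoped-demo-token" in out.stdout)
```

Combined with argv-style invocation instead of string concatenation, this keeps a single faulty command from disclosing the entire credential set, as happened in the OpenClaw incident.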
