
Tencent Open-Sources Memory System; OpenClaw Saves Up to 61% on Tokens

According to monitoring by Watchful Beating, the Tencent Cloud Database team spent six months tackling the problem of long-dialogue amnesia and has recently open-sourced TencentDB Agent Memory. It is a local-first memory engine for AI agents that uses SQLite + sqlite-vec as the default local backend, can be installed as an OpenClaw plugin, and also supports Hermes Gateway integration.

Its core idea is not to stuff historical dialogues directly into a vector database, but to split memory into two structures. Long-term memory is distilled layer by layer, from L0 raw dialogue through L1 atomic facts and L2 scene chunks up to L3 user profiles. Short-term task memory offloads lengthy tool logs to refs files, summarizes each step into a jsonl journal, and uses a Mermaid canvas to retain the task structure and node indices.
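The short-term track described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual on-disk format: all paths, file names, and field names (`refs/`, `steps.jsonl`, `node_id`, `ref`) are assumptions made for the example.

```python
import json
import tempfile
from pathlib import Path

# Illustrative working directory; the real engine's layout may differ.
WORKDIR = Path(tempfile.mkdtemp())

def offload_tool_log(step_id: str, raw_log: str) -> str:
    """Write a lengthy tool log out to a refs file; only its path stays in context."""
    refs_dir = WORKDIR / "refs"
    refs_dir.mkdir(parents=True, exist_ok=True)
    ref_path = refs_dir / f"{step_id}.log"
    ref_path.write_text(raw_log)
    return str(ref_path)

def append_step_summary(step_id: str, summary: str, ref_path: str) -> None:
    """Append a one-line step summary to the task's jsonl journal."""
    with open(WORKDIR / "steps.jsonl", "a") as f:
        f.write(json.dumps({"node_id": step_id, "summary": summary, "ref": ref_path}) + "\n")

def render_canvas(edges) -> str:
    """Render the task structure as a Mermaid flowchart; node ids index back into the journal."""
    lines = ["flowchart TD"]
    for src, dst in edges:
        lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)

# A multi-thousand-line tool log shrinks to one journal line plus one canvas node.
ref = offload_tool_log("s1", "...3000 lines of raw tool output...")
append_step_summary("s1", "fetched product page", ref)
print(render_canvas([("s1", "s2"), ("s2", "s3")]))
```

The point of this shape is that the prompt carries only the canvas and the one-line summaries, while the full logs stay on disk behind their refs.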

In complex workflows with more than 30 steps, the agent usually reads only the lightweight Mermaid structure diagram, returning to the original logs via node_id when detailed verification is needed. Official benchmarks show that after integration with OpenClaw, token consumption on WideSearch tasks fell from 221.31M to 85.64M (a 61.38% reduction), with a relative pass-rate increase of 51.52%. On the PersonaMem long-term memory evaluation, accuracy rose from 48% to 76%. The value of this design is that it does not swallow historical detail in a one-shot summary; it preserves the complete path from high-level profiles, through the task canvas, down to the underlying raw text.
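The node_id drill-down can be sketched as below. The jsonl journal layout and its field names (`node_id`, `ref`) are assumptions for illustration, not the project's real schema: each journal line is assumed to carry a node id plus a path to the offloaded raw log.

```python
import json
from pathlib import Path

def resolve_node(journal: Path, node_id: str):
    """Find a step by node_id in the jsonl journal and load its full raw log.

    Returns the raw log text, or None if the node id is not in the journal.
    """
    for line in journal.read_text().splitlines():
        rec = json.loads(line)
        if rec.get("node_id") == node_id:
            return Path(rec["ref"]).read_text()
    return None
```

In normal operation the agent never calls this; it only pays the cost of the full log when a detail actually needs to be re-verified.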
