Claude Code v2.1.89 fixes long-session cache invalidation, addressing excessive token consumption

According to monitoring by 1M AI News, Anthropic has released Claude Code v2.1.89 (the npm package is installable; the GitHub releases page has not yet been updated), fixing multiple caching issues that could cause API cost anomalies, along with a batch of stability bugs. Earlier, Anthropic engineer Lydia Hallie confirmed that users were hitting usage limits "much faster than anticipated," and the community had reported two caching bugs that could inflate API costs by 10-20x.

The caching-related issues addressed in this update include: tool schema bytes changing mid-session, invalidating the prompt cache; and nested CLAUDE.md files being re-injected dozens of times over a long session. Both caused token consumption in long sessions to far exceed expectations. The update also fixes a StructuredOutput schema caching bug (which previously caused roughly 50% of workflows to fail) and a memory leak in which large JSON inputs were retained as LRU cache keys.
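The LRU-key memory leak is a general pitfall worth illustrating. The sketch below is not Claude Code's actual implementation; it is a minimal Python illustration of the failure mode described above: a cache keyed on the raw input pins every large input in memory for the cache's lifetime, even after the caller drops its own reference. The fix pattern shown (keying on a digest) is one common remedy, not necessarily the one Anthropic shipped.

```python
# Illustrative sketch only -- not Claude Code's actual code.
from functools import lru_cache
import hashlib

@lru_cache(maxsize=128)
def summarize(json_text: str) -> int:
    """Pretend to parse; the raw string itself becomes the cache key."""
    return len(json_text)

# The 1 MB string below stays referenced by the cache's key table even
# after no caller holds it -- this is the leak pattern:
summarize("x" * 1_000_000)

# One common fix: key the cache on a small digest of the input instead,
# so the cache retains ~64-byte hex strings rather than megabyte inputs.
_results: dict[str, int] = {}

def summarize_hashed(json_text: str) -> int:
    key = hashlib.sha256(json_text.encode()).hexdigest()
    if key not in _results:
        _results[key] = len(json_text)
    return _results[key]
```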

Other notable changes:

1. Editing extremely large files (over 1GB) with the Edit tool no longer triggers OOM crashes
2. Resuming a session that contains old tool output no longer crashes
3. When usage limits are hit, the CLI no longer displays a misleading "Rate limit reached" message; it now shows the actual error along with suggested next steps
4. Thinking summaries are no longer generated by default; they must be enabled manually by setting showThinkingSummaries: true in settings
5. Added the CLAUDE_CODE_NO_FLICKER=1 environment variable, which enables flicker-free virtual-scroll rendering
6. Added a PermissionDenied hook, which lets the model retry after auto mode rejects a command by returning {retry: true}
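A quick usage sketch for items 4 and 5. Only the key name and variable name come from the release notes; the settings-file location is assumed to be Claude Code's usual ~/.claude/settings.json, and how the env var is read is likewise an assumption.

```shell
# Item 5: opt in to flicker-free virtual-scroll rendering for one run
# (assumption: the flag is read from the environment at startup).
CLAUDE_CODE_NO_FLICKER=1 claude

# Item 4: thinking summaries are now off by default; per the release note,
# re-enable them by adding "showThinkingSummaries": true to your settings
# file (assumed location: ~/.claude/settings.json).
```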
