Morph, a YC-Backed Company, Replaces Claude Code's Native Compression with a Proprietary Model, Claiming 37% Faster End-to-End Throughput for Long Sessions

According to 1M AI News monitoring, Morph, a YC-incubated AI programming infrastructure company, has released a Claude Code plugin that integrates two core features: WarpGrep, a code search sub-agent, and FlashCompact, a dedicated context compression model. Morph claims the plugin speeds up Claude Code's long sessions end-to-end by 37%, while reducing token consumption and improving accuracy.

FlashCompact is a context compression model designed for programming agents. With a throughput of 33,000 tokens/s, it compresses context by 50%-70% in under 2 seconds and is intended to replace Claude Code's built-in automatic compression mechanism. When Claude Code's context window approaches the 200K token limit, automatic compression is triggered, which can lose key information such as file paths, error messages, and debugging state. Morph claims FlashCompact reduces the compression trigger frequency by 3-4x. WarpGrep, meanwhile, is a code search sub-agent trained with reinforcement learning; it runs in an independent context window to avoid polluting the main agent's context with search results. A single search takes less than 6 seconds, and Morph reports it ranks first on the SWE-Bench Pro benchmark.
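To make the mechanism concrete, here is a minimal sketch of threshold-triggered context compaction as described above. This is an illustration only: the function names, the 4-characters-per-token estimate, and the keep-recent-messages fallback are assumptions for the sketch, not Morph's or Anthropic's actual API; FlashCompact would instead summarize with a model while preserving file paths, error messages, and debugging state.

```python
# Hypothetical sketch of threshold-triggered context compaction.
# All names and heuristics here are illustrative assumptions.

CONTEXT_LIMIT = 200_000      # Claude Code's context window, per the article
COMPACT_THRESHOLD = 0.9      # trigger compaction as the window nears the limit

def estimate_tokens(messages):
    # Crude heuristic: roughly 4 characters per token.
    return sum(len(m) for m in messages) // 4

def compress(messages, ratio=0.6):
    # Stand-in for a compression model: keep the most recent messages
    # that fit within (1 - ratio) of the current size, i.e. a ~60% cut,
    # matching the 50%-70% range the article cites.
    budget = int(estimate_tokens(messages) * (1 - ratio))
    kept, used = [], 0
    for m in reversed(messages):
        cost = len(m) // 4
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return list(reversed(kept))

def maybe_compact(messages):
    # Only compact when the estimated size crosses the threshold;
    # a better compressor means this fires less often.
    if estimate_tokens(messages) >= COMPACT_THRESHOLD * CONTEXT_LIMIT:
        return compress(messages)
    return messages
```

The point of the sketch is the trigger condition: a model that compresses more per pass (or loses less information) lets the agent run longer between compactions, which is the basis of Morph's claimed 3-4x reduction in trigger frequency.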

The plugin also supports AI programming tools such as Cursor, Windsurf, Codex, Amp, OpenCode, and Antigravity. Morph's clients include JetBrains, Vercel, Webflow, and Binance, among others.
