Sentry Co-Founder: LLMs Are Slowing Long-Term Development Velocity; OpenClaw Has Generated Too Much Code to Recover

According to 1M AI News's monitoring, Sentry co-founder David Cramer posted on X today, saying outright that he is "completely confident" large language models are currently not a net productivity gain. He believes LLMs have lowered the barrier to entry while continuing to produce increasingly complex, unmaintainable code, which, in his own experience, is slowing long-term development velocity.

What Cramer questions, he says, is "agentic engineering": the approach of letting models auto-generate code and pushing it straight to production. He argues that the resulting code quality is significantly worse and becomes a net burden once enough of it accumulates. Specific issues he cites include poor performance on incremental development in complex codebases, an inability to generate interfaces that follow a language's idiomatic style, and "pure slop test generation." He singles out OpenClaw: "If I were to wager, tools like OpenClaw have become irrecoverable due to the sheer volume of generated code," and stresses that "software remains hard to build, and it has never been about minimizing or maximizing the number of lines of code."

Cramer adds that this assessment is based mainly on his experience doing feature development in codebases of ordinary complexity. His recent contributions, he says, were driven by finding the work "interesting" rather than "easier": a fundamental shift at the psychological level, with no significant difference in actual time spent.
