
a16z Co-Founder: In the Age of Agents, What Truly Matters Has Changed

The best programmers of the future may not need to write code themselves, but they will need highly developed logical reasoning and system-architecture thinking, because AI is turning code into a cheap commodity.
Original Video Title: Marc Andreessen introspects on Death of the Browser, Pi + OpenClaw, and Why "This Time Is Different"
Original Video Source: a16z, Latent Space
Original Article Translation: FuturePulse


Source of Signal:


This is a recent interview with a16z co-founder Marc Andreessen on the Latent Space podcast.


He is a prominent American internet entrepreneur, a key figure in the early development of the internet, and, since co-founding a16z, one of Silicon Valley's most prominent investors.


The entire conversation revolves around the development history and latest trends of AI, making it a highly recommended read.


1. This Wave of AI Is Not a Sudden Breakthrough, but the First Comprehensive "Getting Things Done" After an 80-Year Technological Marathon



· This wave of AI is not a sudden breakthrough, but the result of an 80-year technological marathon.


· Marc Andreessen directly refers to the current state as an "80-year overnight success," implying that what appears as a sudden explosion in the public eye is actually the concentrated release of decades of technological buildup.


· He traces this technological thread back to early neural network research and emphasizes that the industry today has essentially accepted "neural networks as the right architecture."


· In his narrative, the key milestones are not single moments but a series of accumulating layers: AlexNet, the Transformer, ChatGPT, reasoning models, and then agents and self-improvement.


· He particularly emphasizes that this time, it's not just text generation getting stronger, but four types of capabilities emerging simultaneously: LLMs, reasoning, coding, and agents/recursive self-improvement.


· His belief that "this time is different" rests not on a more appealing narrative but on the fact that these capabilities have already begun to work on real-world tasks.


2. The Agent Architecture Represented by Pi and OpenClaw Is a More Profound Software-Architecture Change Than the Chatbot



· He describes the agent very concretely: essentially "LLM + shell + file system + markdown + cron/loop." In this structure, the LLM is the reasoning and generation core, the shell provides the execution environment, the file system stores state, markdown makes that state human-readable, and cron/loop provides periodic wake-ups and task advancement.


· He believes this combination matters because, apart from the model itself, every component is a mature, well-understood, reusable part of the software world.


· The agent's state is saved in files, allowing migration across models and runtimes; the underlying model can be swapped out while memory and state are retained.


· He repeatedly emphasizes introspection: the agent knows its own files, can read its own state, and can even rewrite its own files and functions, moving towards "extend yourself."


· In his view, the real breakthrough is not just that "the model will answer," but that the agent can leverage the existing Unix toolchain to bring in the potential of the entire computer.
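The "LLM + shell + file system + markdown + cron/loop" recipe described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual Pi or OpenClaw implementation: `call_llm` is a stand-in for whatever model API the agent uses (here it returns a fixed command so the sketch is self-contained), and a real deployment would run under cron or a long-lived loop rather than once.

```python
import subprocess
from pathlib import Path

STATE_FILE = Path("agent_state.md")  # markdown keeps the agent's state human-readable

def call_llm(prompt: str) -> str:
    """Stand-in for the reasoning core; a real agent would call a model API.
    Returns a fixed shell command so the sketch runs without any model."""
    return "echo hello from the agent"

def run_once() -> str:
    # 1. Load state from the file system (it survives model and runtime swaps).
    state = STATE_FILE.read_text() if STATE_FILE.exists() else "# Agent State\n"
    # 2. Ask the LLM for the next action, given the current state.
    command = call_llm(f"State:\n{state}\nPropose one shell command.")
    # 3. Execute it through the shell: the agent's hands on the machine.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    output = result.stdout.strip()
    # 4. Write the outcome back into the markdown file as persistent memory.
    STATE_FILE.write_text(state + f"\n- ran `{command}` -> {output}\n")
    return output

if __name__ == "__main__":
    # cron (or a sleep loop) would provide the periodic wake-up in practice.
    print(run_once())
```

Because the state lives in a plain markdown file rather than inside the model, swapping `call_llm` to a different provider leaves the agent's memory intact, which is exactly the portability point made above.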


3. Browsers, Traditional GUIs, and "Hand-Operated" Software Will Gradually Give Way to Agent-First Interaction


· Marc Andreessen has explicitly stated that in the future, "you might not need a user interface anymore."


· He further points out that the primary users of future software may not be humans, but rather "other bots."


· This means that many interfaces designed today for human clicking, browsing, and form-filling will be demoted to an execution layer called by agents.


· In this world, humans are more like goal setters: they tell the system what they want, and then the agent calls services, operates software, and completes processes.


· He connects this change to a broader software future: high-quality software will become more "abundant" and will no longer be a scarce product crafted by a few engineers.


· He also predicts that the importance of programming languages will decline; models will write programs across languages, translate among themselves, and in the future, humans will be more concerned with explaining why AI organizes code in a certain way rather than rigidly adhering to a particular language.


· He even mentioned a more radical direction: conceptually, AI may not only output code, but also directly output lower-level binary code or model weights.


4. This AI Investment Cycle Resembles the 2000 Internet Bubble, but the Underlying Supply-Demand Structure Is Different


· Reviewing 2000, he emphasized that the collapse was largely not because the Internet "didn't work," but because telecom and bandwidth infrastructure was overbuilt and fiber and data centers were deployed prematurely, followed by a long digestion period.


· He believes that today we can indeed see concerns of "overbuilding," but the current main investors are cash-rich large companies like Microsoft, Amazon, Google, rather than highly leveraged fragile players.


· He specifically pointed out that money invested in GPUs today usually turns into revenue quickly, unlike the large amount of capacity that sat idle in 2000.


· He also emphasized that what we are using today is a "sandbagged" version of the technology: because GPUs, memory, and data centers are in short supply, the models' full potential has not been unleashed.


· In his view, the real constraints in the coming years are not only GPUs but also CPUs, memory, network, and the overall chip ecosystem bottleneck.


· He set AI scaling laws alongside Moore's Law, arguing that both not only describe empirical regularities but also continue to mobilize capital, engineering effort, and industry coordination.


· He mentioned a very unusual but important phenomenon: as software optimization speeds up, certain older-generation chips may even have more economic value than when they were first purchased.


5. Open Source, Edge Inference, and On-Premises Operation Are Not Sidelines but Part of the AI Competitive Landscape


· Marc Andreessen made it clear that open source is very important, not only because it's free, but because it "teaches the whole world how it's done."


· He described the release of open-source projects like DeepSeek as a kind of "gift to the world" because code + paper will rapidly disseminate knowledge, raising the entire industry's baseline.


· In his narrative, open source is not just a technical choice; it may also be a geopolitical and market strategy: different countries and companies will adopt different open strategies based on their own business constraints and influence goals.


· He also emphasized the importance of edge inference: in the coming years, centralized inference may not get cheap enough, and many consumer-grade applications may be unable to sustain high cloud inference costs over the long term.


· He mentioned a recurring pattern: models that today seem "impossible to run on a PC" often can actually run on a local machine a few months later.


· In addition to cost, factors driving local execution include trust, privacy, latency, and use cases: wearable devices, door locks, and portable devices are more suitable for low-latency, on-device inference.


· His assessment was very direct: almost everything with a chip may carry an AI model in the future.


6. AI's Real Challenge Lies Not Only in Model Capability but Also in Security, Identity, Money Flows, Organization, and Institutional Resistance


· On the security front, his assessment was very sharp: almost all potential security bugs will be easier to discover, and in the short term, there may be a period of "computer security catastrophe."


· However, he also believes that coding agents will scale up the ability to patch vulnerabilities; the future way to "protect software" may be to let bots scan and fix it.


· Regarding identity, he believes "proof of bot" is infeasible because bots will only get stronger; the truly viable direction is "proof of human," combining biometric identification, cryptographic verification, and selective disclosure.


· He also raised an often-overlooked issue: if agents are to transact in the real world, they will eventually need money, payment capabilities, and even some form of bank account, card, or stablecoin-like infrastructure.


· At the organizational level, borrowing the framework of managerial capitalism, he believes AI may re-strengthen founder-led companies, because bots are good at reporting, coordination, paperwork, and much of the rest of "managerial work."


· However, he does not believe society will accept AI quickly and smoothly: he cited professional licensing, labor unions, dockworker strikes, government departments, K-12 education, and healthcare as examples of the many institutional decelerators in the real world.


· His assessment is that both AI utopians and doomsayers tend to overlook one thing: just because a technology is possible does not mean that 8 billion people will immediately adapt to it.




