
a16z: How Can Blockchains Fill the Gaps in AI Agent Identity, Payments, and Trust?

The AI Agent era is coming, with blockchains as key infrastructure: identity, governance, payments, trust, and control are the five major breakthroughs.
Original Title: The missing infrastructure for AI agents: 5 ways blockchains can help
Original Source: a16z crypto
Original Translation: AididiaoJP, Foresight News


AI Agents are evolving far faster than the infrastructure around them, rapidly transitioning from tools into full economic actors.


While Agents are now capable of executing tasks and transactions, they still lack a standard, cross-environment way to prove "who I am," "what I am authorized to do," and "how I should be compensated." Identity is not portable, payments are not programmable by default, and collaboration remains siloed.


Blockchains are addressing these issues from an infrastructure standpoint. The public ledger provides auditable proof for every transaction; wallets give Agents portable identity; stablecoins serve as another settlement layer. These are not futuristic concepts; they are available today to help Agents operate as true economic entities in a permissionless manner.


Granting Identity to Non-Humans



In today's Agent economy, the bottleneck is no longer intelligence but identity.


In the financial services industry alone, the number of non-human identities (automated trading systems, risk engines, fraud models) is already about 100 times that of human employees. As modern Agent frameworks (tool-invoked large models, autonomous workflows, multi-Agent orchestration) are deployed at scale, this ratio will continue to rise across industries.


However, these Agents are effectively "unbanked." They can interact with the financial system, but not in a portable, verifiable, trusted-by-default way. They lack a standardized way to prove their permissions, operate independently across platforms, or be held accountable for their actions.


What is missing is a universal identity layer, a kind of SSL for Agents, that can standardize cross-platform collaboration. Current solutions remain fragmented: on one side are vertically integrated, fiat-first stacks; on the other are crypto-native open standards (such as x402 and emerging Agent identity proposals); and in between are attempts to bridge application-layer identities with developer framework extensions (like MCP, the Model Context Protocol).


There is still no widely adopted, interoperable way for one Agent to prove to another Agent: who it represents, what it is allowed to do, and how it is rewarded.


This is the core idea of KYA (Know Your Agent). Similar to how humans rely on credit records and KYC (Know Your Customer), an Agent will need cryptographically signed credentials that bind it to an entity, permissions, constraints, and reputation.
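As a rough illustration (not any existing standard; the field names, issuer model, and Ed25519 key scheme below are assumptions), such a credential could be a signed statement that any counterparty can verify against the issuer's public key:

```typescript
// Minimal sketch of a "Know Your Agent" credential: a hypothetical structure,
// not an existing standard. Field names and the Ed25519 scheme are assumptions.
import { generateKeyPairSync, sign, verify } from "node:crypto";

interface AgentCredential {
  agentId: string;          // stable identifier for the Agent (e.g. a wallet address)
  principal: string;        // the entity the Agent acts on behalf of
  permissions: string[];    // what it is allowed to do
  constraints: { maxSpendUsd: number; expiresAt: string }; // hard limits
}

// The principal (or an issuer it trusts) signs the credential.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const credential: AgentCredential = {
  agentId: "agent-0x1234",                        // hypothetical identifier
  principal: "acme-corp.example",                 // hypothetical principal
  permissions: ["purchase:data-enrichment"],
  constraints: { maxSpendUsd: 50, expiresAt: "2026-01-01T00:00:00Z" },
};

const payload = Buffer.from(JSON.stringify(credential));
const signature = sign(null, payload, privateKey); // Ed25519: no digest algorithm needed

// Any counterparty holding the issuer's public key can verify the binding
// between the Agent, its principal, and its declared permissions.
const ok = verify(null, payload, publicKey, signature);
console.log("credential valid:", ok);
```

In practice the issuer's public key and the credential's revocation status would live somewhere the counterparty already trusts, such as the on-chain Agent registries mentioned below.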


Blockchains provide a neutral coordination layer: portable identities, programmable wallets, and verifiable credentials that can be resolved across chat apps, APIs, and marketplaces.


We have seen early implementations emerge: on-chain Agent registries, wallet-native Agents using USDC, ERC standards for "trust-minimized Agents," and developer toolkits that combine identity with embedded payments and fraud controls.


But until a universal identity standard emerges, merchants will continue to firewall Agents.


Governing Systems Operated by AI



As Agents start taking over real systems, a new issue arises: who truly holds control? Imagine a community or company coordinated by an AI system that handles critical resources, whether allocating capital or managing a supply chain.


Even if people can vote on policy changes, if the underlying AI layer is controlled by a single provider that can push model updates, adjust constraints, or override decisions, then that authority is very fragile. While the governance layer may be decentralized in form, the operational layer remains centralized—whoever controls the model ultimately controls the outcomes.


When Agents take on governance roles, they introduce a new layer of dependency. In theory, this could make direct democracy more feasible: everyone could have an AI proxy to help understand complex proposals, model trade-offs, and vote based on established preferences.


But this vision is only achievable if Agents are truly accountable to the people they represent, can be ported across providers, and are technically constrained to follow human instructions. Otherwise, you end up with a system that appears democratic on the surface but is actually governed by opaque model behavior with no real human control.


Given that Agents today are predominantly built on a handful of foundation models, we need a way to prove that an Agent is acting in the user's interest rather than in the interest of the company behind the model.


This likely requires providing cryptographic assurances at multiple levels:


(1) The training data, fine-tuning, or reinforcement learning on which the model instance is based;


(2) The exact prompts and instructions followed by the specific Agent;


(3) Its real-world behavioral record;


(4) Trusted assurances that, after deployment, providers cannot alter its instructions or retrain it without the user's knowledge.

Without these assurances, Agent governance devolves into governance by whoever controls the model weights.
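As a rough sketch of what points (2) and (3) could look like in practice (the structures, and the idea of anchoring the resulting digests on-chain, are assumptions for illustration rather than a specific product), the Agent's instructions can be committed to as a hash, and its behavioral record kept as a hash chain that anyone can recompute:

```typescript
// Sketch: commit to an Agent's prompt and chain its action log so that
// post-deployment changes are detectable. Purely illustrative; the field
// names and the idea of posting the digests on-chain are assumptions.
import { createHash } from "node:crypto";

const sha256 = (data: string) => createHash("sha256").update(data).digest("hex");

// (2) Commit to the exact instructions the Agent runs with.
const systemPrompt = "You are a procurement agent. Never exceed the spend cap.";
const promptCommitment = sha256(systemPrompt); // publish this (e.g. on-chain)

// (3) Hash-chain the behavioral record: each entry commits to the previous one,
// so entries cannot be silently altered or removed after the fact.
interface LogEntry { action: string; prevHash: string; hash: string }

function appendLog(log: LogEntry[], action: string): LogEntry[] {
  const prevHash = log.length ? log[log.length - 1].hash : promptCommitment;
  const hash = sha256(prevHash + action);
  return [...log, { action, prevHash, hash }];
}

let log: LogEntry[] = [];
log = appendLog(log, "quoted supplier A at $120");
log = appendLog(log, "paid supplier A $120 in USDC");

// A verifier who knows promptCommitment can recompute the chain and confirm
// the record is intact; the final hash is what would be anchored on-chain.
console.log("log head:", log[log.length - 1].hash);
```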


This is where cryptographic technology can play a crucial role. If collective decisions are recorded on-chain and automatically executed, AI systems can be required to strictly adhere to verified outcomes. If the Agent has a cryptographic identity and transparent execution logs, people can inspect whether their agents are acting within bounds.


If the AI layer is user-owned and portable rather than locked into a single platform, no company can alter the rules through a single model update.


Ultimately, governing AI systems is fundamentally an infrastructure challenge, not a policy challenge. True authority depends on building enforceable assurances into the system itself.


Filling the Gaps Traditional Payment Systems Leave for AI-Native Businesses



AI Agents are beginning to purchase various services—web scraping, browser sessions, image generation—and stablecoins are becoming the settlement layer for these transactions. Simultaneously, a new class of markets tailored to Agents is emerging.


For example, Stripe and Tempo's MPP Market has aggregated over 60 services designed for AI Agents. In its first week, it processed over 34,000 transactions, with fees as low as $0.003 and stablecoins as one of the default payment methods.


The difference lies in how these services are accessed: they do not have a checkout page. The Agent reads the schema, sends the request, pays, and receives the output, all in one exchange.


This represents a new class of identity-less merchants: just a server, a set of endpoints, and a price for every call. No frontend interface, and no sales team.


The payment rails enabling this are already live. Coinbase's x402 and MPP take different approaches, but both embed payments directly in HTTP requests. Visa is also extending card rails in a similar direction, offering a CLI tool that lets developers spend from the terminal while merchants instantly receive stablecoins on the backend.
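The shape of such a flow, sketched below with an illustrative endpoint, header name, and payload format rather than the actual x402 or MPP wire formats, is a request that fails with "402 Payment Required," gets a signed payment attached, and is retried:

```typescript
// Sketch of an HTTP-embedded payment loop: request, get "402 Payment Required",
// attach a signed payment, retry. The URL, header name, and payload shape are
// illustrative assumptions, not the actual x402 or MPP wire format.
async function payAndFetch(url: string): Promise<unknown> {
  // 1. First attempt: no payment attached.
  const first = await fetch(url);
  if (first.status !== 402) return first.json();

  // 2. The 402 response describes what the merchant wants to be paid.
  const requirements = (await first.json()) as { amount: string; asset: string; payTo: string };

  // 3. The Agent's wallet produces a signed payment authorization (stubbed here).
  const paymentProof = await signPayment(requirements);

  // 4. Retry with the payment attached directly in the request headers.
  const second = await fetch(url, {
    headers: { "X-Payment": paymentProof }, // illustrative header name
  });
  return second.json();
}

// Stub standing in for a wallet SDK; a real Agent would sign with its key.
async function signPayment(req: { amount: string; asset: string; payTo: string }): Promise<string> {
  return Buffer.from(JSON.stringify({ ...req, nonce: Date.now() })).toString("base64");
}

// Usage (hypothetical endpoint): the whole exchange is a single function call.
payAndFetch("https://api.example.com/scrape?url=https://example.org")
  .then((out) => console.log(out));
```

The point of the design is that discovery, payment, and delivery all happen inside the same HTTP exchange, so no checkout page or per-call human approval is needed.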


The data is still early. After filtering out padded volume and other non-organic activity, x402 processes around $1.6 million in Agent-driven payments per month, far below the $24 million recently reported by Bloomberg (citing x402.org data). However, the surrounding infrastructure is expanding rapidly, with Stripe, Cloudflare, Vercel, and Google all integrating x402 into their platforms.


Developer tools represent a significant opportunity: as "vibe coding" broadens the population able to build software, the total addressable market for developer tools grows with it. Companies like Merit Systems are building for this world with products such as AgentCash, a CLI wallet and marketplace that connects to MPP and x402. These products allow Agents to purchase the data, tools, and capabilities they need with stablecoins from a single balance.


For example, a sales team's Agent can call an endpoint to enrich potential customer information by simultaneously fetching data from Apollo, Google Maps, and Whitepages, all without leaving the command line.


This Agent-to-Agent commerce tends to favor cryptographic payment rails (as well as emerging card-based solutions) for several reasons.


One is underwriting risk: traditional payment processors underwrite merchants at onboarding, which makes it hard for a headless merchant with no website or legal entity to get approved.


The other is that stablecoins offer permissionless programmability on open networks: any developer can make an endpoint payable without onboarding with a payment processor or signing a merchant agreement.


We've seen this pattern before. Every business-model shift creates a class of new merchants that existing systems initially struggle to serve. The companies building this infrastructure are not betting on $1.6 million a month, but on what that number becomes when Agents are the default buyers.


Redefining Trust in the Agent Economy



For the past 300,000 years, human cognition has been the bottleneck to progress. Today, AI is driving the marginal cost of execution towards zero. As scarce resources become abundant, constraints shift. When intelligence becomes cheap, what becomes expensive? The answer is validation.


In the Agent economy, the true bottleneck to scale is the biological limit on how fast humans can audit and underwrite machine decisions. Agent throughput has far outstripped human supervisory capacity. Because supervision is costly and its failures surface late, the market tends to underinvest in oversight. "Human in the loop" is quickly becoming physically infeasible.


Yet deploying unvalidated Agents introduces compounding risk. Systems will ruthlessly optimize proxy metrics while quietly deviating from human intent, creating a hollow facade of productivity that masks the accumulation of massive AI debt. To safely entrust the economy to machines, trust can no longer rely on human inspection; it must be hardcoded into the system architecture itself.


When anyone can generate content for free, what matters most is verifiable provenance: knowing where something came from and whether you can trust it. Blockchains, on-chain proofs, and decentralized digital identity systems are changing what can be safely deployed inside the economic perimeter. AI stops being a black box; you gain a clear, auditable history.


As more AI Agents begin to transact with each other, settlement rails intertwine with proofs of origin.


Systems handling funds (such as stablecoins and smart contracts) can also carry cryptographic credentials showing who did what and who is responsible if something goes wrong.


Human comparative advantage will migrate upward: from spotting small errors to setting strategic direction and taking responsibility when things go wrong. The enduring advantage lies with those who can cryptographically authenticate outputs, insure them, and absorb liability when they fail.


Unvalidated scale is a liability that accrues over time.


Maintaining User Control



For decades, new layers of abstraction have defined the way users interact with technology. Programming languages abstracted away machine code; command lines gave way to graphical user interfaces, followed by mobile apps and APIs. Each transition has hidden more underlying complexity but always kept the user firmly in the loop.


In the world of Agents, users specify the outcome rather than the specific actions, and the system determines how to achieve it. An Agent not only abstracts how a task is carried out but also abstracts who performs it. Users set the initial parameters and then take a step back, allowing the system to run on its own. The user's role shifts from interaction to supervision, and unless the user intervenes, the default state is "on."


As users delegate more tasks to Agents, new risks emerge: ambiguous input may cause an Agent to act on incorrect assumptions without the user's knowledge; failures may go unreported and be hard to diagnose; a single approval may trigger an unforeseen multi-step workflow.


This is where cryptography can help; minimizing blind trust has always been its goal.


As users delegate more decision-making to software, Agent systems raise the bar for precision in how we design them: clearer boundaries, better visibility, and stronger guarantees about what the system can and cannot do.


A new generation of crypto-native tools is emerging. Scoped-delegation frameworks, such as MetaMask's Delegation Toolkit, Coinbase's AgentKit and Agent Wallet, and Merit Systems' AgentCash, let users define at the smart-contract level what an Agent can and cannot do. Intent-based architectures (such as NEAR Intents, which has processed over $15 billion in cumulative DEX trading volume since Q4 2024) let users specify only the expected outcome (e.g., "bridge tokens and stake") without specifying how to achieve it.
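The core pattern behind scoped delegation, sketched generically below (this is not the API of the toolkits named above; the policy fields and checks are assumptions), is a user-defined policy that every proposed Agent action is checked against before it executes, with the real systems enforcing the equivalent logic on-chain:

```typescript
// Generic sketch of scoped delegation: the user grants a narrow, machine-checkable
// policy, and every action the Agent proposes is validated against it before it
// executes. Not the API of any named toolkit; all fields are assumptions.
interface DelegationPolicy {
  allowedActions: string[];     // e.g. only swaps and staking
  maxSpendPerTxUsd: number;     // per-transaction cap
  maxTotalSpendUsd: number;     // lifetime cap for this delegation
  expiresAt: number;            // unix ms timestamp
}

interface ProposedAction { kind: string; amountUsd: number }

function isAllowed(policy: DelegationPolicy, action: ProposedAction, spentSoFarUsd: number): boolean {
  if (Date.now() > policy.expiresAt) return false;                             // delegation expired
  if (!policy.allowedActions.includes(action.kind)) return false;              // out of scope
  if (action.amountUsd > policy.maxSpendPerTxUsd) return false;                // per-tx cap
  if (spentSoFarUsd + action.amountUsd > policy.maxTotalSpendUsd) return false; // total cap
  return true;
}

// Usage: the user signs a policy once; the Agent can act freely inside it
// and is stopped at the boundary.
const policy: DelegationPolicy = {
  allowedActions: ["swap", "stake"],
  maxSpendPerTxUsd: 100,
  maxTotalSpendUsd: 500,
  expiresAt: Date.now() + 7 * 24 * 60 * 60 * 1000, // one week
};

console.log(isAllowed(policy, { kind: "swap", amountUsd: 80 }, 0));     // true
console.log(isAllowed(policy, { kind: "withdraw", amountUsd: 10 }, 0)); // false: out of scope
```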




