
a16z: When UI Is Not the Product, What Remains of Software's Moat?

Data, Permissions, Business Logic, and Execution Capability
Editor's Note: Over the past two decades, the moat of SaaS has largely been built on top of the UI. Dashboards, fields, approval workflows, and user habits have shaped not just the interface but the organization's way of working and its data order. When AI can directly access data, invoke tools, and execute processes, the stickiness formed by human muscle memory begins to weaken, and the UI is no longer necessarily the core interface of enterprise software.
This does not mean that record systems are losing value, but rather that their defensibility is shifting: from UI and usage habits to data models, permission systems, compliance responsibilities, business logic, execution loops, and multi-party collaboration networks. In the future, truly defensible software may no longer just be a database recording human work, but a system that can capture context, initiate tasks, coordinate intelligent agents, and continuously generate new data during execution.
As software moves towards headless, the core issue of enterprise software also changes: value is no longer just about who owns the data, but about who can organize actions around the data.
The following is the original text:


Last month, Salesforce announced it would open its API and introduce a headless product. Essentially, this means that Salesforce is betting: in the Agent era, its core value no longer primarily comes from the UI, but from the data layer. This is a rather clever repositioning.


However, it should also be noted that from a technical perspective, this release does not seem to bring about much substantive change. The API that Salesforce today repackages as a "headless product" has actually existed for many years. In other words, this is more like a typical Salesforce-style marketing launch.


The core idea of this new product is that Agents can directly access data in the record system without having to go through a UI designed for humans. The traditional role of the UI is to help human users track processes, manage tasks, and drive workflows forward; but after the Agent's intervention, the necessity of this interface layer begins to diminish.


What is truly worth discussing in this release is not just what new product Salesforce has launched, but that it has raised a more fundamental question: if you strip away the UI, open only the underlying database, what is left of a record system? Between it and a Postgres database, a well-designed data schema, and a set of APIs, what is the real difference?


Furthermore, do the classic factors that used to make a record system inherently defensible still hold? Or have new competitive criteria emerged?


In the SaaS era, a record system had a moat because human users lived in its interface for years. The interface carried operational habits, organizational processes, and accumulated data, creating high migration costs. In the Agent era, that advantage is weakening. The truly defensible layers are, on one side, sinking into data models, permission systems, workflow logic, and compliance capabilities, and, on the other, moving up to network effects, proprietary data generation, and real-world execution capability.


As software becomes headless, where will the moat ultimately shift to?



UI Used to Be the Product Itself


The so-called System of Record (SoR) refers to the authoritative source of truth for a certain type of business data. It is where the "official version" of customer relationships, employee records, or financial transactions resides, and is the core system where other tools read data from and write data back to. CRM is the record system for revenue-related data, HRIS is the record system for personnel-related data, and ERP is the record system for fund and finance-related data.


The strength of these systems lies not only in the fact that they store data but in the fact that they eventually become the "living version" on which the entire organization relies to operate collectively.


Over the past two decades, what Salesforce has actually sold to customers is a set of tools to help sales leaders manage their teams. Dashboards, sales pipeline views, forecasting tools, dynamic feeds: these are the actual products being purchased. Its business model is built on selling seats, and those seats fundamentally provide access to these capabilities. The underlying database is certainly crucial, but in the product experience it appears more as implicit infrastructure.


In other words, what truly drives user stickiness is the UI.


The UI constrains data norms and shapes a common language: leads, opportunities, customer accounts. It gets thousands of sales reps to continuously input data they might not otherwise have entered. In the past, the UI was the mechanism that maintained data consistency and availability. The reason Salesforce is so sticky, to the point where many sales leaders insist on bringing Salesforce to their new companies after switching jobs, is not because its interface is great, but because it has become a form of muscle memory.


However, the Agent is starting to disrupt this model. They no longer need to interact with software through a UI but can directly read from and write to the underlying data. This has also given rise to a new set of tools and alternative solutions that bypass the traditional interface. Salesforce is not the only example: we have also recently discussed how a whole ecosystem more suitable for AI invocation is growing around SAP.


At the same time, Agents that can operate computers will, over time, make preferences, training, undocumented context, and other traditionally human factors matter less. In other words, the requirements for becoming a persistent record system are changing.


The Old Scoring Criteria


Before discussing what the Agent era will change, it is worth stepping back to ask a more precise question: what exactly made a record system sticky in the past?


Several factors, mostly tied to how humans use software and what they prefer, made this software hard to replace. A system's stickiness largely depended on its UI, user habits, human workflows, and the institutional arrangements embedded in organizational processes.


First, how often is it accessed?


A CRM is used every day by the GTM team and several other related departments. It is this high frequency of use that makes it critical infrastructure. The human layer built on top of it—such as team meetings, operational habits, management rhythms, and other organizational inertia developed over many years—is often the most challenging part to migrate. The reason is that it is often not even recognized as "something that needs to be migrated."


Second, is it write-only or read/write?


A truly sticky record system is usually bi-directional. A CRM, for example, is not just a write-only archive: it is continuously read. Every call record, every stage update, every task created is input by a user who usually cares about how that data will be used later.


This bi-directional flow means that any alternative must be able to handle real-time operational data, not just export a historical dataset. There is rarely an absolutely safe switchover point during the migration process. Therefore, once a company goes live, it often remains in the original vendor system for a long time.


In contrast, Applicant Tracking Systems (ATS) are often closer to "write-only" systems. Once a candidate is hired or rejected, the reasons for a company to revisit this data are relatively limited.


Third, how many undocumented SOPs are there?


The truly key business context is often not written in any wiki but is rather embedded in the workflow rules built over the years by administrators and system integrators.


Take the example of a sales system; this undocumented context may include: enterprise-level transactions over $100,000 requiring VP approval; transactions in the EMEA region needing privacy review; discounts for strategic customers can only bypass financial approval at the end of a quarter.
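Rules like these only become legible to software, or to an Agent, when they are encoded explicitly. A minimal sketch of what those three example rules might look like as code; all field names, thresholds, and the rule logic are illustrative, not drawn from any real CRM:

```python
from dataclasses import dataclass

@dataclass
class Deal:
    amount: float        # deal size in USD
    region: str          # e.g. "EMEA", "AMER"
    strategic: bool      # flagged as a strategic customer
    quarter_end: bool    # whether we are at the end of the quarter

def required_approvals(deal: Deal) -> list[str]:
    """Return the approval steps this deal must pass before closing."""
    steps = []
    # Enterprise deals over $100,000 require VP approval
    if deal.amount > 100_000:
        steps.append("vp_approval")
    # EMEA deals need a privacy review
    if deal.region == "EMEA":
        steps.append("privacy_review")
    # Strategic-customer discounts bypass finance review only at quarter end
    if not (deal.strategic and deal.quarter_end):
        steps.append("finance_review")
    return steps
```

In a real deployment these rules live scattered across workflow builders and integrator-written automations, which is exactly why they are so hard to export.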


This context often determines whether work can move forward promptly and without violating critical processes. Migrating a system means unraveling every automation rule; otherwise the organization may simply lose part of its institutional memory.


Fourth, how complex are the internal or external dependencies?


The core issue is: how many internal systems, team processes, or external stakeholders rely on this record system?


Internal connectivity refers to how many downstream software or workflows depend on it. External connectivity refers to external entities such as auditors, accountants, regulatory bodies, etc., needing direct access to its data. ERP is a classic example.


Whether internal or external, the higher the connectivity, the more complex the relationships that need to be unraveled and rebuilt during migration.


Fifth, from a compliance standpoint, how critical is the data?


The core issue here is simple: is this system compliance-critical?


Compliance-critical systems like payroll, ERP, and HR data must provide a legally defensible source of truth and have strict administrator access controls. Any migration may require auditors and regulatory bodies to be directly involved. This significantly increases their stickiness.


Sales data and customer support tools like Zendesk sit at the other end of the spectrum. The business cares about continuity and context, but a data migration or a new person gaining access usually does not immediately trigger regulatory risk.


Not all record systems carry equal switching costs. Comparing ATS, CRM, and ERP along these dimensions makes the differences clear.


ATS is a workflow tool serving a specific process, revolving around recruitment. Once a candidate is hired or rejected, the related records mostly become write-once data. Its integration scope is narrower, and its user base is smaller, more concentrated.


ERP, on the other hand, sits at the opposite end. The general ledger itself is an audit trail, with accountants, auditors, and regulatory bodies becoming direct stakeholders in the migration process.


Replacing an ATS is painful but still manageable. Replacing a CRM is like performing open-heart surgery. Replacing an ERP, on the other hand, is like performing open-heart surgery on a patient while they are running a marathon.



Traditionally, record systems have not really leveraged proprietary data or network effects as moats; typically, the workflow itself has been enough to create barriers. To some extent, the combination of tools and networks has been more of a consumer business phenomenon; historical Systems of Record (SoR) did not follow this path.


Proprietary Data. Many record systems have accumulated a significant amount of customer data but have not truly leveraged this data deeply, and in many cases, contract terms do not allow them to do so. Therefore, even though CRMs have rich datasets and theoretically could aggregate data from different customers to generate cross-customer insights, they have never done so in a truly meaningful way. Of course, products like Salesforce's Einstein have made some attempts.


Network Effects. Ideally, network effects should have been the strongest moat for record systems; for example, a CRM becomes more valuable because software vendors can find buyers within it. However, like data, the network effects of historical record systems have always been weak, almost non-existent.



If the UI disappears, what is left of the software after the Agent arrives?


An Agent does not need a browser. What it needs are APIs, context, instructions, and the ability to take action. Two things make all of this scalable today: first, LLMs now have sufficiently strong reasoning capabilities, so an Agent can read context, make plans, select tools, take action, and review results without human intervention in most tasks; second, MCP has standardized the way tools are accessed, providing a common interface for Agents to invoke external capabilities.
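To make the tool-access point concrete, here is a minimal Python sketch in the spirit of MCP, not the actual MCP SDK: tools are registered under a name with a description and a handler, and the agent runtime invokes them by name with structured arguments after the model selects one. The tool name, its arguments, and the stubbed data are all hypothetical:

```python
from typing import Callable

# Hypothetical tool registry: each tool is exposed with a name,
# a description the model can read, and a handler the runtime calls.
TOOLS: dict[str, dict] = {}

def register_tool(name: str, description: str, handler: Callable[..., dict]) -> None:
    TOOLS[name] = {"description": description, "handler": handler}

def call_tool(name: str, arguments: dict) -> dict:
    """What an agent runtime does after the model picks a tool."""
    if name not in TOOLS:
        return {"error": f"unknown tool: {name}"}
    return TOOLS[name]["handler"](**arguments)

# Example tool: look up an opportunity record (stubbed data, no real backend).
register_tool(
    "get_opportunity",
    "Fetch an opportunity from the record system by id",
    lambda opportunity_id: {"id": opportunity_id, "stage": "negotiation"},
)
```

The point of the standardization is that the Agent never sees a UI: the registry's names and schemas are the entire interface.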


An Agent with MCP access can now execute, in milliseconds and at scale, actions previously performed by human users on the platform, without needing a browser. Given enough context, an Agent that can operate a computer may even interact directly with existing software interfaces, without necessarily needing an API.


Simply put, software buyers now have three paths:


First, continue using existing systems and overlay Agents on them.
By using the CLIs and APIs of existing systems, one can use either the vendor's native Agent product, such as Salesforce's Agentforce or SAP's Joule, or build custom Agents on top. For now, we assume the APIs are complete and accessible, and set aside the complexity that headless operation may bring in practice.


Second, build a fully self-developed record system.
Enterprises can build their own data model, operational logic, permission system, audit trail, system integrations, and their own Agent stack from scratch. This path will likely lean on third-party Agent development tools and database tooling.


Third, purchase AI-native alternatives.
Enterprises can also purchase software designed for the Agent era from the ground up. These products emphasize machine readability, treating Agent orchestration as a first-class capability rather than patching AI functionality onto existing systems. These products may also be headless.


So, what aspects of the old scoring criteria will be retained?


Factors driven by human behavior and preference will gradually diminish: access frequency, read-write bidirectionality, and other indicators tied to human muscle memory. Agents may weaken "muscle memory" as a moat, but they will not eliminate the moat of operational logic and business context. In a sense, they may make these even more important, because for Agents to act safely they must rely on clear definitions of rules, permissions, and processes.


Undocumented SOPs will remain important in the short term.
The institutional logic ingrained in workflow rules within the organization is exactly what Agents need to correctly execute tasks on your behalf. At the same time, this is also the most difficult part to rebuild. At least for now, it cannot be cleanly exported, especially in cases where some processes still require human involvement. However, capturing context is becoming easier; as Agents replace more manual labor, the importance of this factor will gradually decrease.


Connectivity remains difficult to unravel and will extend deeper.
The meaning of connectivity is changing. It is no longer just about aligning with human work, but about maintaining connections between traditionally siloed functions and pieces of software.


A CRM Agent needs to connect data and context from different stages such as sales, billing, customer success, etc. If your platform also serves as a node for Agents from multiple external organizations to transact, with buyers, sellers, partners all interacting through it, then the dependencies will deepen further.


When existing vendors bolt Agents on top, the basic objects and logic of different underlying systems may struggle to cooperate smoothly; an enterprise that relies only on a self-built database and a set of Agents will face similar issues.


Compliance-critical data remains important.
Data involving regulators, regulatory risk, or legal risk still requires a single trusted source of truth. If customers already trust the existing product, they are less likely to switch systems.


Take compensation and accounting data as an example. An Agent may indeed need access to this data, but a business is usually unlikely to choose to self-build and maintain such a system internally for the long term.


In a fully Agent-native world, one of the most challenging issues is: Which Agents are authorized to do what? Who do they act on behalf of? How are these actions audited? If a record system could serve as the identity and permission layer for interaction among Agents, it would play a structurally irreplaceable role. The barrier here is not only what data it holds but the trust framework it enforces.


Looking ahead, for AI-native startups, a new set of factors will become increasingly important and will determine their defensibility.


First, how difficult is it to rebuild this record system?


Data will become more important on several levels.


In the short term, the key is how easy or difficult it is to extract and rebuild a record system's underlying data. AI is making this easier, with a set of tools now helping users with such migrations and rebuilds.


In the short term, incumbent vendors can and likely will make this harder: they can make APIs hard to use, restricted, incomplete, or uneconomical, or provide no APIs at all. But as extraction tools advance, and especially as computer-operating Agents grow more capable, data reconstruction will keep getting easier.


At the same time, new companies are reconstructing a richer dataset from emails, calls, voice Agents, and internal documents. AI removes roughly 80% of the cost of rebuilding a record system. What separates a useful entry point from a true replacement is the remaining 20%: edge cases, approval workflows, compliance requirements, and niche-scenario workflows.


Second, do you have truly meaningful proprietary data?


Data itself will also become more valuable.


Truly defensible data is not the data you import but the data your product uniquely catalyzes. We often call this a "data walled garden": the data is either proprietary, subject to regulatory constraints, or requires ongoing updates. A software vendor that invests heavily in collecting authoritative and comprehensive data will have a significant advantage over generic vendors or competitors lacking this type of data.


Data has another crucial dimension: whether it relies on actions generated within the product.


The best companies don't just store data entered from elsewhere. They generate their own data footprint by being part of the process, such as observing behavior, response rates, time patterns, process outcomes, industry benchmarks, anomaly patterns, and Agent execution traces.


The key is: data is now the context.


Third, do they have command over the action layer?


In the old world, storing records was sufficient by itself. But in the new world, Agents will take direct action, and defensibility may shift toward products that can close the loop: from taking action, to capturing outcomes, to using feedback to optimize future decisions.


For ERP, this may include approving expenses, triggering payroll, verifying invoices, sending notifications, and more. Products that close the loop are more defensible because they are embedded in the execution process, not just the observation layer. They generate unique data, improve continuously with use, and become harder to replace because removing them would break workflows.
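Such a closed loop can be sketched in a few lines: act, record the outcome, and let accumulated outcomes tune the next decision. The invoice-approval scenario, thresholds, and adaptation rule below are purely illustrative assumptions, not drawn from any real ERP:

```python
import statistics

class InvoiceApprovalLoop:
    """Minimal execution-loop sketch: act, capture the outcome,
    and use accumulated outcomes to tune future decisions."""

    def __init__(self, auto_approve_limit: float = 1_000.0):
        self.auto_approve_limit = auto_approve_limit
        self.outcomes: list[tuple[float, bool]] = []  # (amount, was_valid)

    def act(self, amount: float) -> str:
        """Take action: approve small invoices, escalate the rest."""
        return "auto_approved" if amount <= self.auto_approve_limit else "escalated"

    def record_outcome(self, amount: float, was_valid: bool) -> None:
        """Capture what actually happened, then adapt."""
        self.outcomes.append((amount, was_valid))
        self._adapt()

    def _adapt(self) -> None:
        # Feedback: if every invoice reviewed so far turned out valid,
        # raise the auto-approval limit toward the median amount seen.
        if len(self.outcomes) >= 5 and all(ok for _, ok in self.outcomes):
            median = statistics.median(a for a, _ in self.outcomes)
            self.auto_approve_limit = max(self.auto_approve_limit, median)
```

The loop itself, not the stored records, is what generates the unique data (outcomes) that makes the system improve with use.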


Of course, as more context accumulates, and edge cases are more thoroughly handled, the value here will further rise.


Fourth, does it encompass real-world execution elements?


Some business models are intertwined with real-world operations that are not fully automated. The most obvious examples are companies with operational networks, such as DoorDash. Historically these were not record systems, but they are instructive here.


More broadly, any company that can extend software loops to services, fulfillment, logistics, on-site operations, or payment processes has a defensibility different from pure SaaS. These companies not only store records or recommend actions; they dispatch personnel, move goods, or perform specific services.


For entrepreneurs, this means opportunities might lie in markets where software is increasingly making decisions, Agents are increasingly coordinating processes, but the last mile still requires real-world execution. For example, vertical software tied to on-site services is a typical direction.


Fifth, are there network effects?


Historically, most record systems had weak network effects because they were primarily internal software. But in the Agent era, if a system embeds multi-party workflows, network effects can become far more important.


If a system mediates repetitive interactions between multiple parties, such as buyer-seller, employer-employee, company-auditor, supplier-customer, payer-service provider, adding each new participant may increase the network's value to the next participant.


One way is shared workflow collaboration: the product becomes the space where both parties in a process transact, exchange context, and handle exceptions.


Another way is through benchmarking and intelligence: where the system can present industry norms, anomalies, and action recommendations based on patterns observed in the network, reinforcing the value of data mentioned earlier.


A third way is trust and standardization: once counterparties begin to rely on the same rails for approvals, handoffs, compliance, or payments, the product is no longer just a database but the market's collaborative infrastructure itself, and far harder to replace.


Sixth, how strong is the buyer's technical capability?


In a world where theoretically anyone can self-build an agent, the actual development capabilities of different buyers still vary greatly. Especially in vertical industries and functional buyers who have not had strong internal engineering resources in the past, the probability of them building, maintaining, and continuously improving databases, workflow logic, agent stacks, and governance layers themselves is still very low.


Cost is equally important here. DIY may theoretically reduce software licensing costs, but often shifts expenses to implementation, maintenance, and internal complexity.


This means that there are still real opportunities in categories where operations are complex but technical supply is insufficient. For example, in manufacturing, construction back-office, industrial processes, on-site service workflows, and accounting.


Other factors are equally important and will gradually become table stakes for software.


For instance, ontology needs to change. The idea of "self-built databases" underestimates the value carried by the object model itself. Existing software is built for dashboards, reports, and human users, capturing objects in the workflow such as opportunities, work orders, candidates, and so on.


But in the Agent era, schemas need to capture reasoning, actions, state tracking, exception handling, task delegation, and cross-system collaboration. The native object model may no longer be opportunities, work orders, and candidates, but tasks, intents, threads, policies, or outcomes.
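A hedged sketch of what such an Agent-native object model might look like. Every name here (Intent, Task, TaskState, the trace field) is a hypothetical illustration of the article's point, not any real product's schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class TaskState(Enum):
    PENDING = "pending"
    RUNNING = "running"
    BLOCKED = "blocked"   # waiting on an approval or an external system
    DONE = "done"

@dataclass
class Intent:
    """What the requester wants, independent of how it gets done."""
    goal: str              # e.g. "renew contract for account 42"
    requested_by: str      # a human or agent identity

@dataclass
class Task:
    """A unit of Agent work: state tracking, delegation, and a trace."""
    intent: Intent
    state: TaskState = TaskState.PENDING
    assigned_agent: Optional[str] = None
    trace: list[str] = field(default_factory=list)  # reasoning/action log

    def delegate(self, agent_id: str) -> None:
        self.assigned_agent = agent_id
        self.state = TaskState.RUNNING
        self.trace.append(f"delegated to {agent_id}")
```

Note what the schema captures that a dashboard-era object would not: who asked, which agent is acting, where execution stands, and a trace of what happened along the way.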


Likewise, the permission system needs updating. It is no longer just about managing human users but about managing Agents: who can do what, through which Agent, under what policy, requiring which approvals, leaving which audit trails, and how rollback and exception handling are performed.
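A minimal sketch of such an Agent-level permission check with an audit trail. The grant fields, decision strings, and class names are illustrative assumptions, not any real system's API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Grant:
    agent_id: str        # which agent
    action: str          # what it may do, e.g. "approve_expense"
    on_behalf_of: str    # the human principal it acts for
    needs_approval: bool = False   # require a human sign-off first

class AgentGovernor:
    """Authorize agent actions against explicit grants and log every decision."""

    def __init__(self, grants: list[Grant]):
        self.grants = grants
        self.audit_log: list[dict] = []

    def authorize(self, agent_id: str, action: str, principal: str) -> str:
        for g in self.grants:
            if (g.agent_id, g.action, g.on_behalf_of) == (agent_id, action, principal):
                decision = "needs_approval" if g.needs_approval else "allowed"
                break
        else:
            decision = "denied"   # default-deny: no grant means no action
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id, "action": action,
            "principal": principal, "decision": decision,
        })
        return decision
```

The audit log is as important as the decision: it is what lets auditors answer, after the fact, which Agent did what on whose behalf.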


Of course, all of this is inseparable from cost issues, such as how much it costs to build and maintain agents and databases, and how high the API access costs are. This brings us back to several core questions: how difficult is data reconstruction, how many dependencies are there, and how deeply embedded the system is.


So, what is the conclusion?



As existing software vendors move towards headless, they are actually making an implicit bet: the data layer will still be the core source of value. In certain categories, especially in heavily regulated fields like financial services, this bet may still hold for some time, and the headless process may proceed more slowly.


But for software entrepreneurs, as incumbents decouple from the interface, the questions of how to compete with them and how to build software with long-term defensibility are changing.


The next generation of record systems is beginning to take a different shape: they are no longer just data warehouses that record human work, but are more agent-centric — able to capture context, proactively initiate work, and record the data trails generated during execution.


Furthermore, the most interesting companies will extend into the realm of real-world execution: coordinating field workers, logistics providers, service teams, and physical assets, or sitting between multiple parties, becoming an intermediary layer for multi-party collaboration.


These companies will blend multiple business models from the old world. And the core of traditional record systems, which is data, will gradually recede into the background, becoming the underlying infrastructure that supports the entire system.


[Original Article Link]



