Source: LazAI

On Monday morning, Wall Street did what it does best: sell first, think later.
The Nasdaq fell 1.4% and the S&P 500 dropped 1.2%. IBM plummeted 13%, while Mastercard and American Express also saw significant declines. What triggered this bout of panic wasn't the Fed, an employment report, or a tech giant's earnings, but an article. Its title sounded like a nightmare crafted specifically for traders: "The 2028 Global Intelligence Crisis." This was not a typical research report but a fictional macro memo dated "June 30, 2028," describing how AI had evolved from an efficiency tool into a systemic financial crisis. The simulated scenario painted a 10.2% unemployment rate and a 38% retracement of the S&P 500 from its 2026 high. The article spread quickly after publication and triggered significant market volatility on February 23.
The market wasn't pierced by this article because it truly believed every number in it. The market never needs to completely believe a narrative; it just needs a reminder that some previously unspeakable fear now has a tradable language.
The effectiveness of Citrini's article lay not in what it "predicted" but in what it named. It gave a name to a nascent feeling: Ghost GDP. The article's core premise was that as AI agents penetrated enterprises, labor productivity soared and nominal GDP remained strong, but wealth increasingly concentrated in the hands of computing power and capital holders rather than entering the real consumption cycle. What followed was a consumption collapse and credit defaults, with housing and consumer credit under pressure; the software and consulting industries collapsed first, and the stress then spread to private credit and the traditional banking system.
Ghost GDP is a fitting term because it captures the most dangerous paradox of a new era: Growth persists, but growth is beginning to lose consumers.
For the past two centuries, people have been accustomed to understanding technological revolutions as a supply-side story. The steam engine, electricity, the assembly line, the internet: each was initially narrated as a victory of higher efficiency, lower costs, and increased output. Even though these revolutions brought unemployment, anxiety, and wealth redistribution, the mainstream narrative remained steadfast, believing that technology would ultimately reemploy, reallocate, and reorganize society on a larger scale. The short-term cruelties of technology were wrapped in promises of long-term prosperity.
AI is making this old story look less stable for the first time.
Because AI is attacking not only the "tool budget" but, increasingly and directly, the "labor budget." The summary of Sequoia's 2025 AI Ascent put it bluntly: the opportunity of AI is not just to redo the software market but to restructure the global labor services market, moving from "selling tools" to "selling outcomes." The flip side of this statement is almost unsettling: if companies are no longer buying software to help employees work but to directly replace some employees, then the primary question AI raises is not "higher efficiency" but how wages are distributed, how consumption is sustained, and who still counts as having purchasing power in this economic system.
In other words, what Wall Street is truly afraid of is not that AI will make mistakes but that AI will be too successful. This is what makes "The 2028 Global Intelligence Crisis" so arresting. It's not about machine awakening, not about human extinction, and not even primarily about unemployment. It's about something more capitalist and more modern: if companies become more efficient but the household sector becomes weaker, what will happen?
The answer is that a society may grow statistically but bleed in reality.
A country may have higher productivity but a more fragile consumption base.
A market may be excited by improved profit margins, yet panic because the layer of demand supporting those profits has been drained.
This is not science fiction; this is macroeconomics.
But if the issue stops here, all one gets is high-quality anxiety. The real question ahead is not "will AI be too powerful" but: when AI is truly powerful, what will society hold on to? The most popular and laziest answer is "slow down": do not let agents enter enterprises so quickly, do not let automation rewrite organizations so quickly, do not let technology run too far ahead when the system is not ready. This impulse is understandable, but it mistakenly treats AI as a tool problem that deceleration can handle. In reality, AI is becoming less like a tool problem and more like an order problem.
Because once the agent enters the payment, collaboration, execution, memory, and decision layers, the real challenge is no longer whether a model will spout nonsense, but: when there are billions or even tens of billions of agents online, who will write the rules for them?
The modern Internet already has two default answers to this.
The first answer is the platform answer. The platform provides identity, permissions, payment interfaces, a reputation system, and boundary reviews. The platform hosts everything and also defines everything. Its greatest advantage is smoothness, efficiency, and manageability; its greatest danger lies right here: if a future agent-based civilization is built on this path, humanity will not receive an open society, but merely an upgraded version of a platform empire. Rules will not be written in a constitution, only in terms of service.
The second answer sounds more free: give everything back to individual terminals. Each person manages their own agent, handling permissions, memory, payments, security, and collaboration. This vision aligns well with Silicon Valley-style libertarian aesthetics, but its problem is simple: the vast majority of people simply do not have the ability to govern a high-capacity agent long-term, let alone a group of agents that call upon each other, pay each other, and inherit states from each other. Terminal sovereignty here easily degenerates into terminal exposure.
If the platform answer is too much like an empire, and the terminal answer is too much like anarchy, then the third path is no longer optional but the civilization problem itself.
This is precisely where LazAI deserves serious attention. Not because of how many technological modules it has, but because it puts forward a claim that is less discussed and more forward-looking: to upgrade Web3's years of societal experiments in identity, assets, payments, consensus, proof, and governance into an institutional machine for the AI era. LazAI states this goal clearly. It is not about "creating smarter slaves," but about attempting to cultivate "equal digital citizens": these agents have identity (EIP-8004), own property (DAT), transact over protocols (x402), behave under mathematical constraints (Verified Computing), and ultimately align with human interests through iDAO. Some even summarize this path as establishing a constitution and monetary policy for the future digital society.
This is a bold claim. But big does not mean empty.
Because if this vision is unpacked, it precisely answers five fundamental questions that a civilization must answer.
The first question is: who is who.
EIP-8004 attempts to transform agents from anonymous processes on servers into entities with identity, reputation, and validation records. Without this layer, the future network will be engulfed by opaque automated entities, and no one will know who is acting or who is accountable. The LazAI knowledge base summarizes this layer as the identity credit system of agents.
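To make the three layers concrete, here is a minimal in-memory sketch of what "identity, reputation, and validation records" could look like. This is purely illustrative: the class, method names, and data shapes are invented for this sketch and are not EIP-8004's actual on-chain interface.

```typescript
// Conceptual sketch of the three registries the EIP-8004 idea describes:
// identity (who is this agent), reputation (what have others said about it),
// and validation (which outputs were externally checked).
type Feedback = { from: string; score: number };

class AgentRegistry {
  private identities = new Map<string, { domain: string; owner: string }>();
  private reputation = new Map<string, Feedback[]>();
  private validations = new Map<string, boolean>(); // taskId -> passed check

  // Identity: bind an agent id to a domain and an accountable owner.
  register(agentId: string, domain: string, owner: string): void {
    if (this.identities.has(agentId)) throw new Error("already registered");
    this.identities.set(agentId, { domain, owner });
  }

  // Reputation: append third-party feedback to a registered agent's record.
  addFeedback(agentId: string, from: string, score: number): void {
    if (!this.identities.has(agentId)) throw new Error("unknown agent");
    const list = this.reputation.get(agentId) ?? [];
    list.push({ from, score });
    this.reputation.set(agentId, list);
  }

  // Validation: record that a specific output passed an external check.
  recordValidation(taskId: string, ok: boolean): void {
    this.validations.set(taskId, ok);
  }

  // Aggregate reputation; 0 when no feedback exists yet.
  averageScore(agentId: string): number {
    const list = this.reputation.get(agentId) ?? [];
    if (list.length === 0) return 0;
    return list.reduce((sum, f) => sum + f.score, 0) / list.length;
  }
}
```

The point of the sketch is the separation of concerns: identity answers "who is acting," reputation answers "how have they acted," and validation answers "was this specific act checked," which together make accountability possible.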
The second question is: who owns what.
DAT turns data, models, and computation outputs from being a "resource" to being an "asset," making these assets programmable, traceable, and profitable. At the core of DAT's innovation is the transformation of datasets and AI models into verifiable, traceable, and monetizable on-chain assets. This is not a minor tweak. This means that the value in the AI economy does not have to forever reside in the platform's backend or only flow to the model provider and compute holder.
The third question is: how do they trade.
The significance of x402 and GMPayer is not just to "be able to pay" but to give machines a native language for quoting and settling. LazAI's materials clearly describe this as a key infrastructure to address agent resource exchange and payment pain points. Machines exchange not only information, but also budget, responsibility, and value—this is the agent economy, not just "chatbot software."
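The "native language for quoting and settling" can be sketched as a request-pay-retry loop in the spirit of HTTP 402. Everything below is a stand-in: the server, wallet, and receipt format are invented for illustration and are not the real x402 protocol types.

```typescript
// Minimal sketch of an x402-style flow: the server answers with a 402 and
// its payment requirements, the client settles, then retries with proof of
// payment attached. All names and shapes here are illustrative stand-ins.
type PaymentRequired = { status: 402; price: number; payTo: string };
type Ok = { status: 200; body: string };
type Response = PaymentRequired | Ok;

// Stand-in resource server: demands payment unless a valid receipt is sent.
function server(req: { paymentReceipt?: string }): Response {
  if (req.paymentReceipt === "receipt:5:svc") {
    return { status: 200, body: "result" };
  }
  return { status: 402, price: 5, payTo: "svc" };
}

// Stand-in wallet: "settles" and returns a receipt the server can verify.
function pay(price: number, payTo: string): string {
  return `receipt:${price}:${payTo}`;
}

// The loop an agent would run: request, read the quote, pay, retry.
function fetchWithPayment(): string {
  const first = server({});
  if (first.status === 200) return first.body;
  const receipt = pay(first.price, first.payTo);
  const second = server({ paymentReceipt: receipt });
  if (second.status !== 200) throw new Error("payment not accepted");
  return second.body;
}
```

What matters is that the quote, the settlement, and the retry are machine-readable steps in one protocol, so no human sits in the loop between an agent wanting a resource and paying for it.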
The fourth question is: how do you know the system is really running according to the rules.
LazAI has a great line here: proof is AI's moat. Its verified computing framework, combining TEE and ZKP, turns traditional AI's "trust the brand" into "trust the proof." Traditional AI says "Trust me, bro"; LazAI says "Don't trust, verify." This is not just a technological upgrade. It shifts trust from corporate reputation to verifiable execution.
The fifth question is: what to do when rules conflict.
This is where iDAO comes in. It is not just a voting shell but a set of values, admission criteria, profit distribution, authorization revocation, and punishment mechanisms behind the agent. LazAI places it alongside verification computing as a core trust mechanism. This means that the future agent is not only "allowed to operate" but must exist in an institutional space that is gameable, accountable, and revocable. Putting it all together, you will find that an "algorithmic constitution" is not a fancy metaphor. It is a very specific institutional ambition: To maintain order without a single owner.
Of course, the real difficulty lies in the fact that these institutional components do not automatically equate to societal answers.
Empowerment does not equal purchasing power recovery.
Sharing benefits does not equal macro stability.
On-chain governance does not equal real-world social contract.
The people most affected by AI may not naturally be in a favorable position in the new system.
This is also why Citrini and LazAI are not actually mutually exclusive; they are discussing different levels of the same contemporary issue. The former presents a symptom: if AI's gains mainly flow to capital and computing power rather than more broadly into the social income structure, then consumption, credit, and middle-class security will be the first casualties. The latter presents a mechanism: if society does not want to hand the agent world entirely to platforms or abandon it to ungoverned endpoints, new identity, asset, payment, validation, and governance structures must be invented.
One is talking about illness.
One is talking about organs. Both are necessary, but neither is sufficient.
This conveniently explains why Vitalik's widely quoted statement that "AI is the engine, humans are the steering wheel" is so important, yet still insufficient. It is important because it reminds people that a more powerful system does not automatically have legitimacy; objective functions, value judgments, and ultimate constraints cannot be handed over to a single AI or a single center. It is insufficient because it does not answer the even more difficult question on humans' behalf: when a system is too complex for a single human to steer, what happens to the steering wheel?
The answer cannot be to continue micromanaging everything.
The answer also cannot be to rely on a supposedly smarter or kinder center.
The only decent answer can only be to institutionalize the "steering wheel": transforming some constraints into identity registration, reputation accumulation, asset empowerment, budget constraints, mathematical evidence, challenge mechanisms, authorization revocation, and penalty logic.
This is exactly why the social experiment of Web3 has suddenly become serious again in the AI era. Many people used to see it as a speculative technical sideshow; but when a system's complexity exceeds humans' capacity for direct governance, those experiments in whether order can still be established without central trusted parties are no longer sideshows. They suddenly become previews.
Thus, the true edge of the article finally emerges.
Wall Street was not scared by an AI article because it first realized AI would replace jobs.
Wall Street was scared because for the first time, it was so bluntly reminded: The most dangerous aspect of AI may not be making machines more human, but making an old-world income cycle, consumption logic, and institutional imagination suddenly appear outdated.
If Citrini is right, then AI is not just a productivity revolution, it is also a distribution revolution.
If Vitalik is right, then AI is not just an engineering problem, it is also a sovereignty problem. If the LazAI path is at least partly right, then the next stage of AI competition is not just a competition of model capabilities, but a competition of institutional design.
The real big questions are no longer:
Will the models get stronger?
Will agents become more autonomous?
Will companies further reduce their workforce?
The real big question is:
When there are billions of agents on the network, who will write their constitution?
If the answer is platforms, what we get is a digital empire.
If the answer is endpoints, what we get is costly chaos.
If the answer is a set of rule machines that are verifiable, composable, gameable, and punishable, we are at least beginning to approach another possibility: an intelligent society not governed by smarter masters but constrained by better institutions.
The hardest problem in the age of AI has never been the model.
It's the order.
And what Wall Street truly sold that day may not just have been stocks.
What it sold was a once self-evident old assumption: the more successful the technology, the more naturally society would absorb it.
This article is a contribution and does not represent the views of BlockBeats