Video Title: OpenAI President Greg Brockman: AI Strategy, AGI, and the Super App
Video Author: Alex Kantrowitz
Translation: Peggy, BlockBeats
Editor's Note: This article is translated from a conversation between OpenAI President and Co-Founder Greg Brockman on the Big Technology Podcast. The show has long focused on the changes in AI, the tech industry, and business structure, serving as an important window into frontline Silicon Valley observations.

In this conversation, Brockman did not dwell on the model's ability itself but instead pushed the question further: as AI's capabilities have been largely validated, how will the industry choose its path, reshape product forms, and absorb the systemic impact it brings? The conversation revolves around OpenAI's product strategy, the upcoming "Super App," and its judgment on AI entering the "takeoff stage."
This conversation can be understood from three aspects.
First, Path Convergence.
From video generation to inference models, from multi-line progress to active choices, OpenAI's choices are not a simple technical superiority judgment but a response to real-world constraints—computing power has become a core bottleneck. With limited resources, the technical roadmap is starting to converge on two leverage-rich directions: personal assistants and solving complex problems. This also means that AI's competitive logic is shifting from "what can be done" to "what to do first."
Second, Form Reconstruction.
The proposal of the "Super App" is fundamentally a leap in product form. AI is no longer a collection of scattered tools but a unified entry point: it understands context, calls tools, performs tasks, and continuously accumulates memory in different scenarios. From ChatGPT to Codex, AI is gradually taking over entire workflows, and the human role is also shifting from an executor to a scheduler—setting goals, assigning tasks, and supervising.
Third, Rhythm Inflection.
If the past two years were a stage of climbing capabilities, what is happening now is a "takeoff." On the one hand, model capabilities have risen from "assisting about 20% of work" to "covering about 80% of tasks," directly triggering a restructuring of workflows; on the other hand, AI is participating in its own evolution (using AI to optimize AI), overlaying chips, applications, and enterprise-side coordination to form a continuously accelerating closed loop. AI is no longer a single-point technology but is starting to become a key engine driving economic growth.
However, at the same time, another set of issues is emerging in sync: public distrust, employment uncertainty, controversy surrounding data centers, and the boundaries of security and governance. In response, Brockman's answer is not solely internal to the technology. He emphasizes two points: first, risks cannot be addressed through "centralized control" and instead require the establishment of a societal infrastructure around AI akin to the power system; second, individual capabilities are undergoing a transformation—the crucial question is no longer "can you use the tool" but "can you achieve your goals with AI."
If the past question was "what can AI do," the current question has shifted to what you need to do when AI starts doing most things for you.
The following is the original text (lightly edited for readability):
AGI Has Entered the "Clear Path" Stage: Greg Brockman (OpenAI Co-Founder) believes that based on GPT's reasoning model, there is now a clear path to AGI, expected to be achieved in a few years, but the form will remain "jagged."
Note: AGI (Artificial General Intelligence) refers to general artificial intelligence, signifying AI systems that possess equivalent or even superior capabilities to humans across most cognitive tasks. In contrast to current "narrow AI" (such as image recognition, recommendation algorithms), AGI emphasizes cross-task generality and transferability.
Strategic Convergence: From Multi-Line Exploration to Two Core Applications: Under computational constraints, OpenAI is concentrating resources on "personal assistants" and "complex problem-solving," rather than advancing all directions simultaneously (such as video generation).
"Super Apps" Will Become the Form of AI Entry: Chat, programming, browsing, and knowledge work will be integrated into a unified system, transitioning AI from a tool to an "execution layer," with users becoming "dispatchers."
Pivotal Shift: AI Begins Taking Over Workflows Rather Than Assisting: Model capability has surged from "completing 20% of tasks" to "handling 80%," forcing individuals and businesses to restructure how work is done.
Computational Power Becomes the Core Bottleneck and Competitive Focus: AI demand far exceeds supply, with the future constraint not lying in model ability but in compute resources, making data centers and infrastructure critical variables.
The "Takeoff" of AI is Happening: Self-accelerating technology (AI optimizing AI) combined with industry synergy (chips, applications, enterprises) is propelling AI from a tool to an engine of economic growth.
The Greatest Risk Lies Not in Technology, but in Governance and Use: Security issues cannot be addressed by a single entity; an open ecosystem and social infrastructure must jointly shoulder the responsibility.
Core Individual Capability is Transforming: Future competitiveness lies not in "execution" but in "setting goals + managing AI systems"; proactively using AI will become a foundational skill.
Alex (Host):
Today, we have with us Greg Brockman, Co-founder and President of OpenAI, discussing the most promising opportunities in AI, how OpenAI plans to seize these opportunities, and the concept of "super apps." Greg is here in our recording studio today.
Greg Brockman (OpenAI Co-founder & President):
Great to see you, thanks for having me.
Alex:
At this moment, OpenAI is pausing the advancement of video generation to focus resources on a "super app" that will integrate business and programming scenarios. Externally (including myself), it feels like OpenAI has already taken a lead on the consumer end but is now reallocating resources. What is happening?
Note: In March 2026, OpenAI announced the closure of its video generation product Sora (including the app and API) and halted related commercial efforts.
Greg Brockman:
Over the past period, we have been developing this deep learning technology, aiming to validate whether it can truly deliver the positive impact we have always envisioned—whether it can be used to build applications that truly help people and improve lives.
At the same time, we have also been pursuing another path: deploying this technology. On one hand, this is to support business operations, and on the other hand, it is to accumulate real-world experience early on, preparing for the moment when the technology truly matures.
And now, we have reached a new stage. We see that this technology is indeed viable. We are transitioning from "benchmarking" and some rather abstract capability demonstrations to a new phase—we must put it into the real world, have it engage in actual work, and continue to evolve through user feedback.
So I lean more towards understanding this change as: a strategic shift driven by a technological phase change.
This is not to say that we are shifting from the "consumer end" to the "enterprise end." More accurately, we are asking a question: in a situation of limited resources, which applications should we prioritize the most? Because we can't do everything.
Which applications can truly be implemented, collaborate with each other, and bring about real-world impact? If you list out all directions, the consumer end can be broken down into many types: such as a personal assistant, a system that truly understands you, aligns with your goals, and helps you achieve your life goals; as well as creation and entertainment; and many other possibilities. On the enterprise end, if you look from a higher level, it can actually be abstracted into one thing: you have a complex task, can AI help you complete it?
For us, the current priority is very clear, with only two things at the forefront: first, a personal assistant; second, AI that can help you solve complex problems.
The issue is: with our current computing power, we can't even fully support these two things. Once you add more application scenarios, it's simply impossible to cover them all. So this is actually a reality check: technology is rapidly maturing, the impact is about to explode, and we must make choices, select the most important direction to truly bring it to life.
Alex:
You mentioned a metaphor before, saying that OpenAI is a bit like Disney: it has a core capability, and then it can expand into different scenarios. Disney has Mickey Mouse, which can be used in movies, theme parks, Disney+. OpenAI's "core" is the model, which can be used for video generation, as an assistant, for enterprise applications.
But now, it seems that you are no longer taking this "comprehensive expansion" path, but rather have to make choices. Is that right?
Greg Brockman:
Actually, I think this metaphor is even more applicable now. But the key point is: technically, Sora (video model) and GPT (inference model) actually belong to two different technical branches. The way they are built is completely different.
The issue is, at the current stage, advancing both of these technical branches simultaneously is very difficult, especially with limited resources. So the choice we have made is to focus the main resources on the GPT path at this stage.
Of course, this does not mean that we are giving up on other directions. For example, in the field of robotics, we are still continuing relevant research. But robotics itself is still in the early stages and has not yet entered a truly explosive mature stage.
On the other hand, in the coming year, we will see AI truly take off in the knowledge work domain.
And it is important to emphasize: the GPT path is not just about "text." For example, bidirectional speech interaction (speech-to-speech) is also part of this technical path, making AI more accessible and practical. These capabilities are essentially within the same model framework, adjusted in different ways.
However, if you go down two completely different technical branches, it is difficult to sustain in the long term under limited computing power. Computing power is limited because the demand is too high. Almost after every model release, people want to do more with it.
Alex:
So why didn't you focus on the "World Model" path? For example, a video model that needs to understand the relationship between objects, which is also crucial for robotics. Moreover, Sora's progress has been very rapid. Why did you ultimately choose to bet on GPT?
Note: The "World Model" focuses on perception and physical intuition, with the core idea of enabling AI to understand "how the world operates," not just learning "surface patterns of data." Such models are often used to describe systems like Sora: they are not only generating images or videos but also modeling relationships between objects (such as humans, cars, light), the continuous changes in time (evolution between frames), and basic physical laws (such as motion, occlusion, and collision). In contrast, GPT belongs to language and reasoning models, more focused on abstract cognition and task execution capability.
Greg Brockman:
The biggest problem in this field is actually too many opportunities.
We found early on at OpenAI that as long as an idea is mathematically sound, it usually works and can achieve good results. This demonstrates the underlying power of deep learning, which can abstract generating rules from data and transfer them to new scenarios. This can be applied to various fields such as world models, scientific discoveries, and programming.
But the key is: we need to make choices.
There has always been a debate about how far text models can go. Can they truly understand the world? I think we now have the answer to this question: text models can reach AGI.
We have seen a clear path, and this year, even stronger models will emerge. Internally at OpenAI, one of our biggest pains is how to allocate computing power—this issue will only get worse, not better. So fundamentally, it's not a matter of "which path is more important," but a matter of timing and sequence.
Now, some applications that we once thought were distant are starting to become within reach. For example, solving unresolved physics problems. We recently had a case where a physicist had been studying a problem for a long time, handed the problem to a model, and 12 hours later, we had a solution. He said it was the first time he felt like a model was "thinking." This problem may even be one that humans can never solve, but AI did.
When you see something like this, your only choice is to double down, triple down. Because it means we can truly unleash tremendous potential.
So for me, this is not a competition between different directions, but rather what is OpenAI's mission? How do we bring AGI to the world? How do we make it truly beneficial for everyone? And we have seen that path, we know how to advance it.
Alex:
Well, I do want to go back to the next-generation models you mentioned earlier, but I want to follow up on this question first.
Earlier this year, I had a chat with Demis Hassabis of Google DeepMind. Interestingly, he said that for him, the closest thing to AGI was actually their image generator called Nano Banana.
Note: Demis Hassabis is one of the key figures driving AI from research to breakthrough applications. He co-founded DeepMind, which developed AlphaGo and famously defeated the world champion in Go in 2016, a landmark event in the history of AI development.
His reasoning was: whether it's an image generator or a video generator, to generate such images and videos, fundamentally, you must understand the interaction between objects, at least have some level of understanding of how the world operates.
So does this imply a potential risk? Is this a big bet — if that's the case, will OpenAI continue to double down on another branch of technology and miss out?
Greg Brockman:
If that's really the case? I have two answers.
First, of course, that's a possibility. That's how this field is; you ultimately have to make choices, you have to bet. And OpenAI has been doing this from the beginning: we have to assess, believe in the path to AGI, and then push highly focusedly along that path. Just like adding random vectors, the result may eventually be close to zero; but if you align all vectors, they can drive you clearly in one direction.
However, the second point is that image generation is actually also a very popular capability in ChatGPT, and we are still continuously investing and prioritizing advancement in this area. The reason we can do this is that it doesn't actually belong to the "world model" or "diffusion model" technical branch; it is actually built on top of the GPT architecture. So even though it faces a different data distribution, at a more fundamental core technology level, it's still the same thing.
And this is precisely one of the most amazing things about AGI: sometimes, very different-looking applications—such as speech-to-speech, image generation, text processing, and the application of text itself in various scenarios like scientific research, programming, personal health information, and more—can actually all be accommodated within the same technical framework.
So, from a technical perspective, one thing that I and the company have always been thinking about is how to unify our efforts as much as possible. Because we truly believe that this technology will bring a holistic improvement and may even elevate the entire economic system.
And the scale of this thing is too vast. We certainly can't do everything, but we can complete our part.
Alex:
This is what that "general" in Artificial General Intelligence (AGI) means.
Greg Brockman:
Exactly, that's the "G," that really is what it means.
Alex:
Talking about "unification," what will this super app look like in the end?
Greg Brockman:
The super app as I see it—
Alex:
Will it integrate chat, programming, browsing, and things like ChatGPT all together, right?
Greg Brockman:
Yes. What we want to create is an end-user-facing application that allows you to truly experience the power of AGI, that is, its "generality."
If you think about today's chat products, I think they will gradually evolve into your personal assistant, your personal API, a truly AI that considers you. It knows a lot about you, aligns with your goals, is trustworthy, and can to some extent "represent" you in this digital world.
As for Codex, you can think of it as: it is currently a tool mainly built for software engineers, but it is evolving into a "Codex for everyone."
Anyone who wants to create or build something can use Codex to have the computer do what they want. And it's no longer just about "writing software"; it's more like "using the computer" itself. For example, I have it help me adjust my laptop settings. Sometimes I forget how to set up hot corners, so I just have Codex do it, and it actually does it.
This is how a computer should naturally be; it should adapt to people, not make me adapt to it.
So you can imagine an app like this: anything you want the computer to do, you can tell it directly. It will include the ability to "use the computer" and "browse the web," allowing AI to truly operate web pages, and you can also supervise what it's doing. Moreover, whether your interaction is through chatting, coding, or general knowledge work, all these conversations will be unified in one system. AI will have memory and understand you.
This is what we are building.
But to be honest, this is just the tip of the iceberg, the part that is visible above the water. For me, what is truly more important is the unification of underlying technology.
We have mentioned unification at the level of underlying models, but what has really changed in the past few years is this: it's no longer just about the "model" itself; what's more crucial is the "deployment system." In other words, how do models get context? How do they connect to the real world? What actions can they take? How does the feedback loop with users operate as new contexts continually emerge?
Internally, in the past, we actually had multiple implementations of these things, or at least a few slightly different implementations. Now we are consolidating them into one. Ultimately, we will have a unified AI layer, and then, in a very lightweight way, we will point it to different specific use cases.
Of course, you can still create a small plugin, a small interface, specifically for finance or law, but in most cases, you might not even need to because this super app itself will be broad and generic enough.
Alex:
Is this app geared towards both enterprise and personal use cases?
Greg Brockman:
Yes, that is actually its core. Just like a computer, such as your laptop, is it for personal use or work? The answer is both. It is primarily your device, your interface to the digital world. And that is exactly what we want to achieve.
Alex:
So, from a non-business perspective, if I use this super app in my personal life, what would I use it for? How would my life change?
Greg Brockman:
My understanding would be this: In your personal life, it'll start by extending how you currently use ChatGPT.
How do you currently use ChatGPT? People are already using it to accomplish a wide variety of amazing tasks. Sometimes it's as simple as saying, "I need help drafting a speech for a wedding, can you assist?" or "Can you take a look at this idea and give me some feedback?" Or even, "I'm running a small business, can you provide me with some ideas?"
Some of these scenarios are more personal, while others are starting to blur the lines between personal and professional. And my take is: all of these kinds of queries should be something that a super app can handle.
Greg Brockman:
But if you look back at the evolution of ChatGPT, it's been evolving itself.
It used to be stateless, right? For everyone, it was the same AI, starting from scratch each time, almost like talking to a stranger. But if it can remember your past interactions, it becomes way more powerful. If it can tap into more context, it becomes way more powerful too.
For instance, hooking it up to your email, your calendar, truly understanding your preferences, having a deeper set of background info about your past experiences, and then using this to help you achieve your goals. For example, ChatGPT already has a feature called Pulse, which delivers content daily based on its understanding of you.
So at the individual usage level, the super app will encompass all of this and do it deeper and richer.
Alex:
When are you planning to launch it?
Greg Brockman:
A more accurate way to think about it is, over the next few months, we will be progressively driving in this direction. The full vision we're talking about will be delivered on step by step, not all at once; it will roll out in stages.
For example, today's Codex app actually contains two layers: one is a generic agent harness that can use tools; the other is an agent good at writing code.
And this generic harness can actually be used for many other scenarios. You hook it up to a spreadsheet, hook it up to a Word document, and it can help you with knowledge work.
So our first step is to make the Codex app more user-friendly for general knowledge work. Because we've already seen within OpenAI that people have spontaneously started using it this way.
This will be the first step, with many more steps to come.
Alex:
When I was talking to one of your colleagues about Codex yesterday, he mentioned that someone is using Codex for video editing: he had Codex help him process videos, Codex even created a plugin for Adobe Premiere to segment the video and then start editing. Is this the direction you're aiming for?
Greg Brockman:
I particularly love hearing about these use cases. This is exactly the way we hope this system will be used. What's really interesting is that the Codex app was originally designed for software engineers, so its current usability is actually not very high for non-programmers. Because during the setup process, many small issues may arise.
Developers can immediately understand what that means and how to fix it; we're already used to it. But if you're not a developer, when you see these, you might think, "What is this? I've never seen this before."
However, even so, we have seen many people who have never written code before start using it to build websites or do things like what you just mentioned—automating interactions between different software, gaining significant leverage from it. For example, someone in our communications team has integrated it with Slack and email to have it process a large amount of feedback, and it has produced very good summaries and analyses.
So the current situation is: those who are very driven are already willing to overcome these barriers and then reap high rewards from it.
In a sense, the hardest part is already done—we've created a truly smart, capable AI that can actually accomplish tasks.
What we need to do next is the relatively "easy" part: make it truly useful to the general public, gradually breaking down these entry barriers.
Alex:
Looking at the competitive landscape, Anthropic now also has the Claude app, which includes both a chatbot and Claude Code. To some extent, they already have the prototype of their own "super app."
How do you see why Anthropic made this move earlier? And how likely do you think OpenAI is to catch up?
Greg Brockman:
If you rewind the clock 12 to 18 months, we actually always focused on "programming" as a key area and consistently excelled in various programming competitions and other very "pure skill" tests. However, one thing we didn't invest enough in at the time was the last mile of usability.
That is to say, we didn't pay enough attention to this issue: AI is already very smart, able to solve various difficult programming problems, but it has never seen codebases in the real world— and real-world codebases are often messy, far from the "clean" environments it is familiar with.
At that point, we were indeed behind. But starting around the middle of last year, we began to take this very seriously. We specifically formed a team to look into all these gaps, the real-world messiness, and complexity that we had not truly encountered before.
For example, how to build training data? How to set up a training environment? What does it really feel like for AI to "do software engineering"— being interrupted, encountering strange issues, various non-ideal situations, and so on.
I think by now, we have caught up. When users truly compare us with competitors side by side, many people tend to lean more towards choosing us.
Of course, we also know that we have a gap in the front-end experience, and we will address this part. But overall, this has been our focus during this time: not just building a model and then slapping on a product shell; but rather, thinking of it as a complete product from the beginning. While doing research, we are also thinking: how will it ultimately be used? This is a shift that has been happening internally at OpenAI during this time.
So, in my view, we will have a very strong wave of model upgrades next. Just looking at this year's roadmap, I am very excited, there are really a lot of things that can be achieved.
At the same time, we are also very focused on filling the last mile of usability.
Alex:
Since 2022, OpenAI has been like the undisputed leader in this field. Obviously, the competition now is no longer just about test scores. You just used the phrase "we caught up" yourself.
Has the internal atmosphere of the company also changed? In other words, is it not the same feeling of being far ahead in a product like ChatGPT as in the past, but actually entering a real competition.
Some external reports actually show this change— such as internal meetings emphasizing that OpenAI no longer has any "side tasks," and everyone should focus on this core direction. So, what kind of changes have occurred in the internal environment and atmosphere now?
Greg Brockman:
I would say, for me personally, the most unsettling moment at OpenAI was actually after we released ChatGPT.
I remember being at the company holiday party, and there was this sense of "we've made it" in the air. I had never felt that before. My reaction at the time was: No, we're not the people who have made it, we're the underdog.
And we always have been. The competitors in this space are mostly entrenched large companies with more funding, more people, more data, and almost all resources more abundant.
So why is OpenAI able to compete? To some extent, the answer is that we never felt comfortable. We always saw ourselves as the challenger.
In fact, for me, seeing the market really start to take on this competitive dynamic, seeing other competitors emerge and do well, has been a very healthy thing.
Because, in my view, you can never get fixated on where your competitors are. If you're just looking at where they are now, by the time you get there, they've already moved on.
And I feel like, in recent times, it's actually been the other way around: a lot of people have been focused on where we are, and we've been able to continue to push forward. This has given us a sense of alignment and unity internally.
I mentioned earlier that we used to almost treat "research" and "deployment" as two separate things; and now, we really want to integrate them. For me, this is a wonderful thing.
So I would say, the stage we're in now is not a phase where I feel we've ever been "winning for sure," or suddenly in crisis. You know, external perceptions of you tend not to be as good as they say, nor as bad.
I feel overall, we've actually been quite steady. And in terms of core model development, I'm very confident in our roadmap and the research work we've put in. As for the product side, I feel like we have a really good energy now, everyone is coming together to truly deliver these things to the world.
Alex:
You've mentioned multiple times earlier that there will be some very strong new models coming up. So what exactly are they?
The Information reported that you've completed pre-training on "Spud"; and Sam Altman also told OpenAI staff internally that they should see a very strong model within a few weeks. That was a few weeks ago. The team internally believes it could even truly drive economic acceleration and things are progressing faster than many people expected.
So, what is "Spud" exactly?
Greg Brockman:
It's a great model. But I think the focus is not really on a single model.
Our research process is roughly like this: first is pre-training, which is to produce a new base model, and then all further improvements will be built on this base model. And this step often requires a lot of internal teams within the company to put in a huge effort. In fact, for the past 18 months, most of my time has been spent here: mainly around GPU infrastructure, supporting the teams responsible for the training framework, and actually running these large-scale training tasks.
Then comes the reinforcement learning stage. This is where this AI, which has already learned a lot of world knowledge, begins to truly apply that knowledge.
Next is the fine-tuning process. At this stage, you will actually tell it—well, now that you know how to solve problems, go ahead and practice in various different scenarios.
Finally, there is a "last mile" stage concerning behavior and usability.
So, I would see Spud as a new foundation, a new pre-training model. And on it, you can say that our research over the past two years is beginning to truly show results. It's going to be very exciting.
I think what the outside world will ultimately feel is an overall improvement in capability. But for me, this has never been just a one-time release issue. Because as soon as this version comes out, it's actually just an early version of many more advancements to come. We will continue to do more at every stage of this improvement process.
So I think we are more like having an ever-accelerating engine of progress now, and Spud is just a milestone on this road.
Alex:
So, what do you think it can do that today's models cannot?
Greg Brockman:
I think it will be able to solve harder problems and become more nuanced. It will better understand instructions and context.
Sometimes people talk about a feeling called "big model smell"—meaning, when the model is really smarter and more capable, you can clearly feel it. It will follow your intention more closely and better suit your needs.
When you ask a question and the AI doesn't really understand what you mean, that feeling is still very disappointing. You can't help but think: This is something you should clearly be able to figure out on your own.
So I would say, in a sense, this will be a result of the accumulation of many "quantitative changes" leading to a "qualitative change." On the one hand, there will be significant improvements in various metrics; on the other hand, some entirely new scenarios will emerge: previously, you might have been too lazy to use AI because it wasn't reliable enough, but now you would use it without hesitation.
I think this will be a comprehensive change. I am especially looking forward to seeing how it will continue to raise the limit of capability. We have already seen its performance in scenarios like physics research, and I think next, it will be able to address more open-ended problems and span longer timeframes.
At the same time, I am also looking forward to seeing how it will raise the floor of capability—meaning that no matter what you want to do, it will be much more useful than today.
Alex:
But for the average user, feeling this kind of change is sometimes not easy. For example, before the release of GPT-5, there was actually a lot of hype and anticipation; however, when it actually came out, the initial public reaction was somewhat disappointing to some extent. Later, everyone slowly discovered that it was actually very powerful in certain specific tasks.
So for the next generation of models, do you think it will be mainly felt in certain professional scenarios, or will it be a kind of improvement that is more intuitively and universally felt by everyone?
Greg Brockman:
I think the story might be similar. After the model is released, some people will immediately feel that it is a complete transformation compared to what they have seen before. But there will also be some use cases where the bottleneck is not in "intelligence." So if you just make the model smarter, in these areas, users may not immediately feel the difference.
However, over time, I think everyone will eventually feel the change. Because what really changes is: to what extent you start relying on this system.
If you think about how we interact with AI now, everyone actually has a mental model of "what it can do." And this mental model doesn't change quickly. It usually evolves as you gain experience, and then it occasionally does something magical for you, and you suddenly realize: wow, it can actually do this, something I never thought of before.
For example, in scenarios like accessing medical information, we have already seen similar cases. I have a friend who used ChatGPT to explore different treatment options for his cancer. The doctor had previously told him it was late-stage, and there was nothing more that could be done. But he used ChatGPT to research many different ideas and actually found a treatment because of it.
In a scenario like this, the premise is actually: you have to have a certain level of trust in AI's ability to help in this context before you're willing to invest so much effort in extracting value from the system.
So I think what we'll see next is: in any similar application scenario, the thing AI can help you with will become more obvious to everyone.
Therefore, this is not only about the technology itself getting stronger but also about our understanding of the technology changing and catching up with it.
Alex:
So you will increasingly rely on it. Within OpenAI, you are also developing an automated AI researcher, which is said to be launching this fall. So what exactly is that?
Greg Brockman:
I think, from an overall trend perspective, we are now in the early stage of this technology takeoff.
Alex:
What does "takeoff" mean?
Greg Brockman:
Takeoff refers to AI continuously getting stronger along an exponential curve. And part of the reason for this is: we can already use AI to help us improve AI itself, so the entire research process is also accelerating.
But I think that "takeoff" is not just a technological matter; it also signifies the release of real-world impact. The development of many technologies follows an S-shaped curve; and if you look at multiple S-curves over a longer time frame, they will eventually converge into a form of nearly exponential growth.
I think we are currently in such a stage. That is to say, the technology itself is advancing at an increasingly fast pace, and this engine of progress is continuously gaining momentum.
At the same time, in the external world, many tailwinds are forming: chip developers are receiving more resources; many people are working on various applications, trying to embed AI in different scenarios, and seeking the points of convergence between it and various specific needs.
All this energy is constantly accumulating, collectively propelling AI into a "takeoff phase," transforming it from a marginal existence into the primary engine driving economic growth.
And this is not just happening within the walls of our organization. It concerns the whole world, the entire economic system, how to collectively advance this technology, and how its practicality continues to progress.
Alex:
So what will this "researcher" specifically do?
Greg Brockman:
The so-called "Researcher" essentially refers to: as the proportion of tasks AI can take over increases, we should allow it to operate more autonomously.
Of course, there are many aspects behind this that need careful consideration. It does not mean: we release it, let it run on its own for a while, and then come back later to see if it has produced any good results.
I think we will still be very deeply involved in its management. Just like now, if you have a junior researcher and you leave them alone for too long, they are likely to go down a path that doesn't offer much value. But if you have a senior researcher, or someone with a real sense of direction leading them, they may not even need to personally master all the specific operational skills but can still provide continuous feedback on what the person is producing, review it, and provide guidance on the direction: what exactly I hope you will achieve.
So the system as I understand it is a set of mechanisms we are building that will significantly increase the speed of our model output, drive new research breakthroughs, and make these models more useful and usable in the real world. And all of this will happen at an increasingly faster pace.
Alex:
What will it specifically do? Will you directly tell it to "find AGI" and then it will try on its own?
Greg Brockman:
To some extent, I do see it that way, at least in the first sense. But if looked at from a more practical perspective, I would understand it as: taking the entire workflow of one of our research scientists from start to finish and trying to execute it as much as possible in a silicon-based system.
Alex:
Another way to understand "takeoff" is: AI's progress will shift from incremental improvement to accumulating momentum, eventually evolving into an almost unstoppable propulsion process toward a more intelligent intelligence than humans.
Are you worried that, just as things may develop in a positive direction, this progress itself may also get out of hand, deviate?
Greg Brockman:
I think, of course, there will be, that is without a doubt. I believe that to enjoy the benefits of this technology, one must also seriously consider its risks.
If you look at our approach to technical development, you will find that we have put a lot of effort into security and protection. A good example is prompt injection attacks. If you are going to create a very smart, powerful AI that has access to a lot of tools, you certainly want to make sure that it is not led astray or manipulated by someone giving it a strange command.
This is something we've put a lot of effort into, and I think we've achieved very good results. We also have a very strong team responsible for this work.
Interestingly, some of these issues can actually be analogized to humans. Humans are also susceptible to phishing attacks, can be misled, and may act without full context.
We bring these analogies into our own R&D process. Whenever we release a model, develop a model, we always think: how to ensure it truly aligns with human objectives, how to ensure it does indeed help? This is something we care a lot about.
Of course, there are also some larger issues involving the whole world, the entire economy: how will everything change? How can everyone benefit from this technology? These are not just technical issues, nor can OpenAI solve them alone. But yes, I do think often about not only advancing the technology but also ensuring that it can truly bring about a positive impact commensurate with its potential.
Alex:
The issue is, this looks like a race. What's happening within the walls of OpenAI HQ is also quickly replicated by many open-source players. And these players are often much weaker in terms of security boundaries and protective measures.
I remember you said something before, the gist of it was: Creative achievements require many people to get many things right, but destructive results may only require one malicious actor. This is at least the place I'm most concerned about. Because this is clearly a race, and progress is fast. Many of your peers have said if everyone agrees to stop, they are willing to stop too. But now, it seems like there's no sign of this race slowing down at all.
So, is the reward really worth taking on such risks?
Greg Brockman:
I believe the reward is worth it. However, I also feel that such an answer is still too broad and too simplistic.
Since the inception of OpenAI, we have been asking: What future constitutes a good future? How can this technology truly elevate everyone's situation?
You can break this question down into two perspectives. One is a "centralized" view: thinking that to make this technology safe, the best way is for only one entity to develop it. Then there is no competitive pressure, and you can carefully get things right, and when you are ready, decide how to deliver it to everyone. This idea is understandable, but to some extent, it is also a very difficult solution to accept.
And another path, which is also the path we lean towards, is to think from the perspective of "resilience." In other words, to see it as an open system: many participants are driving the development of this technology, but the focus is not only on the technology itself but more on building the social infrastructure around this technology, enabling it to be more securely embraced.
You can think about the development of electricity. Electricity is also produced by many different people and institutions, and it itself carries risks and dangers. However, at the same time, we have built multiple layers of security infrastructure around it: there are electricity safety standards, various usage specifications, regulatory approaches corresponding to different scales. At a very large scale, there are even specialized regulatory requirements. Many people can use electricity in a democratized manner, along with inspectors and a whole set of supporting systems gradually established around the characteristics of this technology.
And I think AI is the same. What we really see is that there must be a broad social discussion around AI. If this technology is really going to arrive and change everyone's life, then people must be involved. It cannot be solely driven and decided by a centralized small group in secret.
So, for me, this has always been a very core issue: In what way should this technology unfold? And what we truly believe in is a "resilient ecosystem" gradually formed around technology development.
Alex:
So, are you saying that we are currently in the process of "taking off," and we are all actually already in it? NVIDIA CEO Jensen Huang recently said he believes AGI has been achieved. Do you agree?
Greg Brockman:
I think AGI has different definitions for different people. And indeed, many people would argue that the technology we have today is already considered AGI.
This can be debated. But I think the truly interesting part is that the technology we have today is still very "rough," with a clear sense of fragmentation.
In many tasks, such as writing code, it is already superhuman. AI can do it, and it has significantly reduced the friction of creating things. But at the same time, there are still some very basic things that humans can easily do but AI still struggles with.
So where do you draw the line? To some extent, it's more like a "feeling," an atmospheric judgment, rather than a question that can be strictly scientifically defined at this moment.
So for myself, I think we are clearly going through that moment. If you showed me these systems today five years ago, I would say: Yes, this is what we were talking about back then. It's just that reality has grown out, looking very different from what we originally imagined. It is different from any form we once envisioned.
So I think we need to adjust our mental models accordingly.
Alex:
So you mean, we're not there yet?
Greg Brockman:
I would say, we're probably at 70% to 80% already. So I think we're actually very close.
And I believe one thing is very clear: in the next few years, we will definitely witness AGI. Its performance may still be somewhat "jagged," not entirely smooth and perfect everywhere. But the lower bound of tasks it can accomplish will be raised very high—almost for any intellectual task you need to perform on a computer, AI can do it.
So now I have to give a somewhat uncertain answer because there is indeed a bit of a "uncertainty principle" in this—you can argue it from different definitions. But according to my personal definition, I think we are almost there. Take one more step forward, and we are absolutely there.
Alex:
What happened in December 2025 exactly. Because that seemed like a turning point, the idea of "letting the machine write code uninterrupted for several hours in a row" suddenly shifted from a theoretical idea to everyone starting to say, "I think I can trust it to keep running on its own for a while."
So what really happened at that time?
Greg Brockman:
After the release of the new model, the percentage of tasks AI could perform jumped from around 20% of your work to 80% in one go. This was an extremely significant shift. Because it was no longer just "a pretty good little tool," but it became: you had to reorganize your workflow around these AIs.
For me personally, I also had a very typical visceral moment. Over the years, I had a test cue: have AI build a website for me. This website was actually one I had hand-built when I was learning to code, taking me several months.
And by 2025, it probably still took four hours and several rounds of prompts back and forth to get something decent. But by December, I asked once, and AI did it once, and it did it very well.
Alex:
So how did these models make this leap?
Greg Brockman:
A big part of the reason is that the base model itself has gotten stronger. OpenAI has been continuously improving its pretraining techniques. And at that point, we saw a hint of what the rest of the year would look like. But at the same time, it wasn't just a single breakthrough point. More accurately, we've been pushing forward on all dimensions of innovation.
One interesting thing about these models is: in some sense, you might feel that they've had these moments of "discontinuity" over and over again; but from another perspective, everything has been a continuous evolution. It didn't suddenly jump from 0% to 80%, but rather from 20% to 80%. So in a way, you could also say that it just got better.
And I think this progress is actually continuing in every subsequent minor version update. For example, from 5.2 to 5.3, I have a closely collaborating engineer who initially couldn't get the model to do the low-level, hardcore systems work he was responsible for; but by the new version, the model could take over his design documents, truly implement them, add metric monitoring and observability, run a profiler for performance analysis, continue optimizing, and ultimately achieve the result he originally hoped to deliver by his own hands.
So I would say, it's more like a process of "incremental progress, and then suddenly everything has changed." But all of this has actually been foreshadowed by the capabilities currently in play. Within a year at the latest, many things, some even much faster, will become extremely reliable.
Alex:
Doesn't this also surprise you? Because I remember not long ago you mentioned in an interview that tools like Codex, an automatic programming tool, were originally only for software developers. But earlier in today's conversation, you said that actually everyone can use these types of tools.
What made you change your mind?
Greg Brockman:
I actually always framed Codex in the context of "writing code." After all, its name has code in it, so it's natural to see it as a tool for programmers. And within OpenAI, many of us are software engineers ourselves, building tools for ourselves, so it was very natural to think in this way.
But as this technology progressed, we began to realize something: the underlying technology we've actually built is mostly not about "code" at all, it's fundamentally about "solving problems."
At its core, it's about managing context, building an execution framework, and thinking about how AI should plug into real work, how to actually get things done. And once this is established, even in a programming context, it suddenly means that anyone can have this capability. Because what you truly have is a system that can do the work for you. As long as you have a vision, a goal to accomplish, and can describe your intent clearly, AI can go and execute, can get things done.
But this will also make you start to ask, why am I only focusing on the "non-programming" or "programming" divide? In fact, there is a lot of work that is essentially just some kind of mechanical skill. Like Excel spreadsheets, like making presentations. If AI already has enough context and sufficient raw intelligence, it can actually do these things very well now.
So, if we just make it more accessible, more user-friendly, it will shift from "Codex is for programmers" to suddenly "Codex is for everyone".
Alex:
And after we saw this clearly visible progress, Silicon Valley quickly saw another almost silent phenomenon emerge, which is Open Claw, right? Or more broadly, the entire tech community is starting to trust AI in the way you just mentioned—like handing over desktop control to an AI robot, or setting up a Mac mini, giving it permissions for email, calendar, files, and then to some extent, letting it "take over life".
Later, OpenAI brought the founder of Open Claw into the company. So could you talk a bit more about this kind of AI that "helps you manage your life"? Bringing in the Open Claw team, is the underlying vision something like that?
Greg Brockman:
I would say, the most core aspect of this technology is: figuring out how it can be useful, how people actually want to use it, what the vision of the intelligent agent is, how it will enter people's lives—these are all very difficult questions.
What I have repeatedly seen in this evolution of technology is that those who are truly willing to deeply engage, full of curiosity, and have a strong imagination, this in itself is a very real capability, and it will become an increasingly valuable capability in the new economy.
The founder of Open Claw, Peter, in my opinion, is such a person; he has a very strong imagination and a strong creative impulse. So in a way, this is related to a specific technology; but in another way, it is not just a technical issue at all. It truly relates to: how do we embed these capabilities into people's lives, find where they truly belong.
So, as a technologist, this is certainly exciting; but as someone who truly cares about delivering practical value to users, we are now investing heavily in this, investing a lot.
Alex:
You recently had an interesting comment on this. You said, when you start having these autonomous AI agents work for you, you will become the "CEO of a fleet of thousands of intelligent agents" who are working for you to achieve your goals, vision, and tasks, and you are no longer deeply involved in the specifics of how various issues are being resolved.
But you also mentioned that, in a sense, this new way of working can make people feel like they are losing the "pulse" of the problem itself.
Greg Brockman:
Is this really a good thing? I think it's a double-edged sword.
So I think what we need to do is, on the one hand, recognize the true power that these tools can bring, and on the other hand, try to mitigate as much as possible the weaknesses they bring. For example, giving people greater leverage, giving people greater agency—if you have a vision, something you want to accomplish, then you can mobilize an entire fleet of agents to do it for you, which is of course very powerful.
But if you think about how the world operates, in the end there has to be someone responsible. Suppose you're building a website and your agent messes things up, ultimately affecting the user, strictly speaking, it's not the agent's fault, it's your fault. So you have to care about it.
I think anyone who really wants to use these tools must recognize: human agency, human responsibility, are core parts of the whole system. How humans use AI is fundamentally important.
So I think the most important point is: as users of these agents—we are also like this within OpenAI—you cannot abdicate responsibility. You can't just say, "AI will take care of everything on its own."
Alex:
Of course. But what you just said about "feeling like you're losing the pulse of the problem" seems to be different from "responsibility."
Greg Brockman:
For me, these two are actually connected. Because the key is: if you're a CEO, but you're too far removed from the details—like if you're leading a team, running a company, but you've lost touch with the front line, that usually doesn't lead to good results. So what I wanted to express just now is not that "humans can finally know nothing" is something worth pursuing.
Of course, some details can indeed be confidently handed over. Like when you hire a general contractor to build your house, there are a lot of details you probably don't need to personally oversee because you trust that the other party will handle it well. But ultimately, if some key details go wrong, you should still care and you should still know.
So here is a very important subtle difference: you can't just blindly say, "I'm willing to lose that sense of grasp on the problem." Instead, we should actively say: I still need to maintain that awareness to truly understand the strengths and weaknesses of the system.
And as you begin to extract yourself from some of the more low-level, more mechanistic transactions, the reason you're able to do that should be because you've already established trust in this system, confirming that it does indeed get things right.
Alex:
Regarding models, I have one final question. You mentioned a path of model evolution: from pretraining, to fine-tuning, to reinforcement learning, making it better at solving problems step by step and being able to perform tasks on the internet.
And now we've reached a stage where the model has learned to use tools through this process. If I understand correctly, what would be the next step in this evolution path?
Greg Brockman:
I think, the world we are in now is a world where machine capabilities are deepening and expanding continuously. Part of it is certainly about tool use, but at the same time, we also need to really make the "tools" themselves good enough. For example, if AI can already do "computer operations" and use desktop systems like humans, then in principle, it can do anything you can do.
But at the same time, we also need to provide a lot of infrastructure-level things for the machine. For example, in an enterprise environment, how do you do identity authentication and authorization management? How do you do audit trails and observability? To catch up with the development of the model's underlying capabilities, a lot of supporting technologies need to be built.
And from an overall direction, I think it will include things like a "very natural voice interface." That is to say, you can have a natural conversation with a computer as you do now, where it can really understand you, do what you need it to do, and provide valuable suggestions.
For example, it will proactively remind you: something you've been progressing on is now stuck, and the issue is here. Or when you wake up in the morning, it will say to you: here is your daily briefing, how much work did your agents progress last night.
Perhaps it is even running a business for you — I think this would be a huge application of this technology. The democratization of entrepreneurship will definitely happen. It will tell you: these areas are problematic; a customer is very dissatisfied now and wants to talk to a real person, you better handle it yourself. These things will happen.
Then, I think the next stage also includes: the upper limit of challenges for humans will continue to be raised by this technology. We are actually already at the forefront of this trend. What excites me the most is almost analogous to AlphaGo's Move 37 — that move is something humans would never have made, it is creative, and it has changed many people's understanding of the game.
This kind of thing will happen in every field. It will happen in science, mathematics, physics, chemistry; it will happen in materials science, biology, healthcare, drug discovery; and it may even happen in literature, poetry, and many other fields. It will unlock new spaces of human creativity and understanding in ways we cannot yet imagine today.
Alex:
But if the model is already as powerful as you say, why hasn't this happened for real until now?
Greg Brockman:
I think there is a "capability lag" at play here—meaning there is still a significant gap between the actual capabilities of the model and how people are currently using it. To some extent, our understanding of what is "inside" the model is still evolving.
So I believe that even if technological progress were to stop from this point on, the world would still undergo a massive change—a computation-driven, AI-driven economy will still arrive.
But at the same time, there is another layer to this: what we are best at right now is training models on tasks that are "measurable." So initially, we started with math problems, programming tasks because these tasks have very clear validators: either the answer is correct or not, making it easy to judge. And over time, the reason we have been able to gradually push the models towards more open-ended questions is by expanding the scope of "what can be validated, evaluated."
AI itself can also help with this. If AI is smart enough, understands the task, and is given an evaluation criterion, it can learn gradually. However, tasks like creative writing, such as "how good is this poem," are difficult to score.
Therefore, in these kinds of scenarios in the past, it has indeed been challenging to get AI to truly learn through continuous trial and feedback. However, all of this is changing, and we already have a pretty clear view of the path ahead.
Alex:
That's quite interesting. Peter Thiel once said something along the lines of: if you are good at math, the impact you might experience in front of these models could be even greater than that of a "good with words" person. And you were also a Math Club member back in the day. Aren't you concerned about this?
Greg Brockman:
I think people tend to see more of what they have lost rather than what they have gained. It's because we have a deep experience of "how I used to do this." For example, I used to participate in math competitions, and now AI can also do math competitions. But the thing is, this was never really about "math competitions" per se, right? That's not the core thing that drives human progress.
If you look at how we work now—sitting in front of one box, typing on another—we didn't live like this a hundred years ago. This is not a natural state, nor is it truly how we should exist in this digital world we've been swept into.
That's not the essence of "being human." What truly matters is being present, living in the moment, and connecting with others.
And I believe what we're about to see is: AI will free up a significant amount of time, allowing humans more opportunities to strengthen connections with each other, to build more bonds between people.
This excites me greatly.
Alex:
Right. So, as you further move towards these more agent-like applications, there's a discussion emerging about whether large training tasks will still be necessary in the future?
Especially when the model is already good enough, it seems like you can just deploy it into the real world and gain a significant improvement through many stages that don't rely on pretraining. And those that truly require massive data center support are mainly for pretraining, in fact.
You've always been in charge of scaling, driving this effort. How do you view this argument?
Greg Brockman:
I think this argument overlooks a very important point in technological advancement. Indeed, every step in the model production pipeline amplifies the effects of each other. So you would want every step to become stronger.
We see that: once pretraining becomes stronger, every subsequent step becomes much easier. This actually makes sense. Because the model is more capable from the start, it learns faster; it also advances quicker as it tries different approaches, learns from its mistakes, and progresses faster with fewer errors because of its stronger foundation.
Thus, the real big change is not that we are shifting from "training a purely closed, self-deriving rational system" to "just letting it make mistakes in the real world." Instead, we realize that we not only need to make the model itself bigger and stronger, but also let it try things, understand how people use it in the real world, and feed that usage feedback back into the training process. This does not diminish the value of continuing to advance that research, nor does it diminish its importance.
I also think there is another change: in the past, we mainly focused on enhancing the raw capabilities during the pretraining phase, but did not emphasize as much the ability during the reasoning or inference stage. And over the past 24 months, a significant shift has been that we are beginning to realize the need for a balance between the two.
In other words, you can have a model with very strong capability, but it also needs to be efficient enough during inference and actual deployment. Because if you're going to do reinforcement learning and truly deploy it in the real world, all of this requires very strong inference efficiency.
This also means that you may not necessarily push the training scale to the theoretically largest possible, because you also have to consider subsequent large-scale usage scenarios.
What you really want is: the optimal point of the product between intelligence level and cost, rather than just optimizing one dimension.
Alex:
If the future mainly shifts towards inference, would you no longer need Nvidia's GPUs as much?
Greg Brockman:
We still absolutely need them.
Alex:
Why?
Greg Brockman:
There are many reasons.
One of them is: no matter how the ratio between training and inference changes, super large-scale training is still something that can only be done by concentrating massive computing power on one problem, and currently there is no alternative way to do this.
So I think what is more likely to happen in the future is: the proportion of computing power on the deployment side will increase significantly; but at the same time, there will still be moments when you have to carry out a particularly large pre-training task, and at that time you still need to concentrate a large amount of computing power.
And I also think that Nvidia's team is really outstanding, the work they do is amazing. So, yes, we work very closely with them.
Alex:
Will there be a day when people start saying, "We have pre-trained enough, the model is already smart enough"?
Greg Brockman:
I think this is a bit like saying: when humans have solved all the problems in front of them, maybe we can say that. But I think that the limit of what we want to achieve is actually much higher.
Over the past 50 years, to some extent, our ambition for many goals has actually diminished. For example, some problems seem very clear-cut—can we ensure that everyone has health coverage? And not just "treat when there's a problem," but truly achieve preventive healthcare, focus on lifestyle, help people early, detect potential risks before a disease occurs. I think we can actually use more intelligent models to truly solve these kinds of problems.
Of course, perhaps there is some level where this issue has been thoroughly addressed, and at that point you might ask: Do I still need a model twice as smart? But at the same time, there will certainly be other issues demanding a higher level of intelligence.
Alex:
Let's talk about the numbers behind building these data centers. Earlier this year, you raised $110 billion. How did the math work out there? Is this money going directly into data centers? How are you thinking about returning this money to investors in the future? Talk about this logic in computation.
Greg Brockman:
I think, fundamentally, this is very simple: our biggest expense right now is compute power. But you can't just look at compute power as a cost center; it's more like a revenue center.
You can think of it as hiring a sales team. How many salespeople are you willing to hire? As long as your product can be sold, as long as you have a mechanism to scale the sales of this product, the more salespeople you hire, the higher the revenue.
And the world we're in right now is that we keep finding we just cannot build compute fast enough to keep up with demand. This, I can feel very concretely right now. We have to make very painful decisions: which features can go live, which features cannot for the time being; where does compute power go first, and where it doesn't.
And I think, as the entire economy shifts towards an AI-driven economy, this situation is going to be much more broadly felt.
What real future problems are going to be: what problems can get that kind of massive compute? How do you scale such that everyone has their own personal AI? How do you get everyone to use systems like Codex?
Right now, there simply isn't enough compute in the world to support these things. So we are preparing for this problem in advance.
Alex:
But this is ultimately a whole new category, right? And you are using a very strong determinism to bet on—such a large amount, almost unseen in the world. When you are creating a new category, how can you be so sure it will eventually stand?
Greg Brockman:
I think there are several components to this.
First, there is actually a historical precedent now. From the moment ChatGPT was released, I remember having a very clear conversation with myself and the team. Someone asked me: How much compute should we buy? I said: All of it. Then someone asked again: No, seriously, how much should we buy? I said: No matter how we build, I know we can't keep up with demand.
And every year since then has proved that point. The challenge is that such hashing power procurements typically need to be locked in 18 months in advance, sometimes 24 months, or even longer. So, you have to make the call way before the machines are actually delivered. This means you have to be incredibly forward-thinking.
And the world we are moving towards is: to date, the bulk of our revenue still comes from consumer subscriptions, which will also remain very important in the future. Of course, we are also creating other revenue streams.
But the emerging, bigger opportunity now is knowledge work.
And this, we've already seen very concretely: almost every company is starting to realize that this technology is actually useful, and if they want to stay competitive, they have to adopt it. You can see that very natural momentum, a lot of software engineers are already using it; and now you're starting to see more widespread proliferation, people within the company incorporating it into various knowledge work scenarios. And the willingness to pay that has emerged in this industry, and the revenue growth you're seeing, is very clear.
This is happening right now. You just need to extrapolate it. And one thing we might see more clearly than others is: we can better see how these models will progress next.
Putting all these things together, you realize: this economy itself is an extremely massive thing, almost unimaginably so. And from now on, the primary driver of growth for this economy will be AI—how well you can leverage AI and how much hashing power you have to drive it.
Alex:
You just mentioned that consumer subscriptions are currently your biggest source of revenue. So, is your judgment that in the future, this will be reversed, and enterprises will become the largest source of revenue?
Greg Brockman:
I think it's now very clear that this "enterprise side" is growing rapidly. Of course, the term "enterprise side" itself is also evolving. Because what it truly points to is: people using AI in productive knowledge work.
And in terms of pricing, I don't think the categories will necessarily be as clear-cut as they used to be. For example, the current usage model of Codex is: if you have a consumer subscription to ChatGPT, you already have access to Codex.
So I don't think the future will be such a distinct divide between B2B and B2C. The more likely scenario is: as a user, you will have a unified entry point—just like your laptop, which is your gateway to the digital world.
And real revenue, fundamentally, will come from here.
Alex:
Dario once said something, and I think he might have been talking about you: Some players have pushed the risk dial too high, and he is very concerned about it. I think he was referring to your massive infrastructure bet. What is your take on this statement?
Greg Brockman:
I disagree. I think we have always been very cautious, and we have indeed seen what is coming next. I believe that even just looking at this year, all those who have truly participated will feel the 'compute constraint.'
And I think we just realized this earlier than others, preparing sooner for how the technology would unfold.
What I have seen instead is that many other participants probably only realized this at the end of last year and then rushed to find compute, only to find that there was almost none left to buy.
So I think it's easy to say things like that. But the reality is, everyone now realizes: this technology is viable, it's here, it's real. Software engineering is just the first clear example of it.
And what truly constrains us is the available computation.
Alex:
He also said that if his prediction deviates even slightly, his company could go bankrupt. Do you face the same risk?
Greg Brockman:
I think there is actually more of a 'trapdoor' here. If you start seriously thinking about the downside—and I think that's a perfectly fair question—you'll find that to some extent, this bet was not really on any one company.
It was really on the entire industry. It's a bet on: Do you believe this technology can be made, and can it deliver the enormous value we see today?
I keep going back to those most direct proofs. Like software engineering—if you are not a software engineer, if you haven’t truly used Codex, it's hard to understand the difference in experience through reading. That difference is actually hard to describe. But I think people will soon truly feel it.
Six months ago, this kind of palpable experience was mostly internal to us; later, there were clear external proofs as well. And in another six months, I think everyone will feel it. And by then, all of us will feel another kind of pain: there are great models out there, but you can’t use them because there is simply not enough compute in the world.
Alex:
Yes, but when we were making predictions for 2026 on the show, there was a discussion at the end of last year. Ranjan Roy was also there, and he said that 2026 would be the year of 'everyone using a smart agent.' My reaction at the time was: I will believe it when I see it with my own eyes and when I actually start using a smart agent myself.
Greg Brockman:
So now, haven't we reached that moment? What do you use it for now?
Alex:
I use it to build some internal tools to help people I work with better synchronize when videos go online, how thumbnails should be done, and things like that. I also pull in some data from YouTube so we can analyze video performance based on factors such as thumbnails. To some extent, this is a set of software that I have customized myself, and if done in a traditional way, I probably wouldn't spend money to buy it at all.
I think this is what makes it interesting right now: Software was originally produced on a mass scale for the general public, but precisely because of this, there are always many places in it that are not tailored to you. And perhaps the change brought about by AI is that it finally allows us to interact with software in a more natural way.
Greg Brockman:
I think that is the key. And one thing I have been constantly thinking about is: The way we build computers today actually pulls us into a digital world.
Think about how much time you spend constantly scrolling content on your phone. Then think about how much time you spend continuously clicking various buttons, trying to connect this system to that system—why do these things have to be done by you yourself? What AI should really do is bring the machine closer to you, making it more tailored to you, and more understanding of what you want to accomplish.
This idea has always been part of our pop culture: you can talk directly to the computer, and then it gets things done for you. And now, this thing is starting to become a reality, really becoming something you can do. The extent of this change is truly amazing, and many times you have to try it yourself to really understand it. So I really feel that we are in a very special moment.
Alex:
Then I wonder, why is the public perception of AI so negative? For example, YouGov data shows that in the United States, three times as many people believe AI will have a negative impact on society compared to those who believe it will have a positive impact.
What do you think is the reason behind this? Are you concerned about AI's public image?
Greg Brockman:
I think one thing we really have to do is: to show the people of this country why AI is beneficial to them. And not just from a macroeconomic standpoint, not just saying it will drive GDP growth and other big words like that, but: how it actually specifically improves their lives.
In fact, I hear very specific stories every day. For example, there's a family whose child has been having constant headaches and some other health issues, but their MRI scan has not been approved. Later, they used ChatGPT to research the symptoms and realized they could actually use this to make a stronger case to the insurance company. They did so, and it turned out the child did have a tumor in their brain. And it was because they got the right information through ChatGPT that the child's life was saved.
That's just one story. There are many, many similar stories. People's lives have been profoundly improved by this technology, and some have even been saved by it. The key is that they have really engaged with this technology in real life.
But I feel like these kinds of stories haven't really been widely shared. I think this is happening in many people's lives, but for some reason, it hasn't really become a mainstream narrative yet.
I've also noticed that popular culture, especially the imagination that has persisted from the 1990s, is very negative about AI, always emphasizing what could go wrong. But once people actually start using AI, they find that it has practical value, that it is helpful.
So I do care a lot about this: we haven't really succeeded in helping people understand why this wave of technology will improve their lives, why it will promote closer connections between human beings.
This is a very important focus for me. And if you broaden your perspective a bit to see why AI is so important, I think it will become a significant source of economic power and national security in the future. It will affect a country's competitiveness. And other countries like China have shown an almost completely opposite direction in AI.
So, yes, I think this is very important. We must face it, and we must really figure out how to ensure that everyone can benefit from the advantages of this technology.
Alex:
But we are also in an extremely unstable moment right now. People are very worried about jobs. Every time I talk to someone about AI, they almost always ask: How long can I keep my job?
And then when it comes to data centers, the public's perception of them is even worse than of AI itself. If you look at these public opinion polls, you'll find that more people believe data centers will have a negative impact on the environment, home energy costs, and the quality of life of surrounding residents, rather than a positive impact.
So we find ourselves at a moment where good jobs are increasingly hard to come by, and people see data centers coming into their community and think, "This thing is neither environmentally friendly nor cost-effective in terms of energy, and it will lower our quality of life."
Are they wrong?
Greg Brockman:
I think there is indeed a lot of misinformation surrounding data centers.
A very typical example is the issue of water usage. If you actually go and look at our facility in Abilene, which is the world's largest, or at least one of the largest, supercomputing facilities, its total annual water consumption is actually equivalent to that of an average household for a year. In other words, the water usage is actually minimal.
But there is a lot of misinformation out there that leads people to believe that these data centers consume a large amount of water resources.
Electricity is a similar situation. We have committed to bearing the costs ourselves and not passing on the pressure of rising electricity prices to residents. This is important, and now the entire industry is beginning to make similar commitments because improving the local community is indeed very important. And when we build a data center, we also truly engage with these local communities to understand what is happening locally and what we can do to help. Data centers bring in tax revenue and create jobs. It does bring a lot of benefits.
So I think the key is still how we show up, and this is a responsibility that we take very seriously.
Alex:
Okay, but if residents' electricity bills do not increase, you still have to bring in electricity, which may mean more pollution. Isn't that a problem?
Greg Brockman:
I think there are actually many finer layers to this.
If you look at how the power grid operates today, you will find that there is actually a lot of "idle power" — that is, a lot of power is already there but not being fully utilized. At the same time, the transmission system itself needs upgrading. And the key is that these upgrade costs should be borne by us, not by ordinary ratepayers, which is very important. There are many places where there is clean energy available, but this power is actually not being fully utilized and is even being wasted to some extent.
So, when the demand from data centers comes in, it actually brings a real impetus to upgrade those aging, outdated grids. And this upgrade, in fact, also brings real benefits to the community. For example, in North Dakota, we have seen that the construction of a local data center has actually helped improve the utility infrastructure, resulting in a decrease in residential electricity prices.
Alex:
Okay, one final political question. You donated $25 million to MAGA Inc., which is a political action committee supporting Trump.
Greg Brockman:
You've discussed this with Kara before as well.
NOTE: Kara Swisher, a prominent American tech journalist, known for her sharp questioning and direct style, has long covered Silicon Valley and internet companies.
Alex:
Right. You said at the time, "Anything that helps this technology truly benefit everyone, I will do." If this makes you a "single-issue voter" or "single-issue donor," so be it. But what I've always wondered is: For this kind of "single-issue camp," shouldn't the ultimate North Star of any political action be "making this country stronger" itself?
In other words, even if a candidate doesn't fully support what you're doing, if they can make this country stronger, shouldn't that also be a key criterion for political support? If so, is this also part of your donation consideration?
Greg Brockman:
This is how I see it: That donation was a decision my wife and I made together. We have also donated to super PACs on both sides of the aisle.
I feel that this technology is coming very fast. Over the next few years, it truly will change everything and become the underpinning of the entire economy. But it is not popular yet. So we very much want to support those political figures who are truly willing to embrace this technology, who truly understand this technology.
Of course, at a broader level, this technology itself is indeed enhancing our country's capabilities. In a sense, I am indeed a "single-issue voter" because I believe this is the area where I can make a unique contribution. But ultimately, this is still about expressing a support: as a country, we should proactively embrace this technology.
Alex:
If there's someone sitting in front of you right now who is very afraid of AI, thinking AI will take away their job, ruin their community, change the world too fast, what would you tell them?
Greg Brockman:
The one thing I most want to say is: Go try these tools for yourself. Because only when you have truly experienced the AI that already exists today will you truly understand what it can do for you.
And today we have already seen too many opportunities, potentials, and empowerments from this technology. You just said what you can do with it now, right? People who have never made a website before can now make one; if you want to start a small business, previously you might have been overwhelmed by various backend processes and operational details, but now AI can help you with many of these things.
So I think, for your own life, you should think about: Can it help you manage your health? Can it help you take care of your loved ones? Can it help you make money? Can it help you save money? All of these will be real options.
I think people always find it easier to see "what will change" but not so easy to see "what they will gain." However, I believe it's worth giving it a fair chance, to genuinely understand what each side of the scale really represents.
Alex:
By the way, this is also a point that is rarely discussed in surveys. Those who have only "heard of AI" but have never really used it themselves, or have hardly used AI, tend to be more negative. Once you become a heavy user, or even just a regular user, your view of this technology is usually much more positive.
Greg Brockman:
For me personally, we have been thinking about this technology for many years. And now, the way I see reality unfolding is even more amazing and beneficial than we ever imagined, and it will have a much more positive impact than we expected.
Alex:
One last question. If someone asks you, "How should I prepare for the future?" how would you answer?
And this answer cannot be just "go use a tool." Because I really have friends who come to me and ask, "I don't know what will happen to my job, don't know what will happen to this world, I just want to know what to do now."
Greg Brockman:
I still think the first thing is to understand this technology. We have seen that those who truly benefit the most from this technology are often those who approach it with curiosity. They will truly integrate it into their workflow, make an effort to overcome the initial threshold — facing a blank input box and that sense of "what should I even do with it."
You need to gradually cultivate a sense of agency: I can be a manager; I can set directions; I can delegate tasks; I can supervise. And you need to truly develop this ability because it will become a very foundational skill.
We are building this technology to help humanity, to promote more connections between humans, to give people more time to do what they really want to do. So, the question will ultimately become: What do you really want? And the truly important thing is to clarify this and to use this technology to achieve it.
Alex:
That's right. Thank you very much for coming on the show.
Greg Brockman:
Thank you for the invite.
Alex:
Also, thank you everyone for listening and watching. See you next time on the Big Technology Podcast.
Welcome to join the official BlockBeats community:
Telegram Subscription Group: https://t.me/theblockbeats
Telegram Discussion Group: https://t.me/BlockBeats_App
Official Twitter Account: https://twitter.com/BlockBeatsAsia