
How Do AI Companies Navigate Cycles, and What Will the Next Billion-Dollar Company Look Like?

Elad Gil on Hash Power Bottleneck, Exit Windows, and Ten-Year Moat
Video Title: The AI Frontier and How to Spot Billion-Dollar Companies Before Everyone Else—Elad Gil
Video Author: Tim Ferriss
Translation: Peggy, BlockBeats


Editor's Note: Against the backdrop of AI entering a competition-intensive phase of capital, computing power, and product, industry discussions are shifting from "Will the model's capabilities continue to advance" to "Who can truly stand out in this round of infrastructure reconstruction." Over the past two years, the market has been accustomed to understanding AI competition through model parameters, benchmarks, funding size, and valuation changes. However, as large model capabilities continue to converge and the gap between leading labs temporarily narrows, a more fundamental question has emerged: Does the long-term advantage in the AI era come from technical leadership or from a systemic combination of talent, computing power, distribution, organization, and market positioning?



This article is translated from a lengthy conversation between Tim Ferriss and Elad Gil. Elad Gil is a well-known Silicon Valley entrepreneur and early-stage investor who has invested in companies such as Airbnb, Stripe, Coinbase, Perplexity, Harvey, and Anduril, with a long-standing focus on technology cycles and high-growth company evolution.


In this conversation, Elad Gil did not attempt to predict which AI company would ultimately succeed but instead broke down AI competition into a set of more fundamental structural issues: how talent is being repriced, how computing power bottlenecks restrict the gap between leading labs, how application companies identify their exit window, and how startups transition from product capabilities to true organizational expansion.


This conversation can be understood in five parts.


First, the change in the AI talent market. In the past, wealth transfer usually occurred after an IPO: a company went public, and early employees and the founding team completed an asset revaluation. Now, Meta's aggressive bidding for top AI talent has forced other tech giants to match its compensation packages, letting a small number of researchers scattered across different companies experience a "personal IPO" ahead of time. This means AI talent is no longer just an internal R&D resource but is becoming a scarce asset that determines the speed of the technological race.


Second, the constraint of computing power has shifted from a single-chip issue to a supply chain issue. In the past, the market often understood AI infrastructure as "who can buy more NVIDIA GPUs." But Elad Gil emphasized that the current real bottleneck may be in areas such as memory, packaging, data center construction, and electricity. In the short term, this supply chain constraint may actually make it difficult for leading labs like OpenAI, Anthropic, and Google to significantly widen the gap. In other words, AI competition is not about a single breakthrough but is a long-term war revolving around capital expenditures, manufacturing capacity, and infrastructure coordination capabilities.


Third, the AI Application Company's Lifecycle. In the past, entrepreneurs often equated high growth with long-term value, especially early in a technological wave, when valuations, revenue, and user growth all inflate rapidly. However, Elad Gil takes a more cyclical view: in every technological revolution, the vast majority of companies eventually disappear, and AI is no exception. For many currently successful AI application companies, the next 12 to 18 months may therefore be not a fundraising window but an exit window for value maximization. The real question is not whether the company is growing, but whether it can still endure a decade from now.


Fourth, the Redefinition of Moats. In the past, a software company's advantage often came from product experience, data, channels, or brand; now, the key for AI application companies is whether they can embed themselves in the customer's workflow and become an indispensable system. Strengthening the underlying model does not automatically benefit all AI applications. Only those companies that simultaneously strengthen their product as the model advances, while deeply integrating with enterprise processes and proprietary data, may be able to survive the cycle.


Fifth, a Reinterpretation of Startup Expansion. Discussing his book "High Growth Handbook," Elad Gil emphasizes that high growth does not happen naturally: boards, funding, organizational management, distribution systems, and acquisition decisions all need to be actively designed. Truly large companies not only have good products but usually have a very strong distribution mechanism. The Google toolbar, Facebook buying ads against user names, and TikTok's large-scale advertising all demonstrate that growth is never a romantic story but a piece of systematically executed commercial engineering.


The long-term competition in AI will not be determined solely by model capabilities but by talent, computing power, market windows, distribution capabilities, and organizational design. In this sense, the subject of this article is no longer just about how AI companies can win but about what kind of companies are qualified to survive to the next stage in the new technological cycle.


The original content is as follows (slightly edited for better readability):


TL;DR


· The AI talent war has shifted from recruitment competition to wealth revaluation, with Meta's talent grab essentially allowing a small group of top researchers to undergo a "personal IPO" ahead of time.


· The key bottleneck in AI's short-term competition is not just chips but the supply chain system composed of memory, packaging, and data centers, making it difficult for any leading lab to pull decisively ahead in the next one to two years.


· The growth rate of AI companies is rewriting tech history, but the historical pattern has not changed: even if the overall trend holds, most companies will not make it through the cycle.


· The next 12 to 18 months may be the value-maximizing window for many AI application companies; once growth slows and their products are replicated by the labs, exit value will decline rapidly.


· The truly enduring AI companies are not those that merely use off-the-shelf models, but those that can control the entry point, embed themselves in customer processes, and strengthen as the underlying models improve.


· Elad Gil's investment approach is not about chasing hot concepts, but about assessing whether the market is sufficiently large and newly opened, then examining whether the team can seize that window.


· Scaling a startup is never a natural process; the board, financing, organizational expansion, and the distribution machine all need to be actively designed.


· The greatest industrial impact of AI is not making software smarter but reopening formerly closed markets such as law, enterprise services, and white-collar work, shifting from "selling tools" to "selling cognitive labor."


Interview Transcript


AI Talent Is Going Through a Personal IPO


Tim Ferriss: Elad, great to see you. Thank you for taking the time, really appreciate it.


Elad Gil: Good to see you too, as always.


Tim Ferriss: I think we can start from the topic we were just discussing before recording, or rather, a new phenomenon you were explaining. Can you recap what we were just talking about?


Elad Gil: Of course. We were just discussing some of the acquisitions happening in AI. For example, it seems xAI has just secured an option to actually acquire Cursor. Scale was also partially acquired by Meta. Deals like these have been happening quite a bit over the past year or two.


Furthermore, we were also talking about what this means for the AI research community and the broader AI community. I think one of the most interesting things to happen in the past year or so is that Meta has started to aggressively bid for AI talent. This is actually a very rational strategy: since they are investing tens of billions of dollars in compute, it is reasonable to allocate a real budget to poaching people.


Usually in the tech industry, the pattern is that a company goes public and a group of people from that company gains huge wealth. Some of them keep working hard on the original mission; others start to get distracted. They may take on projects to serve society, get involved in politics, start a new business, or simply withdraw and go lie on the beach.


And the recent development is that, due to Meta's high offers, other tech giants had to match the corresponding offer for their top researchers. As a result, approximately 50 to a few hundred people actually went through an "IPO" — not as part of a company, but as a group. They are not in the same company but scattered throughout Silicon Valley. However, their compensation packages suddenly skyrocketed, experiencing a wealth leap similar to a company going public. This is very rare and can be called a "personal IPO."


The only similar situation I can think of historically might be the cryptocurrency industry: a group of very early cryptocurrency holders and founders, as a collective, suddenly achieved a sort of "collective listing" around 2017, and again around 2020; similar moments have occurred again more recently.


But this is indeed very interesting, and the discussion is far from over. It may not necessarily have a huge long-term impact, but it does mean that the focus of some people will shift. They may embark on grand scientific projects, trying to help humanity; they may also turn to directions like AI for Science. Some people may leave their original path to pursue a personal mission or other endeavors.


Tim Ferriss: Yeah. Or simply "quiet quit," start indulging various desires, and chase after them. I mean, that will also happen.


Elad Gil: Well, of course, it definitely will.


Tim Ferriss: In that vein, look at Austin: you have a group of so-called "Dellionaires," early employees of Dell and related individuals who became wealthy from the stock post-IPO. Looking at them as a group, when something like this happens, I don't think we know the extent of its impact or how long it will last, but obviously there will be consequences.


Among the people I know who are both tech-savvy and have a broad vision and network to continually observe AI, there are actually very few. To some extent, if someone can observe this field relatively comprehensively, I would put you in that category.


You wrote an article this week that also discussed other factors at play here, such as the computational constraints AI labs face and how that might impact the next one to five years. Everyone should read this article, titled "Random Thoughts on the Frontier of AI Amidst the Thickening Fog." By the way, the title is nice.


Elad Gil: Quite dramatic.


Tim Ferriss: Yes, it's very dramatic, I love it, very cinematic. But before we dive into the topic of computational power constraints — I do hope you'll touch on that next — for those who may not have much background on the talent war, you mentioned earlier that Meta has started aggressively poaching talent. At the high-end talent level, what are these salaries, equity packages, or overall compensation packages approximately?


Elad Gil: I don't have the full range of exact details, nor do I know all the specifics. But based on rumors and claims that have already been covered in the media, these offers probably range from tens of millions to hundreds of millions of dollars per person.


Of course, the number of people who can command such sky-high treatment is very small. But the core logic is that we are in one of the most crucial technological races in history. The faster AI becomes stronger, the greater the economic value it unleashes. Therefore, for the few truly world-class individuals in this field, companies are willing to pay well above standard prices.


Five or ten years ago, these individuals certainly also received high salaries, but it was a completely different story, because at that time AI was not the core of the entire tech industry. More importantly, AI will have very broad societal, political, educational, and medical impacts. I believe these impacts will be positive overall, but it is indeed a transformative moment, hence the sudden surge in these compensation packages.


The Compute War's Bottleneck Is Currently Memory More Than Chips


Tim Ferriss: What is the computational power constraint you mentioned in your recent article?


Elad Gil: Nowadays, everyone refers to these companies as "labs" — such as OpenAI, Anthropic, Google, xAI, and so on. All these labs are essentially training giant models.


Specifically, you need to purchase a large number of chips from NVIDIA. But in reality, you are building a whole system: it includes NVIDIA chips, memory from SK hynix, Samsung, and other manufacturers, and you also need to construct a data center. Building such a large-scale system and data center involves many steps.


Basically, you are building a cluster of tens of thousands or millions of systems, and the scale is constantly increasing. These systems come from NVIDIA, and may also come from other suppliers. Google has its own TPU, and there are other systems in the industry. You use this infrastructure to train AI models.


This means that you run massive amounts of data on these huge cloud clusters. The most insane part is that the final output model is literally something like a flat file. It's kind of like outputting a text file or something. And then you load that file to run AI. Think about it carefully, this is very crazy: you run for several months on a huge cloud system, and in the end, what you produce is actually a small file.


And this small file, to some extent, combines the human knowledge available on the Internet, as well as logic, reasoning ability, and other capabilities.


You can also understand this from the perspective of the human brain. Humans have roughly three billion DNA base pairs, which are enough to specify everything about you as a physical individual, including your brain and mind: how you see, how you speak, how you taste, how all of your senses work. All of that is encapsulated in a relatively small set of genes.


Similarly, human knowledge can also be effectively encapsulated in such a small file.
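The "small file" point can be made concrete with rough arithmetic. A minimal sketch, assuming an illustrative 70-billion-parameter model stored at 16-bit precision and a roughly 3-billion-base-pair genome; both figures are assumptions for illustration, not numbers from the conversation:

```python
# Rough size of a model checkpoint versus the human genome.
# Both figures are illustrative assumptions, not from the conversation.

params = 70e9            # a hypothetical 70B-parameter model
bytes_per_param = 2      # 16-bit (2-byte) weights
model_bytes = params * bytes_per_param

base_pairs = 3e9         # ~3 billion DNA base pairs in the human genome
bits_per_base = 2        # 4 possible bases -> 2 bits each
genome_bytes = base_pairs * bits_per_base / 8

print(f"Model checkpoint: ~{model_bytes / 1e9:.0f} GB")   # ~140 GB
print(f"Human genome:     ~{genome_bytes / 1e6:.0f} MB")  # ~750 MB
```

On these assumptions, both artifacts are small enough to fit on a single drive, which is the sense in which months of training on a huge cluster collapse into "a small file."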


Tim Ferriss: So how do you see these constraints? Where are the constraints specifically?


Elad Gil: Every year, building these large-scale cloud clusters for training AI runs into constraints. Then there is also so-called inference, the reasoning stage: when you actually use the AI and the system itself runs, you also need large numbers of NVIDIA chips, TPUs, or other chips.


But besides the chips themselves, you need other things. For example, you need advanced packaging capacity to actually package the chips. So there is a whole supply chain around building these systems.


Various parts of this supply chain hit different bottlenecks at different times. The main bottleneck now is memory, or more precisely a specific type of memory, high-bandwidth memory, which is mainly produced by Korean companies, though there are other, more widespread suppliers as well.


It is widely believed in the industry that this memory bottleneck may last for about two years, give or take, ultimately because these companies' production capacity lags behind that of the rest of the system.


Some believe that in the future, other constraints may evolve into the construction capacity of data centers themselves, or the power and energy required to operate these systems. But for now, the main bottleneck is memory.


The entire industry is currently limited by how much computing power it can acquire and then put into model training and operation. The result is that, in the short term, there is a ceiling on model scale: every lab is buying as much compute as it can, many startups are doing the same, and everyone is stuck.


This means that in the short term, there is an artificial ceiling on how big models can be, how much reasoning can be done, and how much you can actually do with AI right now.


But it also means that no lab can far outstrip everyone else, because no one can buy ten times more computing power than its rivals.


And there is a scaling law here: the more computing power you have, the larger the AI models you can train, and in many cases the stronger the model's ultimate performance.


This may mean that in the next two years, the capabilities of these labs will likely be relatively close. Because no one has enough capacity to suddenly pull ahead.


But after this constraint is lifted, there is indeed a possibility: a company could suddenly take a significant lead over all other companies. Right now, OpenAI, Anthropic, and Google are actually quite close in terms of capabilities, although some companies may be ahead in some areas while others are ahead in other areas. It is generally believed that due to this bottleneck, this relatively close state should continue for at least another two years.


Tim Ferriss: Is Google also constrained by memory supply from companies like Samsung and Micron? Are they under similar constraints as other players?


Elad Gil: At the moment, basically everyone is under similar constraints. Some labs are either developing their own chips or their own systems. For example, Google has things like the TPU, and Amazon has developed its own chip called Trainium. Different companies have different systems, but fundamentally, they are limited by how much they can produce or purchase.


A year or two ago, the main bottleneck was packaging; now it's memory. Who knows what it will be two years from now, maybe it will be something else. In the process of advancing this round of infrastructure construction, we will continue to encounter new bottlenecks.


Tim Ferriss: My question may sound naive because I am a "Muggle" and cannot write technical whitepapers or anything close to that. But in my opinion — and I'm certainly not alone in saying this — we may be better at predicting problems than predicting solutions.


For example, a long time ago, when the price of gasoline rose above a certain level, everyone started predicting disaster and collapse. But when the price of oil per barrel exceeded a certain level, new extraction methods suddenly became viable, and funds began to flow into technologies like hydraulic fracturing.


So is there a possibility that the AI compute bottleneck will also see some kind of workaround? Something like that logic; I don't know if it even makes sense to ask. Maybe not at all.


Elad Gil: As far as I know, at least not yet. Part of the reason is that the way these things are built makes it difficult to bypass them.


For example, the capacity needed for memory fundamentally relies on a certain type of semiconductor fab. So you need time to build the fab, procure equipment, and set up the production line. This is a traditional capital expenditure and infrastructure development cycle.


These companies had previously underinvested in this area because they didn't fully believe others' predictions of AI demand at that time. Now they can only strive to catch up.


So it has become a situation where everyone is saying, "AI is growing so fast, how can it sustain this pace?" But it is indeed continuing to grow at this relentless pace. The reason is that the influence of these capabilities is too significant and too critical.


Looking at the revenue of these companies is very interesting. I can send you the chart later. Our team's Jared created a chart that summarizes how long different companies took to go from $1 billion in revenue to $10 billion, then from $10 billion to $100 billion, and then from $100 billion to $1 trillion.


In history, the number of companies that have truly achieved these scales is actually very small. You can look at the companies of different generations to see how long it took them. For example, I can't remember exactly, maybe companies like ADP took 30 years to reach $1 billion in revenue. Whereas Anthropic and OpenAI did it in one year.


Google probably took four years; I can't recall the exact number, but roughly speaking, the later the generation of company, the faster it scales. Now it is rumored that both OpenAI and Anthropic have annualized revenues of around $30 billion.


Tim Ferriss: This is insane.


Elad Gil: Because four years ago, they had no revenue at all. And $30 billion is roughly 0.1% of U.S. GDP. So AI may have grown from zero to a few tenths of a percent of GDP, at least in terms of revenue contribution.


If we extrapolate further, assuming they reach $100 billion in revenue in the next year or two, or at some point, we will be approaching a scenario where these companies each contribute a meaningful share of GDP, perhaps eventually 1% or 2%. Think about it, it's really outrageous.
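The back-of-the-envelope arithmetic in this exchange can be made explicit. A quick sketch, assuming U.S. GDP of roughly $30 trillion; the GDP figure is an approximation for illustration, not a number from the conversation:

```python
US_GDP = 30e12   # assumed ~$30 trillion U.S. GDP (rough illustrative figure)

def share_of_gdp(revenue: float) -> float:
    """Annual revenue as a fraction of the assumed U.S. GDP."""
    return revenue / US_GDP

# Sanity-check the ~0.1% figure for ~$30B in annualized revenue
print(f"$30B in revenue -> {share_of_gdp(30e9):.2%} of GDP")

# How much revenue a single company would need to reach 1% of GDP
print(f"1% of GDP -> ${0.01 * US_GDP / 1e9:.0f}B in revenue")
```

On this assumed denominator, $30 billion is about 0.1% of GDP, and reaching a full 1% would take roughly $300 billion in revenue, which gives a sense of the scale the extrapolation implies.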


Tim Ferriss: It's insane, truly insane.


Elad Gil: These things are indeed very significant and very useful. And this doesn't even include the cloud revenue Azure receives from its AI business, nor the related revenues of Google Cloud or Amazon. This is just about OpenAI and Anthropic. It's really extreme.


Tim Ferriss: I'm very interested in diving deep into your thought process. Because among all the people I've met, you are the best at first principles thinking and one of the best at systematic thinking. I enjoy our conversations because I always learn something new, and it's not necessarily a specific data point; many times it's a different perspective on an issue or a thinking framework.


And your framework itself is constantly evolving. For example, I remember seeing an interview you did with First Round Capital a long time ago. Back then, you talked about how you used to look at the market first when making investments, and then consider the strength of the team. You also mentioned missing out on investing in Lyft during its Series C. At that time, your judgment partly depended on your assessment of the market landscape: whether it was going to be a winner-takes-all situation, an oligopoly, or some other form.


I'm curious, in the field of AI, how are you thinking about this issue now? Because among the people I know, you were one of the earliest to start moving in this direction, maybe even the very first.


So, what are your thoughts now? This also ties back to a statement you made in your article. I haven't heard anyone else say this, but I can bring up this sentence as a hint—although I feel like you don't need a hint.


You wrote: Founders currently running successful AI companies should seriously and calmly consider exiting in the next 12 to 18 months. This may be the time window to maximize the outcome value.


You also revisited the survival rate of companies during the burst of the internet bubble, and the percentage of companies that later emerged as true winners. How do you think about this issue? Could you explain this statement?


Elad Gil: Of course.


Tim Ferriss: Also, I'd like you to explain how you currently view what kind of landscape this market will ultimately form. Do you think it will be a winner-takes-all situation, an oligopoly, or will there be other dynamics?


Elad Gil: If you look at historical precedents—of course, this doesn't mean AI will necessarily follow the same path—in almost every technological cycle, 90%, 95%, or even 99% of companies eventually fail.


You can trace this back a hundred years to the "high tech" industry of that era: automobiles. Back then, Detroit had dozens of car companies and hundreds of suppliers, but eventually the entire industry consolidated into a few major automakers. This is not a new story.


Looking back at the internet cycle of the 90s, the internet bubble: around 1999, approximately 450 companies went public, and in the first few months of 2000 another 450 or so, about 900 in total. Add the 500 to 1,000 companies that had gone public in the years before, and the total is roughly 1,500 to 2,000 companies.


These companies are all already public, which means to some extent they have already been "successful." But how many of these companies are still around today? Maybe a dozen, perhaps two dozen. That is to say, out of 2000 companies, roughly 1980 have in some form disappeared or been acquired at very low prices.


So we have no reason to believe the AI cycle will be any different. Every cycle is like this. SaaS was like this, the mobile internet was like this, and the crypto industry was like this too. Most companies will not succeed; only a few will survive. We can discuss which companies will survive.


So, if you are running an AI company now, you should ask yourself one question: What kind of longevity does your company really have? Ten years from now, will you be one of those dozen or two dozen truly important companies? Or is now actually a good selling window? Because what you are doing may be commoditized, may be directly competed by big model labs, or market and tech changes may make you obsolete.


Of course, there will be a few companies that continue to become very great. They should not sell or exit but should keep moving forward. But there are likely many companies for whom now, or in the next 12 to 18 months, is the best time they will ever have to get the highest valuation for what they are doing.


For every company, there is a moment of value maximization. They reach a certain peak, and that peak is usually a window of 6 to 12 months. During that time, what you are doing is important enough, growing fast enough, everything is running smoothly, and certain headwinds have not yet hit.


Sometimes these headwinds are foreseeable; you can see them coming. Often you can see them in the second derivative of growth: your growth rate starts to flatten a bit. At that point, you either keep pushing upward or you should consider selling. That is really all my comment is about.


You can also see from our previous conversation that I am extremely bullish on AI. So, this is not to say I am not bullish on the overall transformation AI will bring, but to say that in this transformation, ultimately only a few companies will remain significant. The key question is: Are you one of them?


If you are, then you should never, ever, ever sell.


How Do AI Companies Navigate the Cycle? Either Control the Gateway or Embed in Workflow


Tim Ferriss: So what are the characteristics of these few companies? I mean, those that truly have a lasting advantage. Looking back to 2000, you wonder, what criteria should have been used to pick out Google and Amazon at the time?


Elad Gil: Yes.


Tim Ferriss: I'm not saying that the Internet bubble is the best point of comparison. But in the current wave of AI company proliferation, which companies do you think have enduring advantages?


Naturally, some prominent large-model labs come to mind. Perhaps they will become the gateway for all other applications, who knows. But how would you answer? Looking at common characteristics or specific company names, what do you think distinguishes the few companies that will survive from the rest?


Elad Gil: I think the core large-model labs will exist for quite some time. OpenAI, Anthropic, Google: barring some accident, disaster, or internal implosion, they seem to be in a relatively stable position.


As for the market structure you mentioned, I wrote a Substack post about three years ago, predicting that this might be an oligopoly: there will be only a few companies, and they will be tied to cloud providers. Looking at it now, that's largely the case. Of course, there are still Meta, xAI, and other players that could change the landscape. These variables did not exist when I wrote that article.


But in my view, in the short term, this is still an oligopoly. There is no natural reason for it to turn into a monopoly market unless one of them is so far ahead in capabilities that it naturally becomes the default choice for everyone. This scenario is possible, but it has not happened yet. And as for the compute constraint I mentioned earlier, it may prevent this situation in the short term, or at least place some limits on it.


If you move up the stack to the application layer, you see different types of application companies. For example, Harvey in legal, Abridge in healthcare, and Decagon and Sierra in customer service. There are companies in each application direction.


To judge whether these companies can establish themselves in the long term, you can look at them from three or four perspectives.


First, if the underlying models get better, will your product or service significantly improve for customers and make them willing to continue using you?


Second, from a product perspective, how deep and broad are you going? Are you building multiple products? Are these products integrated into a coherent whole? Is it really embedded in a company's internal processes and to an extent that is difficult to uproot?


Many times, the real issue companies face with AI is not "how good is this AI," but "how much do I need to change existing workflows and how employees do things to adopt it." This is often a change management issue, not a technical one.


So, if you have deeply embedded yourself in the client's workflow, business processes, organizational collaboration, and how various systems interconnect, this position often becomes more enduring.


Third, are you capturing, storing, and leveraging proprietary data? Sometimes this can be very useful. Overall, I think the so-called "data moat" is often overhyped, but in certain cases, it does hold significant value. This usually corresponds to a "system of record" worldview.


So there is a set of criteria for assessing how durable a moat is, and at the application layer this is often the key lens.


Tim Ferriss: So I have a question. Suppose someone in the audience is in this position: they might be a founder and should consider identifying the transient window when their company is most valuable, and then to some extent, "parachute out." What are their options?


Because I'm thinking of some companies—no names mentioned—that now have valuations in the tens of billions of dollars. From my mostly outsider perspective, what these companies are selling now doesn't seem very difficult to replicate in a large model lab.


Should these companies aim to be acquired by a big model lab? If so, the lab faces a "build vs. buy" decision. Or should they target not the labs like OpenAI and Anthropic, but players looking to get deeper into this game, such as Amazon or similar? How do you view their exit options?


Elad Gil: I think there are actually many exit options. And one crazy thing right now is that if you go back 10 or 15 years, the largest companies globally were probably valued at around $300 billion. The largest tech companies, I remember, were maybe around $200 billion in valuation. The largest companies back then seemed to be energy companies like Exxon.


But over the past 10 to 15 years, things suddenly changed: we started having a bunch of trillion-dollar market cap companies. Everyone thought that was crazy, but in reality, company sizes will likely only continue to grow. The biggest winners in the future might see even stronger aggregation effects rather than more dispersion.


Now, more and more companies are in the range of $100 billion to trillions in valuation, which is unprecedented. This means they have enormous purchasing power. Because 1% of a $3 trillion market cap company is $30 billion. In other words, diluting just 1% ownership allows you to acquire a company for $30 billion. That's extremely mind-boggling.


This is indeed unprecedented. And it is precisely because of this that these mega-acquisitions can now happen.
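Elad's back-of-the-envelope here is simple but worth making concrete (the figures are his illustrative ones, not any specific company's):

```python
# Elad's back-of-the-envelope: what 1% dilution of a mega-cap acquirer buys.
market_cap = 3_000_000_000_000  # a $3 trillion company (illustrative)
dilution = 0.01                 # issue 1% of shares in a stock acquisition

purchasing_power = market_cap * dilution
print(f"${purchasing_power / 1e9:.0f}B")  # prints "$30B"
```

A 1% dilution that buys a $30 billion company is what makes the current wave of mega-acquisitions mechanically possible.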


Tim Ferriss: For those companies that come to mind for me — I won't name names — they may seem to have a limited lifespan. I often chat in small groups with friends, many of whom are highly successful tech investors. And then I ask them: "Alright, imagine these five companies lined up here, and you have 10 chips; how would you allocate them?" Some of these companies, despite being anything but obscure, almost always end up with zero chips. So why would these labs go and acquire such companies?


Elad Gil: It depends on what specific company it is. And the buyer may not necessarily be a large-scale lab, it could also be a large tech giant. Like Apple, Amazon, and to some extent, Google. There are also Oracle, Samsung, Tesla, and now even SpaceX is starting to enter this market and do related things. There are actually many different types of buyers. And then there's Snowflake, Databricks. If you are in the financial services space, there might be Stripe, Coinbase. In fact, there is a large group of companies that are already very large in scale, and that is the key.


So, a company usually ends up selling to one of four types of buyers.


The first type is large-scale labs, hyperscale cloud providers, or large tech companies.


The second type is companies that are very focused on your specific vertical. For example, if you are in law, accounting, or a related field, companies like Thomson Reuters may be interested.


Furthermore, I think one thing that hasn't happened enough is mergers between competitors, especially mergers between private companies. Because if your primary goal is to win the market, and you and another competitor are evenly matched, competing in every deal and undercutting each other's prices, maybe a better option is to merge.


That situation is actually like the X.com and PayPal scenario in the 90s. Elon Musk and Peter Thiel were running separate companies at the time and later chose to merge because they realized, "Since we are both doing the same thing, why keep fighting?"


Tim Ferriss: Yes. Or like the early days of Uber and Lyft. That might not be considered a merger, more of an acquisition.


Elad Gil: Yes. The rumor is that it almost happened, but then Uber backed out. But all the money Uber has spent over the years competing with Lyft may well have exceeded what it would have cost to just buy them back then. Of course, that may not be accurate; I don't know the specifics.


However, many times, choosing to say, "Let's stop competing against each other, merge instead, and go win together," actually makes sense. Because if the primary goal is to win the market, and you are already competing with a group of existing giants, why make it more difficult?


Tim Ferriss: You know, we often discuss this. But this time, I want to talk about your perspective as an investor. Before you truly put on the "full-time investor" hat, though, there was already a lot in your background that may have helped you, or may not have. I'm curious, looking back at your background in biology and mathematics, do you think those things, or other experiences, have substantially influenced your investment thinking? Have they given you some kind of advantage? Of course, winning deals involves multiple stages, but let's first talk about screening and the selection process.


Elad Gil: I think math has helped me in two ways.


First, it has helped me understand certain technical issues, especially things related to algorithms, computer science, and sometimes, this is very useful for understanding how things work in AI. Or at least it makes me more familiar with numbers and data. I wouldn't necessarily call it "nerd language," but it's probably something like that.


To be honest, I majored in math at the time just because I liked it. I think the really helpful part is also here. I just did an undergraduate degree in math, didn't go too deep, but what I studied was very abstract pure math.


I think this is a very good training; it forces you to really think logically step by step. At least when I was learning how to do proofs, the general way was: you first establish a logical sequence, but sometimes you also make some intuitive leaps, and then you try to prove it to yourself afterwards or complete the reasoning behind this intuition.


I think investing is sometimes a bit like this.


Tim Ferriss: When was the first time you realized you might be good at investing? Investing here can mean investing broadly, or, in the context of our conversation, startup investing and angel investing. When did you first feel, "Hmm, maybe I'm not bad at this"? Was there a moment, a particular deal, or something else that made you think that?


Elad Gil: Actually, no. I am very demanding of myself, so even now, I often question myself. Someone once told me that the two people who like to repeatedly blame themselves most afterwards are me and another very well-known founder and investor.


So, I don't have a single moment where I think, "Wow, this thing is really suitable for me." It's more like it naturally continues to happen. Because I invested in some very strong companies, and that allowed me to keep going. Yes, I also hope to have that kind of "epiphany" moment.


Tim Ferriss: Damn, you've got to, like every great founder, rewrite your early story.


Elad Gil: Yeah, I've been thinking about investing in tech companies since I was seven.


Tim Ferriss: How did you get into those deals? Some people have an information edge, and they put themselves in a position to have that edge. I don't want to lead the witness on this question, but for me, if I hadn't moved to Silicon Valley in 2000 and then stayed there, especially moving to San Francisco, nothing I've done in angel investing would have happened.


But clearly, your story is more than that. Because a lot of people moved there with hopes of getting rich through startup companies, in whatever capacity. Of course, I'm not saying you moved there for that. But what allowed you to get into those deals? Based on our past conversations, I have some factors in mind, but I'll hold off on saying them. Why were you able to get in, or selected for those deals?


Elad Gil: I think what happened early on versus what happens now is different. Those are two different stages.


Just as you said, for anyone trying to get into any industry, the most important thing is to go to the headquarters of that industry, or where its cluster is based. You have to move to where things are actually happening. The advice that says "you can do anything from anywhere" is nonsense. It's not just the tech industry; it's all industries.


If you want to get into the movie industry, people won't tell you, "You can write movie scripts from anywhere, score music from anywhere, edit from anywhere, and also shoot from anywhere. So go to Dallas and join their thriving film community." They'll say, "Go to Hollywood."


If you want to get into finance, you might say, "I can fundraise from anywhere, think of trading strategies from anywhere, hedge fund strategies from anywhere." But people will say, "Go to New York, or to some financial center."


It's the same with the tech industry.


Our team member Shreyan has been doing a "Unicorn Analysis," studying where the market cap of private tech companies is concentrated. Traditionally, about half is in the U.S., and within the U.S., about half is in the Bay Area. But in this AI cycle, 91% of the market cap of private tech companies is in the Bay Area. 91% of the global AI private market cap is concentrated in roughly a 10-mile by 10-mile area.


So, if you want to do AI, you should probably be in the Bay Area. The second option might be New York, but then it drops off a cliff. The real core place is still the Bay Area.


If you want to do defense technology, you might want to go to Southern California, close to where SpaceX and Anduril are located, such as Irvine, Orange County, El Segundo, and so on. There are many startups there.


If you want to do fintech and crypto, it's probably New York.


But the reality is, these industry clusters are very strong. So the first point, as you said, I was indeed in the right place at the time. I was in the right network. Another default condition is that I myself am running a startup. I worked at Google for many years, then left to start my own business. People started coming to me for advice.


For example, the way I eventually invested in Airbnb was when they had about eight people, and I was helping them with their Series A. I introduced them to some people and provided very light strategic help. Of course, they would have completed the fundraising without me. In the end, they said, "Hey, when this round is closing, do you want to invest a bit?" I said, "Sure, it sounds great." It was a very natural thing.


Another example is how I invested in Stripe. At that time, I had sold my own early-stage API infrastructure company to Twitter when Twitter had about 90 people. Then I sent an email to Patrick, the CEO of Stripe, saying, "I've heard a lot of good things about you, and I also really like what Stripe is doing. If it were my own startup, I would use it too. I just sold an API company myself. Do you want to chat about these things?"


We took a few walks together. One or two weeks later, he texted me, "Hey, we're fundraising, would you like to invest?" So my earliest investments happened very naturally. Founders would say, "I hope you'll join us."


At that time, I didn't think, "Oh, I should become an investor and then go chase projects." I just really enjoyed talking to smart people, solving certain business problems, and loving technology and how it translates into the real world. I'm just a nerd, and then I met other nerds, and we hit it off. That's my early story.


Tim Ferriss: I suddenly thought of a saying, you've probably heard it, and I'm sure everyone has: If you want money, seek advice; if you want advice, seek money. I just suddenly realized that this can work the other way around. In other words, if you keep offering a lot of advice, many times you will eventually get the opportunity to invest money. Conversely, if you initially want to give money, others may come to you for advice.


Elad Gil: Yes, well said.


Tim Ferriss: When did you write the "High Growth Handbook"? When was that book published?


Elad Gil: It's been a while. Probably around seven years ago, more or less.


Tim Ferriss: Seven years ago. Okay, we'll come back to this topic later. Because you are indeed in the right place geographically. You are at the center of the switchboard. As you said, the earliest prominent investments were very organic.


What I'm curious about is, as you mentioned earlier, in the past, you were doing one thing, and now you are doing another. But between these two, there has been an evolution. For example, I want to ask, do you still agree with this statement? This is from that First Round interview I mentioned earlier: "As a general rule, when I make investments, I first look at the market, then at the strength of the team." There is more to it. But do you still agree with this statement?


Elad Gil: I agree 90%. Occasionally, you come across a very special individual, and then you support them, especially in the very early stages.


For example, I led the first round of funding for Perplexity; it was very, very early. The reason was that Aravind, the CEO of Perplexity, had, I believe, sent me a message on LinkedIn. At that time, almost no one was working on AI; he was an engineer or researcher at OpenAI then.


He said, "Hey, I'm at OpenAI." Of course, no one really cared about OpenAI at that time. "I'm considering doing something related to AI. I heard you talking about these things, and not many others are. Can we meet?"


Then we started meeting every two weeks, brainstorming together. Later, this turned into an investment. It was a "people-first" thing because he was just so excellent. Every time we finished our discussion, a week later he would come back with a finished product of what we had talked about. Who does that?


Tim Ferriss: Yes, that's a very good signal.


Elad Gil: He was really impressive.


Another example is how I eventually invested in Anduril. At that time, Google shut down Maven, their defense project. I thought, "If these existing giants are not willing to do it, isn't this a great opportunity for a startup to step in?" Because Silicon Valley and the defense industry have a long history, like HP, and many early brands were like this.


So I was looking to see if anyone was working on this at the time. This direction was very unpopular back then. Later, I think it was at a brunch or similar event, I met Trae Stephens, one of the co-founders of Anduril, who was also at Founders Fund.


Once again, this highlights the importance of being in the right city. He said, "Oh, I'm working on a new defense project." I said, "Great, let's talk about it."


So sometimes, I actively look for these things in the market; sometimes, I meet people first. Anduril was a case of seeing the market first and then finding exceptional people. Perplexity, on the other hand, was somewhere in between: I was always looking at various things in AI at the time because I believed it would become extremely important, but there weren't many people focusing on it then. Then I met someone outstanding.


My investment in OpenAI was also like this. My investment in Harvey, the early legal AI company, was also like this. I invested in many very early-stage projects because they were among the few truly working on what I believed to be crucial markets at the time.


Tim Ferriss: I'd like to go back to a few things you said earlier. You mentioned the founder of Perplexity, or the person who later became a founder, saying he saw or heard you talking about AI. Where exactly was that? Was it in your blog post? Or somewhere else? How did he actually discover you were talking about these things?


Elad Gil: I think he reached out to me, in part because I had previously been involved in many companies from the last generation of tech, like Airbnb, Stripe, Coinbase, Instacart, Square, and others. I already had some visibility as both a founder and an investor at that time.


Furthermore, at that time, I was actively "harassing" AI researchers, constantly asking them what was happening right now, because it was so fascinating. Many people were using GANs (Generative Adversarial Networks) to make art at the time.


I was also playing around with these things. I had tried to hire engineers to help me build something fundamentally similar to Midjourney because I felt that if AI art creation could be made very easy, it would be very cool.


Tim Ferriss: I'll pause here for a moment because this leads perfectly into my second question. When you mentioned AI earlier, you said you believed it would become extremely important at the time. What signs led you to that conclusion? What was the distant "smoke" that made you think, "Oh, this is an interesting direction"?


Elad Gil: I think there are probably two or three factors.


AI has always been something that people have talked about for a long time. When I was studying math, I took many theoretical computer science courses and was exposed to early neural network classes and the underlying mathematical foundations. People have always been anticipating building some form of artificial intelligence.


In a sense, you could even say Google was the first AI-first company. It's just that at that time, we called it machine learning, and in a sense, the technical foundation was also different.


I think 2012 was a key inflection point. That year, AlexNet appeared, proving that you could start scaling up models, and as the scale expanded, AI systems would exhibit very interesting features.


Then in 2017, a team at Google invented the Transformer architecture. Now almost everything is built on top of this architecture, or roughly based on it. For example, when you look at the GPT in ChatGPT, that "T" stands for Transformer.


Then around 2020, GPT-3 appeared. It was a huge leap compared to GPT-2. At that time, it wasn't good enough to truly be widely applicable, but you would realize: "Wow, the scaling laws papers are out, and the leap in capabilities is so significant."


All of a sudden, you have a general model that can be called via API, accessible to anyone. If you extrapolate this further, you will find that it is bound to become very important.


So basically, I'm watching this leap in capabilities, trying out these technologies firsthand, and reading scaling laws-related papers. Or more broadly, I found that scaling laws seem to apply to many things. You would think, "Wow, this thing is going to become very, very important, so I should start getting involved."
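For context, the scaling-law papers Elad mentions (notably Kaplan et al., 2020) fit pretraining loss as a smooth power law in model size N and dataset size D; roughly (the constants below are approximate fits from that paper, not exact values):

```latex
% Approximate power-law fits from Kaplan et al. (2020)
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \quad \alpha_N \approx 0.076;
\qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \quad \alpha_D \approx 0.095
```

The practical upshot, and what made the GPT-2 to GPT-3 leap legible in advance, is that loss falls smoothly and predictably across many orders of magnitude of scale, so "just scale it up" became a credible bet.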


Tim Ferriss: Do you think if you didn't have a math background, you would still be able to make this kind of judgment? I guess others might have done it too. But this also brings me to my question: How did you discover and absorb this information? Was this a hot topic in the circle at the time? In other words, in your social circle and network, were people already publicly discussing this, so you naturally got involved? Or were you already absorbing a lot of information from different fields, and AI happened to be one particular direction that particularly attracted you?


Elad Gil: I think there are three things.


First, I have always absorbed a wealth of information from many different fields because I enjoy learning about various things. I am someone who blends mathematics, biology, anime, art, and other subjects together and have always been in this mixed state.


Second, this was indeed something my friends would talk about, but at that time, it was more like a playful discussion. For example, "Oh, this is cool, look at what it generated." However, most people did not take it any further. It was a bit like early-stage cryptocurrency or Bitcoin: everyone was talking about it, but very few actually bought into it. I think that's part of the reason.


Thirdly, to be honest, I just found these things very interesting, so I kept playing around with them.


This brings us back to the matter of GANs and AI art. Different models would keep emerging at that time, and you could try them out.


Regarding this round of base models, AI, and all related changes, the importance of one thing has actually been severely underestimated. The way AI or machine learning used to work in the past was usually like this: you would have a team in your company or elsewhere, and then there would be the so-called MLOps team. That is the machine learning operations team. Their job was to help you set up all the data, pipelines, and related processes to train a model.


The model you trained was tailored to your specific use case, tailored to what you wanted to accomplish. Then you had to build a bunch of internal services to interact with this model.


So, making a usable machine learning system truly run and enter a production environment was a very painful thing.


Then suddenly, things turned into: you just need to call an API. With one line of code or a few lines of code, anyone anywhere in the world can access it.


And not only that, it is also generic. It is no longer specialized for one scenario, like spell-checking, for example. You can use it for anything. In a sense, its knowledge base is embedded with the entire Internet. It also began to possess more advanced reasoning capabilities.


But one of the most important points is this: you can get it with just a few lines of code. You don't need to assemble an MLOps team, host it yourself, deal with a bunch of interaction processes, or do all this extra work. It's just usable.


This is really crucial.
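The shift Elad describes, from a bespoke MLOps pipeline to a few lines of code, can be sketched roughly as follows. This is a hypothetical generic chat-style endpoint; the URL, model name, and response shape are placeholder assumptions, not any specific vendor's API:

```python
import json
import os
import urllib.request

# Hypothetical chat-completions-style endpoint; URL and model name are
# placeholders for illustration, not a specific vendor's API.
API_URL = "https://api.example.com/v1/chat/completions"

def build_request(prompt: str, model: str = "general-purpose-llm") -> dict:
    """Assemble the entire 'ML system': one JSON payload, no MLOps pipeline."""
    return {
        "model": model,  # one generic model instead of a task-specific one
        "messages": [{"role": "user", "content": prompt}],
    }

def call_model(prompt: str) -> str:
    """Send the payload over HTTP; needs a real endpoint and API key to run."""
    payload = build_request(prompt)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {os.environ.get('API_KEY', '')}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The point is structural: one request payload and one HTTP call replace what used to be a dedicated MLOps team, task-specific training data, and internal serving infrastructure.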


Tim Ferriss: This is so crucial. It's really hard to overstate this.


I have a million questions to ask you. The problem is that we have too many directions to talk about; it's almost an embarrassment of riches.


My team and I are now using Claude Code and various tools to do a lot of things. One of them happens to align closely with your expertise: angel investing.


For the first time, I feel like I truly have the capability to do this. Of course, as you might expect, some manual input is still required. But now I can look back and analyze my two decades of angel investing experience and attempt to do many different things.


I suspect many of the things that pique my interest may not have much practical value, such as running counterfactual analyses: What if I had held onto each investment for three years, five years, or some other period? This is essentially Opus Dei-style self-flagellation, just whipping yourself on the back.


However, while conducting this analysis, some questions immediately come to mind and may actually be worth exploring. I want to hear if you would do this and if so, how you would go about it.


To be honest, part of this is purely out of curiosity. I want to know if the stories I've been telling myself all along are actually true. For example, I'd be interested to know: Who exactly made certain introductions? Did some people just refer those terminally ill, almost-on-life-support companies to me for a last-ditch effort? Or, were there indeed some individuals consistently recommending good projects to me?


There are a million ways I could interrogate and enrich this data. We are currently doing this with Claude and other tools, and it's going well. OpenAI is also very strong in this area.


Looking back, such as in my case, with approximately 20 years of investment records, what do you think are some more intriguing questions or analysis paths worth examining?


Elad Gil: Yes. I've been doing something quite odd recently: I upload a founder's photo and have a model predict if they will become outstanding founders.


Tim Ferriss: Oh, wow.


Elad Gil: Because when you think about it, we've actually been doing this whenever we meet people. We quickly try to make a judgment about someone: What is their personality like? What kind of person are they?


There are many subtle features. For example, do you have crow's feet at the corners of your eyes, which may suggest whether your smile is genuine? What does that imply about your sense of humor? Or, if you frown a lot, what does that mean?


There are many of these subtle features. When you meet someone, you quickly form an initial impression of them. Of course, this doesn't mean it's always accurate. But as human beings, we do indeed do this very rapidly.


So I've been playing with a whole set of these tells just for fun. The question is: Can you extrapolate a person's personality from a few photos? And if you can, can you somewhat predict their behavior? I find this very interesting.


Tim Ferriss: Yes. Have you discovered any signals in it yet? Or are you still unsure?


Elad Gil: Actually, the results are not bad. I've been doing some really weird tests recently, like with shirts, right?


Tim Ferriss: Yes, practicing observing people's smiles.


Elad Gil: Yes, exactly.


But I find it very interesting, because we've always been reading people. This is also part of the tells. For example, you can set the model up with a prompt like: "You are very good at cold reading based on microexpressions, facial features, and other information," and spell those instructions out explicitly.


Then, you make it not only give an interpretation of the person but also explain the specific micro features behind each judgment. It will help you break it down step by step. It's truly amazing. Think about what this technology is all about, it's insane.


Again, I want to emphasize that I'm not saying it's completely accurate, nor am I saying it's necessarily predictive. But in the realm of "reading someone," it's already doing pretty well.


It will even give judgments like: "This person may have a certain type of humor," or "This person may be more restrained in most social situations but would suddenly interject with an unexpected witty, dry humor-type comment." It's very specific.


Tim Ferriss: Very specific.


Elad Gil's Investment Approach: Look at the Market First, Then the Team


Elad Gil: Yes, it's amazing. I've been doing similar things recently. Although this may not be the question you really wanted to ask, I find it particularly interesting.


Tim Ferriss: It's actually related. Of course, I definitely missed some steps. But I really like angel investing; it's just that "the dose makes the poison." So usually at some critical point, I think, "Well, this isn't fun anymore."


I also like dark chocolate, but I don't want to be fed dark chocolate all day long. I've talked about this before, but to be honest, I do enjoy the learning process and the competitiveness, as well as interacting with some very, very smart people. Of course, not everyone ends up being a successful company founder. But ultimately, I have always been trying to distinguish the signal from the noise.


Additionally, regardless of the method, investing in this case, it is quite interesting because you can use it to sharpen your thinking, stress-test your beliefs, and examine the underlying assumptions behind certain predictions.


So I'm just curious, have you ever done a retrospective analysis of your past venture investments? Or are you more like Marc Andreessen in style: only looking forward?


Elad Gil: Yes. When I first started investing, I would create a very long spreadsheet, score each company on many dimensions, and then look back later to see if these judgments were correct. Overall, they were mostly correct.


But the challenge is that there is a lot of randomness in the outcomes. Some companies that you thought were dead end up selling for tens of billions of dollars, or something similar happens. Right?


Tim Ferriss: Of course.


Elad Gil: How do you score these situations? For example, right now we are in a very strange market moment, where companies with multi-trillion dollar valuations are all chasing after the same prize. They are making all sorts of moves that wouldn't normally happen.


So, factoring this in for evaluation is actually very difficult. Overall, I lean more towards the Marc Andreessen camp. I rarely think about the past. For my own past, I hardly think about it. I'm more like, "Keep moving forward."


Maybe that's not good, maybe I should engage in more, deeper self-reflection. I'll try to reflect in the moment, but I won't try to rehash and review my entire life and all decisions.


If anything, many decisions actually make me quite angry at myself afterwards because I feel I wasn't aggressive enough at the time. In other words, I invested in a certain company, but I should have tried harder to invest more, even though I was already working very, very hard at the time.


Because there are only a few truly important companies. And for an investor, that's the most important thing. Of course, as a person, I do enjoy being involved with different companies, collaborating with different founders, helping them, whether they succeed or not. I also get involved because a certain technology is very interesting.


But from a return perspective, the reality is a very clear power-law distribution. People often talk about this, and it is indeed true.


I remember a friend did an analysis, it might have been Yuri Milner, or it might have been someone else. He looked at all tech companies from around 2000 or 2004 until now. I don't remember the exact dates, but the general conclusion was: about 100 companies contributed to over 90% of all returns, and a total of 10 companies contributed to 80% of all returns in the tech industry over the past 20 years.


If you didn't invest in those 10 companies, you are a bad investor. Once you start facing this power law distribution, outlier outcomes, and all these factors, how do you score yourself?


Essentially, it's: Did you invest in one of those 10 things? That is the true yardstick. For investing, this may be the right way to evaluate.
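The power-law concentration Elad describes can be illustrated with a toy simulation. The distribution and parameters here are arbitrary choices for illustration, not fitted to real venture data:

```python
import random

random.seed(0)

# Simulate 1,000 venture outcomes drawn from a heavy-tailed (Pareto)
# distribution. A shape parameter near 1 gives the fat tail characteristic
# of venture returns; the exact value is an illustrative assumption.
outcomes = sorted((random.paretovariate(1.1) for _ in range(1000)), reverse=True)

total = sum(outcomes)
top10_share = sum(outcomes[:10]) / total
top100_share = sum(outcomes[:100]) / total

print(f"Top 10 of 1,000 outcomes: {top10_share:.0%} of all returns")
print(f"Top 100 of 1,000 outcomes: {top100_share:.0%} of all returns")
```

With a heavy-tailed draw like this, a handful of outcomes routinely accounts for the bulk of total returns, which is exactly the scoring problem Elad is pointing at: the only yardstick that matters is whether you were in the few outliers.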


Tim Ferriss: I want to focus as much as possible on some early-stage decisions in this podcast episode. Like you said, those are all earlier decisions. The past has its ways, and the present has its ways. It's not about which is better, but what you did in the past usually affects what you can do now and how you do it now.


I am curious, we won't spend too much time on this, but it might be interesting for the audience: When did you transition from simply doing angel investing yourself to getting other investors involved in your deals as well?


There are many ways to do this. But the reason I wanted to ask this is because you have done a lot of SPVs. Let me explain first, SPV stands for Special Purpose Vehicle. People may be more familiar with venture capital firms: they have funds, such as raising $100 million for a fund. Of course, the amount can be more or less. Then they invest the money in many different companies, and in the end, see which companies win and which companies lose. If there is a profit, in a traditional textbook example, the venture capital firm usually takes a 20% profit share, and the LP, the Limited Partner, takes 80%.


Venture capital firms also charge management fees to maintain company operations. Of course, the actual situation is usually far more than just "maintaining operations."


And SPVs usually invest in a specific company. For simplicity, let's assume it's a single company. For the person initiating the SPV, this structure has some clear advantages. But it also carries a significant reputational risk. Because if you have a fund and some companies fail, your investors won't automatically go to zero; but if you do an SPV and it goes to zero, it could seriously damage your reputation.


I've seen some of your early SPVs, which obviously include many well-known companies like Instacart and so on. How did you decide which companies were suitable for an SPV? Because this seems to be a very critical set of decisions that would lay the groundwork for you to have more choices later on.


Elad Gil: Yes. Like you said, I have always been very afraid of losing other people's money. If it's my own money that's lost, that's okay; that's my decision. I'm an adult and can bear the consequences.


But when it comes to other people's money, whether from individuals or institutions, and asking me to invest on their behalf, I have always been very cautious. Similarly, I am truly terrified of causing others to lose money.


So, when I was doing early-stage SPVs, I always tried to be extremely cautious. The key was to select projects that I believed could become massive companies. Like you mentioned, Instacart, early-stage Stripe, Coinbase, and a few other companies were among the first batch of SPVs I did.


My focus was very clear: Do I believe this company has the potential to become a huge entity? At the same time, I would also ask myself, does it have enough downside protection? In other words, even if it doesn't succeed as much as I envision, could it still be a decent outcome for the investors?


I would take this very seriously.


It's fascinating because many people come to me for advice, asking how they can become investors, or telling me they are scouting for a certain fund. Scouts are basically people a venture capital fund gives a small amount of money to invest on the fund's behalf. Sequoia, for example, has a well-known scout program that works this way.


Some scouts I've spoken to basically treat this money as "free money" or an option. Their mindset is: "Just throw money at a bunch of things, maybe one will hit." I would remind them: "Hey, if you ever really want to be a professional investor, this is your investment track record."


First, in a way, you are a trustee. Perhaps from this perspective, you should be more cautious.


Second, this will establish your track record, showing your performance history. Do you want a good record or a bad record? How do you think about this?


Of course, sometimes people are just lucky, hitting one out of a hundred projects, but that one project's return outweighs all losses, so they look very impressive. But to continue doing well in this, or consistently hitting great companies, is very difficult.


Tim Ferriss: Alright, I'd like to delve deeper into a few things you just mentioned. Perhaps you can take us through an anonymous case study, no need to mention the company name.


You mentioned earlier about building your investment track record. Before you later raised funds, you did very well in this aspect. I hope you can explain what you do during due diligence, or how you weigh different factors.


In addition, you just mentioned "sufficient downside protection" (I'm not sure if that was your exact phrasing), and I would also like to hear how you filtered these deals. Because from a due diligence perspective, you could have chosen many different deals.


What do you usually pay more attention to? What are the things that you prioritize more than others? And what are the things that you are less concerned about?


Elad Gil: Yes. I think there is a significant difference between early-stage projects and late-stage projects.


At the early stage, as mentioned earlier, I spend more time researching the market than most early-stage investors do. Many early-stage investors say, "I only care about the team, about how strong they are." But I have seen excellent teams crushed by a bad market, and I have also seen fairly average teams do very well.


So now, I consider the market more important. Of course, excellent teams can usually find their way if they are willing to pivot. But in the early stages, I place a heavy emphasis on the market. This may mean doing customer interviews or trying to understand: Do I believe something has the opportunity to become large enough?


Sometimes, this may also be just some kind of intuition. For example: "Hey, defense is very important, but no one is working on defense. So, I will go find a defense company." I usually place a lot of emphasis on this.


Related to this, I have always tended to avoid "science projects." Some people are easily attracted to these things: "Wow, this is so cool, it's quantum, and so on." But I basically avoid these things. Sometimes I miss out on some good opportunities because of this, but most of the time, this judgment is correct.


I actually think that SPACs saved the hard tech and science investment industry. Because if you look back, at the peak of the market, a group of SPACs took many companies that could no longer continue financing in the private market public.


This gave them enough money to keep going. More importantly, it allowed a group of hard tech funds to realize liquidity, avoiding closure. It gave these funds returns. It was basically the era of SPACs. Chamath actually saved hard tech. I'm serious, not kidding.


I generally stayed away from these types of companies. I'm not saying I'm very smart because if I had invested, I could have made money too. But at the time, I felt that these companies had capital structure issues, scientific risks, market risks, and many other issues.


At the later stage, the challenge is that, on paper, every late-stage company appears to offer a 2x to 3x return. The funds driving these rounds make investment assumptions based on a certain IRR clock, such as a 25% internal rate of return or a similar standard. So everyone builds models, and those models invariably show the same thing: these companies can basically grow 2x to 3x.
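The "IRR clock" logic above is straightforward compounding: if a fund underwrites to a 25% IRR over a typical four-to-five-year holding period (the holding period is assumed here for illustration, not stated in the conversation), the implied multiple lands almost exactly in that 2x-3x range:

```python
# Compounding a 25% target IRR over an assumed 4-5 year holding period.
irr = 0.25
for years in (4, 5):
    multiple = (1 + irr) ** years
    print(f"{years} years at 25% IRR -> {multiple:.2f}x")
# 4 years at 25% IRR -> 2.44x
# 5 years at 25% IRR -> 3.05x
```

This is why so many late-stage models converge on the same answer: the target multiple falls out of the IRR assumption and the hold period, not out of anything specific to the company.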


True art, or science — call it what you will — lies in judgment: Is this company really trading at 0.5x, or is it 10x? Will its value drop? Or will it increase by 10x? How do you know it's 10x and not 2x to 3x, or 0.5x?


This is the more challenging aspect of growth-stage investing. There are things where you make a judgment: "This company will continue to grow for this reason." But this judgment is often not mathematical. It usually comes from some market dynamic, a key insight, or a market-share question.


People often make this very complicated, create complex multi-page models, write 50-page memos, and so on. But many times, all these things can ultimately be boiled down to one question: What is the one thing I need to believe about this company to think it can become very large?


If you need to believe in three things, then it's too complicated and may be hard to justify. If you don't need to believe in anything, then it's meaningless. Generally, to truly understand an outcome, you only need to grasp one or two key insights.


Tim Ferriss: Can you give an example? Like what is the core belief of a particular company?


Elad Gil: Sure, let me give you two or three examples.


For example, Coinbase — one of its core rationales is that it is the index of the crypto industry, and the crypto industry will continue to grow. Because if Coinbase trades every major cryptocurrency and takes a cut from each transaction, as long as the trading volume is high enough, then investing in Coinbase is essentially like buying a basket of cryptocurrencies. That was the premise at the time.


Stripe's logic is: it is the index of e-commerce, and e-commerce will continue to grow. Of course, now Stripe is much more complex, with various factors driving its performance.


Anduril's logic is: machine vision and drones will become important, and AI and drones will also be important in the defense sector.


Tim Ferriss: Exactly.


Elad Gil: Of course, the actual situation is more complex than this. I'm just summarizing it this way.


Tim Ferriss: Yes, yes. I mean, as a core belief, that's the point.


Elad Gil: In Anduril, there's also the issue of cost-plus and hardware margin. It actually has four to five critical factors, somewhat like a checklist for evaluating a defense tech company. But for many other companies, the core judgment may really be: e-commerce will do well.


Tim Ferriss: This question may be a bit too industry-specific, but when you mentioned those companies earlier, what stage were they in approximately when you did an SPV? Just give us a rough idea.


Elad Gil: For Stripe, when I first invested, it was only eight people. Later on, I continued to make follow-on investments. To be honest, I started doing SPVs once I ran out of my own money. So my first SPV was around the time of Stripe's Series C, roughly that stage.


Tim Ferriss: I see. And were other companies at a similar stage? Like Instacart and so on?


Elad Gil: They were all in a similar range, around the Series C or D. At that time, I didn't have a fund or anything, so I invested as much personally as I could. Not just in the early stages, to be honest, whenever there was an opportunity, I just kept on investing.


Tim Ferriss: When you're trying to assess whether a company is a 0.5x or a 10x, aside from that core belief, what other aspects of due diligence do you use to determine where it falls in that range?


Elad Gil: I conduct extensive due diligence. For example, I meet with the CFO multiple times, go through financial data item by item, examine financial models, look at customer situations, call customers, research the executive team. I do a lot of things.


As far as I know, my fund is the only one that truly does cash reconciliations. For later-stage projects, we perform cash audits to look at the company's cash flow. So I do a significant amount of due diligence, because I want to make sure there is nothing inappropriate going on.


But on the other hand, most due diligence tends to converge on that one issue in the end.


So when I work with a company, I actually try to make the due diligence process very quick and direct. I would say: First, we need to confirm that the financial data is correct, and there are no fundamental issues; second, we narrow down the issues to one or two core questions to understand if this company can still grow. Instead of bringing up 30 pages of irrelevant questions.


Tim Ferriss: Right.


Elad Gil: Many people would say, "Hey, we need to know the secondary cohort data for this small product." But who cares? They are just wasting time, wasting the founder's time, wasting the team's time. I will work very, very hard to avoid this situation.


As a former entrepreneur myself, I know how valuable time is and how annoying those questions can be.


Tim Ferriss: I almost wanted to ask you this question at one point, but we don't need to spend too much time on it. You have an article from a long time ago, probably written in 2011, that listed the questions VCs would ask startup companies. It covered the kind of questions you just mentioned, though you left some of them out.


But I'm curious, when you're talking to founders, whether in early-stage or later-stage projects, are there any questions from those that you still use? Or are there other questions you would ask now? I know it's from 2011, so I don't expect you to remember the article itself.


Elad Gil: Yes. It's been a long time since I looked at that article. I'm actually writing another book now, talking about the stages of startup companies from 0 to 1, which will involve some similar questions. But the reality is that since I wrote that article, the VC industry has undergone significant changes. Because in 2011, VC funds mainly did seed rounds up to around Series D or E, and then the companies went public. This situation of "private companies surviving for 20 years" did not exist back then.


Do you know why stock options have a four-year vesting period?


Tim Ferriss: I don't know. Why? Now that we're talking about IPOs, I can probably guess a bit, but why?


Elad Gil: Yes. In the 1970s, a four-year vesting period was designed for employee stock options because companies usually went public within four years. And that was it. Quite literally.


So it's usually a four-year clock. Later, Google took six years to go public, and everyone said, "Wow, they are going public too slowly, taking six years. They just waited like that." Do you see what I mean?


Tim Ferriss: I see.


Elad Gil: Really, that's what people used to say at the time. So VC in the past was mainly very early-stage investing. And what we call growth-stage investing today actually belonged to the public markets back then; it was something public market investors would do four or five years after a company was founded.


In other words, in the past, the public markets would get involved very early. Later, with the Sarbanes-Oxley Act coming into force, companies became reluctant to go public, and at the same time more and more capital flowed into the private markets, prolonging the time it took companies to go public.


So, venture capital funds suddenly started doing growth-stage investments that had previously belonged to public market investing. And in 2011, this was not yet widespread. It was mainly DST's Yuri Milner and a few others doing it, but it was not a large industry at the time.


So over the past 15 years, the nature of venture capital has fundamentally changed. This also means that the questions I listed back then did not include what I would now consider more growth-stage questions, because there wasn't much growth-stage investing in venture capital at that time.


Tim Ferriss: Can you give some examples of growth-stage questions?


Elad Gil: Frankly, it will overlap with some early-stage questions. But at a very late stage, the questions become more financially driven.


Usually, my team and I look at: What is the core business of the company? How do we extrapolate its future development? And then, what are the ancillary businesses the company is working on? These things are almost like future options, which may or may not be exercised.


So usually, we make investment judgments based on the core business: Can it continue to do what it is doing now? Because most companies mainly grow based on one thing. At least in the first decade. Companies that truly have multiple businesses running simultaneously are very few.


Usually, one thing succeeds first, and then ten years later, you might do a second thing that is truly effective. Like Google's Google Cloud. Of course, Google also has YouTube, as well as many other things now, such as Waymo and various interesting businesses. But these all took a long time.


For a long time, Google was actually just search, it was search and advertising. But sometimes, companies will also have some additional businesses that become very interesting growth drivers. Like SpaceX was initially a launch business, then it became a satellite business, which is Starlink.


Tim Ferriss: Yes, Starlink is really amazing. Unfortunately, there are too many trees blocking the view where I usually stay, so I can't use it.


“Scaling Startups”: Boards, Distribution, and Organizational Scaling Should Be Actively Designed


Tim Ferriss: But let's switch to this book, "High Growth Handbook." It came out about seven years ago. It's an excellent book, and you really should take a look, especially if you're supporting venture-backed startups. What was its subtitle again? "Scaling Startups from 10 to 10,000 People." This book has a lot of great advice.


I want to ask you, is there any content in this book that you hope entrepreneurs, who are the target readers of this book, can pay more attention to? Or is there any content that you would like to add or expand on now?


Elad Gil: Yes. When I was writing this book, I originally had an outline that, in terms of chapter count, was probably two to three times the actual length of the book. So there is a lot of content that I did not include, such as sales, marketing, growth, and many other things.


But this book is essentially a tactical guide; it is not the kind of book you read from cover to cover. It contains many interviews with different people, all of whom I believe are among the best practitioners in their respective fields worldwide.


But fundamentally, this book is meant to be used like this: if you suddenly need to deal with an acquisition, you jump to that chapter on acquisitions and read it. Then you set the book aside. When you encounter hiring issues later on and need to refer to relevant content, you go back to that chapter.


So it really is a manual, a guide, or a founder's companion. It is not the kind of book where you say, "Hey, I'm going to read from start to finish, and there will be some concise quotes inside."


It is also not the kind of book that is 500 pages long and only talks about one concept. I tried to avoid that kind of thing. So it is very tactical, very specific, and very actionable.


And the new book I am currently writing is essentially the 0 to 1 version of this book. For example, as a startup, how do you hire your first five employees? Someone wants to acquire you, what do you do? How do you complete your first round of funding? It is this type of content. So it is somewhat like a tactical guide from 0 to 1.


Elad Gil: Yes, total addressable market, abbreviated as TAM. The key is: what market are you really in? Sometimes, people fabricate false markets. They might say, "Oh, we are powering global e-commerce, and the global e-commerce market (let me make up a number) is $30 trillion annually, so we are in a $30 trillion market. If we capture just 0.1% market share, that's $30 billion in revenue."
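A quick check of the made-up TAM arithmetic above (all figures are the speaker's deliberately invented numbers): even a 0.1% sliver of a $30 trillion market sounds enormous, which is exactly why the framing is seductive:

```python
# Sanity-checking the invented TAM pitch: 0.1% of $30 trillion.
tam = 30e12      # $30 trillion annual market (made-up figure)
share = 0.001    # 0.1% market share
revenue = tam * share
print(f"${revenue:,.0f}")  # $30,000,000,000
```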


Then you might think: this is not really your market. Your market is that you have created a small optimization engine for small business websites, which is not a $30 trillion market.


So, defining the market is truly important.


Here is a very famous example that illustrates how redefining the market can change your understanding of it, and that is Coca-Cola. For decades, Coca-Cola and Pepsi's market share was almost identical. Then a Coca-Cola CEO said, "Maybe we shouldn't measure ourselves by soda market share but by all liquid beverage share."


So, their market share went from roughly 50% to 0.5% overnight. That's also why they later launched Dasani and entered many other markets: they realized they had defined their market incorrectly. They were not in the soda business, but in the beverage business.


So I think that sometimes reinterpreting what you are doing can indeed change the scope of your ambitions and will also change the way you think about your business.


Tim Ferriss: Yeah. If you are trying to find a dogma in the AI world similar to "Fraud in the payment space will kill you," is there anything you think may not hold true now, or may not hold true at all in two years, but many people have taken it as some kind of "thou shalt not" or "thou shalt" creed?


Elad Gil: I'm not sure. In the past, there have indeed been some claims, such as whether the return on these massive capital expenditures can actually be realized. I think these claims may be wrong.


But fundamentally, there are some moments when thinking in reverse is very clever; and there are also times when following the consensus is actually the smartest thing you can do. I think now is such a time: it is very right to stand on the side of consensus.


You can certainly overthink, for example, "What is the contrarian view? We should do a bunch of hardware because of this and that..." But you may find that perhaps the simplest answer is: buy more AI. Do you see what I mean? I think people have made these things too complicated.


Tim Ferriss: Yes, indeed. Probably every aspect of life is like that.


So, for you, what would go on the "do not invest" list? Suppose you are guiding someone you care a lot about. We can fictionalize a character, like your best friend's nephew, son, or daughter, very smart, MIT engineering graduate, has had some good results in angel investments, and then they say, "Well, I think I'm going to start raising a fund."


But they may not have the kind of project pipeline you have in the AI field. Suppose that's the case. Would you categorically tell them what not to invest in? Because those things are likely to be destroyed, swallowed, or replicated by AI?


Elad Gil: I think the reality is, when people are just starting to invest, many times the reason they can do early-stage funds is that as soon as you start helping others, you can always get investment opportunities at the very early stages of a company.


This is actually something I stumbled into. But the reality is, I have also repeatedly seen this kind of situation: you get into the right group of people, because the smartest people naturally gravitate toward each other. Then you start helping others, and they ask you if you want to invest. You start investing, and suddenly you have a good track record. Then you can raise larger funds and start investing in later-stage projects. Because that group of people has also grown and started doing later-stage companies, you suddenly have access to other project pipelines as well.


This is basically the traditional venture capital story. In a sense, it has been like this for decades. So I think this is still entirely feasible. You can do this in AI and in any field. I don't think you have to go off and do energy investments or something like that.


Tim Ferriss: You mentioned before an important lesson you learned from Vinod Khosla — perhaps saying "important lesson" is a bit of an exaggeration, you can correct me. The gist is: your market entry strategy is often different from your market disruption strategy.


Elad Gil: Yes.


Tim Ferriss: Can you talk about this?


Elad Gil: There are probably two to three versions here.


The first version is when the thing you start with looks odd, like a toy, but in the end, it becomes very important. For example, Instagram, Twitter, or some social-leaning products. The initial use case for them is very different from how people use them today. The product itself evolves, and people's understanding and usage of it evolve as well.


This version is usually more consumer product-oriented.


Another version is SpaceX and Starlink. SpaceX initially focused on launches, sending things into space. Later, they realized they had a cost advantage in satellite launches. So they built the Starlink network, which has now become a significant driver of their business.


Therefore, what they do has expanded significantly and undergone a transformation. In a sense, their market entry strategy was space launch, but the real disruptive strategy is Starlink. I think there are many similar examples in history.


In an Age of Uncertainty, Information Advantage Comes from Models, Experts, and Long-Term Planning


Tim Ferriss: Going back to the topic of information gathering and consumption. How do you usually get information? If you were to chart it out as a pie chart, podcasts, books, X, whitepapers, papers, or other channels, roughly what percentage would each occupy?


Elad Gil: I think my current sources of information have basically converged into three categories.


The first is X. The second is reading some technical papers or journals. Sometimes, if it's more toward biology, even though I don't do bio investments, I just like it. The third is talking to people.


However, because the competition in the AI field is very intense now, the quality and quantity of papers in the AI industry have significantly decreased. I find that chatting with someone particularly smart about a topic for 20 minutes often provides me with more information, insights, and clues about what to read next than doing an exhaustive search on my own.


In fact, there is a fourth source I'm using for research now, which is models. It could be OpenAI, or it could be Claude, Perplexity, Gemini. Each of them is suitable for different things, and I use them for different tasks.


Tim Ferriss: What will you use each model for?


Elad Gil: Let me give you an example without going into each one.


For example, Gemini, if I want to look up some activity-related information, like "I want to plan a trip." I would think Google's corpus and their long accumulation of information are very useful for certain types of travel recommendations. So, this is a scenario where I would specifically use Gemini.


It's not that other models don't do well, but I have found that when I use it, the rankings I get are usually more accurate. I would have it break down, rank, score from multiple dimensions, and so on.


I have also previously delved into several aspects of ADHD and ASD.


Tim Ferriss: What is ASD?


Elad Gil: Oh, sorry, it's Autism Spectrum Disorder.


Tim Ferriss: Got it.


Elad Gil: If you look at autism, its diagnosis rate has changed significantly. I may be misremembering the numbers, so I should look it up again later. But from my recollection, three or four decades ago, the rate of people diagnosed with autism was probably a few in a thousand, and now it's around 3%.


So you might ask: what is this all about? Is it because older parents are having more children? But it turns out this is not the primary driver. Has there been some change in the environment? In the end, it seems the main reason is just a change in diagnostic criteria.


Furthermore, there are many incentives in the school system that would encourage people to have their children diagnosed. That's probably why there are so many children classified as having attention deficit, or classified as autistic. On the attention deficit side, doctors also have financial incentives because they can prescribe medication. And autism diagnoses have also increased significantly.


But I'm not sure if there are actually more people with these issues. It's more likely that the diagnostic scope has expanded significantly.


Tim Ferriss: Which model did you use to study this at that time?


Elad Gil: Usually, when I do this kind of thing, I use two or three models simultaneously. Then I ask them to provide primary literature and then have them organize and summarize the charts.


I actually have a whole set of output requirements to have them generate results in the format I need, so I can go back and cross-check the data, read literature, and do other checks.


The topic of autism is particularly interesting. Because some studies show that the mother's age actually has a greater impact than the father's age. But people always talk about the father's age. And then you ask, "Why does everyone only talk about the father's age? Is there some societal incentive? Is there some political belief system? Why is the focus always here?" I think this is very interesting, right? This kind of research will lead to many "why" questions.


Tim Ferriss: Why do you specifically study this?


Elad Gil: I find it interesting.


Tim Ferriss: I see.


Elad Gil: I would think, "This ratio seems to have increased a lot, so I try to understand why."


Also, I was chatting with a friend at the time. She was in her mid-to-late thirties, dating a man approaching fifty or in his early fifties. She mentioned that if they were to have a child in the future, she was concerned about the risk of autism and what might happen. So that's also part of the reason I delved into this research.


I can't quite remember the final conclusion now. For example, let me casually mention a number, don’t quote me on this number. Later I can look it up. But it's probably like, for every 5 to 10 years increase in both the father's and mother's age, the risk increases by around 10%.


And again, in some datasets, the impact of the mother's age is actually slightly stronger. The issue is, if you think the true proportion of autism in the population is one in five thousand, or a similar proportion, then this 10%, 20% difference, from the overall frequency perspective, is not that important. The real significant change is in the diagnostic criteria.


Tim Ferriss: Yes, this applies to many diagnoses.


Elad Gil: Many things are like this. But society tells us, "Oh, the autism rate is increasing mainly due to parental age." And then you think, "No, it's actually these incentive mechanisms causing it."


And then you look at some school systems, and you'll find that in a state like New Jersey, for example, I remember 60% of autism diagnoses are not actually based on any clinical criteria, but on teachers informally saying a child has autism.


Tim Ferriss: Oh my, that's terrible.


Elad Gil: So when you start digging into these things, you'll feel like, "Wow, this is so interesting." And these models are very valuable and very helpful in this regard.


So going back to your question about sources of information, I now have a part of the information where I do in-depth research on questions that interest me through models. I have them aggregate clinical trial data, or aggregate different types of information, then give me firsthand sources, summaries, and do cross-checks.


I have a whole set of prompts to clean and check the data. So it's a lot of fun. And I always use multiple models at the same time to see what each of them comes up with.


Tim Ferriss: When you're chatting with people, this topic may be a bit broad and may not necessarily be able to delve deep. But assuming you've found someone you want to chat with for 20 minutes, how do you usually find these people?


I guess there are many ways, but do you find them through X, through technical papers, or through other means? I'd like to get a general idea. And then, when you're on the phone with such people, do you have some reused question leads, or some fixed ways of asking questions?


Elad Gil: I think there are three scenarios.


The first one is: "Hey, I'm doing in-depth research on a certain field because I find it very interesting, or it may be related to a direction I want to invest in." But honestly, most of the time it's just because it's interesting.


Then I'll quickly triangulate: Who are the smartest people in this field? This may come from technical papers, or it may be me asking everyone I chat with, "Who's really smart in this field?"


This is one form of it. It's more informational, where I'm trying to delve into something. For example, when I was at Google, I worked with some early AI researchers. This is also why I know Noam Shazeer, who later founded Character, then returned to Google. And because of that, I've met many other people.


But there are also some people who I just look up because I saw an interesting paper, or because everyone says this person is very smart, so I go talk to them. This is one form.


The second form is that I do believe smart people tend to gather together. So, if you often hang out with smart people, and they keep meeting other smart people, this network will naturally expand. Learned people often hang out with learned people. To some extent, birds of a feather flock together. This is the second type.


This is probably the two main ones. Of course, sometimes people will directly introduce me to someone. They will say, "Hey, I think you two would get along."


Another situation is that there are some people I keep going back to. For example, if I think someone is one of the most knowledgeable about the future of AI, then I will often have chats with them.


Or on the topic of longevity, there are also some very smart people. For example, BioAge's CEO Kristen, I sometimes call her with random longevity-related questions because she knows so much about every aspect of the field. She thinks deeply and is very willing to challenge her assumptions.


She is a real truth-seeker. Many people use the term "truth-seeking," but she is really the kind who says, "What is true? Let me figure it out." She has a Ph.D. and postdoc background in bioinformatics and aging research, very professional. So when it comes to longevity-related questions, she is one of the people I would call to consult.


So, for different topics, I have some go-to people.


Tim Ferriss: You have some understanding of biology. I find it interesting: back when I attended the first Quantified Self meetup, around 2008, there were only 12 people sitting in Kevin Kelly's house, discussing how to track body data with Excel spreadsheets. The world has changed, hasn't it? Now there are thousands of self-proclaimed biohackers talking about longevity. Of course, there is also a lot of nonsense.


For you personally, where do you stand now in terms of interventions, or your thoughts on intervening in yourself?


Elad Gil: I haven't done much. Many things ultimately boil down to: get good sleep, exercise more, and so on. Some things are more important, like eating well. So I basically consolidate many things into these basics.


I think there may be one or two things you can eat that are indeed helpful. And there are some things that I have always thought would be interesting to experiment with but haven't done yet.


Tim Ferriss: Such as?


Elad Gil: Like trying a rapamycin pulse protocol, for example. I think that would be cool, something like that. But the reality is, I'm actually waiting for a truly effective drug to come out, and maybe I'll use it then.


Some things I think will indeed affect longevity, or affect certain systems. Like we discussed earlier, as you age, the muscles that control the eye lens weaken, which is part of the reason your focusing ability deteriorates. So theoretically, there should be eye drops for this issue.


There's a lot more on sensory aging that I'd like to fund a startup to work on.


There's also a lot on aging appearance that I've been talking about and want to fund related projects. I actually funded a clinical trial at Stanford to study this because I think this area is severely underinvested in.


In my view, peptide products are essentially in this direction as well. Many people take peptides for certain health purposes, and some are for cosmetic applications, such as GHK-Cu, melatonin, etc., many of which are fundamentally more cosmetic in nature.


Tim Ferriss: You mentioned earlier that there are a few things that seem worth ingesting. Are you referring to things like vitamin D? Or are there others? What's on your short list?


Elad Gil: Vitamin D and creatine.


Tim Ferriss: Got it.


Elad Gil: If you're looking to lift weights.


Tim Ferriss: Yes.


Elad Gil: I don't know. What's on your list? You think about this much more than I do. What are you currently taking or thinking about?


Tim Ferriss: I'm actually much more conservative than people might imagine. I tried a lot of things in my early years, many of which had a relatively bounded downside.


For instance, I tried the first-generation Dexcom continuous glucose monitor in 2008 or 2009, and it was very uncomfortable to wear. At the time, I didn't know of any non-type 1 diabetics using it.


But I haven't ventured much into the more controversial things, like flying to other countries for some kind of gene therapy or using something like follicle-stimulating hormone. Not that I would criticize it, but I think the basic heuristic of "there ain't no such thing as a free lunch" is simple but quite useful. At the very least it can help you avoid a lot of pitfalls.


So, I have indeed tried some things. For example, different forms of ketone esters and ketone salts, some of which I find very, very intriguing for cerebrovascular reasons.


Because I have a family history of Alzheimer's, Parkinson's, and so on, including some APOE3 carriers, and obviously there are many other risk factors, I'm very focused on this area.


Obicetrapib is something I think is worth watching, although it's not at a stage yet where it's ready for prime time. Rapamycin is also fascinating. I do think rapamycin is fascinating, but it requires a lot of asterisks because if you don't know what you're doing, you can screw yourself up. If you're trying any immunosuppressants, you have to be extremely careful.


For example, an experiment I might do is to combine Norwegian 4×4 interval training with a rapamycin pulse scheme to observe if there are any changes in the volume of the hippocampus and other brain areas.


Of course, if I only did one intervention, the signal would be cleaner. But real life is sometimes different from waiting for scientific conclusions.


So I think this is a hypothesis worth testing and interesting. Besides that, what I do is basically very basic stuff: creatine, vitamin D.


If you have methylation issues, or are taking drugs like omeprazole as I am, which can inhibit magnesium absorption and affect other things, you need to pay attention to that. But overall, nothing too fancy.


I find urolithin A quite interesting, and the related data has been increasing. So I am indeed interested in mitochondrial health.


This may also include regular intermittent fasting and occasional three- to seven-day fasts. Recently, for me, this might be based on the fasting-mimicking diet recommended by Dr. Dominic D'Agostino. The goal is to promote autophagy and mitophagy on a certain rhythm, but not to do it all the time.


Elad Gil: Of course.


Tim Ferriss: I'm not looking to optimize this all the time.


Elad Gil: There's one thing I keep thinking about. If you look at a computer, a key to fixing a laptop, or any system, is to restart it, right? You reload the system, and it miraculously starts working fine; a lot of messy state also gets cleared. Does the human body have something similar? Like anesthesia? Recently, some people have been doing some kind of nerve block.


Tim Ferriss: Yes, I'm not sure. It sounds a bit scary. Oh, are you talking about celiac plexus block?


Elad Gil: Yes, that's it, celiac plexus block.


Tim Ferriss: Yes. About this "reboot" idea, sigh, I have to sigh first, because there are some very interesting options, but they usually apply to very specific use cases.


In concept, it makes sense. You may be more qualified to talk about this than I am, but I have spent a lot of time interacting with neuroscientists. A large part of my information intake is reading neuroscience-related content, or trying my best to read it. Fortunately, with AI tools, this has become much easier. They not only help you summarize material but also help you work through concepts, layer by layer, in a reasonably sensible order.


I have read a lot of content on neuroscience and also a lot of content on optics. In fact, these two fields have quite a strong intersection, which may not be surprising. For example, if you look at PBM, that is, Photobiomodulation, it involves intervention through the eyes. Of course, it can also be done transcranially. However, I would like to remind everyone to be cautious about these kinds of things.


As for the "reboot" direction, let me give an example. Some people using GLP-1 receptor agonists for weight loss also experience, to a lesser extent, similar changes. For instance, they may quit smoking, reduce alcohol consumption, or experience some kind of systemic improvement in impulse control.


Elad Gil: Yes.


Tim Ferriss: For someone addicted to an opioid, I think ibogaine might be a "reboot" option. In the future, it may appear in the form of some active metabolite or a similar form. But at least for now, the so-called flood dosing, which is a relatively high-dose shock-like administration, still seems quite necessary.


Of course, this must be done under medical supervision, as it may trigger life-threatening cardiac events. Co-administration of magnesium seems to be helpful, but this is still a dangerous thing, so everyone must be careful.


There have been many people in history who deserve recognition for this, such as Howard Lotsof and his wife. Opioid addicts undergoing high-dose ibogaine treatment may enter a window of time during which they will not experience withdrawal symptoms, at least not physically.


I think ibogaine or a pharmacological intervention similar to ibogaine may have other applications. Honestly, some of the craziest things related to this molecule are the so-called reversal of "brain age." Of course, I am skeptical of this simplistic description.


But from MRI scans, it may indeed alter brain state. Nolan Williams and his lab have meticulously studied the changes in veterans with traumatic brain injury before and after taking ibogaine. Some of the effects may be related to glial cell line-derived neurotrophic factor (GDNF). You may be more familiar with BDNF, brain-derived neurotrophic factor.


So, ibogaine is an interesting option.


As for anesthesia, I am now much more cautious about general anesthesia. I just had surgery yesterday and opted for local anesthesia. In that case it wasn't a big deal, because they were just cutting something off the top of my head, where you can still see what's going on.


But going back to the autism spectrum disorder and ADHD examples you just dissected, you mentioned the reward mechanism, which in some cases may involve a perverse incentive toward overdiagnosis. To apply a Mungerism, although I don't want to keep quoting him: follow the money.


Many people actually don't need general anesthesia but are scheduled for it anyway. And general anesthesia can add a very, very large cost to the bill. Plus, some people, after undergoing general anesthesia, never regain the same memory recall; their personality may also become somewhat unstable.


The fact is, our understanding of many anesthetics is very limited, really very limited. We know it works, but our understanding of its mechanism is poor. Many people are not aware of this, which is actually normal unless they have spent a lot of time researching these things.


There are many very well-known, widely prescribed drugs whose mechanism of action is actually very unclear, even completely unknown. We only know from research that they appear to have good tolerability, the side effect spectrum ranges from A to Z, they seem to indeed produce some effect, or affect some biomarker. But we actually don't know how they work.


Many things fall into this category. So, I am very cautious about many of these things.


But going back to your question, I just felt like I did a TED Talk. As for "reboot," the most interesting thing I've seen is ibogaine. Of course, I don't want to simplify it to just the dopamine system because it involves much more. But I think the most important significance of ibogaine, perhaps not just ibogaine itself, is what it shows is possible.


And I also don't know if this possibility is limited to drugs. I am very optimistic about brain stimulation. Of course, there will definitely be some failures along the way and some not-so-pretty side paths. But I believe that brain stimulation, as well as the broader field of bioelectric medicine, will be one of the next important frontiers. It will be used not only to treat what we call mental disorders but also to enhance performance.


Now this field has also reached a stage where it can answer the question "why now," right? As a field, it does have some very good answers to "why now." I think people will extensively try these things, and it doesn't necessarily have to be in the form of pills, potions, IV drips, but through non-invasive brain stimulation. Of course, in an implant scenario, there may also be some invasive approaches.


So, this is a long answer. But that is probably the direction I am currently thinking about and tracking. There are still things we need to keep observing, but I think many of these things may turn into outpatient procedures in the future. You walk in, stay for an hour or two, and then come out. We'll see.


I'll ask a few more questions. If there's anything else you'd like to discuss afterward, we can continue. But really thank you for taking the time.


Tim Ferriss: Five years from now, looking back at today's Elad, are there any beliefs or positions, AI-related or not, that you feel are relatively likely to be proven wrong?


Elad Gil: That's a great question. I think I would get a lot of things wrong. We are going through a period of tremendous change, and tremendous change means tremendous uncertainty.


So, if I were to predict what would happen now, I wouldn't be surprised if half of it didn't happen in the end, or if it happened more intensely, or unfolded in other ways. That's also part of what makes it interesting.


If the future could be perfectly predicted, it would be very boring because we would know exactly what is going to happen next, which would be terrible. This also ties into various concepts like free will. So, I am definitely going to be wrong a lot.


There's also a separate question that is an exercise I've been doing recently, something I've never really done before. There are many things in life that, as John Lennon said: Life is what happens when you're busy making other plans.


But this is the first time I've really been thinking: From several different dimensions of life, what is my ten-year plan?


The premise is that I definitely won't get it completely right. You can make a ten-year plan, but of course it won't happen exactly the way you envision. More importantly: does making it change the scope of your ambition? Does it change the way you think about life?


So I've been trying to think in this way recently: What do I want to do in the next ten years? And that means, in order to get to that state in ten years, what should I be doing in the short term now?


It has been very eye-opening for me. It has changed my mindset on what things to try and what not to try.


Of course, those who believe in AGI would say, "We'll have AGI in two years, so your plan doesn't matter at all." But I think that's a very defeatist attitude. As if just because that thing might happen, I should give up.


On the contrary, I would rather say, "Well, I'll start with this plan and then adjust as needed." In this period of change, there may be some very interesting things in the world worth doing.


Tim Ferriss: Elad, before we wrap up, is there anything else you'd like to say? Any words for the listeners, requests, or things you'd like everyone to check out? People can find you on X, handle is @eladgil, and also visit eladgil.com, and of course, your Substack blog at blog.eladgil.com. We'll include all the links in the show notes. Anything else you'd like to add?


Elad Gil: No, this was a real delight, and I'm super grateful that you had me on.


Tim Ferriss: Thank you, brother. Always a pleasure.


[Video Link]



Welcome to join the official BlockBeats community:

Telegram Subscription Group: https://t.me/theblockbeats

Telegram Discussion Group: https://t.me/BlockBeats_App

Official Twitter Account: https://twitter.com/BlockBeatsAsia
