Prediction: Nvidia Stock Will Reach $10 Trillion Market Cap By 2030
Nvidia has a market cap of $3 trillion today. We believe Nvidia will reach a $10 trillion market cap by 2030 or sooner through a rapid product road map, its impenetrable moat from the CUDA software platform, and its position as an AI systems company that provides components well beyond GPUs, including networking and software platforms.
In 2021, I published an analysis on Forbes “Here’s Why Nvidia Will Surpass Apple’s Valuation in 5 Years” that stated: “Nvidia has a market cap of roughly $550 billion compared to Apple’s nearly $2.5 trillion. We believe Nvidia can surpass Apple by capitalizing on the artificial intelligence economy, which will add an estimated $15 trillion to GDP.”
Yesterday, Nvidia officially surpassed Apple in market cap, which means I delivered on my prediction 2 years early.
This raises the question: what do I foresee next for Nvidia, and how am I approaching this heavy hitter in AI? My firm champions full transparency by issuing trade alerts for every buy and sell we make; thus, I’ve included at the end a transparent discussion of how my firm is managing our position today.
But first, I unpack why I believe Nvidia can achieve an astonishing $10 trillion market cap by 2030. As you’ll see from the key points to my thesis, there is a bull case where a $10T market cap estimate in a little over six years’ time is not high enough.
“Millions of GPU Data Centers are Coming.”
On June 2nd, Jensen Huang made a very important statement about the future of AI that answers quite succinctly why Nvidia is on the verge of becoming the world’s most valuable company:
“The days of millions of GPU data centers are coming. And the reason for that is very simple. Of course, we want to train much larger models. But very importantly, in the future, almost every interaction you have with the Internet or with a computer will likely have a generative AI running in the cloud somewhere. And that generative AI is working with you, interacting with you, generating videos or images or text or maybe a digital human. And so you’re interacting with your computer almost all the time, and there’s always a generative AI connected to that. Some of it is on-prem, some of it is on your device and a lot of it could be in the cloud […]
And so the amount of generation we’re going to do in the future is going to be extraordinary.” – Jensen Huang, CEO of Nvidia, Computex keynote
Today, there are tens of thousands of GPUs in data centers. By the end of 2025, there will be hundreds of thousands of GPUs in data centers. Due to the market’s forward-looking nature, 2025 is getting close to being fully priced in. Here is a slide of what this looks like from the perspective of scaling Ethernet networking to support a million-plus GPU cluster.
Here’s what we know about Big Tech’s purchases, thus far. Microsoft is reportedly looking to triple its GPU supply to 1.8 million GPUs this year to meet elevated demand for Azure, while Meta has disclosed its GPU orders with an announcement for 150,000 H100s last year and 350,000 H100s or H100-equivalents this year. Musk announced that X’s 100,000 H100 cluster would be online in a few months and hinted at a possible 300,000 B200 GPU purchase.
According to Next Platform, Meta has roughly 600,000 GPUs deployed, including previous generations such as Ampere. This could include some from AMD, although AMD is more likely to ramp in 2025 and beyond. Right now, Nvidia has a $100 billion data center run rate compared to AMD’s $4 billion; therefore, any portion of GPUs from AMD is nominal as it stands for 2024.
If we look closer at semantics, Huang used the word “millions” and not the singular word “million,” and “data centers” rather than the singular “data center.” Therefore, my firm is making the assumption that companies like Meta will grow their data center GPUs by a minimum of 233% from 600K to 2M by 2030.
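The assumed fleet growth is simple arithmetic and can be sanity-checked in a few lines (the 600K figure is the Next Platform estimate cited above; the 2M figure is my firm's assumption, not reported data):

```python
# Back-of-the-envelope check on the assumed GPU fleet growth for a
# hyperscaler like Meta: 600K GPUs today scaling to 2M by 2030.
def growth_pct(start: float, end: float) -> float:
    """Percentage growth from start to end."""
    return (end / start - 1) * 100

current_gpus = 600_000   # deployed GPUs today (Next Platform estimate)
target_gpus = 2_000_000  # assumed fleet by 2030

print(f"Implied growth: {growth_pct(current_gpus, target_gpus):.0f}%")  # 233%
```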
Broadcom shares a similar view: management expects million-GPU clusters by 2027, compared to clusters with tens of thousands of GPUs today. This is even more bullish than Jensen Huang’s comments. Coming back to Meta: even with 600,000 H100 equivalents, it’s building clusters of 24,000 GPUs. In order to see singular clusters scale to the hundreds of thousands and millions, as Broadcom is predicting, we would need to see GPU shipments far in excess of those levels. This alone could get us to a $10 trillion market cap based on Big Tech’s data centers, and we have not factored in the enterprise. The enterprise includes companies like the Fortune 500 or Global 2000 that build on-premise AI systems.
We can cross-check this against comments from CEOs such as Lisa Su, who stated AI accelerators will reach $400 billion by 2027. Nvidia has over 95% market share of data center GPUs, but with custom silicon ASICs and more GPUs coming online, this is closer to 80% market share of AI accelerators.
If this estimate materializes, Nvidia’s data center segment will be at $320 billion in 2027, up from a data center run rate of $90 billion today, with consensus at roughly a $145 billion data center segment by the end of calendar year 2025 (consensus is total revenue of $157.51 billion, deducting for other segments).
In my analysis last month on the Blackwell architecture, I made the argument that these estimates are too low and that my firm expects a $200 billion data center segment by the end of CY2025, propelled forward by the B100, B200 and GB200, including the following points: “Taiwan Semi’s CoWoS capacity, which is essential for Blackwell’s architecture, is estimated to rise to 40,000 units/month by the end of 2024, which is more than a 150% YoY increase from ~15,000 units/month at the end of 2023. Applied Materials has boosted its forecast for HBM packaging revenue from a prior view for 4X growth to 6X growth this year.”
A data center segment of $320 billion by 2027 would represent roughly 260% growth from where Nvidia’s DC stands today, and 120% growth from DC revenue estimates for the end of CY2025. Using Lisa Su’s prediction, there would still be another three years to achieve the additional 120% needed to reach $10 trillion.
Industry analysts project a CAGR in the high 30s for AI accelerators through 2030, with estimates ranging from 36.6% to 37.4%. If we round this up to a 40% CAGR for Nvidia, then it’s not out of the question that Nvidia ends the decade with $800 billion from AI systems. That would be roughly 450% growth from $145 billion at the end of CY2025. This is the most bullish scenario, which is why my current prediction is (for now) the tamer 233% growth by 2030.
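The compounding behind that bull case can be reproduced directly (a sketch using the consensus base and the rounded-up CAGR quoted above; nothing here is new data):

```python
# Reproduce the bull-case math: a $145B data center base at end of CY2025
# compounding at an assumed ~40% CAGR through 2030.
base_2025 = 145  # $B, consensus data center estimate for end of CY2025
cagr = 0.40      # rounded-up AI accelerator CAGR

revenue_2030 = base_2025 * (1 + cagr) ** 5  # five years of compounding
growth = (revenue_2030 / base_2025 - 1) * 100

print(f"2030 data center revenue: ${revenue_2030:.0f}B")  # ~$780B, roughly $800B
print(f"Growth from 2025 base: {growth:.0f}%")            # ~438%, roughly 450%
```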
Valuation is one of the most important points confusing many investors (and short sellers) as to why Nvidia’s stock continues to extend. We’ve called the valuation eerily low, as most hypergrowth stocks would trade well above historical averages after a 500% move in 18 months. However, due to the 600% increase in earnings and 400% increase in revenue, the stock has remained well below its historical averages, in fact trading near October 2022 levels. To put this in perspective, on a forward P/E basis, Nvidia was more expensive at the start of 2023 than it is today: it currently trades at a forward P/E of 44, compared to 62 in January 2023. You can view a clip here where I stated the stock was trading eerily low. This is still true today.
The Technological Feat that Nvidia Accomplished
Many investors are surprised that Nvidia has surpassed Apple, and will pass Microsoft any day now to become the world’s most valuable company. Really, a gaming company? All of this from GPUs?
I want to make it abundantly clear that, from a technological standpoint, Nvidia has run circles around the FAANGs over the past 8 years. Apple has sat stagnant while Nvidia is in its Steve Jobs era. The result is that Nvidia is no longer a GPU company; it’s an AI systems company. The best ten or fifteen minutes an investor can spend in today’s market is understanding exactly what Nvidia accomplished to get to $3T; otherwise, it will not be clear how it can get to $10T.
Below, I take you through the key points from each generation, including the moment Nvidia transitioned from being a GPU chip company and a gaming company to becoming the AI systems company that is powering a $15 trillion economy.
For ease of reading, I’ve bolded key takeaways and also underlined the not-to-miss points:
Pascal:
In 2016, Pascal featured 7.2 billion transistors and increased CUDA cores compared to the previous generation, Maxwell. CUDA cores are parallel processors that can perform complex calculations and execute tasks on graphics cards much faster than a central processor. Parallel computing is at the heart of why Nvidia transitioned from gaming to AI, as GPUs can execute multiple tasks at the same time (concurrently). Each generation increases CUDA cores, which expands what workloads are possible. CUDA cores distribute compute across thousands of cores to train large-scale neural networks and process big data at scale.
Pascal was built on TSMC’s 16nm process and Samsung’s 14nm FinFET process with 16-bit floating-point precision, plus NVLink, a bi-directional interconnect for scaling multiple GPUs per application. TSMC’s CoWoS packaging was used to support high-bandwidth memory (HBM2).
Volta:
Volta was built on a 12nm FinFET process with 32GB of HBM2, 900GB/s of memory bandwidth and 21 billion transistors. The breakthrough here was the introduction of Tensor cores for AI, machine learning and deep learning.
Tensor cores handle tensor and matrix operations, resulting in higher performance for neural networks. Tensor cores are capable of mixed-precision calculations, which contributes a significant amount to the “1,000 times increase in AI compute” quoted by Nvidia this past weekend. For example, switching from a 32-bit floating point to a 16-bit floating point can significantly increase training speed by requiring less memory and speeding up data transfer operations.
With the introduction of Tensor cores, Volta was officially the first AI accelerator in history, as it was designed for large-scale training and connected up to eight GPUs. With Tensor Cores, Nvidia combined the benefits of parallel processing and general-purpose compute from CUDA cores (which distribute tasks across thousands of cores) with the specialized matrix-computation acceleration of Tensor Cores.
NVLink also saw an upgrade to 2.0 in this generation for higher data transfer rates.
Volta with Tensor Cores was launched in 2017 and further developed with two more releases launched in 2018. My firm began covering Nvidia’s AI thesis around this time, stating CUDA created an impenetrable moat for data center GPUs.
In 2019, Volta’s AI capabilities prompted me to say on my premium stock research site: “To be bold – I believe Nvidia will be one of the world’s most valuable companies by 2030. The research below organizes my investment thesis for the GPU-powered cloud and why I believe Nvidia will emerge as a clear leader.”
That premium research note was written on September 17th 2019 when Nvidia was at a $110 billion valuation.
Pictured Above: Y-charts, the market cap of Nvidia when I first stated it would become the world’s most valuable company at $110.3B compared to a $3T market cap today, for a return of 2,600% in less than five years.
Turing:
Turing was built on the 12nm FinFET process with GDDR6 memory for higher bandwidth and 8-bit (INT8) precision for inference. Nvidia’s T4 GPUs delivered up to 40 times more performance than CPUs and are capable of real-time inference due to exponentially better throughput.
The architecture expanded to include more CUDA cores, second generation Tensor cores and the newly introduced RT Cores for real-time ray tracing. RT cores provide a boost to gaming and introduced professional visualization. The RTX platform was invented by Nvidia to “physically simulate light behavior in the world” and combines RT cores for ray tracing with Tensor Cores for AI.
For more information on ray tracing and RT cores, you can read my previous coverage on Omniverse here, or watch a 1-hour video where I interview Richard Kerris of Nvidia on the simulation platform.
Ampere:
If Tensor cores made Volta the first AI accelerator, then Ampere was the architecture that marked the moment Nvidia would no longer be considered a cyclical gaming stock. I began to call Nvidia “secular” with this release, and it’s when I doubled down on my conviction by taking my thesis from behind the paywall to the public, stating Nvidia would Surpass Apple in 5 Years. Nvidia not only became secular in revenue; its secular-level gains have surpassed the world’s most celebrated software companies (every single one of them) since Ampere.
In fact, as one of the leading investors in semiconductors on record, I can assure you semiconductors have gone through a deep, cyclical trough industry-wide over the past 8 or so quarters while Nvidia powered higher with historical beats/raises. By providing in-demand AI systems, Nvidia has become decoupled from consumer spending and macro.
Pictured Above: Nvidia outperforms secular software and did not participate in the steep, cyclical trough over the past eight quarters like its semiconductor peers.
The A100 was built on TSMC’s advanced 7nm FinFET process node with 54 billion transistors. The third-gen Tensor cores featured new mixed-precision modes, such as TensorFloat-32 (TF32) and 64-bit floating point (FP64), with TF32 delivering up to 20X faster speeds for AI. By using automatic mixed precision, FP16 can be utilized for an additional 2X performance. Nvidia’s sparsity feature doubles throughput on top of this: the A100 runs 10X faster than the V100, and 20X faster with sparsity enabled.
What was special about the A100 is that it unified training and inference on a single chip, whereas in the past Nvidia was mainly used for training. With the specs described above, the A100 also offered a 20x performance boost.
As a multi-instance GPU, the A100 can make one GPU look like up to 7 GPUs for optimal utilization. This is key for cloud service providers, such as Amazon’s AWS, Google Cloud and Microsoft Azure, as it increased GPU instances by 7X.
The A100 was the first architecture where Nvidia was no longer simply a GPU chip company; rather, it marked the moment Nvidia became an AI systems company. The A100 offers the ability to scale up multiple GPUs into one giant GPU using components such as third-gen NVLink to double GPU-to-GPU bandwidth, NVSwitch for fast data transfers, plus InfiniBand and SmartNICs following the Mellanox acquisition.
For more information on TSMC’s process nodes, reference the analysis “TSMC: April Sales Soar from Advanced Nodes.” The Mellanox acquisition was covered in-depth for my premium readers at time of acquisition here.
Hopper:
Hopper is when Wall Street became aware of Nvidia’s AI story. As you can see in this timeline, it was quite late for the Street to finally discover Nvidia is a promising AI stock!
The H100 GPUs and the DGX H100 server pods and super pods solved an important bandwidth issue and sped up algorithms by offering dynamic programming on GPUs to break down problems into simpler subproblems. The GPUs also boost bandwidth by 3X with SHARP in-network computing and InfiniBand switches, and the H100 can leverage NVLink to connect eight H100s into one giant GPU with 640 billion transistors, 32 petaflops, 640GB of HBM3, and 24 terabytes per second of memory bandwidth.
The H100 has about 50% more memory and interface bandwidth than the A100. Memory later got a big boost in Blackwell, shipping this year.
The H100 stands apart with a performance leap of 3X over the A100, and it is up to 6X faster in some workloads. The A100 lacked support for FP8 compute by default, whereas the H100 leverages a transformer engine to switch between FP8 and FP16 depending on the workload.
According to Nvidia, the H100 delivers 9X more throughput in AI training, and 16X to 30X more inference performance. The company also states in HPC application-specific workloads, the H100 is 7X faster. The goal of the H100 was not only to add more transistors and make the H100 faster, but to also offer function-specific optimizations. This is achieved through the transformer engine.
Although there are many highlights to consider with the H100, the biggest breakthrough was the transformer engine as it allowed generative AI to come to market. Transformers helped to define generative AI as the neural-network models apply self-attention to detect how data elements in a series influence and depend on one another.
Prior to transformer models, labeled datasets had to be used to train neural networks. Transformer models eliminate this need by finding patterns between elements mathematically, which substantially opens up what datasets can be used and how quickly.
The “T” in ChatGPT stands for transformer, and it was the H100 that created the GenAI breakthrough moment.
Blackwell:
Blackwell is the architecture that I stated on Fox Business News will deliver the “ultimate fireworks by the end of this year.” In the analysis Blackwell and the $200B Data Center, I stated: “Blackwell is for the trillion+ parameter era of generative AI. The architecture is designed to support the largest language models today and is future-proofed […]”
The full analysis is worth a read, as it spells out how Nvidia will drive growth through the end of 2025 and why I think current data center estimates are too low. In fact, I wrote that prior to the last earnings report, and analysts are already proving me correct: FY2026 (ending January 2026) estimates have been revised up by a whopping $20 billion since I wrote that only three weeks ago!
Pictured Above: Seeking Alpha. On May 23rd, FY2026 revenue was estimated at $125 billion; it now stands at $145 billion, an increase of $20 billion driven by the data center. Within three weeks, my pre-earnings prediction of 60% higher data center revenue is quickly materializing: consensus has been revised so rapidly that the gap is now only 38%. On Bloomberg Asia, I also discussed why investors should pay close attention to intra-quarter revisions, which is exactly why the price moved over the past three weeks.
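For readers who want to verify the revision math, here is the arithmetic behind the 60% and 38% figures (using my firm's $200 billion estimate and the consensus numbers cited above):

```python
# How the gap between my $200B data center call and consensus narrowed
# as FY2026 estimates were revised up.
my_estimate = 200       # $B, my firm's data center projection
consensus_before = 125  # $B, FY2026 consensus on May 23rd
consensus_after = 145   # $B, FY2026 consensus three weeks later

gap_before = (my_estimate / consensus_before - 1) * 100
gap_after = (my_estimate / consensus_after - 1) * 100

print(f"Gap before revision: {gap_before:.0f}%")  # 60%
print(f"Gap after revision: {gap_after:.0f}%")    # ~38%
```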
Unlike previous generations, where the V100, A100 and H100 were the show-stoppers, it will be the GB200 and B200 that create the biggest generational leap. Therefore, I want to emphasize that I said the fireworks would come at the end of the year and into early 2025. The fireworks begin when the GB200 NVL36/NVL72 ships in late 2024 and continue with the B200 GPUs in early 2025.
The B200 GPU chipset due in Q1 of next year will deliver a 2.5X training improvement and 5X inference improvement over the H100. This is due to the B200 having 208 billion transistors compared to the H100’s 80 billion transistors.
The B200 will also deliver 20 petaflops of FP4, compared to the H100’s 4 petaflops of FP8 (32 petaflops of FP8 across the eight GPUs of a DGX H100 system). The smaller bit size is an economical way to achieve more speed when giving up a small amount of accuracy doesn’t make a critical difference. As discussed, this also helps in the face of a slowing Moore’s Law. The B200 will have a second-generation transformer engine that supports 4-bit floating point (FP4), with the goal of doubling the performance and the size of models the memory can support while maintaining accuracy.
The second-generation transformer engine in the Blackwell architecture will offer FP4. This is helpful because AI models are moving toward neural nets that lean on the lowest precision that still yields an accurate result. In this case, 4-bit units double the throughput of 8-bit units, compute faster and more efficiently, and require less memory and memory bandwidth.
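To make the precision trade-off concrete, here is an illustrative sketch of how bit width translates into memory footprint for a model's weights (rough arithmetic only; real deployments mix precisions per layer and add overhead for activations and optimizer state):

```python
# Memory footprint of 1 trillion parameters at different precisions.
# Halving the bit width halves memory and memory-bandwidth needs,
# which is why FP4 roughly doubles throughput over FP8.
PARAMS = 1_000_000_000_000  # 1T parameters (the Blackwell-era scale)

for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    bytes_needed = PARAMS * bits / 8  # 8 bits per byte
    print(f"{name}: {bytes_needed / 1e12:.1f} TB of weights")
# FP16: 2.0 TB, FP8: 1.0 TB, FP4: 0.5 TB
```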
The GB200 NVL72 will deliver real-time trillion-parameter LLM inference, 4X LLM training, 25X energy efficiency, and 18X data processing. The GB200 will provide 4X faster training performance than H100 HGX systems and will include a second-generation transformer engine with FP4/FP6 Tensor cores. As stated above, the 4nm process integrates two GPU dies connected by a 10 TB/s link, for 208 billion transistors.
NVLink Switch is a major component to the Blackwell upgrade. Fifth-generation NVLink enables multi-GPU communication at high speed, reaching 1.8 TB/s bidirectional throughput or 14X the bandwidth of PCIe for a single GPU.
Takeaway: Blackwell is the architecture that will make trillion+ parameter models possible, up from billion parameter models today.
Nvidia’s 1-Year Release Cycle is Wild
If you’re exhausted reading that, imagine producing it in 8 brief years. Per the Computex keynote, from Pascal to Blackwell, the AI systems delivered “1,000 times increase in AI compute,” while simultaneously decreasing the “energy per token by 45,000X.”
Now, imagine cutting the time in half by producing four generations of AI systems in 4 years instead of 8 years.
In the analysis “Nvidia Q1 Earnings Preview: Blackwell and the $200B Data Center,” I stated that “should [the CUDA] moat become breached, the company’s rapid product road map is the first line of defense,” and later I also stated: “The product road map is the single most important thing investors should be focused on. A good chunk of the AI accelerator story is understood at this point. What is not understood is how aggressive Nvidia is becoming by speeding up to a one-year release cycle for its next generation of GPUs instead of a two-year release cycle.”
After writing that, I realized it would be impossible to ask investors to focus on the upcoming road map if we did not look more closely at the road map that got us to $3 trillion. By now, it should be crystal clear that Nvidia is not a cyclical GPU chip company; rather, it’s a secular AI systems and software platform company with a near-monopoly in building supercomputers for the $15 trillion AI economy. If you are still not convinced that Nvidia is more than a GPU company, perhaps these two pictures can help.
Here’s a Blackwell GPU chip and a Hopper GPU chip; each can easily fit in your hand.
Here’s what AI factories look like (or what I’m calling AI systems):
What’s Next for Nvidia:
This past weekend, Nvidia announced the names of future generations: Blackwell Ultra, Rubin, and Rubin Ultra. The specifics of these future generations will be revealed at future GTC conferences.
Here is what to keep an eye out for in future generations:
- 3nm process node and 2nm process node, which I covered here in a TSMC analysis
- HBM3e memory and HBM4 memory, which I covered here under the subheading “More on Memory”
- Future generations of NVLink, which I also covered in my Blackwell writeup
- InfiniBand and Spectrum-X Ethernet for AI workloads: I’ve covered InfiniBand since the Mellanox acquisition yet also covered the importance of Ethernet networking in-depth on my premium site in February. Last year, networking grew five-fold to a $10B run rate, which technically marked a higher growth rate than AI accelerators.
- AI Software and Automotive: I wrote a deep dive on Nvidia’s software opportunity exclusively for my premium members in July of 2022. I will update my free readers in the coming quarters on these two opportunities which will help us end the decade strong. This market will rival Nvidia’s hardware market by 2030 (yes, you heard that correctly).
Our Price Target for the Next Entry
Some of you reading this own Nvidia, and others do not. For those who do not own the stock, the most important question is not what market cap will Nvidia have by 2030, but rather, where is the stock going in the near-term.
My firm runs an actively managed portfolio that publishes our trades in real time. However, we are not financial advisors, and each investor must decide for themselves whether to buy or sell a stock. What my firm does is simply state when we are buying or selling, for unrivaled transparency. You will be hard pressed to find anyone else who publishes every single trade in real time outside of professional fund managers (who are required to do so).
Since I first began covering Nvidia publicly in 2018, my firm has issued 9 buy alerts under $200, and we have been taking nominal profits along the way. We plan to take profits again in the $1225 to $1315 range; Nvidia is trading in this potential topping zone at the time of writing. Once the price moves below $1035, it will signal that the anticipated reversal is underway, at which point our process allows us to get more precise in identifying buy targets. Until then, we have a general range between $920 and $715. Keep in mind, this range can shift once a reversal is identified.
For some stocks, we get more aggressive and would try to time a buy in the lower range of the target zone, which would be around $715 for NVDA. However, due to the strength of its thesis, we will likely buy at the upper end of that target around $920.
If you had bought Nvidia on January 1st, 2022 instead of October 18th, 2022, your returns would be 387% instead of 1,034%. Therefore, 230% returns by 2030 would be phenomenal, but entering at lower prices can multiply the total return. For example, say an investor buys the stock at $900; in this hypothetical, the returns would be roughly 350% compared to 230%. This is simple in concept yet challenging to execute.
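The entry-price math in that example works as follows (hypothetical prices; I assume a ~$1,210 price today and the 230% upside implied by the $10 trillion thesis):

```python
# How entry price changes total return on the same 2030 target.
# Hypothetical numbers: a ~$1,210 price today with 230% upside to 2030.
price_today = 1210                      # approx. price at time of writing
target_2030 = price_today * (1 + 2.30)  # ~$3,993 implied by the $10T thesis

for entry in (price_today, 900):
    ret = (target_2030 / entry - 1) * 100
    print(f"Entry at ${entry}: {ret:.0f}% return")
# Entry at $1210: 230%; entry at $900: ~344%, roughly the 350% cited
```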
As of now, Nvidia stock should be watched closely between $1225 and $1315. It’s crystal clear that Nvidia owns the AI market, yet the stock will need the broad market to be aligned for its phenomenal run to continue. We’ve been tracking the fading Mag 7 since early March, by which point the Mag 7 had become the Mag 4, when we stated:
“when the cycle leaders start to underperform, it tends to mark the start of a trend change. The FAANGs have been the undoubted leaders of this bull run, and we are now seeing them start to trend lower against the indexes.”
After the rally we saw this week, it’s worth noting that Nvidia is the only stock in the Mag 7 that is making new all-time highs. Amazon, Alphabet and Meta are making lower highs as of today.
Until we see more market leaders breakout, Nvidia remains the last one standing. Therefore, if Nvidia cannot break above the $1225 range, then the market is communicating that Nvidia’s weaker peers may be influencing its price action. We’ve stated many times that Nvidia is a buy on the dips (as opposed to a buy on breakouts), specifically as “we brace for Blackwell by the end of the year.”
What’s worth noting is that while SPX, NDX and NVDA are making new highs, almost every other major index (RUT, DJI, NYA, RSP, XLF, XHB, to name a few), including the Mag 6, are not.
For Nvidia to continue moving up in a straight line means the stock will have to operate in a vacuum. This is unlikely, and thus we are waiting for the next dip before we buy again. Our current target, once again, is in the $920 – $715 range, although depending on market dynamics this could shift. We update our premium research members with real-time trade alerts and weekly webinars.
Conclusion:
The boldest prediction I have made on Nvidia was to state in an analysis to my premium research members in September of 2019: “To be bold – I believe Nvidia will be one of the world’s most valuable companies by 2030. The research below organizes my investment thesis for the GPU-powered cloud and why I believe Nvidia will emerge as a clear leader.”
The world’s most valuable company at that time was Apple, hovering at a $1 trillion market cap compared to Nvidia’s $110 billion. As many fierce critics pointed out to me, I was not only predicting that Nvidia would skyrocket but that Apple and every other FAANG would falter. This was a challenging prediction to make, as many things had to line up: 1) Nvidia had to blow the doors off, and 2) every other FAANG had to plateau.
Here is what happened next:
All said and done, I will keep the 2030 deadline for the $10 trillion market cap, although I suspect, as with my other predictions, it will be delivered to you sooner.
Every Thursday at 4:30 pm Eastern, the I/O Fund team holds a webinar for premium members to discuss how to navigate the broad market, as well as various stock entries and exits. Beth Kindig offers weekly deep dives including lesser-known AI stocks, plus the team offers trade alerts and an automated hedging signal. The I/O Fund team is one of the only audited portfolios available to individual investors. Learn more here.