Nvidia (NVDA) stock set a new record closing high just shy of $150 in January, but it has since declined by 16%. The drop was triggered by news that a Chinese artificial intelligence (AI) start-up called DeepSeek figured out a way to train advanced models at a fraction of the cost of its American peers, and with less computing power.
Nvidia supplies the world’s most powerful data center graphics processing units (GPUs) for AI development, so investors were concerned DeepSeek’s methodology would drive a collapse in demand for those chips. However, Nvidia CEO Jensen Huang made a series of comments last week that suggest some of the newest AI models from DeepSeek and other developers could actually result in a significant increase in GPU demand instead.
Here’s what he said, and why I think his comments could send Nvidia stock soaring this year.
Nvidia is coming off its best fiscal year ever
Before diving into Huang’s comments, let’s review Nvidia’s financial results for fiscal 2025 (ended Jan. 26), which were released last Wednesday. Investors were anticipating strong results as the company recently started shipping its new Blackwell GB200 GPUs, which have become the gold standard for AI development.
Nvidia delivered a record $130.5 billion in total revenue in fiscal 2025, which was a 114% increase from the prior year, and it was also comfortably above management’s forecast of $128.6 billion. The data center segment accounted for $115.1 billion of that total, which was up by a whopping 142% from the prior year.
Nvidia started shipping commercial quantities of its Blackwell GPUs for the first time during the fiscal 2025 fourth quarter. They generated $11 billion in sales, exceeding management’s expectations and making it the fastest product ramp-up in the company’s history.
The Blackwell GB200 GPU is a game changer because in some configurations, it can perform AI inference at 30 times the speed of Nvidia’s previous flagship data center chip, the H100. Inference is the process by which AI models make predictions, or form responses, using live data and prompts. Higher inference speeds can lead to faster responses in chatbot applications, and they also allow models to process more data to render better answers.
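To make the distinction concrete, here is a minimal sketch of what an inference workload looks like in code, assuming Python and the Hugging Face transformers library; the small GPT-2 model is just a freely available stand-in of my choosing, not one of the frontier models discussed in this article.

from transformers import pipeline

# Inference: an already-trained model takes a live prompt and produces a response.
# No training happens here; the model's weights stay fixed.
generator = pipeline("text-generation", model="gpt2")
output = generator("AI data centers are", max_new_tokens=20)
print(output[0]["generated_text"])

Faster chips like Blackwell accelerate exactly this step, so the same prompt returns an answer sooner, or the model can work through more data per response.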
Nvidia says it will have to continue scaling Blackwell production because demand remains high. Some of the company’s top customers have already revealed how much they plan to spend on AI data centers and chips this year, and the numbers are mind-boggling.
Not all of that money will flow to Nvidia specifically, but it’s clear DeepSeek’s innovations haven’t dampened the appetite for more computing power.
Jensen Huang just delivered incredible news for investors
OpenAI is one of America’s top AI start-ups. Since it was founded in 2015, it has spent over $20 billion to build data center infrastructure, assemble its talented team, and train its AI models. Therefore, it was a big surprise when DeepSeek revealed it trained its V3 model — which performs comparably to OpenAI’s GPT-4o models in some benchmarks — for just $5.6 million.
That figure doesn’t include an estimated $500 million in infrastructure spending (according to SemiAnalysis), but it still raised alarm bells up and down Wall Street.
DeepSeek doesn’t have access to Nvidia’s latest chips because they are banned from being exported to China, so it used a series of clever techniques on the software side to offset the lack of computing power. One of them is called distillation, which uses the best proven models (like GPT-4o) to train smaller models, rapidly accelerating their progress. This requires far fewer resources because it cuts down on traditional training workloads, which can involve collecting, refining, and processing mountains of data.
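For readers who want to see the idea rather than just read about it, here is a minimal, generic sketch of distillation in Python with PyTorch. The tiny networks, random data, and hyperparameters are placeholders of my choosing; this illustrates the general technique of a student model mimicking a teacher's outputs, not DeepSeek's actual training pipeline.

import torch
import torch.nn.functional as F

# Placeholder "teacher" (a large, proven model) and "student" (a smaller model).
teacher = torch.nn.Linear(16, 4)
student = torch.nn.Linear(16, 4)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the probability distributions being matched

for _ in range(100):
    x = torch.randn(32, 16)  # a batch of synthetic inputs
    with torch.no_grad():
        teacher_logits = teacher(x)  # the teacher only provides targets; it isn't trained
    student_logits = student(x)

    # Distillation loss: push the student's softened output distribution
    # toward the teacher's, instead of training on raw labeled data.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Because the student learns from the teacher's ready-made outputs rather than from mountains of raw data, much of the heavy data collection and processing of traditional training falls away, which is why distillation needs so much less compute.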
If every AI developer relied on distillation, chip demand for training workloads would probably collapse. However, start-ups like OpenAI have found that feeding endless amounts of data into their AI models no longer produces the desired results, so they are now focused on building “reasoning” models, which spend more time “thinking” to craft the best responses.
This means they are relying less on traditional training methods overall and shifting their compute resources to inference workloads instead. That brings me to Huang’s latest comments. During his conference call with investors last Wednesday, he said reasoning models can consume a whopping 100 times more compute than their predecessors. In the future, he believes, some models could consume thousands, or even millions, of times more compute to generate highly complex simulations and other outputs.
DeepSeek R1, xAI’s Grok 3, Anthropic’s Claude 3.7 Sonnet, and OpenAI’s o1 and o3 are some examples of the reasoning models available today. Most of the top developers are clearly moving away from pre-training methods and heading in this direction instead, which means Nvidia could be on the cusp of a new demand phase for its chips.
Nvidia stock might be a bargain
Based on Nvidia’s fiscal 2025 earnings per share (EPS) of $2.99, its stock trades at a price-to-earnings (P/E) ratio of 42.5, which is a 28% discount to its 10-year average of 59.3. Plus, Wall Street’s consensus estimate (provided by Yahoo!) suggests the company could generate $4.49 in EPS during the current fiscal year 2026, placing the stock at a forward P/E ratio of just 27.7.
NVDA PE Ratio data by YCharts.
In other words, Nvidia stock would have to soar by 53% over the next 12 months just to maintain its current P/E ratio, or by 114% to trade in line with its 10-year average P/E ratio.
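Here is a back-of-the-envelope reconstruction of that math in Python. The roughly $124 share price is not quoted in the article; it is an assumption implied by the forward P/E cited above.

# Rough check of the valuation math above. The ~$124 share price is an
# assumption backed out of the forward P/E the article cites.
forward_eps = 4.49        # consensus EPS estimate for fiscal 2026
current_pe = 42.5         # trailing P/E based on fiscal 2025 EPS of $2.99
average_pe_10yr = 59.3    # 10-year average P/E
forward_pe = 27.7         # forward P/E cited above

price_today = forward_eps * forward_pe               # ~$124 per share (implied)
price_at_current_pe = forward_eps * current_pe       # ~$191 if the trailing P/E holds
price_at_average_pe = forward_eps * average_pe_10yr  # ~$266 at the 10-year average

print(f"Gain needed to keep the current P/E: {price_at_current_pe / price_today - 1:.0%}")    # ~53%
print(f"Gain needed to reach the 10-yr average: {price_at_average_pe / price_today - 1:.0%}") # ~114%

In short, if the EPS estimate is hit and the multiple merely holds steady, the share price has to rise just to stand still on a valuation basis.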
Since the DeepSeek saga was a key reason for the stock’s recent decline, Huang’s comments about a potential 100-fold increase in compute requirements for inference workloads could be the catalyst that brings investors back into the fold, especially if the company’s financial results support that prediction throughout this year.
As a result, I think Nvidia stock could soar over the next 12 months (and probably beyond).
John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool’s board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool’s board of directors. Anthony Di Pizio has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon, Meta Platforms, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.