China’s DeepSeek Puts a Question Mark on AI Spending

The startup’s innovative AI model is raising questions about the future of AI stocks and capex.



This month saw the release of R1, an artificial intelligence model from Chinese startup DeepSeek that reportedly goes toe-to-toe with offerings from U.S. tech leaders OpenAI, Meta, Anthropic, Google and others. The company’s open-source R1 model is touted as orders of magnitude cheaper to train than other leading models.
 

In a paper, DeepSeek stated that training costs were less than $6 million, far less than the $100 million it reportedly cost to train OpenAI’s GPT-4, raising questions about the need for massive capital expenditures on AI infrastructure.

In fact, DeepSeek’s breakthrough may have come out of necessity. The administration of former President Joe Biden restricted China’s access to Nvidia Corp.’s most advanced chips, including the H100, which many companies use to train their AI models. Officials in the administration of President Donald Trump have, in recent days, also called for tightening restrictions on selling high-powered chips to companies in China.

DeepSeek used Nvidia’s H800 GPU, a chip designed to comply with U.S. export restrictions, to train its models. The company used Huawei’s Ascend 910C chips to run inference—the phase in which a trained model applies what it has learned to answer a query.


“The success of DeepSeek threatens the business models of companies like OpenAI and Anthropic, which rely heavily on revenue from proprietary AI model sales,” says Drayton D’Silva, CEO and CIO of multi-asset investment firm Tower Hills Capital. “However, companies like Microsoft and Meta, with their massive distribution advantages, will benefit because cost-effective open-source models like DeepSeek massively reduce inference costs. Inference is the critical phase when AI models apply their training to real-world data.”

Tech companies known as the “Magnificent Seven”—Alphabet, Amazon.com, Apple, Meta, Microsoft, Nvidia and Tesla—account for nearly one-third of U.S. equity growth over the past two years. These companies trade at high multiples and are spending billions in capital on AI and its underlying infrastructure. DeepSeek’s release of R1 spooked investors, sending technology stocks falling and exposing a risk to the biggest driver of equity strength.

Stocks Fall  

DeepSeek’s R1 model was released on January 20, but it took about a week for the market to digest its significance. Tech stocks fell on January 27, with Nvidia falling nearly 20%. Other AI and semiconductor stocks fell by double digits. Most tech stocks recovered during the week, as retail investors poured billions into Magnificent Seven names.

“Big Tech companies [that] have planned significant investments in AI infrastructure now face scrutiny over whether such spending is justified, and this will mean short-term volatility and a softer tailwind for chip and semiconductors companies,” D’Silva says. 

While most institutional asset allocators are not too worried about short-term price movements, the implications of the R1 model could indicate challenges ahead, both for technology companies and an economy that depends on them. The S&P 500 has been held up for the last two years by the Magnificent Seven, which at the start of 2025 made up about 33% of the index’s weighting.  

Capex Called Into Question 

AI hyperscalers and investors are collectively spending hundreds of billions of dollars on AI infrastructure, particularly data centers, to meet the technology’s computational and storage demands. In 2024, $282 billion was spent on data centers, according to Synergy Research Group. One day after Trump’s inauguration—and the release of R1—OpenAI and investors SoftBank Group Corp., Oracle Corp. and MGX Fund Management Ltd. announced the Stargate Project, which intends to invest $500 billion in AI infrastructure in the U.S. over the next four years.

Meta Platforms Inc., in an earnings call Wednesday for the fourth quarter of 2024, announced the company would make $60 billion to $65 billion in capital expenditures this year, driven by AI infrastructure spending. Earlier this month, Microsoft Corp. President Brad Smith said the company is on track to spend $80 billion on data centers this fiscal year.  

Blackstone Inc. President Jon Gray, speaking at the company’s Q4 2024 earnings call on Thursday, defended the private equity giant’s $80 billion in investments in data centers. Last year, the firm said it had $100 billion in data center investments in the pipeline.

Yet DeepSeek has apparently built, far more cheaply, a model capable of competing with those from large tech companies.

“It certainly creates doubt that building the best AI models can only be done with massive GPU clusters and huge spending,” says Karthee Madasamy, founder and managing partner of Silicon Valley venture capital firm MFV Partners.

Madasamy, who was a chip designer and spent 11 years at Qualcomm Ventures, notes that Silicon Valley has not focused on optimization and resource utilization.

“There has been a significant focus on throwing more and more money at computing and memory … rather than focusing on optimization and higher resource utilization,” Madasamy says. “The argument has been that the ROI was not there in terms of effort and time, and the price of computing and memory are falling anyway, so why bother with optimization and utilization?”  

Could the Jevons Paradox Apply? 

In response to the release of R1, some backers of technology investment have pointed to the economic concept of the Jevons paradox to support the idea that demand for AI chips and their necessary infrastructure will increase. The economic theory states that as a resource becomes cheaper and more efficient, demand for the resource increases. The paradox is often used to refer to energy and how increases in energy efficiency have led to sometimes-unexpected increased demand for energy.  
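The paradox can be made concrete with a toy demand curve. In the sketch below (purely illustrative, not drawn from the article; the constant-elasticity demand function and all numbers are assumptions), total spending on a resource rises after a price cut only when demand is elastic:

```python
def total_resource_use(price, elasticity, scale=1.0):
    """Constant-elasticity demand: quantity = scale * price**(-elasticity).

    Total resource use (spending) is price * quantity.
    """
    quantity = scale * price ** (-elasticity)
    return price * quantity

# Elastic demand (elasticity > 1): a 10x drop in the unit cost of AI compute
# *increases* total spending on it -- the Jevons paradox.
assert total_resource_use(0.1, elasticity=1.5) > total_resource_use(1.0, elasticity=1.5)

# Inelastic demand (elasticity < 1): the same price cut shrinks total spending.
assert total_resource_use(0.1, elasticity=0.5) < total_resource_use(1.0, elasticity=0.5)
```

Whether cheaper AI training helps or hurts chip makers therefore hinges on how elastic demand for AI compute turns out to be, which is exactly the question dividing the analysts quoted below.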

“DeepSeek-R1 enables faster and more efficient inferencing while significantly reducing dependency on high-powered GPUs,” said Srini Koushik, president of AI, technology and sustainability at Rackspace Technology, in a statement. “This breakthrough marks the beginning of a new race to build models that deliver value without incurring prohibitive infrastructure or energy costs.”  

Microsoft CEO Satya Nadella highlighted this phenomenon when he shared a Wikipedia link on social media shortly after R1’s release.

“Jevons Paradox strikes again!” Nadella wrote. “As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of.”  

Nicole DeBlase, an analyst at Deutsche Bank, wrote this week in a note to clients: “Lowering the development cost of AI [large language models] could theoretically increase adoption to a broader swath of end users than would have been the case at prior levels of required investment costs.”  

DeBlase added that cheaper AI development costs will positively impact the global economy, as they create opportunities for faster and broader AI-driven productivity gains, and companies other than hyperscalers could increase their AI investments. 

“From a capex perspective, this means we could see more AI investment in different verticals, rather than just via hyperscalers, perhaps shifting spending towards the edge,” DeBlase wrote.  

Schroders, in a research note, stated that if increased efficiency results in lower demand for chips and AI infrastructure, it could present headwinds for companies like Nvidia. It added the caveat: “However, this outcome is far from certain, particularly given Jevons’ Paradox.”  

The firm also noted that this situation could lower the cash needs of the technology behemoths. 

“If this situation results in reduced spending requirements for these companies, it could lower their capital expenditure needs and drive significant increases in free cash flow generation,” the Schroders report stated.

Skepticism 

Due to the opaque nature of DeepSeek’s process, it seems too early to draw long-term conclusions about R1’s impact on AI development. 

Alexandr Wang, CEO of AI startup Scale AI, suggested in a CNBC interview that DeepSeek trained R1 using 10,000 Nvidia A100 chips, an export-controlled design not available in China, which the company cannot disclose possessing because of the export controls. Elon Musk, CEO of Tesla, another major purchaser of Nvidia AI chips, and of xAI, which recently raised $6 billion at a valuation of around $50 billion, wrote on social media that Wang’s assumption was obvious.

The U.S. recently opened a probe into DeepSeek to investigate if the company illegally imported restricted chips through Singapore, according to Bloomberg. Roughly 20% of Nvidia’s revenue comes from Singapore.  

OpenAI has accused DeepSeek of distillation: training its own model on the outputs of OpenAI’s models.

“We know that groups in the [People’s Republic of China] are actively working to use methods, including what’s known as distillation, to try to replicate advanced U.S. AI models,” a spokesperson for OpenAI said in an email. “We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models.”  
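Mechanically, distillation trains a smaller “student” model to reproduce a “teacher” model’s output distribution rather than learning from raw data. The toy sketch below illustrates the general technique only—the three-token vocabulary, teacher probabilities, learning rate and step count are all made-up assumptions, not a description of DeepSeek’s or OpenAI’s actual systems. The student fits the teacher’s soft labels by gradient descent on a cross-entropy loss:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical teacher distribution over 3 answer tokens for one prompt.
teacher = [0.7, 0.2, 0.1]

# Student starts from uniform logits and is trained to match the teacher.
student_logits = [0.0, 0.0, 0.0]
for _ in range(500):
    probs = softmax(student_logits)
    # Gradient of cross-entropy H(teacher, student) w.r.t. logits is
    # (student probability - teacher probability) per token.
    student_logits = [z - 0.5 * (p - t)
                      for z, p, t in zip(student_logits, probs, teacher)]

final = softmax(student_logits)
# The student now reproduces the teacher's distribution almost exactly.
assert all(abs(p - t) < 1e-3 for p, t in zip(final, teacher))
```

In practice this is done over millions of prompts, which is why access to a teacher model’s outputs, rather than its weights, can be enough to replicate much of its behavior.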

Ross Seymore, a research analyst at Deutsche Bank, summed up the uncertainty of immediately drawing conclusions from the R1 release in a note to clients. 

“On the surface, it appears that [DeepSeek’s] innovations led to a total development cost of $5.57 [million], as low as 1/45th of the cost of current offerings,” Seymore wrote. “While this structural reduction in capital is stunning and would greatly reduce the cost of AI investments, we note the true cost of this project remains unclear, as the cited GPU hours claimed in technical papers may definitionally exclude prior training resources.”  

