Summary
Chinese AI startup DeepSeek initially claimed to have trained its competitive R1 model with only $6 million and 2,048 GPUs.
However, a SemiAnalysis report reveals the company has actually invested $1.6 billion in hardware and owns 50,000 Nvidia GPUs.
DeepSeek has also spent well over $500 million on AI development since its inception.
DeepSeek operates its own data centers and exclusively hires from China, offering top salaries.
The report suggests its success stems from major investments rather than radical efficiency, countering initial claims of disruptive cost reductions.
How much have other companies spent just on pre-training? If DeepSeek's $6 million figure covers only pre-training, it would be useful to know what current industry leaders have spent on pre-training alone, to make an apples-to-apples comparison.
But later in the article they quote Elon Musk saying, "if you want to be competitive in AI, you have to spend billions per year." Yet $500 million is significantly less than "billions," and that figure covers spending since DeepSeek's inception, which was only about 18 months ago. Annualized, that comes to well under half a billion dollars per year, much less than "billions per year."
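The annualization above can be checked with back-of-the-envelope arithmetic. This sketch uses the article's "$500 million since inception" figure and assumes the roughly 18-month company age mentioned above:

```python
# Back-of-the-envelope annualized spend, using the article's figures.
# Assumption: "inception" was about 18 months (1.5 years) before the report.
total_spend = 500_000_000      # "well over $500 million" since inception, USD
years = 18 / 12                # roughly 18 months

annualized = total_spend / years
print(f"annualized spend: ${annualized:,.0f} per year")  # ~$333 million/year

# Musk's stated bar for competitiveness is "billions per year".
print(annualized < 1_000_000_000)  # True: under even one billion per year
```

Even if the true total is somewhat "over" $500 million, the annualized figure stays far below the multi-billion-dollar bar Musk describes.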
Also, the title says that DeepSeek spent "$1.6 billion," but further on the article says "well over $500 million." While $1.6 billion is technically "well over $500 million," you wouldn't conventionally phrase it that way when the actual amount is more than three times higher. That leads me to believe the amount DeepSeek spent on AI development is much closer to $500 million than to $1.6 billion; apparently, the $1.6 billion figure includes costs not associated with AI development.
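To make the "more than three times higher" claim concrete, here is the ratio between the article's two headline figures:

```python
# The two figures quoted in the article.
hardware_investment = 1_600_000_000   # "$1.6 billion in hardware", USD
ai_dev_spend = 500_000_000            # "well over $500 million" on AI development

ratio = hardware_investment / ai_dev_spend
print(f"{ratio:.1f}x")  # 3.2x: the hardware figure is over triple the AI-dev figure
```

A gap that wide supports reading the two numbers as measuring different things: total hardware outlay versus AI-development spending specifically.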