DeepSeek Reveals AI Cost-Profit Estimates, But Reality Looks Much Different

Chinese AI startup DeepSeek just gave a rare glimpse into the costs and potential revenue of its V3 and R1 models, claiming an eye-popping theoretical cost-profit ratio of 545% per day. Sounds impressive, but the company was quick to dial down expectations, admitting that actual revenue is much lower.

In a GitHub post, DeepSeek estimated that renting one Nvidia H800 chip—the hardware it uses—costs $2 per hour, putting its daily inference costs at $87,072. In contrast, the models could theoretically pull in $562,027 per day, which over a year would translate to more than $200 million in revenue. But that’s just the math on paper.
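As a quick sanity check, here is a minimal back-of-envelope sketch using only the figures cited above (the $2/hour H800 rental rate, the $87,072 daily cost, and the $562,027 theoretical daily revenue). The implied GPU-hour and chip counts are inferences from those numbers, not something DeepSeek stated directly.

```python
# Back-of-envelope check of the figures from DeepSeek's GitHub post.
# Only the stated numbers are inputs; the implied GPU-hours and chip
# count are derived here, not reported by DeepSeek.

H800_RENTAL_PER_HOUR = 2.00          # USD, DeepSeek's assumed rental price
DAILY_COST = 87_072                  # USD, stated daily inference cost
DAILY_THEORETICAL_REVENUE = 562_027  # USD, stated theoretical daily revenue

# Implied H800 usage (assumes the $2/hr rate applies uniformly)
implied_gpu_hours_per_day = DAILY_COST / H800_RENTAL_PER_HOUR  # ~43,536 GPU-hours
implied_chips_running_24h = implied_gpu_hours_per_day / 24     # ~1,814 chips

# Theoretical cost-profit ratio: daily profit relative to daily cost
daily_profit = DAILY_THEORETICAL_REVENUE - DAILY_COST
cost_profit_ratio = daily_profit / DAILY_COST                  # ~5.45 -> ~545%

# Annualized theoretical revenue
annual_revenue = DAILY_THEORETICAL_REVENUE * 365               # ~$205 million

print(f"Implied GPU-hours/day: {implied_gpu_hours_per_day:,.0f}")
print(f"Implied chips (24h):   {implied_chips_running_24h:,.0f}")
print(f"Cost-profit ratio:     {cost_profit_ratio:.2%}")
print(f"Annual revenue (est.): ${annual_revenue:,.0f}")
```

The arithmetic does reproduce both headline numbers, a roughly 545% cost-profit ratio and just over $200 million in annualized revenue, but the ~1,814-chip figure is only what the stated cost and rental rate imply; actual fleet size and utilization were not disclosed.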


DeepSeek pointed out that real earnings are significantly lower for several reasons: the V3 model is priced lower than R1, not all services are monetized (web and app access are still free), and developers pay discounted rates during off-peak hours.

What makes this even more interesting is that DeepSeek is using Nvidia’s H800 chips, which are far less powerful than the ones available to U.S. AI giants like OpenAI. With companies like OpenAI and Google pouring billions into high-end hardware, DeepSeek’s numbers raise an interesting question: is cutting-edge AI hardware really necessary to run a profitable AI business, or is efficiency the real key?
