Tensions grew in Silicon Valley after DeepSeek launched its R1 model. In third-party benchmarks, the Chinese AI company outperformed US AI firms including OpenAI, Meta, and Anthropic. The anxiety had been building since DeepSeek's v3 model, which outperformed Meta's Llama 3.1.

DeepSeek R1, released on Jan. 20, significantly improved the model's reasoning capabilities despite relying on limited labeled data. The model is on par with OpenAI's o1 on tasks such as mathematics and coding.

DeepSeek R1's low cost has also attracted attention. According to its published API prices, the model costs RMB 1 ($0.14) per million cached input tokens, RMB 4 ($0.55) per million uncached input tokens, and RMB 16 ($2.21) per million output tokens, roughly one-thirtieth of OpenAI's operating costs.

Yann LeCun, Meta's chief AI scientist, said DeepSeek's success shows that open-source models are surpassing proprietary ones. [Business Insider, TechNode]
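As a rough illustration of where the "one-thirtieth" figure comes from, the sketch below compares DeepSeek's published per-token prices with OpenAI o1's list prices; the o1 figures ($15 input / $60 output per million tokens) are an assumption for illustration, not numbers from the report.

```python
# Rough cost comparison based on the published per-token prices above.
# The OpenAI o1 list prices are assumed for illustration only.

deepseek_r1 = {"input_uncached": 0.55, "output": 2.21}  # USD per million tokens (published)
openai_o1 = {"input": 15.00, "output": 60.00}            # USD per million tokens (assumed)

input_ratio = openai_o1["input"] / deepseek_r1["input_uncached"]
output_ratio = openai_o1["output"] / deepseek_r1["output"]

print(f"input:  ~1/{input_ratio:.0f} of the assumed o1 price")   # ~1/27
print(f"output: ~1/{output_ratio:.0f} of the assumed o1 price")  # ~1/27
```

Under those assumed prices, both ratios land near 1/27, consistent with the "about one-thirtieth" characterization.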