DeepSeek V4
4 mentions across all digests
Open-weight mixture-of-experts language model with 1.6 trillion parameters, a 1-million-token context window, and pricing below that of frontier models.
DeepInfra on Hugging Face Inference Providers 🔥
DeepInfra's integration with Hugging Face Hub enables developers to run serverless inference on popular open-weight models like DeepSeek V4 directly from HF model pages, reducing deployment friction for open-model inference workloads.
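The Hub-side workflow can be sketched with `huggingface_hub`'s `InferenceClient`, which routes requests to a chosen provider. A minimal sketch, assuming DeepInfra exposes the model under the `deepinfra` provider slug; the model id `deepseek-ai/DeepSeek-V4` is a placeholder, not a confirmed identifier (copy the real id from the model page):

```python
import os


def build_messages(prompt: str) -> list:
    # OpenAI-style chat payload accepted by Hugging Face Inference Providers.
    return [{"role": "user", "content": prompt}]


def ask_deepseek(prompt: str, model: str = "deepseek-ai/DeepSeek-V4") -> str:
    # Lazy import so this sketch loads even without huggingface_hub installed.
    from huggingface_hub import InferenceClient

    # provider= routes the call through DeepInfra's serverless backend;
    # HF_TOKEN is your Hugging Face access token.
    client = InferenceClient(provider="deepinfra", token=os.environ["HF_TOKEN"])
    resp = client.chat_completion(messages=build_messages(prompt), model=model)
    return resp.choices[0].message.content


if __name__ == "__main__" and os.environ.get("HF_TOKEN"):
    print(ask_deepseek("In one sentence, what is a mixture-of-experts model?"))
```

Billing and rate limits follow your Hugging Face account's provider settings, so no separate DeepInfra credentials are needed.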
[AINews] ImageGen is on the Path to AGI
OpenAI breaks its Azure exclusivity, spreading workloads across Google and AWS through 2032, while Chinese labs flood the market with aggressively priced open-weight agent models, escalating competition on both fronts.
DeepSeek's new models are so efficient they'll run on a toaster ... by which we mean Huawei's NPUs
DeepSeek's open-weight V4 matches frontier-model performance while slashing inference costs through novel efficiency techniques, and is now optimized for Huawei's Ascend NPUs, posing a major competitive threat to proprietary incumbents.
DeepSeek previews new AI model that ‘closes the gap’ with frontier models
DeepSeek's V4 Flash and V4 Pro mixture-of-experts models claim parity with GPT-5.4 on coding and frontier reasoning benchmarks while underpricing competitors by a substantial margin.