Fluid compute
3 mentions across all digests
Vercel's serverless compute service that executed 43.2 billion function invocations for personalization, cart logic, and AI inference during BFCM 2025 with automatic pre-warming and zero configuration.
Billions of requests: Black Friday-Cyber Monday 2025
Vercel processed 115.8 billion requests during BFCM 2025 (peaking at 518K RPS) without manual intervention, a 33.6% year-over-year increase that demonstrates automatic global infrastructure scaling.
Server rendering benchmarks: Fluid Compute and Cloudflare Workers
In server-rendering benchmarks, Vercel's Fluid Compute runs 2.55x faster than Cloudflare Workers, highlighting the performance-versus-distribution trade-off among serverless platforms.
Zero-config backends on Vercel AI Cloud
Vercel eliminates infrastructure overhead for AI apps with zero-config backend deployment and Fluid compute pricing that bills only for active CPU time, cutting costs for agent workloads that spend most of their wall-clock time idle waiting on model responses.
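To see why active-CPU billing helps agent workloads, the arithmetic can be sketched as below. The rates and durations are made-up assumptions for illustration, not Vercel's actual prices; the only point is the ratio between wall-clock time and active CPU time.

```python
# Hypothetical comparison of wall-clock vs. active-CPU billing.
# All numbers are assumptions for illustration, not real Vercel pricing.

def invocation_cost(wall_seconds: float, active_cpu_seconds: float,
                    rate_per_second: float, bill_active_only: bool) -> float:
    """Cost of one invocation under the chosen billing model."""
    billed = active_cpu_seconds if bill_active_only else wall_seconds
    return billed * rate_per_second

# A typical agent request: 10 s end to end, but only 0.5 s of actual CPU work;
# the rest is spent idle, waiting on an upstream LLM API to respond.
wall, active, rate = 10.0, 0.5, 0.0000185  # $/second (assumed rate)

wall_clock_cost = invocation_cost(wall, active, rate, bill_active_only=False)
active_cpu_cost = invocation_cost(wall, active, rate, bill_active_only=True)

print(f"wall-clock billing: ${wall_clock_cost:.6f} per invocation")
print(f"active-CPU billing: ${active_cpu_cost:.6f} per invocation")
print(f"savings factor:     {wall_clock_cost / active_cpu_cost:.0f}x")  # → 20x
```

Under these assumed numbers, billing only active CPU time is 20x cheaper per invocation; the savings factor is simply wall-clock time divided by active CPU time, which is large precisely for I/O-bound agent workloads.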