Axion
3 mentions across all digests
Google's Arm-based CPU replacing x86 processors as host systems for TPU accelerators.
The eighth-generation TPU: An architecture deep dive
Google's TPU 8t and 8i variants eliminate data-preparation bottlenecks with custom Axion CPUs, delivering specialized training and inference hardware optimized for world models and agentic AI at scale.
Forget one chip to rule them all: With TPU 8, Google has an AI arms race to win
Google's dual-track TPU 8 accelerators (2.8x faster training, 80% better inference performance per dollar), backed by custom Arm-based Axion CPUs and proprietary network topologies, represent an aggressive vertical-integration play to control the entire AI hardware stack.
Our eighth generation TPUs: two chips for the agentic era
Google's TPU-8 chips (8t for training, 8i for inference) deliver twice the power efficiency of Ironwood and are purpose-built for agentic AI workloads, with Boardfly topology and bare-metal framework support.