TPU 8t
4 mentions across all digests
Google's training-optimized tensor processing unit, delivering 2.8x the training performance of Ironwood TPUs and scaling to 9,600 accelerators per pod.
The eighth-generation TPU: An architecture deep dive
Google's TPU 8t and 8i variants eliminate data-preparation bottlenecks with custom Axion CPUs, delivering specialized training and inference hardware optimized for world models and agentic AI at scale.
Forget one chip to rule them all: With TPU 8, Google has an AI arms race to win
Google's TPU 8 dual-track accelerators (2.8x faster training; 80% better inference performance per dollar), backed by custom Arm-based Axion CPUs and proprietary network topologies, represent an aggressive vertical-integration play to control the entire AI hardware stack.
Google Cloud launches two new AI chips to compete with Nvidia
Google's 8th-gen TPUs deliver 3x faster training and 80% better performance-per-dollar, scaling to million-chip clusters to challenge Nvidia's AI infrastructure dominance.
We're launching two specialized TPUs for the agentic era.
Google launches the TPU 8i and TPU 8t chips purpose-built for agentic AI (inference and training, respectively), signaling that specialized silicon will be critical infrastructure for autonomous agent workloads.