Google is reportedly in talks with Marvell to develop two new AI chips: a memory processing unit designed to complement Google's TPUs, and a new TPU tailored specifically for running AI models. The initiative fits a broader hyperscaler trend in which cloud giants partner with semiconductor companies to commission bespoke AI chips that can compete with established players. By working with Marvell, Google aims to diversify its chip sourcing beyond Broadcom, with both parts optimized for AI inference workloads to improve the efficiency of model deployments.
TPU: The Tensor Processing Unit (TPU) is Google's family of custom ASICs designed for high-performance AI training and inference in Google Cloud environments. Recent generations emphasize rack-scale designs and efficiency for agentic AI. According to the report, Google plans a new TPU variant, built with Marvell, targeted at running AI models more effectively.
Google: Google, part of Alphabet Inc., provides cloud computing services through Google Cloud and develops custom AI hardware such as Tensor Processing Units to accelerate machine learning workloads. It is actively advancing its AI infrastructure with optimizations for inference and reasoning tasks. Per the report, Google is in talks with Marvell to co-develop two new AI chips: a memory processing unit to pair with existing TPUs and a specialized TPU for AI model inference.
Marvell: Marvell Technology is a fabless semiconductor company specializing in data center solutions, including custom silicon, interconnects, and networking optimized for AI applications. It partners with hyperscalers on tailored accelerators and connectivity portfolios that address AI bottlenecks. Marvell is now reportedly negotiating with Google to design AI inference chips, leveraging its expertise in data-center silicon.
```json
{
  "Hyperscaler Trend": "Cloud giants are increasingly commissioning bespoke AI chips from semiconductor partners to challenge dominant players.",
  "Chip Diversification": "Google seeks to expand beyond Broadcom by collaborating with Marvell on custom AI silicon.",
  "Inference Optimization": "The proposed chips emphasize AI inference workloads to enhance model deployment efficiency."
}
```
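
To make the inference-optimization theme concrete, below is a minimal, illustrative sketch in JAX, the framework commonly used to target TPUs, showing how a jit-compiled inference step is dispatched to whatever accelerator backend is available. The model, shapes, and parameter names here are assumptions for illustration, not details from the report.

```python
# Illustrative sketch only: a toy "inference step" compiled with jax.jit.
# On a TPU host, jax.devices() lists TPU cores and the compiled call
# executes on them; nothing here reflects the unannounced Google/Marvell chips.
import jax
import jax.numpy as jnp


@jax.jit
def predict(params, x):
    # A single dense layer standing in for a model's inference step.
    w, b = params
    return jnp.dot(x, w) + b


key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (128, 10))   # hypothetical weight matrix
b = jnp.zeros(10)                       # hypothetical bias vector
x = jax.random.normal(key, (32, 128))   # hypothetical batch of inputs

print(jax.devices())                    # e.g. TPU cores on a TPU host
print(predict((w, b), x).shape)         # (32, 10)
```

The point of hardware like a dedicated inference TPU or a memory processing unit is to make exactly this kind of repeated, latency-sensitive call cheaper at serving time than general-purpose training silicon.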
