Marvell’s shares have risen following reports of ongoing discussions with Google to develop two AI chips specifically designed for the inference phase of AI models, which aim to enhance the efficiency of large language models. This potential partnership highlights Google’s strategy to diversify its custom AI chip suppliers, moving beyond Broadcom. Additionally, Marvell’s recent integration with NVIDIA’s AI ecosystem via NVLink Fusion positions the company favorably for advancements in data center AI applications.
Google: Google, Alphabet’s cloud and AI division, designs Tensor Processing Units (TPUs) as custom accelerators for machine learning workloads in its data centers. It recently expanded partnerships for AI infrastructure, including commitments to Intel Xeon chips and a long-term deal with Broadcom for future custom AI chips. Google is negotiating with Marvell to co-develop two specialized chips aimed at enhancing TPU performance for AI model inference.
Marvell: Marvell Technology is a fabless semiconductor provider specializing in custom silicon for data centers, including interconnects, networking switches, and AI accelerators. It recently joined NVIDIA’s AI ecosystem through NVLink Fusion to support accelerated infrastructure. The company is reportedly in talks with Google to develop two new AI chips: a memory processing unit to complement TPUs, and a next-generation TPU for efficient AI inference.
```json
{
  "Inference Focus": "The potential collaboration targets chips optimized for the inference phase of AI models to improve efficiency in running large language models.",
  "Recent Integrations": "Marvell recently integrated with NVIDIA's AI ecosystem via NVLink Fusion, positioning it for advanced data center AI applications.",
  "AI Supplier Diversification": "Google is broadening its custom AI chip partnerships beyond Broadcom by entering talks with Marvell for specialized inference hardware."
}
```
