
Meta, the parent company of Facebook, Instagram, and WhatsApp, is testing its first in-house chip for training artificial intelligence (AI) systems.
The move aims to reduce the company's reliance on external suppliers such as Nvidia and to lower its AI infrastructure costs.
Meta's new training chip is a dedicated accelerator built for AI-specific workloads, which makes it more power-efficient than general-purpose GPUs.
The company is working with Taiwan-based chip manufacturer TSMC to produce the chip.
Meta plans to use the chip for recommendation systems and generative AI products, such as its Meta AI chatbot.
The company aims to be using its own chips for both training and inference by 2026.
Meta has forecast significant expenses for 2025, including up to $65 billion in capital expenditure driven by AI infrastructure spending.
The company has been developing its Meta Training and Inference Accelerator (MTIA) series, after a rocky start that included scrapping an earlier chip at a similar stage of development.
Meta previously relied on Nvidia GPUs for its AI infrastructure but is now pursuing in-house alternatives to cut costs and improve efficiency.