Meta Platforms reportedly started testing its first in-house AI training chip as part of a plan to reduce its reliance on more costly semiconductors from Nvidia and other vendors.
Reuters reported the social media giant is currently trialling the chip in a small deployment, with plans to increase production if the results are promising.
The news agency wrote Meta Platforms’ new training chip is a dedicated accelerator that handles only AI-specific tasks, making it more power efficient than the general-purpose GPUs typically used for AI workloads.
The company has stated it can achieve greater efficiency by controlling its own stack and using domain-specific silicon than it can with off-the-shelf GPUs.
Reuters reported Meta Platforms is developing the chip with Taiwan Semiconductor Manufacturing Co (TSMC).
In April 2024, Meta Platforms announced the second generation of its Meta Training and Inference Accelerator (MTIA) AI chip to support generative AI workloads. The chip also performs inference for recommendation systems across Facebook and Instagram news feeds.
Meta Platforms executives explained on the company’s Q4 2024 earnings call last month that in addition to cutting costs, the MTIA chip will be optimised to run across its network elements.
“We expect to further ramp adoption of MTIA for these use cases throughout 2025 before extending our custom silicon efforts to training workloads for ranking and recommendations next year,” CFO Susan Li explained on the earnings call.
She noted Meta Platforms hopes to expand MTIA next year to support some of its core AI training workloads and some of its generative AI use cases.
In January 2025, CEO Mark Zuckerberg revealed Meta Platforms planned capex of $60 billion to $65 billion in 2025, as part of a scheme to broaden its AI infrastructure.