Meta Platforms revealed progress on the next phase of its infrastructure designed to accelerate AI deployments, teasing a custom chip to run AI models and power its metaverse push.
The infrastructure upgrade forms part of efforts to establish a scalable foundation for emerging opportunities including generative AI and its big bet in the metaverse, Meta Platforms stated in a blog.
It explained the move is necessary as computing needs “will grow dramatically over the next decade” due to the company’s commitment to launch more AI tools across its applications and build a “long-term vision” for its metaverse unit.
Aparna Ramani, VP of engineering for infrastructure, said building “infrastructure at scale is what our long-term research requires”, adding “innovation without it is impossible”.
Works in progress include its debut in-house AI chip, dubbed Meta Training and Inference Accelerator (MTIA), which it claimed is more powerful than CPUs at running AI workloads.
It also highlighted a cost-effective, next-generation data centre project designed to connect “thousands of AI chips together for data centre-scale” training clusters, complementing its portfolio of data-intensive hardware.
Additionally, the Facebook and Instagram owner highlighted a supercomputer it built earlier this year featuring 16,000 GPUs.
Meta Platforms stated it deploys the technology to train large AI models “to power new AR tools, content understanding systems, real-time translation technology and more”.
On an earnings call late last month, CEO Mark Zuckerberg said Meta Platforms is “no longer behind” in establishing its AI infrastructure, naming the technology as the company’s key capex driver in recent years.