Meta is accelerating its custom silicon roadmap with four new MTIA chips planned over two years. Built with an inference-first focus and native PyTorch integration, they are designed to cost-effectively power GenAI at massive consumer scale.
Hi everyone!
Even though Meta remains one of NVIDIA’s biggest customers, it has been going all-in on its own silicon, and it clearly intends to keep doing so.
Meta is explicitly going inference-first rather than building only for giant pretraining jobs, and it can now ship a new MTIA chip (300 → 500) roughly every six months thanks to modular chiplets and a reusable rack design. That is a very different posture from the usual multi-year silicon cycle.