Meta vs. Nvidia – No competition

Meta is still tinkering with silicon. 

  • Meta’s new AI chip is being framed as a competitor to Nvidia, but the fact that it does not even remotely match Nvidia’s performance or economics, combined with Meta’s intention to spend $14bn on Nvidia silicon anyway, tells me that MTIA is a learning process whose output can be put to work in Meta’s regular business.
  • Nvidia dominates the landscape for cloud-based AI training and inference with about 85% market share, and so far no one has been able to make a real dent in its position.
  • This is largely due to the CUDA platform that it launched in 2006, which is beloved by developers, as well as the fact that its product cadence is so fast that everyone else is always one or more generations behind.
  • Everyone else is trying to catch up, and Nvidia’s largest customers have developed, or are actively developing, in-house silicon, but so far no one has been in a position to replace Nvidia in its data centres.
  • For Microsoft, Amazon and Google, this task is doubly difficult because their customers demand Nvidia silicon in their data centres, giving them little choice.
  • Meta’s position is simpler in that its data centres serve only internal customers, meaning that it has no external pressure to deal with.
  • However, even in this situation, it is clear that Meta still plans on using Nvidia in its data centres as it intends to buy 350,000 GPUs from Nvidia despite its internal chip development.
  • This makes sense because when one looks at Meta’s latest chip, it immediately becomes clear that it is very far from competing with Nvidia’s latest offerings.
  • Meta’s chip is called MTIA (Meta Training and Inference Accelerator); version 2 measures 5cm by 4cm, is manufactured on TSMC’s 5nm process and contains 2.35bn transistors.
  • By contrast, Nvidia’s latest offering, Blackwell, is huge: made on TSMC’s 4nm process, it has 208bn transistors, roughly 89x more.
  • Furthermore, Nvidia has spent a lot of time working out how to build a framework where 72 of these chips can seamlessly work together as if they were one single piece of silicon.
  • Meta is building a similar system in which many MTIAs can work together, but it is clearly quite a long way behind Nvidia.
  • This means that for large language model (LLM) workloads, which run to tens or hundreds of billions of parameters, paying Nvidia’s 70% gross margin is still going to be cheaper than using the in-house solution.
  • Meta Platforms has a thriving social media business that uses deep learning algorithms to analyse social media content and recommend it to its users to keep them engaged.
  • These algorithms are a tiny fraction of the size of the Llama-based LLMs that Meta has created, and as such, do not need the latest and greatest hardware to run efficiently.
  • Consequently, the silicon that Meta creates as part of its quest to catch up with Nvidia can be used to power its revenue-generating services.
  • Because it faces no pressure from external customers, Meta will eventually be able to replace Nvidia in its data centres, and I expect it to start with the smallest LLMs and work its way up.
  • However, Nvidia is not standing still, and every time Jensen walks out on stage at GTC, we are going to see it set the bar higher for Meta and the others to jump over.
  • The net result is that until there is a change in how generative AI services are developed, Nvidia’s position looks secure even from those customers who are developing their own silicon.
  • I think that this will last for as long as the key development platform for generative AI remains at the silicon level and there are signs that the creators of foundation models are aiming to change this.
  • By offering software development kits for their foundation models, the owners of these models are encouraging developers to shift their focus from the platform of the training silicon to the foundation model itself.
  • If successful, this would weaken Nvidia’s grip on the market for AI silicon in the cloud but, even in the best instance, this will take a long time.
  • Hence, I think that Nvidia’s position remains very secure in the short to medium term meaning that the performance of the company depends on the performance of the overall market.
  • Here, the outlook is also good for as long as the AI bubble does not pop and while there are a few worrying signs, 2024 looks like it will be another landmark year.
  • Hence, I think that Nvidia’s shares can still go up, but the real money has already been made, leading me to think that there is likely to be more upside elsewhere, although it will probably come with more risk.
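The transistor gap quoted above can be sanity-checked with a quick back-of-the-envelope calculation, using only the figures given in this note (2.35bn transistors for MTIA v2, 208bn for Blackwell):

```python
# Transistor counts as quoted in this note (not official spec sheets)
mtia_v2_transistors = 2.35e9      # Meta MTIA v2, TSMC 5nm
blackwell_transistors = 208e9     # Nvidia Blackwell, TSMC 4nm

# Ratio of transistor budgets between the two chips
ratio = blackwell_transistors / mtia_v2_transistors
print(f"Blackwell has roughly {ratio:.0f}x the transistors of MTIA v2")
# prints "Blackwell has roughly 89x the transistors of MTIA v2"
```

This is, of course, a crude proxy: raw transistor count says nothing about yield, power, software maturity, or the rack-scale interconnect that lets 72 Blackwell chips behave as one, but it illustrates just how far apart the two designs sit.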

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience working in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the Global Technology sector.