Qualcomm & AI – 4th-Time Lucky

Qualcomm bets against high-bandwidth memory

  • Qualcomm has launched its 4th attack on the data centre, and this time, with Humain secured (up to $2bn in revenue) and, I think, a hyperscaler pretty close to signing, Qualcomm has come up with the right product at the right time, meaning that it should finally get some traction.
  • Qualcomm has announced a complete refresh of its data centre line-up with the launch of the AI200 and AI250 chips, which are based on its NPU architecture but scaled up into much larger chips that have been optimised for data-centre-scale AI inferencing.
  • These are updates to a pre-existing proof-of-concept product called the AI100 that was launched in 2023 but has subsequently been largely forgotten (see here).
  • The AI200 (available in 2026) builds on this and brings in what Qualcomm has learned from PCs, allowing it to scale up the chip to achieve the kind of performance that is required in the data centre.
  • The AI250 (available in 2027) builds on the AI200 by adding a memory architecture that Qualcomm thinks will offer the performance of high-bandwidth memory (HBM) without having to incur its high cost and high-power consumption.
  • In my opinion, this is by far the most significant feature being launched today because if the claims hold water, then Qualcomm will have made substantial progress in fixing the memory bottleneck problem in data centre AI.
  • Furthermore, if this is a proprietary and patentable innovation, then Qualcomm will have opened a niche for itself in the AI data centre market that others will struggle to match for a while.
  • This is very surprising to me because in the 25 years I have known this company, it has never really focused on memory, nor has it had any particular expertise in this space, and suddenly it pulls a rabbit like this out of the hat.
  • Despite this, there are believers in the market, such as Humain of Saudi Arabia (see here and here), which plans to roll out a total of 6GW of AI compute over the next several years and has signed up for up to 200MW of AI200/250 compute (with potential for more).
  • If we assume that Qualcomm’s pricing is roughly in line with AMD’s, then this would equate to up to $2bn in incremental revenues (5% of FY2024 revenues) from one deployment alone.
  • It is not hard to see how this could become substantial if it is a runaway success.
  • All eyes will now turn to Humain to see how it gets on with the new product, and I will be looking to see whether it rolls out the full 200MW and whether it comes back for more.
  • This, combined with real-world testing of this product line, will give a good indication of how much compute it can produce per dollar invested and per watt consumed (tokens/$ and tokens/W), and whether the new memory architecture lives up to its billing.
  • My initial feeling is positive, as Qualcomm does not have a history of making outrageous claims about the performance of its products, and I think its first customer is most interested in economics rather than horsepower.
  • Humain claims that with the combination of Nvidia for training, Groq LPUs, AMD and Qualcomm for inference, abundant real estate and cheap energy, it can produce compute 40% more cheaply than anyone else.
  • This will be a strong incentive to run one's inference in Humain data centres, and the ultimate endgame for both Saudi Arabia and the UAE is to export compute, further diversifying their economies away from petrochemicals.
  • It is no coincidence that this release comes right before the Future Investment Initiative (FII) Institute’s 2025 annual conference, which was launched in 2017 by Saudi Arabia’s Public Investment Fund (PIF).
  • Often referred to as the Davos of the Desert, it aims to bring everyone together in one place to further the Saudi Vision 2030 initiative.
  • This involves big investments both in AI and infrastructure to develop the economy of Saudi Arabia, and hence, it is very well attended.
  • Humain currently plans to roll some of this out in 2026, and so I think that the statement made on the FQ3 25 conference call that data centre would produce revenues in 2028 is overly conservative.
  • By all accounts, this should lead to some revenues in 2026, with a lot more coming in 2027, should Qualcomm be successful in landing one of the hyperscalers as a customer for this product.
  • Hence, I see further reasons to support my view that Qualcomm’s medium-term estimates remain too low and that there is scope for both a PER multiple upgrade to bring it closer into line with its peers and an upgrade to profit and cash flow estimates.
  • This keeps me very comfortable with my holding in Qualcomm, and I continue to think that the shares have much further to go.
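The $2bn figure above can be reproduced with a simple back-of-envelope calculation. The sketch below is purely illustrative: the $/W pricing and the FY2024 revenue base are my assumptions (roughly AMD-class accelerator economics and Qualcomm's reported ~$39bn FY2024 revenues), not disclosed deal terms.

```python
# Back-of-envelope sketch of the Humain deal maths in the text.
# All inputs are illustrative assumptions, not sourced deal terms.

deployment_mw = 200          # Humain commitment: up to 200 MW of AI200/250 compute
capex_per_watt = 10.0        # assumed vendor pricing in $/W, broadly AMD-class

# Implied incremental revenue from the full 200 MW rollout
revenue = deployment_mw * 1e6 * capex_per_watt   # 200e6 W * $10/W = $2.0bn

fy2024_revenue = 39e9        # Qualcomm FY2024 revenues, roughly $39bn
share = revenue / fy2024_revenue                 # ~5% of FY2024 revenues

print(f"Implied revenue: ${revenue / 1e9:.1f}bn "
      f"({share:.1%} of assumed FY2024 revenues)")
```

Note that the logic also runs in reverse: taking the article's $2bn and 200MW as given implies pricing of about $10 per watt deployed, which is the sense in which the deal is "roughly in line with AMD's" pricing. The tokens/$ and tokens/W figures that real-world testing will reveal are a separate question from this capex arithmetic.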

RICHARD WINDSOR

Richard is the founder and owner of the research company Radio Free Mobile. He has 16 years of experience in sell-side equity research. During his 11-year tenure at Nomura Securities, he focused on equity coverage of the global technology sector.