Computex Chronicles Part 2: AMD Jumps Into Copilot+ PCs, Outlines Instinct GPU Roadmap
The second in this year's series of Computex CEO keynotes, and the show's official opening keynote, was delivered by AMD (Nasdaq: AMD) CEO Dr. Lisa Su. AMD is in the unique position of being the main competitor to Nvidia (NVDA) on GPUs (both for infrastructure-focused AI acceleration and in PC hardware) and to Intel (INTC) on CPUs (for both servers and PCs), so its efforts are getting more attention than ever before.
Given the company’s broad product range, it’s perhaps not surprising that Dr. Su’s keynote covered a wide range of topics, including a few that had nothing to do with GenAI. Key to the news was a new CPU architecture called Zen 5, as well as a new NPU architecture called XDNA 2. What’s particularly interesting about XDNA 2 is its support for the Block FP16 (block floating point) data type, which provides the speed of 8-bit integer math with close to 16-bit floating point precision. According to AMD, this is a new industry standard, meaning it’s something existing models can take advantage of, and AMD’s implementation is the first in hardware.
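The basic idea behind block floating point is that a group of values shares a single exponent, so each element only needs to store a small integer mantissa that hardware can multiply at integer speed. The sketch below illustrates the concept in Python; the function name, block size, and mantissa width are illustrative assumptions, not details of AMD's actual XDNA 2 format.

```python
# Illustrative sketch of block floating point quantization.
# Assumptions (not AMD's spec): 16-element blocks, 8-bit signed mantissas.
import numpy as np

def quantize_block_fp(values: np.ndarray, block_size: int = 16,
                      mantissa_bits: int = 8) -> np.ndarray:
    """Quantize a 1-D array so each block shares one power-of-two exponent.

    Each element keeps only a signed integer mantissa, so multiplies can
    run at integer speed, while the shared exponent preserves much of
    floating point's dynamic range across blocks.
    """
    out = np.empty_like(values, dtype=np.float64)
    qmax = 2 ** (mantissa_bits - 1) - 1  # e.g. 127 for 8-bit mantissas
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        max_abs = np.max(np.abs(block))
        if max_abs == 0:
            out[start:start + block_size] = 0.0
            continue
        # Shared power-of-two scale chosen so the largest value in the
        # block fits within the signed mantissa range.
        scale = 2.0 ** np.ceil(np.log2(max_abs / qmax))
        mantissas = np.clip(np.round(block / scale), -qmax, qmax)
        out[start:start + block_size] = mantissas * scale
    return out

x = np.linspace(-1.0, 1.0, 32)
q = quantize_block_fp(x)
```

Because every element in a block is reduced to a small integer times a shared scale, the data path looks like int8 math to the multiplier, which is roughly why AMD can claim integer-like throughput with near-FP16 accuracy for models that tolerate this format.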
Computex has a long history of being an important launching point for traditional PC components, and AMD kicked things off with its next-generation desktop hardware, the Ryzen 9000 series, which doesn’t have an integrated NPU. What those chips do have, however, is the kind of performance that gamers, content creators, and other PC system builders are constantly looking for in traditional PC applications. Let’s not forget that this still matters, even in the age of AI-powered PCs.
Of course, AMD also had new offerings in the AI PC space: a new series of mobile-focused parts for the Copilot+ PCs that Microsoft and its partners announced a few weeks ago. AMD chose the Ryzen AI 300 series label to reflect the fact that this is the third generation of AMD-designed laptop chips with an integrated NPU. A little-known fact is that AMD’s Ryzen 7040, announced in January of 2023, was the first x86 chip with built-in AI acceleration; the Ryzen 8040 followed at the end of last year.
The fact that a new chip was coming wasn’t a huge surprise — AMD even said so at the launch of the 8040 — but what was unexpected was the amount of new technology AMD had incorporated into the AI 300 (which was codenamed Strix Point). It features new Zen 5 CPU cores, an upgraded GPU architecture AMD calls RDNA 3.5, and a new NPU built on the XDNA 2 architecture that delivers up to an impressive 50 TOPS (trillions of operations per second).
What’s also surprising is how quickly laptops with Ryzen AI 300 chips are hitting the market. The systems are expected to be released in July of this year, just a few weeks after the first Qualcomm (QCOM) Snapdragon X-based systems. One wrinkle is that Microsoft’s Copilot+ software won’t be ready when the AMD-powered PCs first ship. Apparently, Microsoft did not expect x86 vendors like AMD and Intel to move so quickly and prioritized its work for Qualcomm’s Arm-based hardware. As a result, these systems will ship without the full Copilot+ experience, meaning they will need a software upgrade, likely in early fall, to become fully capable next-generation AI PCs.
This greatly accelerated timeframe (Intel is also expected to announce its own new chips in its keynote this week) has been incredibly interesting and impressive to watch. Early in the development of AI-enabled PCs, the common thinking was that Qualcomm would beat both AMD and Intel by about 12 to 18 months in delivering a part that met Microsoft’s 40+ NPU TOPS performance specification. Instead, the strong competitive threat from Qualcomm inspired the two x86 PC semiconductor incumbents to move their schedules forward, and they appear to have succeeded. It’s one of the many reasons why the AI-powered PC market has already proven to be an exciting (and inspiring) development worth watching.
On the data center side, AMD previewed its fifth-generation Epyc CPUs (codenamed Turin) and the latest Instinct MI300-series GPU accelerators. As with its PC chips, AMD’s new server CPUs are built around the Zen 5 core architecture, with performance on certain AI workloads that AMD claims is up to 5x faster than their Intel counterparts. For GPU accelerators, AMD announced the Instinct MI325, which offers twice the HBM3E memory of any card on the market. More importantly, as Nvidia did last night, AMD also revealed an annual cadence of improvements for its GPU accelerator line and offered details through 2026. Next year’s MI350, which will be based on a new CDNA 4 GPU compute architecture, will leverage the increased memory capacity and the new architecture to deliver a claimed 35x improvement over current cards. For perspective, AMD believes this will give it a performance lead over Nvidia’s next-generation products.
AMD is one of the few companies that has been able to gain any traction against Nvidia in AI acceleration, so any improvements to this product line are bound to be well received by anyone looking for an alternative to Nvidia, whether big cloud compute providers or enterprise data centers.
Overall, the AMD story continues to progress and impress. It’s remarkable to see how far the company has come in the past 10 years, and it’s clear that it continues to be a driving force in the world of computing and semiconductors.
Disclaimer: Some of the author’s clients are vendors in the technology industry.
Disclosure: None.
Original source: author
Editor’s note: The summary points for this article were selected by Seeking Alpha editors.