During AMD's opening keynote at Computex 2024, company CEO Dr. Lisa Su revealed AMD's latest AI PC-focused chip lineup for the mobile market, the Ryzen AI 300 series. Based on AMD's new Zen 5 CPU microarchitecture, the Ryzen AI 300 series – codenamed Strix Point – is intended to offer an across-the-board improvement in mobile SoC performance, with AMD proclaiming that the Ryzen AI 300 series will offer the fastest AI inference performance within the compact and portable PC market.

Under the hood, the new mobile SoC from AMD incorporates not only their new Zen 5 CPU architecture, but also their new RDNA 3.5-based integrated graphics, and the third generation XDNA2-based NPU, the latter of which is rated to deliver 50 TOPS of performance for AI-based workloads. As a result, the Ryzen AI 300 series represents a significant upgrade in AMD's mobile chip lineup, with all of the major aspects of the platform receiving a major upgrade versus their Zen 4-era Phoenix/Hawk Point SoCs. The one thing the new platform won't get, however, is a process node improvement; AMD is building Strix Point on a 4nm node, just like Phoenix/Hawk Point before it.

For this morning's announcement, AMD has unveiled the first two Ryzen AI 300 SKUs designed for notebooks. The first of these is the Ryzen AI 9 HX 370, which features 12 Zen 5 cores with a maximum boost frequency of up to 5.1 GHz, and comes equipped with 36 MB of cache (12 MB L2 + 24 MB L3). The other chip to be announced is the Ryzen AI 9 365, which has two fewer Zen 5 cores (10 cores) and operates with a 5.0 GHz boost frequency and a 10 MB L2 + 24 MB L3 cache allocation.

AMD Ryzen AI 300 Series: Zen 5 CPU, RDNA 3.5 GPU & New NPU

As AMD gears up its mobile portfolio for the latter part of 2024, it's clear that the company is looking to align itself with Microsoft's Copilot+ AI PC program, and the performance requirements that come with it. In particular, Copilot+ requires an NPU with at least 40 TOPS of performance, of which there are none on the market at this moment, and the first won't arrive until Qualcomm's Snapdragon X devices hit store shelves later this month. Meanwhile, AMD has aimed a bit beyond this with its third generation, XDNA2-based NPU, which is rated to achieve up to 50 TOPS of INT8 performance.

With the new focus on AI performance also comes another new mobile processor naming scheme from AMD, replacing the previous system AMD announced two years ago. The familiar Ryzen branding is retained, of course, but AMD is now identifying its NPU-equipped chips with the "AI" moniker as well. And perhaps most importantly, the numbering system has been reset. AMD is now starting at 300 rather than what would have been the Ryzen Mobile 9000 series.

The Ryzen AI 300 naming system is relatively easy to decipher. The "Ryzen AI" part denotes that the chip includes an NPU within the silicon, while the 9/HX part denotes the brand level, i.e. where the part sits in the product stack. The model number is the more interesting bit: the leading 3 denotes the series, which under AMD's branding means the chip carries the third generation of the NPU, while the last two digits identify the specific SKU.
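The decoding described above can be sketched as a small parser. This is purely illustrative; the field meanings follow AMD's described scheme, but the function and its return keys are our own invention, not anything AMD publishes.

```python
def decode_ryzen_ai_name(name: str) -> dict:
    """Break down a Ryzen AI model name such as 'Ryzen AI 9 HX 370'.

    Field meanings follow AMD's described naming scheme; the parsing
    itself is just an illustration, not an official AMD utility.
    """
    parts = name.split()
    model = parts[-1]                   # e.g. '370'
    return {
        "npu_equipped": "AI" in parts,  # 'AI' denotes an on-die NPU
        "tier": " ".join(parts[2:-1]),  # e.g. '9 HX': place in the stack
        "npu_generation": int(model[0]),  # leading '3' = 3rd-gen NPU (XDNA 2)
        "sku": model[1:],               # remaining digits identify the SKU
    }
```

Running it on the flagship's name, `decode_ryzen_ai_name("Ryzen AI 9 HX 370")` reports an NPU-equipped, 3rd-generation-NPU part in the "9 HX" tier.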

Of course, we'd be remiss not to point out that this comes half a year after Intel reset its own mobile product naming system, giving us the Core Ultra 100 series. And with what is presumably the Core Ultra 200 series on the way based on Intel's forthcoming Lunar Lake mobile CPU, AMD seems to have done the only sensible thing and restarted their mobile processor numbering system so that it's similar to, but higher than, Intel's numbering system.

AMD Ryzen AI 300 Series Mobile Processors
(Zen 5/Strix Point)

| AnandTech | Cores | Base Freq | Turbo Freq | L3 Cache | Graphics | NPU | TDP |
|---|---|---|---|---|---|---|---|
| Ryzen AI 9 HX 370 | 4x Zen 5 + 8x Zen 5c (24 Threads) | 2.0 GHz | 5.1 GHz | 24 MB | Radeon 890M (16 CUs) | XDNA 2 (50 TOPS) | 15-54 W |
| Ryzen AI 9 365 | 4x Zen 5 + 6x Zen 5c (20 Threads) | 2.0 GHz | 5.0 GHz | 24 MB | Radeon 880M (12 CUs) | XDNA 2 (50 TOPS) | 15-54 W |

Diving right into AMD's first two 300 series SKUs, the HX 370 and the un-prefixed 365, both chips are aimed at high-performance notebooks. Though with a rather wide TDP range of 15 Watts to 54 Watts, the chips can conceivably be placed in anything from an ultrabook up to a desktop replacement laptop.

On the CPU side of matters, both chips are based on AMD's Zen 5 CPU architecture. Broadly speaking, Zen 5 is expected to deliver a 16% improvement in IPC versus the Zen 4 architecture, and with peak clockspeeds largely unchanged between the 8000 series and the new 300 series, we're expecting a similar improvement in overall per-core performance (more details can be found in our desktop Zen 5 companion piece).

Eagle-eyed readers will note that the maximum core count on AMD's mobile parts has finally gone up, jumping from 8 on the Ryzen Mobile 7000/8000 platforms to 12 CPU cores here. But this comes with a trade-off: AMD is using a mix of high-performance Zen 5 CPU cores and their compact Zen 5c CPU cores. So while AMD's top chips in previous generations had 8 full-fat cores, the Ryzen AI 300 series brings a more interesting mix to the table.

The top SKU, sitting in the higher-performance HX series, is the Ryzen AI 9 HX 370, a 12C/24T part. It is backed by AMD's latest Zen 5 microarchitecture and all of its IPC gains, the benefits of TSMC's N4 (4 nm) manufacturing process, and the performance and efficiency uplifts associated with jumping to the next generation of silicon.

The Ryzen AI 9 365 is a 10C/20T variant with a 5.0 GHz boost frequency. Both chips use AMD's new RDNA 3.5-based integrated graphics, though AMD hasn't disclosed specific technical details on the new RDNA 3.5 architecture or the graphics frequency of each chip. AMD did state that its RDNA 3.5 Radeon integrated graphics would feature up to 16 graphics compute units: the Ryzen AI 9 HX 370 gets the Radeon 890M with the full 16 compute units, while the Ryzen AI 9 365 features the Radeon 880M with 12 compute units.

The Ryzen AI 9 HX 370 isn't just AMD's current flagship for the next generation of AMD-powered notebooks; it also marks a pivotal shift in focus toward AI inferencing and generative AI performance. Both the Ryzen AI 9 HX 370 and the Ryzen AI 9 365 incorporate AMD's latest Xilinx-derived 3rd generation NPU. Still called Ryzen AI, the 3rd generation of Ryzen AI (the 3 in the 300 series refers to this directly) has been uplifted to provide a massive boost in performance compared to the previous iterations of the Ryzen NPU.

As AMD's chart depicts, the 3rd generation Ryzen AI NPU is claimed to hit up to 50 TOPS. The current industry standard for measuring TOPS, a combination of MAC count and frequency, commonly uses INT8 precision, which is what lower-powered AI models typically run at. This is a massive jump over previous generations: the Ryzen AI NPU within the Ryzen 8040 mobile series, built on IP from AMD's Xilinx acquisition, topped out at just 16 TOPS, falling short of Microsoft's 40 TOPS Copilot+ requirement by well over half.
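The TOPS arithmetic mentioned above is simple enough to show directly. Note that the MAC count and clock in the example are hypothetical round numbers chosen to land on 50 TOPS; AMD has not disclosed XDNA 2's actual MAC array size or NPU clock here.

```python
def npu_tops(mac_units: int, freq_ghz: float) -> float:
    """Peak TOPS from the standard formula: each MAC unit performs
    2 operations per cycle (a multiply and an accumulate).

    mac_units and freq_ghz are illustrative inputs, not AMD's
    disclosed XDNA 2 figures.
    """
    ops_per_second = mac_units * 2 * freq_ghz * 1e9  # GHz -> Hz
    return ops_per_second / 1e12                     # ops/s -> TOPS
```

For instance, a hypothetical 20,000-MAC array clocked at 1.25 GHz works out to 20,000 x 2 x 1.25e9 = 5e13 ops/s, i.e. 50 TOPS.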

If AMD's TOPS figures for the 3rd Gen Ryzen AI NPU are taken at face value, it currently sits 5 TOPS ahead of where AMD estimates Intel's Lunar Lake mobile platform will land, as well as Qualcomm's already-announced Snapdragon X Elite chip. Either way, it's a massive jump in performance compared to the first wave of silicon featuring a neural processing unit, which now looks rather modest next to the levels of performance on the table.

AMD's new Zen 5-based Ryzen AI 300 series mobile chips feature something called Block FP16, which differs from the standard 16-bit floating point format. This approach shares a single 8-bit exponent across a block of numbers instead of each floating point value carrying its own. That allows the Ryzen AI NPU to treat the individual significands as INT8 numbers, achieving performance parity with INT8 operations. The idea is that Block FP16 provides the throughput of INT8 with accuracy close to FP16, and it does so without requiring a separate quantization step. For adopters of Ryzen AI 300 chips, this Block FP16 NPU design should theoretically translate into more efficient performance in AI-driven applications, enhancing user experiences in software and tasks from gaming to productivity.
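The shared-exponent idea behind Block FP16 can be sketched in a few lines. To be clear, this is a minimal illustration of block floating point in general: the block size, the 8-bit significand range, and the rounding scheme are our assumptions, not AMD's documented XDNA 2 format.

```python
import math

def block_fp_quantize(values, block_size=4):
    """Illustrative block floating point: each block of values shares one
    exponent, and the significands are stored as INT8-sized integers.
    (Hypothetical layout, not AMD's actual Block FP16 encoding.)"""
    blocks = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        # Pick one shared exponent per block so the largest magnitude
        # in the block fits within the 8-bit significand range.
        max_abs = max(abs(v) for v in block) or 1.0
        exponent = math.ceil(math.log2(max_abs))
        scale = 2.0 ** exponent
        significands = [round(v / scale * 127) for v in block]
        blocks.append((exponent, significands))
    return blocks

def block_fp_dequantize(blocks):
    """Reverse the encoding: significand / 127 * 2^shared_exponent."""
    values = []
    for exponent, significands in blocks:
        scale = 2.0 ** exponent
        values.extend(s / 127 * scale for s in significands)
    return values
```

Because every value in a block is scaled by the same power of two, the per-value math reduces to small-integer operations, which is exactly why hardware can run such a format at INT8-like rates. The trade-off is that a block mixing very large and very small magnitudes loses precision on the small ones.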

Looking specifically at AMD's provided performance figures, our usual advice is to take these with a pinch of salt, as vendor benchmarks typically showcase favorable gains rather than overall performance. That said, they can provide useful insight into where performance is expected to land, and in the case of the Ryzen AI 9 HX 370, it does look good on paper. One interesting thing to point out before we dive into the performance:

Interestingly, AMD is comparing the chip against the Qualcomm Snapdragon X Elite, something quite new in the PC space. Typically, AMD and Intel duke it out, but Qualcomm's Arm-based chip has both AMD and Intel clambering over each other to compare their performance, which suggests that Windows on Arm is perhaps a viable pathway for the future, especially in mobile form factors.

So the AMD Ryzen AI 9 HX 370 (apologies, AMD, this is a mouthful), pitted against the Qualcomm Snapdragon X Elite, does show some notable gains if the benchmark data is taken at face value. Compared to the Snapdragon X Elite, the Ryzen AI 9 HX 370 is claimed to be 5% faster in the Geekbench 6.3 single-threaded (1T) test. Productivity tasks, as measured by UL's Procyon Office benchmark, see AMD claim a 10% boost. In multi-threaded workloads, using the Cinebench 2024 multi-threaded (nT) test, the chip achieves a 30% increase in performance. However, none of these figures is backed up by any substantial or specific data sets or scores.

Moving on to other metrics, including productivity, content creation, and multi-threaded performance, AMD also compares the Ryzen AI 9 HX 370 to Intel's Meteor Lake-based Core Ultra 9 185H. Here AMD's figures show it coming out on top, with a 4% boost in productivity tasks as measured by UL's Procyon Office benchmark, a large 40% uplift in video editing with Adobe Premiere Pro, and a 47% lead in the Cinebench 2024 multi-threaded test. The most impressive gain AMD posts is in 3D rendering with Blender, where the Ryzen AI 9 HX 370 delivers a whopping 73% increase in performance over the Core Ultra 9 185H. Again, AMD provides no underlying scores to back up these percentages.

Turning to the provided gaming figures, the Ryzen AI 9 HX 370, according to AMD, delivers on average up to 36% more performance than Intel's Core Ultra 9 185H across AMD's selected titles. In Shadow of the Tomb Raider it leads by 28%, and in Assassin's Creed Mirage by 32%. Far Cry 6 shows a 33% boost, F1 2023 hits 38%, and Borderlands, another title AMD selected, shows a 40% uptick over Intel. The more demanding Cyberpunk 2077 shows the biggest advantage, boasting 47% better performance. However, take these numbers with a pinch of salt, as no frame rates or percentile frame data are included.

Also being announced at Computex alongside the AMD Ryzen AI 300 series of mobile chips are the first laptops to carry them, with vendors like ASUS and MSI unveiling their platforms for the new laptop-focused SoCs. With just two SKUs announced by AMD, configuration variety is somewhat capped: it's either 12C/24T or 10C/20T, and that's your lot, at least for now. Models expected to be announced this week include the ASUS Zenbook S 16 and the ASUS VivoBook S 14/15/16, marketed primarily for productivity, while MSI has its Summit A16 AI+ 2-in-1, with the MSI Stealth A16 AI+ targeted primarily at gaming.

The AMD Ryzen AI 9 HX 370 and the Ryzen AI 9 365 are set to launch sometime in July. Over 100 designs from OEMs and notebook vendors such as Acer, Dell, HP, Lenovo, and MSI are expected over the coming weeks. Pricing on these notebooks is still subject to OEM announcements, but we expect it to mirror that of the previous Ryzen 8040 series notebooks.

27 Comments

  • nfriedly - Monday, June 3, 2024 - link

    Oh, wait, I take that back. It's just a typo in your table. The slide has the 12 CU version as 880M.
  • sjkpublic@gmail.com - Monday, June 3, 2024 - link

    This mixing of different core types reminds me of Intel and their Core differentiation based on power. It would be nice to just keep it simple. Here it may be that the different core types allow for different PCIe and memory bus?
  • Alistair - Tuesday, June 4, 2024 - link

    AMD is not using different core designs, not really. Same CPU core, cache is a little different and they are packed more dense so can't be clocked as high.

    They are both P cores with minor differences.
  • Kangal - Wednesday, June 5, 2024 - link

    Kind of like what Qualcomm did with their Kryo 100 processor inside the QSD 820 chipset. It was a 2+2 design, but both were the same cores. Except one was built bigger, faster, more cache, and the other was done all for efficiency.

    It worked. And it worked well. But Qualcomm still cancelled the project and fired their engineers and whole department shortly after. Which is super confusing. Because it worked really well.

    And now, Qualcomm is attempting to do something similar with the Snapdragon X Elite chipsets. They're a 6+6 design with one cluster focused on performance and the other on efficiency, but the difference doesn't look as drastic as it was with the old QSD 820 designs.
  • Gc - Monday, June 3, 2024 - link

    This sentence is difficult to parse, maybe the comparison is missing:

    "Compared to previous generations, the 3rd Gen Ryzen AI, made by Xilinx and now AMD acquired, they are called AMD Xilinx, as the Ryzen AI NPU within the Ryzen 8040 mobile series topped out at just 16 TOPS; this fell short of the Microsoft Copilot requirements by over half."
  • Nate_on_HW - Tuesday, June 4, 2024 - link

    I found it interesting that they also talked about the INT8 OPS throughput of the GPU and CPU

    Would find it interesting to get those numbers on AMDs &Qualcomms chip and maybe plot each module of the SoCs as "TOPS/Watt" (for comparison)

    I wonder if the new windows11 "on-device ML-models" would use the whole chip for computing or only the NPU.
  • Rudde - Wednesday, June 5, 2024 - link

    To my understanding the "Block FP16" is how INT8 is implemented already in most models. It loses the big advantage of FP16 (i.e., mixing small and large numbers) with no improvement when compared to INT8.
