AMD Unveils Next-Generation AI Accelerators and High-Performance Laptop Processors at CES 2026

At the Consumer Electronics Show (CES) 2026, Advanced Micro Devices (AMD) took center stage with a series of announcements focused on artificial intelligence (AI) hardware. The company unveiled a new lineup of AI accelerators designed to push the boundaries of edge computing and data center performance, alongside advanced laptop processors that integrate cutting-edge AI capabilities directly into mobile platforms. These announcements underscore AMD’s aggressive push into the AI ecosystem, positioning its Instinct and Ryzen brands as formidable contenders against rivals like NVIDIA and Intel.

Leading the charge in AI acceleration is the AMD Instinct MI350 series, a family of GPUs tailored for high-density AI training and inference workloads. Built on the advanced CDNA 4 architecture, the MI350X variant boasts 288GB of high-bandwidth memory (HBM3E) and delivers up to 40 petaflops of FP8 performance. This represents a substantial leap over its predecessor, the MI300X, with enhancements in memory capacity and compute throughput that enable more efficient handling of trillion-parameter large language models (LLMs). AMD emphasized the chip’s energy efficiency, claiming up to 35% better performance per watt than competing solutions, which is critical for hyperscale data centers grappling with escalating power demands.
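The memory math behind the trillion-parameter claim is easy to check. A back-of-the-envelope sketch, using the 288GB capacity from the announcement and my own assumption of one byte per parameter for FP8 weights (activations, KV cache, and optimizer state would add more on top):

```python
import math

HBM_PER_GPU_GB = 288           # MI350X memory capacity (from the announcement)
PARAMS = 1_000_000_000_000     # a trillion-parameter LLM
BYTES_PER_PARAM_FP8 = 1        # assumption: FP8 weights, 1 byte each

# Weights alone for a 1T-parameter model at FP8: ~1,000 GB.
weights_gb = PARAMS * BYTES_PER_PARAM_FP8 / 1e9
# Minimum number of MI350X GPUs just to hold the weights in HBM.
gpus_needed = math.ceil(weights_gb / HBM_PER_GPU_GB)

print(f"{weights_gb:.0f} GB of weights -> at least {gpus_needed} MI350X GPUs")
```

Even under this generous assumption, a single accelerator cannot hold the model; the larger HBM capacity matters because it shrinks the number of GPUs a model must be sharded across.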

Complementing the MI350 series is the MI355X, slated for release later in 2026, which promises even greater scalability through support for AMD’s Infinity Fabric interconnect. This allows seamless clustering of up to 128 accelerators in a single rack, achieving aggregate performance exceeding 5 exaflops. For software developers, AMD highlighted tight integration with ROCm 7.0, its open-source platform for AI and high-performance computing (HPC). ROCm 7.0 introduces optimized kernels for transformer models, Mixture-of-Experts (MoE) architectures, and retrieval-augmented generation (RAG) pipelines, ensuring compatibility with popular frameworks like PyTorch and TensorFlow.
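The rack-scale figure follows directly from the per-GPU numbers. A quick sanity check, assuming each accelerator in the 128-GPU cluster sustains the MI350X’s quoted 40 petaflops of FP8 (the announcement did not give a separate per-GPU figure for the MI355X):

```python
PFLOPS_PER_GPU = 40   # MI350X FP8 throughput (assumed for the MI355X as well)
GPUS_PER_RACK = 128   # Infinity Fabric cluster size quoted in the announcement

aggregate_pflops = PFLOPS_PER_GPU * GPUS_PER_RACK   # peak, ignoring scaling losses
aggregate_eflops = aggregate_pflops / 1000

print(f"{aggregate_pflops} PFLOPS ~= {aggregate_eflops:.2f} EFLOPS per rack")
```

That works out to 5,120 petaflops, or about 5.12 exaflops of peak FP8 throughput, consistent with the "exceeding 5 exaflops" claim; real workloads would land below peak due to interconnect and scaling overheads.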

Shifting focus to consumer and enterprise mobility, AMD introduced the Ryzen AI Max series for premium laptops and workstations. The flagship Ryzen AI Max+ 395 processor, codenamed Strix Halo, packs 16 Zen 5 cores and 40 Radeon 890M compute units, paired with a dedicated Neural Processing Unit (NPU) delivering 60 tera operations per second (TOPS) of AI performance. This NPU, part of AMD’s XDNA 2 architecture, supports INT8 and FP16 precisions, enabling on-device execution of complex generative AI tasks such as real-time video editing, natural language processing, and multimodal content creation without cloud dependency.
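On-device inference at INT8 precision, which the XDNA 2 NPU supports, rests on quantization: mapping floating-point weights to 8-bit integers plus a scale factor. A minimal, generic sketch of symmetric per-tensor INT8 quantization follows; it illustrates the technique in general, not AMD’s implementation:

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: map floats into [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the INT8 codes."""
    return [qi * scale for qi in q]

weights = [0.42, -1.273, 0.051, 0.9]       # toy example weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

Storing one byte per weight instead of four is what makes large generative models fit in a laptop’s memory and power budget, and the reconstruction error stays bounded by half a quantization step.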

Strix Halo’s design excels in thin-and-light form factors, with a thermal design power (TDP) configurable from 55W to 120W. AMD demonstrated laptops running Windows Copilot+ features, including Recall and Live Captions, at native speeds far surpassing ARM-based competitors. Integrated graphics rival discrete GPUs, supporting 4K gaming at 60 frames per second and hardware-accelerated ray tracing. For professional users, the chip supports AVX-512 instructions and ECC memory, making it ideal for CAD, simulation, and AI-driven analytics workloads.

Accompanying Strix Halo is the Ryzen AI 300 series refresh, including the HX 370 model with 12 cores and 50 TOPS NPU performance. These processors target gaming laptops and mobile workstations, incorporating AMD’s Fluid Motion Frames 2 (AFMF 2) technology for AI-enhanced upscaling and frame generation. Battery life improvements stem from dynamic power management in the NPU, allowing sustained AI inference while minimizing CPU and GPU utilization.

AMD’s CES 2026 showcase also featured ecosystem partnerships. Dell, HP, and Lenovo committed to launching Strix Halo-powered devices in Q2 2026, with pre-built systems promising up to 128GB of LPDDR5X memory. On the server side, Supermicro and HPE announced MI350-based appliances optimized for Microsoft Azure and AWS, leveraging AMD’s OpenFires platform for orchestration.

A key differentiator is AMD’s commitment to open standards. The company detailed support for the Ultra Ethernet Consortium (UEC) and Open Compute Project (OCP) specifications, facilitating plug-and-play integration in heterogeneous environments. Security features, including Secure Encrypted Virtualization (SEV-SNP) and confidential computing enclaves, protect sensitive AI models and data in transit and at rest.

These announcements arrive amid intensifying competition in AI silicon. NVIDIA’s Blackwell GPUs dominate training workloads, while Intel’s Gaudi 3 and Apple’s M-series chips challenge in inference and consumer segments. AMD counters with superior price-performance ratios—the MI350X is projected at $30,000 per unit, 20% below equivalent NVIDIA H200 pricing—and a developer-friendly stack that avoids proprietary lock-in.

Looking ahead, AMD teased the MI400 series for 2027, promising CDNA “Next” architecture with optical interconnects for rack-scale systems. For laptops, the Ryzen AI 400 series, based on Zen 6, will double NPU performance to 120 TOPS, aligning with emerging standards like NPUs 2.0.

AMD’s CES 2026 portfolio solidifies its role as a multifaceted AI leader, bridging data centers, the edge, and endpoints with hardware that balances raw power, efficiency, and accessibility. As AI permeates every industry, these innovations equip developers and enterprises to deploy scalable, intelligent systems without compromise.

Gnoppix is the leading open-source AI Linux distribution and service provider. Since implementing AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI operates offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.