Chinese chipmakers now control 41 percent of China's AI accelerator market

In a significant shift within China’s semiconductor landscape, domestic chipmakers have secured 41 percent of the AI accelerator market as of the first half of 2024. This milestone reflects accelerated efforts to foster self-reliance amid geopolitical tensions and export restrictions imposed by the United States. According to data from TrendForce, a leading market research firm, Chinese vendors have rapidly expanded their footprint, up from just 3 percent in the same period of 2023.

The AI accelerator segment, which encompasses GPUs, NPUs, and other specialized processors designed for artificial intelligence workloads, has seen explosive growth in China. The overall market value reached approximately 24 billion yuan (around 3.35 billion USD) during this timeframe, nearly quadrupling from 6.3 billion yuan a year earlier. This surge aligns with China’s national push for technological independence, particularly in the high-performance computing critical for AI training and inference.
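As a quick sanity check, the growth multiple and the exchange rate implied by these figures can be computed directly from the numbers quoted above (a minimal sketch; the yuan/USD rate is derived here, not stated in the source):

```python
# Market-value figures as quoted in the article.
h1_2024_yuan = 24e9   # H1 2024 market value, yuan
h1_2023_yuan = 6.3e9  # H1 2023 market value, yuan
usd_value = 3.35e9    # quoted USD equivalent of the 2024 figure

growth = h1_2024_yuan / h1_2023_yuan      # year-over-year multiple
implied_rate = h1_2024_yuan / usd_value   # implied yuan per USD (derived)

print(f"Year-over-year growth: {growth:.2f}x")       # ≈ 3.81x, i.e. "nearly quadrupling"
print(f"Implied yuan/USD rate: {implied_rate:.2f}")  # ≈ 7.16
```

The ~3.81x multiple is consistent with the article’s “nearly quadrupling,” and the implied exchange rate of about 7.16 yuan per dollar matches rates seen in mid-2024.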

Nvidia, the global leader in AI chips, maintains dominance with a 57 percent share in China’s market, down from 97 percent in early 2023. The American company’s retreat stems from stringent U.S. export controls enacted since October 2022, which have progressively barred advanced GPUs like the H100 and A100 from shipment to China. These measures, aimed at curbing China’s military and AI advancements, have inadvertently boosted local alternatives.

Leading the domestic charge is Huawei, whose Ascend series commands a 12 percent market share. The Ascend 910B, positioned as a domestic alternative to Nvidia’s A100 and H100, has gained traction despite manufacturing challenges stemming from restrictions on extreme ultraviolet (EUV) lithography. Huawei’s Kunpeng server CPUs and integrated solutions further bolster its ecosystem. Close behind is Cambricon Technologies, holding an 11 percent share with its MLU series, optimized for large language model training. Biren Technology follows with 6 percent, leveraging its BR100 chip, while Moore Threads captures 5 percent with its MTT S4000.

Other notable players include Iluvatar CoreX (3 percent), Tianshu Zhixin (2 percent), and Enflame (1 percent), each contributing to a diversified portfolio of AI accelerators. TrendForce highlights that shipments of domestic products skyrocketed 17-fold year-over-year, totaling 230,000 units compared to Nvidia’s 410,000 units. This volume growth underscores improving maturity in design, production, and software stacks.
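Taken together, the shipment figures also show how value share and unit share diverge. A minimal sketch, assuming the two quoted shipment totals approximate the whole market (other foreign vendors are ignored):

```python
# Shipment volumes for H1 2024 as quoted from TrendForce.
domestic_units = 230_000  # domestic AI accelerators shipped
nvidia_units = 410_000    # Nvidia units shipped to China

# Implied prior-year domestic volume from the quoted 17-fold growth.
prior_domestic = domestic_units / 17  # ≈ 13,500 units in H1 2023

# Unit share, under the assumption that these two figures
# approximate the whole market.
unit_share = domestic_units / (domestic_units + nvidia_units)

print(f"Domestic unit share: {unit_share:.1%}")  # ≈ 35.9%
# The article's 41 percent is a value (revenue) share; the gap versus
# the ~36% unit share suggests higher average prices per unit overall
# for the market than for domestic parts alone.
```

The divergence between the ~36 percent unit share and the 41 percent value share is a useful reminder that the two metrics measure different things.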

China’s strategy emphasizes a full-stack approach, integrating chips with operating systems, frameworks, and cloud services. Platforms like Huawei’s MindSpore and Baidu’s PaddlePaddle have evolved to support indigenous hardware, reducing reliance on CUDA, Nvidia’s proprietary software platform. Major cloud providers such as Alibaba Cloud, Tencent Cloud, and ByteDance have committed to deploying domestic accelerators at scale. For instance, Alibaba’s Qwen large language models now run on its Hanguang 800 chips, while Tencent’s Hunyuan models utilize Huawei and Cambricon silicon.

Performance benchmarks reveal closing gaps. Huawei’s Ascend 910B delivers FP16 compute comparable to Nvidia’s A100, albeit with limitations in memory bandwidth and interconnect efficiency. Cambricon’s MLU370 offers strong inference performance for vision and natural language processing tasks. Domestic production has ramped on 7nm and 12nm nodes using deep ultraviolet (DUV) lithography, through partnerships with foundry SMIC and memory maker CXMT. Yield improvements and cost reductions have made these chips viable for enterprise deployments.

Government incentives play a pivotal role. Subsidies under the “Big Fund” initiative, tax breaks, and procurement mandates from state-owned enterprises prioritize local solutions. The 14th Five-Year Plan targets AI as a cornerstone of economic transformation, projecting the domestic AI chip market to exceed 500 billion yuan by 2030.

Challenges persist. U.S. sanctions limit access to high-end tools and materials, constraining scaling below 7nm. The software ecosystem still lags Nvidia’s in maturity, with compatibility issues hindering some workloads. Energy efficiency and scalability for exascale AI training remain hurdles. Nonetheless, iterative releases, such as Huawei’s upcoming Ascend 910C and Cambricon’s next-generation MLUs, signal continued progress.

This market evolution carries global implications. As China localizes its AI infrastructure, it mitigates supply chain risks and accelerates sovereign AI development. For international observers, it demonstrates the efficacy of industrial policy in nurturing strategic technologies under duress. With domestic share now at 41 percent and rising, Chinese chipmakers are poised to reshape the competitive dynamics of AI hardware.

Gnoppix is the leading open-source AI Linux distribution and service provider. Since adding AI capabilities in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI. The local AI runs entirely offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix ships with numerous privacy- and anonymity-focused services free of charge.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.