Huawei will reportedly unveil an advanced AI infrastructure technology that unifies control over chips as disparate as its own Ascend series and Nvidia's GPUs.
The software-based solution is said to lift average AI chip utilization from around 35% today to 70%, essentially doubling the productivity of an AI data center cluster by masking hardware differences and allocating resources more efficiently across AI training and inference workloads.
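Neither Huawei nor the event organizers have published technical details, but the general idea of pooling mixed accelerators behind a single scheduler can be sketched in a few lines. The toy Python below is illustrative only: the device names, memory figures, and greedy best-fit policy are assumptions chosen to show the pooling concept, not anything Huawei has confirmed.

```python
# Illustrative sketch only: Huawei has not published how its control layer works,
# so the device names, memory figures, and greedy best-fit policy below are
# assumptions used to show the pooling idea, not the actual implementation.
from dataclasses import dataclass


@dataclass
class Accelerator:
    name: str          # e.g. an Ascend or Nvidia card (hypothetical labels)
    memory_gb: int     # usable device memory
    used_gb: int = 0   # memory currently allocated to jobs

    @property
    def utilization(self) -> float:
        return self.used_gb / self.memory_gb


@dataclass
class Job:
    name: str
    memory_gb: int     # memory the job needs on a single device


def schedule(jobs: list[Job], pool: list[Accelerator]) -> dict[str, str]:
    """Greedy best-fit: each job lands on the least-loaded device that can
    still hold it, regardless of which vendor made that device."""
    placement: dict[str, str] = {}
    for job in sorted(jobs, key=lambda j: j.memory_gb, reverse=True):
        candidates = [a for a in pool if a.memory_gb - a.used_gb >= job.memory_gb]
        if not candidates:
            placement[job.name] = "unscheduled"
            continue
        target = min(candidates, key=lambda a: a.utilization)
        target.used_gb += job.memory_gb
        placement[job.name] = target.name
    return placement


if __name__ == "__main__":
    pool = [Accelerator("Ascend-0", 64), Accelerator("Ascend-1", 64),
            Accelerator("Nvidia-0", 96)]
    jobs = [Job("train-shard-A", 48), Job("train-shard-B", 40), Job("infer-C", 24)]
    print(schedule(jobs, pool))
    for card in pool:
        print(f"{card.name}: {card.utilization:.0%} of memory in use")
```

A production system would also have to account for interconnect topology, model parallelism, and each vendor's runtime, but the principle is the same: jobs land wherever capacity sits idle, so average utilization across the mixed cluster climbs.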
As the most advanced AI chip developer in China, Huawei has been at the forefront of the AI computing supremacy battle with Nvidia and other Western GPU juggernauts. Since catching up to Nvidia's Blackwell AI chip architecture is nearly impossible with the production nodes currently available in China, Huawei has been pursuing strategies that make up for the quality gap with sheer quantity.
Since it can't get its hands on powerful but expensive and geopolitically charged chips from the likes of Nvidia, China is actively trying to commoditize AI compute. Huawei clusters vast quantities of its less capable Ascend GPUs to run open-source AI models like DeepSeek, which require a fraction of the computing power that ChatGPT or Google's Gemini demand, and still achieves comparable performance.
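The arithmetic behind that quantity-over-quality trade-off is simple enough to sketch. The figures below are made up purely for illustration and are not benchmarks of any Huawei or Nvidia part: a chip with a quarter of the per-unit throughput needs roughly four times the count, or more if it also runs at lower utilization.

```python
import math


def chips_needed(target_tokens_per_s: float,
                 tokens_per_s_per_chip: float,
                 utilization: float) -> int:
    """Chips required to sustain a target aggregate throughput."""
    return math.ceil(target_tokens_per_s / (tokens_per_s_per_chip * utilization))


# Hypothetical figures, for illustration only (not vendor benchmarks):
# a "fast" chip at 4,000 tokens/s and 70% utilization vs. a "slow" chip
# at 1,000 tokens/s and 35% utilization, both serving 1M tokens/s overall.
fast_cluster = chips_needed(1_000_000, tokens_per_s_per_chip=4_000, utilization=0.70)
slow_cluster = chips_needed(1_000_000, tokens_per_s_per_chip=1_000, utilization=0.35)
print(fast_cluster, slow_cluster)  # 358 2858 -> roughly 8x the chip count
```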
This AI commoditization strategy seems to be working for now, as it leaves countries competing on the electricity needed to feed all those AI data centers rather than on individual chip or LLM capabilities. TikTok parent ByteDance, for instance, runs the most popular chatbot in China, which is also the country's biggest consumer of AI computing power. Its demand started at around 4 trillion tokens per day late last year and has since grown to more than 30 trillion tokens per day, approaching Google's 43.2 trillion daily tokens.
The new unified AI infrastructure control that Huawei is about to announce at the 2025 AI Container Application Implementation and Development Forum on November 21 could be another example of China's "using software improvements to make up for weaker hardware" AI strategy.
It remains to be seen exactly how Huawei plans to double the AI chip utilization rate with infrastructure control enhancements that can pool resources as different as its own Ascend chips, Nvidia's Blackwell GPUs, and other third-party accelerators to increase overall cluster efficiency.
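One plausible, and purely speculative, shape for such a layer is a vendor-neutral device interface with thin adapters sitting on each vendor's own runtime. The stub below assumes nothing about Huawei's actual design and does not reproduce any real CANN or CUDA API; it only shows why a scheduler that sees one common interface no longer cares whose silicon sits underneath.

```python
from abc import ABC, abstractmethod


class Device(ABC):
    """Common interface the control layer exposes, regardless of vendor."""

    def __init__(self, memory_gb: int) -> None:
        self.free_gb = memory_gb

    def allocate(self, memory_gb: int) -> bool:
        """Reserve memory for a job; False means the device is full."""
        if memory_gb > self.free_gb:
            return False
        self.free_gb -= memory_gb
        return True

    @abstractmethod
    def launch(self, kernel: str) -> None:
        """Dispatch a workload via the vendor's own runtime (stubbed here)."""


class AscendDevice(Device):
    def launch(self, kernel: str) -> None:
        print(f"[Ascend] running {kernel}")   # would call Huawei's runtime in practice


class NvidiaDevice(Device):
    def launch(self, kernel: str) -> None:
        print(f"[Nvidia] running {kernel}")   # would call Nvidia's runtime in practice


# Upper layers only see `Device`, so a mixed cluster looks uniform to the scheduler.
cluster: list[Device] = [AscendDevice(64), NvidiaDevice(96)]
for device in cluster:
    if device.allocate(32):
        device.launch("attention_forward")
```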