I know AI is the current trend, and it's embarrassing to launch products without some AI element — but isn't the marketing around these "AIPCs" getting a bit excessive?
Have you noticed the recent PC market trends? Many processors and laptops prominently display "I'm an AIPC with NPU" on their product pages, boasting about how many TOPS of computing power their NPU provides and how many billion-parameter large models they can run locally. It's quite impressive.
When buying a computer, do you pay special attention to this specification? It seems like with this tiny NPU's assistance, you're not just buying a computer, but a genuine "AI terminal."
But is this thing actually useful? After thorough research, I found it has some utility, but not much.
Let's first explain what this heavily hyped "NPU" actually is. Its full name is "Neural Processing Unit," and as the name suggests, it has everything to do with neural networks. An NPU loosely mimics the way neurons and synapses operate, integrating storage and computation, and serves as a specialized processor for accelerating neural network calculations, used mainly in applications like image recognition and language processing.
Consumer-grade NPUs generally come integrated into the processor. For example, AMD's latest Ryzen AI series and Intel's Core Ultra series both set aside a dedicated area on the die for the NPU module. Some data centers and autonomous vehicles also use standalone NPU chips for inference.
Some might ask: When it comes to AI computing and inference, we usually think of GPUs first, and most laptop computers and smartphone processors already have integrated GPUs — so why not use the existing GPU instead of dedicating precious space specifically for NPU?
The reason everyone is turning to the NPU ultimately comes down to efficiency. Remember when we discussed the "LPU," a chip specialized for large language model inference that is faster and more power-efficient than a GPU? The NPU is similar: it's purpose-built for neural networks, whose workloads largely boil down to matrix multiplication (convolutions, for example, are typically lowered to exactly that kind of dense matrix math).
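To make the "it all boils down to matrix multiplication" point concrete, here is a tiny NumPy sketch of my own (purely illustrative, not any vendor's code) showing how a convolution can be rewritten as a single matrix multiply via the classic im2col trick, which is exactly the kind of dense workload an NPU's multiply-accumulate arrays are built for:

```python
import numpy as np

def conv2d_via_matmul(x, w):
    """Lower a 2D convolution to one matrix multiply (im2col).

    x: input feature map, shape (H, W)
    w: convolution kernel, shape (kH, kW)
    Returns the 'valid' convolution output, shape (H-kH+1, W-kW+1).
    """
    H, W = x.shape
    kH, kW = w.shape
    oH, oW = H - kH + 1, W - kW + 1

    # im2col: unfold every kH x kW patch into one row of a big matrix.
    patches = np.empty((oH * oW, kH * kW))
    for i in range(oH):
        for j in range(oW):
            patches[i * oW + j] = x[i:i + kH, j:j + kW].ravel()

    # The convolution is now a single (oH*oW, kH*kW) x (kH*kW,) matmul,
    # i.e. the dense workload NPU MAC arrays are optimized to chew through.
    return (patches @ w.ravel()).reshape(oH, oW)

# Quick sanity check against a naive sliding-window implementation.
x = np.random.rand(8, 8)
w = np.random.rand(3, 3)
ref = np.array([[(x[i:i+3, j:j+3] * w).sum() for j in range(6)] for i in range(6)])
assert np.allclose(conv2d_via_matmul(x, w), ref)
```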
So how capable is an NPU, exactly? I found a graduate thesis from EPFL in Switzerland that compared three kinds of processors: Intel's i7-11800H laptop chip representing the CPU, NVIDIA's RTX 3070 Max-Q graphics card representing the GPU, and the Rockchip RK3588, an ARM chip commonly used in Android development boards and high-end TV set-top boxes, whose built-in NPU stood in for the NPU. They ran YOLOv5 models to compare the performance and energy consumption of a typical CPU, GPU, and NPU.
The paper's conclusion: although the RK3588's integrated NPU has modest raw compute (6 TOPS) and can't match the GPU in absolute speed, its energy efficiency is outstanding, far and away the best of the three.
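If you want a feel for this kind of comparison on your own machine, here is a rough sketch of my own (not the thesis's setup, which used Rockchip's own toolchain for the NPU) that times one exported YOLOv5 model on whichever ONNX Runtime execution providers happen to be installed. Whether any of them actually reaches an NPU depends entirely on your hardware and packages, "yolov5s.onnx" is just a placeholder, and this only measures speed; power measurement needs external tooling.

```python
import time
import numpy as np
import onnxruntime as ort

MODEL = "yolov5s.onnx"          # assumed: a YOLOv5s model exported to ONNX
INPUT = np.random.rand(1, 3, 640, 640).astype(np.float32)

candidates = [
    ["CPUExecutionProvider"],    # plain CPU baseline
    ["CUDAExecutionProvider"],   # NVIDIA GPU, if onnxruntime-gpu is installed
    ["DmlExecutionProvider"],    # DirectML on Windows (GPU, and some NPUs)
    ["QNNExecutionProvider"],    # Qualcomm NPUs via onnxruntime-qnn
]

for providers in candidates:
    if providers[0] not in ort.get_available_providers():
        print(f"{providers[0]}: not available on this machine")
        continue
    sess = ort.InferenceSession(MODEL, providers=providers)
    input_name = sess.get_inputs()[0].name
    sess.run(None, {input_name: INPUT})            # warm-up run
    t0 = time.perf_counter()
    for _ in range(50):
        sess.run(None, {input_name: INPUT})
    dt = (time.perf_counter() - t0) / 50
    print(f"{providers[0]}: {dt * 1000:.1f} ms per frame")
```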
Recently, Korean research teams unveiled NPU technology that they claim is 60% faster than mainstream GPUs while using 44% less power, which could meaningfully cut operating costs for certain AI cloud services.
This makes manufacturers' motivation for chasing the NPU quite obvious: good AI performance while staying power-efficient. Who wouldn't love that? Especially smartphone makers, who need battery life and thinness and are desperate for any differentiation they can squeeze out of a phone's precious internal real estate.
Today's smartphone chips with integrated NPUs improve response speed, cut network latency, and protect privacy, all while drawing less power than calling on the GPU directly. In practice, they can not only run lightweight large models locally for chatting or adding subtitles to videos, but also handle multimodal tasks that "see" the world, like identifying objects through the camera or translating a foreign-language menu.
Additionally, with smartphone manufacturers competing in imaging capabilities, NPU can accelerate computational photography, including portrait blur, night scene noise reduction, and super-resolution algorithms.
However, the ideal is one thing and reality is another. Despite the NPU's excellent energy efficiency, the prerequisite for a so-called AIPC is that it's actually usable, and in reality, the software on our computers that's genuinely adapted to invoke the NPU can be counted on one hand.
Starting with Windows: after searching extensively online, I found that much of the NPU-optimized software ships only in Arm versions. But as we know, Arm-architecture Windows PCs are a rarity, as the dismal sales of Snapdragon X Elite laptops attest.
Even less software can invoke the NPU under the traditional x86 architecture. I found that the AI features in DaVinci Resolve, Capture One, Affinity Photo 2, and others currently cannot use NPU acceleration on x86 Windows.
Gaming assistance tools like GamePP and Douyu Game Companion default to the GPU or cloud processing for compatibility reasons, treating the NPU as a last resort. Douyu even states explicitly on its website that only Intel's NPU is supported, not AMD's.
In other words, the heavily promoted NPUs from Intel and AMD currently have abundant computing power but extremely limited usage scenarios.
For instance, after extensive searching, I finally found one function that can invoke NPU on x86 Windows computers — CapCut's "one-click background removal." However, since background removal is relatively lightweight, the actual experience doesn't seem significantly different from machines without NPU.
I also found that Premiere Pro has a small "audio classification" feature that can use the NPU, but it feels more like testing the waters. Our post-production colleagues say that outside of major film-industry productions, ordinary projects don't need audio separated in such fine detail, so its practical value is quite limited.
While poking at these various AI features, we kept Windows Performance Monitor open to see when the NPU would wake up. Unfortunately, apart from the brief spike during CapCut's one-click background removal, it sat idle most of the time, completely undisturbed.
That's a problem, because the processor gives up a significant chunk of die area to this NPU, and letting it sit idle is hard to excuse. Below is a die diagram of an AMD Ryzen AI 300 series processor, with the NPU taking up a sizable area in the upper right corner. Wouldn't that space have been better spent on more CPU or GPU cores?
It also means that gamers who don't need an NPU at all, and who could have enjoyed a real performance jump from process improvements and extra cores, now have to pay more for processors and graphics cards because of this uninvited NPU. Where's the justice in that?
Even in the tightly unified Mac ecosystem, NPU adoption isn't great. Take Lightroom on the Mac: it supported local AI noise reduction on the Mac's NPU for a while, but the feature was later removed because of numerous bugs, and noise reduction now relies primarily on the GPU.
DaVinci Resolve on the Mac supports AI features like one-click masking and noise reduction, which can be set to run on the Apple Neural Engine (the NPU in Apple's M-series chips). But after consulting our post-production colleagues, they say it's quite buggy and they hardly dare use it for fear of derailing their workflow.
Still not giving up, I asked a post-production colleague to test running with NPU and GPU settings separately to see if there were speed differences. The result was... no difference. Real-time preview speeds were similar.
We used the performance-monitoring tool asitop to watch NPU usage while running AI features with NPU acceleration selected. No matter how we used features like Magic Mask, noise reduction, and smart subtitles, NPU usage stayed at zero while the GPU worked flat out.
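Part of the explanation may be how Apple exposes the Neural Engine to apps in the first place. With Core ML, a developer only requests a set of compute units; the framework then decides, op by op, where the model actually runs, and it silently falls back to the GPU or CPU whenever a layer isn't ANE-friendly. Below is a minimal coremltools sketch of that request, purely illustrative: "model.mlpackage" and the input name "input" are placeholders for whatever model you have converted.

```python
import numpy as np
import coremltools as ct

# compute_units is only a *request*: Core ML partitions the model per-op
# and can still run everything on the GPU or CPU if the ANE can't handle it,
# which would match what we saw in asitop (GPU busy, Neural Engine flat).
model = ct.models.MLModel(
    "model.mlpackage",                        # placeholder: a converted Core ML model
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # ask for CPU + Neural Engine only
)

# Run one prediction; the input name and shape depend on the converted model.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
out = model.predict({"input": dummy})
print(list(out.keys()))
```

In other words, even an app's "run on NPU" toggle is just a hint to Core ML; if nothing in the model graph qualifies, the Neural Engine stays at zero, exactly as the monitoring showed.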
We also tested CapCut's "one-click background removal" which invokes NPU on Windows, and guess what — Mac NPU usage still didn't budge. Could the monitoring software be broken?
Only when we opened Photo Booth on the Mac did NPU usage finally twitch. Great, so the monitoring software works fine; the good steel just isn't going into the blade.
Beyond the Windows and Mac frustrations, plenty of commonly used software, Photoshop included, doesn't support NPU acceleration for its AI features on any platform. Photoshop's official documentation mentions using the GPU (OpenCL, D3D12, Metal) to accelerate visual and AI features, with no mention of NPU support. And its recently added, flashy AI generation features run in the cloud.
After all this investigation and research, I can only describe PC NPU support as dismal. In other words, having an additional chip with xx TOPS computing power in your computer is essentially useless.
I know some might mention the power-saving advantage I haven't covered, but honestly, the power saved on PC is negligible.
Let me summarize: the NPU currently works well on smartphones, where it saves power, accelerates on-device AI, and fits where phone features are heading. On PCs, though, it's currently more hype than practical utility: most of its use cases are non-essential, and the difference in experience is minimal.
Will everyone need an AIPC in the future? I think we'll have to wait for more mainstream software to tap the NPU's potential and put it to work in the domains where it fits. Otherwise, if it's just AI for AI's sake, the processors might as well be sold cheaper.