
HP ZGX Nano G1n AI Station review: Compact server power with Nvidia DGX Spark
Small, black, expensive.
The HP ZGX Nano G1n AI Station aims to be the perfect entry point for AI developers. With 128 GB of RAM and Nvidia technology on board, it promises server power to go. However, Nvidia's DGX Spark ecosystem is only partially convincing.
Verdict – Entry into the AI ecosystem
The HP ZGX Nano G1n AI Station primarily proves its strengths in the realm of professional specialized applications. Its biggest asset is undoubtedly the Blackwell architecture. With support for the new FP4 data format and the NVFP4 and NVFP8 models optimized by Nvidia, AI applications can be executed faster and occupy significantly less VRAM. This is a technological advantage currently only found with Nvidia. On top of that, this gadget grants access to Nvidia's DGX ecosystem, which makes it a breeze to scale small experiments up to massive server clusters. Developers with a firm grasp of Nvidia's software stack will find this "bridge to the data center" invaluable. It is impressive just how much AI capability is possible in such a small space.
However, the high price of around 4,000 euros raises expectations that the device cannot quite fulfill in terms of haptics and ergonomics. The plastic chassis feels too basic for this price range, and the permanently audible fan as well as the high power consumption of up to 50 watts in idle mode tarnish the impression in everyday use. We would have wished for more refinement here, especially since the device will often end up sitting directly on a desk. After all, this device is intended for those who really want to work with artificial intelligence.
Team Red, on the other hand, is a strong competitor if all you want is a lot of local memory for running big AI models and don't need the features that Nvidia offers. Systems based on the AMD Strix Halo platform, such as the Bosgame M5 AI Mini Desktop, the Framework Desktop, or the GMKtec EVO-X2 with the Ryzen AI Max+ 395, offer generous memory configurations and strong performance in many standard AI applications—at a significantly more attractive price. Nevertheless, the HP station remains the first choice for Nvidia specialists.
Price and availability
Prices for the HP ZGX Nano G1n AI Station differ massively. The 4 TB model is currently listed on Amazon for $4,759, while the HP Store charges a much higher $7,399.
HP ZGX Nano G1n AI Station is a pretty long name for what is essentially Nvidia's DGX Spark reference platform. Competitors like Gigabyte, Asus, Acer, and Dell all provide AI developer kits, so HP isn't the only one using this strategy; the differences usually lie in details such as minor adjustments to the software stack or the chassis design. The concept itself proves to be extremely flexible: The compact box can be operated either as a dedicated "headless" server in the network or – thanks to the available ports – as a fully fledged workstation with mouse, keyboard, and monitor directly on the desk. We comfortably used the Crowview Note for this purpose in our test. Particular attention is paid to easy setup: Nvidia and HP want to make the entry into local AI development as seamless as possible and include ready-made projects, so-called Blueprints, right out of the box.
Specifications
| Specs | HP ZGX Nano G1n AI Station |
| --- | --- |
| Processor (SoC with graphics chip) | NVIDIA GB10 Grace Blackwell Superchip (20 cores: 10x Cortex-X925 + 10x Cortex-A725); integrated NVIDIA Blackwell GPU (up to 1,000 TOPS at FP4) |
| Memory | 128 GB LPDDR5X unified memory (273 GB/s bandwidth, soldered) |
| Storage | 1 TB or 4 TB M.2 2242 NVMe SSD (PCIe Gen4) |
| Ports | 3x USB-C 3.2 (20 Gbit/s), 1x HDMI 2.1a, 1x 10 Gbit Ethernet, 2x QSFP (200 Gbit/s interconnect) |
| Networking | Wi-Fi 7 (MediaTek MT7925), Bluetooth 5.4 |
| Dimensions | 150 x 150 x 51 mm (W x D x H) |
| Weight | 1.25 kg |
| Power supply | 240 watt USB-C (external) |
| OS | NVIDIA DGX OS (based on Ubuntu Linux) |
| Price | from approx. 3,605 euros (street price) |
Case and Connectivity – Small and simple
The HP ZGX Nano G1n AI Station features an extremely compact form factor with dimensions of approximately 15 × 15 × 5 cm and a weight of just 1.25 kg. This makes it easy to accommodate on any desk. The front section is eye-catching, consisting almost entirely of a distinctive grid design that ensures optimal airflow for the powerful internal components. The HP logo and a discreet "AI" logo are integrated here.
HP emphasizes sustainability with this model. The AI Station contains up to 40% recycled plastic, up to 75% recycled aluminum, and at least 20% recycled steel. Additionally, the outer packaging is made of 100% sustainable and recyclable materials. The entire exterior of the AI Station is made of black plastic.
The port selection is geared towards professional requirements. On the back, there is a fast 10 Gbit Ethernet port as well as two QSFP ports for Nvidia's high-speed interconnect, enabling scaling by connecting multiple units together. Three USB-C ports are available for peripherals. An HDMI 2.1 output allows for connecting a monitor for the console, even though the device will likely often be used in "headless" mode. The system communicates wirelessly via Wi-Fi 7 and Bluetooth 5.4.
Performance – Specialized for AI, limited by LPDDR5X
The centerpiece of the ZGX Nano G1n is the Nvidia GB10 chip, whose performance profile is roughly based on an Nvidia GeForce RTX 5070, albeit with a significantly different focus. Nvidia has consistently optimized the architecture here for AI workloads, which is noticeable in a changed distribution of TMUs and Tensor cores at the expense of classic ROPs. In practice, however, the memory specifications slow down this powerful chip. While an RTX 5070 can rely on 12 GB of VRAM with a memory bandwidth of 672.0 GB/s, the GB10 has to make do with 273.2 GB/s.
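To put that gap into perspective, a rough back-of-envelope estimate helps. The following is only a sketch under the common assumption that token generation is memory-bandwidth bound, and the model size used is illustrative rather than measured on the device:

```python
# Rough, memory-bandwidth-bound estimate of LLM decode speed.
# Assumption: every generated token requires reading all active model
# weights from memory once; real throughput is lower due to overheads.

def max_tokens_per_second(bandwidth_gb_s: float, weight_size_gb: float) -> float:
    """Upper bound on tokens/s if decoding is purely bandwidth-limited."""
    return bandwidth_gb_s / weight_size_gb

# Illustrative example: a dense 70B-parameter model quantized to 4 bit
# occupies roughly 35 GB of weights.
gb10 = max_tokens_per_second(273.0, 35.0)      # ~7.8 tokens/s ceiling
rtx5070 = max_tokens_per_second(672.0, 35.0)   # ~19 tokens/s ceiling
# (a model of this size would not fit into the RTX 5070's 12 GB of VRAM anyway)

print(f"GB10 ceiling:     {gb10:.1f} tokens/s")
print(f"RTX 5070 ceiling: {rtx5070:.1f} tokens/s")
```

The point of the calculation is not the exact numbers but the relation: capacity lets the GB10 load models the consumer card cannot, while bandwidth caps how fast it can run them.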
The installed 128 GB of LPDDR5X memory is extremely generous in terms of capacity, allowing massive LLMs and AI models to be loaded that would not fit on conventional consumer cards, but it proves to be comparatively slow. Depending on the size of the loaded model and the complexity of the context, this bottleneck noticeably limits AI performance. The same becomes apparent with text-to-image models like SDXL: in various tests with ComfyUI and JupyterLab, we achieved about three iterations per second (it/s) in image generation, whereas well-equipped PCs with an RTX 5070 reach around 4.5 it/s in our experience.
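For readers who want to reproduce this kind of measurement themselves, a minimal sketch using Hugging Face diffusers could look like the following. It is not the exact ComfyUI/JupyterLab setup from our test; the model name, step count, and the crude timing approach are our own choices for illustration:

```python
# Minimal SDXL throughput sketch with Hugging Face diffusers.
# Reports an approximate average of denoising iterations per second.
import time
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

steps = 30
start = time.perf_counter()
pipe("a photo of a small black mini pc on a desk", num_inference_steps=steps)
elapsed = time.perf_counter() - start

# Crude figure: total steps divided by wall-clock time (includes VAE decode).
print(f"~{steps / elapsed:.2f} it/s over {elapsed:.1f} s")
```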
The ARM cores of the Grace module deliver excellent multi-core performance, which is ideally suited for parallelized tasks. However, in applications that depend heavily on single-core performance, the lower raw performance of the individual cores compared to current x86 high-end CPUs becomes noticeable.
Practical Use – From Expensive Toy to AI Supercomputer
The HP ZGX Nano G1n AI Station positions itself as a developer platform that is far more than just an expensive toy. Its compatibility with the Nvidia DGX platform is the deciding factor: everything developed or tested on the small station can be scaled seamlessly to huge AI servers. This makes the device excellent for prototyping, model fine-tuning, edge applications, and data science, though less so for pure, large-scale productive inference.
Nvidia provides various "Blueprints" for different use cases, but practical application reveals some pitfalls. In our test, not all templates worked right away; some instructions were outdated and simply unusable due to newer software versions. For instance, "Multi-modal Inference" could not be installed at all in our test.
Load times for large AI models can also be unpleasantly long. It took up to three minutes for the system to be ready when starting the GPT-OSS:120B model. Once the model is loaded, however, prompt processing is impressively fast, as long as the context limit is not exceeded. In our test, we had the model count from 1 to 1000, with the numbers written out in full. Initially, we achieved an impressive 40 to 55 tokens per second. However, since the context memory fills up quickly during counting, performance collapsed drastically around the number five hundred, falling below the usable threshold of 5 tokens per second.
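Our throughput figures come from the statistics the runtime reports itself. As a minimal sketch, such a measurement could be read out via the local Ollama HTTP API; the use of Ollama, the model tag gpt-oss:120b, and the short counting prompt are assumptions for illustration, not a description of our exact setup:

```python
# Sketch: read prompt-processing and generation speed from Ollama's
# /api/generate response (assumes a local Ollama server with the model pulled).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gpt-oss:120b",
        "prompt": "Count from 1 to 50, writing the numbers out in full.",
        "stream": False,
    },
    timeout=600,
)
data = resp.json()

# Ollama reports durations in nanoseconds.
prompt_tps = data["prompt_eval_count"] / (data["prompt_eval_duration"] / 1e9)
gen_tps = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"prompt processing: {prompt_tps:.1f} tokens/s")
print(f"generation:        {gen_tps:.1f} tokens/s")
```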
In summary, the DGX Spark concept is more reminiscent of a minivan than a sports car: There is plenty of room for large models, but no absolute top speed. This is practical in many development scenarios, but not necessarily the best solution for high-performance, productive applications. However, it is worth keeping things in perspective. If every second does not count during inference, the HP ZGX Nano G1n AI Station and other DGX Spark alternatives can prove to be a significantly more cost-effective solution. One could also imagine using a DGX Spark in small offices. A medium-sized language model could serve 10 to 20 employees simultaneously here without causing unpleasant delays.
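Whether such a shared-office scenario holds up can be checked with a simple load sketch. The following assumes a local OpenAI-compatible endpoint (Ollama, for example, exposes one under /v1) and uses a purely illustrative medium-sized model name; it simply fires a handful of simultaneous chat requests and reports the worst-case latency:

```python
# Sketch: simulate several office users querying a local,
# OpenAI-compatible endpoint at the same time (model name is illustrative).
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://localhost:11434/v1/chat/completions"
MODEL = "qwen2.5:14b"  # illustrative medium-sized model

def one_user(i: int) -> float:
    """Send one chat request and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    requests.post(URL, json={
        "model": MODEL,
        "messages": [{"role": "user", "content": f"Summarize ticket #{i} in two sentences."}],
    }, timeout=300)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=15) as pool:
    latencies = list(pool.map(one_user, range(15)))

print(f"slowest of 15 simultaneous requests: {max(latencies):.1f} s")
```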
An NVMe SSD is installed to store the large models. Our review unit comes equipped with a 1 TB PCIe Gen4 SSD, which we quickly filled completely with various language models, image-generation models, and other applications. While the storage capacity might be perfectly adequate for many use cases, we had to juggle data and AI models during our testing. However, the surcharge for the 4 TB SSD is 800 euros, so the investment should be considered carefully. Even though the SSD can be swapped later, it is worth taking a look at the market first: we currently only found one SSD in the appropriate format, Corsair's MP700 MICRO 4 TB PCIe 5.0.
Emissions and Energy – Power-hungry even in idle
HP includes a powerful 240-watt USB-C power supply with the AI Station, which seems necessary given the device's appetite for energy. Under full GPU utilization, power consumption climbs to 206 watts in our measurements. For typical, sustained workloads like LLM inference, consumption levels off at around 160 watts. The energy requirement in idle mode, however, is a point of criticism: even without any computational load, the system constantly draws between 30 and 50 watts from the outlet, a level that even many high-end gaming laptops do not reach when idling.
The device is audible at all times, but the noise stays within acceptable limits: we measured a continuous fan noise of 30 dB(A) at idle and 40 dB(A) under full load during our stress test. Thermally, HP has the heat dissipation under control, although the small chassis does get noticeably warm, with surface temperatures of around 50 °C. This is where the choice of materials proves advantageous, as the plastic chassis can still be handled without issue even at these temperatures and does not feel unpleasantly hot to the touch.
Transparency
The selection of devices to be reviewed is made by our editorial team. The test sample was provided to the author as a loan by the manufacturer or retailer for the purpose of this review. The lender had no influence on this review, nor did the manufacturer receive a copy of this review before publication. There was no obligation to publish this review. As an independent media company, Notebookcheck is not subject to the authority of manufacturers, retailers or publishers.
This is how Notebookcheck tests
Every year, Notebookcheck independently reviews hundreds of laptops and smartphones using standardized procedures to ensure that all results are comparable. We have continuously developed our test methods for around 20 years and set industry standards in the process. In our test labs, high-quality measuring equipment is utilized by experienced technicians and editors. These tests involve a multi-stage validation process. Our complex rating system is based on hundreds of well-founded measurements and benchmarks, which maintains objectivity. Further information on our test methods can be found here.