We recently tested the Acer Swift Go 16 AI with 16 GB of RAM and an AMD Ryzen 7 350. The "AI" in its name suggests that this laptop is ready for AI applications; our review, however, suggests that the memory may be rather undersized for this purpose. Nevertheless, language models, image generators, and video tools are becoming increasingly powerful while requiring less and less RAM. On paper, some interesting AI models should fit into the laptop's memory. We put them to the test.
The latest version of Windows 11 already includes several AI tools, some of which run locally and others on Microsoft servers. Unfortunately, it's not always clear where Copilot and the rest get their computing power from. The Copilot app always requires an active internet connection. Certain functions in the Photos app work locally without an internet connection, while others require a Microsoft account and don't work offline. Improvements to image and audio quality from webcam recordings are handled locally by the NPU (Neural Processing Unit). Microsoft's Recall feature, which sparked some controversy, has also been reintroduced; it remains disabled by default and can only be used when device encryption and password protection are enabled. Many other AI features in Windows aren't particularly noticeable or only work with Microsoft Office.
To run alternative language models and image generation tools locally, you need dedicated software. Fortunately, getting these programs running on a home computer is no longer nearly as cumbersome as it was just a year ago. Simple software now exists for both tasks; it installs quickly and opens up many possibilities. Amuse offers AI image generation and editing, specifically tailored to the AMD hardware in the Acer Swift Go 16 AI. LM Studio provides a convenient way to use SLMs (small language models), essentially AIs for text generation. FastFlowLM isn't quite as convenient to use, but it runs more energy-efficiently on the NPU. We installed all three programs on our test device. However, the Acer Swift Go 16 AI we used is unfortunately not optimally equipped with just 16 GB of memory. Windows and the pre-installed Acer programs quickly consume six to nine gigabytes of it, leaving at most around 10 GB available for AI. Amuse determines that around 8 GB of memory is usable. LM Studio estimates 15 GB, but crashes whenever we try to load a sufficiently large language model.
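The memory budget is simple arithmetic, but worth making explicit; a minimal sketch (the 6-9 GB range is the observation from our test device, everything else follows from it):

```python
TOTAL_RAM_GB = 16
OS_USAGE_GB = (6, 9)  # Windows plus pre-installed Acer programs, as observed

# Best and worst case memory left over for AI workloads
free_best = TOTAL_RAM_GB - OS_USAGE_GB[0]   # 10 GB
free_worst = TOTAL_RAM_GB - OS_USAGE_GB[1]  # 7 GB
print(f"Free for AI workloads: {free_worst}-{free_best} GB")
```

This range explains why Amuse's estimate of about 8 GB usable is plausible, while LM Studio's 15 GB estimate cannot hold in practice.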
Amuse – AI Image Editing and Generation
We ran the latest version of Amuse available at the time of testing (3.1). The software is free and installs easily. It supports the AMD processor's NPU, which should noticeably speed up AI workloads even without a dedicated GPU. In addition to text-to-image generation, it also offers image-to-video and image-to-image functionality.
To actually generate images, you must first install Amuse and then download the appropriate models. This is done from within the program and is relatively simple, but time-consuming, because each model comprises several gigabytes of data. Of the three presets Amuse offers in beginner mode, the "Balanced" option did not work for us. The "Fast" and "Quality" modes did, although only the fast mode ran truly without problems. "Fast" mode relies on the small Dreamshaper LCM Turbo, which fits well within the laptop's limited memory and operates very quickly: images are generated in seconds. In expert mode, Stable Diffusion XL Turbo and Stable Diffusion 3 also run, and the latter can even be offloaded to the NPU. Both models, however, fully utilize the laptop's memory and do not run quickly. The memory load is particularly problematic: once Amuse is running, nothing else on the laptop runs smoothly. In severe cases, Amuse even crashes if a background browser window demands too much memory. Only the small Dreamshaper LCM is enjoyable to use. It produces relatively good results, but its Stable Diffusion 1.5 roots are clearly visible: if you don't pay close attention to the prompts, you'll see a lot of fuzzy edges and strange proportions. Dreamshaper LCM is, however, incredibly fast and can create images in the blink of an eye. Here, quantity is prioritized over quality; often an image is already reasonably passable, and its quality can then be improved further in Amuse with image-to-image editing.
LM Studio – Chatbots, Reasoning Models, and More
LM Studio is a tool for managing and running language models. ChatGPT derivatives, Llama, Qwen3, and others can run locally on your laptop. As with Amuse, the limiting factor on the Acer Swift Go 16 AI is the meager memory rather than processing power. OpenAI's new gpt-oss 20B crashes while loading due to insufficient memory. From experience, we can confirm that the model runs flawlessly on laptops with an AMD Ryzen 9 370 and 32 GB of memory.
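A rough back-of-the-envelope calculation shows why the 20B model fails to load. This is a sketch under assumed values (4-bit quantization, roughly 20 % runtime overhead), not measured data:

```python
def model_ram_gb(params_billions, bits_per_param, overhead=1.2):
    """Rough RAM footprint of a quantized model: weight size plus an
    assumed ~20% for KV cache, activations, and runtime buffers."""
    weights_gb = params_billions * bits_per_param / 8  # 1B params at 8 bit ~ 1 GB
    return weights_gb * overhead

# gpt-oss 20B at an assumed 4-bit quantization:
needed_gb = model_ram_gb(20, 4)  # 10 GB of weights -> ~12 GB with overhead
available_gb = 10                # roughly what Windows leaves free on this laptop
print(f"needed ~{needed_gb:.0f} GB, available ~{available_gb} GB, "
      f"fits: {needed_gb <= available_gb}")
```

Even under these optimistic assumptions the model does not fit into the memory Windows leaves free, while a 32 GB machine has comfortable headroom.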
Qwen3 VL 8B, Qwen3 4B Thinking, and IBM's Granite 4 H Tiny (Q4_K_M and Q8_0) ran surprisingly well on the Acer laptop. IBM's smallest language model is remarkably fast and delivers perfectly adequate answers. Qwen3 VL 8B impresses with a very natural tone and shows particular strengths in image processing; for people with impaired eyesight, such an AI could be a great assistant on a laptop. The reasoning model Qwen3 4B Thinking was the slowest in our tests for purely textual answers, but those answers are, above all, impressive. Depending on the complexity of the question, it can take up to five minutes to receive an answer from Qwen3. You can, of course, watch the thinking process as it unfolds and intervene immediately if the answer heads in completely the wrong direction.
Many other smaller language models should also run well on our test device. However, each language model consumes a significant amount of SSD space: expect 3 to 7 GB per model. The small SLMs (small language models) turn out to be surprisingly versatile.
AI on the Go – FastFlowLM with Full AMD NPU Support
Those using LM Studio on the go will sadly have to reckon with a rapidly depleted battery. LM Studio draws on the CPU and GPU for all calculations; the energy-efficient NPU in our Ryzen 7 350 is left untouched. FastFlowLM, by contrast, lets you take advantage of the AMD chip's NPU, with significantly lower power consumption, and the LLMs do not run noticeably slower on it. The quality of the output remains unchanged. However, FastFlowLM is somewhat harder to use than LM Studio, as it doesn't come with its own graphical user interface. A GUI for FastFlowLM can be obtained via Open WebUI; otherwise it is operated via Windows PowerShell, which can be an experience of its own.
For comparison, we measured the laptop's energy consumption while running Gemma3:4b with LM Studio and with FastFlowLM. With FastFlowLM, the laptop's maximum power draw was approximately 25 watts. Power consumption was significantly higher with the same model in LM Studio, where we measured around 65 watts. In both cases the output rate remained well above 10 tokens per second. Depending on how you calculate it, Gemma3:4b at 10 tok/s achieves about 250 to 600 words per minute. As a rule of thumb, 1.8 tokens correspond to one German word, 1.3 tokens to one English word. Analyzing the German texts Gemma generated, we arrived at approximately 450 words per minute, which is far faster than the typical reader.
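The rule-of-thumb conversion, and the resulting energy cost per token, can be sketched in a few lines (the 10 tok/s rate is a rounded lower bound taken from the measurements above, not an exact figure):

```python
def words_per_minute(tokens_per_second, tokens_per_word):
    """Convert a model's token rate into an approximate writing speed."""
    return tokens_per_second / tokens_per_word * 60

def joules_per_token(watts, tokens_per_second):
    """Energy cost of one generated token at a given power draw."""
    return watts / tokens_per_second

# Rule of thumb: ~1.8 tokens per German word, ~1.3 per English word.
print(round(words_per_minute(10, 1.8)))  # ~333 German words/min at 10 tok/s
print(round(words_per_minute(10, 1.3)))  # ~462 English words/min at 10 tok/s

# Assuming ~10 tok/s in both setups (the measured rates were above this):
print(joules_per_token(25, 10))  # FastFlowLM on the NPU: 2.5 J per token
print(joules_per_token(65, 10))  # LM Studio on CPU/GPU: 6.5 J per token
```

By this estimate, each token costs roughly two and a half times as much energy on the CPU/GPU path as on the NPU, which matches the battery-life difference we observed.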
Conclusion – Local AI is possible but not always usable
The Windows AI functions run flawlessly on the Acer Swift Go 16 AI. Surprisingly, answers from SLMs like Gemma3, Qwen3, and Granite 4 appear on screen quickly, though with quality that varies by model and task. Anyone who definitely wants to use text-to-image models and the other features of Amuse should probably opt for a laptop with more memory; the Acer Swift Go 16 AI is also available with 32 GB of RAM.
Ultimately, local AI is still in its infancy. Software that is both easy to use and exploits the capabilities of the AMD NPU does not yet exist. That's regrettable, because the performance we achieved with some tinkering was genuinely impressive. Even Amuse shows promising potential on the laptop, but for our taste it generates too many error messages and is sometimes quite slow.
Above all, our tests show: if you want to use AI functions effectively, you need a lot of fast RAM. The best AI is worthless if every other program, like Word and the browser, freezes while AI features are in use.