Opera One users in the developer stream are getting a big upgrade. The latest update lets users download and run LLMs locally. The selection currently spans more than 50 model families, with over 150 different models available to download and run on their computers.
Some of the notable models include Gemma from Google, LLaMA from Meta, Mixtral from Mistral AI, and Vicuna from LMSYS. The new feature is rolling out as part of the AI Feature Drops Program, which means only users running the developer version of Opera One can test it out.
With this, Opera becomes the first web browser to let its users download and use local LLMs. For those wondering, running models locally typically offers full control, reduced latency, and, most importantly, better privacy. Your computer does need to meet some requirements, though.
For example, the company says each variant of a local LLM needs more than 2 GB of storage. To run the models, Opera uses the open-source Ollama framework, which means all of the models currently available in the browser are a subset of Ollama's model library. The good news is that Opera plans to add models from other sources as well.
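Opera hasn't publicly documented how its integration works under the hood, but Ollama itself exposes a local HTTP API (by default on port 11434) that any client can use to query a downloaded model. As a rough illustration of what running a model through Ollama looks like outside the browser, here is a minimal Python sketch; the model name and prompt are placeholder assumptions, and a model such as `gemma:2b` would need to be pulled beforehand.

```python
# Minimal sketch of querying a locally running Ollama model.
# Assumes the Ollama server is running (default: http://localhost:11434)
# and that a model like "gemma:2b" has already been downloaded.
# This shows Ollama's standard local API, not Opera's internal integration.

import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "gemma:2b") -> str:
    """Send a prompt to a local Ollama model and return the response text."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete JSON response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["response"]

if __name__ == "__main__":
    print(ask_local_model("In one sentence, why run an LLM locally?"))
```

Because everything here runs against localhost, no prompt or response ever leaves the machine, which is exactly the privacy benefit mentioned above.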
Not every user will benefit from this update, but it's good news for those who want to test different LLMs locally on their machines. If you'd rather not download models, you can still use Aria, Opera's built-in AI assistant, which has been available in the regular version of the browser since May of last year.