Opera, the company behind the personalised Opera One browser, has launched its Opera One Developer browser with support for local large language models (LLMs), allowing users to process prompts directly on their devices without sending data to a server.
Experimental support has been added for around 150 local LLM variants from roughly 50 model families, including Llama from Meta, Vicuna, Gemma from Google, and Mixtral from Mistral AI.
These local LLMs are available in the developer stream of Opera One, allowing users to select the model they prefer for processing their input.
Each local LLM variant requires 2-10 GB of local storage space and may run slower than server-based models depending on hardware capabilities.
Opera is also exploring future use cases in which the browser could draw on AI features informed by a user's historical input while keeping all of that data on the user's device.
Users can download and run these LLMs locally, with the feature rolling out first to Opera One users in the developer stream.
Opera uses the open-source Ollama framework to run these models on users' computers, with plans to add models from other sources in the future.
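To illustrate what on-device processing looks like in practice, the sketch below queries a locally running Ollama server over its REST API. It assumes Ollama's default port (11434) and uses "gemma" as an illustrative model name that has already been downloaded; the details of Opera's own integration may differ.

```python
import json
import urllib.request

# Minimal sketch: send a prompt to a locally running Ollama server.
# Assumes the server is on its default port (11434) and that a model
# such as "gemma" has already been pulled, e.g. with `ollama pull gemma`.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "gemma",   # illustrative; any locally installed model name works
    "prompt": "Summarise the benefits of running LLMs locally.",
    "stream": False,    # return one complete response instead of a token stream
}).encode("utf-8")

request = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

# The prompt never leaves the machine: generation happens on local hardware.
with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())

print(result["response"])
```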
Because each variant takes up at least 2GB of space on the local system, Opera advises users to manage their storage space accordingly.
Opera has been experimenting with AI-powered features since last year, including the introduction of its Aria assistant and plans for an AI-powered iOS browser built on its own engine.