How to run an LLM on your PC, not in the cloud, in less than 10 minutes

Cut through the hype, keep your data private, find out what all the fuss is about

Hands on With all the talk of massive machine-learning training clusters and AI PCs, you’d be forgiven for thinking you need some kind of special hardware to play with text-and-code-generating large language models (LLMs) at home.

In reality, there’s a good chance the desktop system you’re reading this on is more than capable of running a wide range of LLMs, including chatbots like Mistral and source code generators like Code Llama.

In fact, with openly available tools like Ollama, LM Studio, and Llama.cpp, it’s relatively easy to get these models running on your system.

In the interest of simplicity and cross-platform compatibility, we’re going to be looking at Ollama, which, once installed, works more or less the same across Windows, Linux, and macOS.

A word on performance, compatibility, and AMD GPU support:

In general, large language models like Mistral or Llama 2 run best with dedicated accelerators. There’s a reason datacenter operators are buying and deploying GPUs in clusters of 10,000 or more, though you'll need the merest fraction of such resources.

Ollama offers native support for Nvidia GPUs and Apple’s M-series silicon. Nvidia cards with at least 4GB of memory should work; we tested with a 12GB RTX 3060. For M-series Macs, we recommend at least 16GB of memory.
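If you’re not sure how much memory your Nvidia card has, the nvidia-smi utility that ships with the driver will tell you — it reports the total and used vRAM for each GPU it finds:

nvidia-smi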

Linux users will want Nvidia’s latest proprietary driver and probably the CUDA binaries installed first. There's more information on setting that up here.

If you’re rocking a Radeon 7000-series GPU or newer, AMD has a full guide on getting an LLM running on your system, which you can find here.

The good news is, if you don’t have a supported graphics card, Ollama will still run on an AVX2-compatible CPU, although a whole lot slower than if you had a supported GPU. And while 16GB of memory is recommended, you may be able to get by with less by opting for a quantized model — more on that in a minute.

Update on June 6, 2024: Since this story was published, Ollama has begun rolling out native support for select AMD Radeon 6000- and 7000-series cards. You can find a list of supported AMD cards here.

Installing Ollama

Installing Ollama is pretty straightforward, regardless of your base operating system. It’s open source, which you can check out here.

For those running Windows or macOS, head over to ollama.com and download and install it like any other application.

For those running Linux, it’s even simpler: Just run this one-liner (you can find manual installation instructions here, if you want them) and you’re off to the races.

curl -fsSL https://ollama.com/install.sh | sh
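Assuming the script completes without errors, you can confirm the CLI is installed and on your path by asking it for its version number:

ollama --version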

Installing your first model

Regardless of your operating system, working with Ollama is largely the same. Ollama recommends starting with Llama 2 7B, a seven-billion-parameter transformer-based neural network, but for this guide we’ll be taking a look at Mistral 7B, since it’s pretty capable and has been the source of some controversy in recent weeks.

Start by opening PowerShell or a terminal emulator and executing the following command to download and start the model in an interactive chat mode.

ollama run mistral

Upon download, you’ll be dropped into a chat prompt where you can start interacting with the model, just like ChatGPT, Copilot, or Google Gemini.
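A session looks something like this; the question is ours, the reply shown is illustrative and will vary from run to run, and typing /bye ends the chat:

>>> Why is the sky blue?
The sky appears blue because molecules in the atmosphere scatter shorter, bluer wavelengths of sunlight more strongly than longer, redder ones.

>>> /bye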

LLMs, like Mistral 7B, run surprisingly well on this 2-year-old M1 Max MacBook Pro

If you don’t get anything back, you may need to launch Ollama from the Start menu on Windows or the Applications folder on macOS first.
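On Linux, the install script normally sets Ollama up as a background service. If the chat prompt complains it can’t reach the server, you can also start that server by hand in a separate terminal:

ollama serve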

Models, tags, and quantization

Mistral 7B is just one of several LLMs, including other versions of the model, that are accessible using Ollama. You can find the full list, along with instructions for running each, here, but the general syntax goes something like this:

ollama run model-name:model-tag

Model tags specify which version of the model you’d like to download. If you leave the tag off, Ollama assumes you want the latest version. In our experience, this tends to be a 4-bit quantized version of the model.

If, for example, you wanted to run Meta’s Llama 2 7B at FP16, it’d look like this:

ollama run llama2:7b-chat-fp16

But before you try that, you might want to double-check your system has enough memory. Our previous example with Mistral used 4-bit quantization, which means the model needs roughly half a gigabyte of memory for every billion parameters. And don’t forget: it has seven billion of them.

Quantization is a technique used to compress the model by converting its weights and activations to a lower precision. This allows Mistral 7B to run within 4GB of GPU or system RAM, usually with minimal sacrifice in quality of the output, though your mileage may vary.

The Llama 2 7B example used above runs at half precision (FP16). As a result, you’d actually need 2GB of memory per billion parameters, which in this case works out to just over 14GB. Unless you’ve got a newer GPU with 16GB or more of vRAM, you may not have enough resources to run the model at that precision.
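As a back-of-the-envelope check using those rules of thumb:

7 billion parameters × 0.5GB per billion (4-bit)  ≈ 3.5GB
7 billion parameters × 2GB per billion (FP16)     ≈ 14GB, plus a little overhead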

Managing Ollama

If you’ve ever used something like the Docker CLI, managing, updating, and removing installed models with Ollama will feel right at home.

In this section we’ll go over a few of the more common tasks you might want to execute.

To get a list of installed models run:

ollama list

To remove a model, you’d run:

ollama rm model-name:model-tag

To pull or update an existing model, run:

ollama pull model-name:model-tag
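So, for example, to grab the latest build of the Mistral model we downloaded earlier, you’d run:

ollama pull mistral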

Additional Ollama commands can be found by running:

ollama --help

As we noted earlier, Ollama is just one of many frameworks for running and testing local LLMs. If you run into trouble with this one, you may find more luck with others. And no, an AI did not write this.

The Register aims to bring you more on utilizing LLMs in the near future, so be sure to share your burning AI PC questions in the comments section. And don't forget about supply chain security. ®
