Run DeepSeek Locally with Ollama in Under 5 Minutes

Installing DeepSeek locally is as quick and easy as it gets.

Step 1 is to install Ollama, which you can find at https://ollama.com/
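As a sketch of what this looks like on a Mac, assuming you use Homebrew (the official installer from ollama.com works just as well):

    # install the Ollama CLI and server via Homebrew
    brew install ollama

    # confirm the CLI is on your path
    ollama --version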

On the site, you can explore the models available to pull with Ollama.

If you open the deepseek-r1 page, you will see a list of all the available sizes. You will likely need to pick one of the smaller ones; in my case I went with the 7b variant, though I could have gone for the 32b. My Mac does not have the resources for a bigger model.

Step 2: once you have picked your model, open a terminal and run the corresponding command, as shown below.
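For the 7b variant used here, the commands look like this (swap in another tag, such as 32b, if your machine can handle it):

    # download the 7b weights (only needed once)
    ollama pull deepseek-r1:7b

    # start an interactive chat session in the terminal
    ollama run deepseek-r1:7b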

And that is it: you can already chat with the model. Doing so in the terminal is not a great experience, though, so we will use https://chatboxai.app/en to get a nice UI on top.
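Under the hood, a UI like Chatbox simply talks to the local HTTP API that Ollama serves on port 11434. A quick check with curl (the prompt here is just an arbitrary example) confirms the model answers before you wire up a frontend:

    # one-off, non-streaming request against Ollama's local API
    curl http://localhost:11434/api/generate -d '{
      "model": "deepseek-r1:7b",
      "prompt": "Explain what a hash map is in one sentence.",
      "stream": false
    }'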

Step 3 is downloading and installing Chatbox AI.

Click the Use My Own API Key/Local Model option, then select Ollama API, as seen below.

The settings page pops up, and the only thing you need to do here is select the model you previously pulled, which should appear automatically in the drop-down (Chatbox connects to Ollama's default local endpoint, http://127.0.0.1:11434).

And that is it, DeepSeek-R1 is ready for use.

Here is yet another example. Granted, the 7b model does not have the capabilities of the larger ones, but it gives you easy, fast access in a local environment, and it can still be very useful for plenty of tasks.
