
How to set up an LLM locally with Ollama

By Johannes Hayer

What is Ollama?

It's a lightweight framework designed for those who wish to experiment with, customize, and deploy large language models without the hassle of cloud platforms. With Ollama, the power of AI is distilled into a simple, local package, allowing developers and hobbyists alike to explore the vast capabilities of machine learning models.

Setting Up Ollama: A Step-by-Step Approach

First, download Ollama for your OS here: https://ollama.com/download
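
On macOS and Windows this is a regular app installer. On Linux, the same page offers a one-line install script; the command below is the one shown on ollama.com, but double-check it there before piping anything into your shell:

curl -fsSL https://ollama.com/install.sh | sh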

Second, run the model you want with:

ollama run llama2
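
The first run downloads the model weights, which can take a few minutes, and then drops you into an interactive chat right in the terminal. Ollama also serves a local REST API on port 11434, so you can send requests from other tools. A minimal example using the standard /api/generate endpoint (the prompt is just a placeholder):

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'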

Model library

Ollama supports a growing library of models, available at ollama.com/library

Here are some example models that can be downloaded:

Model                Parameters  Size    Download Command
Llama 2              7B          3.8GB   ollama run llama2
Mistral              7B          4.1GB   ollama run mistral
Dolphin Phi          2.7B        1.6GB   ollama run dolphin-phi
Phi-2                2.7B        1.7GB   ollama run phi
Neural Chat          7B          4.1GB   ollama run neural-chat
Starling             7B          4.1GB   ollama run starling-lm
Code Llama           7B          3.8GB   ollama run codellama
Llama 2 Uncensored   7B          3.8GB   ollama run llama2-uncensored
Llama 2 13B          13B         7.3GB   ollama run llama2:13b
Llama 2 70B          70B         39GB    ollama run llama2:70b
Orca Mini            3B          1.9GB   ollama run orca-mini
Vicuna               7B          3.8GB   ollama run vicuna
LLaVA                7B          4.5GB   ollama run llava
Gemma                2B          1.4GB   ollama run gemma:2b
Gemma                7B          4.8GB   ollama run gemma:7b

Memory Requirements: Keep in mind, running these models isn't light on resources. You should have at least 8 GB of RAM to run the 7B models, 16 GB for the 13B models, and 32 GB for the 33B models, to keep your AI running smoothly.
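
The bigger models take up a fair amount of disk space as well, so it's worth keeping track of what you've already pulled and removing what you no longer use. Both commands below are standard Ollama CLI calls; the 70B tag is just an example:

# show the models on disk and their sizes
ollama list

# delete a model you no longer need
ollama rm llama2:70b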

Customization

With Ollama, you're not just running models; you're tailoring them. Import models with ease and customize prompts to fit your specific needs. Fancy a model that responds as Mario? Ollama makes it possible with a few simple commands:

Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the llama2 model:

ollama pull llama2

Create a Modelfile:

FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario

>>> hi
Hello! It's your friend Mario.
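
Your custom model behaves like any other Ollama model: you can chat with it interactively as above, pass it a one-off prompt on the command line, or call it through the local API under the name you gave to ollama create. For example (the prompt is just a placeholder):

ollama run mario "Introduce yourself in one sentence."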

Conclusion

For more information on Ollama and access to additional resources, visit Ollama on GitHub: https://github.com/ollama/ollama


Stay Tuned

Subscribe for development and indie hacking tips!