...
Ollama will not run on a login node. Request an interactive shell session with a GPU from the command line, e.g. for a 30-minute session:

$ interactive -t 30 -G 1 -p htc
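Once the session starts, it can be worth confirming that a GPU was actually allocated; assuming the node has the standard NVIDIA driver utilities installed, nvidia-smi will list it:

$ nvidia-smi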
Load the ollama module (to check the different versions available, run module keyword ollama):

$ module load ollama/0.3.12
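If you are unsure what the module system provides, the usual Environment Modules/Lmod commands can also help; module avail is a common alternative to module keyword, assuming the cluster runs a standard Modules setup:

$ module avail ollama    # list the installed ollama versions
$ module list            # confirm which modules are currently loaded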
Start the Ollama server in the background:

$ ollama-start
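Note that ollama-start is a cluster-provided convenience wrapper, not part of Ollama itself. A minimal sketch of what such a wrapper plausibly does, assuming it simply backgrounds the standard ollama serve command and keeps a log, is:

# Hypothetical equivalent of ollama-start; the real wrapper may also set
# variables such as OLLAMA_HOST, or point OLLAMA_MODELS at scratch storage.
ollama serve > "$HOME/ollama-serve.log" 2>&1 &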
Run the model. You can find a list of available models in the Ollama model library (https://ollama.com/library).
The first time a model is run, Ollama automatically performs an ollama pull and downloads it. If the model has already been downloaded, it is loaded into memory and the chat starts:

$ ollama run llama3.2
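On a time-limited session it can also be useful to pull a model ahead of time, or to pass a one-shot prompt instead of opening the interactive chat; both are standard Ollama CLI commands:

$ ollama pull llama3.2                       # download the model without starting a chat
$ ollama list                                # show models already on disk
$ ollama run llama3.2 "Why is the sky blue?" # answer one prompt, then exit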
To stop the model, type /bye at the prompt:

>>> /bye
To stop the Ollama server:

$ ollama-stop
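To verify that nothing is left running before you release the node, you can check for a leftover server process; pgrep is a generic tool and not specific to this setup:

$ pgrep -af ollama    # should print nothing once the server has stopped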
...