docs/Software/Available_Applications/ollama.md
---
created_at: 2026-05-04
description: How to run ollama on the REANNZ GPUs
tags:
- llm
---

## Starting ollama in a Slurm job

!!! warning
    We don't recommend running ollama this way except for small test jobs,
    as it is a very inefficient use of GPUs.


```sl
#!/bin/bash -e

#SBATCH --account nesi99991
#SBATCH --job-name ollama-test
#SBATCH --time 01:00:00
#SBATCH --mem 10G
#SBATCH --gpus-per-node l4:1

PORT=16000 # please choose your own port number between 1024 and 49151

module purge
module load ollama/{{ applications.Ollama.default }}
export OLLAMA_HOST=${HOSTNAME}:${PORT}
ssh -NfR ${PORT}:${HOSTNAME}:${PORT} ${SLURM_SUBMIT_HOST}

ollama serve
```
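The `PORT` value must not collide with one chosen by another user on the same node. One way to pick it automatically is with the bash `$RANDOM` builtin (a sketch; ollama does not require this, any free unprivileged port works):

```sh
# $RANDOM is a bash builtin yielding 0-32767, so this gives a
# pseudo-random port between 1024 and 33791, inside the allowed
# unprivileged range (1024-49151).
PORT=$(( 1024 + RANDOM ))
echo "using port ${PORT}"
```

Submit the job in the usual way, e.g. `sbatch ollama.sl` (assuming you saved the script as `ollama.sl`).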

Then on the login node run,

```sh
module purge
module load ollama
export OLLAMA_HOST=<nodename>:<port>
ollama list  # or any other ollama client command
```

Where `<nodename>` is the host name of the node running your job (you can find this with `sacct` or `squeue --me`),
and `<port>` is your selected port.
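Both values can also be filled in directly from `squeue`. A sketch, assuming the job was named `ollama-test` and the port was 16000 (both placeholders for whatever you chose in your script):

```sh
# Look up the node allocated to the running job; %R prints the
# node list for a running job.
NODENAME=$(squeue --me --name ollama-test --noheader --format %R)
PORT=16000   # the port you set in the Slurm script
export OLLAMA_HOST="${NODENAME}:${PORT}"

# Quick connectivity check: /api/tags lists the models the server knows about.
curl "http://${OLLAMA_HOST}/api/tags"
```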

!!! tip
    For extra debugging output, set

    ```sh
    export GIN_MODE=debug
    ```

    before starting `ollama serve`.