aturt13

Reputation: 167

How to create an ollama model using docker-compose?

I would like to write a docker-compose file that starts Ollama (as in ollama serve) on port 11434 and creates mymodel from ./Modelfile.

I found a similar question about how to run ollama with docker compose (Run ollama with docker-compose and using gpu), but I could not find out how to create the model afterwards.

I tried to use the following:

version: '3'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_volume:/root/.ollama
    command: ollama create mymodel -f ./Modelfile

volumes:
  ollama_volume:

This fails with unknown command "ollama" for "ollama", so I thought maybe the command-line ollama was not installed and I could use curl with their API instead, but curl does not work either.

I saw some people using bash -c "some command", but bash is apparently also not found.

How could I create the model from within the docker-compose? (If it is possible)
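Note: the ollama/ollama image sets ollama as its entrypoint, so command: ollama create … expands to ollama ollama create … (hence the error), and bash -c … is swallowed the same way. Overriding the entrypoint avoids this; a minimal sketch (untested; sleep 5 is a crude stand-in for a proper readiness check, which the answers below handle with a script):

services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_volume:/root/.ollama
      - ./Modelfile:/Modelfile
    entrypoint: ["/bin/sh", "-c", "ollama serve & sleep 5 && ollama create mymodel -f /Modelfile && wait"]

volumes:
  ollama_volume: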

Upvotes: 6

Views: 7966

Answers (3)

Sayyor Y

Reputation: 1314

@Vishnu Priya Vangipuram's answer runs the model llama3, not your custom model built from ./Modelfile, while @Jinna Baalu's omits the step of loading a model. Here is a setup you can use to run inference with a custom model defined by a Modelfile. The Modelfile and the GGUF file file_name.gguf should both be located in the local directory ./model_files, and compose.yaml should be in the parent directory.

compose.yaml

services:
  ollama:
    image: ollama/ollama:latest
    pull_policy: always
    container_name: ollama
    ports: ["11435:11434"] # will be accessible at http://localhost:11435
    volumes:
      - ./model_files:/model_files  # Mount the directory with the Modelfile and GGUF file
    tty: true
    entrypoint: ["/bin/sh", "/model_files/run_ollama.sh"] # Create and run the model on startup

run_ollama.sh

The script will start Ollama, create the model using the provided Modelfile, and keep the service running.

#!/bin/bash

echo "Starting Ollama server..."
ollama serve &  # Start Ollama in the background

# Wait until the server responds before creating the model
until ollama list >/dev/null 2>&1; do
  sleep 1
done

echo "Ollama is ready, creating the model..."

ollama create finetuned_mistral -f /model_files/Modelfile
ollama run finetuned_mistral  # The interactive session keeps the container running
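If you would rather have the server process itself keep the container alive instead of the interactive ollama run session, a variant of the script (a sketch, not from the original answer) is:

#!/bin/bash
ollama serve &                # start the server in the background
SERVE_PID=$!
until ollama list >/dev/null 2>&1; do sleep 1; done  # wait until the server responds
ollama create finetuned_mistral -f /model_files/Modelfile
wait $SERVE_PID               # keep the container foregrounded on the server process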

Modelfile

The Modelfile specifies the location of the GGUF file.

FROM ./file_name.gguf
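The Modelfile format also accepts optional directives such as PARAMETER and SYSTEM if you want generation settings baked into the model (the values below are illustrative, not part of the answer):

FROM ./file_name.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
SYSTEM "You are a helpful assistant."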

The service can now be run with:

docker-compose -f compose.yaml up --build
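Once the container is up, you can list models and query the new one over Ollama's HTTP API (note the remapped host port 11435):

curl http://localhost:11435/api/tags
curl http://localhost:11435/api/generate -d '{"model": "finetuned_mistral", "prompt": "Hello", "stream": false}'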

Upvotes: 3

Jinna Baalu

Reputation: 7809

Change the host port to 11435 and re-run; it should work. Note that this only starts the Ollama server; creating a model is a separate step (see the exec sketch below the compose file).

version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports: ["11435:11434"] # change the host port to 11435
    volumes:
      - ollama:/root/.ollama
    pull_policy: always
    tty: true
    restart: unless-stopped

volumes:
  ollama:
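This compose file only starts the Ollama server; it does not create a custom model. A one-off create step afterwards could look like this (a sketch, assuming you also mount your Modelfile into the container, e.g. by adding - ./Modelfile:/Modelfile under volumes):

docker compose up -d
docker compose exec ollama ollama create mymodel -f /Modelfile
docker compose exec ollama ollama list  # confirm the model is registered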

A docker-compose.yml with both Ollama and Ollama WebUI is available here: https://github.com/jinnabaalu/infinite-docker-compose/blob/main/ollama/docker-compose.yml

I have also made a video about this on https://www.youtube.com/

Upvotes: 0

Vishnu Priya Vangipuram

This docker compose works for me. I am using a shell entrypoint to run commands; you can add your desired commands to the shell file.

docker-compose file:

services:
  ollama:
    build:
      context: .
      dockerfile: ./Dockerfile.ollama
    image: ollama
    container_name: ollama
    env_file: env
    entrypoint: /tmp/run_ollama.sh
    ports:
      - 11434:11434
    volumes:
      - .:/app/
      - ./ollama/ollama:/root/.ollama
    pull_policy: always
    tty: true
    restart: always
    networks:
      - net

networks:
  net:

Dockerfile for ollama:

FROM ollama/ollama

COPY ./run_ollama.sh /tmp/run_ollama.sh

WORKDIR /tmp

RUN chmod +x run_ollama.sh

EXPOSE 11434

Shell file for running ollama commands:

#!/bin/bash

echo "Starting Ollama server..."
ollama serve &

echo "Waiting for Ollama server to be active..."
while [ "$(ollama list | grep 'NAME')" == "" ]; do
  sleep 1
done

ollama run llama3
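To check from the host that the server is up and the model is available (a quick sanity check, assuming the 11434:11434 mapping above):

curl http://localhost:11434/api/tags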

Hope this helps.

Upvotes: 0
