Step 1: Build the docker-compose file

Step 1.1: Set up repository

Clone the ollama_oyster_setup repository and navigate to it using

git clone https://github.com/marlinprotocol/ollama_oyster_setup && cd ollama_oyster_setup

Step 1.2: Update the docker-compose file

Open docker-compose.yml and update the llama_proxy image to match your system architecture:

  • For amd64 systems:

      llama_proxy:
        image: kalpita888/ollama_amd64:0.0.1

  • For arm64 systems:

      llama_proxy:
        image: kalpita888/ollama_arm64:0.0.1
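If you are unsure which architecture your machine reports, a quick check with `uname -m` can map it to the matching image tag from the list above (this helper script is a sketch, not part of the repository):

```shell
# Print the llama_proxy image that matches this machine's architecture.
# Image tags are the ones listed in this tutorial.
arch="$(uname -m)"
case "$arch" in
  x86_64)          echo "kalpita888/ollama_amd64:0.0.1" ;;
  aarch64 | arm64) echo "kalpita888/ollama_arm64:0.0.1" ;;
  *)               echo "unsupported architecture: $arch" >&2; exit 1 ;;
esac
```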
Info: For this tutorial, we use the llama3.2 model (2 GB). Check the model library to choose among other available models by size and parameter count, and update the model in the docker-compose file accordingly.

  # Ollama model run
  ollama_model:
    image: alpine/ollama:0.10.1
    container_name: ollama_model_llama3.2
    command: pull llama3.2
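Updating the model means changing the tag in the `command` (and, for clarity, the `container_name`) of this service. For example, switching to a hypothetical different tag from the model library would look like this (the tag shown is illustrative, not a recommendation):

```yaml
  # Ollama model run (sketch: same service with a different model tag;
  # replace MODEL_TAG with any entry from the model library)
  ollama_model:
    image: alpine/ollama:0.10.1
    container_name: ollama_model_MODEL_TAG
    command: pull MODEL_TAG
```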

Ensure that the chosen model fits within the instance's memory when deploying the enclave.
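On a Linux instance, a rough pre-flight check of available memory against the model size can catch this early. The 2048 MiB threshold below corresponds to the ~2 GB llama3.2 model used in this tutorial; adjust it for the model you choose (this check is a sketch, not part of the repository):

```shell
# Compare available memory (from /proc/meminfo) against the model size.
required_mib=2048   # ~2 GB for llama3.2; change for other models
avail_mib="$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)"
if [ "$avail_mib" -ge "$required_mib" ]; then
  echo "OK: ${avail_mib} MiB available for a ${required_mib} MiB model"
else
  echo "WARNING: only ${avail_mib} MiB available, need ${required_mib} MiB" >&2
fi
```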