
Deploy llama3.2

This guide walks you through deploying the Llama 3.2 model on Marlin Oyster using the Ollama framework, and interacting with it securely inside a Trusted Execution Environment (TEE).

It covers how to:

  • Build a docker-compose file with Ollama configured to serve llama3.2 (a minimal sketch follows this list)
  • Deploy the enclave image on Oyster
  • Interact with the enclave in a verifiable manner
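
For orientation, a docker-compose file along these lines might look as follows. This is a minimal sketch, assuming the official ollama/ollama image, that exposes the Ollama API and pulls the llama3.2 model at startup; the exact file used to build the Oyster enclave image may differ, so treat it as an illustration rather than the final configuration.

```yaml
# docker-compose.yml — illustrative sketch, assuming the official ollama/ollama image
services:
  ollama:
    image: ollama/ollama:latest        # official Ollama container image
    ports:
      - "11434:11434"                  # Ollama's default HTTP API port
    # Start the server in the background, pull the llama3.2 model once it is up,
    # then keep the container alive by waiting on the server process
    entrypoint: ["/bin/sh", "-c"]
    command: ["ollama serve & sleep 5 && ollama pull llama3.2 && wait"]
```

Once the service is running, the Ollama HTTP API is reachable on port 11434, for example via `curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt": "Hello"}'`; the later steps describe how to deploy this setup on Oyster and interact with it verifiably.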