Ollama Modelfiles: Building Your Own Custom AI Model
What if you want an AI that always acts a certain way, without having to type out long instructions every single time? In Ollama, you can achieve this using a Modelfile.

🧠 What exactly is a Modelfile?
Think of it as a recipe for creating your own version of a model. It is a simple text file that tells Ollama:
  • Which base model to use (e.g., Llama 3, Gemma).
  • What personality or behavior the model should have.
  • What settings (like randomness/temperature) to apply.
Note: It does NOT train a model from scratch—it just configures and customizes an existing one.
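To make those three bullets concrete, here is a minimal sketch of a Modelfile that sets all of them. The `FROM`, `SYSTEM`, and `PARAMETER` directives are real Modelfile syntax; the specific temperature value and wording are just illustrative:

```
# 1. Base model to customize (must already be pulled, or pullable)
FROM llama3.2:latest

# 2. Behavior baked into every conversation
SYSTEM "You are a concise assistant that answers in short bullet points."

# 3. Sampling settings (lower temperature = less random output)
PARAMETER temperature 0.2
```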

🎯 The Task: Create a JSON-Only Custom Model

Let's create a custom model named jsonllm that is hardwired to always return data in JSON format.

📝 Step 1: Create the Modelfile

Create a plain text file on your computer and name it exactly Modelfile (no extension like .txt). Open it in Notepad or VS Code and add the following lines:

# 1. Choose the base model
FROM llama3.2:latest

# 2. Set the custom behavior
SYSTEM "You are a strict data assistant. You must always output your answers in valid, well-structured JSON format. Do not write any conversational text."

🔨 Step 2: Build the Model

Open your terminal, navigate to the folder where you saved your Modelfile, and run the create command:

ollama create jsonllm -f Modelfile

Ollama will read your recipe and build the new model. You should see an output like this:

gathering model components
using existing layer sha256:dde5aa3fc5ffc171...
using existing layer sha256:fcc5a6bec9daf9b5...
creating new layer sha256:015ab265bc06d8af...
writing manifest
success

🔍 Step 3: Verify the Model

Let's check if our new model is ready to use by listing all installed models:

ollama ls
NAME               ID              SIZE      MODIFIED
jsonllm:latest     bfcd14d43a94    2.0 GB    About a minute ago
llama3.2:latest    a80c4f17acd5    2.0 GB    2 months ago
gemma3:4b          a2af6cc3eb7f    3.3 GB    26 hours ago

Success! Your custom jsonllm model is officially installed and ready to be used just like any other model.
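You can also inspect what got baked into the new model. The `ollama show` command with the `--modelfile` flag prints the recipe Ollama stored, including the base model reference and your `SYSTEM` instruction:

```shell
# Print the stored Modelfile for our custom model
ollama show jsonllm --modelfile
```

This is a handy sanity check before wiring the model into any code.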


💻 Step 4: Testing the Custom Model in Python

Now, let's test our creation using the Ollama Python API. Notice that we don't need to pass any system prompts here, because the behavior is already baked into the model itself!

import ollama

# Call our newly created custom model
response = ollama.generate(
    model='jsonllm:latest',
    prompt="Culture of India"
)

print(response['response'])

📌 Expected Output

{
  "country": "India",
  "culture": {
    "languages": ["Hindi", "English", "Bengali", "Telugu", "Marathi", "Tamil", "Urdu", "Gujarati", "Kannada", "Odia", "Malayalam", "Punjabi"],
    "religions": ["Hinduism", "Islam", "Christianity", "Sikhism", "Buddhism", "Jainism"],
    "festivals": ["Diwali", "Holi", "Eid", "Christmas", "Navratri"],
    "cuisine": ["Curry", "Biryani", "Dosa", "Samosa", "Roti"]
  }
}
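Because the model is constrained to emit JSON, its reply can be fed straight into `json.loads` and used as an ordinary Python dict. A minimal sketch (using a hard-coded sample string in place of a live `response['response']`, so it runs without Ollama):

```python
import json

# In practice this string would come from response['response'];
# a shortened sample reply is used here to illustrate parsing.
raw_reply = '{"country": "India", "culture": {"festivals": ["Diwali", "Holi"]}}'

# Parse the JSON text into a Python dict
data = json.loads(raw_reply)

print(data["country"])                  # India
print(data["culture"]["festivals"])     # ['Diwali', 'Holi']
```

If the model ever slips and returns non-JSON text, `json.loads` raises `json.JSONDecodeError`, which makes a convenient place to retry or log the bad reply.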

🚀 Conclusion

Modelfiles are incredibly powerful. Instead of cluttering your Python code with massive system prompts and configurations, you can build specialized models (like a sql-generator, a code-reviewer, or a json-formatter) and share them easily.

To learn about all the parameters you can tweak, check out the Official Ollama Modelfile Documentation.
