Friendly Ollama Chatbot Using LangChain


In this post, we build a simple and friendly chatbot using Ollama and LangChain. The chatbot behaves like a supportive friend and gives clear, detailed answers — while running completely offline.

Important: This chatbot does not remember chat history. Each question is handled independently.


🧠 What This Chatbot Does

  • ✔ Runs a local LLM using Ollama
  • ✔ Responds in a friendly, human-like tone
  • ✔ Explains answers with useful details
  • ❌ Does not store previous messages

🛠 Required Tools & Libraries

Before starting, make sure you have the following:

  • Ollama (installed and running)
  • Python 3.9+
  • LangChain libraries

⬇ Install Required Python Libraries

Install LangChain and its community integrations:

pip install langchain langchain-community

The langchain-community package contains the Ollama integration that lets LangChain talk to locally running models.
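As a quick sanity check (this snippet is illustrative, not part of the chatbot itself), you can confirm both packages are importable before moving on:

```python
import importlib.util

def installed(pkg: str) -> bool:
    """Return True if the given package can be found by the import system."""
    return importlib.util.find_spec(pkg) is not None

# Report whether each required package is importable
for pkg in ("langchain", "langchain_community"):
    print(f"{pkg}: {'OK' if installed(pkg) else 'missing'}")
```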


📥 Download Ollama Model

This chatbot uses the llama3.2 model. Download it using:

ollama pull llama3.2

Make sure Ollama is running before executing the script.
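If you want to check this programmatically, the sketch below pings Ollama's default local endpoint (port 11434) using only the standard library. The URL and timeout here are assumptions based on Ollama's defaults, not something the script above configures:

```python
import urllib.request
import urllib.error

def ollama_is_running(url: str = "http://localhost:11434",
                      timeout: float = 2.0) -> bool:
    """Return True if the Ollama server answers at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: the server is not reachable
        return False

if __name__ == "__main__":
    print("Ollama running:", ollama_is_running())
```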


✨ How We Control Model Behavior

To make the chatbot behave like a friendly assistant, we define an initial instruction. This tells the model how to respond before any user input is processed.

The instruction controls:

  • ✔ Tone (friendly, polite)
  • ✔ Level of detail
  • ✔ Honesty and clarity

🧠 Python Code: Friendly Ollama Chatbot

from langchain_community.llms import Ollama

# Initialize Ollama model
llm = Ollama(model="llama3.2")

# Define how the model should behave
system_instruction = """
You are a friendly, helpful, and intelligent assistant.
Talk like a supportive friend, not like a robot.
Explain things clearly with enough detail, but keep answers easy to understand.
Be polite, positive, and honest.
If a question is unclear, politely ask for clarification.
"""

print("🤖 Friendly Ollama Chatbot (type 'exit' to quit)\\n")

while True:
    user_query = input("You: ")

    if user_query.lower() in ["exit", "quit"]:
        print("Bot: Bye! Take care 😊")
        break

    prompt = f"""
{system_instruction}

Friend says:
{user_query}

Respond as a friendly assistant:
"""

    response = llm.invoke(prompt)
    print(f"Bot: {response}\\n")

📌 How This Chatbot Works

  • system_instruction defines the chatbot’s personality
  • User input is taken inside a loop
  • Each prompt is sent fresh to the model
  • No previous conversation is stored

This makes the chatbot:

  • ⚡ Fast
  • 🧩 Simple
  • 🔐 Privacy-friendly
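The statelessness is easy to see if you trace how each prompt is assembled. The sketch below mirrors the script's prompt format (with a shortened instruction); note that the second prompt carries no trace of the first question:

```python
SYSTEM_INSTRUCTION = "You are a friendly, helpful, and intelligent assistant."

def build_prompt(user_query: str) -> str:
    """Assemble a fresh prompt from the system instruction and the
    current question only -- no earlier turns are included."""
    return (
        f"{SYSTEM_INSTRUCTION}\n\n"
        f"Friend says:\n{user_query}\n\n"
        "Respond as a friendly assistant:"
    )

p1 = build_prompt("What is Python?")
p2 = build_prompt("Who created it?")
# The model never sees the previous turn:
assert "What is Python?" not in p2
```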

🚀 Use Cases

  • ✔ Learning Ollama + LangChain
  • ✔ Friendly offline AI assistant
  • ✔ Testing prompt behavior
  • ✔ Foundation for advanced chatbots

⚠ Limitations

  • ❌ No memory of past messages
  • ❌ No document awareness
  • ❌ No user personalization

These limitations can be addressed later with LangChain memory classes or Retrieval-Augmented Generation (RAG).
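As a taste of what memory could look like, here is a hand-rolled sketch (not LangChain's Memory classes): keep a list of turns and prepend the most recent ones to each prompt. Names like `ChatMemory` and `max_turns` are illustrative choices, not part of the original script:

```python
class ChatMemory:
    """Minimal conversation buffer: stores turns and renders a transcript."""

    def __init__(self, max_turns: int = 5):
        self.max_turns = max_turns
        self.turns = []  # list of (speaker, text) pairs

    def add(self, speaker: str, text: str) -> None:
        self.turns.append((speaker, text))

    def transcript(self) -> str:
        # Only the most recent turns are kept, to bound prompt size
        recent = self.turns[-self.max_turns:]
        return "\n".join(f"{who}: {text}" for who, text in recent)

memory = ChatMemory(max_turns=3)
memory.add("Friend", "What is Python?")
memory.add("Bot", "A popular programming language.")
memory.add("Friend", "Who created it?")
prompt = f"You are a friendly assistant.\n\n{memory.transcript()}\nBot:"
# The follow-up question now arrives with its context attached.
```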


🔚 Conclusion

This friendly Ollama chatbot is a great starting point for building local AI assistants. Once comfortable, you can extend it with memory, documents, or a web UI.

Happy building with local AI 🚀
