💬 Simple Ollama-Based Chatbot Using LangChain
This tutorial explains how to build a basic chatbot using Ollama and LangChain. The chatbot runs entirely on your machine and generates responses with a locally hosted LLM, so no API keys or internet connection are required.
🧠 What This Chatbot Does
- ✔ Sends a text prompt to a local Ollama model
- ✔ Receives an AI-generated response
- ✔ Runs offline without APIs
- ❌ Does not store previous conversations
📦 Required Libraries
To build this chatbot, you only need the following Python libraries:
- langchain
- langchain-community
⬇ How to Install Required Libraries
Install LangChain and the langchain-community package (which provides the Ollama integration) using pip:
```bash
pip install langchain langchain-community
```
Make sure Ollama is already installed on your system and running.
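If you are unsure whether the server is up, one way to check from Python is to hit Ollama's local HTTP endpoint. This is a minimal sketch that assumes a default installation (Ollama listens on port 11434 by default):

```python
import urllib.request
import urllib.error

# Ollama's local HTTP server listens on port 11434 by default
OLLAMA_URL = "http://localhost:11434"

try:
    with urllib.request.urlopen(OLLAMA_URL, timeout=3) as resp:
        # The root endpoint replies with a short status message
        print(resp.read().decode())  # typically "Ollama is running"
except urllib.error.URLError:
    print("Ollama does not appear to be running. Start it with: ollama serve")
```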
📥 Download Required Ollama Model
This chatbot uses the llama3.2 model. Download it using:
```bash
ollama pull llama3.2
```
You can verify the download by starting an interactive session with the model (type /bye to exit):
```bash
ollama run llama3.2
```
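You can also confirm from Python that the model has been pulled: Ollama's REST API lists local models at /api/tags. A minimal sketch, again assuming the default port:

```python
import json
import urllib.request

# /api/tags lists the models available locally (default Ollama port assumed)
with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=3) as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print(models)  # e.g. ['llama3.2:latest', ...]
if any(name.startswith("llama3.2") for name in models):
    print("llama3.2 is ready to use")
else:
    print("llama3.2 not found - run: ollama pull llama3.2")
```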
🧠 Python Code: Simple Ollama Chatbot
Below is the complete Python script:
```python
from langchain_community.llms import Ollama

# Initialize the local Ollama model
llm = Ollama(model="llama3.2")

# User query (no memory: the model sees only this one prompt)
query = '''
hello how are you
can you please tell me what is ollama
'''

# Send the prompt and print the model's response
print(llm.invoke(query))
```
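Save the script as, say, chatbot.py and run it with python chatbot.py. The model's reply is printed to the terminal; the first call can take a few seconds while the model is loaded into memory.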
📌 How This Code Works
- Ollama(model="llama3.2") → connects to the local LLM
- query → the user's input prompt
- llm.invoke() → sends the prompt and returns the response
Each execution is independent, meaning the model does not remember anything from previous prompts: every call to llm.invoke() starts from a clean slate.
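To see this statelessness in practice, you can wrap the same call in a small interactive loop. This is a minimal sketch, not part of the original script; ask a follow-up question and the model will not remember your previous turn:

```python
from langchain_community.llms import Ollama

llm = Ollama(model="llama3.2")

# Each llm.invoke() call is a fresh, independent request:
# nothing from earlier turns is carried over.
while True:
    query = input("You: ")
    if query.lower() in {"exit", "quit"}:
        break
    print("Bot:", llm.invoke(query))
```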
🚀 Use Cases
- ✔ Learning Ollama + LangChain basics
- ✔ Simple AI response generators
- ✔ Offline chatbot experiments
- ✔ Testing LLM outputs locally
🔧 Limitations
- ❌ No conversation history
- ❌ No document awareness
- ❌ No memory or context retention
These limitations can be addressed using LangChain Memory or RAG pipelines, which will be covered in advanced projects; a minimal hand-rolled preview of the memory idea is sketched below.
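You do not strictly need a LangChain memory class to give the model context: simply keeping the running conversation in a Python list and prepending it to each prompt already works. This extends the interactive loop above and is a hand-rolled sketch, not the LangChain Memory API itself:

```python
from langchain_community.llms import Ollama

llm = Ollama(model="llama3.2")
history = []  # "User: ..." / "Bot: ..." lines accumulated over the session

while True:
    query = input("You: ")
    if query.lower() in {"exit", "quit"}:
        break
    # Prepend the accumulated transcript so the model sees earlier turns
    prompt = "\n".join(history + [f"User: {query}", "Bot:"])
    answer = llm.invoke(prompt)
    print("Bot:", answer)
    history.extend([f"User: {query}", f"Bot: {answer}"])
```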
🔚 Conclusion
This simple Ollama chatbot is the foundation for more advanced AI systems. Once you understand this, you can extend it with:
- 🧠 Chat memory
- 📄 Document-based Q&A
- 🌐 Web or API interfaces
Happy building with local AI 🚀
