LangChain Practical Implementation
In this part of the LangChain Mastery Series, we walk through a real-world AI chat application powered by LangChain. You'll get:
- Complete folder structure
- Code for memory, agents, tools, vector DBs, chains, and prompt templates
- Best practices and gotchas to avoid
🏗️ Project Structure Overview
langchain-chat-app/
├── main.py
├── tools/
│   ├── search_tool.py
│   └── calculator_tool.py
├── prompts/
│   └── system_prompt.txt
├── loaders/
│   └── doc_loader.py
├── memory/
│   └── user_memory.py
├── chains/
│   └── qa_chain.py
├── agent/
│   └── agent_executor.py
├── vectorstore/
│   └── vector_db.py
└── utils/
    └── output_parser.py
Each component is modular, readable, and purpose-driven.
🚀 Highlights
Load environment variables
Set up your API key from OpenAI first:
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
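Hardcoding the key works for a quick demo, but a safer pattern is to keep it in a local .env file and load it at startup. A minimal sketch, assuming the python-dotenv package is installed:

import os
from dotenv import load_dotenv  # pip install python-dotenv

# Reads OPENAI_API_KEY (and anything else) from a local .env file
load_dotenv()
assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY in your .env file"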
1. main.py
The entry point of your chat application. It initializes the LLM, memory, tools, and the agent.
from langchain.agents import initialize_agent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.agents.agent_types import AgentType
from tools.search_tool import search_tool
from tools.calculator_tool import calculator_tool
from langchain.prompts import SystemMessagePromptTemplate, HumanMessagePromptTemplate, ChatPromptTemplate
from langchain.chains import LLMChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
import os
# Set the OpenAI API key (placeholder shown here; prefer loading it from the environment as shown above)
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
# Setup memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
# Load documents into vector DB
loader = TextLoader("docs/sample.txt")
docs = loader.load()
db = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = db.as_retriever()
# Setup tools
tools = [search_tool, calculator_tool]
# Setup prompt (system_prompt.txt lives under prompts/ in the project tree)
system_template = open("prompts/system_prompt.txt").read()
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{input}"),
])
# Note: initialize_agent builds its own ReAct prompt internally;
# this ChatPromptTemplate is for direct LLMChain-style calls (section 5).
# LLM setup
llm = ChatOpenAI(temperature=0, model="gpt-4")
# Agent setup
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
)
# Run loop
if __name__ == "__main__":
    print("LangChain Practical Chat App")
    while True:
        user_input = input("You: ")
        if user_input.lower() in ["exit", "quit"]:
            break
        response = agent.run(user_input)
        print("Assistant:", response)
2. Tools
- Search Tool simulates a Google search for external data.
- Calculator Tool handles math operations.
def run_search(query: str) -> str:
    # Stub: swap in a real search API call for production use
    return "Pretend this came from the web: " + query

def calculate(expression: str) -> str:
    # WARNING: eval on raw input is unsafe; sanitize or use a proper math parser
    return str(eval(expression))
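Because main.py imports search_tool and calculator_tool as ready-made Tool objects, each module also wraps its function. A minimal sketch of that wrapping (the names and descriptions here are illustrative):

from langchain.agents import Tool

search_tool = Tool(
    name="simulated_search",
    func=run_search,
    description="Searches the web for factual information. Input is a search query.",
)

calculator_tool = Tool(
    name="calculator",
    func=calculate,
    description="Evaluates a math expression such as '2 + 2'.",
)

The description strings matter: the ReAct agent reads them to decide which tool to call.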
3. Prompt Template
Defined in prompts/system_prompt.txt and loaded into a ChatPromptTemplate to guide LLM behavior.
You are a helpful assistant capable of answering user queries using tools:
1. calculator - operations like add/subtract/multiply/divide two numbers
2. simulated search - search Google and return the result
If a user asks for factual or numerical information, use the appropriate tool.
Respond strictly in the JSON format below:
{
    "response": "YOUR_RESPONSE"
}
4. Memory
Implements ConversationBufferMemory to track past user interactions.
Import statement:
from langchain.memory import ConversationBufferMemory
Creating the memory instance:
# Setup memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
Memory usage:
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
)
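To verify what the buffer accumulates between turns, you can dump it directly; load_memory_variables is the standard accessor (the printout format here is just for illustration):

# Inspect the conversation history the agent will see on its next turn
history = memory.load_memory_variables({})["chat_history"]
for message in history:
    print(type(message).__name__, ":", message.content)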
5. Chains
Includes LLMChain for direct QA and RetrievalQA for vector-based retrieval.
from langchain.chains import LLMChain
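A sketch of both chain types, reusing the llm, prompt, and retriever built in main.py; the queries are only examples:

from langchain.chains import LLMChain, RetrievalQA

# Direct QA: prompt + LLM, no retrieval
qa_chain = LLMChain(llm=llm, prompt=prompt)
print(qa_chain.run(input="Summarize what you can do."))

# Vector-based retrieval QA over the FAISS retriever
retrieval_qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",  # stuff all retrieved chunks into a single prompt
    retriever=retriever,
)
print(retrieval_qa.run("What does sample.txt cover?"))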
6. Agent
Wraps tools, chains, and memory into a reasoning loop via AgentExecutor (which is what initialize_agent returns under the hood).
Import AgentType:
from langchain.agents.agent_types import AgentType
Agent setup
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
)
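A single turn that exercises the reasoning loop; because verbose=True, the intermediate Thought/Action/Observation steps are printed to stdout (the query is illustrative):

# A query that should route through the calculator tool
print(agent.run("What is 18 * 45?"))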
7. Vector Store
Documents are loaded with TextLoader, embedded, and stored in FAISS (Chroma works the same way).
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
Load documents into vector DB
loader = TextLoader("docs/sample.txt")
docs = loader.load()
db = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = db.as_retriever()
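A quick sanity check that retrieval works before wiring the retriever into a chain (the query is illustrative):

# Fetch the chunks most relevant to a query
results = retriever.get_relevant_documents("What is this document about?")
for doc in results:
    print(doc.page_content[:200])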
8. Output Parser
Parses model output into a structured format (e.g. the JSON shape required by the system prompt, or a clean chat response).
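The code for utils/output_parser.py isn't shown above, but given the JSON format the system prompt demands, a minimal sketch could look like this (the class name is illustrative):

import json
from langchain.schema import BaseOutputParser

class JSONResponseParser(BaseOutputParser):
    """Extracts the "response" field required by prompts/system_prompt.txt."""

    def parse(self, text: str) -> str:
        try:
            return json.loads(text)["response"]
        except (json.JSONDecodeError, KeyError):
            # Fall back to raw text if the model ignored the JSON format
            return text.strip()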
✅ Best Practices
- Use environment variables for OpenAI keys.
- Always sanitize tool outputs.
- Validate prompt context length for GPT-4.
- Add logs for AgentExecutor debugging.
🚫 Common Pitfalls
- Tools not returning strings → the agent crashes.
- Vector store missing a retriever → RetrievalQA fails silently.
- Memory bloating → use windowed memory for long chats (see the sketch below).
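For the memory-bloat pitfall, ConversationBufferWindowMemory is one drop-in fix; a sketch that keeps only the last five exchanges (k=5 is an arbitrary choice):

from langchain.memory import ConversationBufferWindowMemory

# Same interface as ConversationBufferMemory, but only the last
# k exchanges are kept, so the prompt cannot grow without bound
memory = ConversationBufferWindowMemory(
    memory_key="chat_history",
    return_messages=True,
    k=5,
)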
📦 Ready to Use
You can download the complete code here and plug in your OpenAI key to start experimenting.