LangChain Practical Implementation
Part 7 of LangChain Mastery
5/27/2025
In this part of the LangChain Mastery Series, we walk through a real-world AI chat application powered by LangChain. You'll get:
- Complete folder structure
- Code for memory, agents, tools, vector DBs, chains, and prompt templates
- Best practices and gotchas to avoid
Project Structure Overview
langchain-chat-app/
├── main.py
├── tools/
│   ├── search_tool.py
│   └── calculator_tool.py
├── prompts/
│   └── system_prompt.txt
├── loaders/
│   └── doc_loader.py
├── memory/
│   └── user_memory.py
├── chains/
│   └── qa_chain.py
├── agent/
│   └── agent_executor.py
├── vectorstore/
│   └── vector_db.py
└── utils/
    └── output_parser.py
Each component is modular, readable, and purpose-driven.
Highlights
Load environment variables
Set up your API key from OpenAI (see the OpenAI docs for how to create one):
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
1. main.py
The entry point of your chat application. It initializes the LLM, memory, tools, and the agent.
from langchain.agents import initialize_agent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.agents.agent_types import AgentType
from tools.search_tool import search_tool
from tools.calculator_tool import calculator_tool
from langchain.prompts import SystemMessagePromptTemplate, HumanMessagePromptTemplate, ChatPromptTemplate
from langchain.chains import LLMChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
import os
# Load environment variables
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
# Setup memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
# Load documents into vector DB
loader = TextLoader("docs/sample.txt")
docs = loader.load()
db = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = db.as_retriever()
# Setup tools
tools = [search_tool, calculator_tool]
# Setup prompt
system_template = open("prompts/system_prompt.txt").read()
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{input}")
])
# LLM setup
llm = ChatOpenAI(temperature=0, model="gpt-4")
# Agent setup
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory
)
# Run loop
if __name__ == "__main__":
    print("LangChain Practical Chat App")
    while True:
        user_input = input("You: ")
        if user_input.lower() in ["exit", "quit"]:
            break
        response = agent.run(user_input)
        print("Assistant:", response)
2. Tools
- Search Tool simulates a Google search for external data.
- Calculator Tool handles math operations.
def run_search(query: str) -> str:
    return "Pretend this came from the web: " + query

def calculate(expression: str) -> str:
    # Note: eval() on raw user input is unsafe; restrict or sandbox it in production
    return str(eval(expression))
3. Prompt Template
Defined in system_prompt.txt and used via ChatPromptTemplate to guide LLM behavior.
You are a helpful assistant capable of answering user queries using tools:
1. calculator - operations like add/subtract/multiply/divide two numbers
2. simulated search - searches Google and returns the result
If a user asks for factual or numerical information, use the appropriate tool.
Respond strictly in the JSON format below:
{
    "response": "YOUR_RESPONSE"
}
4. Memory
Implements ConversationBufferMemory to track past user interactions.
Import statement for ConversationBufferMemory
from langchain.memory import ConversationBufferMemory
Creating the ConversationBufferMemory instance:
# Setup memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
Memory usage:
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory
)
5. Chains
Includes LLMChain for direct QA and RetrievalQA for vector-based retrieval.
from langchain.chains import LLMChain
6. Agent
Wraps tools + chains + memory into a reasoning loop via AgentExecutor.
Import AgentType
from langchain.agents.agent_types import AgentType
Agent setup
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory
)
7. Vector Store
Documents are loaded with TextLoader, embedded, and stored in FAISS or Chroma.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
Load documents into vector DB
loader = TextLoader("docs/sample.txt")
docs = loader.load()
db = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = db.as_retriever()
8. Output Parser
Parses the model's output into a structured format (e.g. JSON or a clean chat response).
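Since the system prompt asks the model to answer inside a JSON envelope, utils/output_parser.py can extract the "response" field and fall back to the raw text when parsing fails. A minimal sketch (the function name is an assumption; the original file's contents aren't shown):

```python
import json
import re

def parse_agent_output(raw: str) -> str:
    """Pull the "response" field out of the agent's JSON reply.

    Falls back to the raw text when the model doesn't return valid JSON.
    """
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        try:
            return json.loads(match.group())["response"]
        except (json.JSONDecodeError, KeyError):
            pass
    return raw.strip()
```

The fallback matters in practice: models occasionally ignore format instructions, and a parser that raises on malformed JSON would crash the chat loop.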
Best Practices
- Use environment variables for OpenAI keys.
- Always sanitize tool outputs.
- Validate prompt context length for GPT-4.
- Add logs for AgentExecutor debugging.
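On the first point: rather than hardcoding the key as in the snippets above, a small helper can read it from the environment and fail fast (the helper name is illustrative):

```python
import os

def require_api_key() -> str:
    """Fetch the OpenAI key from the environment, failing fast if it's missing."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY before starting the app")
    return key
```

Failing at startup beats a cryptic authentication error on the first LLM call, and keeps the key out of source control.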
Common Pitfalls
- Tools not returning strings → agent crashes.
- Vector store missing a retriever → RetrievalQA fails silently.
- Memory bloating → use windowed memory for long chats.
Ready to Use
You can download the complete code here and plug in your OpenAI key to start experimenting.