
LangChain Practical Implementation

Part 7 of LangChain Mastery
5/27/2025

In this part of the LangChain Mastery Series, we walk through a real-world AI chat application powered by LangChain, covering the project structure and each core component in turn.

πŸ—οΈ Project Structure Overview

langchain-chat-app/
β”œβ”€β”€ main.py
β”œβ”€β”€ tools/
β”‚   β”œβ”€β”€ search_tool.py
β”‚   └── calculator_tool.py
β”œβ”€β”€ prompts/
β”‚   └── system_prompt.txt
β”œβ”€β”€ loaders/
β”‚   └── doc_loader.py
β”œβ”€β”€ memory/
β”‚   └── user_memory.py
β”œβ”€β”€ chains/
β”‚   └── qa_chain.py
β”œβ”€β”€ agent/
β”‚   └── agent_executor.py
β”œβ”€β”€ vectorstore/
β”‚   └── vector_db.py
└── utils/
    └── output_parser.py

Each component is modular, readable, and purpose-driven.

πŸ” Highlights

Load environment variables

Set up your API key from OpenAI (see the OpenAI documentation on generating one):

os.environ["OPENAI_API_KEY"] = "your-api-key-here"

1. main.py

The entry point of your chat application. It initializes the LLM, memory, tools, and the agent.

from langchain.agents import initialize_agent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.agents.agent_types import AgentType
from tools.search_tool import search_tool
from tools.calculator_tool import calculator_tool
from langchain.prompts import SystemMessagePromptTemplate, HumanMessagePromptTemplate, ChatPromptTemplate
from langchain.chains import LLMChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
import os

# Load environment variables
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# Setup memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Load documents into vector DB
loader = TextLoader("docs/sample.txt")
docs = loader.load()
db = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = db.as_retriever()

# Setup tools
tools = [search_tool, calculator_tool]

# Setup prompt
system_template = open("prompts/system_prompt.txt").read()
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{input}")
])

# LLM setup
llm = ChatOpenAI(temperature=0, model="gpt-4")

# Agent setup
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory
)

# Run loop
if __name__ == "__main__":
    print("LangChain Practical Chat App")
    while True:
        user_input = input("You: ")
        if user_input.lower() in ["exit", "quit"]:
            break
        response = agent.run(user_input)
        print("Assistant:", response)

2. Tools

Plain Python functions wrapped as LangChain Tool objects so the agent can call them (main.py imports search_tool and calculator_tool from this module):

from langchain.agents import Tool

def run_search(query: str) -> str:
    return "Pretend this came from the web: " + query

def calculate(expression: str) -> str:
    # Note: eval runs arbitrary code; never expose it to untrusted input.
    return str(eval(expression))

search_tool = Tool(name="simulated_search", func=run_search,
                   description="Simulated web search for a query.")
calculator_tool = Tool(name="calculator", func=calculate,
                       description="Evaluate an arithmetic expression.")
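The eval-based calculator is convenient for a demo but unsafe. Here is a hedged sketch of a safer alternative (safe_calculate is a name introduced here, not part of the project): parse the expression with the ast module and allow only numeric literals and basic arithmetic operators.

```python
import ast
import operator

# Whitelist of permitted operators; anything else raises ValueError.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> str:
    """Evaluate a basic arithmetic expression without using eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")
    return str(_eval(ast.parse(expression, mode="eval")))

print(safe_calculate("2 + 3 * 4"))
```

Function calls, attribute access, and names never match the whitelisted node types, so inputs like `__import__('os')` are rejected instead of executed.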

3. Prompt Template

Defined in system_prompt.txt, used via ChatPromptTemplate to guide LLM behavior.

You are a helpful assistant capable of answering user queries using tools:
1. calculator - operations like adding, subtracting, multiplying, or dividing two numbers
2. simulated search - search the web and return the result
If a user asks for factual or numerical information, use the appropriate tool.
Respond strictly in the JSON format below:
{
    "response": YOUR_RESPONSE
}
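One practical caveat worth knowing: prompt templates treat { and } as variable placeholders (the same rule as Python's str.format), so literal JSON braces in a template file must be doubled as {{ and }}. Plain str.format illustrates the rule:

```python
# Literal braces in a format template must be doubled ({{ }});
# single braces are interpreted as variable placeholders.
template = 'Respond strictly in this format: {{"response": "{answer}"}}'
formatted = template.format(answer="YOUR_RESPONSE")
print(formatted)
```

If the braces in system_prompt.txt are left single, template formatting will raise an error or misread them as input variables.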

4. Memory

Implements ConversationBufferMemory to track past user interactions.

Import statement for ConversationBufferMemory

from langchain.memory import ConversationBufferMemory

Creating the ConversationBufferMemory instance:

# Setup memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

Passing the memory to the agent:

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory
)
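To make the buffer idea concrete, here is a minimal pure-Python sketch of what conversation buffer memory does (an illustration introduced here, not LangChain's actual implementation): each user/assistant exchange is appended to a list, and the full history is returned under the configured memory_key.

```python
class BufferMemorySketch:
    """Toy illustration of conversation buffer memory (not LangChain code)."""

    def __init__(self, memory_key="chat_history"):
        self.memory_key = memory_key
        self.messages = []

    def save_context(self, inputs, outputs):
        # Store one user/assistant exchange.
        self.messages.append(("human", inputs["input"]))
        self.messages.append(("ai", outputs["output"]))

    def load_memory_variables(self):
        # Return everything seen so far under the memory key.
        return {self.memory_key: list(self.messages)}

memory = BufferMemorySketch()
memory.save_context({"input": "Hi"}, {"output": "Hello!"})
print(memory.load_memory_variables()["chat_history"])
```

Because the whole history is replayed on every call, buffer memory is simple but grows without bound; long conversations eventually exceed the model's context window.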

5. Chains

Includes LLMChain for direct QA and RetrievalQA for vector-based retrieval.

from langchain.chains import LLMChain, RetrievalQA
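Conceptually, a chain is just prompt formatting, a model call, and optional parsing composed in sequence. A minimal sketch with a stubbed LLM (make_chain and fake_llm are names introduced here for illustration; a real chain would call ChatOpenAI):

```python
def make_chain(template, llm):
    """Toy chain: format the prompt, call the model, return the text."""
    def run(**variables):
        prompt = template.format(**variables)
        return llm(prompt)
    return run

# Stub LLM for illustration only; it just echoes the prompt.
fake_llm = lambda prompt: f"Answer to: {prompt}"

qa = make_chain("Question: {input}", fake_llm)
print(qa(input="What is LangChain?"))
```

A retrieval chain adds one more step before the model call: fetch relevant documents from the vector store and splice them into the prompt as context.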

6. Agent

Wraps tools, chains, and memory into a reasoning loop via an AgentExecutor.

Import AgentType:

from langchain.agents.agent_types import AgentType

Agent setup

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory
)

7. Vector Store

Documents loaded with TextLoader, embedded, and stored in FAISS or Chroma.

from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings

Load documents into vector DB

loader = TextLoader("docs/sample.txt")
docs = loader.load()
db = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = db.as_retriever()
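To show what the retriever is doing under the hood, here is a toy pure-Python illustration (introduced here, not FAISS or LangChain code): embed each document as a bag-of-words vector and return the documents most similar to the query by cosine similarity. Real embeddings are dense learned vectors, but the ranking mechanics are the same.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy "embedding": a bag-of-words term-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = ["LangChain builds LLM apps", "FAISS stores dense vectors", "Cats sleep a lot"]
print(retrieve("vector storage with FAISS", docs))
```

FAISS does the same nearest-neighbour lookup over dense embedding vectors, with indexing structures that keep it fast at scale.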

8. Output Parser

Parses output into structured format (e.g. JSON or clean chat response).
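A minimal sketch of such a parser, matching the JSON format requested by the system prompt (parse_response is a name introduced here; real model output often needs more defensive handling than this):

```python
import json

def parse_response(raw: str) -> str:
    """Extract the 'response' field from the model's JSON output."""
    try:
        return json.loads(raw)["response"]
    except (json.JSONDecodeError, KeyError, TypeError):
        # Fall back to the raw text if the model ignored the format.
        return raw.strip()

print(parse_response('{"response": "2 + 2 = 4"}'))
print(parse_response("not json at all"))
```

The fallback branch matters in practice: models occasionally wrap JSON in prose or markdown fences, so a parser that raises on malformed output will crash the chat loop.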

βœ… Best Practices

Load your OpenAI API key from the environment rather than hardcoding it in source.
Keep each component (tools, prompts, memory, chains, agent) in its own module, as in the structure above.
Use temperature=0 for deterministic, tool-driven responses.

🚫 Common Pitfalls

Committing a hardcoded API key to version control.
Passing untrusted input to eval() in a calculator tool.
Referencing prompt files with the wrong path (the file lives at prompts/system_prompt.txt).

πŸ“¦ Ready to Use

You can download the complete code here and plug in your OpenAI key to start experimenting.