Build a Full-Stack AI Agent with LangGraph & React

AI Strategy · January 10, 2026

A complete "Zero to Hero" guide: From Python logic to a live Web Interface.

Stack: Python 3.12+ · uv · LangGraph · React

🚀 What We Are Building

We are going to build a sophisticated AI application that doesn't just "chat." It will intelligently decide whether to look up private documents (RAG) or use calculator tools (Agent). We will then wrap this logic in a FastAPI backend and build a modern React chat interface on top of it.

Part 1: The Python Backend

Step 1: Initialize with `uv`

We will use uv (a fast Python package and project manager) to set up our project cleanly.

# 1. Create a folder for your project
mkdir ai-fullstack-app
cd ai-fullstack-app

# 2. Initialize the Python project
uv init

# 3. Add the required libraries
uv add langchain langchain-community langchain-openai langgraph faiss-cpu pypdf fastapi uvicorn

# 4. Create a dummy data file for the AI to read
echo "Project Secret: The launch code is ALPHA-77." > data.txt

Step 2: The AI Application Code

Create a file named main.py. It combines the two behaviors, RAG and a tool-using Agent, behind a LangGraph router, with a FastAPI layer at the bottom so the logic can talk to the web.

Copy this entire block into main.py:

from typing import TypedDict, Literal
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
import uvicorn

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain.memory import ConversationSummaryMemory

from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain.chains import RetrievalQA

from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS

from langgraph.graph import StateGraph, END

# --- 1️⃣ SETUP & CONFIG ---
# We initialize the FastAPI app first
api_app = FastAPI()

# Enable CORS so our React frontend can talk to this backend
api_app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Allows all origins
    allow_methods=["*"],  # Allows all methods
    allow_headers=["*"],  # Allows all headers
)
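
# NOTE: allow_origins=["*"] is fine for local development only. In production,
# restrict it to your real frontend origin, e.g. ["https://myapp.example.com"].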

# --- 2️⃣ LLM ---
# ChatOpenAI reads the OPENAI_API_KEY environment variable automatically.
llm = ChatOpenAI(model="gpt-4o-mini")

# --- 3️⃣ MEMORY ---
# Keeps a running LLM-written summary of the conversation (exposed as {history}).
memory = ConversationSummaryMemory(llm=llm)

# --- 4️⃣ TOOLS ---
@tool
def add_numbers(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@tool
def get_app_info() -> str:
    """Returns information about this application"""
    return "This is a demo LangChain app using agents, tools, RAG, memory, and LangGraph."

tools = [add_numbers, get_app_info]
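
# @tool turns each function into a structured tool: the docstring and type
# hints become the schema the LLM sees when deciding which tool to call.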

# --- 5️⃣ AGENT ---
agent_prompt = ChatPromptTemplate.from_messages([
    # Without {history} here, the memory summary would never reach the model.
    ("system", "You are a helpful assistant. Use tools when needed.\n"
               "Conversation summary so far: {history}"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_openai_tools_agent(
    llm=llm,
    tools=tools,
    prompt=agent_prompt
)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True
)
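
# The agent loop: the LLM sees the question plus the tool schemas, emits a tool
# call if needed, AgentExecutor runs the tool and feeds the result back, and
# this repeats until the LLM produces a final answer.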

# --- 6️⃣ RAG PIPELINE ---
# Ensure you have 'data.txt' in the same folder!
loader = TextLoader("data.txt")
docs = loader.load()

embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(docs, embeddings)

rag_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever()
)
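
# At query time: the question is embedded, FAISS returns the most similar
# chunks of data.txt, and RetrievalQA stuffs them into a prompt for the LLM.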

# --- 7️⃣ LANGGRAPH STATE ---
class State(TypedDict):
    question: str
    route: Literal["rag", "agent"]
    answer: str
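
# LangGraph merges whatever dict a node returns into this shared state, so
# each node only returns the keys it updates.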

# --- 8️⃣ ROUTER ---
def router(state: State):
    # Naive keyword routing: document-style questions go to RAG, the rest to the agent.
    question = state["question"].lower()
    if any(k in question for k in ("langchain", "langgraph", "document", "secret")):
        return {"route": "rag"}
    return {"route": "agent"}

# --- 9️⃣ NODES ---
def rag_node(state: State):
    # RetrievalQA expects its input under the "query" key.
    response = rag_chain.invoke({"query": state["question"]})
    return {"answer": response["result"]}

def agent_node(state: State):
    response = agent_executor.invoke({"input": state["question"]})
    return {"answer": response["output"]}

# --- 🔟 GRAPH CONSTRUCTION ---
graph = StateGraph(State)

graph.add_node("router", router)
graph.add_node("rag", rag_node)
graph.add_node("agent", agent_node)

graph.set_entry_point("router")

graph.add_conditional_edges(
    "router",
    lambda x: x["route"],
    {
        "rag": "rag",
        "agent": "agent"
    }
)

graph.add_edge("rag", END)
graph.add_edge("agent", END)

# Compile the graph
app_logic = graph.compile()
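
# app_logic.invoke() now runs: router -> (rag | agent) -> END and returns the final state.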

# --- 1️⃣1️⃣ API ENDPOINTS ---
# We define the data model for the incoming request
class UserRequest(BaseModel):
    question: str

@api_app.post("/chat")
def chat_endpoint(req: UserRequest):
    # This runs your LangGraph logic
    result = app_logic.invoke({"question": req.question})
    return {"answer": result["answer"]}

# Entry point for running the server
if __name__ == "__main__":
    uvicorn.run(api_app, host="0.0.0.0", port=8000)
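
The keyword router above is deliberately naive. A more robust option is to let the LLM itself classify the question. Below is a minimal sketch of that idea; it reuses llm, State, BaseModel, and Literal already defined in main.py, while the RouteDecision model and the prompt wording are illustrative assumptions, not part of the tutorial code:

# Hypothetical drop-in replacement for router() above.
class RouteDecision(BaseModel):
    route: Literal["rag", "agent"]

def llm_router(state: State):
    # Ask the model to classify the question instead of matching keywords.
    classifier = llm.with_structured_output(RouteDecision)
    decision = classifier.invoke(
        "Classify this question as 'rag' (it asks about our private documents) "
        "or 'agent' (math or questions about this app): " + state["question"]
    )
    return {"route": decision.route}

To try it, register it with graph.add_node("router", llm_router) before compiling; everything else stays the same.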

Step 3: Run the Backend

In your terminal, make sure your OpenAI API key is set, then start the server on port 8000:

export OPENAI_API_KEY="sk-..."  # required by ChatOpenAI and OpenAIEmbeddings
uv run main.py
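
Once the server is up, you can sanity-check the endpoint before touching any frontend code. Here is a minimal smoke test using only the Python standard library (it assumes the server is running on localhost:8000; the file name test_chat.py is just a suggestion):

# test_chat.py: POST a question to the /chat endpoint and print the answer
import json
import urllib.request

payload = json.dumps({"question": "What is the project secret?"}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8000/chat",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["answer"])

Run it in a second terminal with uv run test_chat.py; you should get back the launch code from data.txt.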

Part 2: The React Frontend

Now, let's build the visual interface. We assume you have Node.js installed. If not, download it from nodejs.org.

Step 1: Create the Project

Open a new terminal window (leave the Python one running) and run these commands:

# 1. Create a new React project with Vite
npm create vite@latest ai-frontend -- --template react

# 2. Move into the folder
cd ai-frontend

# 3. Install dependencies
npm install

# 4. Start the frontend
npm run dev

Step 2: The Frontend Code

Open the file src/App.jsx in your code editor. Delete everything in it and paste this code:

import { useState } from "react";

function App() {
  const [input, setInput] = useState("");
  const [messages, setMessages] = useState([]);
  const [loading, setLoading] = useState(false);

  const sendMessage = async () => {
    if (!input) return;

    // 1. Add user message to UI
    const newMessages = [...messages, { sender: "User", text: input }];
    setMessages(newMessages);
    setLoading(true);

    try {
      // 2. Send to our Python Backend
      const response = await fetch("http://localhost:8000/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ question: input }),
      });

      const data = await response.json();

      // 3. Add AI response to UI
      setMessages([...newMessages, { sender: "AI", text: data.answer }]);
    } catch (error) {
      console.error("Error:", error);
      setMessages([...newMessages, { sender: "System", text: "Error connecting to server." }]);
    }

    setLoading(false);
    setInput("");
  };

  return (
    // NOTE: the original markup was lost in publishing; this layout is a minimal reconstruction.
    <div style={{ maxWidth: "600px", margin: "40px auto", fontFamily: "sans-serif" }}>
      <h1>🤖 AI Analyst</h1>

      {/* Chat Window */}
      <div style={{ border: "1px solid #ccc", borderRadius: "5px", padding: "10px", minHeight: "300px" }}>
        {messages.map((msg, index) => (
          <p key={index}><strong>{msg.sender}:</strong> {msg.text}</p>
        ))}
        {loading && <p><em>AI is thinking...</em></p>}
      </div>

      {/* Input Area */}
      <div style={{ display: "flex", gap: "10px", marginTop: "10px" }}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === "Enter" && sendMessage()}
          placeholder="Ask about the document or math..."
          style={{ flex: 1, padding: "10px", borderRadius: "5px", border: "1px solid #ccc" }}
        />
        <button onClick={sendMessage} disabled={loading}>Send</button>
      </div>
    </div>
  );
}

export default App;

🎉 Success!

You now have a functional hybrid AI agent.

Try asking: "What is the project secret?" (Triggers RAG)
vs
"What is 25 + 17?" (Triggers the Agent, which calls the add_numbers tool)
