Build a Full-Stack AI Agent with LangGraph, React and Docker

AI Strategy · umais20@yahoo.com · January 10, 2026

A complete "Zero to Hero" guide: Python Backend, React Frontend, and Docker Deployment.

Stack: Python 3.12+ · uv · LangGraph · React · Docker

🚀 What We Are Building

We are building an AI application that intelligently routes each question: either to a retrieval step over private documents (RAG) or to a tool-using agent with a calculator (Agent). Finally, we will containerize the entire application so it runs on any machine without installation headaches.

Part 1: The Python Backend

Step 1: Initialize with `uv`

We will use uv (an extremely fast Python package and project manager) to set up our project cleanly.

# 1. Create a folder for your project
mkdir ai-fullstack-app
cd ai-fullstack-app

# 2. Initialize the Python project
uv init

# 3. Add the required libraries
uv add langchain langchain-community langchain-openai langgraph faiss-cpu pypdf fastapi uvicorn

# 4. Create a dummy data file for the AI to read
echo "Project Secret: The launch code is ALPHA-77." > data.txt

Step 2: The AI Application Code

Create a file named main.py. This contains our LangGraph logic wrapped in FastAPI.

from typing import TypedDict, Literal
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
import uvicorn

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain.chains import RetrievalQA
from langchain.memory import ConversationSummaryMemory
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langgraph.graph import StateGraph, END

# --- SETUP ---
api_app = FastAPI()
api_app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)

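# Both OpenAI clients below read OPENAI_API_KEY from the environment,
# so export it (export OPENAI_API_KEY=...) before starting the server.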
llm = ChatOpenAI(model="gpt-4o-mini")
memory = ConversationSummaryMemory(llm=llm)

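# The @tool decorator registers this function as a tool the agent can call;
# its docstring becomes the description the model uses to decide when to call it.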
@tool
def add_numbers(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

tools = [add_numbers]

agent_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_openai_tools_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory)

# --- RAG ---
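# Load data.txt, embed it, and index it in an in-memory FAISS store;
# RetrievalQA retrieves the most relevant text and lets the LLM answer from it.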
loader = TextLoader("data.txt")
docs = loader.load()
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(docs, embeddings)
rag_chain = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())

# --- GRAPH ---
class State(TypedDict):
    question: str
    route: Literal["rag", "agent"]
    answer: str

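# Naive keyword routing for demo purposes: questions mentioning "secret"
# go to the RAG branch, everything else to the tool-using agent.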
def router(state: State):
    if "secret" in state["question"].lower(): return {"route": "rag"}
    return {"route": "agent"}

def rag_node(state: State):
    return {"answer": rag_chain.invoke(state["question"])["result"]}

def agent_node(state: State):
    return {"answer": agent_executor.invoke({"input": state["question"]})["output"]}

graph = StateGraph(State)
graph.add_node("router", router)
graph.add_node("rag", rag_node)
graph.add_node("agent", agent_node)
graph.set_entry_point("router")
graph.add_conditional_edges("router", lambda x: x["route"], {"rag": "rag", "agent": "agent"})
graph.add_edge("rag", END)
graph.add_edge("agent", END)
app_logic = graph.compile()

# --- API ---
class UserRequest(BaseModel):
    question: str

@api_app.post("/chat")
def chat_endpoint(req: UserRequest):
    result = app_logic.invoke({"question": req.question})
    return {"answer": result["answer"]}

if __name__ == "__main__":
    uvicorn.run(api_app, host="0.0.0.0", port=8000)
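
Before moving on to the frontend, it's worth smoke-testing the API. Start the server (for example with uv run main.py, with OPENAI_API_KEY exported), then run a small client. The sketch below uses only the standard library; the file name test_chat.py and the sample questions are just suggestions.

# test_chat.py - minimal smoke test for the /chat endpoint (stdlib only)
import json
import urllib.request

def ask(question: str) -> str:
    # POST a JSON body to the FastAPI endpoint and return the "answer" field
    req = urllib.request.Request(
        "http://localhost:8000/chat",
        data=json.dumps({"question": question}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["answer"]

if __name__ == "__main__":
    print(ask("What is the project secret?"))  # mentions "secret" -> RAG branch
    print(ask("What is 2 + 2?"))               # no keyword -> agent branch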

Part 2: The React Frontend

Step 1: Create the React App
# Run in a new terminal
npm create vite@latest ai-frontend -- --template react
cd ai-frontend
npm install

Step 2: App.jsx

Replace the contents of src/App.jsx with this simple chat interface.

import { useState } from "react";

function App() {
  const [input, setInput] = useState("");
  const [messages, setMessages] = useState([]);

  const sendMessage = async () => {
    const newMessages = [...messages, { sender: "User", text: input }];
    setMessages(newMessages);
    
    // Note: We use 'localhost' here, but this might change with Docker!
    const response = await fetch("http://localhost:8000/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ question: input }),
    });
    const data = await response.json();
    setMessages([...newMessages, { sender: "AI", text: data.answer }]);
    setInput("");
  };

  return (
    <div>
      <h1>🤖 AI Analyst</h1>
      {messages.map((m, i) => (
        <p key={i}>
          {m.sender}: {m.text}
        </p>
      ))}
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={sendMessage}>Send</button>
    </div>
  );
}

export default App;
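
With the backend running, start the dev server from the ai-frontend folder with npm run dev and open the printed URL (Vite defaults to http://localhost:5173) to try the chat.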

Part 3: Containerization (Docker)

Now we move from "it works on my machine" to "it works everywhere." We will create a Dockerfile to package our app and use Docker Compose to run both the frontend and backend together.

[Image of Docker container architecture diagram]
1. The Backend Dockerfile

Create a file named Dockerfile (no extension) in your main ai-fullstack-app folder. This file is a recipe that tells Docker how to build your Python environment.

# 1. Use an official Python runtime as a parent image
FROM python:3.12-slim

# 2. Set the working directory in the container
WORKDIR /app

# 3. Copy the current directory contents into the container at /app
COPY . .

# 4. Install any needed packages
# (We list them explicitly here for simplicity, but usually you use requirements.txt)
RUN pip install langchain langchain-community langchain-openai langgraph faiss-cpu pypdf fastapi uvicorn

# 5. Make port 8000 available to the world outside this container
EXPOSE 8000

# 6. Run the application
CMD ["uvicorn", "main:api_app", "--host", "0.0.0.0", "--port", "8000"]

Tip: The --host 0.0.0.0 flag is crucial. Without it, the app runs inside the container but can't be reached from outside.

2. The .dockerignore File

Create a file named .dockerignore. This works exactly like .gitignore. It tells Docker which files not to copy into the container. This keeps your image small and fast.

__pycache__
.venv
.git
.env
# data.txt stays in the image for now, since main.py reads it at startup.
# (You could exclude it here later and mount it as a volume instead.)

3. Building & Running (The Basics)

Before we get fancy with Compose, let's just run the backend to test it.

# 1. Build the image (We name it 'ai-backend')
docker build -t ai-backend .

# 2. Run the container
# -p 8000:8000 maps port 8000 on your machine to port 8000 in the container
# -e passes your OpenAI key into the container (the app needs it)
docker run -p 8000:8000 -e OPENAI_API_KEY=$OPENAI_API_KEY ai-backend
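
With port 8000 published to your machine, the same smoke test from Part 1 (test_chat.py) should work unchanged against the containerized backend.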
4. Docker Compose (The Orchestrator)

Running individual commands is tiresome. Docker Compose allows us to define our backend and frontend services in a single file and start them with one command.

[Image of Docker Compose diagram]

Create a file named docker-compose.yml in your root folder:

services:
  # Service 1: Our Python Backend
  backend:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ./data.txt:/app/data.txt # Maps data.txt so you can edit it live
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY} # Passes your local key

  # Service 2: Our React Frontend
  frontend:
    image: node:20-alpine # recent Vite templates require Node 20+
    working_dir: /app
    volumes:
      - ./ai-frontend:/app # Maps your local folder to the container
    command: sh -c "npm install && npm run dev -- --host"
    ports:
      - "5173:5173"
5. The Magic Commands

Now, everything is controlled by these essential commands:

  • docker-compose up --build
    Builds the images and starts the containers. You will see the logs streaming in your terminal.
  • docker-compose up -d
    Starts the containers in "Detached" mode (background). Your terminal remains free.
  • docker-compose down
    Stops and removes the containers, cleaning everything up.
