If 2024 was the year of RAG (Retrieval Augmented Generation), 2025 is the year of Agentic Workflows.
The industry is moving away from “God Models” (asking one LLM to do everything) and toward Multi-Agent Systems. Think of it this way: instead of hiring one genius to write, code, and test your app, you hire a team of specialists who pass work to each other.
The tool powering this shift is LangGraph. Unlike standard chains, which are linear (A → B → C), LangGraph allows for loops, conditional logic, and shared state, mimicking a real human office.
In this tutorial, we will build a simple “Content Factory” consisting of two AI employees:
- The Editor (Agent A): Receives a topic and creates a detailed outline.
- The Writer (Agent B): Receives the outline and writes the final piece.
The Core Concept: The “State”
Before we code, you need to understand the State. In LangGraph, the “State” is like a shared Google Doc.
- Agent A opens the doc, writes the outline, and closes it.
- Agent B opens the same doc, reads the outline, writes the draft, and closes it.
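Under the hood, this "shared doc" behaves like a single Python dict that each agent patches with a partial update. The sketch below illustrates the idea with plain dicts only (it is not LangGraph's actual internals, and the values are made up):

```python
# The shared "Google Doc" as a plain dict.
state = {"topic": "AI agents", "outline": "", "final_draft": ""}

# Agent A returns only the key it changed; that partial update
# is merged into the shared state.
update_a = {"outline": "1. Intro\n2. Body\n3. Conclusion"}
state.update(update_a)

# Agent B reads what A wrote and adds its own key.
update_b = {"final_draft": "Based on: " + state["outline"]}
state.update(update_b)

print(state["final_draft"])
```

Note that neither agent replaces the whole state; each contributes its own keys, which is exactly the pattern the nodes below follow.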
Step 1: The Setup
You'll need the langgraph and langchain libraries.
```bash
pip install langgraph langchain langchain_openai
```
Note: You will need an OpenAI API key for this example, but you can swap it for Anthropic or Ollama easily.
Step 2: Defining the Team
Create a file named team.py.
First, we define our State. This is the schema of the data our agents will pass around.
```python
import os
from typing import TypedDict

from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

# Set your key
# os.environ["OPENAI_API_KEY"] = "sk-..."

# 1. DEFINE THE STATE (The Shared "Google Doc")
class AgentState(TypedDict):
    topic: str
    outline: str
    final_draft: str

# Initialize the LLM
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)
```
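A TypedDict is an ordinary dict at runtime; the class only documents which keys the agents may read and write. This short sketch (with an illustrative topic value) shows what a populated state looks like:

```python
from typing import TypedDict

class AgentState(TypedDict):
    topic: str
    outline: str
    final_draft: str

# At runtime this is just a plain dict; the hints guide tooling, not execution.
state: AgentState = {
    "topic": "The future of AI Agents in 2026",
    "outline": "",
    "final_draft": "",
}

print(isinstance(state, dict))  # TypedDict values are ordinary dicts
```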
Step 3: Creating the Agents (Nodes)
Now we create the functions that represent our specific workers.
```python
# 2. DEFINE THE NODES (The Workers)
def editor_node(state: AgentState):
    """The Editor: Takes a topic, returns an outline."""
    print("--- EDITOR IS WORKING ---")
    topic = state['topic']

    # Prompting the LLM
    messages = [
        SystemMessage(content="You are a Senior Editor. Create a 3-bullet point outline for the following topic."),
        HumanMessage(content=topic)
    ]
    response = llm.invoke(messages)

    # Update the state with the new outline
    return {"outline": response.content}

def writer_node(state: AgentState):
    """The Writer: Takes an outline, writes the post."""
    print("--- WRITER IS WORKING ---")
    outline = state['outline']

    messages = [
        SystemMessage(content="You are a Tech Writer. Write a short paragraph based on this outline."),
        HumanMessage(content=outline)
    ]
    response = llm.invoke(messages)

    # Update the state with the final draft
    return {"final_draft": response.content}
```
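Because each node is just a function from state to a partial update, you can unit-test one in isolation by injecting a fake model. The `StubLLM` class below is hypothetical (it only mimics the `.invoke(...).content` shape our nodes rely on, and is not part of LangChain):

```python
class StubLLM:
    """Hypothetical stand-in for ChatOpenAI: same .invoke(...).content shape."""
    def invoke(self, messages):
        class Response:
            content = "- Point one\n- Point two\n- Point three"
        return Response()

def editor_node(state, llm=StubLLM()):
    """Same logic as the Editor above, with an injectable LLM for testing."""
    messages = [
        ("system", "You are a Senior Editor. Create a 3-bullet point outline."),
        ("human", state["topic"]),
    ]
    return {"outline": llm.invoke(messages).content}

result = editor_node({"topic": "AI agents"})
print(result["outline"])  # the stub's canned outline
```

This keeps tests fast and free: no API key, no network, and the node's contract (dict in, partial dict out) is verified directly.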
Step 4: Building the Graph (The Workflow)
This is where the magic happens. We wire the agents together.
```python
# 3. BUILD THE GRAPH
workflow = StateGraph(AgentState)

# Add our workers
workflow.add_node("editor", editor_node)
workflow.add_node("writer", writer_node)

# Define the flow: Start -> Editor -> Writer -> End
workflow.set_entry_point("editor")
workflow.add_edge("editor", "writer")
workflow.add_edge("writer", END)

# Compile the machine
app = workflow.compile()
```
Step 5: Running the Factory
Now, let’s give our new team a job.
```python
# 4. EXECUTE
inputs = {"topic": "The future of AI Agents in 2026"}

print("Starting the workflow...")
result = app.invoke(inputs)

print("\n=== FINAL RESULT ===")
print(result['final_draft'])
```
What happens when you run this?
- Entry: The `topic` enters the State.
- Editor Node: Sees the topic, calls the LLM, and saves an `outline` to the State.
- Handoff: The graph moves to the next node.
- Writer Node: Sees the `outline` in the State (it ignores the raw topic), calls the LLM, and saves the `final_draft`.
- Exit: The process finishes and prints the result.
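The steps above can be traced end to end without any API calls using a plain-Python stand-in for the graph. Everything here is illustrative (the `fake_llm` function and the simplified node copies are not LangGraph), but the order of handoffs matches the real workflow:

```python
def fake_llm(system, user):
    """Hypothetical stand-in for llm.invoke: returns a canned string."""
    if "Editor" in system:
        return "Outline for: " + user
    return "Draft based on: " + user

def editor(state):
    # Reads topic, writes outline
    return {"outline": fake_llm("You are a Senior Editor.", state["topic"])}

def writer(state):
    # Reads outline, writes final_draft
    return {"final_draft": fake_llm("You are a Tech Writer.", state["outline"])}

# Entry -> Editor -> Writer -> Exit, merging each partial update into the State
state = {"topic": "The future of AI Agents in 2026"}
for node in (editor, writer):
    state.update(node(state))

print(state["final_draft"])
# -> Draft based on: Outline for: The future of AI Agents in 2026
```

Swapping the stubs for real LLM calls changes the content of each update, but not the shape of the pipeline.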