Migrating from LangGraph Cloud
This guide helps you migrate your existing LangGraph Cloud workflows to Duragraph with minimal code changes.
Why Migrate?
- ✅ Open Source - Self-host and customize Duragraph
- ✅ Cost Control - No per-execution pricing
- ✅ Data Sovereignty - Keep your data in your infrastructure
- ✅ Enterprise Features - Coming in Duragraph Cloud
API Compatibility
Duragraph provides drop-in compatibility with LangGraph Cloud's REST API. Most client SDKs work without modification.
Compatible Endpoints
| LangGraph Cloud | Duragraph | Status |
|---|---|---|
| POST /assistants | POST /assistants | ✅ Full |
| POST /threads | POST /threads | ✅ Full |
| POST /runs | POST /runs | ✅ Full |
| GET /runs/{run_id}/stream | GET /runs/{run_id}/stream | ✅ Full |
| POST /runs/{run_id}/wait | POST /runs/{run_id}/wait | ✅ Full |
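Because the paths are identical, repointing a hard-coded endpoint URL is just a host swap. A minimal sketch (the hostnames are placeholders for your own deployment):

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_base_url(url: str, duragraph_host: str) -> str:
    """Repoint a LangGraph Cloud endpoint URL at a Duragraph host.

    The path and query are preserved unchanged because the endpoints
    are compatible.
    """
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, duragraph_host, parts.path, parts.query, parts.fragment))

print(rewrite_base_url("https://api.langgraph.cloud/runs", "your-duragraph-instance.com"))
# → https://your-duragraph-instance.com/runs
```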
Step-by-Step Migration
1. Update Base URL
```python
# Before (LangGraph Cloud)
from langgraph_sdk import get_client

client = get_client(url="https://api.langgraph.cloud")

# After (Duragraph)
from langgraph_sdk import get_client  # Same SDK!

client = get_client(url="https://your-duragraph-instance.com")
```

2. Authentication Changes
```python
# Before (LangGraph Cloud)
client = get_client(
    url="https://api.langgraph.cloud",
    api_key="lsv2_..."
)

# After (Duragraph)
client = get_client(
    url="https://your-duragraph-instance.com",
    api_key="your-duragraph-token"  # Or use basic auth
)
```

3. Graph Definition
Your graph definitions work unchanged:
```python
# This works exactly the same in Duragraph
from langgraph import StateGraph
from typing import TypedDict

class State(TypedDict):
    messages: list

def chatbot(state: State):
    # Your existing chatbot logic
    return {"messages": state["messages"] + ["Hello!"]}

# Build graph (unchanged)
graph = StateGraph(State)
graph.add_node("chatbot", chatbot)
graph.set_entry_point("chatbot")
graph.set_finish_point("chatbot")
compiled_graph = graph.compile()
```

4. Deployment Changes
```python
# Before (LangGraph Cloud)
assistant = await client.assistants.create(
    graph_id="your-graph",
    config={"model": "gpt-4"}
)

# After (Duragraph) - Same API!
assistant = await client.assistants.create(
    graph_id="your-graph",
    config={"model": "gpt-4"}
)
```

Feature Mapping
Supported Features ✅
- Stateful Conversations - Thread management
- Streaming Responses - Real-time SSE events
- Multi-turn Conversations - Context preservation
- Error Handling - Automatic retries and recovery
Coming Soon 🚧
- Checkpointing - Custom checkpoint stores (Q1)
- Human-in-the-Loop - Interrupts and approvals (Q2)
- Webhooks - Event notifications (Q2)
- Background Runs - Long-running workflows (Q2)
- Scheduled Runs - Cron-based execution (Q2)
- Multi-agent Coordination - Agent-to-agent communication (Q3)
Not Supported ❌
- LangGraph Studio - Use Duragraph Dashboard instead
- Cloud-only Features - Managed infrastructure (use Duragraph Cloud)
Migration Examples
Simple Chatbot
```python
# Your existing LangGraph code works as-is
import asyncio
from langgraph_sdk import get_client

async def main():
    # Just change the URL!
    client = get_client(url="http://localhost:8080")

    # Everything else is identical
    assistant = await client.assistants.create(
        graph_id="chatbot",
        config={"model": "gpt-3.5-turbo"}
    )
    thread = await client.threads.create()
    run = await client.runs.create(
        thread_id=thread["thread_id"],
        assistant_id=assistant["assistant_id"],
        input={"messages": [{"role": "user", "content": "Hello!"}]}
    )

    # Stream events exactly the same way
    async for event in client.runs.stream(
        thread_id=thread["thread_id"],
        run_id=run["run_id"]
    ):
        print(f"Event: {event}")

asyncio.run(main())
```

Multi-Agent Workflow
```python
# Your complex multi-agent setups work too
from langgraph import StateGraph, END
from typing import TypedDict, Literal

class AgentState(TypedDict):
    messages: list
    next_agent: Literal["researcher", "writer", "reviewer"]

def researcher(state: AgentState):
    # Your existing agent logic
    return {
        "messages": state["messages"] + ["Research completed"],
        "next_agent": "writer"
    }

def writer(state: AgentState):
    return {
        "messages": state["messages"] + ["Article written"],
        "next_agent": "reviewer"
    }

def reviewer(state: AgentState):
    return {
        "messages": state["messages"] + ["Review completed"],
        "next_agent": "END"
    }

# Build graph (unchanged from LangGraph)
graph = StateGraph(AgentState)
graph.add_node("researcher", researcher)
graph.add_node("writer", writer)
graph.add_node("reviewer", reviewer)

# Conditional routing (works the same)
def route_agent(state: AgentState):
    return state["next_agent"]

graph.add_conditional_edges(
    "researcher",
    route_agent,
    {"writer": "writer", "END": END}
)
# ... continue building as normal
```

Testing Your Migration
1. Local Testing
```bash
# Start Duragraph locally
docker compose up -d

# Run your existing tests
pytest tests/ --base-url=http://localhost:8080
```

2. Gradual Migration
```python
# Use feature flags for gradual migration
import os

DURAGRAPH_URL = os.getenv("DURAGRAPH_URL", "https://api.langgraph.cloud")
client = get_client(url=DURAGRAPH_URL)

# Start with non-critical workflows
if "duragraph" in DURAGRAPH_URL:
    # Route to Duragraph
    pass
else:
    # Keep using LangGraph Cloud
    pass
```

Performance Considerations
Latency
- Self-hosted: Lower latency (no external API calls)
- Network: Ensure workers are co-located with API
Throughput
- Scaling: Add more worker instances
- Database: Use read replicas for high query loads
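When comparing the two deployments, tail latency usually matters more than the mean. A small helper for summarizing recorded request latencies (a sketch using only the standard library; the sample values are illustrative):

```python
import math
import statistics

def p95(latencies: list[float]) -> float:
    """95th percentile of recorded request latencies (nearest-rank method)."""
    ordered = sorted(latencies)
    idx = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return ordered[idx]

# Latencies collected from test runs, in seconds (made-up numbers)
samples = [0.12, 0.15, 0.11, 0.50, 0.13, 0.14, 0.12, 0.16, 0.13, 0.12]
print(f"p95: {p95(samples):.2f}s, median: {statistics.median(samples):.2f}s")
```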
Monitoring
```python
# Add monitoring to compare performance
import time

start = time.time()
run = await client.runs.create(...)
latency = time.time() - start
print(f"Duragraph latency: {latency:.2f}s")
```

Getting Help
- Migration Issues: GitHub Discussions
- Feature Requests: GitHub Issues
- Professional Support: Contact us for Duragraph Cloud migration assistance
Next Steps
- ✅ Migrate a simple workflow first
- ✅ Test in staging environment
- ✅ Monitor performance metrics
- ✅ Gradually migrate production workflows
- ✅ Set up monitoring
- ✅ Configure production deployment
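Before flipping any production traffic, a quick reachability check against the target instance can come first on that list. A minimal sketch using only the standard library (the URL is a placeholder for your deployment; auth and endpoint paths are omitted):

```python
import urllib.error
import urllib.request

def instance_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if the server at base_url answers HTTP at all."""
    try:
        urllib.request.urlopen(base_url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, even if with an error status - it is up
        return True
    except (urllib.error.URLError, OSError):
        return False

# Prints True once a Duragraph instance is listening locally
print(instance_reachable("http://127.0.0.1:8080"))
```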