Monitor LangGraph Agents with TraceHawk
Add full observability to your LangGraph workflows in 2 lines — every node execution, LLM call, and tool invocation traced automatically.
Install
pip install tracehawk
Initialize
Add this before your graph definition:
import tracehawk
tracehawk.init(api_key="ao-...")
Example — LangGraph agent with full tracing
agent.py
import tracehawk
tracehawk.init(api_key="ao-...")
from langgraph.graph import StateGraph, END
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
from typing import TypedDict, Annotated
import operator
class AgentState(TypedDict):
    messages: Annotated[list, operator.add]
llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")
def call_model(state: AgentState):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: AgentState):
    last_message = state["messages"][-1]
    # No tools are bound to the model in this minimal example, so this
    # always falls through to END; add a "tools" node before routing to it.
    if last_message.tool_calls:
        return "tools"
    return END
# Build graph — all nodes traced automatically
graph = StateGraph(AgentState)
graph.add_node("agent", call_model)
graph.set_entry_point("agent")
graph.add_conditional_edges("agent", should_continue)
app = graph.compile()
# Run — trace appears in TraceHawk dashboard
result = app.invoke({
    "messages": [HumanMessage(content="Summarize the latest AI news")]
})
What you see in TraceHawk
- ✓ Every graph node as a separate span with execution time
- ✓ LLM calls with prompt, completion, token count, and cost
- ✓ Conditional edge decisions shown in the decision tree view
- ✓ Full trace waterfall across multi-step workflows
- ✓ Cost per graph run with model-level breakdown
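The `Annotated[list, operator.add]` annotation on `messages` is what lets each node return only its new messages: LangGraph merges a node's partial return into the shared state with the declared reducer. A stdlib-only sketch of that merge step (the state values are made-up sample messages, not anything TraceHawk produces):

```python
import operator

# Existing graph state, and a node's partial update mirroring the
# {"messages": [response]} return from call_model above.
state = {"messages": ["user: Summarize the latest AI news"]}
update = {"messages": ["assistant: Here is a summary..."]}

# LangGraph applies the reducer declared on the state key; for
# Annotated[list, operator.add] that is plain list concatenation,
# so updates append rather than overwrite.
merged = operator.add(state["messages"], update["messages"])
print(merged)
```

Because the reducer appends, every node's output accumulates into one conversation history, which is also what lets each node show up as its own span in the trace.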
With MCP tools
MCP tool calls are traced as first-class spans — server name, tool name, parameters, result, and latency all captured.
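As a rough illustration of what one such span carries (the field names below are hypothetical, for illustration only, and are not TraceHawk's actual span schema):

```python
# Hypothetical sketch of the data captured per MCP tool-call span.
# Field names are illustrative; they are not TraceHawk's real schema.
mcp_span = {
    "server": "filesystem",              # which MCP server handled the call
    "tool": "list_directory",            # tool name as exposed by the server
    "params": {"path": "/tmp"},          # arguments the agent passed
    "result_preview": "file_a.txt ...",  # truncated tool result
    "latency_ms": 42,                    # wall-clock duration of the call
}
```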
agent_mcp.py
import tracehawk
tracehawk.init(api_key="ao-...")
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic
import asyncio

async def run():
    async with MultiServerMCPClient({
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
        },
        "github": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-github"]
        }
    }) as client:
        tools = await client.get_tools()
        model = ChatAnthropic(model="claude-3-5-sonnet-20241022")
        agent = create_react_agent(model, tools)
        # MCP tool calls traced with server name, tool name, latency, params
        result = await agent.ainvoke({
            "messages": [{"role": "user", "content": "List files in /tmp"}]
        })
        return result

asyncio.run(run())
MCP Analytics
When you use MCP servers, TraceHawk automatically populates the MCP Analytics dashboard — per-server call frequency, error rate, p95 latency, and a tool heatmap. No extra configuration needed.
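For intuition, a p95 latency figure like the one on the dashboard can be computed from raw span durations with the nearest-rank method, one common percentile definition (TraceHawk's exact method may differ; the durations below are made-up sample values in milliseconds):

```python
# Nearest-rank p95 over a batch of made-up span durations (ms).
durations_ms = [12, 15, 11, 240, 18, 14, 13, 19, 16, 17,
                22, 20, 25, 30, 13, 14, 18, 21, 500, 15]

def p95(samples: list[float]) -> float:
    """95th percentile via nearest rank on the sorted samples."""
    ordered = sorted(samples)
    index = int(0.95 * (len(ordered) - 1))
    return ordered[index]

print(p95(durations_ms))  # 240 for this sample
```

Note how a p95 of 240 ms sits far above the typical ~15 ms calls here: tail percentiles surface the slow outliers that averages hide, which is why the dashboard reports p95 rather than the mean.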
Ready to ship?
Free tier — 50K spans/month. No credit card required.