Monitor CrewAI with TraceHawk
Trace multi-agent CrewAI workflows — every agent, task, and tool call with full cost attribution across the entire crew.
Install

```shell
pip install tracehawk
```

Initialize
Add this at the top of your entry file, before any CrewAI imports:
```python
import tracehawk

tracehawk.init(api_key="ao-...")
```

Example — Research crew with full tracing
The entire crew run becomes one trace. Each agent's LLM calls, tool invocations, and task handoffs appear as child spans.
crew.py

```python
import tracehawk

tracehawk.init(api_key="ao-...")

from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool, WebsiteSearchTool

search_tool = SerperDevTool()
web_tool = WebsiteSearchTool()

# Agents — each traced individually
researcher = Agent(
    role="Senior Research Analyst",
    goal="Find and analyze the latest AI industry trends",
    backstory="Expert at finding accurate information from multiple sources",
    tools=[search_tool, web_tool],
    verbose=True,
)

writer = Agent(
    role="Content Writer",
    goal="Write clear, engaging summaries",
    backstory="Skilled at turning research into readable reports",
    verbose=True,
)

# Tasks
research_task = Task(
    description="Research the top 5 AI developments this week",
    expected_output="Bullet list of key findings with sources",
    agent=researcher,
)

write_task = Task(
    description="Write a 300-word summary of the research findings",
    expected_output="Professional summary suitable for a newsletter",
    agent=writer,
)

# Crew — entire run traced as one trace
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
)

result = crew.kickoff()
```

What you see in TraceHawk
- ✓ Entire crew run as one trace with agent-level spans
- ✓ Each agent's LLM calls: prompt, response, token count, cost
- ✓ Tool calls per agent: queries, results, latency
- ✓ Task handoffs between agents, with the context passed
- ✓ Total cost for the full crew run
- ✓ Decision tree: which agent called which tool in what order
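For intuition, the decision-tree view is essentially the trace's spans replayed in chronological order. A toy sketch of that reconstruction, using made-up span records (the field names here are illustrative, not TraceHawk's actual export schema):

```python
# Illustrative span records as a trace export might contain them.
# "start" is a timestamp ordering; fields are assumptions for this sketch.
spans = [
    {"start": 2, "agent": "Content Writer", "name": "llm_call"},
    {"start": 0, "agent": "Senior Research Analyst", "name": "SerperDevTool"},
    {"start": 1, "agent": "Senior Research Analyst", "name": "WebsiteSearchTool"},
]

def call_order(spans):
    """Sort spans by start time and emit 'agent -> call' steps."""
    ordered = sorted(spans, key=lambda s: s["start"])
    return [f"{s['agent']} -> {s['name']}" for s in ordered]

for step in call_order(spans):
    print(step)
```

The same group-and-sort idea generalizes to nesting: sorting by start time within each parent span yields the full tree, not just the flat sequence.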
With MCP tools in CrewAI
MCP server tool calls are traced as first-class spans with server name, tool name, and full parameter capture. They also populate the MCP Analytics dashboard.
crew_mcp.py

```python
import tracehawk

tracehawk.init(api_key="ao-...")

from crewai import Agent, Task, Crew
from crewai_tools import MCPServerAdapter

# MCP server calls traced with server name and tool details
with MCPServerAdapter({"url": "http://localhost:3000/sse"}) as mcp_tools:
    analyst = Agent(
        role="Data Analyst",
        goal="Query and analyze the database",
        backstory="Analyst experienced with SQL and reporting",
        tools=mcp_tools,  # the adapter yields the tool list as a context manager
        verbose=True,
    )

    task = Task(
        description="Analyze sales data for Q1 2025",
        expected_output="Summary with key metrics and trends",
        agent=analyst,
    )

    crew = Crew(agents=[analyst], tasks=[task])
    result = crew.kickoff()

# All MCP tool calls appear in the TraceHawk MCP Analytics dashboard
```

Cost attribution across agents
When multiple agents share a task, TraceHawk breaks down cost per agent — so you can see exactly which agent is consuming budget. Useful for optimizing prompts or switching to cheaper models for lower-value tasks.
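Conceptually, the per-agent breakdown is a group-by over the trace's spans: each LLM-call span carries an agent name and a cost, and summing per agent yields the attribution. A minimal sketch with made-up span data (the fields are illustrative, not TraceHawk's actual export schema):

```python
from collections import defaultdict

# Illustrative spans from a crew run; field names are assumptions.
spans = [
    {"agent": "Senior Research Analyst", "type": "llm_call", "cost_usd": 0.042},
    {"agent": "Senior Research Analyst", "type": "tool_call", "cost_usd": 0.0},
    {"agent": "Content Writer", "type": "llm_call", "cost_usd": 0.018},
]

def cost_by_agent(spans):
    """Sum span costs per agent to see which agent consumes the budget."""
    totals = defaultdict(float)
    for span in spans:
        totals[span["agent"]] += span["cost_usd"]
    return dict(totals)

print(cost_by_agent(spans))
# → {'Senior Research Analyst': 0.042, 'Content Writer': 0.018}
```

With a breakdown like this, a writer agent that dominates cost is an obvious candidate for a cheaper model, while a high-value research agent can keep the stronger one.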
Ready to ship?
Free tier — 50K spans/month. No credit card required.