TypeScript · Next.js · Node.js · Edge
Monitor Vercel AI SDK with TraceHawk
Trace every generateText, streamText, and tool call in your Vercel AI SDK app. Works with Next.js, Node.js, and Edge runtime.
Install
```sh
npm install @tracehawk/sdk
```

Initialize
For Next.js App Router, add initialization in instrumentation.ts — runs once on server startup:
instrumentation.ts

```ts
// instrumentation.ts
export async function register() {
  const { init } = await import("@tracehawk/sdk");
  init({ apiKey: process.env.TRACEHAWK_API_KEY! });
}
```

For Node.js scripts, import and call at the top of your entry file:
```ts
import { init } from "@tracehawk/sdk";
init({ apiKey: "ao-..." });
```

Example — generateText with tools
agent.ts

```ts
import { init } from "@tracehawk/sdk";
init({ apiKey: process.env.TRACEHAWK_API_KEY! });

import { generateText, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const result = await generateText({
  model: anthropic("claude-3-5-sonnet-20241022"),
  tools: {
    getWeather: tool({
      description: "Get weather for a location",
      parameters: z.object({ location: z.string() }),
      execute: async ({ location }) => {
        return { temperature: 22, condition: "sunny" };
      },
    }),
  },
  prompt: "What is the weather in Vienna?",
});

// Full trace: LLM call → tool decision → tool execution → final response
// All visible in TraceHawk with cost breakdown
```

Example — streamText in Next.js route
app/api/chat/route.ts

```ts
// app/api/chat/route.ts
import { init } from "@tracehawk/sdk";
init({ apiKey: process.env.TRACEHAWK_API_KEY! });

import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: anthropic("claude-3-5-sonnet-20241022"),
    messages,
    onFinish({ usage, finishReason }) {
      // TraceHawk automatically captures token usage here
    },
  });

  return result.toDataStreamResponse();
}
```

What you see in TraceHawk
- ✓ Every generateText / streamText call as a span with latency
- ✓ Streaming token usage and cost captured on finish
- ✓ Tool call decisions with parameters and results
- ✓ Per-user / per-session cost attribution (pass a user ID in headers)
- ✓ Latency breakdown: model response time vs tool execution time
Per-user attribution
Pass an X-User-Id header from your frontend and TraceHawk will group costs and traces by user automatically — no extra SDK calls needed.
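A minimal sketch of what this looks like on the frontend, assuming the /api/chat route shown above; the helper name and the userId value are illustrative, not part of the TraceHawk SDK:

```typescript
// Build the fetch options for the chat route, attaching per-user attribution.
// chatRequestInit is a hypothetical helper; only the X-User-Id header matters.
function chatRequestInit(userId: string, messages: unknown[]): RequestInit {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-User-Id": userId, // TraceHawk groups traces and costs by this value
    },
    body: JSON.stringify({ messages }),
  };
}

// Usage:
// await fetch("/api/chat", chatRequestInit("user-123", [{ role: "user", content: "Hi" }]));
```

Because attribution rides on a plain request header, it works the same from any client — browser fetch, React Server Actions, or a mobile app — with no client-side SDK.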
Ready to ship?
Free tier — 50K spans/month. No credit card required.