OTEL Dual-Write
TraceHawk sits alongside your existing observability stack — it doesn't replace it. If you already send spans to Datadog, Grafana Cloud, Honeycomb, or Jaeger, you can keep doing that and add TraceHawk in parallel. One line of config. Zero changes to your agent code.
```
Your agent (Python / TypeScript)
        │
        │ OTLP/HTTP
        ▼
TraceHawk ingest ──── stores spans ──── TraceHawk UI
        │
        │ re-exports raw OTLP payload
        ▼
Your existing tool (Datadog / Grafana / Jaeger / ...)
```

The re-export happens inside the TraceHawk ingest route — it forwards the raw OTLP payload to your configured destination concurrently with writing to TimescaleDB. If the destination is unreachable, TraceHawk still accepts the spans — the forward is fire-and-forget.
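The ingest route itself is internal to TraceHawk, but the fire-and-forget pattern is easy to picture. Here is a minimal asyncio sketch; `post_otlp`, `forward`, and `ingest` are hypothetical names, not TraceHawk's real functions, and the stubbed POST just models latency:

```python
import asyncio

async def post_otlp(endpoint: str, payload: bytes) -> None:
    """Stub for the real HTTP POST; a slow endpoint models an outage."""
    await asyncio.sleep(10 if "down" in endpoint else 0.01)

async def forward(endpoint: str, payload: bytes, timeout: float = 5.0) -> str:
    """Forward the raw OTLP payload to one destination, never raising."""
    try:
        await asyncio.wait_for(post_otlp(endpoint, payload), timeout=timeout)
        return "reexport_ok"
    except asyncio.TimeoutError:
        return "reexport_timeout"
    except Exception:
        return "reexport_failed"

async def ingest(payload: bytes, destinations: list[str]) -> str:
    """Accept spans; forwarding is fire-and-forget and never blocks this path."""
    for dest in destinations:
        # Concurrent, unawaited: a dead destination cannot delay the response.
        asyncio.create_task(forward(dest, payload))
    # (the real route writes the spans to TimescaleDB here)
    return "accepted"
```

The key property is that `ingest` returns without awaiting the forwards, which is why a down destination never causes span loss on the TraceHawk side.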
Option 1: Server-side re-export (recommended)
Set environment variables on your TraceHawk deployment. No SDK changes required. Works for all agents regardless of language or framework.
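The variable names below are the real ones, but the resolution rules (comma-separated endpoints, shared headers, index-suffixed per-destination overrides) are easiest to see in code. A hedged sketch of how that resolution could work; the actual parser is internal to TraceHawk and may differ:

```python
def parse_headers(raw: str) -> dict[str, str]:
    """Parse 'Key=Value,Key2=Value2'; values may themselves contain '='."""
    headers = {}
    for pair in filter(None, raw.split(",")):
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers

def resolve_destinations(env: dict[str, str]) -> list[tuple[str, dict[str, str]]]:
    """Return (endpoint, headers) pairs from the OTEL_REEXPORT_* variables."""
    raw = env.get("OTEL_REEXPORT_ENDPOINT", "")
    endpoints = [e.strip() for e in raw.split(",") if e.strip()]
    shared = parse_headers(env.get("OTEL_REEXPORT_HEADERS", ""))
    result = []
    for i, endpoint in enumerate(endpoints):
        # Index-suffixed headers are merged over the shared ones per destination.
        override = parse_headers(env.get(f"OTEL_REEXPORT_HEADERS_{i}", ""))
        result.append((endpoint, {**shared, **override}))
    return result
```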
Single destination

```shell
# Forward to Datadog
OTEL_REEXPORT_ENDPOINT=https://api.datadoghq.com/api/v0.2/traces
OTEL_REEXPORT_HEADERS=DD-API-KEY=your_datadog_api_key
```

Multiple destinations
```shell
# Comma-separated endpoints
OTEL_REEXPORT_ENDPOINT=https://api.datadoghq.com/api/v0.2/traces,https://otlp-gateway-prod-us-east-0.grafana.net/otlp/v1/traces

# Shared headers (applied to all)
OTEL_REEXPORT_HEADERS=X-Source=tracehawk

# Per-destination headers (index-suffixed, override shared)
OTEL_REEXPORT_HEADERS_0=DD-API-KEY=your_dd_key
OTEL_REEXPORT_HEADERS_1=Authorization=Basic your_grafana_base64
```

Common destinations
Datadog

```shell
OTEL_REEXPORT_ENDPOINT=https://api.datadoghq.com/api/v0.2/traces
OTEL_REEXPORT_HEADERS=DD-API-KEY=<your_api_key>

# EU region:
# OTEL_REEXPORT_ENDPOINT=https://api.datadoghq.eu/api/v0.2/traces
```

Grafana Cloud (Tempo)
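The Basic credential is just the base64 encoding of `<instance_id>:<grafana_token>`. A quick way to build it (the arguments are placeholders for your own values):

```python
import base64

def grafana_basic_auth(instance_id: str, token: str) -> str:
    """Build the Authorization header value for Grafana Cloud's OTLP gateway."""
    credentials = f"{instance_id}:{token}".encode()
    return "Basic " + base64.b64encode(credentials).decode()
```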
```shell
OTEL_REEXPORT_ENDPOINT=https://otlp-gateway-prod-us-east-0.grafana.net/otlp/v1/traces
# Authorization = "Basic " + base64("<instance_id>:<grafana_token>")
OTEL_REEXPORT_HEADERS=Authorization=Basic <base64_encoded_credentials>
```

Honeycomb
```shell
OTEL_REEXPORT_ENDPOINT=https://api.honeycomb.io/v1/traces
OTEL_REEXPORT_HEADERS=x-honeycomb-team=<your_api_key>
```

Jaeger (self-hosted)
```shell
# Jaeger v1.35+ supports OTLP/HTTP on port 4318
OTEL_REEXPORT_ENDPOINT=http://your-jaeger-host:4318/v1/traces
# No auth headers needed for local Jaeger
```

Option 2: Client-side dual-write
If you prefer to keep all export config in your agent code rather than on the server, add a second BatchSpanProcessor to your OTEL tracer provider. This works with any OTEL SDK regardless of whether you use TraceHawk's SDK.
Python

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider()

# Primary: TraceHawk
provider.add_span_processor(BatchSpanProcessor(
    OTLPSpanExporter(
        endpoint="https://app.tracehawk.com/api/otel/v1/traces",
        headers={"Authorization": "Bearer th_your_api_key"},
    )
))

# Secondary: your existing Datadog (or any OTLP destination)
provider.add_span_processor(BatchSpanProcessor(
    OTLPSpanExporter(
        endpoint="https://api.datadoghq.com/api/v0.2/traces",
        headers={"DD-API-KEY": "your_dd_api_key"},
    )
))

trace.set_tracer_provider(provider)
```

TypeScript / Node.js
```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const provider = new NodeTracerProvider();

// Primary: TraceHawk
provider.addSpanProcessor(new BatchSpanProcessor(
  new OTLPTraceExporter({
    url: "https://app.tracehawk.com/api/otel/v1/traces",
    headers: { Authorization: "Bearer th_your_api_key" },
  })
));

// Secondary: Datadog (or any OTLP-compatible backend)
provider.addSpanProcessor(new BatchSpanProcessor(
  new OTLPTraceExporter({
    url: "https://api.datadoghq.com/api/v0.2/traces",
    headers: { "DD-API-KEY": "your_dd_api_key" },
  })
));

provider.register();
```

What you get
- ✓ TraceHawk: MCP server analytics, per-tool breakdown, agent decision tree, cost budgets
- ✓ Datadog / Grafana: infrastructure correlation, APM, existing dashboards and alerts
- ✓ No lock-in: OTEL standard format, switch or add destinations anytime
- ✓ Zero latency impact: re-export is async and never blocks the ingest response
- ✓ Failure-safe: if the destination is down, TraceHawk still accepts your spans
Monitoring re-export health
The ingest route logs structured JSON for every re-export attempt. You can grep your Railway / Docker logs for these events:
```jsonc
// Success
{ "event": "reexport_ok", "endpoint": "https://api.datadoghq.com/...", "status": 200, "durationMs": 84 }

// Non-2xx from destination (spans still saved to TraceHawk)
{ "event": "reexport_failed", "endpoint": "https://api.datadoghq.com/...", "status": 401, "durationMs": 61 }

// Network error / timeout
{ "event": "reexport_timeout", "endpoint": "https://...", "error": "TimeoutError", "durationMs": 5003 }
```

Errors are also captured in Sentry as informational events (not fatal errors), so you can set up a Sentry alert if re-export is consistently failing.
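Because the events are one JSON object per line, they are easy to tally without grep. A small sketch that counts outcomes from captured log lines (only the `event` field shown above is assumed; any other fields are ignored):

```python
import json
from collections import Counter

def summarize_reexport(log_lines: list[str]) -> Counter:
    """Count re-export outcomes from structured JSON log lines."""
    counts = Counter()
    for line in log_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON log noise
        event = record.get("event", "")
        if event.startswith("reexport_"):
            counts[event] += 1
    return counts
```

Feeding it a log capture gives you a quick health summary, e.g. how many `reexport_failed` events occurred versus `reexport_ok`.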
FAQ
Does the destination receive the same payload as TraceHawk?
Yes — the raw OTLP/JSON body is forwarded verbatim. The destination sees exactly what TraceHawk received: all spans, resource attributes, and semantic conventions.
Can I filter which spans are forwarded?
Not yet — all spans in the batch are forwarded. Per-span filtering (e.g., forward only LLM spans to Datadog) is on the roadmap. For now, use sampling in your SDK before sending to TraceHawk if you need to reduce forwarded volume.
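If volume is the concern, head sampling in your SDK works today. The idea behind OTEL's trace-ID ratio sampler can be sketched in pure Python; this illustrates the technique only and is not the SDK's actual implementation:

```python
def should_sample(trace_id: int, ratio: float) -> bool:
    """Deterministically keep roughly `ratio` of traces by 128-bit trace ID.

    Every service in a distributed trace makes the same decision for the
    same ID, so sampled traces stay complete end to end.
    """
    bound = int(ratio * (1 << 64))
    # Compare the low 64 bits of the trace ID against the ratio bound.
    return (trace_id & ((1 << 64) - 1)) < bound
```

In a real SDK you would configure this via the sampler option on your tracer provider rather than writing it yourself.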
What's the timeout?
5 seconds per destination. If the destination doesn't respond within 5 seconds, the forward is abandoned and logged as reexport_timeout. Your spans are still written to TraceHawk.
Is this available on the free tier?
Yes. OTEL re-export is available on all plans including the free 50K span/month tier. There's no additional charge for forwarding.
Ready to ship?
Free tier — 50K spans/month. No credit card required.