LLM observability for devs who hate the "success tax"

You built something cool with LLMs. It worked. Then the observability bill arrived.

Stop paying $500/month just to see which step in your RAG pipeline is slow. tracelayer.dev is the OTel-native, flat-fee alternative to the "Enterprise" bloat.

~/project
$ export OPENAI_BASE_URL="https://app.tracelayer.dev/v1"
$ python my_app.py
# Traces flowing. Dashboard at https://app.tracelayer.dev/ui
# No SDK changes. No rewrite. That's it.

Built out of genuine frustration.

I'm a developer. I like building things that scale. What I don't like is getting a "success tax" invoice every time my app actually gets used.

Most LLM observability tools follow a predictable path:

  1. They bait you with a "free" tier.
  2. They force you to wrap your entire codebase in a proprietary SDK.
  3. They charge per-trace, so as you grow, your margins vanish.
"I was paying more to watch my app than to run it."

That's stupid. I wanted a tool that didn't infect my code, didn't require a PhD in infrastructure to self-host, and didn't bankrupt me when I hit Product Hunt. So I built tracelayer.dev.

One environment variable. That's it.

You don't need a new SDK. You don't need to rewrite your chains. If your code talks to OpenAI, Anthropic, or LiteLLM — you're already 90% there.

~/project — point your provider at tracelayer.dev
# OpenAI
$ export OPENAI_BASE_URL="https://app.tracelayer.dev/v1"

# Anthropic
$ export ANTHROPIC_BASE_URL="https://app.tracelayer.dev/anthropic"

# Self-hosted
$ pip install tracelayer && tracelayer start
$ export OPENAI_BASE_URL="http://localhost:4999/openai"

No proprietary wrappers. No SDK bloat. Just clean, OpenTelemetry-native traces flowing into a UI that actually makes sense at 2 AM.
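Why one variable is enough: OpenAI-style clients assemble every request URL from `OPENAI_BASE_URL`, so repointing it reroutes traffic through the proxy. A minimal sketch of that mechanism (the helper function is illustrative, not part of any SDK):

```python
import os

# Pointing the base URL at the proxy reroutes every request: the
# observability layer sits on the wire, not in your code.
os.environ["OPENAI_BASE_URL"] = "https://app.tracelayer.dev/v1"

def chat_completions_url() -> str:
    """Build the endpoint URL the way OpenAI-style clients do."""
    base = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
    return f"{base.rstrip('/')}/chat/completions"

print(chat_completions_url())
# -> https://app.tracelayer.dev/v1/chat/completions
```

Because the rewrite happens at the URL layer, your application code never learns the proxy exists.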

trace.raw

Real traces, no fluff

See the final wire-format payload. Compare the prompt you sent vs. what actually went to the API. Know why that 2% of requests return junk.
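To make the diff concrete, here is a sketch using Python's stdlib `difflib` on two hypothetical payloads: the request your code intended versus the one that hit the wire after a middleware swapped the model and injected a system message. Both payloads are made up for illustration.

```python
import difflib
import json

# The prompt your code intended vs. the payload that actually hit the API
# (both payloads are made-up examples).
intended = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Summarize the report."}],
}
wire = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "Reply in French."},
        {"role": "user", "content": "Summarize the report."},
    ],
}

# A unified diff makes the silent mutation obvious at a glance.
diff_text = "\n".join(difflib.unified_diff(
    json.dumps(intended, indent=2).splitlines(),
    json.dumps(wire, indent=2).splitlines(),
    fromfile="intended", tofile="wire", lineterm="",
))
print(diff_text)
```

That injected "Reply in French." line is exactly the kind of thing that explains a mysterious 2% of junk responses.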

agent.dag

Agent loop visualization

Interactive DAGs for complex agentic loops. Stop clicking through flat logs to find where the tool call failed. See the whole chain at once.
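Under the hood, a DAG view is just a reconstruction of the parent links that flat logs hide. A toy sketch with made-up span ids and names:

```python
from collections import defaultdict

# Flat spans as a trace exporter might emit them (ids and names are
# illustrative): each span knows only its parent.
spans = [
    {"id": "a1", "parent": None, "name": "agent.loop"},
    {"id": "b2", "parent": "a1", "name": "llm.plan"},
    {"id": "c3", "parent": "a1", "name": "tool.search"},
    {"id": "d4", "parent": "c3", "name": "llm.summarize"},
]

# Index spans by parent so the tree can be walked top-down.
children = defaultdict(list)
for span in spans:
    children[span["parent"]].append(span)

def render(parent_id=None, depth=0):
    """Depth-first walk that rebuilds the call tree flat logs flatten away."""
    lines = []
    for span in children[parent_id]:
        lines.append("  " * depth + span["name"])
        lines.extend(render(span["id"], depth + 1))
    return lines

print("\n".join(render()))
```

One pass over the spans and the failed tool call is a visible branch instead of line 4,017 of a log file.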

cost.tag

Cost attribution

Tag your requests. Know exactly which user or feature is burning your credits. Per-team chargeback reports via CSV export.
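A chargeback report is just an aggregation over tagged traces. A sketch with illustrative records (the tag schema and cost figures are assumptions for the example, not tracelayer.dev's actual format):

```python
from collections import Counter

# Trace records with cost tags (tag names and numbers are illustrative).
traces = [
    {"tags": {"user": "acme", "feature": "rag-search"}, "cost_usd": 0.012},
    {"tags": {"user": "acme", "feature": "summarize"}, "cost_usd": 0.031},
    {"tags": {"user": "globex", "feature": "rag-search"}, "cost_usd": 0.007},
]

def cost_by(tag_key, records):
    """Roll up spend per tag value: the basis of a chargeback report."""
    totals = Counter()
    for rec in records:
        totals[rec["tags"].get(tag_key, "untagged")] += rec["cost_usd"]
    return dict(totals)

print(cost_by("user", traces))
print(cost_by("feature", traces))
```

Group by `user` for per-team chargeback, by `feature` to find which part of the product is burning credits.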

privacy.vpc

Privacy-first

Run it in your VPC. We don't want your data. We don't even want to see your prompts. Everything stays on-prem.

otel.std

OTel-native

Industry-standard OpenTelemetry. If you ever leave tracelayer.dev, your instrumentation stays with you. Zero vendor lock-in.
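For a sense of what portability means on the wire, here is a span sketched as a plain dict using attribute keys in the style of OpenTelemetry's (incubating) GenAI semantic conventions; the values are made up:

```python
# A span shaped with OpenTelemetry gen_ai.* attribute keys (values are
# made up). Nothing here is proprietary, so the same span can be shipped
# to any OTLP-compatible backend unchanged.
span = {
    "name": "chat gpt-4o",
    "attributes": {
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4o",
        "gen_ai.usage.input_tokens": 412,
        "gen_ai.usage.output_tokens": 97,
    },
}

print(span["name"], "->", span["attributes"]["gen_ai.request.model"])
```

Standard attribute names are the lock-in escape hatch: switch backends and your instrumentation keeps meaning the same thing.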

data.own

Yours to export

CSV, JSON, SQLite dump. Schedule automatic nightly backups with one flag. Your data, your infrastructure, your schedule.
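A nightly SQLite snapshot needs nothing exotic: Python's stdlib `sqlite3` exposes SQLite's online backup API, which copies a live database in one call. A self-contained sketch (in-memory databases and the table name stand in for real paths and the real schema):

```python
import sqlite3

# Source database with one illustrative trace row.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE traces (id TEXT, cost_usd REAL)")
src.execute("INSERT INTO traces VALUES ('a1', 0.012)")
src.commit()

# Snapshot destination; use a dated file path for a real nightly backup.
dst = sqlite3.connect(":memory:")
src.backup(dst)  # consistent copy even while the source is in use

count = dst.execute("SELECT COUNT(*) FROM traces").fetchone()[0]
print(f"backed up {count} trace(s)")
```

Because the snapshot is plain SQLite, restoring it anywhere is just opening the file.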

r/LLMDevs 11:47 PM

"LangSmith is great until you see the bill. I was paying $500/mo to see a slow RAG step. It felt like a tax on being successful. I just want a trace that doesn't cost a fortune."

HN 2:13 AM

"I'm tired of wrapping my code in proprietary SDKs. If I want to switch, I'm stuck. tracelayer.dev being OTel-native is the only reason I tried it."

Discord 4:08 AM

"I don't care about average latency charts. I care about why 2% of my requests return junk. I need a diff of the prompt vs. the system instructions. tracelayer.dev actually shows me that."

$69/month for the cloud version. That's it.

No per-trace fees. No "seats" pricing. No surprises at the end of the month.

Self-Hosted
Free forever
  • ✓ Unlimited traces
  • ✓ Full dashboard + exports
  • ✓ Agent loop visualization
  • ✓ OTel-native proxy
  • ✓ Community support
  • Your infrastructure
Self-host — GitHub ↗

Need >1M traces/mo? Email us for enterprise pricing.

Get early access to the cloud version.

We'll reach out when your spot is ready. No spam, ever.