How are you monitoring AI token costs in production?
Hi everyone! 👋
I’ve seen many teams running LLMs in production, but almost nobody talks about how they actually track token costs.
Output tokens can cost 3–5x more than input tokens, and without visibility into usage, it’s easy to blow your budget without noticing.
I’m building an MVP called LLM Cost Radar to solve this: a tool that shows cost per model, cost per feature, daily spend, and usage spikes, all driven by events your app sends to a single /ingest endpoint.
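To make the event model concrete, here's a minimal sketch of what reporting one LLM call might look like. The field names, the `build_event` helper, and the per-million-token prices are all my assumptions for illustration, not LLM Cost Radar's actual schema or real provider pricing.

```python
import json

# Assumed prices in USD per 1M tokens (illustrative, not real pricing)
PRICING = {"gpt-4o": {"input": 2.50, "output": 10.00}}

def build_event(model, feature, input_tokens, output_tokens):
    """Build one cost event for a single LLM call (hypothetical schema)."""
    p = PRICING[model]
    cost = (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
    return {
        "model": model,
        "feature": feature,          # which product feature made the call
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "cost_usd": round(cost, 6),
    }

event = build_event("gpt-4o", "summarize", 1200, 400)
print(json.dumps(event))
# The event would then be POSTed to the ingest endpoint, e.g.:
# requests.post("https://llmcostradar.com/ingest", json=event)
```

Tagging each event with a `feature` field is what makes cost-per-feature breakdowns possible later; the server only needs to aggregate.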
But the big question for you all:
How is your team monitoring AI / token costs today?
I’d love to learn what works, what doesn’t, and which metrics really matter for teams using LLMs in production.
https://llmcostradar.com/