r/ClaudeCode
by gkarthi280
New
Anyone monitoring their Claude Code workflows and usage?
8 points
5 comments
100% upvoted
Content
I’ve been using Claude Code for more complex coding workflows recently, and one thing I hit pretty quickly was a lack of visibility into what’s happening during a session.
Once workflows get tool-heavy (file reads/writes, searches, diffs), debugging gets hard:
* Where is time actually going?
* Which tools are being called the most?
* How many tokens are burned on planning vs execution?
* Where do errors or retries happen?
To get better insight, I instrumented Claude Code with OpenTelemetry and exported traces to an OTEL-compatible backend (SigNoz in my case).
This gave me metrics for things like Claude Code tool calls, latency, user decisions, token usage, and cost over time.
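For reference, enabling the export mostly comes down to setting a few environment variables before launching the CLI. Here’s a minimal Python sketch of what that looks like; the variable names follow the setup guide linked at the end of the post, while the endpoint and protocol are placeholders for a local OTEL-compatible backend (in practice you’d normally just export these in your shell):

```python
# Minimal sketch: launch Claude Code with OpenTelemetry export turned on.
# Variable names follow the Claude Code monitoring docs; the endpoint and
# protocol below are placeholders for a local OTEL-compatible backend.
import os
import subprocess

env = dict(
    os.environ,
    CLAUDE_CODE_ENABLE_TELEMETRY="1",      # enable OTEL export
    OTEL_METRICS_EXPORTER="otlp",          # token/cost/session metrics
    OTEL_LOGS_EXPORTER="otlp",             # tool-call and other event logs
    OTEL_EXPORTER_OTLP_PROTOCOL="grpc",    # or "http/protobuf"
    OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317",  # your collector/backend
)

# Start an interactive session; telemetry flows to the configured endpoint.
subprocess.run(["claude"], env=env)
```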
I also threw together a small dashboard to track things like:
* Token usage
* Users, sessions, and conversations
* Model distribution
* Tool call distribution
Dashboard screenshot: https://preview.redd.it/x7mhknuhalbg1.png?width=2904&format=png&auto=webp&s=e25cd7c1f9916dce8456a9806d06f2d2f80350b7
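If you want to poke at the raw export before standing up a full backend, a toy OTLP/HTTP sink is enough to see what’s in the stream. The sketch below tallies token usage per model; the metric name (`claude_code.token.usage`) and the `model` attribute are assumptions based on the documented Claude Code metrics, so check your own export for the exact names:

```python
# Toy OTLP/HTTP sink that tallies Claude Code token usage per model.
# Requires the `opentelemetry-proto` package. Run Claude Code with
# OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf and
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 to send data here.
from collections import defaultdict
from http.server import BaseHTTPRequestHandler, HTTPServer

from opentelemetry.proto.collector.metrics.v1.metrics_service_pb2 import (
    ExportMetricsServiceRequest,
)

totals = defaultdict(int)  # model name -> reported token count


class MetricsSink(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = ExportMetricsServiceRequest()
        req.ParseFromString(body)  # OTLP/HTTP protobuf payload (assumed uncompressed)

        for rm in req.resource_metrics:
            for sm in rm.scope_metrics:
                for metric in sm.metrics:
                    if metric.name != "claude_code.token.usage":  # assumed metric name
                        continue
                    for dp in metric.sum.data_points:
                        attrs = {kv.key: kv.value.string_value for kv in dp.attributes}
                        model = attrs.get("model", "unknown")  # assumed attribute key
                        totals[model] += int(dp.as_int or dp.as_double)

        print(dict(totals))
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("localhost", 4318), MetricsSink).serve_forever()
```

This obviously isn’t a replacement for a real backend; it’s just a quick way to confirm which metrics and attributes are actually being emitted before building dashboards on top of them.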
Curious how others here think about observability for Claude Code:
* What metrics or signals do you track?
* How do you evaluate output quality over time?
* Are you tracking failures or partial successes?
If anyone’s interested, I followed the Claude Code + OpenTelemetry setup described here (it worked fine with SigNoz, but should apply to any OTEL-compatible backend):
[https://signoz.io/docs/claude-code-monitoring/](https://signoz.io/docs/claude-code-monitoring/)
Would love to hear how others approach visibility for AI-assisted coding, or which metrics you would personally add to this dashboard for better observability.