Code Agents Observability

AI coding agents like Claude Code generate a continuous stream of telemetry: tokens consumed, models invoked, tools called, code committed, and sessions started. Without visibility into this data, engineering teams operate blind—unable to attribute AI costs to teams or individuals, detect runaway usage, or understand whether agents are actually accelerating delivery.

Coralogix gives engineering leaders and platform teams a unified view of all coding agent activity—covering cost, usage, code impact, and user behavior across every agent, every developer, and every session.

How it works

```mermaid
flowchart LR
    A[Your Agent] --> B[OTLP]
    B --> C[Coralogix Ingress]
    C --> D[Dashboards + Alerts]
```

Each agent emits telemetry over OTLP. Configure the agent with your Coralogix endpoint and API key, and the data flows automatically. No wrappers, no custom SDKs, no additional instrumentation.
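As an illustrative sketch of what that configuration can look like, Claude Code reads the standard OpenTelemetry environment variables. The endpoint and API key below are placeholders; consult your agent's setup guide for the exact values and variable names:

```shell
# Illustrative sketch only -- exact variables come from your agent's setup guide.
# The endpoint and API key values below are placeholders.
export CLAUDE_CODE_ENABLE_TELEMETRY=1   # turn on Claude Code telemetry
export OTEL_METRICS_EXPORTER=otlp       # emit metrics over OTLP
export OTEL_LOGS_EXPORTER=otlp          # emit logs/events over OTLP
export OTEL_EXPORTER_OTLP_ENDPOINT="https://ingress.<your-coralogix-domain>:443"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-send-your-data-api-key>"
```

With these set, launching the agent in the same shell is enough for telemetry to reach Coralogix; no code changes are needed.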

What you need

  • A Coralogix account with a Send-Your-Data API key. To create one, in Coralogix navigate to Settings, then API Keys.
  • One or more coding agents installed and connected to Coralogix.

Connect an agent

Each agent has its own setup guide. Complete the setup before opening the dashboard.
| Agent | Setup guide |
| --- | --- |
| Claude Code | Claude Code integration with Coralogix |

Access Code Agents Observability

  1. In Coralogix, navigate to AI Center, then Code Agents Observability. Select the Code Agents tab.
  2. Use the time range picker to set the period you want to analyze.

The dashboard has four tabs: Overview, Cost, Usage, and Users.

Overview

The Overview tab gives you the health snapshot you need before diving deeper—active models, total spend, and session volume for the selected period.

Key Insights surfaces the most critical metrics at a glance:

  • Models in use — Primary models invoked across sessions.
  • Estimated total cost — Spend calculated from token usage and model pricing.
  • Total sessions — Number of Claude Code sessions in the selected period.
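To make the estimated-cost figure concrete, the calculation is token usage multiplied by per-token model pricing. A minimal sketch, using hypothetical token counts and hypothetical per-million-token prices (real prices vary by model):

```shell
# Hypothetical values for illustration only -- not actual model pricing.
input_tokens=1200000     # tokens sent to the model
output_tokens=350000     # tokens generated by the model
# Assume $3.00 per million input tokens and $15.00 per million output tokens.
awk -v it="$input_tokens" -v ot="$output_tokens" \
    'BEGIN { printf "estimated cost USD: %.2f\n", it/1e6*3.00 + ot/1e6*15.00 }'
```

The dashboard performs this aggregation per model and per user, which is what the Cost tab breaks down.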

The Cost summary widget displays a trend line alongside the counter, so you can see direction at a glance without switching tabs.

Cost

Gain a clear breakdown of where your AI spend is going—and who is driving it.

  • Model cost distribution — A doughnut chart showing cost share by model. Use this to identify which models account for the majority of spend and whether that matches your intended usage.
  • High-spending users — A ranked bar chart of users by estimated cost. Use this to detect outliers, verify that usage aligns with expectations, and prioritize conversations about responsible usage.
  • Activity — Session and request volume over time, giving context to the cost figures.
  • Code impact — Commits, pull requests, and AI suggestion acceptance rates correlated with cost. Use this to evaluate whether high-spend users are also driving proportional delivery output.
  • Productivity ratio — The ratio of accepted AI suggestions to total suggestions generated. A higher ratio indicates that the output Claude generates closely aligns with developer intent.
  • Tool calls — Breakdown of tools invoked by Claude during sessions (for example, file edits, shell commands, web searches), showing where Claude spends its execution budget.
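The productivity ratio above is a simple quotient of accepted suggestions over total suggestions. A minimal sketch with hypothetical counts:

```shell
# Hypothetical suggestion counts for illustration only.
accepted=42   # suggestions developers accepted
total=60      # suggestions the agent generated
awk -v a="$accepted" -v t="$total" 'BEGIN { printf "productivity ratio: %.2f\n", a/t }'
```

A ratio near 1.0 means most generated suggestions land; a falling ratio is worth investigating alongside the acceptance-rate widget on the Usage tab.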

Usage

Understand how Claude Code runs and what tangible code output it produces.

  • Session activity — Session count and request volume over the selected period.
  • Code impact — Commits, pull requests, and lines of code attributed to Claude Code sessions. Use this to correlate AI usage with real development output.
  • Acceptance rate — The percentage of AI-generated suggestions accepted by developers. A declining acceptance rate can indicate model drift or context quality issues.
  • Tool calls — Which tools Claude invoked most across sessions, helping you understand the operational patterns Claude follows.

Users

Identify your most active users and investigate individual session patterns.

  • Active users — Ranked list of users by estimated cost in the selected period.

Select any user to drill down into their session activity, token consumption, and code impact over time.