
AI Center FAQs

Answers to common questions about AI Center. For symptom-based fixes, see the Troubleshoot section on the relevant feature page — Getting started, Guardrails, and Code Agents.

General

How is AI Center different from Olly?

AI Center is the observability and governance layer for the AI applications you build and run in production. It instruments your large language model (LLM) calls, surfaces them in dashboards, applies guardrails and evaluations to outputs, and monitors AI usage across your organization.

Olly is the AI-native observability agent that helps your team investigate problems across your Coralogix data — logs, metrics, traces, alerts, and AI Center spans. AI Center is the product being observed; Olly is one of the tools you use to investigate it.

How is AI Center different from the Coralogix MCP server?

AI Center is a product. The Coralogix Model Context Protocol (MCP) server is an integration layer that exposes your Coralogix data — including AI Center spans — to external AI tools such as Claude Code or Cursor. They serve different purposes and can be used together.

Integrations

Are streaming responses captured?

Yes. Token counts and response content are captured for streaming LLM calls.
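
For example, a streamed call made through the OpenAI Python client is captured like any non-streamed call. This is a minimal sketch assuming the OpenAI integration; the model name and prompt are placeholders.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stream a completion. Instrumentation records the call, including the
# assembled response content and token counts.
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Explain tail sampling in one paragraph."}],
    stream=True,
)

for chunk in stream:
    # Print each content delta as it arrives; some chunks carry no content.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")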

Does instrumentation work with Azure OpenAI, Vertex AI, or self-hosted models?

It depends on the SDK your application uses to call the model:

  • Applications that call Vertex AI through the google-genai SDK are instrumented by the Gemini integration, because the instrumentation wraps the SDK rather than the transport.
  • Applications that call Azure OpenAI or self-hosted models work the same way: if your code uses a supported SDK (for example, the OpenAI Python client pointed at an Azure endpoint), instrumentation captures the calls, as in the sketch after this list.
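
If your application uses the OpenAI Python client, both setups look like the following sketch. The endpoint, API version, deployment name, and base URL are placeholders for your own values, not settings AI Center requires.

from openai import AzureOpenAI, OpenAI

# Azure OpenAI through the standard OpenAI Python client.
azure_client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder endpoint
    api_version="2024-06-01",                               # placeholder version
    api_key="...",
)
response = azure_client.chat.completions.create(
    model="my-gpt-4o-deployment",  # an Azure deployment name, not a model ID
    messages=[{"role": "user", "content": "Hello"}],
)

# A self-hosted model behind an OpenAI-compatible endpoint works the same
# way: point the client's base_url at your own server.
local_client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder for your server
    api_key="not-needed",
)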

Test in a non-production environment first to confirm spans appear in AI Center. For the current list of supported integrations, see Integrations for LLM observability.

Evaluations

Can I run evaluations on a sample of spans rather than every span?

Sampling for evaluations is on the roadmap but not yet available. Today, every span produced by an instrumented application is evaluated by all enabled evaluators for that application. Manage cost by scoping evaluators to specific applications rather than enabling them globally.

Can I report evaluation scores from an external evaluator into AI Center?

Native external evaluator support is on the roadmap. In the meantime, attach the score as an attribute on the LLM span before the span is exported. The span appears in AI Explorer, but the score attribute is not displayed there — query it in Spans Explorer and build custom dashboards on top of it. Spans with score attributes are billed only as standard trace ingestion; they do not count as an additional evaluator for AI Center pricing.
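
One way to attach the score from Python, assuming your application already uses OpenTelemetry: fetch the active span in the code path that wraps the LLM call and set the attribute there. The attribute key and value below are placeholders; choose a key that fits your own naming conventions.

from opentelemetry import trace

# Inside the instrumented LLM call path: get the active span and attach
# the external evaluator's score before the span is exported.
span = trace.get_current_span()
span.set_attribute("external_eval.score", 0.87)  # placeholder key and value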

Pricing

Does AI Center pricing include trace ingestion?

No. The Coralogix Unit calculations on the AI Center pricing page cover evaluator and guardrail processing only. The spans your instrumented applications send to Coralogix are billed separately, at standard trace ingestion rates.

How does coding agent activity affect my AI Center bill?

Each coding agent session emits OpenTelemetry data directly to your Coralogix endpoint. The signal type — logs, traces, or both — depends on the agent. That data is billed at the standard ingestion rate for the relevant signal.

Code Agents

Can I see the split between Claude Code input and output tokens?

Yes, through a custom dashboard. Use the following PromQL query:

sum by (type) (increase(claude_code_token_usage_tokens_total{type=~"input|output"}[$__rate_interval]))

The AI Center Code Agents screen aggregates input and output tokens together. The split is available only in dashboards built on the underlying metric.