Monitor AI applications
Monitor health, performance, cost, quality, and security posture across all AI applications. Use monitoring to spot changes early, prioritize what to investigate, and validate that fixes improve outcomes across your AI library.
Monitoring as part of a complete platform for AI reliability
AI Center combines Observability, Guardrails, Evaluations, and AI SPM into a unified set of tools. Monitoring gives you a bird's-eye view of every LLM in your organization and the ability to drill down to a specific interaction. Guardrails intercept problems in real time before they reach users. Evaluations assess quality, safety, and compliance. AI SPM scans your GitHub repositories to discover and instrument all AI in use.
Monitoring views
AI Center provides three complementary views for monitoring your applications:
- The Overview page shows insights about all applications across the organization.
- The Application Catalog lists all applications in a sortable table so you can compare them.
- The Application Drilldown shows key metrics for a specific application.
Drill down from a team-wide view to an individual interaction
Start at the team level to see which applications have the most errors, the highest latency, or the highest cost. Move to a specific application to compare versions or track trends. Then drill into AI Explorer to inspect the exact prompt, response, token count, and evaluation results for any single LLM call.
Monitoring works alongside Guardrails and Evaluations
When monitoring surfaces a problem — a spike in errors, a latency regression, a cost anomaly — Guardrails let you act in real time, intercepting harmful or low-quality responses before they reach users. Evaluations help you assess whether quality or safety has degraded. Together, these tools close the loop from detection to prevention.
In this section
- Applications — Monitor health, performance, cost, errors, and latency across all your AI applications.
- AI Explorer — Inspect individual LLM interactions, view evaluation results, and trace the full request flow end-to-end.