
Application Catalog

The AI Application Catalog provides a unified view of your AI applications. Use the Catalog to compare applications, analyze trends over time, and sort data by key performance indicators, enabling more effective monitoring, troubleshooting, and cost optimization across all of your AI applications.

Accessing the Application Catalog

  1. In the Coralogix UI, navigate to AI Center > Application Catalog.
  2. Use the time picker to select the desired time interval for metrics collection.
  3. Review the counters and the AI application grid.
  4. If needed, click on an application row in the grid to display the detailed AI application overview.
  5. Click the Add Application button to open the documentation describing how to add a new application.

Counters

Examine the essential performance, cost, and issue metrics of all your AI applications.

  • Time to Response – LLM response time across all applications, shown as the average and the P75, P95, and P99 percentiles.
  • Estimated Cost – The total estimated cost of all traces in your applications.
  • Token Usage – The total number of tokens consumed by your applications.
  • Issue Rate – The percentage of prompts and responses that contain security or quality issues. The counter also displays the percentage change over the selected time period (see the sketch after this list).
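To make these counters concrete, the following minimal sketch shows how such values could be aggregated from individual LLM traces. The trace records, field names, and figures are hypothetical and do not reflect the Coralogix data model; they only illustrate the arithmetic behind the counters.

```python
# Illustrative only: hypothetical trace records, not the Coralogix data model.
from statistics import mean, quantiles

traces = [
    # duration_ms, cost_usd, tokens, has_issue (any security or quality issue)
    {"duration_ms": 820,  "cost_usd": 0.004, "tokens": 350,  "has_issue": False},
    {"duration_ms": 1240, "cost_usd": 0.009, "tokens": 780,  "has_issue": True},
    {"duration_ms": 640,  "cost_usd": 0.002, "tokens": 210,  "has_issue": False},
    {"duration_ms": 2980, "cost_usd": 0.021, "tokens": 1900, "has_issue": False},
]

durations = sorted(t["duration_ms"] for t in traces)
# quantiles(n=100) returns the 1st..99th percentile cut points.
# With only a handful of traces, the high percentiles converge toward the slowest call.
pct = quantiles(durations, n=100)

print("Avg duration (ms):", round(mean(durations), 1))
print("P75 / P95 / P99 (ms):", pct[74], pct[94], pct[98])
print("Estimated cost (USD):", round(sum(t["cost_usd"] for t in traces), 3))
print("Token usage:", sum(t["tokens"] for t in traces))
issue_rate = 100 * sum(t["has_issue"] for t in traces) / len(traces)
print("Issue rate (%):", issue_rate)
```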


Application grid

Gain a comprehensive overview of all your AI applications, with key metrics displayed at a glance. Each row of the application grid represents an individual AI app, providing a detailed snapshot of its performance, cost, and models in use, as detailed below. Along with the current count, the quality and security issue metrics show the percentage change over the selected time period (illustrated after the column list below), highlighting key trends and flagging problematic apps.

  • Application Name – The name of an AI app.
  • Traces – The total number of LLM calls.
  • Security Issues – The total number of security issues (reported by all evaluations in the Security category).
  • Quality Issues – The total number of quality issues (reported by all evaluations, excluding those in the Security category).
  • Cost – The estimated application cost in USD.
  • Tokens – The total token consumption.
  • Avg. Duration – The average LLM response time to users.
  • Models – A list of models used in the application.
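The percentage change displayed alongside the issue counts compares the current period with the preceding one. A minimal sketch of that calculation, using hypothetical counts:

```python
def pct_change(current, previous):
    """Percentage change between two periods; None when there is no baseline."""
    if previous == 0:
        return None
    return 100 * (current - previous) / previous

# Hypothetical security issue counts for one application.
security_now, security_before = 12, 8  # current vs. previous period
print(f"Security issues: {security_now} ({pct_change(security_now, security_before):+.0f}%)")
# -> Security issues: 12 (+50%)
```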


Managing your evals

In addition to your AI applications, you can manage your evals from the Application Catalog. Evaluations (evals) are metric-driven tools used to assess different aspects of LLM-based applications, including security, safety, quality, and performance. The Eval Catalog documentation explains how to add, remove, enable, disable, or edit evals.