Prebuilt evaluation policies

Coralogix AI Center provides prebuilt evaluation policies that automatically assess every prompt and response in real time. These policies help you detect quality issues, security threats, and compliance violations before they impact your users.

Understanding evaluation scores

All prebuilt policies return a score between 0 and 1.
| Score range | Meaning | Marked as issue? |
| --- | --- | --- |
| Closer to 1 | Issue detected with high confidence | Yes |
| Closer to 0 | Low severity; probably not an issue | No |

When a policy detects a violation, it returns a score closer to 1 and the interaction is flagged as an issue. Scores closer to 0 indicate low severity and the interaction is not flagged.

Threshold: Evaluation policies use a fixed threshold of 0.7. Configurable thresholds are coming in a future release.
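The score-to-flag mapping above can be sketched in a few lines. This is an illustrative model only, not a Coralogix API: the function name and structure are hypothetical, but the [0, 1] score bounds and the fixed 0.7 threshold match the documented behavior.

```python
# Illustrative sketch: how a bounded policy score maps to an issue flag.
# THRESHOLD mirrors the documented fixed value of 0.7; everything else
# (names, structure) is a hypothetical stand-in, not the product's API.
THRESHOLD = 0.7

def is_flagged(score: float) -> bool:
    """Return True when an evaluation score indicates an issue."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("policy scores are bounded to [0, 1]")
    return score >= THRESHOLD

print(is_flagged(0.92))  # high-confidence violation -> True
print(is_flagged(0.15))  # low severity -> False
```

Scores at or above the threshold are flagged; once configurable thresholds ship, only the constant would change.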

Available prebuilt policies

Hallucinations

Hallucination policies detect when your AI generates content that is factually incorrect, fabricated, or not properly grounded in context.
| Policy | Description | When to use |
| --- | --- | --- |
| Context adherence | Measures whether the model's response strictly follows the provided context without introducing new information. | RAG applications and internal chatbots, when responses must be grounded in specific documents. |
| Context relevance | Assesses how relevant and similar the provided context is to the user query, ensuring it contains the necessary information for an accurate response. | RAG applications, to verify retrieved context actually matches the question. |
| Completeness | Evaluates how well the model's response includes all relevant information from the context. | When you need comprehensive answers that don't omit important details. |
| Correctness | Determines if a model's response is factually accurate. | General-purpose chatbots answering world knowledge questions. |
| SQL hallucination | Detects hallucinations in LLM-generated SQL queries. | Text-to-SQL applications where query accuracy is critical. |
| Tool parameter correctness | Ensures the correct tools are selected and invoked with accurately derived parameters based on chat history. | Agentic AI applications using function calling or tool use. |
| Task adherence | Detects whether the LLM's answer aligns with the given system prompt. | When your AI must follow specific instructions defined in the system prompt. |

Security

Security policies protect against malicious inputs, unauthorized data access, and dangerous AI-generated content.
| Policy | Description | When to use |
| --- | --- | --- |
| Prompt injection | Detects any user attempt at prompt injection or jailbreak. | All production AI applications with user input. |
| SQL (read-only access) | Detects any attempt to use SQL operations requiring more than read-only access. | Text-to-SQL apps where users should only query data, not modify it. |
| SQL (load limit) | Detects SQL statements that are likely to cause significant system load and affect performance. | Preventing expensive queries that could degrade database performance. |
| SQL (restricted tables) | Detects the generation of SQL statements that access specific tables considered sensitive. | Protecting sensitive data tables from unauthorized access. |
| SQL (allowed tables) | Detects SQL operations on tables that are not configured in the eval. | Restricting AI to only query approved tables. |
| PII | Detects the existence of PII in the user message or the LLM response based on the configured sensitive data types. | Data privacy compliance (GDPR, HIPAA) and sensitive data leakage prevention. |
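To make concrete what the SQL (read-only access) policy targets, here is a toy keyword-based check. This is only an assumption-laden illustration of the category of statement the policy flags; the actual Coralogix evaluation is model-based, not a keyword scan.

```python
import re

# Hypothetical sketch: flag SQL statements that need more than read-only
# access. The keyword list is illustrative and incomplete; the real
# policy's detection method is not a simple token match like this.
WRITE_KEYWORDS = {
    "insert", "update", "delete", "drop",
    "alter", "truncate", "create", "grant",
}

def requires_write_access(sql: str) -> bool:
    """Return True if the statement uses more than read-only operations."""
    tokens = re.findall(r"[a-z_]+", sql.lower())
    return any(tok in WRITE_KEYWORDS for tok in tokens)

print(requires_write_access("SELECT name FROM users WHERE id = 7"))  # False
print(requires_write_access("DELETE FROM users WHERE id = 7"))       # True
```

A pure SELECT passes, while anything mutating data or schema would be flagged; the restricted-tables and allowed-tables policies apply the same idea to table names instead of operations.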

Toxicity

Toxicity policies detect harmful, offensive, or inappropriate content in prompts or responses.
| Policy | Description | When to use |
| --- | --- | --- |
| Sexism | Detects whether an LLM response or user prompt contains sexist content. | Ensuring respectful, inclusive AI interactions. |
| Toxicity | Detects any user message or LLM response containing toxicity. | Customer-facing AI, brand safety, and content moderation. |

Topics

Topic policies ensure conversations stay within appropriate boundaries.
| Policy | Description | When to use |
| --- | --- | --- |
| Restricted topics | Detects any user or LLM attempt to initiate a discussion on the topics mentioned in the eval. | Preventing discussions on sensitive or off-limits subjects. |
| Allowed topics | Ensures the conversation adheres to specific and well-defined topics. | Domain-specific assistants that should stay focused. |
| Competition discussion | Detects whether any prompt or response includes references to competitors mentioned in the eval. | Sales and marketing chatbots and brand protection. |

User experience

User experience policies help ensure quality interactions.
| Policy | Description | When to use |
| --- | --- | --- |
| Language mismatch | Detects when an LLM is answering a user question in a different language. | Multi-language support, ensuring consistent language in responses. |
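A crude way to picture what a language-mismatch check does is to compare the dominant Unicode script of the prompt and the response. This is a deliberately naive heuristic that only catches obvious cases (say, a Cyrillic question answered in Latin script); real language detection, and the policy itself, is far more nuanced.

```python
import unicodedata

# Toy heuristic (not the product's method): compare dominant Unicode
# scripts of prompt and response. Misses same-script language pairs
# such as English vs. French; purely illustrative.
def dominant_script(text: str) -> str:
    counts: dict[str, int] = {}
    for ch in text:
        if ch.isalpha():
            # Unicode character names start with the script, e.g.
            # "LATIN SMALL LETTER A", "CYRILLIC CAPITAL LETTER KA".
            script = unicodedata.name(ch, "UNKNOWN").split()[0]
            counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "UNKNOWN"

def language_mismatch(prompt: str, response: str) -> bool:
    return dominant_script(prompt) != dominant_script(response)

print(language_mismatch("Как настроить оповещения?", "Open the Alerts page."))  # True
print(language_mismatch("How do I configure alerts?", "Open the Alerts page."))  # False
```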

Compliance

Compliance policies help enforce organizational rules and regulations.
| Policy | Description | When to use |
| --- | --- | --- |
| Restricted phrases | Ensures the LLM does not use specified prohibited terms and phrases by blocking or replacing them based on regex patterns. | Brand guidelines, legal compliance, and avoiding specific terminology. |
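Since the restricted-phrases policy works on regex patterns with a block-or-replace behavior, a minimal sketch of the replace path looks like the following. The patterns and the replacement token are illustrative assumptions, not the product's configuration format.

```python
import re

# Hedged sketch of regex-based phrase replacement. The pattern list and
# "[REDACTED]" token are hypothetical examples, not Coralogix config.
RESTRICTED_PATTERNS = [
    re.compile(r"\bguarantee(d)?\b", re.IGNORECASE),
    re.compile(r"\brisk[- ]free\b", re.IGNORECASE),
]

def redact_restricted(text: str, replacement: str = "[REDACTED]") -> str:
    """Replace every match of a restricted pattern with a placeholder."""
    for pattern in RESTRICTED_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(redact_restricted("This plan is guaranteed and risk-free."))
# -> This plan is [REDACTED] and [REDACTED].
```

Word boundaries (`\b`) keep the patterns from matching inside longer words, and case-insensitive matching catches variant capitalization.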

Create evals for prebuilt policies

Use the Policy Catalog to add an evaluation to a policy.

  1. Browse the Policy Catalog tiles to find the policy you want.
  2. If the policy is not visible, use the Search field to locate it.
  3. To narrow the policy list, select a category or filter by Evaluations.
  4. On the policy card, select Add eval.
  5. Configure the evaluation:
     • If no configuration is required, Coralogix adds the evaluation to the application immediately.
     • If configuration is required, set the relevant options:
       • Select whether to run the evaluation on the user prompt, the LLM response, or both.
       • Select categories, values, or other attributes available for the policy type.
  6. Select Next.
  7. In the Add eval to application dialog box, select the application or applications to apply the evaluation to. Applications that already use this evaluation are unavailable.
  8. Select Done.

Coralogix shows an indicator for evaluation usage in the catalog card, including the number of applications the evaluation applies to.

Manage evals

To manage an eval for a prebuilt policy, open the policy card and select More actions.

Edit

Select Edit to open the policy page and update the policy. For prebuilt policies, you can edit only the policy configuration.

Delete

Select Delete to remove the policy from a specific application.