Getting started

This guide walks you through integrating Coralogix Guardrails with your LLM application to protect against prompt injection attacks, PII leakage, and other security threats. By following these steps, you can start securing your AI applications in a few minutes.

Use the domain selector at the top of this page to set your Coralogix region. The example commands and code snippets on this page update automatically to use the matching endpoints.

What you need

  • Python 3.10 or higher.
  • A Team API key with the AiObservability role preset, used as CX_GUARDRAILS_TOKEN. This preset includes AI-GUARDRAILS:MANAGE and every other permission Guardrails requires.
  • A Send-Your-Data API key, used as CX_TOKEN. To create one, navigate to Settings, then API Keys.
  • Access to the Coralogix Guardrails SDK.

Install the SDK

pip install cx-guardrails

Set up environment variables

# Coralogix credentials
export CX_GUARDRAILS_TOKEN="your-coralogix-guardrails-api-key"
export CX_TOKEN="your-coralogix-send-your-data-key"
export CX_ENDPOINT="ingress.<your-coralogix-domain>"
export CX_GUARDRAILS_ENDPOINT="https://api.<your-coralogix-domain>/api/v1/guardrails/guard"

# Optional: Application metadata for observability
export CX_APPLICATION_NAME="my-app"
export CX_SUBSYSTEM_NAME="my-subsystem"
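Before running the examples, you can sanity-check that the required variables are actually set in your shell. This is an illustrative sketch, not part of the SDK; the variable names mirror the exports above, and `missing_guardrails_vars` is a helper introduced here for demonstration:

```python
import os

# The four required variables from the exports above.
REQUIRED_VARS = [
    "CX_GUARDRAILS_TOKEN",
    "CX_TOKEN",
    "CX_ENDPOINT",
    "CX_GUARDRAILS_ENDPOINT",
]

def missing_guardrails_vars(env=os.environ):
    # Return the names of required variables that are unset or empty.
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_guardrails_vars()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("All Guardrails environment variables are set")
```

Run it once before Step 1; an empty result means the SDK will find its credentials and endpoints.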

Step 1: Test your connection

Before enabling production policies, verify that the Guardrails SDK is reachable:

import asyncio
from cx_guardrails import Guardrails, GuardrailsAPIConnectionError

async def main():
    guardrails = Guardrails()
    try:
        response = await guardrails.test_connection()
        print("✓ Guardrails API is reachable!")
        print(f"Response: {response}")
    except GuardrailsAPIConnectionError as e:
        print(f"✗ Connection test failed: {e}")

asyncio.run(main())

Expected output:

✓ Guardrails API is reachable!
Response: results=[GuardrailResult(type='test_policy', detected=False, ...)]

Step 2: Set up observability export

Configure the export to Coralogix AI Center to send guardrail data for monitoring and analysis:

from llm_tracekit import setup_export_to_coralogix

setup_export_to_coralogix(
    service_name="my-service",
    application_name="my-app",
    subsystem_name="my-subsystem",
    capture_content=True,
)

This enables OpenTelemetry tracing for all guardrail evaluations, allowing you to view traces in Coralogix AI Center. For more details, see Getting Started.

Step 3: Guard a prompt with observability

For production use, wrap your guardrail calls in a guarded_session() context manager. This creates a parent span that groups all guardrail evaluations together for OpenTelemetry tracing, making it easy to correlate traces and view the complete request flow in Coralogix.

import asyncio
from openai import AsyncOpenAI
from cx_guardrails import Guardrails, PII, PromptInjection, GuardrailsTriggered
from llm_tracekit import setup_export_to_coralogix
from llm_tracekit.openai import OpenAIInstrumentor

setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)
OpenAIInstrumentor().instrument()

guardrails = Guardrails()

async def main():
    openai_client = AsyncOpenAI()
    user_message = "What is AI observability? Explain in one sentence."

    async with guardrails.guarded_session():
        try:
            await guardrails.guard_prompt(
                prompt=user_message,
                guardrails=[PII(), PromptInjection()],
            )
            print("✓ User input passed")
        except GuardrailsTriggered as e:
            print(f"✗ Blocked: {e}")
            return

        response = await openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": user_message},
            ],
        )
        print(f"\n📝 AI RESPONSE:\n{response.choices[0].message.content}")

asyncio.run(main())

Expected output:

✓ User input passed

📝 AI RESPONSE:
AI observability refers to the tools and practices used to monitor, analyze, and understand the behavior and performance of AI models and systems in real-time.

Step 4: Full guarded conversation

Guard both user input and LLM response in a complete flow:

import asyncio
from openai import AsyncOpenAI
from cx_guardrails import Guardrails, PII, PromptInjection, GuardrailsTriggered

guardrails = Guardrails()
openai_client = AsyncOpenAI()

async def main():
    user_message = "What is AI observability? Explain in one sentence."

    async with guardrails.guarded_session():
        try:
            await guardrails.guard_prompt(
                prompt=user_message,
                guardrails=[PII(), PromptInjection()],
            )
            print("✓ User input passed")
        except GuardrailsTriggered as e:
            print(f"✗ Blocked: {e}")
            return

        response = await openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": user_message},
            ],
        )
        llm_response = response.choices[0].message.content

        try:
            await guardrails.guard_response(
                response=llm_response,
                prompt=user_message,
                guardrails=[PII()],
            )
            print("✓ LLM response passed")
        except GuardrailsTriggered as e:
            print(f"✗ Response blocked: {e}")
            return

        print(f"\n📝 AI RESPONSE:\n{llm_response}")

asyncio.run(main())

Expected output:

✓ User input passed
✓ LLM response passed

📝 AI RESPONSE:
AI observability refers to the tools and practices used to monitor, analyze, and understand the behavior and performance of AI models and systems in real-time.

View your data in Coralogix

  1. Log into your Coralogix account.
  2. Go to AI Center, then Application Catalog to see your application.
  3. Select your application to view its detailed information.
  4. Navigate to the Guardrails section to see the trace data for your guardrail evaluations.

Troubleshoot

AI Explorer shows "This application is not guarded" but guardrails are firing.

Cause: The application and subsystem values used to register guardrails do not match the values applied to ingested traces. When traces are sent through an OpenTelemetry Collector, the collector derives application and subsystem from resource attributes; if those values disagree with the guardrail registration, AI Explorer cannot match traces to the guardrails.

Fix:

  1. Note the application and subsystem values used when registering your guardrails.
  2. In your OpenTelemetry Collector configuration, set application_name_attributes to look at cx.application.name first, then fall back to service.namespace.
  3. Set subsystem_name_attributes to look at cx.subsystem.name first, then fall back to service.name.
  4. Set the cx.application.name and cx.subsystem.name resource attributes on your application to match the guardrail registration.
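The collector-side part of this fix can be sketched as a Coralogix exporter fragment. This is an illustrative excerpt, not a complete collector configuration; the field names follow the Coralogix exporter from opentelemetry-collector-contrib, and the domain value is a placeholder you must replace:

```yaml
exporters:
  coralogix:
    # Placeholder: set to your Coralogix domain.
    domain: "<your-coralogix-domain>"
    # Try cx.application.name first, then fall back to service.namespace.
    application_name_attributes:
      - "cx.application.name"
      - "service.namespace"
    # Try cx.subsystem.name first, then fall back to service.name.
    subsystem_name_attributes:
      - "cx.subsystem.name"
      - "service.name"
```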

Guard API calls return 403 or 404.

Cause: The API key used as CX_GUARDRAILS_TOKEN lacks the AI-GUARDRAILS:MANAGE permission. A standard Send-Your-Data key does not work for the Guard API.

Fix: Use a Team API key created with the AiObservability role preset, which includes AI-GUARDRAILS:MANAGE. See What you need for the full key requirements.

A blocked prompt sent twice produces only one guardrail violation in AI Explorer.

Cause: Guardrail violations are aggregated at the trace level. If the second call runs in a new trace and that trace itself does not invoke the input guardrail again, no violation is recorded for it.

Fix: Wrap the input check, the LLM call, and the response check inside a single guardrails.guarded_session() block, as shown in Step 3. All evaluations within the session are grouped on the same trace.

Next steps

Apply ready-to-use policies for prompt injection, PII, and toxicity with Guardrails prebuilt policies.