Getting started
This guide walks you through integrating Coralogix Guardrails with your LLM application to protect against prompt injection attacks, PII leakage, and other security threats. By following these steps, you can start securing your AI applications in a few minutes.
What you need
- Python 3.10 or higher.
- A Coralogix account with a Send-Your-Data API key, found under Settings, then API Keys.
- Access to the Coralogix Guardrails SDK.
Install the SDK
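Install the SDK and the tracing helper used in the examples below. The package names here are assumptions based on the import names (cx_guardrails, llm_tracekit); verify the exact names against your Coralogix onboarding documentation:

```shell
# Install the Guardrails SDK and the llm_tracekit observability helper.
# Package names are inferred from the import names used in this guide;
# confirm them in the official Coralogix documentation.
pip install cx-guardrails llm-tracekit
```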
Set up environment variables
# Coralogix credentials
export CX_GUARDRAILS_TOKEN="your-coralogix-guardrails-api-key"
export CX_TOKEN="your-coralogix-send-your-data-key"
export CX_ENDPOINT="ingress.eu2.coralogix.com"
export CX_GUARDRAILS_ENDPOINT="https://api.<domain>.coralogix.com/api/v1/guardrails/guard"
# Optional: Application metadata for observability
export CX_APPLICATION_NAME="my-app"
export CX_SUBSYSTEM_NAME="my-subsystem"
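Before running the examples, you can sanity-check that the required variables are set. The helper below is a local convenience sketch, not part of the Guardrails SDK; the variable names are taken from the export commands above:

```python
import os

# Required variables from the "Set up environment variables" step above.
# This helper is a local convenience, not part of the Guardrails SDK.
REQUIRED_VARS = [
    "CX_GUARDRAILS_TOKEN",
    "CX_TOKEN",
    "CX_ENDPOINT",
    "CX_GUARDRAILS_ENDPOINT",
]

def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

missing = missing_env_vars()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
else:
    print("✓ All required environment variables are set.")
```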
Step 1: Test your connection
Before enabling production policies, verify that the Guardrails SDK is reachable:
import asyncio

from cx_guardrails import Guardrails, GuardrailsConnectionTestError


async def main():
    guardrails = Guardrails()
    try:
        response = await guardrails.test_connection()
        print("✓ Guardrails API is reachable!")
        print(f"Response: {response}")
    except GuardrailsConnectionTestError as e:
        print(f"✗ Connection test failed: {e}")


asyncio.run(main())
Expected output:
✓ Guardrails API is reachable!
Response: results=[GuardrailResult(type='test_policy', detected=False, ...)]
Step 2: Set up observability export
Configure the export to Coralogix AI Center to send guardrail data for monitoring and analysis:
from llm_tracekit import setup_export_to_coralogix

setup_export_to_coralogix(
    service_name="my-service",
    application_name="my-app",
    subsystem_name="my-subsystem",
    capture_content=True,
)
This enables OpenTelemetry tracing for all guardrail evaluations, allowing you to view traces in Coralogix AI Center. For more details, see Getting Started.
Step 3: Guard a prompt with observability
For production use, wrap your guardrail calls in a guarded_session() context manager. This creates a parent span that groups all guardrail evaluations under a single OpenTelemetry trace, making it easy to correlate evaluations and view the complete request flow in Coralogix.
import asyncio

from openai import AsyncOpenAI
from cx_guardrails import Guardrails, PII, PromptInjection, GuardrailsTriggered
from llm_tracekit import setup_export_to_coralogix
from llm_tracekit.openai import OpenAIInstrumentor

# Configure the export to Coralogix AI Center and instrument the OpenAI client.
setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)
OpenAIInstrumentor().instrument()

guardrails = Guardrails()


async def main():
    openai_client = AsyncOpenAI()
    user_message = "What is AI observability? Explain in one sentence."

    async with guardrails.guarded_session():
        # Guard the user prompt before it reaches the LLM.
        try:
            await guardrails.guard_prompt(
                prompt=user_message,
                guardrails=[PII(), PromptInjection()],
            )
            print("✓ User input passed")
        except GuardrailsTriggered as e:
            print(f"✗ Blocked: {e}")
            return

        response = await openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": user_message},
            ],
        )
        print(f"\n📝 AI RESPONSE:\n{response.choices[0].message.content}")


asyncio.run(main())
Expected output:
✓ User input passed
📝 AI RESPONSE:
AI observability refers to the tools and practices used to monitor, analyze, and understand the behavior and performance of AI models and systems in real-time.
Step 4: Full guarded conversation
Guard both user input and LLM response in a complete flow:
import asyncio

from openai import AsyncOpenAI
from cx_guardrails import Guardrails, PII, PromptInjection, GuardrailsTriggered

guardrails = Guardrails()
openai_client = AsyncOpenAI()


async def main():
    user_message = "What is AI observability? Explain in one sentence."

    async with guardrails.guarded_session():
        # Guard the user prompt before calling the LLM.
        try:
            await guardrails.guard_prompt(
                prompt=user_message,
                guardrails=[PII(), PromptInjection()],
            )
            print("✓ User input passed")
        except GuardrailsTriggered as e:
            print(f"✗ Blocked: {e}")
            return

        response = await openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": user_message},
            ],
        )
        llm_response = response.choices[0].message.content

        # Guard the LLM response before returning it to the user.
        try:
            await guardrails.guard_response(
                response=llm_response,
                prompt=user_message,
                guardrails=[PII()],
            )
            print("✓ LLM response passed")
        except GuardrailsTriggered as e:
            print(f"✗ Response blocked: {e}")
            return

        print(f"\n📝 AI RESPONSE:\n{llm_response}")


asyncio.run(main())
Expected output:
✓ User input passed
✓ LLM response passed
📝 AI RESPONSE:
AI observability refers to the tools and practices used to monitor, analyze, and understand the behavior and performance of AI models and systems in real-time.
View your data in Coralogix
- Log in to your Coralogix account.
- Go to AI Center, then Application Catalog to see your application.
- Select your application to view its detailed information.
- Navigate to the Guardrails section to see the trace data for your guardrail evaluations.
Next steps
- Prebuilt Policies — Apply ready-to-use policies for prompt injection, PII, and toxicity.
- Custom Policies — Define domain-specific policies using natural language.
- Guard API — Advanced multi-turn conversation guardrail control.