FAQs Coralogix AI Tools
December 5, 2025
Coralogix offers powerful AI solutions in accordance with its Terms and Conditions and AI Tools Acceptable Use Policy.
Cora AI
Cora AI is an optional suite of features built directly into the Coralogix UI. It helps you create queries, interpret logs, and get answers from documentation. It can be toggled on and off (per tool or as a full suite) at any time in the settings. It includes three features:
AI DataPrime Query Assistance
Create searches using plain English, with guided suggestions that automatically translate your intent into accurate DataPrime queries.
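For example, a plain-English request might be translated into a DataPrime query along these lines (an illustrative sketch only; the field names and filter values are assumptions, not output from the assistant):

```
# Intent: "show all error logs from the checkout subsystem"
source logs
| filter $l.subsystemname == 'checkout' && $m.severity == ERROR
```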
Explain Logs
Understand logs easily. It breaks log data down into simple, clear terms to help you quickly find and fix issues.
Knowledge Assistant
Get instant, smart answers from your internal documentation, knowledge base, and past incidents—right when you need them.
The AI Center
AI Center is a service for monitoring your AI systems’ performance, security, compliance, and cost. It appears as a tab in the top bar of the Coralogix UI. Integrate your AI projects with the Coralogix platform and send spans the same way you send other traces to Coralogix. The AI Center includes:
AI Evaluation Engine
Checks every prompt and response for accuracy, security, and compliance automatically. You can customize it for your needs. Uses Azure OpenAI for processing.
AI-SPM (Security Posture Management)
Monitors real-time AI security issues like prompt injections, data leaks, and personal data exposure using simple dashboards.
User Journey & Cost Tracking
Keeps an eye on user activity and token use to spot issues and control costs.
Performance Metrics
Flags slow, inaccurate, or risky AI behavior so teams can fix problems fast and maintain a smooth user experience.
Meet Olly – Coralogix’s AI observability agent
Olly is Coralogix’s standalone AI observability agent. It connects to your Coralogix observability data (logs, metrics, and traces) and lets you interact with that data through a conversational interface, making it easily accessible through natural language queries.
Ask. Understand. Act — Instantly
Users can ask questions like “Why is my app slow?” or “What’s wrong with the payment flow?” and get instant, actionable insights.
Faster troubleshooting, less manual work
Olly continuously scans telemetry to surface anomalies, identify likely root causes, and suggest next steps, helping teams reduce investigation time and resolve incidents faster.
Transparent by Default Architecture
Olly defaults to transparency, displaying all queries so users can verify correctness, understand the process, and improve practices. Olly emphasizes visibility, trust, auditability, and compliance with AI transparency standards.
Built for the whole team
Olly automatically scans all telemetry data to identify root causes and suggest fixes, eliminating manual investigation and drastically reducing troubleshooting time. Built to democratize observability, Olly empowers all team members, not just engineers, to understand and act on system behavior.
Responsible AI with Full Visibility
Olly supports responsible AI practices by making outputs traceable and justifiable, with visibility into the logic behind each result, including the exact DataPrime and PromQL queries.
Terms of service
Olly services are provided in accordance with the Coralogix Master Subscription Terms and the Addendum to Coralogix Master Subscription Terms - Olly Services.
Coralogix MCP
The Model Context Protocol (MCP) server enables external AI agents to securely access and work with Coralogix observability data through a remote interface, making telemetry easier to query, analyze, and understand.
AI Data Access
Securely exposes Coralogix telemetry to external AI agents via MCP.
Express-Based Architecture
Uses a lightweight Express server for simple and flexible integration.
Observability Intelligence
Enables intelligent querying, analysis, and interaction with observability data.
FAQs
No
By admins, or by users granted the team-aisettings:Manage permission. Toggle the switch on or off in the settings, under account preferences.
By admin users, or by users granted the team-aisettings:Manage permission.
Yes, by admins or by users granted the team-aisettings:Manage permission.
Yes. Some tools use external AI services:
- OpenAI (used by AI Query Assistant & Explain Logs)
- Kapa AI (used by Knowledge Assistant)
No, data is not used to train LLMs
It depends on the type of tool:
- Explain Logs: processes selected logs + metadata
- Query Assistant: processes prompt + metadata
- Knowledge Assistant: processes only your question
- OpenAI: runs on Microsoft Azure (US)
- Kapa AI: sends prompts to OpenAI & Anthropic (US)
Yes, for up to 30 days, and only to prevent abuse.
Yes
No, it is not HIPAA compliant.
No
Only admins. It requires specific configuration by the user admin, and access controls are enforced through RBAC, ensuring that only admin users or specifically authorized users can access its features and functionality.
Only by admin users
Yes, by admins
Only for the Evaluations feature, which uses Coralogix’s private deployment of Azure OpenAI.
No, data is not used to train LLMs
User-AI spans for evaluations
Microsoft Azure data centers in the USA, EU, or Asia-Pacific.
No
Yes
No, it is not HIPAA compliant.
No
Only customers. Olly runs in a separate interface at Ollyhq.com, where users must configure the AI agent and integrate it with their Coralogix data.
Only by admin users
Yes, by admins
By default, Olly uses a GPT model through Azure OpenAI; users can also select from several optional LLM providers:
- Azure OpenAI’s GPT large language models are deployed entirely inside Coralogix’s own Azure VPC. Because everything runs within Coralogix’s isolated cloud environment, no telemetry data leaves Coralogix infrastructure, and third-party vendors such as OpenAI cannot access or store your data.
- The Claude model is available (as an optional processing model that users can toggle) on Coralogix’s AWS VPC.
- On GCP, through Google’s Vertex AI service, with endpoints configured as private endpoints only and with logical isolation.
No, data is not used to train LLMs
Processing of all telemetry data that is queried, including the logs, metrics, and traces relevant to your query or investigation.
- GPT model: located in Microsoft Azure regions matching the customer-selected AWS region. For example, if the customer-selected AWS hosting region is in the EU, the processing data centers will likewise be in the EU.
- Claude model: located in AWS regions matching the customer-selected AWS region. For example, if the customer-selected AWS hosting region is in the EU, processing on AWS data centers will likewise be in the EU.
- Gemini model: located in GCP regions matching the customer-selected AWS region. For example, if the customer-selected AWS hosting region is in the EU, processing on GCP data centers will likewise be in the EU. However, customers located in Asia should note that their data will be sent to and processed in asia-south1 (Mumbai, India, APAC).
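The region-matching rule above can be sketched as a small lookup. This is a hypothetical illustration only; the model keys, region codes, and return format are assumptions, not Coralogix's actual mapping:

```python
def processing_region(model: str, customer_region: str) -> str:
    """Return where LLM processing occurs for a given model.

    Rule from the FAQ: processing happens in a region of the model's
    cloud that matches the customer-selected AWS hosting region, except
    that Gemini requests from Asia-hosted accounts are processed in
    asia-south1 (Mumbai).
    """
    if model == "gemini" and customer_region == "AP":
        return "asia-south1"  # APAC exception for Gemini
    # Each model runs on a different cloud, in the customer's geography.
    cloud = {"gpt": "Azure", "claude": "AWS", "gemini": "GCP"}[model]
    return f"{cloud}:{customer_region}"


# e.g. an EU-hosted customer using the GPT model is processed on Azure in the EU
print(processing_region("gpt", "EU"))  # Azure:EU
```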
No
Yes
Yes, it is HIPAA compliant
No, the server is remote and does not require any local installation.
Yes, you can use MCP with other AI LLMs or IDEs of your choice.
You simply add the configuration available here to your mcp.json and replace the endpoint and API key as detailed in the setup guide.
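As an illustration only, a remote MCP server entry in mcp.json typically looks something like the following; the server name, URL placeholder, and header field are assumptions, and the actual values come from the setup guide:

```json
{
  "mcpServers": {
    "coralogix": {
      "url": "https://<your-coralogix-mcp-endpoint>",
      "headers": {
        "Authorization": "Bearer <your-api-key>"
      }
    }
  }
}
```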
No
Depending on the data you would like to query, the following permissions are required for using the MCP server.
Because the MCP server connects to third-party AI tools, its use and permissions are managed through the third-party tool itself. Customers are responsible for managing the permissions within their third-party tools.
Customers select which third-party AI tools they will integrate with. Coralogix does not have any responsibility for the Processing which occurs on customers’ third-party AI tools.
Coralogix remains isolated from those external processes. The data may be used according to the third-party AI LLMs or IDEs you choose to use.
Data that you explicitly query from your Coralogix account.
Logical segregation through separate endpoints and unique API keys.