
Deprecation of Coralogix-Hosted Logstash for Legacy Filebeat Ingestion

Published: February 16, 2026

Effective: June 30, 2026

End-of-life notice

To improve platform reliability, simplify maintenance, and accelerate support for modern telemetry pipelines, Coralogix will retire the Coralogix-hosted Logstash service that supports the legacy Filebeat-based ingestion flow.

Action is required to avoid a loss of log ingestion after the effective date.

Why this is happening

Coralogix is investing in OpenTelemetry (OTel) as the recommended long-term ingestion approach. OpenTelemetry is the foundation for Coralogix’s most advanced capabilities and enables you to send logs, metrics, and traces from a single pipeline rather than using separate agents and paths. It unlocks powerful correlation between all entity types to speed up investigations.

Filebeat supports logs only. The legacy Filebeat flow depends on a Coralogix-maintained Logstash server. This hosted Logstash service requires ongoing operational work and relies on an older Logstash version, which limits upgrades and creates avoidable risk.

What’s affected

After June 30, 2026, the Coralogix-hosted Logstash service will no longer accept data. This impacts:

  • Environments that ship logs using the legacy Filebeat-to-Coralogix hosted Logstash flow.
  • Any automation, onboarding scripts, or deployment templates that assume Coralogix provides a hosted Logstash endpoint.
  • Monitoring, alerting, and investigations that rely on those logs (you may see gaps in dashboards and alerts if ingestion stops).

This change does not prevent you from using Logstash. If you run Logstash in your own infrastructure, you can continue using it as part of your pipeline.

What you need to do

Confirm whether you are using Coralogix-hosted Logstash. Review your current deployment and look for configurations where Filebeat (or another shipper) forwards logs to a Coralogix-provided Logstash destination. Our Logstash endpoints are logstashserver.coralogix.com, logstash.coralogix.in, and logstashserver.coralogix.us.
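A quick way to check is to search your shipper configuration for those hostnames. The config paths below are common defaults and an assumption; adjust them to match your deployment:

```shell
# Search common Filebeat/Logstash config locations for references to the
# Coralogix-hosted Logstash endpoints. Any file listed still points at the
# hosted service and needs to be migrated.
grep -RlE 'logstashserver\.coralogix\.(com|us)|logstash\.coralogix\.in' \
  /etc/filebeat /etc/logstash 2>/dev/null \
  || echo "no hosted-Logstash references found"
```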

Then migrate using one of the following options before June 30, 2026.

Option A. Replace Filebeat with an OpenTelemetry Collector (recommended)

OpenTelemetry requires more setup than the legacy flow, but it is the recommended long-term solution.

What you typically do:

  • Deploy an OpenTelemetry Collector (Coralogix distribution or your preferred Collector build)
  • Configure the Collector to receive logs from your environment
  • Configure export to Coralogix using the endpoint for your Coralogix region and your API key
  • Validate that logs are arriving in Coralogix, then cut over traffic to the Collector
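The validation step above can be sketched as a simple smoke test: append a uniquely marked line to a file the Collector tails, then search for the marker in Coralogix. LOG_FILE below is a placeholder default, not your real application log:

```shell
# Append a uniquely marked line to the tailed log file, then search for the
# marker in the Coralogix UI to confirm end-to-end delivery.
# LOG_FILE is an assumption; point it at the file your Collector actually tails.
LOG_FILE="${LOG_FILE:-/tmp/your_app.log}"
marker="otel-migration-smoke-$(date -u +%s)"
echo "$marker" >> "$LOG_FILE"
echo "wrote marker: $marker to $LOG_FILE"
```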

Why choose this:

  • Supports Coralogix’s most advanced features
  • One pipeline for logs, metrics, and traces
  • More future-proof than legacy agent-to-Logstash flows

For detailed steps, see Replace Filebeat with an OpenTelemetry Collector below.

Option B. Run your own Logstash instance (self-managed)

If you want to keep Logstash in your pipeline, you can run it in your own infrastructure using this Logstash guide. Coralogix supports the integration, but Coralogix will not host or maintain the Logstash server for you.

What you typically do:

  • Provision and operate Logstash in your environment (including version upgrades and scaling).
  • Update Filebeat to ship to your Logstash instance.
  • Configure Logstash to forward data to Coralogix using supported ingestion methods and credentials.
  • Validate ingestion in Coralogix, then cut over traffic.
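Before cutting Filebeat over, you may want to confirm that the Beats input on your self-managed Logstash is reachable. The host and port below (5044, the default Beats port) are assumptions:

```shell
# Check TCP reachability of the Logstash Beats input using bash's /dev/tcp.
# LOGSTASH_HOST and LOGSTASH_PORT are placeholders for your own instance.
host="${LOGSTASH_HOST:-127.0.0.1}"
port="${LOGSTASH_PORT:-5044}"
if timeout 3 bash -c "cat < /dev/null > /dev/tcp/$host/$port" 2>/dev/null; then
  echo "Logstash beats port reachable"
else
  echo "connection failed"
fi
```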

For detailed steps, see Run your own Logstash instance below.

Replace Filebeat with an OpenTelemetry Collector

This option applies to customers who are ready to migrate from Filebeat.

Logs shipped by the OpenTelemetry Collector have a different structure from your previous Filebeat logs. The following examples show the difference between the Filebeat and OTel log structures:

Filebeat:

{
  "@timestamp": "2026-02-17T07:39:14.778Z",
  "tags": [
    "beats_input_codec_plain_applied"
  ],
  "agent": {
    "type": "filebeat",
    "id": "edbabc87-1861-4bff-9192-844d0b7f3c1b",
    "name": "ubuntu",
    "version": "9.3.0",
    "ephemeral_id": "0966c318-476d-4b06-a23f-8cff00ade849"
  },
  "message": "test message",
  "@version": "1",
  "input": {
    "type": "filestream"
  },
  "log": {
    "file": {
      "path": "/var/log/test.log"
    }
  },
  "host": {
    "name": "ubuntu"
  },
  "ecs": {
    "version": "8.0.0"
  }
}
OTel:

{
  "attributes": {
    "log.file.path": "/var/log/test.log",
    "log.iostream": "stderr",
    "time": "2026-02-02T15:16:02.025296299Z"
  },
  "body": "test message",
  "observedTimeUnixNano": 1770045362115109000,
  "resource": {
    "attributes": {
      "host.id": "777005cbdb064f89b4a297450db02199",
      "host.name": "otel-lab"
    }
  },
  "resourceSchemaUrl": "https://opentelemetry.io/schemas/1.37.0",
  "scope": {
    "attributes": {}
  },
  "timeUnixNano": 1770045362025296400
}
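One practical consequence of this difference: queries, parsing rules, and extractions that reference the Filebeat message field need to target body in the OTel shape. A minimal shell illustration (using python3 purely as a JSON extractor):

```shell
# The same log line is carried in .message (Filebeat) versus .body (OTel).
# Both commands print: test message
echo '{"message":"test message"}' \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["message"])'
echo '{"body":"test message"}' \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["body"])'
```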

Install

Install an OpenTelemetry Collector using the official OpenTelemetry instructions.

Configure

Here is an example of a basic Filebeat configuration file that reads a log file and ships it with static application and subsystem names.

#============================== Filebeat Inputs ===============================

filebeat.inputs:
- type: log
  paths:
  - "/var/log/your_app/your_app.log"

fields_under_root: true
fields:
  PRIVATE_KEY: "YOUR_PRIVATE_KEY"
  COMPANY_ID: YOUR_COMPANY_ID
  APP_NAME: "APP_NAME"
  SUB_SYSTEM: "SUB_NAME"

#----------------------------- Logstash output --------------------------------

output.logstash:
  enabled: true
  hosts: ["logstashserver.<coralogix_domain>:5015"]
  ttl: 60s
  tls.certificate_authorities: ["<path to folder with certificates>/ca.crt"]
  ssl.certificate_authorities: ["<path to folder with certificates>/ca.crt"]

Here is the equivalent OTel Collector configuration to ship the same log file. Update your OpenTelemetry Collector configuration using the following as an example.

receivers:
  filelog:
    include:
    - /var/log/your_app/your_app.log
    include_file_name: true
    include_file_path: true
    retry_on_failure:
      enabled: true
    start_at: beginning
    storage: file_storage

processors:
  batch:
    send_batch_max_size: 2048
    send_batch_size: 1024
    timeout: 1s

exporters:
  coralogix:
    domain: "coralogix.com"
    private_key: "cxtp_SendYourDataAPIKey"
    application_name: "DefaultApplicationName"
    subsystem_name: "DefaultSubsystemName"
    timeout: 30s

extensions:
  file_storage: { directory: /var/log/otelcol }

service:
  extensions:
  - file_storage
  pipelines:
    logs:
      receivers:
        - filelog
      processors:
        - batch
      exporters:
        - coralogix

Filebeat to OpenTelemetry Processor migration reference

The following reference maps common Filebeat processors to their functional equivalents in the OTel Collector. Use it to maintain feature parity and logic consistency while migrating telemetry pipelines to a vendor-neutral architecture.

  Use case                              | Filebeat processor | OpenTelemetry processor | Example
  Add host metadata                     | add_host_metadata  | resourcedetection       | YAML
  Add cloud metadata                    | add_cloud_metadata | resourcedetection       | YAML
  Remove fields from JSON logs          | drop_fields        | transform               | OTTL
  Convert valid JSON strings to objects | decode_json_fields | transform               | OTTL
  Change the name of a field            | rename             | transform               | OTTL
  Filter or drop logs                   | drop_event         | filter                  | YAML

Example: Add host metadata

processors:
  resourcedetection:
    detectors:
      - host

Example: Add cloud metadata

processors:
  resourcedetection:
    detectors:
      - ec2

Example: Remove fields from JSON logs

processors:
  transform:
    log_statements:
      - 'delete_key(log.body, "fieldName")'

Example: Convert valid JSON strings to objects

processors:
  transform:
    log_statements:
      - 'set(log.body["json"], ParseJSON(log.body["json_string"])) where IsMatch(log.body["json_string"], "\\{")'

Example: Change the name of a field

processors:
  transform:
    log_statements:
      - 'set(log.body["new"], log.body["old"])'
      - 'delete_key(log.body, "old")'

Example: Filter or drop logs

processors:
  filter:
    error_mode: ignore
    logs:
      log_record:
        - 'IsMatch(body, ".*blocked log.*")'

Explore the official OpenTelemetry documentation to discover additional processors and the full library of OTTL functions available for advanced telemetry processing.

Run your own Logstash instance

Install

Install a self-hosted Logstash instance using the official Elastic Logstash installation instructions.

Configure

Update your Filebeat configuration to ship to your self-hosted Logstash instance. For reference, here is the legacy example of Filebeat shipping to the Coralogix-hosted Logstash:

#============================== Filebeat Inputs ===============================

filebeat.inputs:
- type: log
  paths:
  - "/var/log/your_app/your_app.log"

fields_under_root: true
fields:
  PRIVATE_KEY: "YOUR_PRIVATE_KEY"
  COMPANY_ID: YOUR_COMPANY_ID
  APP_NAME: "APP_NAME"
  SUB_SYSTEM: "SUB_NAME"

#----------------------------- Logstash output --------------------------------

output.logstash:
  enabled: true
  hosts: ["logstashserver.<coralogix_domain>:5015"]
  ttl: 60s
  tls.certificate_authorities: ["<path to folder with certificates>/ca.crt"]
  ssl.certificate_authorities: ["<path to folder with certificates>/ca.crt"]

New configuration example:

#============================== Filebeat Inputs ===============================

filebeat.inputs:
- type: log
  paths:
  - "/var/log/your_app/your_app.log"

#----------------------------- Logstash output --------------------------------

output.logstash:
  enabled: true
  hosts: ["<logstash-ip-address>:5044"]

Update the configuration of your self-hosted Logstash instance. Use the example below to receive Filebeat logs, add metadata, and ship them to Coralogix.

input {
  beats {
    port => 5044
  }
}

filter {
  ruby {
    code => "
      event.set('[@metadata][application]', 'APP_NAME')
      event.set('[@metadata][subsystem]', 'SUB_NAME')
      event.set('[@metadata][event]', event.to_json)
      event.set('[@metadata][host]', event.get('host'))
    "
  }
}
output {
    http {
        url => "https://ingress.<coralogix_domain>/logs/v1/singles"
        http_method => "post"
        headers => ["authorization", "Bearer <Coralogix Send-Your-Data API key>"]
        format => "json_batch"
        codec => "json"
        mapping => {
            "applicationName" => "%{[@metadata][application]}"
            "subsystemName" => "%{[@metadata][subsystem]}"
            "computerName" => "%{[@metadata][host]}"
            "text" => "%{[@metadata][event]}"
        }
        http_compression => true
        automatic_retries => 5
        retry_non_idempotent => true
        connect_timeout => 30
        keepalive => false
    }
}
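If logs don't appear after the cutover, you can take Logstash out of the loop and POST a minimal payload directly to the singles endpoint. The payload below mirrors the mapping in the output block above; the eu1 endpoint and API key in the commented curl command are placeholders for your own region and key:

```shell
# Build a minimal payload in the shape the json_batch mapping above produces,
# then (optionally) POST it with curl to verify credentials and connectivity.
payload='[{"applicationName":"my-app","subsystemName":"my-subsystem","computerName":"host-1","text":"manual ingestion test"}]'
printf '%s' "$payload" > /tmp/cx_payload.json
# Uncomment and fill in your region and API key to send it:
# curl -s -X POST "https://ingress.eu1.coralogix.com/logs/v1/singles" \
#   -H "Authorization: Bearer <Send-Your-Data API key>" \
#   -H "Content-Type: application/json" \
#   --data @/tmp/cx_payload.json
python3 -m json.tool /tmp/cx_payload.json > /dev/null && echo "payload is valid JSON"
```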

Best practices

We recommend a starting point of at least 8 GB RAM, 4 CPU cores, and SSD storage. Adjust up or down from there depending on your log volume and throughput.
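A related setting worth reviewing is the Logstash JVM heap. A common rule of thumb (a sketch, not a requirement) is to give Logstash roughly half the host's memory, with equal minimum and maximum heap sizes, e.g. on the 8 GB host above:

```
## /etc/logstash/jvm.options (excerpt)
-Xms4g
-Xmx4g
```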

What happens after June 30, 2026

  • The Coralogix-hosted Logstash service will be permanently shut down.
  • Any workloads still sending logs through the hosted Logstash service will stop ingesting into Coralogix.
  • Dashboards, alerts, and monitoring that rely on those logs may show missing data after the cutoff.

Need help?

Contact Coralogix Support through the in-app chat (24/7) or reach out to your Technical Account Manager.