
Enable Fleet Management for Linux host OpenTelemetry Collectors

Overview

Use this guide to enable remote configuration in Fleet Management for OpenTelemetry Collectors deployed on Linux hosts. After deployment, Fleet Manager provides centralized control over host agent configurations.

Fleet Management uses the OpenTelemetry Supervisor to apply remote configuration safely. Each host runs a single Collector instance with a Supervisor that polls the Fleet Manager, retrieves configuration updates, and restarts the Collector if needed.

Unlike the Kubernetes deployment—which manages Agent, Cluster Collector, and Gateway configurations as a family—a Linux host deployment manages a single agent per host. Configuration targeting uses host-specific selectors such as operating system, cloud environment, and host attributes instead of cluster name and agent type.

See the Supervisor and configuration behavior details in the Fleet Manager architecture documentation and Configuration Deep Dive.

Before you begin

Set up Fleet Management for Linux hosts

Step 1: Configure the Linux host OTel integration

Follow the installation steps of the Host Observability integration in the Coralogix UI. Select your operating system, CPU architecture, and observability features (logs, metrics, APM).

Step 2: Activate Fleet Management

  1. Turn on the Fleet Management toggle. This gives you visibility in the UI into the health, version, resource usage, and coverage of your host Collectors.

Step 3: Activate remote configuration

  1. [Recommended] Activate Remote configuration to apply configurations to agents remotely.

Enabling remote configuration allows Fleet Manager to deliver configuration updates to the Supervisor on each host. Without it, agents appear in the Agent catalog but operate in read-only mode.

Step 4: Install the agent and download the configuration

Complete the installation and review steps in the Host Observability integration. Download the generated configuration package at the end of the flow — you will use these files when creating Fleet configurations.

Step 5: Configure the Supervisor as a systemd service

The Supervisor manages the Collector lifecycle and communicates with Fleet Manager using the Open Agent Management Protocol (OpAMP). Configure it as a systemd service so it starts automatically and restarts on failure.

  1. Place the Supervisor configuration file at the expected path:

    /etc/otelcol-coralogix/supervisor.yaml
    
  2. The Supervisor configuration should include your Coralogix domain endpoint and API key. A minimal example:

    server:
      endpoint: wss://opamp-server.<YOUR_CORALOGIX_DOMAIN>
      headers:
        Authorization: "Bearer <YOUR_API_KEY>"
    agent:
      executable: /usr/bin/cdot
      config_apply_timeout: 30s
    capabilities:
      reports_effective_config: true
      reports_health: true
      accepts_remote_config: true
    storage:
      directory: /var/lib/otelcol-coralogix
    

    S3 configuration fallback

    To enable S3-based configuration fallback, add initial_fallback_configs to the agent section of your Supervisor configuration. You can specify multiple fallback paths — the Supervisor tries them in order and uses the first that succeeds.

    agent:
      executable: /usr/bin/cdot
      initial_fallback_configs:
        - "s3://YOUR_BUCKET_NAME.s3.YOUR_REGION.amazonaws.com/"
    

    The Supervisor also needs AWS credentials. Provide them through an EC2 instance profile or environment variables in the systemd unit file (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION).

    Unlike Kubernetes deployments where you can set this through Helm chart values, host deployments require you to manually add this to the Supervisor configuration file.

  3. Enable and start the Supervisor systemd service:

    sudo systemctl enable coralogix-supervisor
    sudo systemctl start coralogix-supervisor
    
  4. Verify the service is running:

    sudo systemctl status coralogix-supervisor
    

The Supervisor starts with a minimal bootstrap configuration. Once connected to Fleet Manager, it retrieves and applies the remote configuration. See configuration behavior details in the Configuration Deep Dive documentation.
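If your package does not install a unit file, the systemd service referenced in the steps above might look like the following sketch. The unit name, binary path, and flags are assumptions based on the paths used elsewhere in this guide; adjust them to match your installation. AWS credentials for the S3 configuration fallback can be supplied here as Environment= lines (or in a drop-in file).

```ini
# /etc/systemd/system/coralogix-supervisor.service (illustrative sketch)
[Unit]
Description=Coralogix OpenTelemetry Supervisor
After=network-online.target
Wants=network-online.target

[Service]
# Binary path and --config flag are assumptions; use what your package installed.
ExecStart=/usr/bin/opampsupervisor --config /etc/otelcol-coralogix/supervisor.yaml
Restart=on-failure
RestartSec=5
# Optional: AWS credentials for the S3 configuration fallback.
# Environment=AWS_REGION=us-east-1
# Environment=AWS_ACCESS_KEY_ID=...
# Environment=AWS_SECRET_ACCESS_KEY=...

[Install]
WantedBy=multi-user.target
```

After creating or editing the unit file, run `sudo systemctl daemon-reload` before enabling the service.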

Step 6: Validate the deployment

After installing the agent and starting the Supervisor, verify that your host Collector has successfully registered with Fleet Manager.

  1. Access Fleet Management, then Agents.
  2. Confirm that the host agent appears in the agent list with the expected hostname and attributes.

If the agent appears correctly, Fleet connectivity is working and you can proceed to create and activate configurations.

Step 7: Create a configuration group in Fleet Management

  1. Access Fleet Management, then Configurations.
  2. Select + New configuration.

This opens a new configuration group, which contains all versions of a single configuration.

Unlike Kubernetes deployments—which use configuration families to manage Agent, Cluster Collector, and Gateway configurations together—a Linux host configuration group targets a single agent type. Learn more in the Configuration management guide.

Step 8: Add the host configuration

Create a new configuration using the configuration file from the downloaded package.

Set the host selectors to target the correct agents:
    Selector             Example value              Description
    cx.agent.type        agent                      Targets host agents
    cx.os.type           linux                      Targets Linux hosts
    cx.cloud.provider    aws, gcp, azure, on-prem   Targets agents by cloud environment
    cx.host.name         web-server-01              Targets a specific host
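For reference, the configuration you add in this step is a standard OpenTelemetry Collector configuration. The following is a minimal sketch of the kind of pipeline such a package might contain, assuming the hostmetrics receiver and the Coralogix exporter from the Collector contrib distribution; the domain, key, application, and subsystem values are placeholders, not values from this guide.

```yaml
receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
      disk:
      filesystem:

processors:
  batch: {}

exporters:
  coralogix:
    domain: "<YOUR_CORALOGIX_DOMAIN>"
    private_key: "<YOUR_API_KEY>"
    application_name: "linux-hosts"
    subsystem_name: "host-metrics"

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      processors: [batch]
      exporters: [coralogix]
```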

Step 9: Save and activate the configuration

Follow these steps to save and, optionally, activate the configuration.

  1. Select Save in the configuration editor.
  2. (Optional) Select Activate immediately to apply the configuration to all matching host agents as soon as it is saved.
  3. Select Save again to create the new version.

The configuration group now includes the new version, and—if activation was selected—the Supervisor on each matching host pulls and applies it.

Step 10: Check configuration status

Verify that the configuration list loads and displays the configuration with:

  • Current state (Active or Inactive)
  • Agent application status (Healthy, Warning, Error)
  • Version information

How it works

Fleet Management for Linux hosts follows the same architecture as Kubernetes deployments, with these differences:

  • Single agent per host — Each host runs one Collector instance managed by one Supervisor, rather than multiple Collector types (Agent, Cluster Collector, Gateway).
  • Package-based installation — The Collector and Supervisor are installed using native OS packages (deb/rpm) instead of Helm charts.
  • systemd lifecycle — The Supervisor runs as a systemd service, providing automatic startup, restart on failure, and standard Linux service management.
  • Host-specific selectors — Configuration targeting uses host attributes (operating system, cloud environment, hostname) instead of Kubernetes cluster name and agent type.

Learn more