First Steps with OpenTelemetry Collector and FluentBit

About OpenTelemetry and FluentBit

OpenTelemetry is an open-source observability framework for monitoring and tracing distributed applications. It provides APIs, libraries, and agents to collect telemetry data, such as metrics, logs, and traces, from applications and services. FluentBit is a high-performance, lightweight log processor and forwarder that supports multiple input and output plugins, making it highly adaptable for different data pipelines.

Benefits of OpenTelemetry and FluentBit

Integrating OpenTelemetry Collector with FluentBit offers several advantages, including:

  • Comprehensive observability: Combine logs, metrics, and traces from various sources into a unified data pipeline.

  • Scalability: Both OpenTelemetry and FluentBit are designed to handle high volumes of data with minimal resource consumption.

OpenTelemetry Plugin for FluentBit

The OpenTelemetry output plugin for FluentBit enables users to send logs, metrics, and traces to the OpenTelemetry Collector. It supports OpenTelemetry HTTP (OTLP/HTTP) endpoints and can be easily configured to forward data to the Collector.

Getting Started with the FluentBit OpenTelemetry Plugin

This section will demonstrate how to set up the OpenTelemetry Collector and FluentBit, and how to send data to Jaeger UI and Prometheus using Docker Compose.

In this walkthrough, we will work within the AWS Free Tier on an EC2 instance running Amazon Linux. A similar environment can also be created with Rocky Linux, CentOS, or Ubuntu on GCP.

Two prerequisites must be installed in your environment for the setup to run smoothly: Docker and Docker Compose.
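
If Docker and Docker Compose are not yet installed, the commands below are one way to set them up on Amazon Linux 2. Treat this as a sketch: package names, release versions, and download URLs vary by distribution and change over time, so check the official Docker documentation for your environment.

# Install and start Docker (Amazon Linux 2)
sudo yum install -y docker
sudo systemctl enable --now docker
sudo usermod -aG docker $USER   # optional: run docker without sudo (log out and back in)

# Install the standalone Docker Compose binary (v2.17.2 is only an example version)
sudo curl -L "https://github.com/docker/compose/releases/download/v2.17.2/docker-compose-linux-x86_64" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose version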

First, create a docker-compose.yaml file with the following content.

version: "3"
services:

  # Jaeger
  jaeger:
    image: jaegertracing/all-in-one:latest
    container_name: jaeger
    ports:
      - "16686:16686"
      - "14250:14250"

  # Prometheus
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  # OtelCollector
  otel-collector:
    image: otel/opentelemetry-collector:latest
    container_name: otel-collector
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "13133:13133" # Health_check extension
      - "4318:4318"   # OTLP http receiver
    depends_on:
      - jaeger
      - prometheus
    restart: on-failure

This file defines the services for Jaeger, Prometheus, and the OpenTelemetry Collector. Ensure that the OpenTelemetry Collector container has the appropriate ports exposed and depends on the Jaeger and Prometheus services.
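
Before moving on, you can optionally sanity-check the Compose file; docker-compose config parses it and prints the resolved configuration, or reports any syntax errors.

docker-compose config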

Next, create a FluentBit configuration file named flb-out-otel.conf with the following content.

[SERVICE]
    Flush                1
    Log_level            info

[INPUT]
    Name                 node_exporter_metrics
    Tag                  node_metrics
    Scrape_interval      2

[INPUT]
    Name                 event_type
    Type                 traces
    Tag                  node_metrics

[INPUT]
    Name                 tail
    Tag                  sample.log
    Path                 /var/log/sample.log
    Read_from_Head       True

[OUTPUT]
    Name                 opentelemetry
    Match                *
    Host                 127.0.0.1
    Port                 4318
    Metrics_uri          /v1/metrics
    Traces_uri           /v1/traces
    Logs_uri             /v1/logs
    Log_response_payload True

This file defines the inputs for node_exporter_metrics, event_type traces, and tail, and configures the OpenTelemetry output plugin to send data to the Collector.

The node_exporter_metrics input plugin collects system-level metrics such as CPU, disk, network, and process statistics from your operating system. The event_type input plugin, configured with Type traces, generates sample trace data for demonstration purposes. The tail input plugin monitors one or more text files. For more information, refer to the Fluent Bit documentation for the tail plugin.
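
Because the tail input reads /var/log/sample.log, make sure that file exists and is readable before FluentBit starts. The commands below create it and append a test line; the message itself is only an illustration.

sudo touch /var/log/sample.log
sudo chmod 644 /var/log/sample.log
echo "sample log line from $(hostname)" | sudo tee -a /var/log/sample.log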

Create an OpenTelemetry Collector configuration file named otel-collector-config.yaml with the following content.

receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  logging:
    verbosity: detailed

  otlp/jaeger:
    endpoint: jaeger:4317 # Jaeger's OTLP gRPC endpoint
    tls:
      insecure: true

  prometheus:
    endpoint: 0.0.0.0:9090
    send_timestamps: true
    namespace: promexample
    const_labels:
      label1: value1

processors:
  batch:
    timeout: 10s

extensions:
  health_check:

service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]

This file sets up the OTLP receiver and the exporters for Jaeger, Prometheus, and logging. Below is a breakdown of each section.

receivers: This section defines the input receivers for the Collector. In this case, we specify only the OTLP receiver with the HTTP endpoint 0.0.0.0:4318.

exporters: This section defines the output exporters for the Collector. It includes exporters for logging, Jaeger, and Prometheus.

processors: This section defines the processors that will be applied to the data. In this case, only the batch processor is used, which batches incoming data to minimize network overhead.

extensions: This section defines any extensions that will be used by the service. In this case, the health_check extension is included, which provides an endpoint for health checks.

service: This section defines the service pipelines, which specify how the data will be processed and exported. The traces, metrics, and logs pipelines are defined, each with their respective receivers, processors, and exporters. For example, the traces pipeline receives data from the OTLP receiver, applies the batch processor, and exports it to the Jaeger exporter.

Finally, create a Prometheus configuration file named prometheus.yaml with the following content.

global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: prometheus

    scrape_interval: 5s
    scrape_timeout: 2s
    honor_labels: true

    static_configs:
      - targets: ["localhost:9090"]

This file sets the global scrape interval, evaluation interval, and scrape configuration for Prometheus.
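
Optionally, you can validate this file with the promtool utility that ships inside the prom/prometheus image; the command below assumes prometheus.yaml is in your current directory.

docker run --rm -v "$PWD/prometheus.yaml:/etc/prometheus/prometheus.yml" --entrypoint /bin/promtool prom/prometheus:latest check config /etc/prometheus/prometheus.yml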

With all configuration files in place, run docker-compose up -d to start the services. FluentBit will now collect logs, metrics, and traces and forward them to the OpenTelemetry Collector, which in turn exports the data to Jaeger, Prometheus, and the logging exporter.
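
To confirm the stack came up correctly, list the containers and probe the Collector's health_check extension, which the Compose file maps to port 13133 on the host; an HTTP 200 response indicates the Collector is up.

docker-compose ps
curl -i http://localhost:13133/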

Next, launch FluentBit with the command fluent-bit -c flb-out-otel.conf.
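
If FluentBit is not yet installed on the host, the official install script is one option. The commands below are a sketch for a typical Linux setup; the packaged binary is usually installed under /opt/fluent-bit/bin/ and may not be on your PATH.

curl https://raw.githubusercontent.com/fluent/fluent-bit/master/install.sh | sh
/opt/fluent-bit/bin/fluent-bit -c flb-out-otel.conf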

The FluentBit output will display the connection status, similar to the following:

[2023/03/27 17:25:41] [ info] [input:event_type:event_type.1] [OK] collector_time
[2023/03/27 17:25:42] [ info] [output:opentelemetry:opentelemetry.0] 127.0.0.1:4318, HTTP status=200

You can now access the Jaeger UI by opening http://localhost:16686 in your web browser (replace localhost with your EC2 instance's public address if you are browsing from another machine). Once there, locate the traces exported under the service name OTLPResourceNoServiceName.
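
If you prefer the command line, Jaeger's query API (the same API the UI uses) can list the known service names; OTLPResourceNoServiceName should appear in the response.

curl http://localhost:16686/api/services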

To check the metrics available in Prometheus, go to http://localhost:9090. There you can find promhttp_metric_handler_requests_total, a metric Prometheus exposes about its own activity; it records the total number of requests the Prometheus server has handled on its /metrics endpoint.
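
The same metric can also be queried through Prometheus' HTTP API, which is convenient on a headless server.

curl 'http://localhost:9090/api/v1/query?query=promhttp_metric_handler_requests_total'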

To monitor the output of the OpenTelemetry Collector, run docker logs -f otel-collector; the logging exporter prints the telemetry received from FluentBit to the container's logs.

All steps are now complete. To gracefully shut down the Docker Compose stack, use the docker-compose down command. This command stops and removes the containers and networks that were created by docker-compose up.

Conclusion

By integrating OpenTelemetry Collector with FluentBit, users can streamline their observability and create efficient, scalable data pipelines for logs, metrics, and traces. With the provided configuration files and Docker Compose setup, getting started with this powerful combination is simple and straightforward.

Need some help? - We are here for you.

Through the Fluentd Subscription Network, we provide consultancy and professional services to help you run Fluentd and Fluent Bit with confidence and to solve your operational pain points. A service desk is also available for your operations, staffed by a team equipped with Diagtool and practical knowledge of running Fluent Bit/Fluentd in production. Contact us anytime if you would like to learn more about our service offerings!
