
2022 iThome 鐵人賽

DAY 15

A Brief Look at DevOps and Observability series, part 15

A Brief Look at OpenTelemetry - Collector Components

I really fell asleep in seconds yesterday :)

Today let's finish covering the three important components in the configuration file.
Yesterday we went over the design goals and purpose of the OTel collector, as well as pipelines and what a pipeline looks like in YAML.
Yesterday's post, A Brief Look at OpenTelemetry - Collector, is there if you need a refresher.

Today, let's start with the components.

Receivers

Receivers listen on a network port and accept incoming telemetry data.
A receiver is configured under the receivers key: give the receiver name (like opencensus or otlp below), followed by that receiver's settings.

The first format is for a receiver that supports only one protocol; just configure the endpoint directly.

receivers:
  <receiver name>:
    endpoint: <network interface and port to bind to, address:port>
    enabled: <boolean, defaults to true>
    # other key/value pairs as needed by specific receiver type

The second format is for a receiver that can listen on and receive multiple protocols.

receivers:
  <receiver name>:
    protocols:
      <protocol name 1>: # key is string, protocol name, unique
        endpoint: <network interface and port to bind to, address:port>
        enabled: <boolean, defaults to true>
        # other key/value pairs as needed by specific receiver type
      <protocol name 2>:
        # settings for protocol 2
      ...
      <protocol name N>:
        # settings for protocol N

In the example below, opencensus uses the first format,
while otlp uses the second.

receivers:
  opencensus:
    endpoint: "0.0.0.0:55678" 
  otlp:
    protocols:
      grpc:
      http:
        endpoint: "localhost:4318"
        cors:
          allowed_origins:
            - http://test.com
            # Origins can have wildcards with *, use * by itself to match any origin.
            - https://*.example.com
          allowed_headers:
            - Example-Header
          max_age: 7200

Important!

  1. Don't bind multiple receivers to the same network interface and the same port; in other words, the same port on different interfaces is fine.
  2. Don't declare a receiver and then leave it unused in the pipelines we talked about yesterday.
  3. Multiple pipelines can use the same receiver, because when a receiver gets telemetry data it can fan it out to every pipeline that references it (see the sketch right after this list).
  4. Every receiver's configuration needs a port and a network interface address (the default is 127.0.0.1).
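
For point 3, here is a minimal sketch of that fanout. This is just my own illustration: it assumes the otlp receiver and the logging and otlp exporters are included in your Collector build, and the endpoints are placeholders.

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"

exporters:
  logging:                                # write received data to the Collector's own log
  otlp:
    endpoint: "backend.example.com:4317"  # placeholder backend address

service:
  pipelines:
    traces:                    # both pipelines list the same otlp receiver,
      receivers: [otlp]        # so each batch of received data is fanned out to both of them
      exporters: [logging]
    traces/backend:
      receivers: [otlp]
      exporters: [otlp]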

Receivers built into the Collector

Collector receivers
It provides the receiver for the OTLP protocol, which itself supports multiple transport protocols.

Receivers provided by Collector-contrib

Collector-contrib receivers link
Follow the link to find the receiver for the protocol whose telemetry data you want to receive.
For example, if I want to collect host metrics, I need hostmetrics.
Each receiver's Getting Started section has examples; just follow the instructions.

receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:

  hostmetrics/disk:
    collection_interval: 1m
    scrapers:
      disk:
      filesystem:

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, hostmetrics/disk]

Processors

A processor is responsible for processing the data or modifying its content.

The declaration format is as follows

processors:
  <processor name>: # key is string, unique name of processor
    enabled: <boolean, defaults to true>
    # other key/value pairs as needed by specific processor type

As mentioned yesterday, a pipeline can contain an ordered set of processors.
The first processor in the pipeline receives the telemetry data gathered from all of the receivers attached to that pipeline, while the last processor is responsible for sending the data to the pipeline's configured exporter(s).
Every processor in between strictly receives data only from the previous processor's output, and sends its data only to the next processor.
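
Here is a minimal sketch of that ordering. It's my own example, assuming the memory_limiter and batch processors and the otlp receiver/exporter are available in the build.

processors:
  memory_limiter:     # first processor: receives the data coming in from the pipeline's receivers
    check_interval: 1s
    limit_mib: 400
  batch:              # last processor: hands the batched data to the pipeline's exporters
    timeout: 5s

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]   # strictly ordered: memory_limiter -> batch
      exporters: [otlp]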

If we define multiple pipelines that happen to reference a processor with the same name,
each pipeline still gets its own processor instance, so each instance also has its own state.

processors:
  batch:
    send_batch_size: 10000
    timeout: 10s

service:
  pipelines:
    traces:  # a pipeline of “traces” type
      receivers: [zipkin]
      processors: [batch]
      exporters: [jaeger]
    traces/2:  # another pipeline of “traces” type
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]

In the example above, the batch processor is used by two pipelines.
The batch processors in traces and traces/2 are actually two independent instances, but they share the same configuration.
(In terms of object lifecycle management, the batch processor is not a singleton; two copies are created, each aggregated inside its own pipeline object.)

Processors built into the Collector

Collector processors
Two very commonly used processors:

  • Batch
  • Memory Limiter

Processors provided by Collector-contrib

Collector-contrib processors link
Follow the link to find the processor you want to apply to your telemetry data.
Each processor's Getting Started section has examples; just follow the instructions.

Exporters

As discussed before, an exporter is responsible for forwarding data to its destination (or to a local file).

The declaration format is as follows

exporters:
  <exporter name>: # key is string, unique name of exporter
    endpoint: <destination to send data to, address:port>
    enabled: <boolean, defaults to true>
    # other key/value pairs as needed by specific exporter type

You can declare multiple exporters of the same type.

exporters:
  opencensus/1:
    endpoint: "example.com:14250"
  opencensus/2:
    endpoint: "0.0.0.0:14250"

Like this, you may need to forward the same opencensus data to different destinations as required.
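
For instance, here is a quick sketch of wiring those two exporters into different pipelines. The zipkin and otlp receivers are placeholders of my own and would need to be declared as well.

service:
  pipelines:
    traces:          # data from one source goes to the first destination
      receivers: [zipkin]
      exporters: [opencensus/1]
    traces/2:        # data from another source goes to the second destination
      receivers: [otlp]
      exporters: [opencensus/2]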

As with processors, multiple pipelines can be configured to point at the same named exporter.

exporters:
  jaeger:
    protocols:
      grpc:
        endpoint: "0.0.0.0:14250"

service:
  pipelines:
    traces:  # a pipeline of “traces” type
      receivers: [zipkin]
      processors: [memory_limiter]
      exporters: [jaeger]
    traces/2:  # another pipeline of “traces” type
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]

In the example above, both pipelines point at the same jaeger exporter.

Exporters built into the Collector

Collector exporters

Exporters provided by Collector-contrib

Collector-contrib exporters link
Follow the link to find the exporter for the destination you want to send your telemetry data to.
Each exporter's Getting Started section has examples; just follow the instructions.

Online Collector Config generator

OpenTelemetry Collector Configurator
Here is an online configuration file generator; of course, an SRE team could also just write their own :)

Today's takeaways

We can see that Collector-contrib offers a huge number of vendor-related and community-contributed processor packages.

If you want to get involved in contributing to open source, this looks like a pretty good path XD

Yesterday I mentioned that the design goals include a unified, single codebase: the hope is that developers will contribute the surrounding packages from their own ecosystems, so everyone can use them easily without reinventing the wheel.

References

OTel Collector

