
Kubernetes Collection v4.0.0 - How to Upgrade

This guide walks you through upgrading to Sumo Logic Kubernetes Collection v4.0.0, including key changes, migration steps, and best practices to ensure a smooth transition. Here's what’s new:

  • OpenTelemetry (OTel) as the default metrics pipeline
  • Removal of Fluent Bit and Fluentd configurations
  • New ServiceMonitors and OTel processors for filtering metrics
  • Updated CRDs for OpenTelemetry Operator

Before proceeding, ensure you meet the requirements and review the necessary configuration changes detailed in this guide.

Requirements

  • Helm 3
  • kubectl
  • Set the following environment variables, which our commands will use:
    export NAMESPACE=...
    export HELM_RELEASE_NAME=...
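For example (the values below are placeholders; substitute your own namespace and release name):

```shell
# Example values only - replace with your actual namespace and release name.
export NAMESPACE=sumologic
export HELM_RELEASE_NAME=my-release
```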

Metrics migration

If you do not have metrics collection enabled, skip straight to the next major section.

Convert Prometheus remote writes to OTel metrics filters

When?: If you have custom remote writes defined in kube-prometheus-stack.prometheus.prometheusSpec.additionalRemoteWrite.

When using Prometheus for metrics collection in v3, we relied on remote writes to filter forwarded metrics. OTel, the default in v4, does not support remote writes, so this functionality has moved to OTel processors, or to ServiceMonitors where the same result can be achieved there.

There are several scenarios here, depending on the exact use case:

  1. You're collecting different Kubernetes metrics than what the Chart provides by default. You've modified the existing ServiceMonitor for these metrics, and added a remote write as instructed by the documentation.

    You can safely delete the added remote write definition. No further action is required.

  2. As above, but you're also doing some additional data transformation via relabelling rules in the remote write definition.

    You'll need to either move the relabelling rules into the ServiceMonitor itself, or add an equivalent filter processor rule to OTel.

  3. You're collecting custom application metrics by adding a prometheus.io/scrape annotation. You do not need to filter these metrics.

    No action is needed.

  4. As above, but you also have a remote write definition to filter these metrics.

    You'll need to delete the remote write definition and add an equivalent filter processor rule to OTel.
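As an illustration of scenarios 2 and 4, a keep-style remote write filter can be expressed as an OpenTelemetry filter processor. This is a minimal sketch: the processor name and metric names below are hypothetical, and where the processor is wired into the chart's metrics pipeline depends on your configuration.

```yaml
processors:
  # Hypothetical filter keeping only selected application metrics,
  # equivalent to a `keep` writeRelabelConfig matching on __name__.
  filter/app-metrics:
    metrics:
      include:
        match_type: regexp
        metric_names:
          - my_app_requests_total        # example metric names - replace with your own
          - my_app_request_duration_.*
```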

Upgrade the Kubernetes app

When?: If you use the Sumo Logic Kubernetes app

The Sumo Logic Kubernetes app relied on recording-rule metrics that have been removed in version 4. A new version of the app must be installed to ensure compatibility with version 4 of the Helm Chart. See here for upgrade instructions.

Using the new app with v3

To make the migration simpler, it's possible to configure v3 to be compatible with the new app. This way, you can migrate to the new app before migrating to version 4. The configuration for version 3 is the following:

kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      additionalRemoteWrite:
        - url: http://$(METADATA_METRICS_SVC).$(NAMESPACE).svc.cluster.local.:9888/prometheus.metrics.node
          remoteTimeout: 5s
          writeRelabelConfigs:
            - action: keep
              regex: node-exporter;(?:node_load1|node_load5|node_load15|node_cpu_seconds_total|node_disk_io_time_weighted_seconds_total|node_disk_io_time_seconds_total|node_vmstat_pgpgin|node_vmstat_pgpgout|node_memory_MemFree_bytes|node_memory_MemAvailable_bytes|node_memory_Cached_bytes|node_memory_Buffers_bytes|node_memory_MemTotal_bytes|node_network_receive_drop_total|node_network_transmit_drop_total|node_network_receive_bytes_total|node_network_transmit_bytes_total|node_filesystem_avail_bytes|node_filesystem_size_bytes)
              sourceLabels: [job, __name__]
prometheus-node-exporter:
  prometheus:
    monitor:
      metricRelabelings:
        - action: keep
          regex: (?:node_load1|node_load5|node_load15|node_cpu_seconds_total|node_disk_io_time_weighted_seconds_total|node_disk_io_time_seconds_total|node_vmstat_pgpgin|node_vmstat_pgpgout|node_memory_MemFree_bytes|node_memory_MemAvailable_bytes|node_memory_Cached_bytes|node_memory_Buffers_bytes|node_memory_MemTotal_bytes|node_network_receive_drop_total|node_network_transmit_drop_total|node_network_receive_bytes_total|node_network_transmit_bytes_total|node_filesystem_avail_bytes|node_filesystem_size_bytes)
          sourceLabels: [__name__]

Update custom resource definitions for the OpenTelemetry Operator

note

Starting with v4.12.0, please follow the steps below.

Delete any existing CRDs:

kubectl delete crd instrumentations.opentelemetry.io

kubectl delete crd opentelemetrycollectors.opentelemetry.io

kubectl delete crd opampbridges.opentelemetry.io

Install the CRDs below:

kubectl apply --server-side -f https://raw.githubusercontent.com/SumoLogic/sumologic-kubernetes-collection/refs/tags/v4.12.0/deploy/helm/sumologic/crds/crd-opentelemetry.io_opampbridges.yaml --force-conflicts

kubectl apply --server-side -f https://raw.githubusercontent.com/SumoLogic/sumologic-kubernetes-collection/refs/tags/v4.12.0/deploy/helm/sumologic/crds/crd-opentelemetrycollector.yaml --force-conflicts

kubectl apply --server-side -f https://raw.githubusercontent.com/SumoLogic/sumologic-kubernetes-collection/refs/tags/v4.12.0/deploy/helm/sumologic/crds/crd-opentelemetryinstrumentation.yaml --force-conflicts

Then, annotate and label these CRDs as below:

kubectl annotate crds instrumentations.opentelemetry.io opentelemetrycollectors.opentelemetry.io opampbridges.opentelemetry.io \
  meta.helm.sh/release-name=${HELM_RELEASE_NAME} \
  meta.helm.sh/release-namespace=${NAMESPACE}

kubectl label crds instrumentations.opentelemetry.io opentelemetrycollectors.opentelemetry.io opampbridges.opentelemetry.io app.kubernetes.io/managed-by=Helm

note

For chart versions prior to v4.12.0, apply the CRDs below instead:

kubectl apply -f https://raw.githubusercontent.com/open-telemetry/opentelemetry-helm-charts/opentelemetry-operator-0.56.1/charts/opentelemetry-operator/crds/crd-opentelemetry.io_opampbridges.yaml

kubectl apply -f https://raw.githubusercontent.com/open-telemetry/opentelemetry-helm-charts/opentelemetry-operator-0.56.1/charts/opentelemetry-operator/crds/crd-opentelemetryinstrumentation.yaml

kubectl apply -f https://raw.githubusercontent.com/open-telemetry/opentelemetry-helm-charts/opentelemetry-operator-0.56.1/charts/opentelemetry-operator/crds/crd-opentelemetrycollector.yaml

How to revert to the v3 defaults

Set the following in your configuration:

sumologic:
  metrics:
    collector:
      otelcol:
        enabled: false
    remoteWriteProxy:
      enabled: true

kube-prometheus-stack:
  prometheus:
    enabled: true
  prometheusOperator:
    enabled: true

Starting with v4.12.0, please use the configuration below:

sumologic:
  metrics:
    collector:
      otelcol:
        enabled: false
    remoteWriteProxy:
      enabled: true

kube-prometheus-stack:
  prometheus:
    enabled: true
  prometheusOperator:
    enabled: true

opentelemetry-operator:
  crds:
    create: true

Remove remaining Fluent Bit and Fluentd configuration

If you've already switched to OTel, skip straight to the next major section.

The following configuration options are no longer used and should be removed from your user-values.yaml:

  • fluent-bit.*
  • sumologic.logs.collector.allowSideBySide
  • sumologic.logs.defaultFluentd.*
  • fluentd.*
  • sumologic.logs.metadata.provider
  • sumologic.metrics.metadata.provider
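If you want to double-check a large values file, a small script can flag any of these keys that are still present. This is a hypothetical helper, not part of the chart; it assumes the YAML has already been loaded into a dict (e.g. with PyYAML).

```python
# Sketch: flag deprecated v3 keys still present in a parsed user-values.yaml.
DEPRECATED = [
    "fluent-bit",
    "sumologic.logs.collector.allowSideBySide",
    "sumologic.logs.defaultFluentd",
    "fluentd",
    "sumologic.logs.metadata.provider",
    "sumologic.metrics.metadata.provider",
]

def find_deprecated(values: dict) -> list[str]:
    """Return the deprecated dotted paths present in the values dict."""
    def present(d, path):
        for part in path.split("."):
            if not isinstance(d, dict) or part not in d:
                return False
            d = d[part]
        return True
    return [p for p in DEPRECATED if present(values, p)]

# Example: a values file that still carries Fluentd settings.
values = {"fluentd": {"logLevel": "info"},
          "sumologic": {"logs": {"metadata": {"provider": "fluentd"}}}}
print(find_deprecated(values))  # ['fluentd', 'sumologic.logs.metadata.provider']
```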

Configuration migration

See the v3 migration guide.

In addition, the following changes have been made:

  • otelevents.serviceLabels has been introduced as a replacement for fluentd.serviceLabels for the events service
  • sumologic.events.sourceName is now used instead of fluentd.events.sourceName to build the _sourceCategory for events

If you've changed the values of either of these two options, you'll need to adjust your configuration accordingly.
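For example, a v3 customization of these options would move as follows (the label and source name values here are placeholders):

```yaml
# v3 keys (remove these):
fluentd:
  serviceLabels:
    environment: prod   # hypothetical label
  events:
    sourceName: my-events-source

# v4 equivalents:
otelevents:
  serviceLabels:
    environment: prod   # hypothetical label
sumologic:
  events:
    sourceName: my-events-source
```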

Switch to OTLP sources

note

Both source types will be created by the setup job. The settings discussed here affect which source is actually used.

When?: You use the _sourceName or _source fields in your Sumo queries.

The only solution is to update the queries in question. In general, writing queries against specific sources, rather than against semantic attributes of the data, is an antipattern.
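For example, a query pinned to a source name could be rewritten to key on Kubernetes metadata fields instead (all names below are hypothetical):

```
// Before: tied to the specific source the setup job created
_sourceName=my-http-source error | count by namespace

// After: keyed on semantic attributes of the data
cluster=prod-cluster namespace=my-app error | count by namespace
```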

How to revert to the v3 defaults

Set the following in your configuration:

sumologic:
  logs:
    sourceType: http
  metrics:
    sourceType: http
  traces:
    sourceType: http
  events:
    sourceType: http

tracesSampler:
  config:
    exporters:
      otlphttp:
        traces_endpoint: ${SUMO_ENDPOINT_DEFAULT_TRACES_SOURCE}/v1/traces

Running the helm upgrade

Once you've taken care of any manual steps necessary for your configuration, run the helm upgrade:

helm upgrade --namespace "${NAMESPACE}" "${HELM_RELEASE_NAME}" sumologic/sumologic --version=4.0.0 -f new-values.yaml

note

Make sure to replace --version=4.0.0 with the chart version you want to use. You can find the latest release in our Sumo Logic Kubernetes Collection GitHub repository.

After you're done, please review the full list of changes, as some of them may impact you even if they do not require additional action.
