
Collecting Metrics

This document covers several use cases related to scraping custom application metrics exposed in Prometheus format.

There are three major sections:

  • Scraping metrics
  • Metrics modifications
  • Investigation

Scraping metrics

This section describes how to scrape metrics from your applications. The following scenarios are covered:

  • Application metrics are exposed (one endpoint scenario)
  • Application metrics are exposed (multiple endpoints scenario)
  • Application metrics are not exposed

Application metrics are exposed (one endpoint scenario)

If there is only one endpoint in the Pod you want to scrape metrics from, you can use annotations. Add the following annotations to your Pod definition:

# ...
annotations:
  prometheus.io/port: '<port name or number>' # Port to scrape metrics from
  prometheus.io/scrape: 'true' # Whether metrics should be scraped from this Pod
  prometheus.io/path: '/metrics' # Path to scrape metrics from
note

If you add more than one annotation with the same name, only the last one will be used.
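For instance, here is a minimal sketch of a Pod that exposes metrics on a single endpoint; the Pod name, image, and port below are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: example-app # hypothetical name
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '8080' # hypothetical metrics port
    prometheus.io/path: '/metrics'
spec:
  containers:
    - name: example-app
      image: example-app:latest # hypothetical image
      ports:
        - containerPort: 8080
          protocol: TCP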

Application metrics are exposed (multiple endpoints scenario)

note

Use sumologic.metrics.additionalServiceMonitors instead of kube-prometheus-stack.prometheus.additionalServiceMonitors. They have identical behavior and can even be used in tandem, but the latter only works if Prometheus is enabled and won't work with the OTel metrics collector, which is the default in v4 of the Chart.

If you want to scrape metrics from multiple endpoints in a single Pod, you need a Service that points to the Pod, and you need to configure sumologic.metrics.additionalServiceMonitors in your user-values.yaml:

sumologic:
  metrics:
    additionalServiceMonitors:
      - name: <service monitor name>
        endpoints:
          - port: "<port name or number>"
            path: <metrics path>
        namespaceSelector:
          matchNames:
            - <namespace>
        selector:
          matchLabels:
            <identifying label 1>: <value of identifying label 1>
            <identifying label 2>: <value of identifying label 2>
note

For advanced ServiceMonitor configuration, see the Prometheus documentation.

Example

Let's consider a Pod that exposes the following metrics:

my_metric_cpu
my_metric_memory

on the following endpoints:

:3000/metrics
:3001/custom-endpoint

The Pod's definition looks like the following:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-custom-app
  name: my-custom-app-56fdc95c9c-r5pvc
  namespace: my-custom-app-namespace
  # ...
spec:
  containers:
    - ports:
        - containerPort: 3000
          protocol: TCP
        - containerPort: 3001
          protocol: TCP
      # ...

There is also a Service that exposes the Pod's ports:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-custom-app-service
  name: my-custom-app-service
  namespace: my-custom-app-namespace
spec:
  ports:
    - name: "some-port"
      port: 3000
      protocol: TCP
      targetPort: 3000
    - name: "another-port"
      port: 3001
      protocol: TCP
      targetPort: 3001
  selector:
    app: my-custom-app

To scrape metrics from the objects above, apply the following configuration to user-values.yaml:

sumologic:
  metrics:
    additionalServiceMonitors:
      - name: my-custom-app-service-monitor
        endpoints:
          - port: some-port
            path: /metrics
          - port: another-port
            path: /custom-endpoint
        namespaceSelector:
          matchNames:
            - my-custom-app-namespace
        selector:
          matchLabels:
            app: my-custom-app-service

Application metrics are not exposed

If you want to scrape metrics from an application that does not expose a Prometheus endpoint, you can use the Telegraf Operator. It scrapes metrics according to its configuration and exposes them on port 9273, so Prometheus can scrape them.

For example, to expose metrics from an nginx Pod, you can use the following annotations:

annotations:
  telegraf.influxdata.com/inputs: |+
    [[inputs.nginx]]
      urls = ["http://localhost/nginx_status"]
  telegraf.influxdata.com/class: sumologic-prometheus
  telegraf.influxdata.com/limits-cpu: '750m'

The sumologic-prometheus class defines how the Telegraf Operator exposes the metrics: they are exposed in Prometheus format on port 9273 at the /metrics path.

note

If you apply the annotations to a Pod that is owned by another object, for example a DaemonSet, they won't take effect. In that case, add the annotations to the Pod specification in the DaemonSet template, as shown in the sketch below.
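For example, a minimal sketch of a DaemonSet that carries the nginx annotations from above in its Pod template; the DaemonSet name, labels, and image are hypothetical:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx # hypothetical name
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      # The annotations go on the Pod template, not on the DaemonSet itself
      annotations:
        telegraf.influxdata.com/inputs: |+
          [[inputs.nginx]]
            urls = ["http://localhost/nginx_status"]
        telegraf.influxdata.com/class: sumologic-prometheus
    spec:
      containers:
        - name: nginx
          image: nginx:stable # hypothetical image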

After the Pod restarts, it should have an additional telegraf container.

To scrape the exposed metrics and forward them to Sumo Logic, follow one of the scenarios described above (one endpoint or multiple endpoints).

Metrics modifications

This section covers the following metrics modifications:

  • Filtering metrics
  • Renaming metrics
  • Adding or renaming metadata

Filtering metrics

See the doc about filtering data.

Default attributes

By default, the following attributes should be available:

Attribute name            Description
_collector                Sumo Logic collector name
_origin                   Sumo Logic origin metadata ("kubernetes")
_sourceCategory           Sumo Logic source category
_sourceHost               Sumo Logic source host
_sourceName               Sumo Logic source name
cluster                   Cluster name
endpoint                  Metrics endpoint
http_listener_v2_path     Path used to receive data from Prometheus
instance                  Pod instance
job                       Prometheus job name
k8s.container.name        Kubernetes Container name
k8s.deployment.name       Kubernetes Deployment name
k8s.namespace.name        Kubernetes Namespace name
k8s.node.name             Kubernetes Node name
k8s.pod.name              Kubernetes Pod name
k8s.pod.pod_name          Kubernetes Pod name
k8s.replicaset.name       Kubernetes ReplicaSet name
k8s.service.name          Kubernetes Service name
k8s.statefulset.name      Kubernetes StatefulSet name
pod_labels_<label_name>   Kubernetes Pod label. Every label is a different attribute
prometheus                Prometheus
prometheus_replica        Prometheus replica name
prometheus_service        Prometheus service name
note

Before ingestion into Sumo Logic, attributes are renamed according to the sumologicschemaprocessor documentation.

Renaming metrics

To rename metrics, you can use the transformprocessor. Look at the following snippet:

sumologic:
  metrics:
    otelcol:
      extraProcessors:
        - transform/1:
            metric_statements:
              - context: metric
                statements:
                  ## Renames <old_name> to <new_name>
                  - set(name, "<new_name>") where name == "<old_name>"
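For instance, a sketch that renames the my_metric_cpu metric from the earlier example to my_app_cpu (the new name and the processor name suffix are hypothetical):

sumologic:
  metrics:
    otelcol:
      extraProcessors:
        - transform/rename-cpu:
            metric_statements:
              - context: metric
                statements:
                  ## Renames my_metric_cpu to my_app_cpu
                  - set(name, "my_app_cpu") where name == "my_metric_cpu"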

Adding or renaming metadata

To add or rename metadata, you can use the transformprocessor. Look at the following snippet:

sumologic:
  metrics:
    otelcol:
      extraProcessors:
        - transform/1:
            metric_statements:
              - context: resource
                statements:
                  ## adds <new_name> metadata
                  - set(attributes["<new_name>"], attributes["<old_name>"])
                  ## adds <new_static_name> metadata
                  - set(attributes["<new_static_name>"], "<static_value>")
                  ## removes <old_name> metadata
                  - delete_key(attributes, "<old_name>")
note

See Default attributes for more information about attributes.
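For instance, a sketch that copies the existing k8s.namespace.name attribute to a new team attribute and adds a static environment attribute; the new attribute names, the static value, and the processor name suffix are hypothetical:

sumologic:
  metrics:
    otelcol:
      extraProcessors:
        - transform/metadata:
            metric_statements:
              - context: resource
                statements:
                  ## copies k8s.namespace.name into a new "team" attribute
                  - set(attributes["team"], attributes["k8s.namespace.name"])
                  ## adds a static "environment" attribute
                  - set(attributes["environment"], "production")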

Investigation

If you do not see your metrics in Sumo Logic, ensure that you have followed the steps outlined in this document.

Kubernetes metrics

By default, we collect selected metrics from the following Kubernetes components:

  • Kube API Server, configured with kube-prometheus-stack.kubeApiServer.serviceMonitor
  • Kubelet, configured with kube-prometheus-stack.kubelet.serviceMonitor
  • Kube Controller Manager, configured with kube-prometheus-stack.kubeControllerManager.serviceMonitor
  • CoreDNS, configured with kube-prometheus-stack.coreDns.serviceMonitor
  • Kube Etcd, configured with kube-prometheus-stack.kubeEtcd.serviceMonitor
  • Kube Scheduler, configured with kube-prometheus-stack.kubeScheduler.serviceMonitor
  • Kube State Metrics, configured with kube-prometheus-stack.kube-state-metrics.prometheus.monitor
  • Prometheus Node Exporter, configured with kube-prometheus-stack.prometheus-node-exporter.prometheus.monitor
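Each of these components can be tuned under its listed key in user-values.yaml. As a minimal sketch (the exact settings available depend on the kube-prometheus-stack chart version, and the interval shown is only an example), adjusting the Kube API Server scrape settings might look like this:

kube-prometheus-stack:
  kubeApiServer:
    serviceMonitor:
      ## custom ServiceMonitor settings for the Kube API Server go here,
      ## for example the scrape interval
      interval: 30s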