Redis - Classic Collector
Redis is an in-memory data structure store used as a distributed, in-memory key-value database with optional durability.
The Sumo Logic app for Redis helps you monitor the state of the database cluster. The preconfigured dashboards provide information about cluster status, resource usage, commands running, and cache hit rate. You can determine the health of the cluster at a glance from the dashboards.
Log and Metrics types
This Sumo Logic app for Redis supports metrics and logs for Redis in cluster or standalone mode. Redis logs are written to files as configured in the cluster node configuration file, /etc/redis/redis.conf. For more details on Redis logs, see the Redis documentation.
The app supports metrics generated by the Redis input plugin for Telegraf and assumes the metrics are in Prometheus format.
Sample log messages
Kubernetes environments:
{
  "timestamp": 1615988485842,
  "log": "1:M 17 Mar 2021 13:41:19.103 * Synchronization with replica 100.96.3.30:6379 succeeded",
  "stream": "stdout",
  "time": "2021-03-17T13:41:19.103646109Z"
}
Non-Kubernetes environments:
5275:S 17 Mar 2021 19:13:38.138 * MASTER <-> REPLICA sync: Finished with success
Sample queries
This sample query is from the Redis - Logs dashboard > Logs panel.
<Redis_DB_Cluster_Filter> component="database" and db_system="redis"
| json auto maxdepth 1 nodrop
| if (isEmpty(log), _raw, log) as message
| count by message
| limit 100
| fields message
Collecting logs and metrics for Redis
This section provides instructions for configuring log and metric collection for the Sumo Logic app for Redis. Configuring collection for the Redis ULM app includes the following tasks:
Step 1: Configure Fields in Sumo Logic
Create the following Fields in Sumo Logic prior to configuring collection. This ensures that your logs and metrics are tagged with relevant metadata, which is required by the app dashboards. For information on setting up fields, see Sumo Logic Fields.
This step is not needed if you are using the application components solution terraform script.
If you're using Redis in a Kubernetes environment, create the fields:
pod_labels_component
pod_labels_environment
pod_labels_db_system
pod_labels_db_cluster
pod_labels_db_cluster_address
pod_labels_db_cluster_port
If you're using Redis in a non-Kubernetes environment, create the fields:
component
environment
db_system
db_cluster
db_cluster_address
db_cluster_port
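If you prefer to script this step instead of creating each field in the UI, one possible approach is the Sumo Logic Field Management API. The sketch below is an assumption-laden example, not part of the official setup: it assumes the api.sumologic.com endpoint (adjust it for your deployment) and uses placeholder credentials. The Kubernetes pod_labels_* field names can be created the same way.
# Hypothetical sketch: create the non-Kubernetes fields via the Field Management API.
# <accessId>/<accessKey> are placeholders; adjust the API endpoint to your deployment.
for field in component environment db_system db_cluster db_cluster_address db_cluster_port; do
  curl -s -u "<accessId>:<accessKey>" \
    -X POST "https://api.sumologic.com/api/v1/fields" \
    -H "Content-Type: application/json" \
    -d "{\"fieldName\": \"${field}\"}"
done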
Step 2: Configure Collection for Redis
Sumo Logic supports collection of logs and metrics data from Redis in both Kubernetes and non-Kubernetes environments. Follow the instructions below for the environment where your Redis clusters are hosted.
In Kubernetes environments, we use the Telegraf Operator, which is packaged with our Kubernetes collection. You can learn more about it here. The diagram below illustrates how data is collected from Redis in Kubernetes environments. In the architecture shown below, there are four services that make up the metric collection pipeline: Telegraf, Telegraf Operator, Prometheus, and Sumo Logic Distribution for OpenTelemetry Collector.
The first service in the metrics pipeline is Telegraf. Telegraf collects metrics from Redis. Note that we’re running Telegraf in each pod we want to collect metrics from as a sidecar deployment: that is, Telegraf runs in the same pod as the containers it monitors. Telegraf uses the Redis input plugin to obtain metrics. (For simplicity, the diagram doesn’t show the input plugins.) The injection of the Telegraf sidecar container is done by the Telegraf Operator. Prometheus pulls metrics from Telegraf and sends them to Sumo Logic Distribution for OpenTelemetry Collector which enriches metadata and sends metrics to Sumo Logic.
In the logs pipeline, Sumo Logic Distribution for OpenTelemetry Collector collects logs written to standard out and forwards them to another instance of Sumo Logic Distribution for OpenTelemetry Collector, which enriches metadata and sends logs to Sumo Logic.
Configure Metrics Collection
Ensure that you are monitoring your Kubernetes clusters with the Telegraf operator. If you are not, then follow these instructions to do so.
- To collect metrics from a Kubernetes environment, add the following annotations on your Redis pods:
annotations:
  telegraf.influxdata.com/class: sumologic-prometheus
  prometheus.io/scrape: "true"
  prometheus.io/port: "9273"
  telegraf.influxdata.com/inputs: |+
    [[inputs.redis]]
      servers = ["tcp://<username-CHANGEME>:<password-CHANGEME>@localhost:6379"]
      [inputs.redis.tags]
        environment = "ENV_TO_BE_CHANGED"
        component = "database"
        db_system = "redis"
        db_cluster = "ENV_TO_BE_CHANGED"
        db_cluster_address = "ENV_TO_BE_CHANGED"
        db_cluster_port = "ENV_TO_BE_CHANGED"
- Enter values for the following parameters (marked ENV_TO_BE_CHANGED in the snippet above; a filled-in example appears after this list):
  - telegraf.influxdata.com/inputs. Because Telegraf runs as a sidecar, the host should always be localhost.
    - In the input plugins section ([[inputs.redis]]):
      - servers. The URL of the Redis server. This can be a comma-separated list to connect to multiple Redis servers.
    - In the tags section ([inputs.redis.tags]):
      - environment. This is the deployment environment where the Redis cluster identified by the value of servers resides, for example dev, prod, or qa. While this value is optional, we highly recommend setting it.
      - db_cluster. Enter a name to identify this Redis cluster. This cluster name will be shown in the Sumo Logic dashboards.
      - db_cluster_address. Enter the cluster hostname or IP address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
      - db_cluster_port. Enter the database port. If not provided, a default port will be used.

    Note: db_cluster_address and db_cluster_port should reflect the exact configuration of the DB client in your application, especially if you instrument it with OT tracing. The values of these fields should exactly match the connection string used by the database client (reported as values for the net.peer.name and net.peer.port metadata fields). For example, if your application uses "redis-prod.sumologic.com:3306" as the connection string, set the fields as follows: db_cluster_address=redis-prod.sumologic.com, db_cluster_port=3306.
    If your application connects directly to a given Redis node, rather than the whole cluster, use the application connection string to override the value of the "host" field in the Telegraf configuration: host=redis-prod.sumologic.com
    Pivoting to Tracing data from Entity Inspector is possible only for "Redis address" Entities.
- Do not modify the following values, as doing so will cause the Sumo Logic apps to not function correctly.
  - telegraf.influxdata.com/class: sumologic-prometheus. This instructs the Telegraf operator what output to use. This should not be changed.
  - prometheus.io/scrape: "true". This ensures Prometheus will scrape the metrics.
  - prometheus.io/port: "9273". This tells Prometheus what port to scrape on. This should not be changed.
  - telegraf.influxdata.com/inputs
    - In the tags section ([inputs.redis.tags]):
      - component: "database". This value is used by Sumo Logic apps to identify application components.
      - db_system: "redis". This value identifies the database system.
- See this doc for more parameters that can be configured in the Telegraf agent globally.
- For more information on configuring the Redis input plugin for Telegraf, see this doc.
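For reference, here is a hypothetical filled-in version showing where the annotations might sit on a Redis pod template (for example, spec.template.metadata.annotations of a Deployment or StatefulSet). The cluster name, address, port, and password below are placeholder values, not defaults; if your Redis instance does not require authentication, the server URL can simply be tcp://localhost:6379.
  template:
    metadata:
      annotations:
        telegraf.influxdata.com/class: sumologic-prometheus
        prometheus.io/scrape: "true"
        prometheus.io/port: "9273"
        telegraf.influxdata.com/inputs: |+
          [[inputs.redis]]
            # placeholder password-only auth URL; drop the credentials if AUTH is not enabled
            servers = ["tcp://:example-redis-password@localhost:6379"]
            [inputs.redis.tags]
              environment = "prod"
              component = "database"
              db_system = "redis"
              db_cluster = "redis-prod-01"
              db_cluster_address = "redis-prod.example.com"
              db_cluster_port = "6379"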
- Once this has been done, the Sumo Logic Kubernetes collection will automatically start collecting metrics from the pods having the labels and annotations defined in the previous step.
- Verify metrics in Sumo Logic by running the following metrics query:
  db_cluster=<Your_Redis_Cluster_Name> component="database" and db_system="redis"
Configure Logs Collection
This section explains the steps to collect Redis logs from a Kubernetes environment.
- Collect Redis logs written to standard output. If your Redis helm chart/pod is writing the logs to standard output, follow the steps listed below.
- Apply the following labels to your Redis pods (a filled-in sketch appears after this list):
  environment: "ENV_TO_BE_CHANGED"
  component: "database"
  db_system: "redis"
  db_cluster: "ENV_TO_BE_CHANGED"
  db_cluster_address: "ENV_TO_BE_CHANGED"
  db_cluster_port: "ENV_TO_BE_CHANGED"
- Enter values for the following parameters (marked ENV_TO_BE_CHANGED above):
  - environment. This is the deployment environment where the Redis cluster identified by the value of servers resides, for example dev, prod, or qa. While this value is optional, we highly recommend setting it.
  - db_cluster. Enter a name to identify this Redis cluster. This cluster name will be shown in the Sumo Logic dashboards.
  - db_cluster_address. Enter the cluster hostname or IP address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
  - db_cluster_port. Enter the database port. If not provided, a default port will be used.

  Note: db_cluster_address and db_cluster_port should reflect the exact configuration of the DB client in your application, especially if you instrument it with OT tracing. The values of these fields should exactly match the connection string used by the database client (reported as values for the net.peer.name and net.peer.port metadata fields). For example, if your application uses "redis-prod.sumologic.com:3306" as the connection string, set the fields as follows: db_cluster_address=redis-prod.sumologic.com, db_cluster_port=3306.
  If your application connects directly to a given Redis node, rather than the whole cluster, use the application connection string to override the value of the "host" field in the Telegraf configuration: host=redis-prod.sumologic.com
  Pivoting to Tracing data from Entity Inspector is possible only for "Redis address" Entities.
- Do not modify these values, as doing so will cause the Sumo Logic apps to not function correctly.
  - component: "database". This value is used by Sumo Logic apps to identify application components.
  - db_system: "redis". This value identifies the database system.
- For all other parameters, see this doc for more parameters that can be configured in the Telegraf agent globally.
- The Sumologic-Kubernetes-Collection will automatically capture the logs from stdout and send them to Sumo Logic. For more information on deploying the Sumologic-Kubernetes-Collection, see this page.
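For reference, the sketch below shows where the labels from the step above might sit in a hypothetical Redis Deployment or StatefulSet pod template (spec.template.metadata.labels); the values are placeholders, not output from any particular Helm chart.
  template:
    metadata:
      labels:
        app: redis
        environment: "prod"
        component: "database"
        db_system: "redis"
        db_cluster: "redis-prod-01"
        db_cluster_address: "redis-prod.example.com"
        db_cluster_port: "6379"
If you deploy Redis with a Helm chart, many charts expose a pod labels value that can add these labels without editing the manifest directly.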
- Collect Redis logs written to log files (optional). If your Redis helm chart/pod is writing its logs to log files, you can use a sidecar to send the log files to standard out. To do this:
- Determine the location of the Redis log file on Kubernetes. This can be determined from the redis.conf for your Redis cluster along with the mounts on the Redis pods.
- Install the Sumo Logic tailing sidecar operator.
- Add the following annotation in addition to the existing annotations:
  annotations:
    tailing-sidecar: sidecarconfig;<mount>:<path_of_redis_log_file>/<redis_log_file_name>
  For example:
  annotations:
    tailing-sidecar: sidecarconfig;data:/redis-master-data/redis.log
- Make sure that the Redis pods are running and annotations are applied by using the command:
  kubectl describe pod <redis_pod_name>
- Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above.
- Add an FER to normalize the fields in Kubernetes environments. This step is not needed if you are using the application components solution terraform script. Labels created in Kubernetes environments are automatically prefixed with pod_labels. To normalize these for the app to work, we need to create a Field Extraction Rule. To do so:
  - Classic UI. In the main Sumo Logic menu, select Manage Data > Logs > Field Extraction Rules.
    New UI. In the top menu select Configuration, and then under Logs select Field Extraction Rules. You can also click the Go To... menu at the top of the screen and select Field Extraction Rules.
  - Click the + Add button on the top right of the table.
  - The Add Field Extraction Rule form will appear.
  - Enter the following options:
    - Rule Name. Enter the name as App Observability - Database.
    - Applied At. Choose Ingest Time.
    - Scope. Select Specific Data. Enter the following keyword search expression:
      pod_labels_environment=* pod_labels_component=database pod_labels_db_system=* pod_labels_db_cluster=*
    - Parse Expression. Enter the following parse expression:
      | if (!isEmpty(pod_labels_environment), pod_labels_environment, "") as environment
      | pod_labels_component as component
      | pod_labels_db_system as db_system
      | if (!isEmpty(pod_labels_db_cluster), pod_labels_db_cluster, null) as db_cluster
  - Click Save to create the rule.
For non-Kubernetes environments, we use Telegraf for Redis metric collection and a Sumo Logic Installed Collector for collecting Redis logs. The diagram below illustrates the components of Redis collection in a non-Kubernetes environment. Telegraf runs on the same system as Redis and uses the Redis input plugin to obtain Redis metrics and the Sumo Logic output plugin to send the metrics to Sumo Logic. Logs from Redis, on the other hand, are sent to either a Sumo Logic Local File Source or a Syslog Source.
This section provides instructions for configuring metrics collection for the Sumo Logic app for Redis. Follow the instructions below to set up metrics and logs collection for a given Redis cluster:
Configure Metrics Collection
- Configure a Hosted Collector. To create a new Sumo Logic hosted collector, perform the steps in the Configure a Hosted Collector section of the Sumo Logic documentation.
- Configure an HTTP Logs and Metrics Source. Create a new HTTP Logs and Metrics Source in the hosted collector created above by following these instructions. Make a note of the HTTP Source URL.
- Install Telegraf. Follow the steps in this document to install Telegraf.
- Configure and start Telegraf. As part of collecting metrics data from Telegraf, we will use the Redis input plugin to get data from Telegraf and the Sumo Logic output plugin to send data to Sumo Logic.
- Create or modify the telegraf.conf file and copy and paste the text below:
[[inputs.redis]]
  servers = ["tcp://localhost:6379"]
  namepass = ["redis"]
  fieldpass = ["blocked_clients", "clients", "cluster_enabled", "cmdstat_calls", "connected_slaves", "evicted_keys", "expired_keys", "instantaneous_ops_per_sec", "keyspace_hitrate", "keyspace_hits", "keyspace_misses", "master_repl_offset", "maxmemory", "mem_fragmentation_bytes", "mem_fragmentation_ratio", "rdb_changes_since_last_save", "rejected_connections", "slave_repl_offset", "total_commands_processed", "total_net_input_bytes", "total_net_output_bytes", "tracking_total_keys", "uptime", "used_cpu_sys", "used_cpu_user", "used_memory", "used_memory_overhead", "used_memory_rss", "used_memory_startup"]
  [inputs.redis.tags]
    environment = "ENV_TO_BE_CHANGED"
    component = "database"
    db_system = "redis"
    db_cluster = "ENV_TO_BE_CHANGED"
    db_cluster_address = "ENV_TO_BE_CHANGED"
    db_cluster_port = "ENV_TO_BE_CHANGED"
[[outputs.sumologic]]
  url = "<URL Created in Step 3>"
  data_format = "prometheus"
- Enter values for the following parameters (marked ENV_TO_BE_CHANGED above):
  - For the input plugins section ([[inputs.redis]]):
    - servers. The URL of the Redis server. This can be a comma-separated list to connect to multiple Redis servers. Please see this doc for more information on additional parameters for configuring the Redis input plugin for Telegraf.
  - For the tags section ([inputs.redis.tags]):
    - environment. This is the deployment environment where the Redis cluster identified by the value of servers resides, for example dev, prod, or qa. While this value is optional, we highly recommend setting it.
    - db_cluster. Enter a name to identify this Redis cluster. This cluster name will be shown in the Sumo Logic dashboards.
    - db_cluster_address. Enter the cluster hostname or IP address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
    - db_cluster_port. Enter the database port. If not provided, a default port will be used.

    Note: db_cluster_address and db_cluster_port should reflect the exact configuration of the DB client in your application, especially if you instrument it with OT tracing. The values of these fields should exactly match the connection string used by the database client (reported as values for the net.peer.name and net.peer.port metadata fields). For example, if your application uses "redis-prod.sumologic.com:3306" as the connection string, set the fields as follows: db_cluster_address=redis-prod.sumologic.com, db_cluster_port=3306.
    If your application connects directly to a given Redis node, rather than the whole cluster, use the application connection string to override the value of the "host" field in the Telegraf configuration: host=redis-prod.sumologic.com
    Pivoting to Tracing data from Entity Inspector is possible only for "Redis address" Entities.
  - For the output plugins section ([[outputs.sumologic]]):
    - url. This is the URL of the HTTP Logs and Metrics Source created earlier. See this doc for more information on additional parameters for configuring the Sumo Logic Telegraf output plugin.
  - Do not modify these values, as doing so will cause the Sumo Logic apps to not function correctly.
    - data_format = "prometheus". In the output plugins section: metrics are sent in Prometheus format to Sumo Logic.
    - component = "database". In the input plugins section: this value identifies application components.
    - db_system = "redis". In the input plugins section: this value identifies the database system.
- See this doc for more parameters that can be configured in the Telegraf agent globally. Once you have finalized your telegraf.conf file, you can start or reload the Telegraf service using instructions from their doc.
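For example, if Telegraf was installed as a systemd service (typical for package-based installs), a sketch for validating the configuration and restarting the agent might look like the following; adjust the paths and service commands to match how Telegraf is installed and managed on your hosts.
# Run the inputs once and print the gathered metrics to stdout without sending them,
# to confirm the Redis input and credentials work.
telegraf --config /etc/telegraf/telegraf.conf --test

# Restart the agent so it picks up the new configuration, then check its status.
sudo systemctl restart telegraf
sudo systemctl status telegraf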
At this point, Redis metrics should start flowing into Sumo Logic.
Configure Redis Logs Collection
This section provides instructions for configuring log collection for Redis running on a non-Kubernetes environment for the Sumo Logic app for Redis. By default, Redis logs are stored in a log file. Redis also supports forwarding of logs via Syslog.
Sumo Logic supports collecting logs both via Syslog and a local log file. Utilizing Sumo Logic Cloud Syslog will require TCP TLS Port 6514 to be open in your network. Local log files can be collected via Sumo Logic Installed collectors which requires you to allow outbound traffic to Sumo Logic endpoints for collection to work.
Follow the instructions to set up log collection:
- Configure logging in Redis. Redis supports logging via the following methods: syslog, local text log files, and stdout. Redis logs have four levels of verbosity. All logging settings are located in redis.conf. To select a level, set loglevel to one of:
  - debug - This level produces a lot of information, which could be useful in development/testing environments.
  - verbose - This level includes information not often needed, but logs less than debug.
  - notice - Moderately verbose, ideal for production environments; this is the default value.
  - warning - Only very important / critical messages are logged.
- Configure Redis to log to a local file or syslog. As part of this step, you will decide how to export Redis logs and then configure your Redis cluster accordingly.
Option A: Configure Redis logs to go to log files
With this option, Redis logs written to a log file can be collected via the Local File Source of a Sumo Logic Installed collector.
To configure the Redis log file, locate your local redis.conf configuration file in the database directory. By default, Redis logs are stored in /var/log/redis/redis-server.log.
After determining the location of the conf file, modify the logging parameters in the redis.conf configuration file:
- Open the redis.conf configuration file in a text editor.
- Set the following config parameters:
  - Specify the server verbosity level. This can be one of:
    - debug (a lot of information, useful for development/testing)
    - verbose (many rarely useful info, but not a mess like the debug level)
    - notice (moderately verbose, what you want in production probably)
    - warning (only very important / critical messages are logged)
    loglevel notice
  - Specify the log file name. The empty string can also be used to force Redis to log on the standard output. Note that if you use standard output for logging but daemonize, logs will be sent to /dev/null.
    logfile ""
- Save the redis.conf file and restart the redis server:
sudo service redis-server restart
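For reference, a minimal redis.conf logging configuration for file-based collection might look like the following; the log file path is only an example, so use whatever location you plan to tail with the Installed Collector.
# Example redis.conf settings for logging to a file
loglevel notice
logfile /var/log/redis/redis-server.log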
Option B: Configure Redis logs to stream data to a Syslog source
With this option, Redis logs can be streamed to the Syslog Source of a Sumo Logic Installed Collector.
To configure the Redis syslog, locate your local redis.conf configuration file in the database directory.
After determining the location of the conf file, modify the logging parameters in the redis.conf configuration file:
- Open the redis.conf configuration file in a text editor.
- Set the following config parameters:
- To enable logging to the system logger, just set 'syslog-enabled' to yes, and optionally update the other syslog parameters to suit your needs.
syslog-enabled no
- Specify the syslog identity.
syslog-ident redis
- Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
syslog-facility local0
- Save the redis.conf file and restart the redis server:
sudo service redis-server restart
- At this point, Redis will start pushing the logs to the local syslog server.
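For reference, a minimal redis.conf configuration with syslog logging enabled might look like the following; the identity and facility shown are the redis.conf defaults, so adjust them to your environment.
# Example redis.conf settings for logging to the system logger
syslog-enabled yes
syslog-ident redis
syslog-facility local0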
- Configure an Installed Collector. To add an Installed collector, perform the steps as defined on the page Configure an Installed Collector.
- Configure a Source. To add a Local File Source for Redis, do the following:
- Add a Local File Source in the installed collector configured in the previous step.
- Configure the Local File Source fields as follows:
  - Name. (Required)
  - Description. (Optional)
  - File Path (Required). Enter the path to your Redis log file. The file is typically located at /var/log/redis/redis-server.log. If you're using a customized path, check the redis.conf file for this information.
  - Source Host. Sumo Logic uses the hostname assigned by the OS unless you enter a different host name.
  - Source Category. Enter any string to tag the output collected from this Source, such as Redis/Logs. The Source Category metadata field is a fundamental building block to organize and label Sources. For details, see Best Practices.
- Fields. Set the following fields. For more information on fields, see Sumo Logic Fields:
  - component = database
  - db_system = redis
  - db_cluster = <Your_Redis_Cluster_Name>
  - environment = <Environment_Name>, such as Dev, QA, or Prod.
  - db_cluster_address. Enter the cluster hostname or IP address that is used by the application to connect to the database. It could also be the load balancer or proxy endpoint.
  - db_cluster_port. Enter the database port. If not provided, a default port will be used.

  Note: db_cluster_address and db_cluster_port should reflect the exact configuration of the DB client in your application, especially if you instrument it with OT tracing. The values of these fields should exactly match the connection string used by the database client (reported as values for the net.peer.name and net.peer.port metadata fields). For example, if your application uses "redis-prod.sumologic.com:3306" as the connection string, set the fields as follows: db_cluster_address=redis-prod.sumologic.com, db_cluster_port=3306. If your application connects directly to a given Redis node, rather than the whole cluster, use the application connection string to override the value of the "host" field in the Telegraf configuration: host=redis-prod.sumologic.com
  Pivoting to Tracing data from Entity Inspector is possible only for "Redis address" Entities.
- Configure the Advanced section:
- Enable Timestamp Parsing. Select Extract timestamp information from log file entries.
- Time Zone. Choose the option, Ignore time zone from log file and instead use, and then select your Redis Server’s time zone.
- Timestamp Format. The timestamp format is automatically detected.
- Encoding. Select UTF-8 (Default).
- Enable Multiline Processing. Detect messages spanning multiple lines. Select Infer Boundaries - Detect message boundaries automatically.
- Click Save.
- Verify logs are flowing into Sumo Logic by running the following logs query:
db_cluster=<Your_Redis_cluster_Name> component="database" and db_system="redis"
Installing Redis Monitors/Alerts
This step is not needed if you are using the application components solution terraform script.
This section has instructions for installing the Sumo app and Alerts for Redis ULM, as well as descriptions and examples for each of the dashboards. These instructions assume you have already set up collection as described in the Collecting Logs and Metrics for Redis app section.
Sumo Logic has provided out-of-the-box alerting capabilities available via Sumo Logic monitors to help you quickly determine if the Redis database cluster is available and performing as expected. These monitors fire alerts (notifications) on top of preset thresholds on metrics data using industry best practices and recommendations.
For details on the individual monitors, please see Alerts.
- To install these alerts, you need to have the Manage Monitors role capability.
- Alerts can be installed either by importing them via a JSON file or via a Terraform script.
There are limits for how many alerts can be enabled - please see the Alerts FAQ for details.
Method A: Importing a JSON file
- Download a JSON file that describes the monitors. The JSON contains the alerts that are based on Sumo Logic searches that do not have any scope filters and therefore will be applicable to all Redis clusters, the data for which has been collected via the instructions in the previous sections. However, if you would like to restrict these alerts to specific clusters or environments, update the JSON file by replacing the text db_system=redis with <Your Custom Filter> db_system=redis. Custom filter examples:
  - For alerts applicable only to a specific cluster, your custom filter would be: db_cluster=redis-prod.01.
  - For alerts applicable to all clusters that start with redis-prod, your custom filter would be: db_cluster=redis-prod*.
  - For alerts applicable to a specific cluster within a production environment, your custom filter would be: db_cluster=redis-1 and environment=prod. This assumes you have set the optional environment tag while configuring collection.
- Classic UI. In the main Sumo Logic menu, select Manage Data > Monitoring > Monitors.
New UI. In the main Sumo Logic menu, select Alerts > Monitors. You can also click the Go To... menu at the top of the screen and select Monitors.
- Click Add.
- Click Import to import monitors from the JSON above.
Monitors are disabled by default. Once you have installed the alerts via this method, navigate to the Redis folder under Monitors to configure them. See Monitor Settings to enable monitors. To send notifications to teams or connections, see the instructions detailed in Step 4 of Create a Monitor.
Method B: Using a Terraform script
- Generate a Sumo Logic access key and ID for a user that has the Manage Monitors role capability in Sumo Logic using these instructions. Please identify which deployment your Sumo Logic account is in, using this link.
- Download and install Terraform 0.13 or later.
- Download the Sumo Logic Terraform package for Redis alerts. The alerts package is available in the Sumo Logic github repository. You can either download it via the “git clone” command or as a zip file.
- Monitor Configuration. After the package has been extracted, navigate to the package directory terraform-sumologic-sumo-logic-monitor/monitor_packages/redis/
Edit the redis.auto.tfvars file and add the Sumo Logic Access Key, Access Id and Deployment from Step 1.
access_id = "<SUMOLOGIC ACCESS ID>"
access_key = "<SUMOLOGIC ACCESS KEY>"
environment = "<SUMOLOGIC DEPLOYMENT>"
The Terraform script installs the alerts without any scope filters. If you would like to restrict the alerts to specific clusters or environments, update the variable redis_cluster_filter. Custom filter examples:
- For alerts applicable only to a specific cluster, your custom filter would be: db_cluster=redis-prod.01.
- For alerts applicable to all clusters that start with redis-prod, your custom filter would be: db_cluster=redis-prod*.
- For alerts applicable to a specific cluster within a production environment, your custom filter would be: db_cluster=redis-1 and environment=prod. This assumes you have set the optional environment tag while configuring collection.
All monitors are disabled by default on installation. If you would like to enable all the monitors, set the parameter monitors_disabled to false in this file.
By default, the monitors are configured in a monitor folder called "Redis". If you would like to change the name of the folder, update the monitor folder name in this file.
If you would like the alerts to send email or connection notifications, configure these in the file redis_notifications.auto.tfvars. For configuration examples, refer to the next section.
- Email and Connection Notification Configuration Examples. To configure notifications, modify the redis_notifications.auto.tfvars file and fill in the connection_notifications and email_notifications sections. See the examples for PagerDuty and email notifications below. See this document for creating payloads with other connection types.
connection_notifications = [
{
connection_type = "PagerDuty",
connection_id = "<CONNECTION_ID>",
payload_override = "{\"service_key\": \"your_pagerduty_api_integration_key\",\"event_type\": \"trigger\",\"description\": \"Alert: Triggered {{TriggerType}} for Monitor {{Name}}\",\"client\": \"Sumo Logic\",\"client_url\": \"{{QueryUrl}}\"}",
run_for_trigger_types = ["Critical", "ResolvedCritical"]
},
{
connection_type = "Webhook",
connection_id = "<CONNECTION_ID>",
payload_override = "",
run_for_trigger_types = ["Critical", "ResolvedCritical"]
}
]
Replace <CONNECTION_ID> with the connection ID of the webhook connection. The webhook connection ID can be retrieved by calling the Monitors API.
For overriding payload for different connection types, refer to this document.
email_notifications = [
{
connection_type = "Email",
recipients = ["abc@example.com"],
subject = "Monitor Alert: {{TriggerType}} on {{Name}}",
time_zone = "PST",
message_body = "Triggered {{TriggerType}} Alert on {{Name}}: {{QueryURL}}",
run_for_trigger_types = ["Critical", "ResolvedCritical"]
}
]
- Install the Monitors.
  - Navigate to the package directory terraform-sumologic-sumo-logic-monitor/monitor_packages/redis/ and run terraform init. This will initialize Terraform and will download the required components.
  - Run terraform plan to view the monitors which will be created/modified by Terraform.
  - Run terraform apply.
- Post Installation. If you haven't enabled alerts and/or configured notifications via the Terraform procedure outlined above, we highly recommend enabling alerts of interest and configuring each enabled alert to send notifications to other people or services. This is detailed in Add a Monitor.
Installing the Redis app
This section demonstrates how to install the Redis ULM app.
- From the App Catalog, search for and select the app.
- Select the version of the service you're using and click Add to Library.
Version selection is not available for all apps.
- To install the app, complete the following fields.
- App Name. You can retain the existing name, or enter a name of your choice for the app.
- Data Source. Choose Enter a Custom Data Filter and enter a custom Redis cluster filter. Examples:
- For all Redis clusters:
db_cluster=*
- For a specific cluster:
db_cluster=redis.dev.01
- Clusters within a specific environment:
db_cluster=redis-1 and environment=prod
. (This assumes you have set the optional environment tag while configuring collection).
- For all Redis clusters:
- Advanced. Select the Location in Library (the default is the Personal folder in the library), or click New Folder to add a new folder.
- Click Add to Library.
Once an app is installed, it will appear in your Personal folder, or other folder that you specified. From here, you can share it with your organization.
Panels will start to fill automatically. It's important to note that each panel slowly fills with data matching the time range query and received since the panel was created. Results won't immediately be available, but with a bit of time, you'll see full graphs and maps.
Viewing Redis dashboards
Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see Filter with template variables.
Overview
The Redis - Overview dashboard provides an at-a-glance view of the Redis server status and error logs, along with database metrics.
Use this dashboard to:
- Gain insights into Redis cluster Master and Slave status.
- Gain insights into your Redis health using Cache Hit Rate, Fragmentation Ratio, and Slow Queries Log.
- Get insights into Active and rejected client connections.
Cluster Overview
The Redis - Cluster Overview dashboard provides a high-level view of cluster infrastructure, comparisons, and trends.
Use this dashboard to:
- Review Cluster infrastructure.
- Manage Redis versions running on nodes.
- Understand Master and Slave replication offsets.
Cluster Operations
The Redis - Cluster Operations dashboard provides a high-level view of the activity and health of Master and Slaves on your network. Dashboard panels display visual graphs and detailed information on replication offsets, cache rate, and most importantly, resource usage.
Use this dashboard to:
- Understand the cluster resource usage.
- Gain insights into cache hit and miss rates.
Logs
The Redis - Logs dashboard provides a detailed analysis based on logs. The panels provide details such as memory usage, events, and RDB and AOF events.
Use this dashboard to:
- Review errors and warnings generated by the server.
- Review memory usage, events, and RDB and AOF events.
Command Stats Deltas
The Redis - Command Stats Deltas dashboard provides an at-a-glance view of commands running on your Redis server.
Use this dashboard to:
- Gain insights into the commands running on your cluster.
Redis Alerts
Sumo Logic has provided out-of-the-box alerts available via Sumo Logic monitors to help you quickly determine if the Redis database cluster is available and performing as expected.
| Alert Name | Alert Description and Conditions | Alert Condition | Recover Condition |
|---|---|---|---|
| Redis - Instance Down | This alert fires when we detect that the Redis instance is down for 5 minutes. | == 0 | > 0 |
| Redis - Missing Master | This alert fires when we detect that a Redis cluster has no node marked as master for 5 minutes. | <= 0 | > 0 |
| Redis - Replication Broken | This alert fires when we detect that a Redis instance has lost a slave. | < 0 | >= 0 |
| Redis - Too Many Connections | This alert fires when we detect that a given Redis server has too many connections (over 100). | > 100 | <= 100 |
| Redis - Rejected Connections | This alert fires when we detect that some connections to a Redis cluster have been rejected. | > 0 | <= 0 |
| Redis - High Memory Fragmentation Ratio | This alert fires when the ratio of Redis memory usage to Linux virtual memory pages (mapped to physical memory chunks) is higher than 1.5. A high ratio will lead to swapping and can adversely affect performance. | >= 1.5 | < 1.5 |
| Redis - Potential Master-Slave Communication Failure | This alert fires when we detect that communication between the Redis master and slave nodes has not occurred for the past 60 seconds or more. | >= 60 | < 60 |
| Redis - Replica Lag | This alert fires when we detect that the replica lag for a given Redis cluster is greater than 60 seconds. Please review how replication has been configured. | > 60 | <= 60 |
| Redis - Replication Broken | This alert fires when we detect that a Redis instance has lost all slaves. This will affect the redundancy of data stored in Redis. Please review how replication has been configured. | < 0 | >= 0 |
| Redis - Replication Offset | This alert fires when the replication offset in a given Redis cluster is greater than 1 MB for the last five minutes. Please review how replication has been configured. | >= 1 | < 1 |
| Redis - Out of Memory | This alert fires when we detect that a Redis node is running out of memory (> 90%). | > 90% | < 90% |
| Redis - High CPU Usage | This alert fires if user and system CPU usage for a host exceeds 80%. | > 80% | < 80% |