MongoDB - OpenTelemetry Collector
MongoDB is a source-available, cross-platform, document-oriented database program. The Sumo Logic app for MongoDB supports logs and metrics from the open source version of MongoDB. The app is tested on MongoDB version 7.0.14.
MongoDB logs are sent to Sumo Logic through the OpenTelemetry filelog receiver. The OpenTelemetry collector runs on the same host as MongoDB and uses the MongoDB receiver and the Sumo Logic OpenTelemetry exporter to send logs and metrics to Sumo Logic.
Log and metric types
This app supports logs and metrics for a MongoDB instance. MongoDB logs are written to the file set in the MongoDB configuration, by default /var/log/mongodb/mongodb.log. For more details on MongoDB logs, refer to the MongoDB documentation.
The app supports metrics generated by the MongoDB Receiver.
Fields creation in Sumo Logic for MongoDB​
The following fields will be created as part of the MongoDB app install, if not already present:
- db.cluster.name. User configured. Enter a name to identify this MongoDB cluster. This cluster name will be shown in the Sumo Logic dashboards.
- db.system. Has a fixed value of mongodb.
- deployment.environment. User configured. This is the deployment environment where the MongoDB cluster resides. For example: dev, prod, or qa.
- sumo.datasource. Has a fixed value of mongodb.
- db.node.name. Has the value of the hostname of the machine that is being monitored.
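These fields correspond to resource attributes attached to the collected data. As an illustrative sketch only (the actual configuration is generated during app installation, and the cluster name and environment values below are hypothetical examples), the OpenTelemetry resource processor can set such attributes like this:

```yaml
# Illustrative only: attaching the app's fields as resource attributes
# via the OpenTelemetry resource processor. "mainCluster" and "prod"
# are hypothetical example values.
processors:
  resource:
    attributes:
      - key: sumo.datasource
        value: mongodb
        action: upsert
      - key: db.system
        value: mongodb
        action: upsert
      - key: db.cluster.name
        value: mainCluster
        action: upsert
      - key: deployment.environment
        value: prod
        action: upsert
```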
Prerequisites​
For metrics collection​
The MongoDB receiver fetches stats from a MongoDB instance using the Go mongo driver. Stats are collected via MongoDB's dbStats, serverStatus, and top commands. The MongoDB receiver extracts values from the result and converts them to OpenTelemetry metrics.
For logs collection​
By default, MongoDB logs are stored in a log file.
- Configure logging verbosity in MongoDB.
MongoDB logs have six levels of verbosity. All logging settings are located in MongoDB.conf. To select a level, set loglevel to one of the following:
  - 0 is MongoDB's default log verbosity level, which includes Informational messages.
  - 1 to 5 increases the verbosity level to include Debug messages.
- Set the profiling level to 1.
By default, it is set to 0. You can also configure slowms to a value greater than or equal to 100. This can be done using the following command in mongosh: db.setProfilingLevel(1, 100)
- Configure MongoDB to log to a local file.
By default, MongoDB logs are stored in /var/log/mongodb/mongodb.log. The default directory for log files is listed in the MongoDB.conf file. To configure the log output destination to a log file, use one of the following settings, either in the configuration file or on the command line:
  - Configuration file: the systemLog.destination option set to file.
  - Command-line: the --logpath option.
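Putting the logging settings together, the systemLog section of the MongoDB configuration file might look like the following sketch (the path and verbosity values are illustrative; adjust them to your environment):

```yaml
# Illustrative systemLog section of the MongoDB configuration file.
systemLog:
  destination: file                    # log to a file instead of stdout
  path: /var/log/mongodb/mongodb.log   # the file the filelog receiver will read
  logAppend: true                      # append on restart instead of overwriting
  verbosity: 0                         # 0 = Informational (default); 1-5 add Debug messages
```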
For Linux systems with ACL support, the otelcol install process should have created the ACL grants necessary for the otelcol system user to access default log locations. You can verify the active ACL grants using the getfacl command. Install ACL support in your Linux environment if it is not already installed.
The required ACL may not be supported for some rare cases, for example, Linux OS Distro, which is officially not supported by Sumo Logic. In this case, you can run the following command to explicitly grant the permissions.
sudo setfacl -R -m d:u:otelcol-sumo:r-x,d:g:otelcol-sumo:r-x,u:otelcol-sumo:r-x,g:otelcol-sumo:r-x <PATH_TO_LOG_FILE>
Run the above command for all log files that need to be ingested and that do not reside in the default location.
If Linux ACL support is not available, traditional Unix-style user and group permissions must be modified. It should be sufficient to add the otelcol system user to the specific group that has access to the log files.
For Windows systems, log files which are collected should be accessible by the SYSTEM group. Use the following set of PowerShell commands if the SYSTEM group does not have access.
$NewAcl = Get-Acl -Path "<PATH_TO_LOG_FILE>"
# Set properties
$identity = "NT AUTHORITY\SYSTEM"
$fileSystemRights = "ReadAndExecute"
$type = "Allow"
# Create new rule
$fileSystemAccessRuleArgumentList = $identity, $fileSystemRights, $type
$fileSystemAccessRule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList $fileSystemAccessRuleArgumentList
# Apply new rule
$NewAcl.SetAccessRule($fileSystemAccessRule)
Set-Acl -Path "<PATH_TO_LOG_FILE>" -AclObject $NewAcl
Collection configuration and app installation​
As part of data collection setup and app installation, select the app from the App Catalog and click Install App. Follow the steps below.
Step 1: Set up Collector​
If you want to use an existing OpenTelemetry Collector, you can skip this step by selecting the Use an existing Collector option.
To create a new Collector:
- Select the Add a new Collector option.
- Select the platform where you want to install the Sumo Logic OpenTelemetry Collector.
This will generate a command that you can execute in the machine environment you need to monitor. Once executed, it will install the Sumo Logic OpenTelemetry Collector.
Step 2: Configure integration​
OpenTelemetry works with a configuration YAML file with all the details concerning the data that needs to be collected. For example, it specifies the location of a log file that is read and sent to the Sumo Logic platform.
In this step, you will configure the YAML file required for MongoDB collection.
Below are the inputs required:
- Endpoint (no default). The hostname and port of the MongoDB instance, separated by a colon. This is required to scrape metrics. For example: localhost:27017.
note: For collecting metrics, use only the mongod endpoint. The Sumo Logic OpenTelemetry collector does not support the mongos endpoint.
- Logs Path. Provide the path to the MongoDB log file configured as part of the above steps. Typically, the logs are located at /var/log/mongodb/mongodb.log.
- username (optional). If authentication is required, a user with clusterMonitor permissions can be provided here.
- password (optional). If authentication is required, the password can be provided here.
You can add any custom fields that you want to tag along with the data ingested in Sumo Logic. Click on the Download YAML File button to get the YAML file.
For the Linux platform, click Download Environment Variables File to get the file with the password, which should be set as an environment variable.
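The downloaded YAML wires these inputs into the collector configuration. As a rough sketch only (the file generated by the Sumo Logic UI is authoritative; field names follow the upstream OpenTelemetry mongodb and filelog receivers, and the username shown is a hypothetical example), it resembles:

```yaml
# Illustrative sketch of the collector configuration; the YAML file
# generated by the Sumo Logic UI is the authoritative version.
receivers:
  mongodb:
    hosts:
      - endpoint: localhost:27017   # a mongod endpoint; mongos is not supported
    username: otelu                 # example user with clusterMonitor permissions
    password: ${env:MONGODB_PASSWORD}
    collection_interval: 60s
  filelog:
    include:
      - /var/log/mongodb/mongodb.log
exporters:
  sumologic:                        # credential/extension setup omitted for brevity
service:
  pipelines:
    metrics:
      receivers: [mongodb]
      exporters: [sumologic]
    logs:
      receivers: [filelog]
      exporters: [sumologic]
```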
Step 3: Send logs to Sumo Logic​
Once you have downloaded the YAML file as described in the previous step, follow the below steps based on your platform.
- Linux
- Windows
- macOS
- Chef
- Ansible
- Puppet
- Copy the YAML file to the /etc/otelcol-sumo/conf.d/ folder on the MongoDB instance that needs to be monitored.
- Place the Env file in the following directory: /etc/otelcol-sumo/env/
- Restart the collector using:
sudo systemctl restart otelcol-sumo
- Copy the YAML file to the C:\ProgramData\Sumo Logic\OpenTelemetry Collector\config\conf.d folder on the machine that needs to be monitored.
- Restart the collector using:
Restart-Service -Name OtelcolSumo
- Copy the YAML file to the /etc/otelcol-sumo/conf.d/ folder on the MongoDB instance that needs to be monitored.
- Restart the otelcol-sumo process using:
otelcol-sumo --config /etc/otelcol-sumo/sumologic.yaml --config "glob:/etc/otelcol-sumo/conf.d/*.yaml"
- Copy the YAML file into your Chef cookbook files directory files/<downloaded_yaml_file>.
- Use a Chef file resource in a recipe to manage it.
- Add the recipe to your collector setup to start collecting the data. Every team typically has their established way of applying the Chef recipe. The resulting Chef recipe should look something like:
cookbook_file '/etc/otelcol-sumo/conf.d/<downloaded_yaml_file>' do
mode 0644
notifies :restart, 'service[otelcol-sumo]', :delayed
end
- Place the file into your Ansible playbook files directory.
- Run the Ansible playbook.
ansible-playbook -i inventory install_sumologic_otel_collector.yaml -e '{"installation_token": "<YOUR_TOKEN>", "collector_tags": {<YOUR_TAGS>}, "src_config_path": "files/conf.d"}'
- Place the file into your Puppet module files directory modules/install_otel_collector/files/<downloaded_yaml>.
- Use a Puppet file resource to manage it.
file { '/etc/otelcol-sumo/conf.d/<downloaded_yaml_file>':
ensure => present,
source => 'puppet:///modules/install_otel_collector/<downloaded_yaml_file>',
mode => '0644',
notify => Service[otelcol-sumo],
}
- Apply the Puppet manifest. Every team typically has an established way of applying the Puppet manifest. The resulting Puppet manifest should look something like:
node 'default' {
class { 'install_otel_collector':
installation_token => '<YOUR_TOKEN>',
collector_tags => { <YOUR_TAGS> },
}
service { 'otelcol-sumo':
provider => 'systemd',
ensure => running,
enable => true,
require => Class['install_otel_collector'],
}
file { '/etc/otelcol-sumo/conf.d/<downloaded_yaml_file>':
ensure => present,
source => 'puppet:///modules/install_otel_collector/<downloaded_yaml_file>',
mode => '0644',
notify => Service[otelcol-sumo],
}
}
After successfully executing the above command, Sumo Logic will start receiving data from your host machine.
Click Next. This will install the app (dashboards and monitors) to your Sumo Logic Org.
Dashboard panels will start to fill automatically. Note that each panel fills with data matching the time range query, received since the panel was created. Results won't be available immediately, but within 20 minutes you'll see full graphs and maps.
Sample log messages​
{
   "t":{
       "$date":"2021-05-21T10:22:57.373+00:00"
   },
   "s":"I",
   "c":"NETWORK",
   "id":51800,
   "ctx":"conn500659",
   "msg":"client metadata",
   "attr":{
       "remote":"127.0.0.1:49472",
       "client":"conn500659",
       "doc":{
           "application":{
               "name":"MongoDB Shell"
           },
           "driver":{
               "name":"MongoDB Internal Client",
               "version":"4.4.4"
           },
           "os":{
               "type":"Linux",
               "name":"PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"",
               "architecture":"x86_64",
               "version":"Kernel 4.4.0-62-generic"
           }
       }
   }
}
Sample queries​
deployment.environment=* db.cluster.name=* sumo.datasource=mongodb | json "log" as _rawlog nodrop
| if (isEmpty(_rawlog), _raw, _rawlog) as _raw
| json field=_raw "t.$date" as timestamp
| json field=_raw "s" as severity
| json field=_raw "c" as component
| json field=_raw "ctx" as context
| json field=_raw "msg" as msg
| where severity in ("E")
| count by component
Viewing MongoDB dashboards​
All dashboards have a set of filters that you can apply to the entire dashboard. Use these filters to drill down and examine the data to a granular level.
- You can change the time range for a dashboard or panel by selecting a predefined interval from a drop-down list, choosing a recently used time range, or specifying custom dates and times. Learn more.
- You can use template variables to drill down and examine the data on a granular level. For more information, see Filtering Dashboards with Template Variables.
- Most Next-Gen apps allow you to provide the scope at installation time, comprised of a key (_sourceCategory by default) and a default value for this key. Based on your input, the app dashboards will be parameterized with a dashboard variable, allowing you to change the dataset queried by all panels. This eliminates the need to create multiple copies of the same dashboard with different queries.
Overview​
The MongoDB - Overview dashboard provides an at-a-glance view of MongoDB health, performance, and problems causing errors.
Use this dashboard to:
- Identify slow CRUD and DB commands.
- Gain insights into errors logs by component and context.
- Know the number of up servers.
Errors and Warnings​
The MongoDB - Errors and Warnings dashboard shows errors and warnings by the MongoDB component.
Use this dashboard to:
- Determine components producing multiple errors or warnings.
Logins and Connections​
The MongoDB - Logins and Connections dashboard shows geo location of client connection requests, failed connection logins by geo location, and count of failed login attempts.
Use this dashboard to:
- Determine potential hacking attempts.
- Determine location of attacks.
Query Logs​
The MongoDB - Query Logs dashboard shows read and write query trends.
Use this dashboard to:
- Monitor abnormal spikes in query volume.
- Identify read versus write ratio of your application queries. This helps you to adjust indexes to improve query performance.
Sharding​
The MongoDB - Sharding dashboard shows sharding related errors, events, failures, and number of chunks moving between shards.
Use this dashboard to:
- Identify sharding errors and warnings.
- Gain insights into chunk operations.
Replication Logs​
The MongoDB - Replication Logs dashboard shows replica deletes/updates/inserts trend and replica state.
Use this dashboard to:
- Identify replication errors and warnings.
- Gain insights into replication operations.
Resource​
The MongoDB - Resource dashboard shows resource utilization by the MongoDB component.
Use this dashboard to:
- Determine memory and disk usage.
- Identify potential resource constraints and issues.
Operations​
The MongoDB - Operations dashboard shows MongoDB queries analytics using metrics.
Use this dashboard to:
- Know the counts of different kinds of operations, such as query, insert, and delete.
- Determine the operation time taken by different queries.
Replication​
The MongoDB - Replication dashboard displays the replication events, errors, warnings, and nodes information.
Use this dashboard to:
- Know the counts of different kinds of operations executed on replicas, such as query, insert, and delete.