This feature is available on dedicated instances.
The CloudAMQP team monitors your servers and RabbitMQ brokers to make sure the service is online and performing well. We have also built several integrations with third-party systems to which we can export logs and/or metrics. This gives you a good overview of how your system is doing and brings logs and metrics into the same place where your other systems are monitored.
Integrations are not covered by the SLA. Please email us at support@cloudamqp.com if you want more details about exporting metrics or logs.
CloudAMQP can ship logs to: Datadog, CloudWatch, Papertrail, Logentries, Google Stackdriver, Loggly, Splunk, Coralogix, Azure Monitor
Link: https://docs.datadoghq.com/logs/
Get your Datadog API key at app.datadoghq.com and enter the API key, region, and optional tags.
Example message for the CloudAMQP cluster quick-gray-porcupine, with the optional user tag env:example:
{
"ddsource": "cloudamqp",
"ddtags": "env:example",
"hostname": "quick-gray-porcupine-01",
"message": "2025-03-25 11:07:02.552827+00:00 [info] <0.10.0> Time to start RabbitMQ: 13283 ms\n",
"service": "quick-gray-porcupine"
}
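CloudAMQP ships these entries for you, but if you want to verify what such a payload looks like on the wire, the sketch below builds a batch for Datadog's public v2 log intake endpoint (the endpoint URL is for the US1 region; other regions use different hostnames, and the API key here is a placeholder):

```python
import json
import urllib.request

# US1-region log intake endpoint; EU and other regions use different hosts.
DD_INTAKE = "https://http-intake.logs.datadoghq.com/api/v2/logs"

def build_log_request(api_key: str, entries: list) -> urllib.request.Request:
    """Build a POST request carrying a batch of log entries to Datadog."""
    body = json.dumps(entries).encode()
    return urllib.request.Request(
        DD_INTAKE,
        data=body,
        headers={"Content-Type": "application/json", "DD-API-KEY": api_key},
        method="POST",
    )

entry = {
    "ddsource": "cloudamqp",
    "ddtags": "env:example",
    "hostname": "quick-gray-porcupine-01",
    "service": "quick-gray-porcupine",
    "message": "2025-03-25 11:07:02.552827+00:00 [info] <0.10.0> Time to start RabbitMQ: 13283 ms\n",
}

req = build_log_request("your-api-key", [entry])
# urllib.request.urlopen(req) would actually send the batch
```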
Link: https://aws.amazon.com/cloudwatch
Create an IAM user with programmatic access and the following permissions: CreateLogGroup, CreateLogStream, DescribeLogGroups, DescribeLogStreams, and PutLogEvents. Select the AWS region and enter the user's Access Key and Secret Key in the fields.
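The permissions above can be granted with an IAM policy along these lines (the wildcard Resource is an assumption for brevity; scope it down to your log group ARNs where possible):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```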
Link: https://www.papertrail.com
Create a Papertrail endpoint via https://papertrailapp.com/systems/setup and enter the endpoint address in the Address field.
Link: https://www.logentries.com
Create a Logentries token at https://logentries.com/app#/add-log/manual and enter it in the Token field.
Link: https://cloud.google.com/stackdriver
Steps to generate a credentials file with permissions to write logs into Stackdriver:
Link: https://www.loggly.com
Create a Loggly token at https://<your-company>.loggly.com/tokens and enter it in the Token field.
Link: https://www.splunk.com
Create an HTTP Event Collector token at https://<your-splunk>.cloud.splunk.com/en-US/manager/search/http-eventcollector and enter the token and the endpoint address in the respective fields.
Link: https://www.coralogix.com
Create or find your Send-Your-Data API key. You also need to select the region you are using and enter the metadata information in the respective fields.
Link: https://learn.microsoft.com/en-us/azure/azure-monitor/overview
You will need a Log Analytics Workspace, a Data Collection Endpoint, a Data Collection Rule, and a table in your workspace. Set them up by following the Logs Ingestion tutorial. You will then need to enter the Directory (tenant) ID, Application (client) ID, Application secret, DCE URI, Table name, and DCR ID in the respective fields.
With metrics integrations, you can filter what metrics to send based on regular expressions for queues and vhosts. You can also decide if you want to include metrics for auto-delete queues. We send metrics every 60 seconds by default, but this value can be changed to 10 seconds or higher.
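To illustrate how such regular-expression filters behave, here is a small sketch (the patterns and queue/vhost names are hypothetical examples, not CloudAMQP defaults):

```python
import re

# Hypothetical filter patterns, mirroring what you might enter in the console.
queue_filter = re.compile(r"^orders\..*")   # only queues in the orders.* namespace
vhost_filter = re.compile(r"^(?!test).*")   # skip vhosts whose names start with "test"

queues = [
    ("prod", "orders.created"),
    ("test", "orders.created"),
    ("prod", "audit.log"),
]

# Only (vhost, queue) pairs matching both filters have their metrics exported.
exported = [
    (vhost, queue) for vhost, queue in queues
    if vhost_filter.match(vhost) and queue_filter.match(queue)
]
# exported → [("prod", "orders.created")]
```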
CloudAMQP offers metrics integrations to:
Read more about our Azure Monitor integration
Link: https://aws.amazon.com/cloudwatch
For CloudWatch, we have three integrations. The first CloudWatch integration has been around for a long time, and CloudWatch has evolved since then. We didn't want to break existing setups, so we added a second integration that leverages newer CloudWatch features. The third version exports Prometheus metrics.
Read more about CloudAMQP CloudWatch (legacy) integration
Read more about CloudAMQP CloudWatch V2 (legacy) integration
Read more about CloudAMQP CloudWatch V3 integration
Link: https://www.librato.com
Read more about our Librato (legacy) integration
Link: https://www.datadoghq.com/
For Datadog, we have three integrations. The first Datadog integration has been around for a long time. Version two matches all metrics to the dashboards in Datadog: when you activate this integration, the RabbitMQ - Overview and RabbitMQ - Metrics dashboards are populated with data automatically. The third version does the same, but matches the RabbitMQ - Overview (OpenMetrics Version) dashboard, since it exports Prometheus metrics.
Read more about our Datadog (legacy) integration
Read more about our Datadog V2 (legacy) integration
Read more about our Datadog V3 integration
Link: https://www.dynatrace.com
Read more about our Dynatrace integration
Link https://newrelic.com
For New Relic we have two integrations. New Relic V3 exports server and broker metrics in Prometheus format.
Read more about the NewRelic (legacy) integration
Read more about the NewRelic V3 integration
For Stackdriver we have two integrations. Stackdriver V2 exports server and broker metrics in Prometheus format.
Read more about our Stackdriver (legacy) integration
Read more about our Stackdriver V2 integration
Link: https://www.splunk.com
For Splunk we have two integrations. Splunk V2 exports server and broker metrics in Prometheus format.