Performance Data TCP Export

From OpenNMS


In addition to storing data in RRD files, OpenNMS can export performance metrics to a remote TCP service.

Why would I want to use this?

If you want to use your own graphing or data storage engine instead of OpenNMS's built-in RRD storage, this is an easy way to get real-time access to the same data that is stored in the RRDs. You must write a simple TCP listener to accept the incoming performance data, but the data arrives in an easy-to-parse, high-performance Google Protocol Buffers binary format. Documentation on this protocol, along with code for C++, Java, and Python, can be found at:

There is a contributed client receiver (listener) for the TCP RRD strategy. It receives RRD updates over TCP and prints them to STDOUT.

Usage: java -jar perfdata-receiver-X.X.jar [port]

The compiled jar can be found at:


You can enable and configure this service by editing the following properties in the file:

# If you would like to export performance data to an external system
# over a TCP port, please set org.opennms.rrd.usetcp to 'true' and fill
# in your values for the external listener.
# The IPv4 address or hostname of the target system
# The TCP port where the target system is listening for performance data
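Filled in, the configuration might look like the following sketch. The `org.opennms.rrd.usetcp` property name comes from the comments above; the host and port property names (`org.opennms.rrd.tcp.host`, `org.opennms.rrd.tcp.port`) and the values are assumptions and should be verified against the shipped properties file.

```properties
# Enable TCP export of performance data (sketch; verify property
# names against the shipped properties file)
org.opennms.rrd.usetcp=true
# The IPv4 address or hostname of the target system (example value)
org.opennms.rrd.tcp.host=192.0.2.10
# The TCP port where the target system is listening (example value)
org.opennms.rrd.tcp.port=9100
```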


The on-the-wire protocol is TCP, and the performance data is sent by OpenNMS using Google's data interchange format, Protocol Buffers (protobuf). The message structure is:

message PerformanceDataReading {
  required string path = 1;
  required string owner = 2;
  required uint64 timestamp = 3;
  repeated double value = 4;
}

message PerformanceDataReadings {
  repeated PerformanceDataReading message = 1;
}

When OpenNMS sends performance data, it will open the sending socket, transmit one PerformanceDataReadings message containing one or more PerformanceDataReading messages with performance data, and then close the connection.
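The one-message-per-connection behavior makes a listener straightforward: read each accepted connection until EOF, and the bytes read are exactly one serialized PerformanceDataReadings message. A minimal sketch in Python (the function name is illustrative; decoding the payload would additionally require classes generated from the message definitions above):

```python
import socket

def receive_readings(port, max_connections=1):
    """Accept TCP connections and collect the raw protobuf payloads.

    OpenNMS opens a connection, writes one serialized
    PerformanceDataReadings message, and closes the socket, so each
    connection's payload is simply "read until EOF".
    """
    payloads = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen()
        for _ in range(max_connections):
            conn, _addr = srv.accept()
            with conn:
                chunks = []
                while True:
                    data = conn.recv(4096)
                    if not data:          # sender closed the connection
                        break
                    chunks.append(data)
                payloads.append(b"".join(chunks))
    return payloads
```

Each returned payload could then be decoded with the protobuf-generated class, e.g. `PerformanceDataReadings.FromString(payload)` in Python's protobuf bindings.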

Performance Considerations

Performance data is buffered inside Collectd (with a buffer capacity of several thousand performance data messages). If TCP transmission cannot keep up with the rate at which data is generated, incoming messages are dropped as necessary so that the buffer does not overflow.
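The drop-on-overflow behavior described above amounts to a bounded queue that discards new messages when full. The following is an illustrative sketch of that semantics only, not OpenNMS's actual Collectd implementation:

```python
from collections import deque

class BoundedPerfDataBuffer:
    """Sketch of drop-newest buffering: when the buffer is full,
    incoming messages are discarded so the buffer never overflows.
    (Illustrative only; not OpenNMS's actual code.)"""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def offer(self, message):
        """Enqueue a message, or drop it if the buffer is full."""
        if len(self.queue) >= self.capacity:
            self.dropped += 1  # transmission too slow: message is lost
            return False
        self.queue.append(message)
        return True

    def poll(self):
        """Dequeue the oldest buffered message, or None if empty."""
        return self.queue.popleft() if self.queue else None
```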