RRD performance fundamentals

From OpenNMS

How to maximize RRD storage performance

  • Use a mirrored stripe (RAID-10) with enough disks to handle the amount of data you need to collect. A single disk, a mirrored pair (RAID-1), or a RAID-5 array is only appropriate for an installation doing a small amount of data collection. A hardware write cache can increase performance if the cache is large enough to hold all active RRAs. See the Tuning RRD reference below for details.
  • Ensure that the system has enough physical RAM to cache all active RRAs, so that updates require only writes, not reads in addition to writes. See the Tuning RRD reference below for details.
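    One rough way to check this is to compare the on-disk size of the RRD tree against available memory; the path below is a common default and may differ on your installation:

```shell
# Size of the RRD tree (path is an assumption -- adjust for your install)
du -sh /opt/opennms/share/rrd
# RAM that is free or in use as page cache
free -m
```

    If the RRD tree is much larger than the memory available for the page cache, every update is likely to incur a read as well as a write.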
  • On UNIX systems, mount the filesystem with the "noatime" option; if your system also supports "nodiratime", enable it as well.
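    For example, an /etc/fstab entry along these lines (the device and mount point are placeholders, not OpenNMS defaults):

```
/dev/md0  /opt/opennms/share/rrd  ext3  defaults,noatime,nodiratime  0  2
```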
  • Check the block size of the filesystem holding your RRD data. Most of the time RRD data is written or read in very small pieces (writing a floating-point value needs 8 bytes), but to write those 8 bytes a whole block has to be read, the 8 bytes changed, and the block written back. The header then has to be changed in the same way (to update the date). If the block size of your filesystem is too large, you get a lot of overhead. Most filesystems and operating systems also do a "read-ahead" and read more than one block, which is only needed when you graph your RRD data. On a Solaris system with a ZFS filesystem, reducing the block size from 128K to 16K reduced writes to the RRD files from 20 Mbit/s to 5 Mbit/s, and reads from 20 Mbit/s to 10 Mbit/s.
    For ZFS filesystems, use the following command. All files created after this change will use the new block size; if you don't want to lose your already-collected data, you can move your RRD files to another place and recreate them using tar, cpio, or similar tools.

zfs set recordsize=16K name_of_filesystem
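Since recordsize only affects newly created files, existing RRD files can be rewritten to pick up the new block size. A sketch of one way to do this (paths are placeholders; stop OpenNMS first so no updates are lost):

```shell
# Copy the RRD tree aside, remove it, then unpack it again so every
# file is rewritten with the new recordsize (paths are examples).
cd /name_of_filesystem
tar cf /var/tmp/rrd-backup.tar rrd
rm -rf rrd
tar xf /var/tmp/rrd-backup.tar
```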


  • RRD data storage causes a large number of small random disk writes, usually a few writes for each update.
  • By default, OpenNMS stores each collected variable in its own file, unless the store by group feature is enabled.
  • Normally, there will be two or three writes for each update: one for the file header, one for the previous RRA, and one for the next RRA.
  • When multiple samples are consolidated into a single stored data point in an RRA, there will be additional writes. By default, such consolidations happen hourly and daily on the GMT day boundary, which causes a higher-than-normal number of writes just after the top of the hour and after the GMT day boundary.
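As a back-of-the-envelope sizing exercise (the node and metric counts below are made-up examples, not recommendations), the sustained random-write load follows from the collection interval and the writes per update described above:

```shell
# Hypothetical installation: 2000 nodes x 50 collected variables,
# one RRD file per variable (no store-by-group), 5-minute interval.
metrics=$((2000 * 50))
writes_per_update=3   # header + previous RRA + next RRA
interval=300          # seconds between updates
iops=$(( metrics * writes_per_update / interval ))
echo "sustained random write IOPS: ${iops}"
# prints: sustained random write IOPS: 1000
```

Roughly 1000 sustained random write IOPS is far more than a single spindle can deliver, which is why the RAID-10 and write-cache advice above matters.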


  • On Unix/Linux systems, use tools like top or the sysstat tools (iostat, mpstat) to monitor your I/O performance; in particular, watch for I/O wait states. As you tune, you can see whether your efforts reduce the wait states. If they don't, you have probably spent a lot of memory on a tuning effort without getting better performance.
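    For example, to watch per-device utilization and wait times while collection is running (both tools are part of the sysstat package):

```shell
# Extended device statistics every 5 seconds; watch %util and await
# on the device that holds the RRD files.
iostat -x 5
# CPU-side view: the %iowait column shows time spent waiting on I/O.
mpstat 5
```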
  • You can set "<KeyValuePair key="queued" value="DEBUG" />" in the file log4j2.xml (log4j.properties prior to OpenNMS 14) and check the log file queued.log for lines containing QS. Look at totalOperationsPending; if it is large (in the tens of thousands to hundreds of thousands), then queued RRD updates are likely your problem. See FAQ-Troubleshooting#Q:_OpenNMS_keeps_on_running_out_of_memory_with_errors_like_.22java.lang.OutOfMemoryError:_Java_heap_space.22_in_output.log for details.
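    Once DEBUG logging is enabled, a quick way to watch the queue statistics is to filter the log for the QS lines mentioned above (the log directory is an assumption; adjust for your installation):

```shell
# Show the most recent queue-statistics lines from queued.log
grep "QS" /opt/opennms/logs/queued.log | tail -n 5
```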


  • In rrd-configuration.properties there are several options to tune the queueing strategy for writing RRD files. You can
    • mark zero data as being less important to write; under write contention it will be queued and written later, producing larger batched writes and minimizing disk I/O
    • mark zero data to be thrown away when the write queue grows beyond some limit
    • mark non-zero (i.e. significant) data to be thrown away beyond some write-queue limit, to prevent OpenNMS from using up all the heap space of your system and crashing your JVM

These features are most helpful if all your data lives on a single disk, in which case a lot of I/O waits are difficult to avoid. This is the situation for most people running OpenNMS for the first time to test it.

See the comments in rrd-configuration.properties for how to configure these parameters. (Included in version 1.6.x, and possibly in earlier versions.)
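A sketch of what such settings look like in rrd-configuration.properties. The exact property names vary between OpenNMS versions, so treat these as illustrative and verify them against the comments in your shipped file:

```
# Enable the queueing RRD strategy (names illustrative -- check your file).
org.opennms.rrd.usequeue=true
# Prefer writing significant (non-zero) updates first under contention.
org.opennms.rrd.queuing.prioritizeSignificantUpdates=true
# Drop insignificant (zero) updates once this many are queued.
org.opennms.rrd.queuing.inmemory.maxInsignificantOperationsInMemory=50000
# Cap on total queued operations to protect the JVM heap.
org.opennms.rrd.queuing.inmemory.maxTotalOperationsInMemory=200000
```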