Remote Monitor

From OpenNMS


Create a process that can be installed on a remote machine, perhaps with access to a network that the main OpenNMS instance cannot reach, with the goal of achieving visibility of a service defined in OpenNMS from the user's perspective. It will be able to perform all of the monitoring checks that the current service monitors do (ICMP, HTTP, etc.) and send events such as nodeDown or nodeLostService back to the main OpenNMS instance.

It will not do any discovery on its own; rather, it will take direction from the main OpenNMS instance as to what to monitor.
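The division of labor above can be sketched as a simple control loop: the remote process asks the central instance what to monitor, runs the checks, and reports each result back. This is an illustrative sketch only; all names (FakeMom, poll_once, the TCP-only check) are hypothetical stand-ins, not the actual OpenNMS implementation.

```python
# Hypothetical sketch of the remote-poller control loop: no local discovery;
# the poller takes its work list from the central instance (the "MOM"),
# runs the checks, and reports status back.

import socket

def check_tcp_service(host, port, timeout=2.0):
    """A minimal service check: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "Up"
    except OSError:
        return "Down"

def poll_once(mom):
    """One polling pass: check every configured service, report each result."""
    for svc in mom.get_config():
        status = check_tcp_service(svc["host"], svc["port"])
        # On failure, the MOM would translate this into nodeLostService, etc.
        mom.report_status(svc["name"], status)

class FakeMom:
    """Stand-in for the central instance, for demonstration only."""
    def __init__(self, config):
        self.config = config
        self.reports = []
    def get_config(self):
        return self.config
    def report_status(self, name, status):
        self.reports.append((name, status))

if __name__ == "__main__":
    mom = FakeMom([{"name": "HTTP-localhost", "host": "127.0.0.1", "port": 80}])
    poll_once(mom)
    print(mom.reports)
```

In a real deployment the check would run on a schedule and the results would travel over the network, but the shape (pull configuration, poll, push status) is the same.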

This Milestone is dependent on the first sub-task of the Correlation milestone.

Distributed polling is an enhancement that is part of a custom development project. In order to distribute the poller code, the correlation logic currently contained in the OpenNMS Poller daemon will be removed and become a separate service, allowing different correlation services to be added. Examples include topology-based correlation logic from the Italian Adventures branch in CVS and business-logic correlation using a rules engine such as Drools.

Sub Tasks

  • Build standalone asynchronous poller
  • Build Web Start framework for on demand distribution of poller
  • Enhance object model to allow multipoint service monitoring

OpenNMS Remote Monitoring Use Case



OpenNMS-MOM

For lack of a better term, this is the central OpenNMS system that runs all the OpenNMS daemons and manages communication with the OpenNMS-DPs.


OpenNMS-DP

A lightweight polling process that communicates with the OpenNMS-MOM. An OpenNMS-DP can operate in the role of either a distributed polling or a distributed monitoring application.

Remote Polling

An OpenNMS polling process that discovers, schedules, monitors, and correlates services, reporting discovery of new entities and correlated status messages to the OpenNMS-MOM. (This project has not been scoped.)

Remote Monitoring

An OpenNMS polling process that simply receives a polling schedule from the OpenNMS-MOM and monitors a list of services that have been identified by the OpenNMS administrator for distributed monitoring. Each poll status is reported back to the OpenNMS-MOM and aggregated such that the distributed pollers are monitoring the same application from multiple perspectives.
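The aggregation described above can be sketched as a status rollup across locations. The three-way rollup below (Up / Degraded / Down) is an assumption for illustration, not the documented OpenNMS aggregation behavior.

```python
# Hypothetical sketch of how the MOM might aggregate poll statuses reported
# for one application from several remote locations.

def aggregate_status(statuses_by_location):
    """Roll up per-location statuses into one view of the application."""
    statuses = list(statuses_by_location.values())
    if all(s == "Up" for s in statuses):
        return "Up"        # every perspective sees the service
    if all(s == "Down" for s in statuses):
        return "Down"      # no perspective can reach it
    return "Degraded"      # reachable from some locations only

if __name__ == "__main__":
    print(aggregate_status({"NYC": "Up", "London": "Down", "Tokyo": "Up"}))
    # prints "Degraded"
```

The value of the multi-perspective design shows up in the middle case: a central poller would report the service as simply up or down, while the aggregate distinguishes a partial outage visible only from some locations.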

OpenNMS Entity

An abstraction in the OpenNMS object model from which persisted objects are extended and which can be represented with status and/or performance metrics (e.g., network service, interface, node, virtual node, or application).

Polling configuration

A flexible XML-based configuration that defines, for an OpenNMS poller, the services to be monitored on OpenNMS Entities, the polling schedule, a downtime polling model, and the tunable parameters for each service monitor.
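A hypothetical fragment illustrating those four pieces together. The element names follow the general shape of an OpenNMS poller configuration, but the specific attributes and values here are illustrative only, not a verbatim or complete schema.

```xml
<!-- Illustrative sketch only; not the exact OpenNMS schema. -->
<poller-configuration threads="10">
  <package name="example-remote-location">
    <!-- Schedule: poll every 5 minutes (interval is in milliseconds) -->
    <service name="HTTP" interval="300000" status="on">
      <!-- Tunable parameters for this service monitor -->
      <parameter key="retry" value="1"/>
      <parameter key="timeout" value="3000"/>
      <parameter key="port" value="80"/>
    </service>
    <!-- Downtime model: poll every 30s for the first 5 minutes of an outage -->
    <downtime interval="30000" begin="0" end="300000"/>
  </package>
  <monitor service="HTTP"
           class-name="org.opennms.netmgt.poller.monitors.HttpMonitor"/>
</poller-configuration>
```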

Use Case

Typically, resources on the network are monitored by a network management system from a central location. Often, these resources are accessible via multiple paths on a network, perhaps over various WAN and VPN technologies, where outages and performance degradation may occur without being observed by a central NMS that may sit in the same location as the resources it is monitoring. These resources need to be monitored from multiple remote locations so that their status can be seen from the perspective of the users accessing them.

The status and performance measurements (e.g., latency) of these services can be monitored from multiple locations and viewed collectively at the OpenNMS-MOM.

An OpenNMS administrator defines an entity (an Application Entity, for example) in the OpenNMS WebUI that is composed of services to be monitored by one or more distributed pollers. (Note: those services may be reachable only by the distributed pollers and not by the OpenNMS-MOM's central polling services.)

Detailed Configuration Explanation:

The OpenNMS administrator recognizes the requirement to monitor an application from multiple locations.


  1. The OpenNMS-MOM administrator creates a new OpenNMS Application entity that will be used to aggregate the status of services provided by that application and monitored remotely by an OpenNMS-DP.
  2. The OpenNMS-MOM administrator defines remote polling locations.
  3. The OpenNMS-MOM administrator creates or modifies a polling configuration for each remote polling location that will be used to monitor services defined for the Application entity created in step 1.
  4. A system administrator installs the distributed polling code on one or more remote systems in the required locations. They modify the distributed monitoring properties file, on each instance, to define the remote location (using the location name provided by the OpenNMS-MOM administrator from step 2) and the IP address of the OpenNMS-MOM.
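The properties file in step 4 might look like the sketch below. The key names are hypothetical stand-ins, not actual OpenNMS configuration keys; only the two values the step names are assumed (the location from step 2 and the OpenNMS-MOM's address).

```properties
# Hypothetical properties for one OpenNMS-DP instance;
# key names are illustrative.

# The remote location name provided by the OpenNMS-MOM administrator (step 2)
location.name = branch-office-east

# Where this poller reports: the IP address of the OpenNMS-MOM
opennms.mom.address = 192.0.2.10
```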

Detailed Operational Explanation:

  1. Following the configuration steps above, the remote system administrator starts the OpenNMS-DP and verifies its connection to the OpenNMS-MOM, either by examining the poller.log file or via the optional OpenNMS-DP GUI.
  2. The OpenNMS-MOM receives the initial communication from the OpenNMS-DP and registers it as active.
  3. The OpenNMS-MOM sends the OpenNMS-DP the polling configuration defined by the OpenNMS-MOM administrator and the OpenNMS-DP begins monitoring services and reporting poll status information.
  4. An aggregated status view of each of the OpenNMS Entities being remotely monitored can be seen in the OpenNMS WebUI.
  5. The status of monitors themselves is represented in the OpenNMS WebUI as determined by successful communication and execution of the remote polling schedule. Alarms and notifications are initiated when failures occur. Distributed pollers can be configured to cease all monitoring activity when communication with OpenNMS-MOM is lost (the lysine contingency).
  6. Distributed monitors have a separate thread that continuously checks for updates to the remote location's polling configuration and immediately adopts any new configuration.
  7. Poll status messages reported to the OpenNMS-MOM contain:
    1. status of services
    2. latency of services
    3. with some monitors (such as HTTP) bandwidth utilization. Bandwidth utilization is calculated by requesting a static HTML page and determining the size and the time required to make the transfer.
  8. List of Distributable Monitors
    1. CitrixMonitor
    2. DnsMonitor
    3. DominoIIOPMonitor
    4. FtpMonitor
    5. HttpMonitor
    6. HttpsMonitor
    7. ImapMonitor
    8. JDBCMonitor
    9. JMXMonitor
    10. LdapMonitor
    11. LoopMonitor
    12. NrpeMonitor
    13. NsclientMonitor
    14. NtpMonitor
    15. PageSequenceMonitor
    16. Pop3Monitor
    17. RadiusAuthMonitor
    18. SmtpMonitor
    19. SshMonitor
    20. TcpMonitor
  9. List of Monitors in 1.3.7
    1. AvailabilityMonitor
    2. CitrixMonitor
    3. DnsMonitor
    4. DominoIIOPMonitor
    5. FtpMonitor
    6. HttpMonitor
    7. HttpsMonitor
    8. ImapMonitor
    9. JDBCMonitor
    10. JMXMonitor
    11. LdapMonitor
    12. LoopMonitor
    13. NrpeMonitor
    14. NsclientMonitor
    15. NtpMonitor
    16. PageSequenceMonitor
    17. Pop3Monitor
    18. RadiusAuthMonitor
    19. SmtpMonitor
    20. SshMonitor
    21. TcpMonitor
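The bandwidth estimate described in item 7 reduces to dividing the transferred size by the transfer time. A minimal sketch, with illustrative function names (this is not the actual HttpMonitor implementation):

```python
# Sketch of the bandwidth estimate: fetch a static page, time the transfer,
# and divide bytes by seconds. The URL is supplied by the caller; error
# handling is omitted for brevity.

import time
import urllib.request

def bytes_per_second(size_bytes, elapsed_seconds):
    """The bandwidth figure: transfer size divided by transfer time."""
    return size_bytes / elapsed_seconds

def estimate_bandwidth(url):
    """Fetch a static page and time the transfer (requires network access)."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    return bytes_per_second(len(body), time.monotonic() - start)
```

Because the measurement includes connection setup and server response time, it is an end-to-end throughput figure as seen from the poller's location, which is exactly the user-perspective view this milestone is after.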