Create a process that can be installed on a remote machine, perhaps one with access to a network that the main OpenNMS instance cannot reach, with the goal of achieving visibility of a service defined in OpenNMS from the user's perspective. It will be able to perform all of the monitoring checks that the current service monitors do (ICMP, HTTP, etc.) and send events such as nodeDown or nodeLostService back to the main OpenNMS instance.
It will not do any discovery on its own; rather, it will take direction from the main OpenNMS instance as to what to monitor.
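As a concrete illustration of the kind of check such a remote process would perform, here is a minimal sketch of an HTTP service poll whose result could be reported back to the main instance. The class, method, and status names are illustrative assumptions, not the actual OpenNMS monitor API.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch of a remote HTTP service check; names are
// illustrative, not the real OpenNMS service monitor interface.
public class RemoteHttpCheck {
    public enum Status { UP, DOWN }

    public static Status poll(String urlString, int timeoutMillis) {
        try {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(urlString).openConnection();
            conn.setConnectTimeout(timeoutMillis);
            conn.setReadTimeout(timeoutMillis);
            int code = conn.getResponseCode();
            // Treat 2xx/3xx as available; anything else as a failure.
            return (code >= 200 && code < 400) ? Status.UP : Status.DOWN;
        } catch (Exception e) {
            // Unreachable, timed out, or refused: the service is down
            // from this location's perspective.
            return Status.DOWN;
        }
    }
}
```

A DOWN result from a check like this is what would ultimately trigger a nodeLostService event back at the main instance.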
This Milestone is dependent on the first sub-task of the Correlation milestone.
Distributed polling is an enhancement that is part of a custom development project. To distribute the poller code, the correlation logic currently contained in the OpenNMS poller daemon will be removed and become a separate service, allowing different correlation services to be added: for example, topology-based correlation logic from the Italian Adventures branch in CVS, or business-logic correlation using a rules engine such as Drools.
- Build standalone asynchronous poller
- Build Web Start framework for on demand distribution of poller
- Enhance object model to allow multipoint service monitoring
OpenNMS Remote Monitoring Use Case
OpenNMS-MOM: For lack of a better term, this is the central OpenNMS system that runs all the OpenNMS daemons and manages communication with the OpenNMS-DPs.
OpenNMS-DP: A lightweight polling process that communicates with the OpenNMS-MOM. An OpenNMS-DP can operate in the role of either a distributed polling or a distributed monitoring application.
Distributed polling application: An OpenNMS polling process that discovers, schedules, monitors, and correlates services, reporting the discovery of new entities and correlated status messages to the OpenNMS-MOM. (This project has not been scoped.)
Distributed monitoring application: An OpenNMS polling process that simply receives a polling schedule from the OpenNMS-MOM and monitors a list of services identified by the OpenNMS administrator for distributed monitoring. Each poll status is reported back to the OpenNMS-MOM and aggregated so that the distributed pollers monitor the same application from multiple perspectives.
OpenNMS Entity: An abstraction in the OpenNMS object model from which persisted objects are extended; an entity can be represented with status and/or performance metrics (e.g. network service, interface, node, virtual node, application, etc.).
Polling configuration: An extremely flexible XML-based configuration that defines, for an OpenNMS poller, the services to be monitored on OpenNMS Entities, the schedule, a downtime polling model, and the tunable parameters for each service monitor.
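To make the shape of such a configuration concrete, here is a minimal sketch modeled loosely on the style of OpenNMS's poller configuration; the element and attribute names below are illustrative assumptions, not a documented schema.

```xml
<!-- Illustrative per-location polling configuration sketch; element and
     attribute names are assumptions, not the actual OpenNMS schema. -->
<polling-configuration location="raleigh-office">
  <service name="HTTP" interval="300000">
    <parameter key="retry" value="1"/>
    <parameter key="timeout" value="3000"/>
    <parameter key="port" value="80"/>
  </service>
  <!-- Downtime model: during the first 5 minutes of an outage,
       re-poll the failed service every 30 seconds. -->
  <downtime interval="30000" begin="0" end="300000"/>
</polling-configuration>
```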
Typically, resources on the network are monitored by a network management system from a central location. Often these resources are accessible via multiple paths on the network, perhaps over various WAN and VPN technologies, where outages and performance degradation may occur without being observed by a central NMS that may sit in the same location as the resources it is monitoring. These resources need to be monitored from multiple remote locations so that their status can be seen from the perspective of the users accessing them.
The status and performance measurements (e.g. latency) of these services can be monitored from multiple locations and viewed collectively at the OpenNMS-MOM.
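One simple way the OpenNMS-MOM could aggregate per-location results into a collective view is a percent-available figure across locations. This is a sketch under assumed names; the actual aggregation rule used by OpenNMS is not specified here.

```java
import java.util.Map;

// Illustrative sketch of aggregating per-location poll status for one
// application; the class name and the aggregation rule are assumptions.
public class StatusAggregator {
    // Percentage of remote locations that currently see the service as up.
    public static double percentAvailable(Map<String, Boolean> statusByLocation) {
        if (statusByLocation.isEmpty()) {
            return 0.0;
        }
        long up = statusByLocation.values().stream().filter(b -> b).count();
        return 100.0 * up / statusByLocation.size();
    }
}
```

A figure like this answers the question the use case poses: is the application reachable from the perspectives that matter, not just from the central NMS.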
An OpenNMS administrator defines an entity (an Application entity, for example) in the OpenNMS WebUI that is composed of services to be monitored by one or more distributed pollers. (Note: those services may be reachable only by the distributed pollers and not by the OpenNMS-MOM's central polling services.)
Detailed Configuration Explanation:
The OpenNMS administrator recognizes the requirement to monitor an application from multiple locations.
1. The OpenNMS-MOM administrator creates a new OpenNMS Application entity that will be used to aggregate the status of services provided by that application and monitored remotely by an OpenNMS-DP.
2. The OpenNMS-MOM administrator defines the remote polling locations.
3. The OpenNMS-MOM administrator creates or modifies a polling configuration for each remote polling location that will be used to monitor the services defined for the Application entity created in step 1.
4. A system administrator installs the distributed polling code on one or more remote systems in the required locations, then modifies the distributed monitoring properties file on each instance to define the remote location (using the location name provided by the OpenNMS-MOM administrator in step 2) and the IP address of the OpenNMS-MOM.
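The properties file mentioned in the last step might look like the following sketch; the property keys and values are illustrative assumptions, not the shipped OpenNMS key names.

```properties
# Hypothetical distributed-monitoring properties file; the keys below
# are illustrative, not the actual OpenNMS property names.

# Remote location name, as defined by the OpenNMS-MOM administrator (step 2)
location.name=raleigh-office

# Address of the central OpenNMS-MOM instance (example address)
opennms.mom.address=192.0.2.10
```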
Detailed Operational Explanation:
- Following the configuration steps above, the remote system administrator starts the OpenNMS-DP and verifies its connection to the OpenNMS-MOM, either by looking at the poller.log file or via the optional OpenNMS-DP GUI.
- The OpenNMS-MOM receives the initial communication from the OpenNMS-DP and registers it as active.
- The OpenNMS-MOM sends the OpenNMS-DP the polling configuration defined by the OpenNMS-MOM administrator and the OpenNMS-DP begins monitoring services and reporting poll status information.
- An aggregated status view of each of the OpenNMS Entities being remotely monitored can be seen in the OpenNMS WebUI.
- The status of the monitors themselves is represented in the OpenNMS WebUI, as determined by successful communication and execution of the remote polling schedule. Alarms and notifications are initiated when failures occur. Distributed pollers can be configured to cease all monitoring activity when communication with the OpenNMS-MOM is lost (the lysine contingency).
- Distributed monitors have a separate thread that continuously checks for updates to the remote location's polling configuration and immediately adopts any new configuration.
- Poll status messages reported to the OpenNMS-MOM contain:
  - the status of services
  - the latency of services
  - for some monitors (such as HTTP), bandwidth utilization. Bandwidth utilization is calculated by requesting a static HTML page and measuring the size of the transfer and the time required to complete it.
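The bandwidth calculation described above (transfer size divided by elapsed time) can be sketched as follows; the class and method names are illustrative, not the actual monitor code.

```java
// Sketch of the bandwidth estimate: bytes transferred for a static page
// divided by the time the transfer took. Names are illustrative.
public class BandwidthEstimate {
    // Convert bytes moved in elapsedMillis into kilobits per second.
    public static double kbps(long bytes, long elapsedMillis) {
        if (elapsedMillis <= 0) {
            throw new IllegalArgumentException("elapsed time must be positive");
        }
        double bits = bytes * 8.0;            // bytes -> bits
        double seconds = elapsedMillis / 1000.0;
        return (bits / seconds) / 1000.0;     // bits/s -> kbit/s
    }
}
```

For example, transferring a 125,000-byte page in one second works out to 1,000 kbit/s from that location's perspective.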
List of Distributable Monitors:
- list of monitors in 1.3.7