Revision as of 16:58, 5 April 2018 by Jwhite (talk | contribs)


Since OpenNMS is used to process ever-increasing loads of data, including SNMP traps, syslog messages, streaming telemetry and flows, it is becoming paramount that we develop a solution to scale the processing horizontally and break it out into multiple JVMs.

The goal of the Sentinel project is to provide a new runtime which can be used to host these workloads in a scalable fashion and offload this processing from the OpenNMS instance.

Since we're happy with the way the Karaf container has been working for Minion, we would like to continue using Karaf as the base runtime for Sentinel. Unlike Minion, however, we expect the Sentinel container to run alongside OpenNMS rather than at the customer premise, meaning that it will have direct access to the database and other back-end resources such as Cassandra and Elasticsearch.

Initial Research

We performed some initial research in the following JIRA issues:

This mainly involved reviewing the current structure of the project, and attempting to find the best approach for structuring the new container.


Project Layout


Embedded Container (21.0.0)

The following projects are used to build the embedded container that sits on top of the JettyServer service in the OpenNMS JVM.

  • OpenNMS logo for the Karaf shell
  • Jetty bridge
  • Feature definitions for both OpenNMS and Minion features
  • JAAS integration for the Karaf shell
  • The OpenNMS Karaf assembly
  • Jetty integration
  • Unused (and should probably be deleted)

Minion (21.0.0)

For comparison, the following projects are defined in 'features/minion':

  • Karaf feature aimed at making it easier to control the featuresBoot and installed repositories by overlaying files instead of editing existing files
  • Feature definitions for "base" container features i.e. scv and karaf-extender
  • The Minion Karaf assembly
  • Secure Credential Vault bundle and blueprint with the default JCEKS implementation
  • Core Minion API which includes interfaces for the MinionIdentity and RestClient (for interfacing with OpenNMS)
  • Feature definitions for the core modules i.e. minion-core, minion-jms
  • Implementation of the core API
  • JMS client
  • Assembly for the core features and artifacts
  • Core shell commands i.e. minion:ping
  • Heartbeat Sink Module
  • Heartbeat Sink Consumer (runs on OpenNMS)
  • Heartbeat Sink Producer (runs on Minion)
  • Default repository for all Minion features (defined in 'container/features/src/main/resources/features-minion.xml') and artifacts
  • Minion specific shell commands for collection, polling, and provisioning

The Minion assembly is also defined in 'opennms-assemblies/minion'


  • The root 'container' project will contain both the projects that relate to the embedded container and the components that are shared among containers
  • The 'features/minion' project will contain Minion specific projects
  • The 'features/sentinel' project will contain Sentinel specific projects

To achieve this we need to:

  • Move 'features/minion/container/extender' and 'features/minion/container/scv' to 'container/'
  • Remove 'features/minion/container/features' and move the current feature file into 'container/'
  • Create a new Karaf container assembly in 'features/sentinel/container/karaf'
  • Create a new assembly in 'opennms-assemblies/sentinel'
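
As a rough sketch of where the steps above lead, a Karaf feature repository under 'features/sentinel' might start out like the fragment below. All feature, bundle and version names here are hypothetical placeholders, not decided artifacts:

```xml
<!-- Hypothetical starting point for a Sentinel feature repository;
     names and versions are placeholders only. -->
<features name="sentinel-features" xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
    <feature name="sentinel-core" version="22.0.0-SNAPSHOT" description="Core Sentinel services">
        <!-- Reuse the shared container features moved to 'container/', e.g. the Karaf extender -->
        <feature>karaf-extender</feature>
        <bundle>mvn:org.opennms.features.sentinel/core/22.0.0-SNAPSHOT</bundle>
    </feature>
</features>
```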


  • Similar to the Minion container, we should provide both .rpm and .deb packages as well as a tarball distribution of the container
  • The default settings should use ports that do not conflict with either OpenNMS or Minion, allowing us to launch all 3 JVMs on a single system (primarily for development purposes)
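
For illustration, the Sentinel container's Karaf SSH port could be shifted into its own range, analogous to how Minion avoids the ports of the embedded OpenNMS container. The value below is an example, not a decided default:

```
# etc/org.apache.karaf.shell.cfg in the Sentinel container.
# 8101 is used by the embedded OpenNMS container and 8201 by Minion,
# so an example non-conflicting choice would be:
sshPort = 8301
```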

Database connectivity

  • Sentinel users should be able to configure the JDBC settings used for database connectivity
    • We can either expose these using the existing opennms-datasources.xml, or provide a smaller subset of this configuration.
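
For reference, a data source entry in the existing opennms-datasources.xml has roughly the following shape (values shown are the usual defaults); a Sentinel-specific configuration could expose just this subset of attributes:

```xml
<jdbc-data-source name="opennms"
                  database-name="opennms"
                  class-name="org.postgresql.Driver"
                  url="jdbc:postgresql://localhost:5432/opennms"
                  user-name="opennms"
                  password="opennms" />
```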

Sink Consumer

  • We should be able to support all of the existing Sink consumer implementations
    • These currently include JMS (ActiveMQ), Kafka and SQS
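
The consumer side of the Sink pattern can be sketched as follows. This is a simplified, hypothetical model for illustration only, not the actual OpenNMS Sink API; the broker-specific transport (JMS, Kafka or SQS) is abstracted away behind the dispatch call:

```java
// Hypothetical, simplified model of the Sink consumer pattern; names are
// illustrative and do not match the real OpenNMS Sink API.
import java.util.ArrayList;
import java.util.List;

interface MessageConsumer<T> {
    String getModuleId();            // the Sink module this consumer handles
    void handleMessage(T message);   // called for each message pulled off the broker
}

class ConsumerRegistry<T> {
    private final List<MessageConsumer<T>> consumers = new ArrayList<>();

    void register(MessageConsumer<T> consumer) {
        consumers.add(consumer);
    }

    // The broker-specific layer (JMS, Kafka or SQS) would call this for each
    // received message; consumers only see the module id and the payload.
    void dispatch(String moduleId, T message) {
        for (MessageConsumer<T> consumer : consumers) {
            if (consumer.getModuleId().equals(moduleId)) {
                consumer.handleMessage(message);
            }
        }
    }
}
```

Because consumers are keyed only by module id, the same consumer bundle could run unchanged on OpenNMS or Sentinel, which is what makes offloading the processing possible.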


At a minimum, we should provide some form of JMX monitoring and collection for the Sentinel containers.
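
Whatever collector is used, the data ultimately comes from the JVM's platform MBeans. A minimal sketch of reading heap usage via the standard JMX API; a remote collection against a Sentinel container would query the same attribute over a JMX connector instead:

```java
// Reading a JVM metric through the platform MBeanServer using only
// standard JMX APIs.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

class JmxProbe {
    static long usedHeapBytes() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName memory = new ObjectName("java.lang:type=Memory");
        CompositeData heap = (CompositeData) server.getAttribute(memory, "HeapMemoryUsage");
        return (Long) heap.get("used");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Used heap: " + usedHeapBytes() + " bytes");
    }
}
```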


  • Should be able to run a sentinel:ping command that verifies connectivity with the required resources
    • Instead of hard-coding the checks in the ping command, we can pull them in from the OSGi registry at run-time
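
A minimal sketch of that design, with a hypothetical HealthCheck interface standing in for the services that would be pulled from the OSGi registry at run-time:

```java
// Hypothetical sketch: each connectivity check is a separate service rather
// than being hard-coded in the ping command. In the container these would be
// looked up in the OSGi service registry; here they are passed in directly.
import java.util.List;

interface HealthCheck {
    String getDescription();          // e.g. "Connecting to PostgreSQL"
    void perform() throws Exception;  // throws when the resource is unreachable
}

class PingCommand {
    // Returns true only when every registered check succeeds.
    static boolean runChecks(List<HealthCheck> checks) {
        boolean allPassed = true;
        for (HealthCheck check : checks) {
            try {
                check.perform();
                System.out.println(check.getDescription() + ": OK");
            } catch (Exception e) {
                System.out.println(check.getDescription() + ": FAILED (" + e.getMessage() + ")");
                allPassed = false;
            }
        }
        return allPassed;
    }
}
```

With this split, adding a check for a new back-end resource (say, Cassandra) means registering one more service, with no change to the command itself.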


In Scope

  • Base infrastructure
  • Handling of flows via Kafka

Out of Scope

  • Anything that needs to send or receive events