Structure of a Typical Cluster

A DPI cluster consists of the following components:

  1. Cluster controller – EXA series packet broker.
  2. Cubro Service Gateway nodes.
  3. External optical bypass.
  4. Data storage subsystems for generating statistical and analytical reports (QoE).
  5. A set of connecting cables and optical modules.
  6. Top-of-rack switch.
  7. Servers for additional and management components.

If one of the Cubro Service Gateway nodes fails, traffic within the cluster is dynamically redistributed among the remaining nodes; adding extra servers therefore provides an N+x redundancy configuration. The cluster scales by adding Cubro Service Gateway devices, up to a maximum of 16 nodes. The EXA packet broker, acting as the cluster controller, is integrated with external optical switches that bypass the system in case of an emergency.
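The controller's actual distribution algorithm is not described here; as an illustration only, a consistent scheme such as rendezvous hashing has the property the text relies on: when a node fails, only the flows that node was handling move, while all other flows stay on their current nodes. All names (`pick_node`, `csg-*`) are hypothetical.

```python
import hashlib

def pick_node(flow_id: str, nodes: list[str]) -> str:
    """Rendezvous (highest-random-weight) hashing: each flow is assigned to
    the node with the highest score; removing a node remaps only its flows."""
    return max(nodes, key=lambda n: hashlib.sha256(f"{flow_id}:{n}".encode()).hexdigest())

nodes = ["csg-1", "csg-2", "csg-3", "csg-4"]
flows = [f"flow-{i}" for i in range(1000)]
before = {f: pick_node(f, nodes) for f in flows}

# Simulate failure of csg-2: its flows are redistributed, others are untouched.
survivors = [n for n in nodes if n != "csg-2"]
after = {f: pick_node(f, survivors) for f in flows}
moved = [f for f in flows if before[f] != after[f]]
assert all(before[f] == "csg-2" for f in moved)
```

The same property also covers the software-update case described later: draining one node reroutes only that node's share of the traffic.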

The optical switch is a 1U device that holds up to three modules for two-fiber SM or MM lines. The modules support 1/10/40/100G transmission rates. Read more: Cubro CBR.BYSW.

Figure 1: General CSG cluster view

Ensuring Fault Tolerance and Failure Protection

The system is configured for redundancy using an N+x scheme, where x represents the number of additional nodes, or N+N with full redundancy of all components. Typically, under an N+1 scheme, one extra Cubro Service Gateway node is added to the system.

In the cluster controller settings (CLI), the number of Cubro Service Gateway nodes that can fail without impacting overall system performance must be defined. If the cluster controller detects that the number of active Cubro Service Gateway nodes is below this specified threshold, the entire system (site) will switch to bypass mode.
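The decision rule can be sketched as follows; the function and parameter names are illustrative, not the actual CLI option names, and `max_failed` stands for the x in the N+x scheme.

```python
def should_bypass(active_nodes: int, total_nodes: int, max_failed: int) -> bool:
    """Switch the whole site to bypass when more nodes have failed than the
    configured tolerance allows (max_failed is the x in the N+x scheme)."""
    return (total_nodes - active_nodes) > max_failed

# N+1 cluster of 5 nodes: one failure is tolerated, two trigger bypass.
assert not should_bypass(active_nodes=4, total_nodes=5, max_failed=1)
assert should_bypass(active_nodes=3, total_nodes=5, max_failed=1)
```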

Each Cubro Service Gateway node sends heartbeat packets to the cluster controller's management interface over UDP. In the cluster controller settings, the maximum number of heartbeat losses allowed before a Cubro Service Gateway node is marked as faulty must be defined. The heartbeat interval must be configured identically on both the cluster controller and the Cubro Service Gateway sides. Read more: FastBypass monitor.
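The loss-counting logic can be sketched as below. This is a simplified model, not the product's implementation: the class and parameter names are invented, and the threshold semantics (node is faulty once the number of missed intervals exceeds `max_losses`) are an assumption.

```python
class HeartbeatMonitor:
    """Marks a node faulty after more than `max_losses` consecutive missed
    heartbeats; `interval_s` mirrors the interval that must match on both
    the controller and the CSG sides."""
    def __init__(self, interval_s: float, max_losses: int):
        self.interval_s = interval_s
        self.max_losses = max_losses
        self.last_seen: dict[str, float] = {}

    def heartbeat(self, node: str, now: float) -> None:
        # Record the arrival time of the latest heartbeat from this node.
        self.last_seen[node] = now

    def is_faulty(self, node: str, now: float) -> bool:
        last = self.last_seen.get(node)
        if last is None:
            return True  # never seen: treat as faulty
        missed = int((now - last) // self.interval_s)
        return missed > self.max_losses

mon = HeartbeatMonitor(interval_s=1.0, max_losses=3)
mon.heartbeat("csg-1", now=0.0)
assert not mon.is_faulty("csg-1", now=3.5)  # 3 missed intervals: still OK
assert mon.is_faulty("csg-1", now=4.5)      # 4 missed: marked faulty
```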

To manually switch the system to bypass mode or return it to traffic-carrying mode, a command-line utility is available in the cluster controller's command shell. Read more: External Bypass Management.

The system operator can also configure signal-level monitoring on the optical switch, setting parameters for tracking power levels or signal presence on selected channels from the GUI bypass card.

During software updates that require restarting the traffic-processing core on a Cubro Service Gateway node, the node being updated is automatically taken offline and its traffic is rerouted to the remaining active Cubro Service Gateway nodes.