Hardware Requirements and Performance

Cubro Service Gateway software is vendor-agnostic and runs on general-purpose servers with x86_64 architecture in a Linux RHEL/CentOS 8 environment. Installing it on CentOS 8.5 or CentOS 8 Stream is recommended.

The specifications for CPU and RAM are determined based on the required throughput. If the Cubro Service Gateway software is purchased separately for installation on the customer's hardware, the following requirements must be met:

CPU: One processor supporting the SSE 4.2 instruction set (Intel Nehalem, AMD EPYC Zen 2, or newer), with a minimum of 4 cores for every 10 Gbit/s of total traffic and a base clock frequency of 2.5 GHz or higher. Traffic processing runs on a single processor (socket). Hyper-Threading (Intel) and SMT (AMD) should be disabled in the platform's BIOS settings.

RAM: A minimum of 8 GB of RAM for every 10 Gbit/s of total processed traffic. Memory modules must be installed in all available processor channels.

SSD Disks: Two disks of at least 256 GB each, configured as a hardware RAID 1 (mirror) array, to host the OS and the Cubro Service Gateway software. NVMe M.2 or SAS/SATA SSDs are recommended.

Number of Network Ports: A minimum of 3 ports: one for SSH management (any chipset) and at least two interfaces for traffic processing.

Traffic Processing Interfaces: Cubro Service Gateway captures traffic via the DPDK driver on network cards whose chipsets support DPDK. The following chipsets are recommended, as they have undergone compatibility testing:

1GbE Interfaces:
- Intel e1000 (82540, 82545, 82546)
- Intel e1000e (82571, 82572, 82573, 82574, 82583, ICH8, ICH9, ICH10, PCH, PCH2, I217, I218, I219)
- Intel igb (82573, 82576, 82580, I210, I211, I350, I354, DH89xx)
- Intel igc (I225)

10GbE Interfaces:
- Intel ixgbe (82598, 82599, X520, X540, X550)
- Intel i40e (X710, XL710, X722, XXV710)
- Mellanox mlx5

25GbE Interfaces:
- Intel i40e (XXV710)
- Mellanox mlx5

For network cards above 10 Gbit/s, consider the PCIe bus throughput. For example, with a dual-interface 40G card on a PCIe 3.0 x8 bus, the maximum achievable throughput is 64 Gbit/s across both adapter ports. Thus, consulting Cubro technical support for full configuration guidance is recommended.
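The PCIe consideration above can be illustrated with a quick calculation. This is a sketch using the published per-lane rates and line-encoding overheads for PCIe 3.0 and 4.0; it gives raw one-direction bandwidth and ignores TLP/DMA protocol overhead, so real achievable throughput is somewhat lower:

```python
# Rough usable PCIe bandwidth per direction, illustrating why a dual-port
# 40G NIC on PCIe 3.0 x8 tops out around 64 Gbit/s across both ports.

# Per-lane raw rate (GT/s) and line-encoding efficiency per PCIe generation.
PCIE_GEN = {
    "3.0": (8.0, 128 / 130),   # 8 GT/s, 128b/130b encoding
    "4.0": (16.0, 128 / 130),  # 16 GT/s, 128b/130b encoding
}

def pcie_gbps(gen: str, lanes: int) -> float:
    """Raw one-direction bandwidth in Gbit/s (before protocol overhead)."""
    rate, eff = PCIE_GEN[gen]
    return rate * eff * lanes

gen3_x8 = pcie_gbps("3.0", 8)    # ~63 Gbit/s: a 2x40G card cannot run both ports flat out
gen4_x16 = pcie_gbps("4.0", 16)  # ~252 Gbit/s: why 100G cards want PCIe 4.0 x16
print(f"PCIe 3.0 x8:  {gen3_x8:.1f} Gbit/s")
print(f"PCIe 4.0 x16: {gen4_x16:.1f} Gbit/s")
```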

40GbE Interfaces:
- Intel i40e (X710, XL710, X722, XXV710)

100GbE Interfaces (requires motherboard support for PCIe 4.0 x16):
- Mellanox ConnectX-4, ConnectX-5, ConnectX-6
- Intel ice (Intel E810) - requires firmware version 4.4 or higher

The maximum throughput of dual-port 100 Gbit/s cards is 128 Gbit/s across both ports. It is recommended to use single-port cards or to limit each port to no more than 50 Gbit/s.
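The tested-chipset lists above can be collected into a small lookup table, e.g. for a pre-install inventory check. This is a sketch: the driver names and model groupings are copied from the lists in this section, and `is_tested` is an illustrative helper name:

```python
# DPDK drivers and the NIC models listed above as compatibility-tested,
# grouped by link speed. An empty set means the driver family as a whole
# is listed without specific models (Mellanox mlx5).
TESTED_NICS = {
    "1G": {
        "e1000": {"82540", "82545", "82546"},
        "e1000e": {"82571", "82572", "82573", "82574", "82583",
                   "ICH8", "ICH9", "ICH10", "PCH", "PCH2",
                   "I217", "I218", "I219"},
        "igb": {"82573", "82576", "82580", "I210", "I211", "I350",
                "I354", "DH89xx"},
        "igc": {"I225"},
    },
    "10G": {
        "ixgbe": {"82598", "82599", "X520", "X540", "X550"},
        "i40e": {"X710", "XL710", "X722", "XXV710"},
        "mlx5": set(),
    },
    "25G": {
        "i40e": {"XXV710"},
        "mlx5": set(),
    },
}

def is_tested(driver: str, model: str) -> bool:
    """True if the driver/model pair appears in the tested lists (any speed)."""
    for families in TESTED_NICS.values():
        models = families.get(driver)
        if models is not None and (not models or model in models):
            return True
    return False
```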
| Ingress, max Gbit/s | Egress, max Gbit/s | CPU cores | RAM, GB | Interface set | NAT pool, max IPs (if NAT enabled) | Packets per second (at 2.5 GHz clock) |
|---|---|---|---|---|---|---|
| up to 3 | up to 3 | 4 | 16 | 6x1G, 2x10G | 100 | 1.5M pps |
| up to 5 | up to 5 | 6 | 32 | 2x10G | 500 | 1.5-2M pps |
| up to 10 | up to 10 | 6 | 48 | 2x10G, 4x10G | 1000 | 3-4M pps |
| up to 20 | up to 20 | 12 | 64 | 4x10G, 2x25G, 2x40G | 2000 | 6M pps |
| up to 30 | up to 30 | 18 (Intel 6242R) | 96 | 8x10G, 4x25G, 2x40G, 2x100G | 3000 | 9M pps |
| up to 40 | up to 40 | 22 (Intel 6248R) | 128 | 8x10G, 4x25G, 4x40G, 2x100G | 4000 | 12M pps |
| up to 50 | up to 50 | 28 (Intel 6258R), 26 (Intel 5320), or 32 (AMD 7502P) | 160 | 10x10G, 4x25G, 6x40G, 2x100G | 5000 | 15M pps |
| up to 60 | up to 60 | 64 (AMD 7713 or AMD 9534) | 192 | 12x10G, 6x25G, 6x40G, 2x100G | 6000 | 18M pps |
| up to 70 | up to 70 | 64 (AMD 7713 or AMD 9534) | 192 | 14x10G, 8x25G, 8x40G, 4x100G | 7000 | 20M pps |
| up to 100 | up to 100 | 64 (AMD 7713 or AMD 9534) | 256 | 20x10G, 8x25G, 8x40G, 4x100G | 10000 | 22M pps |
| up to 150 | up to 150 | 96 (AMD 9654) | 384 | 24x10G, 16x25G, 10x40G, 6x100G | 12000 | 30M pps |
| up to 200 | up to 200 | 128 (AMD 9754) | 512 | 16x25G, 10x40G, 8x100G | 15000 | 45M pps |
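The sizing table can also be expressed as a lookup helper, e.g. for capacity-planning scripts. This is a sketch: the rows are copied from the table above (core counts for the larger tiers assume the specific CPU models named there), and `pick_tier` is an illustrative name:

```python
# (max Gbit/s per direction, CPU cores, RAM GB, max NAT pool IPs),
# copied row by row from the sizing table above.
SIZING = [
    (3, 4, 16, 100),
    (5, 6, 32, 500),
    (10, 6, 48, 1000),
    (20, 12, 64, 2000),
    (30, 18, 96, 3000),
    (40, 22, 128, 4000),
    (50, 28, 160, 5000),
    (60, 64, 192, 6000),
    (70, 64, 192, 7000),
    (100, 64, 256, 10000),
    (150, 96, 384, 12000),
    (200, 128, 512, 15000),
]

def pick_tier(gbps: float):
    """Return (cores, ram_gb, nat_ips) for the smallest tier covering `gbps`."""
    for limit, cores, ram, nat in SIZING:
        if gbps <= limit:
            return cores, ram, nat
    raise ValueError("above 200 Gbit/s: consult Cubro technical support")

print(pick_tier(25))  # -> (18, 96, 3000), the "up to 30" tier
```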
Cubro Service Gateway can also be deployed in a virtual environment with the following minimum resources:

  • Hypervisor: VMware, QEMU/KVM
  • vCPU: Minimum of 4 cores with a frequency of 2.5 GHz or higher
  • vRAM: 8 GB or more
  • Disk Space: 20 GB minimum
  • OS: RHEL/CentOS 8.x or CentOS 8 Stream

For DPI functionality in a virtual environment, the following settings must be enabled in the security configuration of the virtual networks whose in and out interfaces reside on the hypervisor side:

  • Promiscuous Mode: Accept
  • MAC Address Changes: Accept
  • Forged Transmits: Accept
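A minimal sketch for validating these three settings against a port-group security policy as exported to a dict by an automation tool. The `policy` structure and key names are illustrative assumptions, not a real hypervisor API:

```python
# The three virtual-network security settings required for DPI,
# as listed above. Key names are illustrative.
REQUIRED = {
    "promiscuous_mode": "Accept",
    "mac_address_changes": "Accept",
    "forged_transmits": "Accept",
}

def check_security_policy(policy: dict) -> list[str]:
    """Return required settings that are missing or not set to 'Accept'."""
    return [k for k, v in REQUIRED.items() if policy.get(k) != v]

# Example: a policy with Forged Transmits still rejected.
bad = {"promiscuous_mode": "Accept", "mac_address_changes": "Accept",
       "forged_transmits": "Reject"}
print(check_security_policy(bad))  # -> ['forged_transmits']
```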