
System Requirements​

| Component | # of Instances | Recommended vCPU | Minimum Memory | Notes |
|-----------|----------------|------------------|----------------|-------|
| Controller | min. 1; 3 for HA (odd # only) | 1 | 1GB | vCPU core may be shared |
| Enforcer | 1 per node/VM | 1+ | 1GB | One or more dedicated vCPU for higher network throughput in Protect mode |
| Scanner | min. 1; 2+ for HA/performance | 1 | 1GB | CPU core may be shared for standard workloads. Dedicate 1 or more CPU for high-volume (10k+) image scanning. Registry image scanning is performed by the scanner and managed by the controller; the image is pulled by the scanner and expanded in memory. The minimum memory recommendation assumes images to be scanned are no larger than 0.5GB. When scanning images larger than 1GB, calculate scanner memory by taking the largest image size and adding 0.5GB. Example: largest image size = 1.3GB, so the scanner container memory should be 1.8GB (see the sizing sketch below the table). |
| Manager | min. 1; 2+ for HA | 1 | 1GB | vCPU may be shared |
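
For example, a minimal sketch of how the scanner sizing rule above might be expressed as Kubernetes resource requests, assuming the largest registry image is 1.3GB. The deployment name, image tag, and resource values are illustrative assumptions, not the stock NeuVector manifest:

    # Illustrative only: scanner memory sized as largest image (1.3GB) + 0.5GB = 1.8GB
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: neuvector-scanner-pod          # name assumed for illustration
      namespace: neuvector
    spec:
      replicas: 2                          # 2+ for HA/performance per the table above
      selector:
        matchLabels:
          app: neuvector-scanner-pod
      template:
        metadata:
          labels:
            app: neuvector-scanner-pod
        spec:
          containers:
            - name: neuvector-scanner-pod
              image: neuvector/scanner:latest
              resources:
                requests:
                  cpu: "1"
                  memory: 1800Mi           # 1.3GB largest image + 0.5GB headroom
                limits:
                  memory: 1800Mi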

Supported Platforms​

  • Officially supported Linux distributions: SUSE Linux, Ubuntu, CentOS/Red Hat (RHEL), Debian, CoreOS, AWS Bottlerocket, and Photon.
  • AMD64 and Arm architectures
  • CoreOS is supported (as of November 2023) for CVE scanning through the RHEL mapping table provided by Red Hat. Once Red Hat publishes an official CVE feed for CoreOS, that feed will be supported.
  • Officially supported Kubernetes and Docker compliant container management systems. The following platforms are tested with every release of NeuVector: Kubernetes 1.19-1.31, SUSE Rancher (RKE, RKE2, K3s, etc.), Red Hat OpenShift 4.6-4.16 (3.x to 4.12 supported prior to NeuVector 5.2.x), Google GKE, Amazon EKS, Microsoft Azure AKS, IBM IKS, native Docker, and Docker Swarm. The following Kubernetes and Docker compliant platforms are supported and have been verified to work with NeuVector: VMware Photon and Tanzu, SUSE CaaS, Oracle OKE, Mirantis Kubernetes Engine, Nutanix Kubernetes Engine, Docker UCP/Datacenter, and Docker Cloud.
  • Docker run-time version: 1.9.0 and up; Docker API version: 1.21, CE and EE.
  • Containerd and CRI-O runtimes (requires changes to volume paths in the sample yamls; an illustrative snippet follows this list). See the changes required for containerd in the Kubernetes deployment section and for CRI-O in the OpenShift deployment section.
  • NeuVector is compatible with most commercially supported CNIs. Officially tested and supported are OpenShift OVS (subnet/multitenant), Calico, Flannel, Cilium, Antrea, and the public clouds (GKE, AKS, IKS, EKS). Support for Multus was added in v5.4.0.
  • Console: Chrome or Firefox browser recommended. IE 11 not supported due to performance issues.
  • Minikube is supported for simple initial evaluation but not for full proof of concept. See below for changes required for the Allinone yaml to run on Minikube.
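
As referenced in the runtime bullet above, the following is a minimal sketch of the kind of volume path change needed when the cluster uses containerd instead of Docker, assuming the default socket locations (/run/containerd/containerd.sock for containerd, /var/run/crio/crio.sock for CRI-O). The volume name here is illustrative; the authoritative changes are in the Kubernetes and OpenShift deployment sections:

    # Illustrative only: point the runtime socket hostPath at containerd instead of Docker.
    volumes:
      - name: runtime-sock
        hostPath:
          path: /run/containerd/containerd.sock   # CRI-O: /var/run/crio/crio.sock
          # Docker default, for comparison: /var/run/docker.sock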

AWS Bottlerocket Note: The path of the containerd socket must be changed to the Bottlerocket-specific location. Please see the Kubernetes deployment section for details.

Not Supported​

  • GKE Autopilot.
  • The Multus CNI is not supported prior to v5.4.0 (see Supported Platforms above).
  • AWS ECS is no longer supported. (NOTE: No functionality has been actively removed for operating NeuVector on ECS deployments. However, testing on ECS is no longer being performed by SUSE. While protecting ECS workloads with NeuVector will likely operate as expected, issues will not be investigated.)
  • Docker on Mac
  • Docker on Windows
  • Rkt (container linux) from CoreOS
  • AppArmor on K3s / SLES environments. Certain configurations may conflict with NeuVector and cause scanner errors; AppArmor should be disabled when deploying NeuVector.
  • IPv6 is not supported
  • VMWare Integrated Containers (VIC) except in nested mode
  • CloudFoundry
  • Console: IE 11 not supported due to performance issues.
  • Nested 'container host in a container' tools used for simple testing, for example deployment of a Kubernetes cluster using 'kind' (https://kind.sigs.k8s.io/docs/user/configuration/).

Note 1: PKS is field tested and requires enabling privileged containers in the plan/tile, and changing the yaml hostPath as follows for the Allinone, Controller, and Enforcer:

            hostPath:
              path: /var/vcap/sys/run/docker/docker.sock

Note: NeuVector supports running on Linux-based VMs on Mac/Windows using Vagrant, VirtualBox, VMware, or other virtualized environments.

Minikube​

Please make the following changes to the Allinone deployment yaml.

apiVersion: apps/v1                    # <<-- required for k8s 1.19+
kind: DaemonSet
metadata:
  name: neuvector-allinone-pod
  namespace: neuvector
spec:
  selector:                            # <-- Added
    matchLabels:                       # <-- Added
      app: neuvector-allinone-pod      # <-- Added
  minReadySeconds: 60
  ...
      nodeSelector:                    # <-- DELETE THIS LINE
        nvallinone: "true"             # <-- DELETE THIS LINE

apiVersion: apps/v1                    # <<-- required for k8s 1.19+
kind: DaemonSet
metadata:
  name: neuvector-enforcer-pod
  namespace: neuvector
spec:
  selector:                            # <-- Added
    matchLabels:                       # <-- Added
      app: neuvector-enforcer-pod      # <-- Added

Performance and Scaling​

As always, performance planning for NeuVector containers will depend on several factors, including:

  • (Controller & Scanner) Number and size of images in registry to be scanned (by Scanner) initially
  • (Enforcer) Services mode (Discover, Monitor, Protect), where Protect mode runs as an inline firewall
  • (Enforcer) Type of network connections for workloads in Protect mode

In Monitor mode (network filtering similar to a mirror/tap), there is no performance impact and the Enforcer handles traffic at line speed, generating alerts as needed. In Protect mode (inline firewall), the Enforcer requires CPU and memory to filter connections with deep packet inspection and hold them to determine whether they should be blocked/dropped. Generally, with 1GB of memory and a shared CPU, the Enforcer should be able to handle most environments while in Protect mode.

For throughput- or latency-sensitive environments, additional memory and/or a dedicated CPU core can be allocated to the NeuVector Enforcer container.
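
For example, a minimal sketch of how a dedicated CPU core and extra memory might be reserved for the Enforcer container in its DaemonSet; the image tag and resource values are illustrative assumptions, not official recommendations:

    # Illustrative only: reserve a dedicated core and extra memory for Protect mode.
    containers:
      - name: neuvector-enforcer-pod
        image: neuvector/enforcer:latest
        resources:
          requests:
            cpu: "1"              # dedicated core for inline (Protect mode) filtering
            memory: 1Gi
          limits:
            cpu: "2"
            memory: 2Gi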

For performance tuning of the Controller and Scanner for registry scanning, see System Requirements above.

For additional advice on performance and sizing, see the Onboarding/Best Practices section.

Throughput​

As the chart below shows, basic throughput benchmark tests showed a maximum throughput of 1.3 Gbps PER NODE on a small public cloud instance with 4 CPU cores. For example, a 10 node cluster would then be able to handle a maximum of 13 Gbps of throughput for the entire cluster for services in Protect mode.

[Figure: Throughput benchmark]

This throughput would be projected to scale up as a dedicated CPU is assigned to the Enforcer, as CPU speed increases, and/or as additional memory is allocated. Again, the scaling will depend on the type of network/application traffic of the workloads.

Latency​

Latency is another performance metric which depends on the type of network connections. Similar to throughput, latency is not affected in Monitor mode, only for services in Protect (inline firewall) mode. Small packets or simple/fast services will generate a higher latency by NeuVector as a percentage, while larger packets or services requiring complex processing will show a lower percentage of added latency by the NeuVector enforcer.

The table below shows the average added latency of 2-10% benchmarked using the Redis benchmark tool. The Redis benchmark uses fairly small packets, so the latency with larger packets would be expected to be lower.

| Test | Monitor (TPS) | Protect (TPS) | Added Latency |
|------|---------------|---------------|---------------|
| PING_INLINE | 34,904 | 31,603 | 9.46% |
| SET | 38,618 | 36,157 | 6.37% |
| GET | 36,055 | 35,184 | 2.42% |
| LPUSH | 39,853 | 35,994 | 9.68% |
| RPUSH | 37,685 | 36,010 | 4.45% |
| LPUSH (LRANGE benchmark) | 37,399 | 35,220 | 5.83% |
| LRANGE_100 | 25,539 | 23,906 | 6.39% |
| LRANGE_300 | 13,082 | 12,277 | 6.15% |

The benchmark above shows average TPS of Protect mode versus Monitor mode, and the latency added for Protect mode for several tests in the benchmark. The main way to lower the actual latency (microseconds) in Protect mode is to run on a system with a faster CPU. You can find more details on this open source Redis benchmark tool at https://redis.io/topics/benchmarks.