Spreading Connectware Workloads Across Kubernetes Nodes
Configure podAntiAffinity to distribute Connectware workloads across Kubernetes nodes for improved availability and resilience
To improve availability and resilience, Connectware can distribute replicas of the same workload across different Kubernetes nodes. This helps ensure that a failure of a single node does not impact all replicas of a component.
Connectware uses Kubernetes podAntiAffinity for this purpose. All scalable Connectware workloads use soft podAntiAffinity by default.
podAntiAffinity Modes
You can control how strictly Kubernetes spreads Pods of the same workload by selecting one of the following modes:
soft
Default. Kubernetes attempts to place Pods on different nodes, but may schedule multiple Pods on the same node if required.
hard
Kubernetes schedules Pods only on different nodes. If this is not possible, the Pod remains unscheduled.
none
No podAntiAffinity rules are applied. Pods may be scheduled freely.
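For orientation, the soft and hard modes correspond to the two standard Kubernetes podAntiAffinity variants, preferredDuringSchedulingIgnoredDuringExecution and requiredDuringSchedulingIgnoredDuringExecution. The following sketch shows roughly what the rendered Pod spec contains for each mode; the labelSelector shown is an assumed placeholder, not necessarily what the Connectware Helm chart generates.

```yaml
# Sketch only. The labelSelector below is an assumed placeholder;
# the actual selector is generated by the Connectware Helm chart.
affinity:
  podAntiAffinity:
    # soft: a scheduling preference, Pods may still share a node
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: broker                  # assumed workload label
          topologyKey: kubernetes.io/hostname
    # hard: a scheduling requirement, unsatisfiable Pods stay Pending
    # requiredDuringSchedulingIgnoredDuringExecution:
    #   - labelSelector:
    #       matchLabels:
    #         app: broker
    #     topologyKey: kubernetes.io/hostname
```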
Topology Key
podAntiAffinity relies on a topology key, which refers to a node label that defines the failure domain used for spreading Pods.
By default, Connectware uses the kubernetes.io/hostname label, which spreads Pods across individual nodes. You can override this behavior by specifying a different topology key, provided that all nodes share the corresponding label.
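For example, to spread Pods across availability zones instead of individual nodes, you could point the topology key at the well-known topology.kubernetes.io/zone label, provided that all nodes carry it. The broker workload is used here purely for illustration:

```yaml
# Assumes every node is labeled with topology.kubernetes.io/zone,
# for example in a managed multi-zone cluster.
broker:
  podAntiAffinityTopologyKey: topology.kubernetes.io/zone
```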
Configuring Workload Spreading
You configure workload spreading through Helm values for the relevant Connectware service.
In your values.yaml file, set the podAntiAffinity Helm value and optionally the podAntiAffinityTopologyKey Helm value for the workload that you want to configure.
The following example configures the broker workload:
```yaml
broker:
  podAntiAffinity: hard
  podAntiAffinityTopologyKey: kubernetes.io/os
```

Apply the configuration changes via the helm upgrade command:
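A typical invocation could look like the following; the release name, namespace, and chart reference are placeholders and depend on how Connectware was installed:

```sh
# Placeholder release name, namespace, and chart reference:
# substitute the values from your installation.
helm upgrade connectware <chart-reference> \
  --namespace <namespace> \
  --values values.yaml
```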
For details on applying Helm configuration changes, see Applying Helm Configuration Changes.