# Spreading Connectware Workloads Across Kubernetes Nodes

To improve availability and resilience, Connectware can distribute replicas of the same workload across different Kubernetes nodes. This helps ensure that a failure of a single node does not impact all replicas of a component.

Connectware uses Kubernetes [podAntiAffinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) for this purpose. All scalable Connectware workloads use soft podAntiAffinity by default.

## podAntiAffinity Modes

You can control how strictly Kubernetes spreads Pods of the same workload by selecting one of the following modes:

| Mode | Behavior for Pods of the same workload (e.g., broker)                                                                       |
| ---- | --------------------------------------------------------------------------------------------------------------------------- |
| soft | Default. Kubernetes attempts to place Pods on different nodes, but may schedule multiple Pods on the same node if required. |
| hard | Kubernetes schedules Pods only on different nodes. If this is not possible, the Pod remains unscheduled (Pending).          |
| none | No podAntiAffinity rules are applied. Pods may be scheduled freely.                                                         |
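
As a sketch of what these modes translate to on the Kubernetes side: soft maps to a `preferredDuringSchedulingIgnoredDuringExecution` rule and hard to a `requiredDuringSchedulingIgnoredDuringExecution` rule. The label selector below is illustrative only, not the labels Connectware actually applies:

```yaml
# Sketch of the podAntiAffinity stanza the scheduler evaluates.
# The matchLabels value is an illustrative placeholder.

# soft: the scheduler prefers different nodes, but may co-locate Pods
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: broker                      # illustrative workload label
          topologyKey: kubernetes.io/hostname

# hard: the rule is mandatory; a Pod that cannot be placed stays Pending
# affinity:
#   podAntiAffinity:
#     requiredDuringSchedulingIgnoredDuringExecution:
#       - labelSelector:
#           matchLabels:
#             app: broker
#         topologyKey: kubernetes.io/hostname
```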

## Topology Key

podAntiAffinity relies on a topology key, which refers to a node label that defines the failure domain used for spreading Pods.

By default, Connectware uses the `kubernetes.io/hostname` label, which spreads Pods across individual nodes. You can override this behavior by specifying a different topology key, provided that all nodes share the corresponding label.
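
For example, to spread Pods across availability zones instead of individual nodes, every node must carry a zone label. The well-known `topology.kubernetes.io/zone` label is usually set automatically by cloud providers; the node name and zone value below are placeholders:

```yaml
# Node metadata as the scheduler sees it (name and zone are placeholders).
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  labels:
    kubernetes.io/hostname: worker-1        # default topology key
    topology.kubernetes.io/zone: zone-a     # alternative failure domain
```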

## Configuring Workload Spreading

You configure workload spreading through Helm values for the relevant Connectware service.

1. In your `values.yaml` file, set the `podAntiAffinity` value and, optionally, the `podAntiAffinityTopologyKey` value for the workload that you want to configure.

The following example configures the broker workload:

{% code lineNumbers="true" %}

```yaml
broker:
  podAntiAffinity: hard
  podAntiAffinityTopologyKey: topology.kubernetes.io/zone
```

{% endcode %}

2. Apply the configuration changes via the `helm upgrade` command:

{% code lineNumbers="true" %}

```shell
helm upgrade -n <namespace> <installation-name> <chart> -f values.yaml
```

{% endcode %}

For details on applying Helm configuration changes, see [Applying Helm Configuration Changes](https://docs.cybus.io/2-0-6/documentation/connectware-helm-chart#applying-helm-configuration-changes).
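
After the upgrade, you can check the resulting placement by listing the Pods together with the nodes they were scheduled on. The namespace is a placeholder; adjust it to your installation:

```shell
# List Pods and the node each one runs on (NODE column).
kubectl get pods -n <namespace> -o wide

# With hard podAntiAffinity, a Pod that cannot be placed stays Pending;
# its scheduling events explain why.
kubectl describe pod <pod-name> -n <namespace>
```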
