# Spreading Connectware Workloads Across Kubernetes Nodes

To improve availability and resilience, Connectware can distribute replicas of the same workload across different Kubernetes nodes. This helps ensure that a failure of a single node does not impact all replicas of a component.

Connectware uses Kubernetes [podAntiAffinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) for this purpose. All scalable Connectware workloads use soft podAntiAffinity by default.

## podAntiAffinity Modes

You can control how strictly Kubernetes spreads Pods of the same workload by selecting one of the following modes:

| Mode | Behavior for Pods of the same workload (e.g., broker)                                                                       |
| ---- | --------------------------------------------------------------------------------------------------------------------------- |
| soft | Default. Kubernetes attempts to place Pods on different nodes, but may schedule multiple Pods on the same node if required. |
| hard | Kubernetes schedules Pods only on different nodes. If this is not possible, the Pod remains unscheduled.                    |
| none | No podAntiAffinity rules are applied. Pods may be scheduled freely.                                                         |
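
In the rendered Pod spec, these modes map to the two Kubernetes anti-affinity variants: `soft` uses `preferredDuringSchedulingIgnoredDuringExecution`, while `hard` uses `requiredDuringSchedulingIgnoredDuringExecution`. As an illustrative sketch (the `app: broker` label selector is an assumption; the actual Pod labels are set by the Connectware Helm chart), soft mode renders roughly as:

{% code lineNumbers="true" %}

```yaml
affinity:
  podAntiAffinity:
    # soft: the scheduler prefers different nodes, but may still co-locate Pods
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: broker # assumed label; the chart sets the real Pod labels
          topologyKey: kubernetes.io/hostname
```

{% endcode %}

Hard mode uses the same `podAffinityTerm` structure under `requiredDuringSchedulingIgnoredDuringExecution` (without the `weight` field), which turns the spread into a hard scheduling constraint.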

## Topology Key

podAntiAffinity relies on a topology key, which refers to a node label that defines the failure domain used for spreading Pods.

By default, Connectware uses the `kubernetes.io/hostname` label, which spreads Pods across individual nodes. You can override this behavior by specifying a different topology key, provided that all nodes share the corresponding label.

## Configuring Workload Spreading

You configure workload spreading through Helm values for the relevant Connectware service.

1. In your `values.yaml` file, set the `podAntiAffinity` Helm value and optionally the `podAntiAffinityTopologyKey` Helm value for the workload that you want to configure.

The following example configures the broker workload to require hard anti-affinity and to spread Pods across availability zones instead of individual nodes. This assumes that all nodes carry the standard `topology.kubernetes.io/zone` label:

{% code lineNumbers="true" %}

```yaml
broker:
  podAntiAffinity: hard
  podAntiAffinityTopologyKey: topology.kubernetes.io/zone
```

{% endcode %}

2. Apply the configuration changes via the `helm upgrade` command:

{% code lineNumbers="true" %}

```bash
# Assumes the Connectware chart repository was added under the alias "cybus"
helm upgrade -n ${NAMESPACE} ${INSTALLATION_NAME} cybus/connectware -f values.yaml
```

{% endcode %}
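
After the upgrade, you can confirm that the replicas actually landed on different nodes by inspecting the `NODE` column of `kubectl get pods -o wide`. The following sketch counts Pods per node from sample output using standard tools (the Pod and node names are illustrative; in a live cluster, pipe the real `kubectl` output instead):

{% code lineNumbers="true" %}

```shell
# Sample `kubectl get pods -o wide` output for three broker replicas
# (field 7 is the NODE column; names here are illustrative).
printf 'broker-0 1/1 Running 0 5m 10.0.0.1 node-a\nbroker-1 1/1 Running 0 5m 10.0.0.2 node-b\nbroker-2 1/1 Running 0 5m 10.0.0.3 node-c\n' \
  | awk '{ print $7 }' | sort | uniq -c
```

{% endcode %}

With hard anti-affinity, each node should appear exactly once; with soft anti-affinity, a count greater than one indicates co-located replicas.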

For details on applying Helm configuration changes, see [Applying Helm Configuration Changes](/2-1-2/documentation/connectware-on-kubernetes/connectware-helm-chart.md#applying-helm-configuration-changes).


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.cybus.io/2-1-2/documentation/connectware-on-kubernetes/spreading-connectware-workloads-across-kubernetes-modes.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
