# MQTT Broker Storage Volumes in Kubernetes

The disk space required for MQTT brokers depends on your use case, particularly on the use of QoS levels greater than 0, retained messages, and message sizes. Since storage requirements cannot be perfectly predicted, you may need to configure appropriate initial sizes or resize existing volumes later.

This document covers both configuring initial storage volume sizes and resizing existing volumes to increase available disk space with minimal service interruption.

{% hint style="warning" %}
The resizing procedure involves removing the StatefulSet, which leaves the broker cluster vulnerable to failures caused by cluster events or human error. Execute this procedure with great care and only in a stable cluster environment.
{% endhint %}

## Configuring Initial Storage Volume Sizes

When deploying Connectware, you can configure the storage volume sizes for the MQTT broker through Helm values. The broker uses two volumes:

| Volume | Purpose                                                        | Helm Value                        | Default Size |
| ------ | -------------------------------------------------------------- | --------------------------------- | ------------ |
| data   | Stores retained messages, offline queues, and cluster metadata | `global.broker.storage.data.size` | 1Gi          |
| log    | Stores log files                                               | `global.broker.storage.log.size`  | 100Mi        |

**Prerequisites**

* Helm version 3 is installed on your system
* The Kubernetes command-line tool kubectl is configured and has access to the target installation
* You know the name and namespace of your Connectware installation. See [Obtaining the name, namespace, and version of your Connectware installation](https://docs.cybus.io/2-0-6/connectware-on-kubernetes/connectware-helm-chart#obtaining-the-name-namespace-and-version-of-your-connectware-installation)
* The `values.yaml` file is available

**Procedure**

1. Configure the storage volume sizes by adding the appropriate Helm values to your `values.yaml` file:

**Example**

{% code lineNumbers="true" %}

```yaml
global:
  broker:
    storage:
      data:
        size: 5Gi
      log:
        size: 500Mi
```

{% endcode %}

2. Apply the configuration changes using the `helm upgrade` command:

{% code lineNumbers="true" %}

```bash
helm upgrade -n <namespace> <installation-name> <chart-reference> -f values.yaml
```

{% endcode %}

{% hint style="info" %}
You cannot change the size of existing volumes through Helm configuration alone. To resize existing volumes, follow the procedure in the next section.
{% endhint %}

## Resizing Existing Broker Storage Volumes

Use this procedure to increase the available disk space for existing broker volumes (PersistentVolumeClaims). This process requires pod restarts, which means clients will need to reconnect.

### Prerequisites

* `kubectl` access to the installation with the current context namespace set to the target namespace (`kubectl config set-context --current --namespace <target-namespace>`).
* A `StorageClass` that supports volume expansion. Verify this by running `kubectl get sc` and checking that `ALLOWVOLUMEEXPANSION` shows `true` for the StorageClass used by the volumes.
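If the StorageClass in use does not allow expansion, the PersistentVolumeClaims provisioned from it cannot be resized. An expandable StorageClass sets `allowVolumeExpansion: true`, as in the following illustrative sketch. The provisioner shown (AWS EBS CSI driver) is an assumption; use the provisioner available in your cluster:

{% code lineNumbers="true" %}

```yaml
# Illustrative StorageClass with volume expansion enabled.
# The provisioner below is an assumption (AWS EBS CSI driver);
# replace it with the provisioner used in your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-storage
provisioner: ebs.csi.aws.com
allowVolumeExpansion: true
```

{% endcode %}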

### Preparing the Broker Cluster

1. Ensure you have a healthy broker cluster of at least two pods. Run `kubectl get sts broker` and verify that the `READY` column shows matching numbers on both sides of the slash, at `2/2` or higher.
2. If you only have a single broker, scale the StatefulSet to two replicas:

{% code lineNumbers="true" %}

```bash
kubectl scale sts broker --replicas 2
```

{% endcode %}

3. Export the StatefulSet definition to a local file:

{% code lineNumbers="true" %}

```bash
kubectl get sts broker -o yaml > broker.yaml
```

{% endcode %}

### Resizing Volumes

Repeat this procedure for each broker pod in your cluster.

1. Delete the broker StatefulSet while leaving the pods as orphans:

{% code lineNumbers="true" %}

```bash
kubectl delete sts broker --cascade=orphan
```

{% endcode %}

2. Set the `$broker` variable to the pod name of the broker you want to resize (e.g., `broker-0`):

{% code lineNumbers="true" %}

```bash
broker=broker-0
```

{% endcode %}

3. Delete the broker pod:

{% code lineNumbers="true" %}

```bash
kubectl delete pod $broker
```

{% endcode %}

4. Increase the size of the PersistentVolumeClaim. Replace `<size>` with the desired Kubernetes quantity for the volume (e.g., `5Gi`):

This example resizes the data volume:

{% code lineNumbers="true" %}

```bash
kubectl patch pvc brokerdata-$broker --patch '{"spec": { "resources": {"requests": {"storage": "<size>"}}}}'
```

{% endcode %}

If you also need to resize the log volume, repeat this step for the log PersistentVolumeClaim:

{% code lineNumbers="true" %}

```bash
kubectl patch pvc brokerlog-$broker --patch '{"spec": { "resources": {"requests": {"storage": "<size>"}}}}'
```

{% endcode %}
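The two patch commands above can also be sketched as a single loop that prints each command before you run it (a dry-run style sketch; `brokerdata` and `brokerlog` are the PVC name prefixes used by the broker StatefulSet):

{% code lineNumbers="true" %}

```bash
# Print the patch commands for both volumes of one broker pod
# before executing them. $broker and $size follow the steps above.
broker=broker-0
size=5Gi
for prefix in brokerdata brokerlog; do
  echo kubectl patch pvc "$prefix-$broker" --patch \
    "{\"spec\": {\"resources\": {\"requests\": {\"storage\": \"$size\"}}}}"
done
```

{% endcode %}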

5. Wait until the PersistentVolumeClaim reports the new capacity:

{% code lineNumbers="true" %}

```bash
kubectl get pvc brokerdata-$broker
```

{% endcode %}

6. Recreate the StatefulSet:

{% code lineNumbers="true" %}

```bash
kubectl apply -f broker.yaml
```

{% endcode %}

7. Wait for the StatefulSet to recreate the missing pod. Monitor the status by running `kubectl get sts broker` until the `READY` column shows matching numbers on both sides of the slash, at `2/2` or higher.
8. Verify that all cluster members show consistent cluster information:

{% code lineNumbers="true" %}

```bash
kubectl get pod -lapp=broker -o name | xargs -I % kubectl exec % -- vmq-admin cluster show
```

{% endcode %}

The output should look similar to this:

{% code lineNumbers="true" %}

```
Defaulted container "broker" out of: broker, wait-for-k8s (init)
+-------------------------------------------------+---------+
| Node                                            | Running |
+-------------------------------------------------+---------+
| VerneMQ@broker-0.broker.cybus.svc.cluster.local | true    |
+-------------------------------------------------+---------+
| VerneMQ@broker-1.broker.cybus.svc.cluster.local | true    |
+-------------------------------------------------+---------+
Defaulted container "broker" out of: broker, wait-for-k8s (init)
+-------------------------------------------------+---------+
| Node                                            | Running |
+-------------------------------------------------+---------+
| VerneMQ@broker-0.broker.cybus.svc.cluster.local | true    |
+-------------------------------------------------+---------+
| VerneMQ@broker-1.broker.cybus.svc.cluster.local | true    |
+-------------------------------------------------+---------+
```

{% endcode %}

9. Repeat this procedure for each additional broker pod until all volumes are resized.
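To keep track of the repetition, the pod names to visit can be derived from the replica count, assuming the default StatefulSet naming scheme (`broker-0`, `broker-1`, and so on):

{% code lineNumbers="true" %}

```bash
# List the broker pod names to resize, one per pass through the
# procedure above. Adjust 'replicas' to match your cluster size.
replicas=2
for i in $(seq 0 $((replicas - 1))); do
  echo "broker-$i"
done
```

{% endcode %}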

### Persisting Volume Changes for Future Deployments

After resizing volumes, update your Helm values to reflect the new sizes so that subsequent deployments and Helm upgrades do not revert them.

1. Update the following fields in your `values.yaml` file based on the volumes you resized:

| PersistentVolumeClaim Name | Helm Value                        |
| -------------------------- | --------------------------------- |
| brokerdata-broker-\*        | `global.broker.storage.data.size` |
| brokerlog-broker-\*         | `global.broker.storage.log.size`  |
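For example, if you resized the data volumes to `5Gi` and the log volumes to `500Mi`, the updated entries in `values.yaml` would be:

{% code lineNumbers="true" %}

```yaml
global:
  broker:
    storage:
      data:
        size: 5Gi
      log:
        size: 500Mi
```

{% endcode %}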

2. Apply the configuration changes via the `helm upgrade` command:

{% code lineNumbers="true" %}

```bash
helm upgrade -n <namespace> <installation-name> <chart-reference> -f values.yaml
```

{% endcode %}

For more information, see [Applying Helm Configuration Changes](https://docs.cybus.io/2-0-6/connectware-on-kubernetes/connectware-helm-chart#applying-helm-configuration-changes).

## Deleting Broker Data Volumes

Deleting broker data volumes will remove all retained messages, offline client queues, and cluster metadata. This is typically done when decommissioning an installation or performing a clean reinstall.

{% hint style="danger" %}
This procedure permanently removes all broker data including retained messages, offline queues, and cluster state. This action cannot be undone. Ensure you have proper backups if you need to preserve any data.
{% endhint %}

### Prerequisites

* `kubectl` access to the installation with the current context namespace set to the target namespace.
* Confirmation that all data can be permanently deleted.
* The broker StatefulSet is scaled down or deleted before the volumes are removed. Step 1 of the procedure below covers this.

### Procedure

1. Stop the broker cluster by scaling the StatefulSet to zero replicas:

{% code lineNumbers="true" %}

```bash
kubectl scale sts broker --replicas 0
```

{% endcode %}

2. Wait for all broker pods to terminate:

{% code lineNumbers="true" %}

```bash
kubectl get pods -lapp=broker -w
```

{% endcode %}

3. List all broker-related PersistentVolumeClaims:

{% code lineNumbers="true" %}

```bash
kubectl get pvc | grep broker
```

{% endcode %}

4. Delete the broker data volumes. This will remove all data volumes for all broker replicas:

{% code lineNumbers="true" %}

```bash
kubectl delete pvc -l app=broker
```

{% endcode %}

Alternatively, you can delete specific volumes by name:

{% code lineNumbers="true" %}

```bash
kubectl delete pvc brokerdata-broker-0 brokerdata-broker-1
kubectl delete pvc brokerlog-broker-0 brokerlog-broker-1
```

{% endcode %}

5. Verify that the PersistentVolumeClaims have been deleted:

{% code lineNumbers="true" %}

```bash
kubectl get pvc | grep broker
```

{% endcode %}

### Redeploying the Broker Cluster

If you plan to redeploy the broker cluster after deleting the data volumes, you can scale it back up with fresh storage:

{% code lineNumbers="true" %}

```bash
kubectl scale sts broker --replicas 2
```

{% endcode %}

The broker cluster will start with fresh, empty volumes and will need to be reconfigured with any required settings.

### Cleaning Up Persistent Volumes

Depending on your storage configuration, the underlying PersistentVolumes might still exist after deleting the claims. To completely remove all storage:

1. List any remaining PersistentVolumes that were bound to the deleted claims:

{% code lineNumbers="true" %}

```bash
kubectl get pv | grep broker
```

{% endcode %}

2. If PersistentVolumes still exist with a `Retain` reclaim policy, delete them manually:

{% code lineNumbers="true" %}

```bash
kubectl delete pv <persistent-volume-name>
```

{% endcode %}

{% hint style="info" %}
PersistentVolumes with a `Delete` reclaim policy will be automatically removed when their corresponding PersistentVolumeClaim is deleted.
{% endhint %}
