# Upgrading Connectware to 2.0.0 (Kubernetes)

## Disclaimer

{% hint style="info" %}
When upgrading your Connectware instance, follow the upgrade path based on your current version.

For all other version upgrades that are not listed below, you can simply follow the [regular Connectware upgrade guide](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-kubernetes).

* **If you are on version 1.4.1 or below**
  * Upgrade sequentially: **1.5.0 → 1.7.0 → 2.0.0 → 2.0.1 → 2.0.2 → 2.0.5 → 2.0.6**
* **If you are between version 1.5.0 and 1.6.2**
  * Upgrade sequentially: **1.7.0 → 2.0.0 → 2.0.1 → 2.0.2 → 2.0.5 → 2.0.6**
* **If you are on version 1.7.0 or newer (but below 2.0.0)**
  * Upgrade sequentially: **2.0.0 → 2.0.1 → 2.0.2 → 2.0.5 → 2.0.6**
* **If you are on version 2.0.0**
  * Upgrade sequentially: **2.0.1 → 2.0.2 → 2.0.5 → 2.0.6**
* **If you are on version 2.0.1**
  * Upgrade sequentially: **2.0.2 → 2.0.5 → 2.0.6**
* **If you are on version 2.0.2, 2.0.3, or 2.0.4**
  * Upgrade sequentially: **2.0.5 → 2.0.6**
* **If you are on version 2.0.5**
  * Upgrade directly to **2.0.6**
* **If you are performing a clean or new installation**
  * No upgrade path required. You can install the **latest available version** directly.

**Detailed instructions on each upgrade step**

* [Upgrading Connectware to 2.0.6 (Kubernetes)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-kubernetes/upgrading-connectware-to-2-0-6-kubernetes)
* [Upgrading Connectware to 2.0.5 (Kubernetes)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-kubernetes/upgrading-connectware-to-2-0-5-kubernetes)
* [Upgrading Connectware to 2.0.2 (Kubernetes)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-kubernetes/upgrading-connectware-to-2-0-2-kubernetes)
* [Upgrading Connectware to 2.0.1 (Kubernetes)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-kubernetes/upgrading-connectware-to-2-0-1-kubernetes)
* [Upgrading Connectware to 2.0.0 (Kubernetes)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-kubernetes/upgrading-connectware-to-2-0-0-kubernetes)
* [Upgrading Connectware to 1.7.0 (Kubernetes)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-kubernetes/upgrading-connectware-to-1-7-0-kubernetes)
* [Upgrading from Connectware 1.x to 1.5.0 (Kubernetes)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-kubernetes/upgrading-connectware-from-1-x-to-1-5-0-kubernetes)
{% endhint %}

## Before You Begin

Upgrading to Connectware 2.0.0 introduces significant improvements in performance, scalability, and reliability. However, these changes also come with updated requirements for versions, networking, hardware, and storage.

This guide outlines the prerequisites and known limitations you must consider to ensure a smooth and successful upgrade.

{% hint style="warning" %}
Before starting the upgrade, read the entire guide. Some steps require developer work or preparation before the upgrade process begins.
{% endhint %}

{% hint style="warning" %}
Upgrading to Connectware 2.0.0 requires reinstalling all services. The main benefit of upgrading instead of performing a fresh installation is that it preserves the user database, including Multi-Factor Authentication. If you do not rely heavily on these features, a fresh installation may be the better option.

Even with a fresh installation, you will still need to follow this upgrade guide to update configuration parameters and adapt to the behavioral changes introduced in Connectware 2.0. However, you can skip the multi-step upgrade process itself.

If you are considering a fresh installation, we strongly recommend consulting with the Cybus Customer Support beforehand to confirm whether this is the right approach for your setup.
{% endhint %}

### Connectware Version Requirements

To upgrade to Connectware 2.0.0, your current Connectware version must be 1.7.0 or above.

If your Connectware installation is below 1.7.0, make sure that you have followed [Upgrading Connectware to 1.7.0 (Kubernetes)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-kubernetes/upgrading-connectware-to-1-7-0-kubernetes) before upgrading to 2.0.0.

### Network Requirements

#### Why the Change?

With Connectware 2.0.0, some internal components have been updated to improve communication and performance. As a result, the network configuration has changed:

* **Added**: TCP/4222 and TCP/4223
* **Removed**: TCP/1884 and TCP/8884

#### What You Need to Do

<details>

<summary><strong>Updating the Network Ports</strong></summary>

Verify that your firewalls and security rules are updated to allow the new ports (TCP/4222 and TCP/4223) and to remove dependencies on the deprecated ones (TCP/1884 and TCP/8884).

This ensures uninterrupted connectivity between your agents and Connectware.

</details>

### Hardware Requirements

#### Why the Change?

Connectware 2.0.0 makes increased use of its PostgreSQL database and adds new components. This increases the resource requirements, so when planning this upgrade, ensure your infrastructure can accommodate the additional computing power.

#### What You Need to Do

<details>

<summary><strong>Updating the Hardware Setup</strong></summary>

We recommend adding the following resources to your hardware setup:

* **12** CPU cores
* **11** GB of memory
* **52** Gi of storage

However, these are general guidelines. Check what your specific system needs and make adjustments accordingly. If you were using the `control-plane-broker` option, you can offset these additional requirements with the resources it used, since it is being removed in this upgrade.

</details>

### Storage Requirements

#### Why the Change?

We have added two new components to Connectware:

* A streaming server called NATS
* A service called resource-status-tracking

Alongside other improvements, these additions enable Connectware to scale effectively for much larger deployments.

In addition, the latest versions of PostgreSQL and auth-server require updated Kubernetes resource requests and limits to maintain stability and performance under heavier workloads.

#### What You Need to Do

<details>

<summary><strong>Adjusting Kubernetes Resource Requests and Limits for Core Microservices</strong></summary>

The microservices `postgresql`, `auth-server`, `nats`, and `resource-status-tracking` now have new or revised Kubernetes resource requests and limits. Make sure to adapt the default values to match your deployment needs.

We recommend beginning with the defaults, monitoring performance metrics, and fine-tuning resource allocations as needed.

* To adjust the default values, update the corresponding values in the `global.podResources` Helm value context.

**Example**

```yaml
global:
  podResources:
    nats:
      limits:
        cpu: 2000m
        memory: 2000Mi
      requests:
        cpu: 2000m
        memory: 2000Mi
    resourceStatusTracking:
      limits:
        cpu: 1000m
        memory: 1000Mi
      requests:
        cpu: 1000m
        memory: 1000Mi
    database:
      limits:
        cpu: 2000m
        memory: 2000Mi
      requests:
        cpu: 2000m
        memory: 2000Mi
    authServer:
      limits:
        cpu: 1500m
        memory: 1000Mi
      requests:
        cpu: 1500m
        memory: 1000Mi
```

</details>

### Known Limitations

1. **Adding Certificates Through Admin UI Not Supported**

* You cannot add certificates to Connectware's CA bundle via the Admin UI.
* Instead, modify the `cybus_ca.crt` file directly on the `certs` volume.

2. **Backup via Admin UI Not Supported**

* The backup functionality through the Admin UI is not supported.
* Instead, create backups of the database by running a `pg_dump` command on the `postgresql-0` pod.

{% code title="Example" lineNumbers="true" %}

```sh
kubectl exec -n [your-namespace] postgresql-0 -- \
    bash -c "pg_dump -U cybus-admin --if-exists -c cybus_connectware" \
    > connectware_database.sql
```

{% endcode %}

## Upgrade Procedure

Follow this procedure to upgrade your Connectware installation to version 2.0.0. The steps are divided into two parts:

* **Mandatory Upgrade Steps**: Required for all installations to ensure a smooth and stable upgrade.
* **Feature-Specific Upgrade Steps**: Only needed if you use certain features, so they remain compatible with Connectware 2.0.0.

Expand the following sections for an overview of all upgrade steps.

<details>

<summary><strong>Mandatory Upgrade Steps</strong></summary>

These steps apply to every Connectware installation upgrading to Connectware 2.0.0. For a detailed guide, see [Mandatory Upgrade Steps](#mandatory-upgrade-steps).

{% hint style="info" %}
Depending on your setup, you may also need to perform additional Conditional Steps.
{% endhint %}

1. [**TLS Changes**](#id-1.-tls-changes): Default behavior on certificate validation has been adjusted.
2. [**Update Helm Values**](#id-2.-update-helm-values): Remove obsolete Helm values and adjust for changed and new values.
3. [**Preparing the Connectware Helm Upgrade**](#id-3.-preparing-the-connectware-helm-upgrade): Prepare removal of control-plane-broker and remove PostgreSQL StatefulSet.
4. [**Upgrading to Connectware 2.0.0**](#id-4.-upgrading-to-connectware-2.0.0): Download and install Connectware 2.0.0.
5. [**Enabling Agents in the Connectware Helm Chart**](#id-5.-enabling-agents-in-the-connectware-helm-chart): After upgrading Connectware, you need to go back to your agents to enable their TLS connections.
6. [**Updating Helm Values for the Connectware Agent Helm Chart**](#id-6.-updating-helm-values-for-the-connectware-agent-helm-chart): Update your agent configuration to match the updated Helm values.
7. [**Upgrading Agents for the Connectware Agent Helm Chart**](#id-7.-upgrading-agents-for-the-connectware-agent-helm-chart): Upgrade your agents with the `connectware-agent` Helm chart.
8. [**Reinstalling Services**](#id-8.-reinstalling-services): This upgrade changes where your services are stored. You will need to reinstall any services after the upgrade.

</details>

<details>

<summary><strong>Feature-Specific Upgrade Steps</strong></summary>

Only follow these if you use the related features, so they continue working after the upgrade.

1. [**Roles and Permissions**](#id-1.-permissions-and-roles): New permissions were added to Connectware. Check whether your custom roles require updates.
2. [**Custom Connectors**](#id-2.-custom-connectors): Update your custom connector configurations to meet new requirements.
3. [**Systemstate Protocol**](#id-3.-systemstate-protocol): Update your Systemstate protocol configurations to meet new requirements.
4. [**Log Monitoring**](#id-4.-log-monitoring): Some logging strings have changed. If you use log monitoring, you may need to update it.
5. [**Heidenhain Agents**](#id-5.-heidenhain-agents-windows): Upgrade your Heidenhain agents.
6. [**Auto-Generated MQTT Topics of Resources**](#id-6.-auto-generated-mqtt-topics-of-resources): Topic generation no longer includes resource-specific properties. Update your service commissioning files if you relied on old patterns.
7. [**Auto-Generated MQTT Users**](#id-7.-auto-generated-mqtt-users): The behavior of how MQTT users are auto-generated has changed. You may need to update your service commissioning file if you relied on auto-generated MQTT users.

</details>

## Mandatory Upgrade Steps

These steps are required to upgrade your Connectware installation to Connectware 2.0.0.

### 1. TLS Changes

#### Why the Change?

To enhance security by default, Connectware agents now verify TLS certificate chains automatically. This ensures that all components communicate over a valid trust chain, while still giving you the option to keep the old behavior by explicitly disabling TLS verification.

#### Key Changes

<details>

<summary><strong>1. Introducing the cybus_combined_ca.crt</strong></summary>

Connectware maintains two separate CA chains:

* External certificates validated by `cybus_ca.crt`.
* Internal certificates validated by `shared_yearly_ca.crt`.

Which CA an agent requires depends on the hostname through which it connects to Connectware, for example through the Connectware ingress or directly to the Control Streaming Server (NATS) on the internal network.

To simplify configuration, we introduced `cybus_combined_ca.crt`, a bundle containing both chains, so agents can use a single file without needing to distinguish between internal and external CA certificates.

</details>

<details>

<summary><strong>2. Certificate Chain Verification in Agents</strong></summary>

Agents now enforce TLS chain validation by default. Each agent requires access to `cybus_combined_ca.crt`, available on the `certs` volume.

* To revert to the previous behavior (skipping verification), set the environment variable `CYBUS_TRUST_ALL_CERTS` to `true`. Note that it has been renamed from `TRUST_ALL_CERTS`.
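If you need to retain the old behavior temporarily, `CYBUS_TRUST_ALL_CERTS` is set like any other container environment variable. The snippet below is a generic Kubernetes `env` entry, not a specific Helm value; where it belongs depends on how your agents are deployed:

```yaml
# Generic Kubernetes container env entry (placement depends on your
# agent deployment method). Skips TLS chain verification; use only
# as a temporary measure.
env:
  - name: CYBUS_TRUST_ALL_CERTS
    value: "true"
```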

</details>

<details>

<summary><strong>3. Configuring Certificate Hostnames</strong></summary>

The default Connectware-generated CA includes the hostnames `localhost` and `connectware`.

* To add more hostnames, configure the Helm value `global.ingressDNSNames`.

</details>

<details>

<summary><strong>4. Renewal of Certificate Chains</strong></summary>

With 2.0.0, the internal CA chain is replaced:

* Certificate Authority renamed from `CybusCA` to `CybusInternalCA`.
* The hostname `nats` is added as a Subject Alternate Name (SAN) to `shared_yearly_server.crt`.

The built-in default external CA certificate chain is also replaced.

* The hostname `connectware` is added as a SAN to `cybus_server.crt`.

If you rely on monitoring, custom setups, or modified certificates, adapt your configuration accordingly.
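To confirm which hostnames a certificate actually covers, you can inspect its SAN section with `openssl`. The sketch below generates a throwaway certificate purely for demonstration; run the final command against your real `cybus_server.crt` instead (requires OpenSSL 1.1.1 or later for the `-ext`/`-addext` options):

```sh
# Demonstration only: create a throwaway self-signed certificate with
# the SANs Connectware 2.0.0 adds (connectware, nats) ...
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=connectware" \
  -addext "subjectAltName=DNS:connectware,DNS:nats" \
  -keyout demo_server.key -out demo_server.crt -days 1

# ... then print its SAN section. Point this at your real
# cybus_server.crt to confirm the expected hostnames are present.
openssl x509 -noout -ext subjectAltName -in demo_server.crt
```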

</details>

<details>

<summary><strong>5. Replacing CA Certificate Chain</strong></summary>

To replace Connectware’s default external chain with your enterprise-managed CA:

* Replace `cybus_ca.crt` with your enterprise CA certificate.
* Ensure `cybus_server.crt` and `cybus_server.key` form a valid key pair, signed by the CA in `cybus_ca.crt`.

Do not replace the internal CA (`shared_yearly_ca.crt`).
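Before restarting anything, you can sanity-check the replaced files with `openssl`. The sketch below uses a throwaway self-signed certificate for demonstration; run the two checks against your real `cybus_server.crt`, `cybus_server.key`, and `cybus_ca.crt` instead of the demo files:

```sh
# Demonstration setup: a throwaway self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=connectware" \
  -keyout demo_server.key -out demo_server.crt -days 1
cp demo_server.crt demo_ca.crt   # a self-signed cert is its own CA

# Check 1: certificate and key form a valid pair if both commands
# print the same public key digest.
openssl x509 -noout -pubkey -in demo_server.crt | openssl sha256
openssl pkey -pubout -in demo_server.key | openssl sha256

# Check 2: the certificate chains to the CA if this prints "OK".
openssl verify -CAfile demo_ca.crt demo_server.crt
```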

After replacement:

1. Restart the `system-control-server` deployment to rebuild and synchronize the combined CA bundle (`cybus_combined_ca.crt`):

{% code lineNumbers="true" %}

```bash
kubectl -n [connectware-namespace] rollout restart deployment system-control-server
```

{% endcode %}

2. Restart all Connectware services.

</details>

### 2. Update Helm Values

#### Why the Change?

Some changes to Connectware and its Helm chart require corresponding changes to Helm values. Adapt your `values.yaml` file accordingly:

* The optional control-plane-broker is removed from Connectware.
* All parameters to tune the inter-service communication have been removed.

#### Obsolete Helm Values

<details>

<summary><strong>Removing Obsolete Helm Values</strong></summary>

Some [Helm values](https://docs.cybus.io/2-0-6/environment-variables#kubernetes) are obsolete and have been removed. Remove the following Helm values from your `values.yaml` file for the `connectware` Helm chart:

* `global.rpcTimeout`
* `global.adminWebApp.rpcTimeout`
* `global.containerManager.rpcTimeout`
* `global.protocolMapper.rpcTimeout`
* `global.systemControlServer.rpcTimeout`
* `global.serviceManager.rpcTimeout`
* `global.serviceManager.storage`
* `global.controlPlaneBroker`
* `global.protocolMapperAgents[*].controlPlane`
* `global.serviceManager.useServicesGraph`
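As a quick sanity check, you can grep your `values.yaml` for the removed keys. This is a rough sketch, not an exhaustive test; nested keys such as `serviceManager.storage` still need a manual look:

```sh
# Quick (not exhaustive) check for leftover obsolete Helm values;
# no matches means there is nothing left to remove.
grep -nE 'rpcTimeout|controlPlaneBroker|useServicesGraph|controlPlane:' values.yaml \
  || echo "No obsolete values found."
```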

</details>

#### New Helm Values

<details>

<summary><strong>1. Ingress DNS Name Configuration</strong></summary>

With the changed TLS behavior in Connectware, it is now essential to specify the DNS names under which Connectware is addressed, for example by agents.

If you are replacing Connectware's default PKI, you likely already manage this yourself by providing a valid `cybus_server.crt` that contains all Subject Alternative Names (SANs) used within your setup.

If you are using Connectware's default PKI, you can use the new Helm value `global.ingressDNSNames`, which is a list of names that will be added to the default `cybus_server.crt`.

**Hostname Formats**

You can include multiple hostnames in the list. The certificate will include all specified names in its SAN section.

The configuration accepts various hostname formats:

* Wildcards (e.g., `*.company.io`)
* Subdomains (e.g., `connectware.company.io`)
* Custom hostnames (e.g., `localhost`)
* IP addresses (e.g., `192.168.100.42`)

**Example**

{% code lineNumbers="true" %}

```yaml
global:
    ingressDNSNames:
        - company.io
        - localhost
        - "*.company.io"
        - connectware.company.io
        - 192.168.100.42
```

{% endcode %}

</details>

<details>

<summary><strong>2. Proxy Configuration</strong></summary>

Connectware's proxy configuration has been improved with version 2.0. Accompanying this, we added Helm values for configuring proxy usage. This means that you can no longer configure proxy usage directly through environment variables. If you have done this in the past, transfer your configuration to these new Helm values:

| **New Helm Value**            | **Type** | **Default Values** | **Purpose**                                                                                           |
| ----------------------------- | -------- | ------------------ | ----------------------------------------------------------------------------------------------------- |
| `global.proxy.url`            | string   | \<none>            | Address of the HTTP proxy server to be used                                                           |
| `global.proxy.exceptions`     | array    | \<none>            | List of hosts, for which the proxy is ignored                                                         |
| `global.proxy.existingSecret` | string   | \<none>            | Name of a Kubernetes Secret which contains the proxy configuration as the keys 'url' and 'exceptions' |

**Example**

{% code lineNumbers="true" %}

```yaml
global:
  proxy:
    url: proxy.mycompany.tld
    exceptions:
      - my-server.mycompany.tld
      - my-internal-service.mycompany.tld
```

{% endcode %}

**Example with existing secret**

Create your secret using your preferred method. In this example, we use a `kubectl create` command:

{% code lineNumbers="true" %}

```sh
kubectl create secret -n [connectware-namespace] generic my-connectware-proxy-config \
 --from-literal="url=http://myproxy.company.tld" \
 --from-literal="exceptions=http://myserver1.company.tld,https://myserver2.company.tld"
```

{% endcode %}

Then reference the secret in your `values.yaml`:

{% code lineNumbers="true" %}

```yaml
global:
  proxy:
    existingSecret: my-connectware-proxy-config
```

{% endcode %}

</details>

<details>

<summary><strong>3. NATS Configuration</strong></summary>

Connectware 2.0.0 introduces NATS as the streaming server, primarily used for inter-service communication. The key configuration parameter is `global.nats.replicas`, which defines the cluster size. Typical values are `3` or `5`, with `3` as the default. Increasing this to `5` raises the redundancy level from N+1 to N+2.

{% hint style="warning" %}
The replicas value is critical for the NATS cluster configuration and is shared across multiple Connectware components.

This value can only be set during the initial installation of Connectware and cannot be modified later. Scaling operations on the `nats` StatefulSet must not be performed.
{% endhint %}
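For example, to start a fresh installation at the higher redundancy level (a minimal sketch; remember that this value cannot be changed later):

```yaml
global:
  nats:
    # Cluster size; must be set at initial installation and cannot
    # be modified afterwards. 3 = N+1 (default), 5 = N+2.
    replicas: 5
```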

The following configuration values are available for the NATS streaming server:

| **New Helm Value**                                        | **Type** | **Default Values**                                                                                                                                                                                             | **Purpose**                                                                                  |
| --------------------------------------------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- |
| `global.nats.replicas`                                    | integer  | 3                                                                                                                                                                                                              | The number of NATS replicas                                                                  |
| `global.nats.podAntiAffinity`                             | string   | soft                                                                                                                                                                                                           | The podAntiAffinity behavior for NATS (one of `none`, `soft`, `hard`)                        |
| `global.nats.podAntiAffinityTopologyKey`                  | string   | kubernetes.io/hostname                                                                                                                                                                                         | The topology key used for the NATS podAntiAffinity                                           |
| `global.nats.labels`                                      | object   | \<none>                                                                                                                                                                                                        | A set of labels that will be applied to NATS resources                                       |
| `global.nats.annotations`                                 | object   | \<none>                                                                                                                                                                                                        | A set of annotations that will be applied to NATS resources                                  |
| `global.nats.podLabels`                                   | object   | \<none>                                                                                                                                                                                                        | A set of labels that will be applied to NATS pod resources                                   |
| `global.nats.podAnnotations`                              | object   | \<none>                                                                                                                                                                                                        | A set of annotations that will be applied to NATS pod resources                              |
| `global.nats.service.labels`                              | object   | \<none>                                                                                                                                                                                                        | A set of labels that will be applied to NATS service resources                               |
| `global.nats.service.annotations`                         | object   | \<none>                                                                                                                                                                                                        | A set of annotations that will be applied to NATS service resources                          |
| `global.podResources.nats.resources`                      | array    | For a list of all default values, see [default-values.yaml](https://docs.cybus.io/2-0-6/documentation/installing-connectware/installing-connectware-kubernetes#creating-a-copy-of-the-default-valuesyaml-file) | Kubernetes compute resource requirements and limits                                          |
| `global.nats.env`                                         | array    | \<none>                                                                                                                                                                                                        | Array containing environment variables as name and value pairs to be applied to NATS service |
| `global.nats.metrics.prometheus.enabled`                  | boolean  | false                                                                                                                                                                                                          | Enable Prometheus exporter for NATS                                                          |
| `global.nats.metrics.prometheus.resources`                | array    | For a list of all default values, see [default-values.yaml](https://docs.cybus.io/2-0-6/documentation/installing-connectware/installing-connectware-kubernetes#creating-a-copy-of-the-default-valuesyaml-file) | Kubernetes compute resource requirements and limits                                          |
| `global.nats.metrics.prometheus.serviceMonitor.enabled`   | boolean  | false                                                                                                                                                                                                          | Enable Prometheus Operator ServiceMonitor for NATS                                           |
| `global.nats.metrics.prometheus.serviceMonitor.namespace` | string   | \<none>                                                                                                                                                                                                        | Namespace for the Prometheus ServiceMonitor                                                  |
| `global.nats.metrics.prometheus.serviceMonitor.labels`    | object   | \<none>                                                                                                                                                                                                        | Labels for the Prometheus ServiceMonitor                                                     |
| `global.nats.storage.size`                                | string   | 16Gi                                                                                                                                                                                                           | Define the size of the NATS JetStream volume                                                 |
| `global.nats.storage.storageClassName`                    | string   | \<none>                                                                                                                                                                                                        | Define a Kubernetes StorageClass that will be used for the NATS JetStream volume             |
| `global.nats.containerSecurityContext`                    | array    | For a list of all default values, see [default-values.yaml](https://docs.cybus.io/2-0-6/documentation/installing-connectware/installing-connectware-kubernetes#creating-a-copy-of-the-default-valuesyaml-file) | Set a container SecurityContext as defined by Kubernetes API                                 |

</details>

<details>

<summary><strong>4. Resource-Status-Tracking Configuration</strong></summary>

In addition to the new streaming server, Connectware introduces a second new component called `resource-status-tracking`. This component allows you to monitor the status of resources created through service commissioning files and enables you to detect deviations in service behavior.

The following configuration values are available for `resource-status-tracking`:

| **New Helm Value**                                         | **Type** | **Default Values**                                                                                                                                                                                             | **Purpose**                                                                                                    |
| ---------------------------------------------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
| `global.resourceStatusTracking.replicas`                   | integer  | 2                                                                                                                                                                                                              | The number of resource-status-tracking replicas                                                                |
| `global.resourceStatusTracking.podAntiAffinity`            | string   | soft                                                                                                                                                                                                           | The podAntiAffinity behavior for resourceStatusTracking (one of `none`, `soft`, `hard`)                        |
| `global.resourceStatusTracking.podAntiAffinityTopologyKey` | string   | kubernetes.io/hostname                                                                                                                                                                                         | The topology key used for the resourceStatusTracking podAntiAffinity                                           |
| `global.resourceStatusTracking.labels`                     | object   | \<none>                                                                                                                                                                                                        | A set of labels that will be applied to resourceStatusTracking resources                                       |
| `global.resourceStatusTracking.annotations`                | object   | \<none>                                                                                                                                                                                                        | A set of annotations that will be applied to resourceStatusTracking resources                                  |
| `global.resourceStatusTracking.podLabels`                  | object   | \<none>                                                                                                                                                                                                        | A set of labels that will be applied to resourceStatusTracking pod resources                                   |
| `global.resourceStatusTracking.podAnnotations`             | object   | \<none>                                                                                                                                                                                                        | A set of annotations that will be applied to resourceStatusTracking pod resources                              |
| `global.resourceStatusTracking.service.labels`             | object   | \<none>                                                                                                                                                                                                        | A set of labels that will be applied to resourceStatusTracking service resources                               |
| `global.podResources.resourceStatusTracking.resources`     | array    | For a list of all default values, see [default-values.yaml](https://docs.cybus.io/2-0-6/documentation/installing-connectware/installing-connectware-kubernetes#creating-a-copy-of-the-default-valuesyaml-file) | Kubernetes compute resource requirements and limits                                                            |
| `global.resourceStatusTracking.service.annotations`        | object   | \<none>                                                                                                                                                                                                        | A set of annotations that will be applied to resourceStatusTracking service resources                          |
| `global.resourceStatusTracking.env`                        | array    | \<none>                                                                                                                                                                                                        | Array containing environment variables as name and value pairs to be applied to resourceStatusTracking service |
| `global.resourceStatusTracking.containerSecurityContext`   | array    | For a list of all default values, see [default-values.yaml](https://docs.cybus.io/2-0-6/documentation/installing-connectware/installing-connectware-kubernetes#creating-a-copy-of-the-default-valuesyaml-file) | Set a container SecurityContext as defined by Kubernetes API                                                   |

</details>

<details>

<summary><strong>5. PostgreSQL Storage Size</strong></summary>

In previous releases, the storage volume of the PostgreSQL component was fixed at 1 Gi. With Connectware 2.0.0, the reliance on PostgreSQL has increased, resulting in a new default storage size of 5 Gi. This parameter is now configurable.

We recommend allocating at least 5 Gi. For larger deployments, storage sizes of 20 Gi or more may be appropriate.

You can configure the storage size via the `global.postgresql.storage.size` Helm value:

{% code lineNumbers="true" %}

```yaml
global:
  postgresql:
    storage:
      size: 10Gi
```

{% endcode %}

</details>

<details>

<summary><strong>6. Enabling MQTTS for Protocol-Mapper Agents</strong></summary>

A new Helm value is available to configure whether a protocol-mapper agent establishes its data plane connection to the Connectware MQTT broker using TLS.

| **New Helm Value**                             | **Type** | **Default Values** | **Purpose**                                                 |
| ---------------------------------------------- | -------- | ------------------ | ----------------------------------------------------------- |
| `global.protocolMapperAgents[*].dataPlane.tls` | boolean  | false              | Enable TLS encryption for the agent's MQTT data plane connection |

{% code lineNumbers="true" %}

```yaml
global:
  protocolMapperAgents:
    - name: welder-robots
      dataPlane:
        tls: true
```

{% endcode %}

</details>

<details>

<summary><strong>7. Disabling Agents During the Connectware Upgrade</strong></summary>

{% hint style="info" %}
This step prepares agents orchestrated with the `connectware` Helm chart for disabling during the upgrade. Agents orchestrated by other methods must be shut down separately, as described in [7. Shutting Down Protocol-Mapper Agents](#id-4.-upgrading-to-connectware-2.0.0).
{% endhint %}

To prevent agents from requiring re-registration after the upgrade, you must disable them during the upgrade. Once the upgrade is complete, you will re-enable the agents, update their configuration, and provide them with a valid CA certificate.

* To disable the agents defined in your Connectware Helm chart, disable the Helm value `global.protocolMapperAgents` and all related values within this context. You can do this either by commenting out each line with a `#` or by temporarily removing them from the file.
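For illustration, a temporarily disabled agent section in your `values.yaml` could look like this (the agent names are examples):

{% code lineNumbers="true" %}

```yaml
global:
  # Temporarily disabled for the 2.0.0 upgrade; re-enabled after the upgrade.
  # protocolMapperAgents:
  #   - name: welder-robots
  #   - name: bender-robots
```

{% endcode %}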

</details>

### 3. Preparing the Connectware Helm Upgrade

#### Why the Change?

Connectware 2.0.0 introduces architectural improvements that require you to remove or adjust certain resources before running the Helm upgrade. This ensures a clean and successful upgrade process.

#### What You Need to Do

<details>

<summary><strong>1. Removing the Control-Plane-Broker</strong></summary>

The `control-plane-broker` is deprecated and its associated StatefulSet is no longer used. It will no longer be started after the upgrade.

* No action is required in your Helm installation.
* Optional: Remove `global.controlPlaneBroker` from the `connectware` chart and `controlPlaneBrokerEnabled` from the `connectware-agent` chart.
* The `brokerdata-control-plane-broker-*` and `brokerlog-control-plane-broker-*` PersistentVolumeClaims are not removed automatically. Delete them manually if you want to reclaim the storage.

</details>

<details>

<summary><strong>2. Backing Up Your PostgreSQL Database</strong></summary>

With Connectware 2.0.0, Connectware uses a new major version of PostgreSQL. You need to delete your `postgresql` volume before upgrading Connectware (this is covered later in this upgrade guide). This requires you to create a backup of your database and restore it after the upgrade.

{% hint style="warning" %}
Any modifications made to Connectware after the following database backup will be lost after the Connectware 2.0.0 upgrade. We recommend creating the backup right before upgrading to Connectware 2.0.0.
{% endhint %}

1. To create a backup of your database, run the following command.

{% code lineNumbers="true" %}

```sh
kubectl exec -n [your-namespace] postgresql-0 -- \
    bash -c "pg_dump -U cybus-admin --if-exists -c cybus_connectware" \
    > connectware_database.sql
```

{% endcode %}

2. Make sure the backup is successful, then store the database file in a secure location.
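A quick way to sanity-check the dump before storing it is to look for the pg_dump header and completion marker. The sketch below simulates this with a tiny sample file; run the same `grep` commands against your real `connectware_database.sql`:

{% code lineNumbers="true" %}

```sh
# Stand-in for your real dump file, connectware_database.sql.
cat > sample_dump.sql <<'EOF'
--
-- PostgreSQL database dump
--
CREATE TABLE widgets (id integer);
--
-- PostgreSQL database dump complete
--
EOF

# A usable pg_dump file starts with this header ...
grep -q 'PostgreSQL database dump' sample_dump.sql && echo "header OK"
# ... and ends with a completion marker; a truncated dump lacks it.
grep -q 'database dump complete' sample_dump.sql && echo "footer OK"
```

{% endcode %}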

</details>

<details>

<summary><strong>3. Service-Manager Volume Backup &#x26; Removal</strong></summary>

The `service-manager` PersistentVolumeClaim (PVC) is deprecated and will be removed automatically during the upgrade to 2.0.0.

Depending on your ReclaimPolicy, this may or may not mean that the volume is being deleted too. If it is not automatically deleted, you will have to manually delete the volume previously associated with the `service-manager` PersistentVolumeClaim to free up the disk space.

These contents will be stored in the PostgreSQL database in the future, but they are not migrated automatically. You will need to reinstall any services you used before. See [Reinstalling Services](#12-reinstalling-services).

If you do not have your services stored outside of Connectware, make sure to export your services, or create a backup of your `service-manager` volume before upgrading.

</details>

### 4. Upgrading to Connectware 2.0.0

{% hint style="warning" %}
Make sure all prior steps are completed before proceeding with the Helm upgrade.
{% endhint %}

<details>

<summary><strong>1. Updating Helm Repository Cache</strong></summary>

Before performing the upgrade, update the Helm repository cache to ensure the latest Connectware chart version is available.

* Run the following command:

{% code lineNumbers="true" %}

```sh
helm repo update
```

{% endcode %}

</details>

<details>

<summary><strong>2. Reviewing the Connectware Changelog</strong></summary>

Before upgrading to Connectware 2.0.0, review the [changelog](https://docs.cybus.io/2-0-6/reference/changelog#what-has-changed-in-200) to familiarize yourself with new features, bug fixes, and other changes introduced in Connectware 2.0.0.

</details>

<details>

<summary><strong>3. Reviewing the Readme File</strong></summary>

Before upgrading to Connectware 2.0.0, review the readme file. The readme may contain important version-specific upgrade instructions.

* To open the readme file, run the following command:

{% code lineNumbers="true" %}

```sh
helm show readme <repo-name>/connectware --version <2.0.0>
```

{% endcode %}

</details>

<details>

<summary><strong>4. Comparing Helm Configurations between Connectware Versions</strong></summary>

With a new Connectware version, there might be changes to the default Helm configuration values. We recommend that you compare the default Helm values of your current Connectware version with those of your target Connectware version.

* To display the new default values, enter the following command:

{% code lineNumbers="true" %}

```sh
helm show values <repo-name>/connectware --version <2.0.0>
```

{% endcode %}

* To display which Connectware default values have changed between your current version and your target version, enter the following command:

{% code lineNumbers="true" %}

```sh
diff <(helm show values <repo-name>/connectware --version <current-version>) <(helm show values <repo-name>/connectware --version <2.0.0>)
```

{% endcode %}

**Example**

{% code lineNumbers="true" %}

```sh
diff <(helm show values cybus/connectware --version 1.1.0) <(helm show values cybus/connectware --version 1.1.1)
83c83
<     version: 1.1.0
---
>     version: 1.1.1
```

{% endcode %}

In this example, only the image version has changed. However, if any of the Helm value changes are relevant to your setup, make the appropriate changes.

* To override default Helm values, add the custom Helm value to your local `values.yaml` file.

</details>

<details>

<summary><strong>5. Adjusting Helm Values</strong></summary>

When you have reviewed the necessary information, adjust your configuration in your `values.yaml` file. Not every upgrade requires adjustments.

If you specified which image version of Connectware to use by setting the Helm value `global.image.version`, you will need to update it to `<target-version>`. Unless you have a specific reason to pin a particular image version, we recommend not setting this Helm value.

</details>

<details>

<summary><strong>6. Verifying your Backups</strong></summary>

Make sure that you store backups of your setup. This allows you to restore a previous state if necessary.

Your backups must consist of the following files:

* All Kubernetes PersistentVolumes that Connectware uses
* Your Connectware database
* Your `values.yaml` file
* All service commissioning files

Depending on your local infrastructure, it may be necessary to back up additional files.

</details>

<details>

<summary><strong>7. Shutting Down Protocol-Mapper Agents</strong></summary>

{% hint style="info" %}
In this step, you need to shut down any agents currently connected to your Connectware instance that are **not** managed by the `connectware` chart. Agents orchestrated through the `connectware` chart have already been prepared for shutdown as part of [7. Disabling Agents During the Connectware Upgrade](#id-2.-update-helm-values).
{% endhint %}

Before running the `helm upgrade` command, you must stop all connected agents. Agents that remain running during the upgrade risk having to go through the agent registration process again.

* **Docker Run**: To stop agents started using `docker run`, use the `docker stop` command. If you do not know the container names, run `docker ps` to find them.
* **Docker Compose**: If your agents are running in Docker Compose, use the `docker compose down` command to stop them.
* **Agent Helm Chart**: You can shut down agents that have been installed via the `connectware-agent` Helm chart using this command:

{% code lineNumbers="true" %}

```sh
kubectl get -n [your-namespace] sts -lapp.kubernetes.io/component=protocol-mapper-agent -o name | xargs -I % kubectl scale -n [your-namespace] % --replicas 0
```

{% endcode %}

</details>

<details>

<summary><strong>8. Removing the PostgreSQL StatefulSet</strong></summary>

* Before running the `helm upgrade` command, you must remove the `postgresql` StatefulSet:

{% code lineNumbers="true" %}

```sh
kubectl -n [your-namespace] delete sts postgresql
```

{% endcode %}

</details>

<details>

<summary><strong>9. Initial Connectware Upgrade</strong></summary>

You can now start the first of two upgrade processes. This first upgrade run applies the new Connectware 2.0.0 workloads and prepares the system for the required database migration.

* To upgrade Connectware, enter the following command:

{% code lineNumbers="true" %}

```sh
helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version <2.0.0> -f <values.yaml>
```

{% endcode %}

**Result:** The newly generated workload definitions are applied to your Kubernetes cluster and your Connectware pods are replaced.

</details>

<details>

<summary><strong>10. Shutting Down Connectware</strong></summary>

* Wait for the `system-control-server` deployment to contain a ready pod, then shut down Connectware to restore your PostgreSQL database:

{% code lineNumbers="true" %}

```sh
kubectl get -n [your-namespace] deploy,sts -lapp.kubernetes.io/part-of=connectware -o name | xargs -I % kubectl scale -n [your-namespace] % --replicas 0
```

{% endcode %}

</details>

<details>

<summary><strong>11. Restoring the PostgreSQL Database</strong></summary>

1. Note down the Persistent Volume name for your `postgresql-postgresql-0` PersistentVolumeClaim. You will need this name later to make sure the volume is not recycled.

{% code lineNumbers="true" %}

```sh
kubectl get -n [your-namespace] pvc -o jsonpath='{.spec.volumeName}{"\n"}' postgresql-postgresql-0
```

{% endcode %}

2. Remove the `postgresql-postgresql-0` PersistentVolumeClaim:

{% code lineNumbers="true" %}

```sh
kubectl -n [your-namespace] delete pvc postgresql-postgresql-0
```

{% endcode %}

3. Remove the PostgreSQL PersistentVolume. If the same volume is reused for PostgreSQL, the upgrade will fail. You can skip this step only if the volume has already been deleted automatically through a reclaim policy, or if you are sure a new volume will be used.

{% code lineNumbers="true" %}

```sh
kubectl delete pv [persistent-volume-name-from-previous-step]
```

{% endcode %}

4. Start PostgreSQL:

{% code lineNumbers="true" %}

```sh
kubectl -n [your-namespace] scale sts postgresql --replicas=1
```

{% endcode %}

5. Restore your PostgreSQL Database. Wait for the postgresql-0 pod to become ready, then run:

{% code lineNumbers="true" %}

```sh
cat connectware_database.sql | kubectl exec -n [your-namespace] postgresql-0 \
  -i -- psql -U cybus-admin -d cybus_connectware
```

{% endcode %}

</details>

<details>

<summary><strong>12. Final Connectware Upgrade after Database Restore</strong></summary>

You can start the final upgrade process. This upgrade finalizes the process by starting Connectware with the restored PostgreSQL database.

* To upgrade Connectware, enter the following command:

{% code lineNumbers="true" %}

```sh
helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version <2.0.0> -f <values.yaml>
```

{% endcode %}

**Optional:** You can use the `--atomic --timeout 10m` command-line switches, which cause Helm to wait for the result of your upgrade and perform a rollback if it fails. We recommend a timeout of at least 10 minutes, but since the time an upgrade takes strongly depends on your infrastructure and configuration, you might have to increase it further.

**Result:** The newly generated workload definitions are applied to your Kubernetes cluster and your Connectware pods are replaced.

</details>

<details>

<summary><strong>13. Verifying the Connectware Upgrade</strong></summary>

You can monitor the Connectware upgrade progress to verify that everything runs smoothly, to know when the installation is successful, or to investigate potential issues.

**Monitoring the Connectware Upgrade**

The Connectware upgrade can take a few minutes. To monitor the upgrade process, do one of the following:

* To monitor the current status of the upgrade process, enter the following command:

{% code lineNumbers="true" %}

```sh
kubectl get pods -n <namespace>
```

{% endcode %}

* To monitor the continuous progress of the upgrade process, enter the following command:

{% code lineNumbers="true" %}

```sh
while true; do clear; kubectl get pod -n <namespace>; sleep 5; done
```

{% endcode %}

* To stop monitoring the continuous progress of the upgrade process, press <kbd>Ctrl</kbd>+<kbd>C</kbd>.

**Pod Stages During the Connectware Upgrade**

During the Connectware upgrade, the pods go through the following stages:

* Terminating
* Pending
* PodInitializing
* ContainerCreating
* Init:x/x
* Running

When pods reach the STATUS `Running`, they go through their individual startup phase before reporting as ready. To be fully functional, all pods must reach the STATUS `Running` and report all of their containers as ready. This is indicated by the same number on both sides of the `/` in the READY column.
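The READY comparison can also be scripted. The sketch below uses sample lines in place of live `kubectl get pod -n <namespace> --no-headers` output and prints any pod whose ready count does not match its container count:

{% code lineNumbers="true" %}

```sh
# Sample lines standing in for live "kubectl get pod" output.
cat > pods.txt <<'EOF'
broker-0                         1/1   Running   0   3h44m
auth-server-5b8c899958-f9nl4     0/1   Running   0   12s
postgresql-0                     1/1   Running   0   3h44m
EOF

# READY is "ready/total"; a pod is fully up when both numbers match.
awk '{ split($2, r, "/"); if (r[1] != r[2]) print $1, "not ready:", $2 }' pods.txt
```

{% endcode %}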

**Example**

{% code lineNumbers="true" %}

```sh
kubectl get pod -n <namespace>
```

{% endcode %}

| NAME                                   | READY | STATUS  | RESTARTS | AGE   |
| -------------------------------------- | ----- | ------- | -------- | ----- |
| admin-web-app-7cd8ccfbc5-bvnzx         | 1/1   | Running | 0        | 3h44m |
| auth-server-5b8c899958-f9nl4           | 1/1   | Running | 0        | 3m3s  |
| broker-0                               | 1/1   | Running | 0        | 3h44m |
| broker-1                               | 1/1   | Running | 0        | 2m1s  |
| connectware-7784b5f4c5-g8krn           | 1/1   | Running | 0        | 21s   |
| container-manager-558d9c4cbf-m82bz     | 1/1   | Running | 0        | 3h44m |
| doc-server-55c77d4d4c-nwq5f            | 1/1   | Running | 0        | 3h44m |
| ingress-controller-6bcf66495c-l5dpk    | 1/1   | Running | 0        | 18s   |
| postgresql-0                           | 1/1   | Running | 0        | 3h44m |
| protocol-mapper-67cfc6c848-qqtx9       | 1/1   | Running | 0        | 3h44m |
| service-manager-f68ccb767-cftps        | 1/1   | Running | 0        | 3h44m |
| system-control-server-58f47c69bf-plzt5 | 1/1   | Running | 0        | 3h44m |
| workbench-5c69654659-qwhgc             | 1/1   | Running | 0        | 15s   |

At this point Connectware is upgraded and started. You can now make additional configurations or verify the upgrade status in the Admin UI.

</details>

### 5. Enabling Agents in the Connectware Helm Chart

When upgrading to Connectware 2.0.0, protocol-mapper agents must be explicitly enabled again. This requires updating Helm values and configuring TLS certificates, or, in less secure setups, choosing to trust all certificates.

The following steps are required:

1. Update Helm values to their new equivalents.
2. Configure TLS (recommended) or opt to trust all certificates (not recommended).
3. Re-run `helm upgrade` to apply the changes.

Agents connecting to Connectware must either:

* Provide a valid CA certificate that matches the server certificate, **OR**
* Skip certificate validation by setting `CYBUS_TRUST_ALL_CERTS` to `true` (not recommended).

#### Which Certificate to Use

The certificate that you provide depends on how the agent connects:

* Via Connectware ingress: Use `cybus_ca.crt`.
* Via the internal network: Use `shared_yearly_ca.crt`.
* Simplified option that works for both cases: Use `cybus_combined_ca.crt`.

Use the table below to decide which option applies to your setup:

<details>

<summary><strong>Overview of Certificate Behavior</strong></summary>

| CA certificate | Value of `CYBUS_TRUST_ALL_CERTS` | Behavior                                                                                                                                                                                                                                                                                                        | Log message during control connection                                                              |
| -------------- | -------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| Not Configured | false                            | **Default**: TLS connections, like the control connection to Connectware, will try to use the system trust store to do certificate validation. This will only work when the CAs being used in Connectware are signed by a well-known CA authority or when self-signed CAs were added to the system trust store. | CA certificate not found, using system trusted CAs for NATS connection.                            |
| Not Configured | true                             | TLS connections, like the control connection to Connectware, will not validate certificates and will trust all certificates.                                                                                                                                                                                    | CA certificate not found, Connectware configured for trusting all certificates for NATS connection |
| Configured     | true                             | TLS connections, like the control connection to Connectware, will not validate server certificates and will trust any cert.                                                                                                                                                                                     | CA certificate found, but trusting all certificates for NATS connection                            |
| Configured     | false                            | **Recommended**: TLS connections, like the control connection to Connectware, will validate server certificates against the configured CA and reject untrusted certificates.                                                                                                                                     | CA certificate found, using it for NATS connection with CA verification                            |

</details>

#### What You Need to Do

<details>

<summary><strong>1. Updating the Helm Values</strong></summary>

You must update the Helm values of your `connectware` installation again to re-enable agents: remove the `#` characters you added, or add the agent entries back to the Helm values file. Then add the `cybus_combined_ca.crt` or set `CYBUS_TRUST_ALL_CERTS` to `true`. If you were directly targeting the MQTT broker or control-plane-broker before, also move the respective configuration to its new replacement.

The following Helm values have changed. If you had specific configuration for these in the past, update the Helm values accordingly.

For some Helm values, you need to take additional steps depending on your setup. The required steps are covered in the following sections.

| **Old Helm Value**                                              | **New Helm Value**                                        | **Required Change**                               |
| --------------------------------------------------------------- | --------------------------------------------------------- | ------------------------------------------------- |
| `global.protocolMapperAgents[*].mTLS.caChain.cert`              | `global.protocolMapperAgents[*].tls.ca.certChain`         | Move                                              |
| `global.protocolMapperAgents[*].mTLS.caChain.existingConfigMap` | `global.protocolMapperAgents[*].tls.ca.existingConfigMap` | Move                                              |
| `global.protocolMapperAgents[*].mqttDataHost`                   | `global.protocolMapperAgents[*].dataPlane.host`           | See **1.1 - Directly Targeting MQTT Broker**      |
| `global.protocolMapperAgents[*].mqttDataPort`                   | `global.protocolMapperAgents[*].dataPlane.port`           | See **1.1 - Directly Targeting MQTT Broker**      |
| `global.protocolMapperAgents[*].mqttHost`                       | `global.protocolMapperAgents[*].streamServer.host`        | See **1.2 - Directly Targeting Streaming Server** |
| `global.protocolMapperAgents[*].mqttPort`                       | `global.protocolMapperAgents[*].streamServer.port`        | See **1.2 - Directly Targeting Streaming Server** |

</details>

<details>

<summary><strong>1.1 - Directly Targeting MQTT Broker</strong></summary>

When deploying agents in the internal network of Connectware, they are able to directly connect to our MQTT broker instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

The major hindrance is that the hostname "broker", which our MQTT broker uses on the internal network, is not part of the `cybus_server.crt` by default. To connect agents to this hostname over TLS, you either need to add "broker" as a Subject Alternative Name (SAN) to the certificate, or set `CYBUS_TRUST_ALL_CERTS=true` for the agent.
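You can check whether a certificate already lists "broker" as a SAN with `openssl`. In this sketch a freshly generated self-signed certificate stands in for your real `cybus_server.crt` (the `-addext` option requires OpenSSL 1.1.1 or newer):

{% code lineNumbers="true" %}

```sh
# Generate a stand-in certificate that includes "broker" as a SAN;
# inspect your real cybus_server.crt the same way.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=connectware" \
  -addext "subjectAltName=DNS:connectware,DNS:broker"

# List the Subject Alternative Names; "DNS:broker" must appear for
# direct TLS connections to the hostname "broker" to validate.
openssl x509 -in cert.pem -noout -ext subjectAltName
```

{% endcode %}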

**Adding the Hostname to the Default Certificate**

If you are using the built-in default certificate for Connectware, you can add the hostname "broker" through the Helm value `global.ingressDNSNames`:

{% code lineNumbers="true" %}

```yaml
global:
  ingressDNSNames:
    - broker
```

{% endcode %}

It is easiest to add this Helm value before upgrading to Connectware 2.0.0, since the change is then applied as part of the upgrade itself.

If applying this configuration after already upgrading to Connectware 2.0.0, running `helm upgrade` on your Connectware installation will cause the `system-control-server` Deployment to restart. Once it is ready again, restart the `broker` StatefulSet:

{% code lineNumbers="true" %}

```sh
kubectl rollout restart statefulset broker -n [your-namespace]
```

{% endcode %}

**Configuring Your Agents to Target the MQTT Broker**

Next you need to configure your agents to target the MQTT broker directly by using the `protocolMapperAgents[*].dataPlane.host` Helm value:

{% code lineNumbers="true" %}

```yaml
global:
  protocolMapperAgents:
    - name: welder-robots
      dataPlane:
        host: broker
    - name: bender-robots
      dataPlane:
        host: broker
```

{% endcode %}

The TCP port used for this connection is automatically determined by other configuration, such as TLS and mTLS settings. However, if you need to override it, use the Helm value `protocolMapperAgents[*].dataPlane.port`.
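If you do need an explicit override, it might look like the following sketch (the port number is purely illustrative; the correct value depends on your TLS and mTLS configuration):

{% code lineNumbers="true" %}

```yaml
global:
  protocolMapperAgents:
    - name: welder-robots
      dataPlane:
        host: broker
        port: 8883 # illustrative only
```

{% endcode %}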

</details>

<details>

<summary><strong>1.2 - Directly Targeting Control Connection Streaming Server</strong></summary>

When deploying agents in the internal network of Connectware, they are able to directly connect to our streaming server control plane instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

**Configuring Your Agents to Target the Streaming Server**

Next you need to configure your agents to target the streaming server directly by using the `protocolMapperAgents[*].streamServer.host` Helm value. The default internal name is "nats".

{% code lineNumbers="true" %}

```yaml
global:
  protocolMapperAgents:
    - name: welder-robots
      streamServer:
        host: nats
    - name: bender-robots
      streamServer:
        host: nats
```

{% endcode %}

The TCP port used for this connection is automatically determined by other configuration, such as mTLS settings. However, if you need to override it, use the Helm value `protocolMapperAgents[*].streamServer.port`.
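An explicit override could look like this sketch (4222 is the conventional NATS client port, shown here for illustration only; verify the value against your setup):

{% code lineNumbers="true" %}

```yaml
global:
  protocolMapperAgents:
    - name: welder-robots
      streamServer:
        host: nats
        port: 4222 # illustrative only
```

{% endcode %}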

</details>

<details>

<summary><strong>2. Adding the CA Certificate to Your Agent</strong></summary>

You will need the CA certificate you want to add at hand. In this example, we assume that you are using the `cybus_combined_ca.crt`:

**1. Copy `cybus_combined_ca.crt` from Connectware:**

{% code lineNumbers="true" %}

```sh
kubectl cp -n [your-namespace] postgresql-0:/connectware_certs/cybus_combined_ca.crt cybus_combined_ca.crt
```

{% endcode %}

**2. Add `cybus_combined_ca.crt` to Agent Helm Values:**

Add the CA certificate to the Helm values of every agent in your `connectware` installation:

{% code lineNumbers="true" %}

```yaml
global:
  protocolMapperAgents:
    - name: welder-robots
      tls:
        ca:
          certChain: |
            -----BEGIN CERTIFICATE-----
            MIIFpTCCA40CFEQKP621lWyKwv/7bZGbYEoxrLGdMA0GCSqGSIb3DQEBCwUAMIGN
            [skipped lines]
            tTa2qvRLD2J9Eh1KXZ//8IhLc+lIjZsqoPTnhZ7QXZCGwLFdOTEL15mbrgmJOiz/
            lB0RUj8zolJa
            -----END CERTIFICATE-----
            -----BEGIN CERTIFICATE-----
            MIIGATCCA+mgAwIBAgIUCdqCz7EzCbalj4n7qbxZFxi3XdAwDQYJKoZIhvcNAQEL
            [skipped lines]
            ja2TMCBzQSaGyUoLs6Sm2hDD/Y5E6z56Dh7oKQPkoCWjc3+ShF4ilBO9cpyHY0dP
            CcN5u+A=
            -----END CERTIFICATE-----
    - name: bender-robots
      tls:
        ca:
          certChain: |
            -----BEGIN CERTIFICATE-----
            MIIFpTCCA40CFEQKP621lWyKwv/7bZGbYEoxrLGdMA0GCSqGSIb3DQEBCwUAMIGN
            [skipped lines]
            tTa2qvRLD2J9Eh1KXZ//8IhLc+lIjZsqoPTnhZ7QXZCGwLFdOTEL15mbrgmJOiz/
            lB0RUj8zolJa
            -----END CERTIFICATE-----
            -----BEGIN CERTIFICATE-----
            MIIGATCCA+mgAwIBAgIUCdqCz7EzCbalj4n7qbxZFxi3XdAwDQYJKoZIhvcNAQEL
            [skipped lines]
            ja2TMCBzQSaGyUoLs6Sm2hDD/Y5E6z56Dh7oKQPkoCWjc3+ShF4ilBO9cpyHY0dP
            CcN5u+A=
            -----END CERTIFICATE-----
```

{% endcode %}

Alternatively, you can add it using an existing Kubernetes ConfigMap:

{% code lineNumbers="true" %}

```sh
kubectl create -n [your-namespace] configmap my-connectware-ca --from-file cybus_combined_ca.crt
```

{% endcode %}

{% code lineNumbers="true" %}

```yaml
global:
  protocolMapperAgents:
    - name: welder-robots
      tls:
        ca:
          existingConfigMap: my-connectware-ca
    - name: bender-robots
      tls:
        ca:
          existingConfigMap: my-connectware-ca
```

{% endcode %}

</details>

<details>

<summary><strong>3. (Alternative) Disable TLS Certificate Validation</strong></summary>

As an alternative, you can disable TLS certificate validation for agents. This reduces the security of your TLS connections and allows man-in-the-middle attacks, but it may be acceptable for development instances or test installations.

{% hint style="info" %}
This is only possible for agents using the username/password authentication method. If you are using mTLS for your agents, you must set up certificates properly.
{% endhint %}

{% code lineNumbers="true" %}

```yaml
global:
  protocolMapperAgents:
    - name: welder-robots
      env:
        - name: CYBUS_TRUST_ALL_CERTS
          value: 'true'
    - name: bender-robots
      env:
        - name: CYBUS_TRUST_ALL_CERTS
          value: 'true'
```

{% endcode %}

</details>

<details>

<summary><strong>4. Run Helm Upgrade Again</strong></summary>

After choosing and configuring your method of choice, you can run `helm upgrade` on your `connectware` installation again.

* To upgrade Connectware, enter the following command:

{% code lineNumbers="true" %}

```sh
helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version 2.0.0 -f <values.yaml>
```

{% endcode %}

</details>

### 6. Updating Helm Values for the Connectware Agent Helm Chart

#### Why the Change?

{% hint style="info" %}
This guide explains how to update agents that use the `connectware-agent` Helm chart. If you are using agents via Docker, refer to the [Docker Guide](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-docker/upgrading-connectware-to-2-0-0-docker).
{% endhint %}

With Connectware 2.0.0, the default handling of certificate chain verification has changed. Previously, protocol-mapper agents required explicit configuration to validate peer certificate chains. Now, certificate chain verification is enabled and enforced by default. While you can revert to the old behavior using a configuration switch, we strongly recommend using a proper TLS certificate chain.

You must now provide the CA certificate signing Connectware's public server certificate `cybus_server.crt` to agents using the Helm value `protocolMapperAgentDefaults.tls.ca.certChain` (renamed from `protocolMapperAgentDefaults.mTLS.caChain.cert`).

Additionally, the `control-plane-broker` has been replaced with a new streaming-based control plane. Along with this change, the configuration values for both the control plane and the data plane have been redesigned. The new values are intended to be generic and resilient against future technology changes. As a result, several Helm values have been deprecated, renamed, or newly introduced.

#### What You Need to Do

<details>

<summary><strong>1. Updating the Helm Values</strong></summary>

**Obsolete Helm Values (Connectware-Agent Chart)**

Some [Helm values](https://docs.cybus.io/2-0-6/environment-variables#kubernetes) are obsolete and have been removed. Remove the following Helm values from your `values.yaml` file for the `connectware-agent` Helm chart:

* `protocolMapperAgentDefaults.controlPlaneBrokerEnabled`
* `protocolMapperAgents[*].controlPlaneBrokerEnabled`
* `protocolMapperAgentDefaults.controlPlane`
* `protocolMapperAgents[*].controlPlane`
* `protocolMapperAgentDefaults.rpcTimeout`
* `protocolMapperAgents[*].rpcTimeout`

**Changed Helm Values (Connectware-Agent Chart)**

The following Helm values have changed. If you had specific configuration for these in the past, update the Helm values accordingly.

For some Helm values, you need to take additional steps depending on your setup. The required steps are covered in the following sections.

| **Old Helm Value**                                           | **New Helm Value**                                     | **Required Change**                               |
| ------------------------------------------------------------ | ------------------------------------------------------ | ------------------------------------------------- |
| `protocolMapperAgentDefaults.mTLS.caChain.cert`              | `protocolMapperAgentDefaults.tls.ca.certChain`         | Move                                              |
| `protocolMapperAgentDefaults.mTLS.caChain.existingConfigMap` | `protocolMapperAgentDefaults.tls.ca.existingConfigMap` | Move                                              |
| `protocolMapperAgents[*].mqtt.tls`                           | `protocolMapperAgents[*].dataPlane.tls`                | Move                                              |
| `protocolMapperAgents[*].mqtt.dataHost`                      | `protocolMapperAgents[*].dataPlane.host`               | See **1.1 - Directly Targeting MQTT Broker**      |
| `protocolMapperAgents[*].mqtt.dataPort`                      | `protocolMapperAgents[*].dataPlane.port`               | See **1.1 - Directly Targeting MQTT Broker**      |
| `protocolMapperAgents[*].mqtt.controlHost`                   | `protocolMapperAgents[*].streamServer.host`            | See **1.2 - Directly Targeting Streaming Server** |
| `protocolMapperAgents[*].mqtt.controlPort`                   | `protocolMapperAgents[*].streamServer.port`            | See **1.2 - Directly Targeting Streaming Server** |

</details>

<details>

<summary><strong>1.1 - Directly Targeting MQTT Broker</strong></summary>

When deploying agents in the internal network of Connectware, they are able to directly connect to our MQTT broker instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

The main hindrance is that the hostname "broker", which the MQTT broker uses on the internal network, is not part of the `cybus_server.crt` by default. To connect agents to this hostname over TLS, you either need to add the hostname "broker" as a Subject Alternative Name (SAN) to the certificate, or set `CYBUS_TRUST_ALL_CERTS=true` for the agent.

**Adding the Hostname to the Default Certificate**

If you are using the built-in default certificate for Connectware, you can add the hostname "broker" through the Helm value `global.ingressDNSNames`:

{% code lineNumbers="true" %}

```yaml
global:
  ingressDNSNames:
    - broker
```

{% endcode %}

It is easiest to add this Helm value before running your upgrade to Connectware 2.0.0, since it is then activated automatically as part of the upgrade procedure.

If applying this configuration after already upgrading to Connectware 2.0.0, running `helm upgrade` on your Connectware installation will cause the `system-control-server` Deployment to restart. Once it is ready again, restart the `broker` StatefulSet:

{% code lineNumbers="true" %}

```sh
kubectl rollout restart statefulset broker -n [your-namespace]
```

{% endcode %}
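To see what a certificate with the "broker" SAN looks like, you can inspect it with `openssl`. The following self-contained sketch generates a throwaway certificate carrying that SAN and prints the extension; for a real check, run the `openssl x509` command against your `cybus_server.crt` instead (file paths here are illustrative):

```shell
# Illustration only: create a throwaway certificate carrying the "broker" SAN.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 \
  -subj "/CN=connectware" \
  -addext "subjectAltName=DNS:connectware,DNS:broker"

# Print the Subject Alternative Name extension; it should list DNS:broker.
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```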

**Configuring Your Agents to Target the MQTT Broker**

Next you need to configure your agents to target the MQTT broker directly by using the `protocolMapperAgentDefaults.dataPlane.host` Helm value:

{% code lineNumbers="true" %}

```yaml
protocolMapperAgentDefaults:
  dataPlane:
    host: broker
```

{% endcode %}

The TCP port used for this connection is automatically determined by other configuration, such as TLS and mTLS settings. However, if you need to override it, use the Helm value `protocolMapperAgentDefaults.dataPlane.port`.
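A combined override could look like this (the port value is purely illustrative; use the port your broker actually exposes):

```yaml
protocolMapperAgentDefaults:
  dataPlane:
    host: broker
    port: 8883 # illustrative value only
```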

</details>

<details>

<summary><strong>1.2 - Directly Targeting Streaming Server</strong></summary>

When deploying agents in the internal network of Connectware, they are able to directly connect to our streaming server control plane instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

**Configuring Your Agents to Target the Streaming Server**

Configure your agents to target the streaming server directly by using the `protocolMapperAgentDefaults.streamServer.host` Helm value. The default internal name is "nats".

{% code lineNumbers="true" %}

```yaml
protocolMapperAgentDefaults:
  streamServer:
    host: nats
```

{% endcode %}

The TCP port used for this connection is automatically determined by other configuration, such as mTLS settings. However, if you need to override it, use the Helm value `protocolMapperAgentDefaults.streamServer.port`.

</details>

<details>

<summary><strong>2. Adding the CA Certificate to Your Agent</strong></summary>

To connect a protocol-mapper agent with Connectware 2.0.0, you must either provide the agent with the valid CA certificate for the server certificate in use, or disable verification of TLS certificate validity by setting the environment variable `CYBUS_TRUST_ALL_CERTS` to `true` on the agent.

Depending on whether you connect an agent through Connectware's ingress or through the internal network, you may need to provide either `cybus_ca.crt` or `shared_yearly_ca.crt`. If you want to skip this complexity, there is a new file called `cybus_combined_ca.crt`, which includes both CA bundles, allowing internal and external connections.

The following examples use the method of configuring all agents inside one `connectware-agent` installation through the `protocolMapperAgentDefaults` Helm value context. However, you can also configure this using the `protocolMapperAgents` Helm value context as described in [Configuration Principles for the connectware-agent Helm Chart](https://docs.cybus.io/2-0-6/documentation/agents/agents-in-kubernetes/configuration-principle-for-the-connectware-agent-helm-chart).

You need to have the CA certificate that you want to add at hand. In this example, we assume that you are using the `cybus_combined_ca.crt`:

1. Copy `cybus_combined_ca.crt` from Connectware:

{% code lineNumbers="true" %}

```sh
kubectl cp -n [your-namespace] postgresql-0:/connectware_certs/cybus_combined_ca.crt cybus_combined_ca.crt
```

{% endcode %}

2. Add the CA certificate `cybus_combined_ca.crt` to the Helm values of your `connectware-agent` installation:

{% code lineNumbers="true" %}

```yaml
protocolMapperAgentDefaults:
  tls:
    ca:
      certChain: |
        -----BEGIN CERTIFICATE-----
        MIIFpTCCA40CFEQKP621lWyKwv/7bZGbYEoxrLGdMA0GCSqGSIb3DQEBCwUAMIGN
        [skipped lines]
        tTa2qvRLD2J9Eh1KXZ//8IhLc+lIjZsqoPTnhZ7QXZCGwLFdOTEL15mbrgmJOiz/
        lB0RUj8zolJa
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        MIIGATCCA+mgAwIBAgIUCdqCz7EzCbalj4n7qbxZFxi3XdAwDQYJKoZIhvcNAQEL
        [skipped lines]
        ja2TMCBzQSaGyUoLs6Sm2hDD/Y5E6z56Dh7oKQPkoCWjc3+ShF4ilBO9cpyHY0dP
        CcN5u+A=
        -----END CERTIFICATE-----
```

{% endcode %}

Alternatively, you can add it using an existing Kubernetes ConfigMap:

{% code lineNumbers="true" %}

```sh
kubectl create -n [your-namespace] configmap my-connectware-ca --from-file cybus_combined_ca.crt
```

{% endcode %}

{% code lineNumbers="true" %}

```yaml
protocolMapperAgentDefaults:
  tls:
    ca:
      existingConfigMap: my-connectware-ca
```

{% endcode %}

</details>

<details>

<summary><strong>3. (Alternative) Disable TLS Certificate Validation</strong></summary>

You can choose to disable TLS certificate validation for agents. This is **not recommended**, as it weakens security and makes your setup vulnerable to man-in-the-middle attacks. However, it may be acceptable in non-production environments such as development or testing.

{% hint style="info" %}
This option is only available for agents using username/password authentication. If your agents use mTLS, you must configure proper certificates instead.
{% endhint %}

{% code lineNumbers="true" %}

```yaml
protocolMapperAgentDefaults:
  env:
    - name: CYBUS_TRUST_ALL_CERTS
      value: 'true'
```

{% endcode %}

</details>

### 7. Upgrading Agents for the Connectware Agent Helm Chart

#### What You Need to Do

{% hint style="info" %}
This guide explains how to update agents which use the `connectware-agent` Helm chart. If you are using agents via Docker, follow the [Docker upgrade guide](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-docker/upgrading-connectware-to-2-0-0-docker) for this part.
{% endhint %}

* To upgrade your agents installed via the `connectware-agent` Helm chart, see [Upgrading the connectware-agent Helm Chart](https://docs.cybus.io/2-0-6/documentation/agents/agents-in-kubernetes/upgrading-the-connectware-agent-helm-chart).

### 8. Reinstalling Services

#### Why the Change?

With Connectware 2.0.0, your services and resources are no longer stored on the service-manager volume, but inside the PostgreSQL database.

#### What You Need to Do

<details>

<summary><strong>Reinstalling Your Services</strong></summary>

After completing the upgrade, you must reinstall all previously used services. You can do this using your preferred method:

* Via the Admin UI, see [Installing Services](https://docs.cybus.io/2-0-6/documentation/services/setting-up-and-configuring-services/installing-services).
* Automatically through a CI pipeline.

Additionally, there have been changes to the relationships between services. Understanding how these interdependencies behave at runtime is crucial for correct deployment and maintenance.

**Install parent services first (recommended)**: If the service depends on another service (parent/child relationship), install the parent service first. This ensures:

* Service relations are created during installation.
* Each service can be installed with `targetState=enabled`.

**Install child services first (alternative)**: It is possible to install the dependent (child) service first, but this comes with limitations:

* Service relations are only established when the service is enabled.
* The dependent (child) service can **only** be installed with `targetState=disabled`.

For more details, see [Service Dependency Behavior](https://docs.cybus.io/2-0-6/services/inter-service-referencing#service-dependency-behavior) and [targetState](https://docs.cybus.io/2-0-6/services/service-commissioning-files/resources/cybus-endpoint#targetstate).

</details>

## Feature-Specific Upgrade Steps

Follow these steps only if you use the related features, so that they continue working after the upgrade.

### 1. Permissions and Roles

#### Why the Change?

Permissions allow administrators to define who can access what resources and what actions they can perform. Each permission represents a specific access right to a resource.

Connectware 2.0.0 introduces new and updated permissions. Because of this, custom roles or specific permissions you have set up might not allow users to do everything they could before the 2.0.0 upgrade.

#### What You Need To Do

<details>

<summary><strong>Verifying Permissions</strong></summary>

* Check the permissions of your users. Compare them with the default roles in Connectware 2.0.0 and make any updates needed so your users can continue working without interruptions.

For more information on managing permissions, see [Permissions](https://docs.cybus.io/2-0-6/documentation/user-management/permissions).

</details>

### 2. Custom Connectors

#### Why the Change?

Connectware has evolved its architecture, removing dependencies like VRPC and improving protocol handling. To ensure compatibility, you must update your custom connector implementations.

#### What You Need To Do

If you are using [custom connectors](https://docs.cybus.io/2-0-6/documentation/industry-protocol-details/custom-connectors), follow these steps to make your custom connector compatible with Connectware 2.0.0.

<details>

<summary><strong>1. Remove VRPC</strong></summary>

VRPC is no longer supported in the custom connector environment.

* Remove all VRPC references in the custom connector code. This includes the import and any usage of `VrpcAdapter`:

**Example**

{% code lineNumbers="true" %}

```javascript
// const { VrpcAdapter } = require('vrpc') <- REMOVE THIS
const Connection = require('./FoobarConnection')
const Endpoint = require('./FoobarEndpoint')

// VrpcAdapter.register(Endpoint, { schema: Endpoint.getSchema() }) <- REMOVE THIS
// VrpcAdapter.register(Connection, { schema: Connection.getSchema() }) <- REMOVE THIS
```

{% endcode %}

</details>

<details>

<summary><strong>2. Follow the Directory Naming Conventions</strong></summary>

* When defining the Dockerfile, ensure that the destination path for the copied source files ends in a protocol-specific directory name written entirely in lowercase.

**Example**

{% code lineNumbers="true" %}

```dockerfile
# protocol directory must be lowercase
COPY ./src ./src/protocols/foobar
```

{% endcode %}

</details>

<details>

<summary><strong>3. Follow the Schema Naming Conventions</strong></summary>

* The schema `$id` must match the file name (without the `.json` extension).
* The schema `$id` must start with a capital letter, like `Foobar`.

**Example**

* In `FoobarConnection.json`, the schema must look like this:

{% code lineNumbers="true" %}

```json
{
  ...
  "$id": "FoobarConnection"
  ...
}
```

{% endcode %}

* In `FoobarEndpoint.json`, the schema must look like this:

{% code lineNumbers="true" %}

```json
{
  ...
  "$id": "FoobarEndpoint"
  ...
}
```

{% endcode %}

</details>

<details>

<summary><strong>4. Schema Versioning</strong></summary>

Schemas support versioning through the additional `version` property, which must be a positive integer greater than zero. If this property is omitted, the default value is `1`.

Versioning ensures that only the latest version of a schema is considered active and valid. Even though all custom connector instances should run the same schema versions, the latest version will overwrite any previous version in the Connectware control plane.

**Example**

* `FoobarConnection.json` supporting versioning.

{% code lineNumbers="true" %}

```json
{
  ...
  "$id": "FoobarConnection",
  "version": 3
  ...
}
```

{% endcode %}

</details>

<details>

<summary><strong>5. Follow the Source Directory Naming Conventions</strong></summary>

Follow the case-sensitive naming conventions based on the protocol name.

* File names must start with an uppercase protocol name (e.g., `Foobar`).
* Connection and endpoint suffixes are mandatory.
* JS files define classes.
* JSON files define schemas.

**Example**

{% code lineNumbers="true" %}

```
src/
├── FoobarConnection.js
├── FoobarConnection.json
├── FoobarEndpoint.js
└── FoobarEndpoint.json
```

{% endcode %}
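A quick way to spot files that violate these conventions is a negated pattern match (a sketch; adjust `src/` to your layout):

```shell
# List anything in src/ that does not match <Protocol>(Connection|Endpoint).(js|json);
# no output means all files conform.
ls src/ | grep -Ev '^[A-Z][A-Za-z0-9]*(Connection|Endpoint)\.(js|json)$' || true
```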

</details>

<details>

<summary><strong>6. Follow the Class Naming Conventions</strong></summary>

* The class name must match the file name, excluding the `.js` extension.
* The class name must start with a capital letter, such as `Foobar`.

**Example**

* In `FoobarConnection.js`, the class must be:

{% code lineNumbers="true" %}

```javascript
class FoobarConnection extends Connection { ... }
```

{% endcode %}

* In `FoobarEndpoint.js`, the class must be:

{% code lineNumbers="true" %}

```javascript
class FoobarEndpoint extends Endpoint { ... }
```

{% endcode %}

</details>

<details>

<summary><strong>7. Class Constructors</strong></summary>

Unless you need a specific constructor, there is no need to specify one because it is inherited from the parent class. However, if you need to implement a custom constructor for the `Connection` or `Endpoint` classes, preserve the following format:

* In `FoobarConnection.js`, the class constructor must be like:

{% code lineNumbers="true" %}

```javascript
class FoobarConnection extends Connection {
  constructor (params) {
    super(params)
    // custom code
  }
}
```

{% endcode %}

* In `FoobarEndpoint.js`, the class constructor must be like:

{% code lineNumbers="true" %}

```javascript
class FoobarEndpoint extends Endpoint {
  constructor (params, dataPlaneConnectionInstance, parentConnectionInstance) {
    super(params, dataPlaneConnectionInstance, parentConnectionInstance)
    // custom code
  }
}
```

{% endcode %}

</details>

<details>

<summary><strong>8. Do not Set the _topic Property Manually</strong></summary>

The `_topic` property is now handled automatically. Manually assigning it will cause errors.

The following code is invalid and must be removed since topics are now built internally.

{% code lineNumbers="true" %}

```javascript
// this is invalid, remove it
this._topic = 'this/is/a/topic'
```

{% endcode %}

</details>

<details>

<summary><strong>9. ES Modules Not Supported</strong></summary>

The standard JavaScript environment of custom connectors is based on CommonJS modules. ES modules are not supported.

</details>

<details>

<summary><strong>10. TypeScript Configuration</strong></summary>

TypeScript is not officially supported in development workflows. However, if you want to use TypeScript and compile it to JavaScript, make sure to configure your `tsconfig.json` file as follows:

{% code lineNumbers="true" %}

```json
{
  "compilerOptions": {
    ....
    "target": "es2022",    /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
    "lib": ["es6"],        /* Specify a set of bundled library declaration files that describe the target runtime environment. */
    "module": "commonjs",  /* Specify what module code is generated. */
    ....
  },
  "include": ["src/**/*.ts", "src/**/*.json", "src/**/*.js", "src/**/*.d.ts", "test/**/*"]
}
```

{% endcode %}

Additionally, the compiled JavaScript output must include an `exports.default` assignment and the exported class itself. This ensures interoperability with our CommonJS-based module system. The compiled `.js` file should result in:

{% code lineNumbers="true" %}

```javascript
class FoobarConnection { ... }
exports.default = FoobarConnection;
```

{% endcode %}

</details>

### 3. Systemstate Protocol

#### Why the Change?

To improve performance and reduce unnecessary messaging load, the Systemstate protocol no longer supports whole-service tracking or redundant status events. This simplifies agent responsibilities and avoids misleading lifecycle signals.

#### What You Need to Do

If you are using the [Systemstate protocol](https://docs.cybus.io/2-0-6/documentation/industry-protocol-details/systemstate), do the following:

<details>

<summary><strong>1. Stop Tracking Whole Services</strong></summary>

* Tracking the entire service object is no longer allowed. You must update your connector configuration to track individual resources only (like specific endpoints or connections).

**Example**

{% code lineNumbers="true" %}

```yaml
# Before (no longer supported)
serviceEndpoint:
  type: Cybus::Endpoint
  properties:
    protocol: Systemstate
    connection: !ref systemStateConnection
    subscribe:
      resourceId: !sub '${Cybus::ServiceId}'
```

{% endcode %}

</details>

<details>

<summary><strong>2. Update Event Handling Logic</strong></summary>

The following status events have been removed from Systemstate. If your implementation depends on them (e.g., for health monitoring or automation), you must refactor that logic:

* `subscribed`/`unsubscribed`
* `online`/`offline`

</details>

### 4. Log Monitoring

#### Why the Change?

With version 2.0.0, several log messages have been corrected to fix spelling mistakes. These changes may affect existing log monitoring configurations.

#### What You Need to Do

<details>

<summary><strong>Updating Your Log Monitoring</strong></summary>

If you rely on log monitoring, review whether your setup references any of the updated log messages and adjust accordingly.

| Type          | Log Level | Original (with typo)                                                                                | Corrected line                                                                                      |
| ------------- | --------- | --------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
| Log message   | info      | MS Entra Login was succesful, redirecting to                                                        | MS Entra Login was successful, redirecting to                                                       |
| Log message   | debug     | DELETE /:id/tokens sucess for user: '\<req.params.id>'                                              | DELETE /:id/tokens success for user: '\<req.params.id>'                                             |
| Error message |           | Views are found, the restore implenetation do not support views!                                    | Views are found, the restore implementation do not support views!                                   |
| Error message |           | query paramter error is not a valid HTTP error code (\<req.query.code>)                             | query parameter error is not a valid HTTP error code (\<req.query.code>)                            |
| Log message   | debug     | Cleared persistance of:                                                                             | Cleared persistence of:                                                                             |
| Error message | warn      | HttpNode is configured with method 'GET' but operation 'serverRecieves' (instead of serverProvides) | HttpNode is configured with method 'GET' but operation 'serverReceives' (instead of serverProvides) |
| Log message   |           | Error when trying to recieve OPC-UA Method details from nodeId : \<err.message>                     | Error when trying to receive OPC-UA Method details from nodeId : \<err.message>                     |
| Log message   | warn      | tried to pass the value as an INT64 and found no matching convertion                                | tried to pass the value as an INT64 and found no matching conversion                                |
| Log message   | warn      | tried to pass the value as an UINT64 and found no matching convertion                               | tried to pass the value as an UINT64 and found no matching conversion                               |
| Log message   | debug     | Sucessfully subscribed to topic: \<mqttOpts.topic>.                                                 | Successfully subscribed to topic: \<mqttOpts.topic>.                                                |
| Log message   | error     | error occured during shutting down the server                                                       | error occurred during shutting down the server                                                      |
| Log message   | error     | expected payload convertion to fail because given payload was not a JSON notation, but 'err == nil' | expected payload conversion to fail because given payload was not a JSON notation, but 'err == nil' |
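
If your monitoring rules match the old spellings literally, they stop matching after the upgrade. During a transition, a tolerant pattern can match both the old and the corrected spelling. For example (a `grep` sketch; the log line is illustrative):

```shell
# The pattern suc+es+ful matches both "succesful" (old typo) and "successful" (corrected).
echo "MS Entra Login was successful, redirecting to /app" \
  | grep -E 'Login was suc+es+ful'
```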

</details>

### 5. Heidenhain Agents (Windows)

#### Why the Change?

For Connectware 2.0.0, the Heidenhain protocol has been updated.

#### What You Need to Do

<details>

<summary><strong>Installing the Heidenhain Agent</strong></summary>

You must upgrade the Windows-based [Cybus Heidenhain Agent](https://docs.cybus.io/2-0-6/documentation/industry-protocol-details/heidenhain-dnc) to work with Connectware 2.0.0.

1. Uninstall the existing Heidenhain agent installation from your Windows system.
2. Install the updated Heidenhain agent. You can find the download link at [Heidenhain DNC](https://docs.cybus.io/2-0-6/documentation/industry-protocol-details/heidenhain-dnc).

</details>

### 6. Auto-Generated MQTT Topics of Resources

#### Why the Change?

With Connectware 2.0.0, auto-generated MQTT topics no longer include resource-specific properties. This makes the topic generation more unified and explicit. You must update any service commissioning file that hardcodes those old auto-generated topics.

**Example of old behavior**

Some auto-generated topics contained property-specific parts:

* S7: `services/myService/pressure/address:DB1,REAL6`
* Modbus: `services/myService/current/fc:3/address:7`
* HTTP: `services/myService/myEndpoint/get[object Object]`

These paths might have been referenced inside `Cybus::Mapping` resources. Using auto-generated topics inside `Cybus::Mapping` is **not** recommended. Instead, reference resources via `!ref` (Reference Method).

#### What You Need to Do

<details>

<summary><strong>Updating Auto-Generated Topic References</strong></summary>

Auto-generated topics no longer include resource-specific properties. They always follow:

{% code lineNumbers="true" %}

```bash
<Cybus::MqttRoot>/<serviceId>/<resourceName>
```

{% endcode %}

**Example of new behavior**

* S7: `services/myService/pressure`
* Modbus: `services/myService/current`
* HTTP: `services/myService/myEndpoint`

**Procedure**

1. Scan your service commissioning files for any usage of auto-generated topics.
2. Adapt those references by replacing direct topic strings with `!ref` references.
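
As a sketch of step 2, a mapping that previously hardcoded an auto-generated topic can reference the resource directly (all resource and topic names below are illustrative):

```yaml
pressureMapping:
  type: Cybus::Mapping
  properties:
    mappings:
      - subscribe:
          # instead of the old auto-generated topic string,
          # e.g. services/myService/pressure/address:DB1,REAL6
          endpoint: !ref pressure
        publish:
          topic: 'factory/line1/pressure'
```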

For more details, see [Reference Method (!ref)](https://docs.cybus.io/2-0-6/services/service-commissioning-files/parameters#reference-method-ref).

</details>

### 7. Auto-Generated MQTT Users

#### Why the Change?

Before 2.0.0, Connectware created a hidden MQTT user for every installed service. These auto-generated users were only used when the service commissioning file explicitly referenced the pseudo parameter [Cybus::MqttUser](https://docs.cybus.io/2-0-6/services/service-commissioning-files/parameters#cybus-mqttuser).

With Connectware 2.0.0, hidden users and groups are created only when the service commissioning file uses the `Cybus::MqttUser` pseudo parameter. This reduces unused accounts and makes credential usage explicit.

#### What You Need to Do

<details>

<summary><strong>Verify Your Service Commissioning Files</strong></summary>

If you are using auto-generated MQTT users outside of services (e.g., scripts, dashboards, or other non-commissioning references), migrate to explicit identities:

* Create dedicated users with the required roles/permissions. See [User Management](https://docs.cybus.io/2-0-6/documentation/user-management).
* Update your external systems to use the new explicit credentials.

</details>
