# Upgrading Connectware to 2.0.0 (Docker)

## Disclaimer

{% hint style="info" %}
When upgrading your Connectware instance, follow the upgrade path based on your current version.

For all other version upgrades that are not listed below, you can simply follow the [regular Connectware upgrade guide](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-docker).

* **If you are on version 1.4.1 or below**
  * Upgrade sequentially: **1.5.0 → 1.7.0 → 2.0.0 → 2.0.1 → 2.0.2 → 2.0.5**
* **If you are between version 1.5.0 and 1.6.2**
  * Upgrade sequentially: **1.7.0 → 2.0.0 → 2.0.1 → 2.0.2 → 2.0.5**
* **If you are on version 1.7.0 or newer (but below 2.0.0)**
  * Upgrade sequentially: **2.0.0 → 2.0.1 → 2.0.2 → 2.0.5**
* **If you are on version 2.0.0**
  * Upgrade sequentially: **2.0.1 → 2.0.2 → 2.0.5**
* **If you are on version 2.0.1**
  * Upgrade sequentially: **2.0.2 → 2.0.5**
* **If you are on version 2.0.2, 2.0.3, or 2.0.4**
  * Upgrade directly to **2.0.5**
* **If you are performing a clean or new installation**
  * No upgrade path required. You can install the **latest available version** directly.

**Detailed instructions on each upgrade step**

* [Upgrading Connectware to 2.0.5 (Docker)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-docker/upgrading-connectware-to-2-0-5-docker)
* [Upgrading Connectware to 2.0.2 (Docker)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-docker/upgrading-connectware-to-2-0-2-docker)
* [Upgrading Connectware to 2.0.1 (Docker)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-docker/upgrading-connectware-to-2-0-1-docker)
* [Upgrading Connectware to 2.0.0 (Docker)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-docker/upgrading-connectware-to-2-0-0-docker)
* [Upgrading Connectware to 1.7.0 (Docker)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-docker/upgrading-connectware-to-1-7-0-docker)
* [Upgrading Connectware from 1.x to 1.5.0 (Docker)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-docker/upgrading-connectware-from-1-x-to-1-5-0-docker)
  {% endhint %}

## Before You Begin

Upgrading to Connectware 2.0.0 introduces significant improvements in performance, scalability, and reliability. However, these changes also come with updated requirements for versions, networking, hardware, and storage.

This guide outlines the prerequisites and known limitations you must consider to ensure a smooth and successful upgrade.

{% hint style="warning" %}
Before starting the upgrade, read the entire guide. Some steps require developer work or preparation before the upgrade process begins.
{% endhint %}

{% hint style="warning" %}
Upgrading to Connectware 2.0.0 requires reinstalling all services. The main benefit of upgrading instead of performing a fresh installation is that it preserves the user database, including Multi-Factor Authentication. If you do not rely heavily on these features, a fresh installation may be the better option.

Even with a fresh installation, you will still need to follow this upgrade guide to update configuration parameters and adapt to the behavioral changes introduced in Connectware 2.0.0. However, you can skip the multi-step upgrade process itself.

If you are considering a fresh installation, we strongly recommend consulting Cybus Customer Support beforehand to confirm whether this is the right approach for your setup.
{% endhint %}

### Connectware Version Requirements

To be able to upgrade to Connectware 2.0.0, your Connectware version must be 1.7.0 or above.

If your Connectware installation is below 1.7.0, make sure that you have followed [Upgrading Connectware to 1.7.0 (Docker)](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-docker/upgrading-connectware-to-1-7-0-docker) before upgrading to 2.0.0.

### Network Requirements

#### Why the Change?

With Connectware 2.0.0, some internal components have been updated to improve communication and performance. As a result, the network configuration has changed:

* **Added**: TCP/4222 and TCP/4223
* **Removed**: TCP/1884 and TCP/8884

#### What You Need to Do

<details>

<summary><strong>Updating the Network Ports</strong></summary>

Verify that your firewalls and security rules are updated to allow the new ports (TCP/4222 and TCP/4223) and to remove dependencies on the deprecated ones (TCP/1884 and TCP/8884).

This ensures uninterrupted connectivity between your agents and Connectware.
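
To verify that the new ports are reachable from an agent host, you can use a small shell check like the following sketch. The helper name `check_port` is ours, not part of Connectware, and `[connectware-host]` is a placeholder for your Connectware hostname:

{% code lineNumbers="true" %}

```sh
# check_port: succeed if a TCP connection to the given host and port opens.
check_port() {
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# "[connectware-host]" is a placeholder for your Connectware hostname.
for port in 4222 4223; do
    if check_port "[connectware-host]" "$port"; then
        echo "TCP/$port reachable"
    else
        echo "TCP/$port NOT reachable - check firewall and security rules"
    fi
done
```

{% endcode %}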

</details>

### Hardware Requirements

#### Why the Change?

Connectware 2.0.0 makes increased use of its PostgreSQL database and adds new components, which requires additional computing power. When planning this upgrade, ensure that your infrastructure can accommodate the increased resource requirements.

#### What You Need to Do

<details>

<summary><strong>Updating the Hardware Setup</strong></summary>

We recommend adding the following resources to your hardware setup:

* **7** CPU cores
* **6** GB of memory
* **20** GiB of storage

However, these are general guidelines. Check what your specific system needs and make adjustments accordingly.

</details>

### Storage Requirements

#### Why the Change?

We have added two new components to Connectware:

* A streaming server called NATS
* A service called resource-status-tracking

Alongside other improvements, these additions enable Connectware to scale effectively for much larger deployments.

### Known Limitations

1. **Adding Certificates Through Admin UI Not Supported**

* You cannot add certificates to Connectware's CA bundle via the Admin UI.
* Instead, modify the `cybus_ca.crt` file directly on the `certs` volume.

2. **Backup via Admin UI Not Supported**

* The backup functionality through Admin UI is not supported.
* Instead, create backups of the database by running a `pg_dump` command on the postgresql container.
* When running the command, make sure that only a single Connectware instance is running. Otherwise, select the container manually.

{% code title="Example" lineNumbers="true" %}

```sh
docker exec \
    $(docker container ls -q -f "label=io.cybus.connectware=core" -f "label=com.docker.compose.service=postgresql") \
    bash -c "pg_dump -U cybus-admin --if-exists -c cybus_connectware" \
    > connectware_database.sql
```

{% endcode %}

## Upgrade Procedure

Follow this procedure to upgrade your Connectware installation to version 2.0.0. The steps are divided into two parts:

* **Mandatory Upgrade Steps**: Required for all installations to ensure a smooth and stable upgrade.
* **Feature-Specific Upgrade Steps**: Only needed if you use certain features, so they remain compatible with Connectware 2.0.0.

Expand the following sections for an overview of all upgrade steps.

<details>

<summary><strong>Mandatory Upgrade Steps</strong></summary>

These steps apply to every Connectware installation upgrading to Connectware 2.0.0. For a detailed guide, see [Mandatory Upgrade Steps](#mandatory-upgrade-steps).

{% hint style="info" %}
Depending on your setup, you may also need to perform additional feature-specific upgrade steps.
{% endhint %}

1. [**TLS Changes**](#id-1.-tls-changes): Default behavior on certificate validation has been adjusted.
2. [**Update .env Configuration**](#id-2.-update-.env-configuration): Remove any obsolete environment variables and update the configuration for changed or new environment variables as necessary.
3. [**Preparing the Connectware Upgrade**](#id-3.-preparing-the-connectware-upgrade).
4. [**Upgrading to Connectware 2.0.0**](#id-4.-upgrading-to-connectware-2.0.0): Download and install Connectware 2.0.0.
5. [**Updating Agent Configuration**](#id-6.-updating-agent-configuration): Update your agent configuration to comply with the updated configuration.
6. [**Upgrading Agents**](#id-7.-upgrading-agents): Upgrade your agents.
7. [**Reinstalling Services**](#id-8.-reinstalling-services): This upgrade changes where your services are stored. You will need to reinstall any services after the upgrade.

</details>

<details>

<summary><strong>Feature-Specific Upgrade Steps</strong></summary>

Only follow these if you use the related features, so they continue working after the upgrade.

1. [**Roles and Permissions**](#id-1.-permissions-and-roles): New permissions were added to Connectware. Check whether your custom roles require updates.
2. [**Custom Connectors**](#id-2.-custom-connectors): Update your custom connector configurations to meet new requirements.
3. [**Systemstate Protocol**](#id-3.-systemstate-protocol): Update your Systemstate protocol configurations to meet new requirements.
4. [**Log Monitoring**](#id-4.-log-monitoring): Some logging strings are changed. If you use log monitoring, you may need to update it.
5. [**Heidenhain Agents**](#id-5.-heidenhain-agents-windows): Upgrade your Heidenhain agents.
6. [**Auto-Generated MQTT Topics of Resources**](#id-6.-auto-generated-mqtt-topics-of-resources): Topic generation no longer includes resource-specific properties. Update your service commissioning files if you relied on old patterns.
7. [**Auto-Generated MQTT Users**](#id-7.-auto-generated-mqtt-users): The behavior of how MQTT users are auto-generated has changed. You may need to update your service commissioning file if you relied on auto-generated MQTT users.

</details>

## Mandatory Upgrade Steps

These steps are required to upgrade your Connectware installation to Connectware 2.0.0.

### 1. TLS Changes

#### Why the Change?

To enhance security by default, Connectware agents now verify TLS certificate chains automatically. This ensures that all components communicate over a valid trust chain, while still giving you the option to keep the old behavior by explicitly disabling TLS verification.

#### Key Changes

<details>

<summary><strong>1. Introducing the cybus_combined_ca.crt</strong></summary>

Connectware maintains two separate CA chains:

* External certificates validated by `cybus_ca.crt`.
* Internal certificates validated by `shared_yearly_ca.crt`.

Which CA an agent requires depends on the hostname through which it connects to Connectware, for example through the Connectware ingress, or directly to the Control Streaming Server (NATS) through the internal network.

To simplify configuration, we introduced `cybus_combined_ca.crt`, a bundle containing both chains, so agents can use a single file without needing to distinguish between internal and external CA certificates.

</details>

<details>

<summary><strong>2. Certificate Chain Verification in Agents</strong></summary>

Agents now enforce TLS chain validation by default. Each agent requires access to `cybus_combined_ca.crt`, available on the `certs` volume.

* To revert to the previous behavior (skipping verification), set the environment variable `CYBUS_TRUST_ALL_CERTS` to `true`. Note that it has been renamed from `TRUST_ALL_CERTS`.

</details>

<details>

<summary><strong>3. Configuring Certificate Hostnames</strong></summary>

The default Connectware-generated CA includes the hostnames `localhost` and `connectware`.

* To add more hostnames, configure a comma-separated list in the environment variable `CYBUS_INGRESS_DNS_NAMES`.
* You will also be prompted for these names as part of running the Connectware installer.

</details>

<details>

<summary><strong>4. Renewal of Certificate Chains</strong></summary>

With 2.0.0, the internal CA chain is replaced:

* Certificate Authority renamed from `CybusCA` to `CybusInternalCA`.
* The hostname `nats` is added as a Subject Alternative Name (SAN) to `shared_yearly_server.crt`.

The built-in default external CA certificate chain is also replaced.

* The hostname `connectware` is added as a SAN to `cybus_server.crt`.

If you rely on monitoring, custom setups, or modified certificates, adapt your configuration accordingly.
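
To confirm the new SANs after the upgrade, you can inspect the certificates on the `certs` volume with openssl. A minimal sketch; the helper name `cert_sans` is ours, not part of Connectware:

{% code lineNumbers="true" %}

```sh
# cert_sans: print the Subject Alternative Names of a certificate file.
cert_sans() {
    openssl x509 -in "$1" -noout -ext subjectAltName
}

# Usage on the certs volume:
# cert_sans shared_yearly_server.crt   # should now list DNS:nats
# cert_sans cybus_server.crt           # should now list DNS:connectware
```

{% endcode %}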

</details>

<details>

<summary><strong>5. Replacing CA Certificate Chain</strong></summary>

To replace Connectware’s default external chain with your enterprise-managed CA:

* Replace `cybus_ca.crt` with your enterprise CA certificate.
* Ensure `cybus_server.crt` and `cybus_server.key` form a valid key pair, signed by the CA in `cybus_ca.crt`.

Do not replace the internal CA (`shared_yearly_ca.crt`).

After replacement:

1. Restart the `system-control-server` deployment to rebuild and synchronize the combined CA bundle (`cybus_combined_ca.crt`). Ensure that only a single Connectware instance is running.

{% code lineNumbers="true" %}

```sh
docker restart $(docker container ls -f "label=io.cybus.connectware=core" -f "label=com.docker.compose.service=system-control-server" -q)
```

{% endcode %}

2. Restart all Connectware services.

</details>

### 2. Update .env Configuration

#### Why the Change?

Some changes to Connectware require updating your environment variable configuration. Adapt your `.env` file accordingly.

* All parameters to tune the inter-service communication have been removed.

#### Obsolete Environment Variables

<details>

<summary><strong>Removing Obsolete Environment Variables</strong></summary>

Some [environment variables](https://docs.cybus.io/2-0-6/environment-variables#docker-compose) are obsolete and have been removed. Remove the following environment variables from your Connectware `.env` file:

* `CYBUS_CM_RPC_TIMEOUT`
* `CYBUS_ADMIN_WEB_APP_VRPC_TIMEOUT`
* `CYBUS_PM_RPC_TIMEOUT`
* `CYBUS_SM_RPC_TIMEOUT`
* `CYBUS_SCS_RPC_TIMEOUT`
* `CYBUS_USE_SERVICES_GRAPH`
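
If you prefer to script the cleanup, the following sketch removes these entries from an env file. The helper name `remove_obsolete_vars` is ours, and GNU sed is assumed:

{% code lineNumbers="true" %}

```sh
# remove_obsolete_vars: delete the obsolete variables from the given env file.
remove_obsolete_vars() {
    for var in CYBUS_CM_RPC_TIMEOUT CYBUS_ADMIN_WEB_APP_VRPC_TIMEOUT \
               CYBUS_PM_RPC_TIMEOUT CYBUS_SM_RPC_TIMEOUT \
               CYBUS_SCS_RPC_TIMEOUT CYBUS_USE_SERVICES_GRAPH; do
        sed -i "/^${var}=/d" "$1"
    done
}

# Usage: keep a copy first, then clean the file in place.
# cp .env .env.bak && remove_obsolete_vars .env
```

{% endcode %}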

</details>

#### New Environment Variables

<details>

<summary><strong>1. Ingress DNS Name Configuration</strong></summary>

With the changed TLS behavior in Connectware, it is now essential to add the DNS names under which Connectware is addressed, for example by agents.

If you are replacing Connectware's default PKI, you have likely managed this yourself already by providing a valid `cybus_server.crt` containing all Subject Alternative Names (SANs) used within your setup.

If you are using Connectware's default PKI, you can use the new environment variable `CYBUS_INGRESS_DNS_NAMES`, a comma-separated list of names that will be added to the default `cybus_server.crt`.

**Hostname Formats**

You can include multiple hostnames in the list. The certificate will include all specified names in its SAN section.

The configuration accepts various hostname formats:

* Wildcards (e.g., `*.company.io`)
* Subdomains (e.g., `connectware.company.io`)
* Custom hostnames (e.g., `localhost`)
* IP addresses (e.g., `192.168.100.42`)

**Example**

{% code lineNumbers="true" %}

```ini
CYBUS_INGRESS_DNS_NAMES=connectware.company.io,*.company.io,192.168.100.42
```

{% endcode %}

The Connectware installer will also ask you for this value.

</details>

### 3. Preparing the Connectware Upgrade

#### Why the Change?

Connectware 2.0.0 introduces architectural improvements that require you to remove or adjust certain resources before running the upgrade. This ensures a clean and successful upgrade process.

#### What You Need to Do

<details>

<summary><strong>1. Backing Up Your PostgreSQL Database</strong></summary>

Connectware 2.0.0 uses a new major version of PostgreSQL. You need to delete your `postgresql` volume before upgrading Connectware (this is covered later in this upgrade guide). This requires you to create a backup of your database and restore it after the upgrade.

{% hint style="warning" %}
Any modifications made to Connectware after the following database backup will be lost after the Connectware 2.0.0 upgrade. We recommend creating the backup right before upgrading to Connectware 2.0.0.
{% endhint %}

1. To create a backup of your database, first identify the correct container from the NAMES column:

{% code lineNumbers="true" %}

```sh
docker container ls -f "label=io.cybus.connectware=core" -f "label=com.docker.compose.service=postgresql"
```

{% endcode %}

If more than one container is shown, you need to identify the correct container. The prefix of the container is usually the folder name in which your Docker Composition is stored. For example, if you installed it in `/opt/connectware`, the name of the container would be `connectware-postgresql-1`.

2. Create a backup of the database in the file `connectware_database.sql`, replacing \[container-name] with the name of the container identified in step 1.

{% code lineNumbers="true" %}

```sh
docker exec \
    [container-name-from-step-1] \
    bash -c "pg_dump -U cybus-admin --if-exists -c cybus_connectware" \
    > connectware_database.sql
```

{% endcode %}

3. Make sure the backup is successful, then store the database file in a secure location.
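
A quick way to sanity-check the dump file before storing it, as a sketch; the helper name `verify_dump` is ours and assumes a plain-text `pg_dump` output:

{% code lineNumbers="true" %}

```sh
# verify_dump: basic sanity check for a plain-text pg_dump backup file
# (non-empty and starting with the usual pg_dump header).
verify_dump() {
    test -s "$1" && head -n 5 "$1" | grep -q "PostgreSQL database dump"
}

# Usage:
# verify_dump connectware_database.sql && echo "backup looks valid"
```

{% endcode %}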

</details>

<details>

<summary><strong>2. Service-Manager Volume Backup &#x26; Removal</strong></summary>

The `service-manager` volume is deprecated and will not be used after the upgrade to 2.0.0.

After upgrading, you can remove this Docker volume. You can identify the volume using this command:

{% code lineNumbers="true" %}

```sh
docker volume ls -f "label=com.docker.compose.volume=service-manager"
```

{% endcode %}

In the future, the former contents of this volume will be stored in the PostgreSQL database, but they will not be migrated automatically. You must reinstall any services that you previously used. See [Reinstalling Services](#id-8.-reinstalling-services).

If you do not have your services stored outside of Connectware, make sure to export your services, or create a backup of your `service-manager` volume before upgrading.

</details>

### 4. Upgrading to Connectware 2.0.0

{% hint style="warning" %}
Make sure all prior steps are completed before proceeding with the upgrade.
{% endhint %}

<details>

<summary><strong>1. Reviewing the Connectware Changelog</strong></summary>

Before upgrading to Connectware 2.0.0, review the [changelog](https://docs.cybus.io/2-0-6/reference/changelog#what-has-changed-in-200) to familiarize yourself with new features, bug fixes, and other changes introduced in Connectware 2.0.0.

</details>

<details>

<summary><strong>2. Verifying your Backups</strong></summary>

Make sure that you store backups of your setup. This allows you to restore a previous state if necessary.

Your backups must consist of the following files:

* All Docker volumes
* Your Connectware database
* Your .env file
* All service commissioning files

Depending on your local infrastructure, it may be necessary to back up additional files.

</details>

<details>

<summary><strong>3. Shutting Down Protocol-Mapper Agents</strong></summary>

Before running the upgrade, you must stop all connected agents. Any agents that remain active during the upgrade will have to go through the registration process again.

* **Docker Run**: To stop agents that were started using `docker run`, use the `docker stop` command. If you are not sure which names these containers use, run the `docker ps` command to find out.
* **Docker Compose**: If your agents are running in Docker Compose, use the `docker compose down` command to stop them.
* **Agent Helm Chart**: You can shut down agents that have been installed via the `connectware-agent` Helm chart using this command:

{% code lineNumbers="true" %}

```sh
kubectl get -n [your-namespace] sts -lapp.kubernetes.io/component=protocol-mapper-agent -o name | xargs -I % kubectl scale -n [your-namespace] % --replicas 0
```

{% endcode %}

</details>

<details>

<summary><strong>4. Shutting Down Connectware</strong></summary>

* Before running the installer, you must shut down Connectware.

1. Change to the directory in which you installed Connectware, where your `docker-compose.yaml` and `.env` files are located. This is likely `/opt/connectware`.
2. Shut down Connectware:

{% code lineNumbers="true" %}

```sh
docker compose down
```

{% endcode %}

</details>

<details>

<summary><strong>5. Removing the PostgreSQL Volume</strong></summary>

* Before running the installer, you must remove the `postgresql` volume.

1. Identify the correct volume to delete:

{% code lineNumbers="true" %}

```sh
docker volume ls -f "label=com.docker.compose.volume=postgresql"
```

{% endcode %}

If this shows more than one volume, you must identify the correct volume. The prefix of this volume is usually the folder name in which your Docker Composition is stored. For example, if you installed in `/opt/connectware`, the name of this volume would be `connectware_postgresql`.

2. Delete the postgresql volume:

{% code lineNumbers="true" %}

```sh
docker volume rm [volume-identified-in-step-1]
```

{% endcode %}

</details>

<details>

<summary><strong>6. Initial Connectware Upgrade</strong></summary>

* To upgrade Connectware to a newer version, follow the steps in the [Prepare Installer Script](https://docs.cybus.io/2-0-6/documentation/installing-connectware/installing-connectware-docker#preparing-the-installer-script) to get the latest installer script. When running the update, select your current Connectware installation directory.

The update will automatically preserve your existing configuration, including your license key and network settings. If you are prompted to enter a license key during the update, this usually means that you have selected the wrong installation directory. In this case, cancel the update and verify you have chosen the correct directory.

{% hint style="info" %}
If you originally installed Connectware with sudo privileges, make sure that you use `sudo` again when running the update.
{% endhint %}

{% hint style="info" %}
Downgrading to previous Connectware versions is not supported.
{% endhint %}

### Upgrading Connectware in Silent Mode

The installer supports an automated deployment mode that requires no manual intervention. You can activate it by passing `-s` or `--silent` together with `-d` (directory) when [running the installation script](#running-the-installer-script).

If you need to customize your installation, the script offers several configuration options. Run the installer with `--help` to view all available parameters.

**Example**

{% code lineNumbers="true" %}

```sh
./connectware-online-installer.sh -s -d <PATH/TO/YOUR/CONNECTWARE/FOLDER>
```

{% endcode %}

The installer will tell you that you are ready to run `docker compose up` now. However, before you do this, there are some additional steps that need to be completed.

</details>

<details>

<summary><strong>7. Running Database Restore &#x26; Starting Connectware</strong></summary>

1. Create a file called `docker-compose.override.yaml` and add the following content:

{% code lineNumbers="true" %}

```yaml
services:
  admin-web-app:
    profiles:
      - do-not-start
  auth-server:
    profiles:
      - do-not-start
  broker:
    profiles:
      - do-not-start
  connectware:
    profiles:
      - do-not-start
  container-manager:
    profiles:
      - do-not-start
  doc-server:
    profiles:
      - do-not-start
  ingress-controller:
    profiles:
      - do-not-start
  protocol-mapper:
    profiles:
      - do-not-start
  service-manager:
    profiles:
      - do-not-start
  workbench:
    profiles:
      - do-not-start
  resource-status-tracking:
    profiles:
      - do-not-start
  nats:
    profiles:
      - do-not-start
```

{% endcode %}

{% hint style="info" %}
If you are already using a file called `docker-compose.override.yaml`, make sure to temporarily rename this file for the upgrade.
{% endhint %}

2. Start the Docker Composition:

{% code lineNumbers="true" %}

```sh
docker compose up -d
```

{% endcode %}

3. Once the postgresql container has started, restore the database (reuse the container name previously identified when backing up the database):

{% code lineNumbers="true" %}

```sh
cat connectware_database.sql | docker exec -i \
    [container-name] \
    bash -c "psql -U cybus-admin -d cybus_connectware"
```

{% endcode %}

4. Stop the Docker Composition:

{% code lineNumbers="true" %}

```sh
docker compose down
```

{% endcode %}

5. Remove `docker-compose.override.yaml`:

{% code lineNumbers="true" %}

```sh
rm docker-compose.override.yaml
```

{% endcode %}

{% hint style="info" %}
If you were already using a file `docker-compose.override.yaml`, make sure to restore it now.
{% endhint %}

6. Start Connectware 2.0.0:

{% code lineNumbers="true" %}

```sh
docker compose up -d
```

{% endcode %}

7. Restart Connectware 2.0.0:

Finally, restart Connectware once more to ensure any certificate updates are properly applied:

{% code lineNumbers="true" %}

```sh
docker compose down && docker compose up -d
```

{% endcode %}

</details>

### 5. Updating Agent Configuration

#### Why the Change?

{% hint style="info" %}
This guide explains how to update agents that use Docker. If you are using agents via the `connectware-agent` Helm chart, refer to the [Kubernetes Guide](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-kubernetes/upgrading-connectware-to-2-0-0-kubernetes).
{% endhint %}

With Connectware 2.0.0, the default handling of certificate chain verification has changed. Previously, protocol-mapper agents required explicit configuration to validate peer certificate chains. Now, certificate chain verification is enabled and enforced by default. While you can revert to the old behavior using a configuration switch, we strongly recommend using a proper TLS certificate chain.

You must now provide agents with the CA certificate that signs Connectware's public server certificate, `cybus_server.crt`.

Additionally, the `control-plane-broker` has been replaced with a new streaming-based control plane. Along with this change, the configuration values for both the control plane and the data plane have been redesigned. The new values are intended to be generic and resilient against future technology changes. As a result, several environment variables have been deprecated, renamed, or newly introduced.

#### What You Need to Do

Because Connectware agents are single containers, they can be orchestrated in many ways, more than this upgrade guide can cover. We provide examples for Docker Compose orchestration and trust that you can adapt these to the orchestrator of your choice. Contact Cybus Support for additional assistance.

<details>

<summary><strong>1. Adding the CA Certificate to Your Agent</strong></summary>

To connect a protocol-mapper agent with Connectware 2.0.0, you must either provide the agent with the valid CA certificate for the server certificate in use, or disable verification of TLS certificate validity by setting the environment variable `CYBUS_TRUST_ALL_CERTS` to `true` on the agent.

Whether you connect an agent via Connectware's ingress or the internal network determines whether you need to provide `cybus_ca.crt` or `shared_yearly_ca.crt`. To avoid this complexity, there is a new file called `cybus_combined_ca.crt` that includes both CA bundles and allows both internal and external connections.

You need to have the CA certificate that you want to add at hand. In this example, we assume that you are using the `cybus_combined_ca.crt`:

1. Identify the correct container from the NAMES column:

{% code lineNumbers="true" %}

```sh
docker container ls -f "label=io.cybus.connectware=core" -f "label=com.docker.compose.service=system-control-server"
```

{% endcode %}

2. Copy `cybus_combined_ca.crt` from Connectware:

{% code lineNumbers="true" %}

```sh
docker cp [container-name-from-step-1]:/connectware_certs/cybus_combined_ca.crt cybus_combined_ca.crt
```

{% endcode %}

3. Copy the CA certificate `cybus_combined_ca.crt` to the directory which contains the `docker-compose.yaml` file for your agent:

**Example using /opt/connectware-agent/ as directory:**

{% code lineNumbers="true" %}

```sh
cp cybus_combined_ca.crt /opt/connectware-agent/
```

{% endcode %}

4. Mount the CA certificate `cybus_combined_ca.crt` to the `/connectware/certs/ca/ca-chain.pem` mount point of your agent by adding a volume in `docker-compose.yaml`:

**Example using /opt/connectware-agent/ as directory:**

{% code lineNumbers="true" %}

```yaml
services:
  protocol-mapper-agent:
    image: registry.cybus.io/cybus/protocol-mapper:2.0.0
    environment:
      CYBUS_AGENT_MODE: distributed
      CYBUS_AGENT_NAME: my-docker-compose-agent
      CYBUS_HOSTNAME_INGRESS: localhost
    volumes:
      - protocol-mapper-agent:/data
      - ./cybus_combined_ca.crt:/connectware/certs/ca/ca-chain.pem # Mount your cybus_combined_ca to the agent
    restart: unless-stopped
    network_mode: host
    hostname: my-docker-compose-agent
volumes:
  protocol-mapper-agent:
```

{% endcode %}

</details>

<details>

<summary><strong>2. (Alternative) Disable TLS Certificate Validation</strong></summary>

You can choose to disable TLS certificate validation for agents. This is **not recommended**, as it weakens security and makes your setup vulnerable to man-in-the-middle attacks. However, it may be acceptable in non-production environments such as development or testing.

{% hint style="info" %}
This option is only available for agents using username/password authentication. If your agents use mTLS, you must configure proper certificates instead.
{% endhint %}

{% code lineNumbers="true" %}

```yaml
services:
  protocol-mapper-agent:
    image: registry.cybus.io/cybus/protocol-mapper:2.0.0
    environment:
      CYBUS_AGENT_MODE: distributed
      CYBUS_AGENT_NAME: my-docker-compose-agent
      CYBUS_HOSTNAME_INGRESS: localhost
      CYBUS_TRUST_ALL_CERTS: true # disable certificate validation
    volumes:
      - protocol-mapper-agent:/data
    restart: unless-stopped
    network_mode: host
    hostname: my-docker-compose-agent
volumes:
  protocol-mapper-agent:
```

{% endcode %}

</details>

<details>

<summary><strong>3. Updating Environment Variables</strong></summary>

**Obsolete Environment Variables (Agents)**

Some [Environment Variables](https://docs.cybus.io/2-0-6/environment-variables#docker-compose) are obsolete and have been removed. Remove the following environment variables from your agent orchestration:

* `CYBUS_PM_RPC_TIMEOUT`
* `CYBUS_CONTROLPLANE_URI`

**New or Changed Environment Variables (Agents)**

The following environment variables have changed. If you had specific configuration for these in the past, update your orchestration accordingly.

For some environment variables, you need to take additional steps depending on your setup. The required steps are covered in the following sections.

| **Old Environment Variable** | **New Environment Variable**                             | **Required Change**                                                                            |
| ---------------------------- | -------------------------------------------------------- | ---------------------------------------------------------------------------------------------- |
| -                            | `CYBUS_DATAPLANE_USE_TLS`                                | Set to `true` if you want your agents to use TLS encryption for the MQTT data plane            |
| `USE_MUTUAL_TLS`             | `CYBUS_USE_MUTUAL_TLS`                                   | Set to `true` if you want your agents to use mTLS authentication for the MQTT data plane       |
| `TRUST_ALL_CERTS`            | `CYBUS_TRUST_ALL_CERTS`                                  | Set to `true` if you want your agents to skip TLS certificate validation.                      |
| `CYBUS_DATA_MQTT_HOST`       | `CYBUS_DATAPLANE_HOST`                                   | See **3.2 - Directly Targeting MQTT Broker**                                                   |
| `CYBUS_DATA_MQTT_PORT`       | `CYBUS_DATAPLANE_PORT`                                   | See **3.2 - Directly Targeting MQTT Broker**                                                   |
| `CYBUS_MQTT_HOST`            | `CYBUS_STREAMSERVER_HOST`                                | See **3.3 - Directly Targeting Streaming Server**                                              |
| `CYBUS_MQTT_PORT`            | `CYBUS_STREAMSERVER_PORT`                                | See **3.3 - Directly Targeting Streaming Server**                                              |
| `CYBUS_MQTT_SCHEME`          | `CYBUS_DATAPLANE_SCHEME` and `CYBUS_STREAMSERVER_SCHEME` | See **3.2 - Directly Targeting MQTT Broker** and **3.3 - Directly Targeting Streaming Server** |

</details>

<details>

<summary><strong>3.1 - Updating Connectware Ingress Targeting</strong></summary>

Connectware 2.0.0 changes how you address your Connectware instance with agents.

Previously, the environment variable `CYBUS_MQTT_HOST` was used. Later, `CYBUS_HOSTNAME_INGRESS` was introduced for targeting the ingress, while `CYBUS_MQTT_HOST` was used for targeting the `control-plane-broker`. Additionally, `CYBUS_DATA_MQTT_HOST` was introduced to control the MQTT broker that the agent connected to as the data plane. `CYBUS_MQTT_HOST` acted as a fallback for all three environment variables.

With the removal of the `control-plane-broker`, we are simplifying and decoupling the configuration:

* If you are only using the Connectware ingress for your agents, you must only configure `CYBUS_HOSTNAME_INGRESS`.
* If you have a more complex setup that targeted the MQTT data plane broker or the `control-plane-broker`, use `CYBUS_DATAPLANE_HOST` and `CYBUS_STREAMSERVER_HOST` to refine your configuration, as explained in the next steps.

In short, make sure that you target your Connectware instance using the `CYBUS_HOSTNAME_INGRESS` environment variable. Replace any legacy `CYBUS_MQTT_HOST` configuration that was not intended to directly target the MQTT data plane broker.

{% code lineNumbers="true" %}

```yaml
services:
  protocol-mapper-agent:
    image: registry.cybus.io/cybus/protocol-mapper:2.0.0
    environment:
      CYBUS_AGENT_MODE: distributed
      CYBUS_AGENT_NAME: my-docker-compose-agent
      CYBUS_HOSTNAME_INGRESS: localhost # make sure you use CYBUS_HOSTNAME_INGRESS as the general Connectware target for your agent
      CYBUS_DATAPLANE_HOST: broker # use new environment variables for more complex network setups
    volumes:
      - protocol-mapper-agent:/data
      - ./cybus_combined_ca.crt:/connectware/certs/ca/ca-chain.pem
    restart: unless-stopped
    hostname: my-docker-compose-agent
    network_mode: host
volumes:
  protocol-mapper-agent:
```

{% endcode %}

</details>

<details>

<summary><strong>3.2 - Directly Targeting MQTT Broker</strong></summary>

Agents deployed in the internal network of Connectware can connect directly to the MQTT broker instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

The major hindrance is that the name "broker", which is used by our MQTT broker on the internal network, is not part of the `cybus_server.crt` by default. To connect agents to this hostname over TLS, you either need to add the hostname "broker" as a Subject Alternative Name (SAN) to the certificate, or set `CYBUS_TRUST_ALL_CERTS=true` for the agent. The previous steps explained how to add the CA certificate bundle file and how to set the environment variable.
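To check whether "broker" is already covered, you can inspect the SANs of the deployed server certificate with OpenSSL (the certificate path is an assumption; adjust it to the location in your installation):

```sh
# Print the Subject Alternative Names of the Connectware server certificate
openssl x509 -in cybus_server.crt -noout -ext subjectAltName
```

If "broker" does not appear in the output, follow the steps below to add it.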

**Adding the Hostname to the Default Certificate**

If you are using the built-in default certificate for Connectware, you can add the hostname "broker" through the environment variable `CYBUS_INGRESS_DNS_NAMES` in the `.env` file of your Connectware installation:

{% code lineNumbers="true" %}

```ini
CYBUS_INGRESS_DNS_NAMES=connectware.company.io,broker
```

{% endcode %}

It is easiest to add this configuration during your upgrade to Connectware 2.0.0, since the upgrade procedure activates it automatically. The installer script asks you for the ingress hostnames. You can also set this with the `--ingress-dns-names` parameter of the Connectware installer.

If you apply this configuration after already upgrading to Connectware 2.0.0, running `docker compose up -d` on your Connectware installation will cause multiple containers to restart. Once they are ready, restart Connectware again:

{% code lineNumbers="true" %}

```sh
docker compose down
docker compose up -d
```

{% endcode %}

**Configuring Your Agents to Target the MQTT Broker**

Next, you must configure your agents to target the MQTT broker directly by using the `CYBUS_DATAPLANE_*` environment variables. To use TLS encryption for this connection, you must set `CYBUS_DATAPLANE_USE_TLS` to `true` and provide the agent with the CA certificate bundle, as explained previously.

**Docker Compose Example**:

To attach a Docker Compose project to an existing network, you must add the network as an external network. For this, you need to know the name of the network.

This example assumes the name is `connectware_cybus`; however, you can find it using this command (NAME column):

{% code lineNumbers="true" %}

```sh
docker network ls -f "label=com.docker.compose.network=cybus"
```

{% endcode %}

{% code lineNumbers="true" %}

```yaml
services:
  protocol-mapper-agent:
    image: registry.cybus.io/cybus/protocol-mapper:2.0.0
    environment:
      CYBUS_AGENT_MODE: distributed
      CYBUS_AGENT_NAME: my-docker-compose-agent
      CYBUS_HOSTNAME_INGRESS: connectware
      CYBUS_DATAPLANE_HOST: broker
      CYBUS_DATAPLANE_USE_TLS: "true" # quote the value; Compose environment entries must be strings
    volumes:
      - protocol-mapper-agent:/data
      - ./cybus_combined_ca.crt:/connectware/certs/ca/ca-chain.pem
    restart: unless-stopped
    hostname: my-docker-compose-agent
    networks:
      - connectware_cybus # name from previous step
    extra_hosts:
      - 'connectware:host-gateway' # this extra host ensures the agent is still able to make HTTP API calls
volumes:
  protocol-mapper-agent:
networks:
  connectware_cybus: # name from previous step
    external: true
```

{% endcode %}

The TCP port used for this connection is automatically determined by other configuration, such as the TLS and mTLS settings. However, if you need to override it, use the environment variable `CYBUS_DATAPLANE_PORT`.
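If you do need an override, it is a single additional environment variable on the agent. A minimal sketch (the port value is a placeholder, not a documented default):

```yaml
    environment:
      CYBUS_DATAPLANE_HOST: broker
      CYBUS_DATAPLANE_PORT: "8883" # placeholder value; set only if the automatic port does not fit your setup
```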

</details>

<details>

<summary><strong>3.3 - Directly Targeting Streaming Server</strong></summary>

Agents deployed in the internal network of Connectware can connect directly to the streaming server control plane instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

**Configuring Your Agents to Target the Streaming Server**

Configure your agents to target the streaming server directly by using the `CYBUS_STREAMSERVER_HOST` environment variable. The default internal name is "nats".

**Docker Compose Example**:

To attach a Docker Compose project to an existing network, you must add the network as an external network. For this, you need to know the name of the network.

This example assumes the name is `connectware_cybus`; however, you can find it using this command (NAME column):

{% code lineNumbers="true" %}

```sh
docker network ls -f "label=com.docker.compose.network=cybus"
```

{% endcode %}

{% code lineNumbers="true" %}

```yaml
services:
  protocol-mapper-agent:
    image: registry.cybus.io/cybus/protocol-mapper:2.0.0
    environment:
      CYBUS_AGENT_MODE: distributed
      CYBUS_AGENT_NAME: my-docker-compose-agent
      CYBUS_HOSTNAME_INGRESS: connectware
      CYBUS_STREAMSERVER_HOST: nats
    volumes:
      - protocol-mapper-agent:/data
      - ./cybus_combined_ca.crt:/connectware/certs/ca/ca-chain.pem
    restart: unless-stopped
    hostname: my-docker-compose-agent
    networks:
      - connectware_cybus # name from previous step
    extra_hosts:
      - 'connectware:host-gateway' # this extra host ensures the agent is still able to make HTTP API calls
volumes:
  protocol-mapper-agent:
networks:
  connectware_cybus: # name from previous step
    external: true
```

{% endcode %}

The TCP port used for this connection is automatically determined by other configuration, such as the mTLS settings. However, if you need to override it, use the environment variable `CYBUS_STREAMSERVER_PORT`.

{% hint style="info" %}
If you are not using the `cybus_combined_ca.crt` for your agents, targeting the streaming server requires you to add the `shared_yearly_ca.crt`, not the `cybus_ca.crt`.
{% endhint %}

</details>

### 6. Upgrading Agents

#### What You Need to Do

{% hint style="info" %}
This guide explains how to update agents which use Docker Compose. If you are using agents via the `connectware-agent` Helm chart, follow the [Kubernetes upgrade guide](https://docs.cybus.io/2-0-6/documentation/installation-and-upgrades/upgrading-connectware/upgrading-connectware-kubernetes/upgrading-connectware-to-2-0-0-kubernetes) for this part.
{% endhint %}

1. Ensure that you followed the previous step to prepare the agents by adjusting their configuration to the changes made with Connectware 2.0.0.
2. Enter the directory in which the `docker-compose.yaml` file for your agents is stored.
3. In the agents' `docker-compose.yaml` file, replace the image tag with `2.0.0`.

**Example**

```yaml
services:
  protocol-mapper-agent:
    image: registry.cybus.io/cybus/protocol-mapper:2.0.0 # update the image tag
    environment:
      CYBUS_AGENT_MODE: distributed
      CYBUS_AGENT_NAME: my-docker-compose-agent
      CYBUS_HOSTNAME_INGRESS: connectware
    volumes:
      - protocol-mapper-agent:/data
      - ./cybus_combined_ca.crt:/connectware/certs/ca/ca-chain.pem
    restart: unless-stopped
    network_mode: host
    hostname: my-docker-compose-agent
volumes:
  protocol-mapper-agent:
```

4. Run `docker compose up`.
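The pull-and-restart sequence can be sketched as follows (service name taken from the example above; `-d` detaches from the containers and is optional):

```sh
docker compose pull protocol-mapper-agent     # fetch the 2.0.0 image
docker compose up -d                          # recreate the agent container
docker compose logs -f protocol-mapper-agent  # watch the agent come up
```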

### 7. Reinstalling Services

#### Why the Change?

With Connectware 2.0.0, your services and resources are no longer stored on the service-manager volume, but inside the PostgreSQL database.

#### What You Need to Do

<details>

<summary><strong>Reinstalling Your Services</strong></summary>

After completing the upgrade, you must reinstall all previously used services. You can do this using your preferred method:

* Via the Admin UI, see [Installing Services](https://docs.cybus.io/2-0-6/documentation/services/setting-up-and-configuring-services/installing-services).
* Automatically through a CI pipeline.

Additionally, there have been changes to the relationships between services. Understanding how these interdependencies behave at runtime is crucial for correct deployment and maintenance.

**Install parent services first (recommended)**: If the service depends on another service (parent/child relationship), install the parent service first. This ensures:

* Service relations are created during installation.
* Each service can be installed with `targetState=enabled`.

**Install child services first (alternative)**: It is possible to install the dependent (child) service first, but this comes with limitations:

* Service relations are only established when the service is enabled.
* The dependent (child) service can **only** be installed with `targetState=disabled`.

For more details, see [Service Dependency Behavior](https://docs.cybus.io/2-0-6/services/inter-service-referencing#service-dependency-behavior) and [targetState](https://docs.cybus.io/2-0-6/services/service-commissioning-files/resources/cybus-endpoint#targetstate).

</details>

## Feature-Specific Upgrade Steps

Only follow these steps if you use the related features, so that they continue working after the upgrade.

### 1. Permissions and Roles

#### Why the Change?

Permissions allow administrators to define who can access what resources and what actions they can perform. Each permission represents a specific access right to a resource.

Connectware 2.0.0 introduces new and changed permissions. Because of this, custom roles or specific permissions you have set up might not allow users to do everything they could before the 2.0.0 upgrade.

#### What You Need to Do

<details>

<summary><strong>Verifying Permissions</strong></summary>

* Check the permissions of your users. Compare them with the default roles in Connectware 2.0.0 and make any updates needed so your users can continue working without interruptions.

For more information on managing permissions, see [Permissions](https://docs.cybus.io/2-0-6/documentation/user-management/permissions).

</details>

### 2. Custom Connectors

#### Why the Change?

Connectware has evolved its architecture, removing dependencies like VRPC and improving protocol handling. To ensure compatibility, you must update your custom connector implementations.

#### What You Need to Do

If you are using [custom connectors](https://docs.cybus.io/2-0-6/documentation/industry-protocol-details/custom-connectors), follow these steps to make your custom connector compatible with Connectware 2.0.0.

<details>

<summary><strong>1. Remove VRPC</strong></summary>

VRPC is no longer supported in the custom connector environment.

* Remove all VRPC references in the custom connector code. This includes the import and any usage of `VrpcAdapter`:

**Example**

{% code lineNumbers="true" %}

```javascript
// const { VrpcAdapter } = require('vrpc') <- REMOVE THIS
const Connection = require('./FoobarConnection')
const Endpoint = require('./FoobarEndpoint')

// VrpcAdapter.register(Endpoint, { schema: Endpoint.getSchema() }) <- REMOVE THIS
// VrpcAdapter.register(Connection, { schema: Connection.getSchema() }) <- REMOVE THIS
```

{% endcode %}

</details>

<details>

<summary><strong>2. Follow the Directory Naming Conventions</strong></summary>

* When defining the Dockerfile, ensure that the destination path for the copied source files ends in a protocol-specific directory name written entirely in lowercase.

**Example**

{% code lineNumbers="true" %}

```dockerfile
# protocol directory must be lowercase
COPY ./src ./src/protocols/foobar
```

{% endcode %}

</details>

<details>

<summary><strong>3. Follow the Schema Naming Conventions</strong></summary>

* The schema `$id` must match the file name (without the `.json`).
* The schema must start with a capital letter, like `Foobar`.

**Example**

* In `FoobarConnection.json`, the schema must look like:

{% code lineNumbers="true" %}

```json
{
  ...
  "$id": "FoobarConnection"
  ...
}
```

{% endcode %}

* In `FoobarEndpoint.json`, the schema must look like:

{% code lineNumbers="true" %}

```json
{
  ...
  "$id": "FoobarEndpoint"
  ...
}
```

{% endcode %}

</details>

<details>

<summary><strong>4. Schema Versioning</strong></summary>

Schemas support versioning through the additional `version` property, which must be a positive integer greater than zero. If this property is omitted, the default value is `1`.

Versioning ensures that only the latest version of a schema is considered active and valid. Even though all custom connector instances should run the same schema versions, the latest version overwrites any previous version in the Connectware control plane.

**Example**

* `FoobarConnection.json` supporting versioning.

{% code lineNumbers="true" %}

```json
{
  ...
  "$id": "FoobarConnection",
  "version": 3
  ...
}
```

{% endcode %}

</details>

<details>

<summary><strong>5. Follow the Source Directory Naming Conventions</strong></summary>

Follow the case-sensitive naming conventions based on the protocol name.

* File names must start with an uppercase protocol name (e.g., `Foobar`).
* Connection and endpoint suffixes are mandatory.
* JS files define classes.
* JSON files define schemas.

**Example**

{% code lineNumbers="true" %}

```
src/
├── FoobarConnection.js
├── FoobarConnection.json
├── FoobarEndpoint.js
└── FoobarEndpoint.json
```

{% endcode %}

</details>

<details>

<summary><strong>6. Follow the Class Naming Conventions</strong></summary>

* The class name must match the file name, excluding the `.js` extension.
* The class name must start with a capital letter, such as `Foobar`.

**Example**

* In `FoobarConnection.js`, the class must be:

{% code lineNumbers="true" %}

```javascript
class FoobarConnection extends Connection { ... }
```

{% endcode %}

* In `FoobarEndpoint.js`, the class must be:

{% code lineNumbers="true" %}

```javascript
class FoobarEndpoint extends Endpoint { ... }
```

{% endcode %}

</details>

<details>

<summary><strong>7. Class Constructors</strong></summary>

Unless you need a specific constructor, there is no need to specify one because it is inherited from the parent class. However, if you need to implement a custom constructor for the `Connection` or `Endpoint` classes, preserve the following format:

* In `FoobarConnection.js`, the class constructor must be like:

{% code lineNumbers="true" %}

```javascript
class FoobarConnection extends Connection {
  constructor (params) {
    super(params)

    // custom code
  }
}
```

{% endcode %}

* In `FoobarEndpoint.js`, the class constructor must be like:

{% code lineNumbers="true" %}

```javascript
class FoobarEndpoint extends Endpoint {
  constructor (params, dataPlaneConnectionInstance, parentConnectionInstance) {
    super(params, dataPlaneConnectionInstance, parentConnectionInstance)

    // custom code
  }
}
```

{% endcode %}

</details>

<details>

<summary><strong>8. Do not Set the _topic Property Manually</strong></summary>

The `_topic` property is now handled automatically. Manually assigning it will cause errors.

The following code is invalid and must be removed since topics are now built internally.

{% code lineNumbers="true" %}

```javascript
// this is invalid, remove it
this._topic = 'this/is/a/topic'
```

{% endcode %}

</details>

<details>

<summary><strong>9. ES Modules Not Supported</strong></summary>

The standard JavaScript environment of custom connectors is based on CommonJS modules. ES modules are not supported.

</details>

<details>

<summary><strong>10. TypeScript Configuration</strong></summary>

TypeScript is not officially supported in development workflows. However, if you want to use TypeScript and compile it to JavaScript, make sure to configure your `tsconfig.json` file as follows:

{% code lineNumbers="true" %}

```json
{
  "compilerOptions": {
    ...
    "target": "es2022",    /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
    "lib": ["es6"],        /* Specify a set of bundled library declaration files that describe the target runtime environment. */
    "module": "commonjs",  /* Specify what module code is generated. */
    ...
  },
  "include": ["src/**/*.ts", "src/**/*.json", "src/**/*.js", "src/**/*.d.ts", "test/**/*"]
}
```

{% endcode %}

Additionally, the compiled JavaScript output must include an `exports.default` assignment and the exported class itself. This ensures interoperability with our CommonJS-based module system. The compiled `.js` file should result in:

{% code lineNumbers="true" %}

```javascript
class FoobarConnection { ... }
exports.default = FoobarConnection;
```

{% endcode %}

</details>

### 3. Systemstate Protocol

#### Why the Change?

To improve performance and reduce unnecessary messaging load, the Systemstate protocol no longer supports whole-service tracking or redundant status events. This simplifies agent responsibilities and avoids misleading lifecycle signals.

#### What You Need to Do

If you are using the [Systemstate protocol](https://docs.cybus.io/2-0-6/documentation/industry-protocol-details/systemstate), do the following:

<details>

<summary><strong>1. Stop Tracking Whole Services</strong></summary>

* Tracking the entire service object is no longer allowed. You must update your connector configuration to track individual resources only (like specific endpoints or connections).

**Example**

{% code lineNumbers="true" %}

```yaml
# Before (no longer supported)
serviceEndpoint:
  type: Cybus::Endpoint
  properties:
    protocol: Systemstate
    connection: !ref systemStateConnection
    subscribe:
      resourceId: !sub '${Cybus::ServiceId}'
```

{% endcode %}
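A hedged sketch of the corrected shape, tracking a single resource instead of the whole service (the `resourceId` value is hypothetical; use the ID of the concrete endpoint or connection you want to track):

```yaml
# After: track one specific resource only
endpointStateEndpoint:
  type: Cybus::Endpoint
  properties:
    protocol: Systemstate
    connection: !ref systemStateConnection
    subscribe:
      resourceId: !sub '${Cybus::ServiceId}-myEndpoint' # hypothetical resource ID
```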

</details>

<details>

<summary><strong>2. Update Event Handling Logic</strong></summary>

The following status events have been removed from Systemstate. If your implementation depends on them (e.g., for health monitoring or automation), you must refactor that logic:

* `subscribed`/`unsubscribed`
* `online`/`offline`

</details>

### 4. Log Monitoring

#### Why the Change?

With version 2.0.0, several log messages have been corrected to fix spelling mistakes. These changes may affect existing log monitoring configurations.

#### What You Need to Do

<details>

<summary><strong>Updating Your Log Monitoring</strong></summary>

If you rely on log monitoring, review whether your setup references any of the updated log messages and adjust accordingly.

| Type          | Log Level | Original (with typo)                                                                                | Corrected line                                                                                      |
| ------------- | --------- | --------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
| Log message   | info      | MS Entra Login was succesful, redirecting to                                                        | MS Entra Login was successful, redirecting to                                                       |
| Log message   | debug     | DELETE /:id/tokens sucess for user: '\<req.params.id>'                                              | DELETE /:id/tokens success for user: '\<req.params.id>'                                             |
| Error message |           | Views are found, the restore implenetation do not support views!                                    | Views are found, the restore implementation do not support views!                                   |
| Error message |           | query paramter error is not a valid HTTP error code (\<req.query.code>)                             | query parameter error is not a valid HTTP error code (\<req.query.code>)                            |
| Log message   | debug     | Cleared persistance of:                                                                             | Cleared persistence of:                                                                             |
| Error message | warn      | HttpNode is configured with method 'GET' but operation 'serverRecieves' (instead of serverProvides) | HttpNode is configured with method 'GET' but operation 'serverReceives' (instead of serverProvides) |
| Log message   |           | Error when trying to recieve OPC-UA Method details from nodeId : \<err.message>                     | Error when trying to receive OPC-UA Method details from nodeId : \<err.message>                     |
| Log message   | warn      | tried to pass the value as an INT64 and found no matching convertion                                | tried to pass the value as an INT64 and found no matching conversion                                |
| Log message   | warn      | tried to pass the value as an UINT64 and found no matching convertion                               | tried to pass the value as an UINT64 and found no matching conversion                               |
| Log message   | debug     | Sucessfully subscribed to topic: \<mqttOpts.topic>.                                                 | Successfully subscribed to topic: \<mqttOpts.topic>.                                                |
| Log message   | error     | error occured during shutting down the server                                                       | error occurred during shutting down the server                                                      |
| Log message   | error     | expected payload convertion to fail because given payload was not a JSON notation, but 'err == nil' | expected payload conversion to fail because given payload was not a JSON notation, but 'err == nil' |

</details>

### 5. Heidenhain Agents (Windows)

#### Why the Change?

For Connectware 2.0.0, the Heidenhain protocol has been updated.

#### What You Need to Do

<details>

<summary><strong>Installing the Heidenhain Agent</strong></summary>

You must upgrade the Windows-based [Cybus Heidenhain Agent](https://docs.cybus.io/2-0-6/documentation/industry-protocol-details/heidenhain-dnc) to work with Connectware 2.0.0.

1. Uninstall the existing Heidenhain agent installation from your Windows system.
2. Install the updated Heidenhain agent. You can find the download link at [Heidenhain DNC](https://docs.cybus.io/2-0-6/documentation/industry-protocol-details/heidenhain-dnc).

</details>

### 6. Auto-Generated MQTT Topics of Resources

#### Why the Change?

With Connectware 2.0.0, auto-generated MQTT topics no longer include resource-specific properties. This makes the topic generation more unified and explicit. You must update any service commissioning file that hardcodes those old auto-generated topics.

**Example of old behavior**

Some auto-generated topics contained property-specific parts:

* S7: `services/myService/pressure/address:DB1,REAL6`
* Modbus: `services/myService/current/fc:3/address:7`
* HTTP: `services/myService/myEndpoint/get[object Object]`

These paths might have been referenced inside `Cybus::Mapping` resources. Using auto-generated topics inside `Cybus::Mapping` is **not** recommended. Instead, reference the resources via the `!ref` reference method.

#### What You Need to Do

<details>

<summary><strong>Updating Auto-Generated Topic References</strong></summary>

Auto-generated topics no longer include resource-specific properties. They always follow:

{% code lineNumbers="true" %}

```bash
<Cybus::MqttRoot>/<serviceId>/<resourceName>
```

{% endcode %}

**Example of new behavior**

* S7: `services/myService/pressure`
* Modbus: `services/myService/current`
* HTTP: `services/myService/myEndpoint`

**Procedure**

1. Scan your service commissioning files for any usage of auto-generated topics.
2. Adapt those references by replacing direct topic strings with `!ref` references.
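A hedged sketch of such a replacement, assuming an endpoint resource named `pressure` (the published topic is illustrative):

```yaml
pressureMapping:
  type: Cybus::Mapping
  properties:
    mappings:
      - subscribe:
          endpoint: !ref pressure # reference the resource instead of its auto-generated topic
        publish:
          topic: 'factory/pressure'
```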

For more details, see [Reference Method (!ref)](https://docs.cybus.io/2-0-6/services/service-commissioning-files/parameters#reference-method-ref).

</details>

### 7. Auto-Generated MQTT Users

#### Why the Change?

Before 2.0.0, Connectware created a hidden MQTT user for every installed service. These auto-generated users were only used when the service commissioning file explicitly referenced the pseudo parameter [Cybus::MqttUser](https://docs.cybus.io/2-0-6/services/service-commissioning-files/parameters#cybus-mqttuser).

With Connectware 2.0.0, hidden users and groups are created only when the service commissioning file uses the `Cybus::MqttUser` pseudo parameter. This reduces unused accounts and makes credential usage explicit.

#### What You Need to Do

<details>

<summary><strong>Verify Your Service Commissioning Files</strong></summary>

If you are using auto-generated MQTT users outside of services (e.g., scripts, dashboards, or other non-commissioning references), migrate to explicit identities:

* Create dedicated users with the required roles/permissions. See [User Management](https://docs.cybus.io/2-0-6/documentation/user-management).
* Update your external systems to use the new explicit credentials.

</details>
