Upgrading Connectware to 2.0.0 (Kubernetes)

Disclaimer

Some Connectware upgrades require you to follow a few additional steps when upgrading Connectware to a newer version.

When upgrading your Connectware instance, follow the required upgrade path based on your current version:

  • If you are running Connectware version 1.4.1 or below:

    1. First upgrade to version 1.5.0

    2. Then upgrade to version 1.7.0

    3. Finally upgrade to version 2.0.0

  • If you are running Connectware version between 1.5.0 and 1.6.2:

    1. First upgrade to version 1.7.0

    2. Then upgrade to version 2.0.0

  • If you are running Connectware version 1.7.0 or newer:

    • You can directly upgrade to version 2.0.0

For detailed instructions on each upgrade step, refer to the corresponding upgrade guides.

Before You Begin

Upgrading to Connectware 2.0.0 introduces significant improvements in performance, scalability, and reliability. However, these changes also come with updated requirements for versions, networking, hardware, and storage.

This guide outlines the prerequisites and known limitations you must consider to ensure a smooth and successful upgrade.

Connectware Version Requirements

To be able to upgrade to Connectware 2.0.0, your Connectware version must be 1.7.0 or above.

If your Connectware installation is below 1.7.0, make sure that you have followed Upgrading Connectware to 1.7.0 (Kubernetes) before upgrading to 2.0.0.

Network Requirements

Why the Change?

With Connectware 2.0.0, some internal components have been updated to improve communication and performance. As a result, the network configuration has changed:

  • Added: TCP/4222 and TCP/4223

  • Removed: TCP/1884 and TCP/8884

What You Need to Do

Updating the Network Ports

Verify that your firewalls and security rules are updated to allow the new ports (TCP/4222 and TCP/4223) and to remove dependencies on the deprecated ones (TCP/1884 and TCP/8884).

This ensures uninterrupted connectivity between your agents and Connectware.
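If you want to verify reachability before the upgrade, a quick connectivity check from an agent host can help. This is a sketch; connectware.company.tld stands in for the hostname your agents actually use:

nc -zv connectware.company.tld 4222
nc -zv connectware.company.tld 4223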

Hardware Requirements

Why the Change?

Connectware 2.0.0 makes increased use of its PostgreSQL database and adds some components. When planning this upgrade, ensure your infrastructure can accommodate the enhanced resource requirements. This upgrade requires additional computing power.

What You Need to Do

Updating the Hardware Setup

We recommend adding the following resources to your hardware setup:

  • 12 CPU cores

  • 11 GB of memory

  • 52 Gi of storage

However, these are general guidelines. Check what your specific system needs and make adjustments accordingly. If you were using the control-plane-broker option, you can offset these additional requirements with the resources it used, since it is being removed in this upgrade.

Storage Requirements

Why the Change?

We have added two new components to Connectware:

  • A streaming server called NATS

  • A service called resource-status-tracking

Alongside other improvements, these additions enable Connectware to scale effectively for much larger deployments.

In addition, the latest versions of PostgreSQL and auth-server require updated Kubernetes resource requests and limits to maintain stability and performance under heavier workloads.

What You Need to Do

Adjusting Kubernetes Resource Requests and Limits for Core Microservices

The microservices postgresql, auth-server, nats, and resource-status-tracking now have new or revised Kubernetes resource requests and limits. Make sure to adapt the default values to match your deployment needs.

We recommend beginning with the defaults, monitoring performance metrics, and fine-tuning resource allocations as needed.

  • To adjust the default values, update the corresponding values in the global.podResources Helm value context.

Example

global:
    podResources:
        nats:
            limits:
                cpu: 2000m
                memory: 2000Mi
            requests:
                cpu: 2000m
                memory: 2000Mi
        resourceStatusTracking:
            limits:
                cpu: 1000m
                memory: 1000Mi
            requests:
                cpu: 1000m
                memory: 1000Mi
        database:
            limits:
                cpu: 2000m
                memory: 2000Mi
            requests:
                cpu: 2000m
                memory: 2000Mi
        authServer:
            limits:
                cpu: 1500m
                memory: 1000Mi
            requests:
                cpu: 1500m
                memory: 1000Mi
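To see whether the defaults fit your workload, you can compare the configured values against actual consumption once the upgrade is complete. This sketch assumes the Kubernetes metrics-server is installed in your cluster:

kubectl top pods -n [your-namespace]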

Known Limitations

  1. Adding Certificates Through Admin UI Not Supported

  • You cannot add certificates to Connectware's CA bundle via the Admin UI.

  • Instead, modify the cybus_ca.crt file directly on the certs volume.

  2. Backup via Admin UI Not Supported

  • The backup functionality through Admin UI is not supported.

  • Instead, create backups of the database by running a pg_dump command on the postgresql-0 pod.

Example
kubectl exec -n [your-namespace] postgresql-0 -- \
    bash -c "pg_dump -U cybus-admin --if-exists -c cybus_connectware" \
    > connectware_database.sql

Upgrade Procedure

Follow this procedure to upgrade your Connectware installation to version 2.0.0. The steps are divided into two parts:

  • Mandatory Upgrade Steps: Required for all installations to ensure a smooth and stable upgrade.

  • Feature-Specific Upgrade Steps: Only needed if you use certain features, so they remain compatible with Connectware 2.0.0.

Expand the following sections for an overview of all upgrade steps.

Mandatory Upgrade Steps

These steps apply to every Connectware installation upgrading to Connectware 2.0.0. For a detailed guide, see Mandatory Upgrade Steps.

Depending on your setup, you may also need to perform additional Conditional Steps.

  1. TLS Changes: Default behavior on certificate validation has been adjusted.

  2. Update Helm Values: Remove obsolete Helm values and adjust for changed and new values.

  3. Preparing the Connectware Helm Upgrade: Prepare removal of control-plane-broker and remove PostgreSQL StatefulSet.

  4. Upgrading to Connectware 2.0.0: Download and install Connectware 2.0.0.

  5. Enabling Agents in the Connectware Helm Chart: After upgrading Connectware, you need to go back to your agents to enable their TLS connections.

  6. Updating Helm Values for the Connectware Agent Helm Chart: Update your agent configuration to comply with the updated Helm value configuration.

  7. Upgrading Agents for the Connectware Agent Helm Chart: Upgrade your agents with the connectware-agent Helm chart.

  8. Reinstalling Services: This upgrade changes where your services are stored. You will need to reinstall any services after the upgrade.

Feature-Specific Upgrade Steps

Only follow these if you use the related features, so they continue working after the upgrade.

  1. Roles and Permissions: New permissions were added to Connectware. Verify whether your custom roles require updates.

  2. Custom Connectors: Update your custom connector configurations to meet new requirements.

  3. Systemstate Protocol: Update your Systemstate protocol configurations to meet new requirements.

  4. Log Monitoring: Some logging strings are changed. If you use log monitoring, you may need to update it.

  5. Heidenhain Agents: Upgrade your Heidenhain agents.

  6. Auto-Generated MQTT Topics of Resources: Topic generation no longer includes resource-specific properties. Update your service commissioning files if you relied on old patterns.

Mandatory Upgrade Steps

These steps are required to upgrade your Connectware installation to Connectware 2.0.0.

1. TLS Changes

Why the Change?

To enhance security by default, Connectware agents now verify TLS certificate chains automatically. This ensures that all components communicate over a valid trust chain, while still giving you the option to keep the old behavior by explicitly disabling TLS verification.

Key Changes

1. Introducing the cybus_combined_ca.crt

Connectware maintains two separate CA chains:

  • External certificates validated by cybus_ca.crt.

  • Internal certificates validated by shared_yearly_ca.crt.

Which CA an agent requires depends on whether it connects from inside or outside the Connectware network.

To simplify configuration, we introduced cybus_combined_ca.crt, a bundle containing both chains, so agents can use a single file without needing to distinguish between internal and external CA certificates.

2. Certificate Chain Verification in Agents

Agents now enforce TLS chain validation by default. Each agent requires access to cybus_combined_ca.crt, available on the certs volume.

  • To revert to the previous behavior (skipping verification), set the environment variable CYBUS_TRUST_ALL_CERTS to true. Note that it has been renamed from TRUST_ALL_CERTS.

3. Configuring Certificate Hostnames

The default Connectware-generated CA includes the hostnames localhost and connectware.

  • To add more hostnames, configure the Helm value global.ingressDNSNames.
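For example, to make the default certificate valid for an additional hostname (connectware.company.io is a placeholder; the supported formats are listed under New Helm Values below):

global:
    ingressDNSNames:
        - connectware.company.io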

4. Renewal of Certificate Chains

With 2.0.0, the internal CA chain is replaced:

  • Certificate Authority renamed from CybusCA to CybusInternalCA.

  • The hostname nats is added as a Subject Alternate Name (SAN) to shared_yearly_server.crt.

The built-in default external CA certificate chain is also replaced.

  • The hostname connectware is added as a SAN to cybus_server.crt.

If you rely on monitoring, custom setups, or modified certificates, adapt your configuration accordingly.

5. Replacing CA Certificate Chain

To replace Connectware’s default external chain with your enterprise-managed CA:

  • Replace cybus_ca.crt with your enterprise CA certificate.

  • Ensure cybus_server.crt and cybus_server.key form a valid key pair, signed by the CA in cybus_ca.crt.

Do not replace the internal CA (shared_yearly_ca.crt).

After replacement:

  1. Restart system-control-server.

  2. Restart all Connectware services.
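A minimal sketch of the first restart on Kubernetes, assuming the system-control-server Deployment name used elsewhere in this guide:

kubectl rollout restart deployment system-control-server -n [your-namespace]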

2. Update Helm Values

Why the Change?

Some changes to Connectware and the Helm chart require you to adapt the Helm values in your values.yaml file:

  • The optional control-plane-broker is removed from Connectware.

  • All parameters to tune the inter-service communication have been removed.

Obsolete Helm Values

Removing Obsolete Helm Values

Some Helm values are obsolete and have been removed. Remove the following Helm values from your values.yaml file for the connectware Helm chart:

  • global.rpcTimeout

  • global.adminWebApp.rpcTimeout

  • global.containerManager.rpcTimeout

  • global.protocolMapper.rpcTimeout

  • global.systemControlServer.rpcTimeout

  • global.serviceManager.rpcTimeout

  • global.serviceManager.storage

  • global.controlPlaneBroker

  • global.protocolMapperAgents[*].controlPlane

  • global.serviceManager.useServicesGraph

New Helm Values

1. Ingress DNS Name Configuration

With the changed TLS behavior in Connectware, you must now add the DNS names under which Connectware is addressed, for example by agents.

If you are replacing Connectware's default PKI, you have likely already managed this yourself by providing a valid cybus_server.crt that contains all Subject Alternate Names (SANs) used within your setup.

If you are using Connectware's default PKI, you can use the new Helm value global.ingressDNSNames, which is a list of names that will be added to the default cybus_server.crt.

Hostname Formats

You can include multiple hostnames in the list. The certificate will include all specified names in its SAN section.

The configuration accepts various hostname formats:

  • Wildcards (e.g., *.company.io)

  • Subdomains (e.g., connectware.company.io)

  • Custom hostnames (e.g., localhost)

  • IP Addresses (e.g. 192.168.100.42)

Example

global:
    ingressDNSNames:
        - company.io
        - localhost
        - '*.company.io'
        - connectware.company.io
        - 192.168.100.42
2. Proxy Configuration

Connectware's proxy configuration has been improved with version 2.0. Accompanying this, we added Helm values to configure proxy usage. This means that you can no longer configure proxy usage directly through environment variables. If you have been doing this in the past, transfer your configuration to these new Helm values:

| New Helm Value | Type | Default Values | Purpose |
| --- | --- | --- | --- |
| global.proxy.url | string | <none> | Address of the HTTP proxy server to be used |
| global.proxy.exceptions | array | <none> | List of hosts, for which the proxy is ignored |
| global.proxy.existingSecret | string | <none> | Name of a Kubernetes Secret which contains the proxy configuration as the keys 'url' and 'exceptions' |

Example

global:
    proxy:
        url: proxy.mycompany.tld
        exceptions:
            - my-server.mycompany.tld
            - my-internal-service.mycompany.tld

Example with existing secret

Create your secret using your preferred method, in this example we will use a kubectl create command:

kubectl create secret -n [connectware-namespace] generic my-connectware-proxy-config \
 --from-literal="url=http://myproxy.company.tld" \
 --from-literal="exceptions=http://myserver1.company.tld,https://myserver2.company.tld"
global:
    proxy:
        existingSecret: my-connectware-proxy-config
3. NATS Configuration

Connectware 2.0.0 introduces NATS as the stream server, primarily used for inter-service communication. The key configuration parameter is global.nats.replicas, which defines the cluster size. Typical values are 3 or 5, with 3 as the default. Increasing this to 5 raises the redundancy level from N+1 to N+2.
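For example, to raise the redundancy level to N+2, set the replica count to 5:

global:
    nats:
        replicas: 5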

The following configuration values are available for the NATS streaming server:

| New Helm Value | Type | Default Values | Purpose |
| --- | --- | --- | --- |
| global.nats.replicas | integer | 3 | The number of NATS replicas |
| global.nats.podAntiAffinity | string | soft | The podAntiAffinity behavior for NATS (one of none, soft, hard) |
| global.nats.podAntiAffinityTopologyKey | string | kubernetes.io/hostname | The podAntiAffinityTopologyKey for NATS |
| global.nats.labels | object | <none> | A set of labels that will be applied to NATS resources |
| global.nats.annotations | object | <none> | A set of annotations that will be applied to NATS resources |
| global.nats.podLabels | object | <none> | A set of labels that will be applied to NATS pod resources |
| global.nats.podAnnotations | object | <none> | A set of annotations that will be applied to NATS pod resources |
| global.nats.service.labels | object | <none> | A set of labels that will be applied to NATS service resources |
| global.nats.service.annotations | object | <none> | A set of annotations that will be applied to NATS service resources |
| global.podResources.nats.resources | array | For a list of all default values, see default-values.yaml | Kubernetes compute resource requirements and limits |
| global.nats.env | array | <none> | Array containing environment variables as name and value pairs to be applied to the NATS service |
| global.nats.metrics.prometheus.enabled | boolean | false | Enable Prometheus exporter for NATS |
| global.nats.metrics.prometheus.resources | array | For a list of all default values, see default-values.yaml | Kubernetes compute resource requirements and limits |
| global.nats.metrics.prometheus.serviceMonitor.enabled | boolean | false | Enable Prometheus Operator ServiceMonitor for NATS |
| global.nats.metrics.prometheus.serviceMonitor.namespace | string | <none> | Namespace for the Prometheus ServiceMonitor |
| global.nats.metrics.prometheus.serviceMonitor.labels | object | <none> | Labels for the Prometheus ServiceMonitor |
| global.nats.storage.size | string | 16Gi | Define the size of the NATS JetStream volume |
| global.nats.storage.storageClassName | string | <none> | Define a Kubernetes StorageClass that will be used for the NATS JetStream volume |
| global.nats.containerSecurityContext | array | For a list of all default values, see default-values.yaml | Set a container SecurityContext as defined by the Kubernetes API |

4. Resource-Status-Tracking Configuration

In addition to the new stream server, Connectware introduces a second new component called resource-status-tracking. This component allows you to monitor the status of resources created through service commissioning files and enables you to detect deviations in service behavior.

The following configuration values are available for resource-status-tracking:

| New Helm Value | Type | Default Values | Purpose |
| --- | --- | --- | --- |
| global.resourceStatusTracking.replicas | integer | 2 | The number of resource-status-tracking replicas |
| global.resourceStatusTracking.podAntiAffinity | string | soft | The podAntiAffinity behavior for resourceStatusTracking (one of "none", "soft", "hard") |
| global.resourceStatusTracking.podAntiAffinityTopologyKey | string | kubernetes.io/hostname | The podAntiAffinityTopologyKey for resourceStatusTracking |
| global.resourceStatusTracking.labels | object | <none> | A set of labels that will be applied to resourceStatusTracking resources |
| global.resourceStatusTracking.annotations | object | <none> | A set of annotations that will be applied to resourceStatusTracking resources |
| global.resourceStatusTracking.podLabels | object | <none> | A set of labels that will be applied to resourceStatusTracking pod resources |
| global.resourceStatusTracking.podAnnotations | object | <none> | A set of annotations that will be applied to resourceStatusTracking pod resources |
| global.resourceStatusTracking.service.labels | object | <none> | A set of labels that will be applied to resourceStatusTracking service resources |
| global.resourceStatusTracking.service.annotations | object | <none> | A set of annotations that will be applied to resourceStatusTracking service resources |
| global.podResources.resourceStatusTracking.resources | array | For a list of all default values, see default-values.yaml | Kubernetes compute resource requirements and limits |
| global.resourceStatusTracking.env | array | <none> | Array containing environment variables as name and value pairs to be applied to the resourceStatusTracking service |
| global.resourceStatusTracking.containerSecurityContext | array | For a list of all default values, see default-values.yaml | Set a container SecurityContext as defined by the Kubernetes API |

5. PostgreSQL Storage Size

In previous releases, the storage volume of the PostgreSQL component was fixed at 1 Gi. With Connectware 2.0.0, the reliance on PostgreSQL has increased, resulting in a new default storage size of 5 Gi. This parameter is now configurable.

We recommend allocating at least 5 Gi. For larger deployments, storage sizes of 20 Gi or more may be appropriate.

You can configure the storage size via the global.postgresql.storage.size Helm value:

global:
    postgresql:
        storage:
            size: 10Gi
6. Enabling MQTTS for Protocol-Mapper Agents

A new Helm value is available to configure whether a protocol-mapper agent establishes its data plane connection to the Connectware MQTT broker using TLS.

| New Helm Value | Type | Default Values | Purpose |
| --- | --- | --- | --- |
| global.protocolMapperAgents[*].dataPlane.tls | boolean | false | Enable TLS encryption for the agent's MQTT data plane connection |

global:
    protocolMapperAgents:
        - name: welder-robots
          dataPlane:
              tls: true
7. Disabling Agents During the Connectware Upgrade

This step prepares agents orchestrated with the connectware Helm chart for disabling during the upgrade. Agents orchestrated by other methods must be shut down separately, as described in 7. Shutting Down Protocol-Mapper Agents.

To prevent agents from requiring re-registration after the upgrade, you must disable them during the installation. Once the upgrade is complete, you will re-enable the agents, update their configuration, and provide them with a valid CA certificate.

  • To disable the agents defined in your Connectware Helm chart, disable the Helm value global.protocolMapperAgents and all related values within this context, either by commenting out each line with a # or by temporarily removing them from the file, as shown in the example below.
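For example, a temporarily disabled agent section could look like this (the agent names are placeholders taken from later examples in this guide; other values under global stay unchanged):

global:
    # protocolMapperAgents:
    #     - name: welder-robots
    #     - name: bender-robots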

3. Preparing the Connectware Helm Upgrade

Why the Change?

Connectware 2.0.0 introduces architectural improvements that require you to remove or adjust certain resources before running the Helm upgrade. This ensures a clean and successful upgrade process.

What You Need to Do

1. Removing the Control-Plane-Broker

The control-plane-broker is deprecated and its associated StatefulSet is no longer used. It will no longer be started after the upgrade.

  • No action is required in your Helm installation.

  • Optional: Remove global.controlPlaneBroker from the connectware chart and controlPlaneBrokerEnabled from the connectware-agent chart.

  • The brokerdata-control-plane-broker-* and brokerlog-control-plane-broker-* PersistentVolumeClaims are not removed automatically. Delete them manually if you want to reclaim the storage.
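A sketch of how to find and remove these PersistentVolumeClaims; double-check the list before deleting, and adjust the indices to match your former broker replica count:

kubectl get pvc -n [your-namespace] | grep control-plane-broker
kubectl delete pvc -n [your-namespace] brokerdata-control-plane-broker-0 brokerlog-control-plane-broker-0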

2. Backing Up Your PostgreSQL Database

With Connectware 2.0.0, Connectware uses a new major version of PostgreSQL. You need to delete your postgresql volume before upgrading Connectware (this is covered later in this upgrade guide). This requires you to create a backup of your database and restore it after the upgrade.

  1. To create a backup of your database, run the following command.

kubectl exec -n [your-namespace] postgresql-0 -- \
    bash -c "pg_dump -U cybus-admin --if-exists -c cybus_connectware" \
    > connectware_database.sql
  2. Make sure the backup is successful, then store the database file in a secure location.

3. Service-Manager Volume Backup & Removal

The service-manager PersistentVolumeClaim (PVC) is deprecated and will be removed automatically during the upgrade to 2.0.0.

Depending on your ReclaimPolicy, this may or may not mean that the volume is being deleted too. If it is not automatically deleted, you will have to manually delete the volume previously associated with the service-manager PersistentVolumeClaim to free up the disk space.

The former contents are now stored in the PostgreSQL database, but they are not migrated automatically. You will need to reinstall any services you used before. See Reinstalling Services.

If you do not have your services stored outside of Connectware, make sure to export your services, or create a backup of your service-manager volume before upgrading.

4. Upgrading to Connectware 2.0.0

1. Updating Helm Repository Cache

Before performing the upgrade, update the Helm repository cache to ensure the latest Connectware chart version is available.

  • Run the following command:

helm repo update
2. Reviewing the Connectware Changelog

Before upgrading to Connectware 2.0.0, review the changelog to familiarize yourself with new features, bug fixes, and other changes introduced in Connectware 2.0.0.

3. Reviewing the Readme File

Before upgrading to Connectware 2.0.0, review the readme file. The readme may contain important version-specific upgrade instructions.

  • To open the readme file, run the following command:

helm show readme <repo-name>/connectware --version <2.0.0>
4. Comparing Helm Configurations between Connectware Versions

With a new Connectware version, there might be changes to the default Helm configuration values. We recommend that you compare the default Helm values of your current Connectware version with the default Helm values of your target Connectware version.

  • To display the new default values, enter the following command:

helm show values <repo-name>/connectware --version <2.0.0>
  • To display which Connectware default values have changed between your current version and your target version, enter the following command:

diff <(helm show values <repo-name>/connectware --version <current-version>) <(helm show values <repo-name>/connectware --version <2.0.0>)

Example

diff <(helm show values cybus/connectware --version 1.1.0) <(helm show values cybus/connectware --version 1.1.1)
83c83
<     version: 1.1.0
---
>     version: 1.1.1

In this example, only the image version has changed. However, if any of the Helm value changes are relevant to your setup, make the appropriate changes.

  • To override default Helm values, add the custom Helm value to your local values.yaml file.

5. Adjusting Helm Values

When you have reviewed the necessary information, adjust your configuration in your values.yaml file. Not every upgrade requires adjustments.

If you specified which image version of Connectware to use by setting the Helm value global.image.version, you will need to update it to <target-version>. Unless you have a specific reason to pin the image version, we recommend not setting this Helm value.

6. Verifying your Backups

Make sure that you store backups of your setup. This allows you to restore a previous state if necessary.

Your backups must consist of the following files:

  • All Kubernetes PersistentVolumes that Connectware uses

  • Your Connectware database

  • Your values.yaml file

  • All service commissioning files

Depending on your local infrastructure, it may be necessary to back up additional files.

7. Shutting Down Protocol-Mapper Agents

In this step, you need to shut down any agents currently connected to your Connectware instance that are not managed by the connectware chart. Agents orchestrated through the connectware chart have already been prepared for shutdown as part of 7. Disabling Agents During the Connectware Upgrade.

Before running the helm upgrade command, you must stop all connected agents. Agents which remain up during this upgrade run the risk of having to go through the agent registration process again.

  • Docker Run: To stop agents which were started using docker run, use the docker stop command. If you are not aware of the name these containers use, run the docker ps command to find out.

  • Docker Compose: If your agents are running in Docker Compose, use the docker compose down command to stop them.

  • Agent Helm Chart: You can shut down agents that have been installed via the connectware-agent Helm chart using this command:

kubectl get -n [your-namespace] sts -lapp.kubernetes.io/component=protocol-mapper-agent -o name | xargs -I % kubectl scale -n [your-namespace] % --replicas 0
8. Removing the PostgreSQL StatefulSet
  • Before running the helm upgrade command, you must remove the postgresql StatefulSet:

kubectl -n [your-namespace] delete sts postgresql
9. Initial Connectware Upgrade

You can now start the first of two upgrade processes. This first upgrade run applies the new Connectware 2.0.0 workloads and prepares the system for the required database migration.

  • To upgrade Connectware, enter the following command:

helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version <2.0.0> -f <values.yaml>

Result: The newly generated workload definitions are applied to your Kubernetes cluster and your Connectware pods are replaced.

10. Shutting Down Connectware
  • Wait for the system-control-server deployment to contain a ready pod, then shut down Connectware to restore your PostgreSQL database:

kubectl get -n [your-namespace] deploy,sts -lapp.kubernetes.io/part-of=connectware -o name | xargs -I % kubectl scale -n [your-namespace] % --replicas 0
11. Restoring the PostgreSQL Database
  1. Note down the Persistent Volume name for your postgresql-postgresql-0 PersistentVolumeClaim. You will need this name later to make sure the volume is not recycled.

kubectl get -n [your-namespace] pvc -o jsonpath='{.spec.volumeName}{"\n"}' postgresql-postgresql-0
  2. Remove the postgresql-postgresql-0 PersistentVolumeClaim:

kubectl -n [your-namespace] delete pvc postgresql-postgresql-0
  3. Remove the PostgreSQL PersistentVolume. You can skip this step if the volume has been automatically deleted through a reclaim policy, or if you are sure a new volume will be used. If the same volume is reused for postgresql, the upgrade will fail.

kubectl delete pv [persistent-volume-name-from-previous-step]
  4. Start PostgreSQL:

kubectl -n [your-namespace] scale sts postgresql --replicas=1
  5. Restore your PostgreSQL Database. Wait for the postgresql-0 pod to become ready, then run:

cat connectware_database.sql | kubectl exec -n [your-namespace] postgresql-0 \
  -i -- psql -U cybus-admin -d cybus_connectware
12. Final Connectware Upgrade after Database Restore

You can start the final upgrade process. This upgrade finalizes the process by starting Connectware with the restored PostgreSQL database.

  • To upgrade Connectware, enter the following command:

helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version <2.0.0> -f <values.yaml>

Optional: You can use the --atomic --timeout 10m command line switch, which will cause Helm to wait for the result of your upgrade and perform a rollback when it fails. We recommend setting the timeout value to at least 10 minutes, but because the time it takes to complete an upgrade strongly depends on your infrastructure and configuration you might have to increase it further.
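For example, the final upgrade with these switches could look like this:

helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version <2.0.0> -f <values.yaml> --atomic --timeout 10m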

Result: The newly generated workload definitions are applied to your Kubernetes cluster and your Connectware pods are replaced.

13. Verifying the Connectware Upgrade

You can monitor the Connectware upgrade progress to verify that everything runs smoothly, to know when the installation is successful, or to investigate potential issues.

Monitoring the Connectware Upgrade

The Connectware upgrade can take a few minutes. To monitor the upgrade process, do one of the following:

  • To monitor the current status of the upgrade process, enter the following command:

kubectl get pods -n <namespace>
  • To monitor the continuous progress of the upgrade process, enter the following command:

while [ True ]; do clear; kubectl get pod -n <namespace>; sleep 5; done
  • To stop monitoring the continuous progress of the upgrade process, press Ctrl+C.

Pod Stages During the Connectware Upgrade

During the Connectware upgrade, the pods go through the following stages:

  • Terminating

  • Pending

  • PodInitializing

  • ContainerCreating

  • Init:x/x

  • Running

When pods reach the STATUS Running, they go through their individual startup before reporting as Ready. To be fully functional, all pods must reach the STATUS Running and report all their containers as ready. This is indicated by them showing the same number on both sides of the / in the column READY.

Example

$ kubectl get pod -n <namespace>
NAME                                     READY   STATUS    RESTARTS   AGE
admin-web-app-7cd8ccfbc5-bvnzx           1/1     Running   0          3h44m
auth-server-5b8c899958-f9nl4             1/1     Running   0          3m3s
broker-0                                 1/1     Running   0          3h44m
broker-1                                 1/1     Running   0          2m1s
connectware-7784b5f4c5-g8krn             1/1     Running   0          21s
container-manager-558d9c4cbf-m82bz       1/1     Running   0          3h44m
doc-server-55c77d4d4c-nwq5f              1/1     Running   0          3h44m
ingress-controller-6bcf66495c-l5dpk      1/1     Running   0          18s
postgresql-0                             1/1     Running   0          3h44m
protocol-mapper-67cfc6c848-qqtx9         1/1     Running   0          3h44m
service-manager-f68ccb767-cftps          1/1     Running   0          3h44m
system-control-server-58f47c69bf-plzt5   1/1     Running   0          3h44m
workbench-5c69654659-qwhgc               1/1     Running   0          15s

At this point Connectware is upgraded and started. You can now make additional configurations or verify the upgrade status in the Admin UI.

5. Enabling Agents in the Connectware Helm Chart

When upgrading to Connectware 2.0.0, protocol-mapper agents must be explicitly enabled again. This requires updating Helm values and configuring TLS certificates — or, in less secure setups, choosing to trust all certificates.

The following steps are required:

  1. Update Helm values to their new equivalents.

  2. Configure TLS (recommended) or opt to trust all certificates (not recommended).

  3. Re-run helm upgrade to apply the changes.

Agents connecting to Connectware must either:

  • Provide a valid CA certificate that matches the server certificate, OR

  • Skip certificate validation by setting CYBUS_TRUST_ALL_CERTS to true (not recommended).

Which Certificate to Use

The certificate that you provide depends on how the agent connects:

  • Via Connectware ingress: Use cybus_ca.crt.

  • Via the internal network: Use shared_yearly_ca.crt.

  • Simplified option that works for both cases: Use cybus_combined_ca.crt.
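If you are unsure which CA file matches the server certificate your agents see, you can check this locally with openssl. This is a sketch and assumes you have copied the certificates from the certs volume, for example with kubectl cp as shown later in this guide:

openssl verify -CAfile cybus_ca.crt cybus_server.crt
openssl verify -CAfile cybus_combined_ca.crt cybus_server.crt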

Use the table below to decide which option applies to your setup:

Overview of Certificate Behavior

| CA certificate | Value of CYBUS_TRUST_ALL_CERTS | Behavior | Log message during control connection |
| --- | --- | --- | --- |
| Not Configured | false | Default: TLS connections, like the control connection to Connectware, will try to use the system trust store to do certificate validation. This will only work when the CAs being used in Connectware are signed by a well-known CA authority or when self-signed CAs were added to the system trust store. | CA certificate not found, using system trusted CAs for NATS connection. |
| Not Configured | true | TLS connections, like the control connection to Connectware, will not validate certificates and will trust all certificates. | CA certificate not found, Connectware configured for trusting all certificates for NATS connection |
| Configured | true | TLS connections, like the control connection to Connectware, will not validate server certificates and will trust any cert. | CA certificate found, but trusting all certificates for NATS connection |
| Configured | false | Recommended: TLS connections, like the control connection to Connectware, will validate server certificates and will not trust any cert. | CA certificate found, using it for NATS connection with CA verification |

What You Need to Do

1. Updating the Helm Values

To re-enable agents, update the Helm values of your connectware installation again by removing the # characters you added, or by adding the values back to the Helm values file. Then add the cybus_combined_ca.crt, or set CYBUS_TRUST_ALL_CERTS to true. If you were directly targeting the MQTT broker or the control-plane-broker before, also move the respective configuration to its new replacement.

The following Helm values have changed. If you had specific configuration for these in the past, update the Helm values accordingly.

For some Helm values, you need to take additional steps depending on your setup. The different scenarios are covered in the following steps.

| Old Helm Value | New Helm Value | Required Change |
| --- | --- | --- |
| global.protocolMapperAgents[*].mTLS.caChain.cert | global.protocolMapperAgents[*].tls.ca.certChain | Move |
| global.protocolMapperAgents[*].mTLS.caChain.existingConfigMap | global.protocolMapperAgents[*].tls.ca.existingConfigMap | Move |
| global.protocolMapperAgents[*].mqttDataHost | global.protocolMapperAgents[*].dataPlane.host | See 1.1 - Directly Targeting MQTT Broker (Scenario A) |
| global.protocolMapperAgents[*].mqttDataPort | global.protocolMapperAgents[*].dataPlane.port | See 1.1 - Directly Targeting MQTT Broker (Scenario A) |
| global.protocolMapperAgents[*].mqttHost | global.protocolMapperAgents[*].streamServer.host | See 1.2 - Directly Targeting Streaming Server (Scenario B) |
| global.protocolMapperAgents[*].mqttPort | global.protocolMapperAgents[*].streamServer.port | See 1.2 - Directly Targeting Streaming Server (Scenario B) |

1.1 - Directly Targeting MQTT Broker (Scenario A)

When deploying agents in the internal network of Connectware, they are able to directly connect to our MQTT broker instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

The major hindrance is that the hostname "broker", used by the MQTT broker on the internal network, is not part of cybus_server.crt by default. In order to connect agents with a TLS connection to this hostname, you either need to add the hostname "broker" as a Subject Alternate Name (SAN) to the certificate, or set CYBUS_TRUST_ALL_CERTS=true for the agent.

Adding the Hostname to the Default Certificate

If you are using the built-in default certificate for Connectware, you can add the hostname "broker" through the Helm value global.ingressDNSNames:

global:
    ingressDNSNames:
        - broker

It is easiest to add this Helm value before running your upgrade to Connectware 2.0.0, since it is then applied automatically as part of the upgrade.

If applying this configuration after already upgrading to Connectware 2.0.0, running helm upgrade on your Connectware installation will cause the system-control-server Deployment to restart. Once it is ready again, restart the broker StatefulSet:

kubectl rollout restart statefulset broker -n [your-namespace]

Configuring Your Agents to Target the MQTT Broker

Next you need to configure your agents to target the MQTT broker directly by using the protocolMapperAgents[*].dataPlane.host Helm value:

global:
    protocolMapperAgents:
        - name: welder-robots
          dataPlane:
              host: broker
        - name: bender-robots
          dataPlane:
              host: broker

The TCP port used for this connection is automatically determined by other configuration, like TLS and mTLS settings. However, if for some reason you need to override this, use the Helm value protocolMapperAgents[*].dataPlane.port.

1.2 - Directly Targeting Streaming Server (Scenario B)

When deploying agents in the internal network of Connectware, they are able to directly connect to our streaming server control plane instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

Configuring Your Agents to Target the Streaming Server

Next you need to configure your agents to target the streaming server directly by using the protocolMapperAgents[*].streamServer.host Helm value. The default internal name is "nats".

global:
    protocolMapperAgents:
        - name: welder-robots
          streamServer:
              host: nats
        - name: bender-robots
          streamServer:
              host: nats

The TCP port used for this connection is automatically determined by other configuration, like mTLS settings. However, if for some reason you need to override this, use the Helm value protocolMapperAgents[*].streamServer.port.

2. Adding the CA Certificate to Your Agent

You need to have the CA certificate that you want to add at hand. In this example, we assume that you are using cybus_combined_ca.crt:

1. Copy cybus_combined_ca.crt from Connectware:

kubectl cp -n [your namespace] postgresql-0:/connectware_certs/cybus_combined_ca.crt cybus_combined_ca.crt

2. Add cybus_combined_ca.crt to Agent Helm Values:

Add the CA certificate to the Helm values of every agent in your connectware installation:

global:
    protocolMapperAgents:
        - name: welder-robots
          tls:
              ca:
                  certChain: |
                      -----BEGIN CERTIFICATE-----
                      MIIFpTCCA40CFEQKP621lWyKwv/7bZGbYEoxrLGdMA0GCSqGSIb3DQEBCwUAMIGN
                      [skipped lines]
                      tTa2qvRLD2J9Eh1KXZ//8IhLc+lIjZsqoPTnhZ7QXZCGwLFdOTEL15mbrgmJOiz/
                      lB0RUj8zolJa
                      -----END CERTIFICATE-----
                      -----BEGIN CERTIFICATE-----
                      MIIGATCCA+mgAwIBAgIUCdqCz7EzCbalj4n7qbxZFxi3XdAwDQYJKoZIhvcNAQEL
                      [skipped lines]
                      ja2TMCBzQSaGyUoLs6Sm2hDD/Y5E6z56Dh7oKQPkoCWjc3+ShF4ilBO9cpyHY0dP
                      CcN5u+A=
                      -----END CERTIFICATE-----
        - name: bender-robots
          tls:
              ca:
                  certChain: |
                      -----BEGIN CERTIFICATE-----
                      MIIFpTCCA40CFEQKP621lWyKwv/7bZGbYEoxrLGdMA0GCSqGSIb3DQEBCwUAMIGN
                      [skipped lines]
                      tTa2qvRLD2J9Eh1KXZ//8IhLc+lIjZsqoPTnhZ7QXZCGwLFdOTEL15mbrgmJOiz/
                      lB0RUj8zolJa
                      -----END CERTIFICATE-----
                      -----BEGIN CERTIFICATE-----
                      MIIGATCCA+mgAwIBAgIUCdqCz7EzCbalj4n7qbxZFxi3XdAwDQYJKoZIhvcNAQEL
                      [skipped lines]
                      ja2TMCBzQSaGyUoLs6Sm2hDD/Y5E6z56Dh7oKQPkoCWjc3+ShF4ilBO9cpyHY0dP
                      CcN5u+A=
                      -----END CERTIFICATE-----

Alternatively you can add it using an existing Kubernetes ConfigMap:

kubectl create -n [your-namespace] configmap my-connectware-ca --from-file cybus_combined_ca.crt
global:
    protocolMapperAgents:
        - name: welder-robots
          tls:
              ca:
                  existingConfigMap: my-connectware-ca
        - name: bender-robots
          tls:
              ca:
                  existingConfigMap: my-connectware-ca
3. (Alternative) Disable TLS Certificate Validation

As an alternative, you can disable TLS certificate validation for agents. This has a negative impact on the security of your TLS connections, allowing man-in-the-middle attacks, but it may be acceptable for development instances or test installations.

This is only possible with agents using the username/password authentication method. If you are using mTLS for your agents, you must configure proper certificates instead.

global:
    protocolMapperAgents:
        - name: welder-robots
          env:
              - name: CYBUS_TRUST_ALL_CERTS
                value: 'true'
        - name: bender-robots
          env:
              - name: CYBUS_TRUST_ALL_CERTS
                value: 'true'
4. Run Helm Upgrade Again

After configuring your chosen method, run helm upgrade on your connectware installation again.

  • To upgrade Connectware, enter the following command:

helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version <2.0.0> -f <values.yaml>

6. Updating Helm Values for the Connectware Agent Helm Chart

Why the Change?

This guide explains how to update agents that use the connectware-agent Helm chart. If you are using agents via Docker, refer to the Docker Guide.

With Connectware 2.0.0, the default handling of certificate chain verification has changed. Previously, protocol-mapper agents required explicit configuration to validate peer certificate chains. Now, certificate chain verification is enabled and enforced by default. While you can revert to the old behavior using a configuration switch, we strongly recommend using a proper TLS certificate chain.

You now must provide the CA certificate signing Connectware's public server certificate cybus_server.crt to agents using the Helm value protocolMapperAgentDefaults.tls.ca.certChain (renamed from protocolMapperAgentDefaults.mTLS.caChain.cert).

Additionally, the control-plane-broker has been replaced with a new streaming-based control plane. Along with this change, the configuration values for both the control plane and the data plane have been redesigned. The new values are intended to be generic and resilient against future technology changes. As a result, several Helm values have been deprecated, renamed, or newly introduced.

What You Need to Do

1. Updating the Helm Values

Obsolete Helm Values (Connectware-Agent Chart)

Some Helm values are obsolete and have been removed. Remove the following Helm values from your values.yaml file for the connectware-agent Helm chart:

  • protocolMapperAgentDefaults.controlPlaneBrokerEnabled

  • protocolMapperAgents[*].controlPlaneBrokerEnabled

  • protocolMapperAgentDefaults.controlPlane

  • protocolMapperAgents[*].controlPlane

  • protocolMapperAgentDefaults.rpcTimeout

  • protocolMapperAgents[*].rpcTimeout

Changed Helm Values (Connectware-Agent Chart)

The following Helm values have changed. If you had specific configuration for these in the past, update the Helm values accordingly.

For some Helm values, you need to take additional steps depending on your setup. The different scenarios are covered in the following steps.

| Old Helm Value | New Helm Value | Required Change |
| --- | --- | --- |
| protocolMapperAgentDefaults.mTLS.caChain.cert | protocolMapperAgentDefaults.tls.ca.certChain | Move |
| protocolMapperAgentDefaults.mTLS.caChain.existingConfigMap | protocolMapperAgentDefaults.tls.ca.existingConfigMap | Move |
| protocolMapperAgents[*].mqtt.tls | protocolMapperAgents[*].dataPlane.tls | Move |
| protocolMapperAgents[*].mqtt.dataHost | protocolMapperAgents[*].dataPlane.host | See 1.1 - Directly Targeting MQTT Broker (Scenario A) |
| protocolMapperAgents[*].mqtt.dataPort | protocolMapperAgents[*].dataPlane.port | See 1.1 - Directly Targeting MQTT Broker (Scenario A) |
| protocolMapperAgents[*].mqtt.controlHost | protocolMapperAgents[*].streamServer.host | See 1.2 - Directly Targeting Streaming Server (Scenario B) |
| protocolMapperAgents[*].mqtt.controlPort | protocolMapperAgents[*].streamServer.port | See 1.2 - Directly Targeting Streaming Server (Scenario B) |

1.1 - Directly Targeting MQTT Broker (Scenario A)

When deploying agents in the internal network of Connectware, they are able to directly connect to our MQTT broker instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

The major hindrance is that the hostname "broker", used by the MQTT broker on the internal network, is not part of cybus_server.crt by default. In order to connect agents with a TLS connection to this hostname, you either need to add the hostname "broker" as a Subject Alternate Name (SAN) to the certificate, or set CYBUS_TRUST_ALL_CERTS=true for the agent.

Adding the Hostname to the Default Certificate

If you are using the built-in default certificate for Connectware, you can add the hostname "broker" through the Helm value global.ingressDNSNames:

global:
    ingressDNSNames:
        - broker

It is easiest to add this Helm value before running your upgrade to Connectware 2.0.0, since it is then applied automatically as part of the upgrade.

If applying this configuration after already upgrading to Connectware 2.0.0, running helm upgrade on your Connectware installation will cause the system-control-server Deployment to restart. Once it is ready again, restart the broker StatefulSet:

kubectl rollout restart statefulset broker -n [your-namespace]

Configuring Your Agents to Target the MQTT Broker

Next you need to configure your agents to target the MQTT broker directly by using the protocolMapperAgentDefaults.dataPlane.host Helm value:

protocolMapperAgentDefaults:
    dataPlane:
        host: broker

The TCP port used for this connection is automatically determined by other configuration, like TLS and mTLS settings. However, if for some reason you need to override this, use the Helm value protocolMapperAgentDefaults.dataPlane.port.

1.2 - Directly Targeting Streaming Server (Scenario B)

When deploying agents in the internal network of Connectware, they are able to directly connect to our streaming server control plane instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

Configuring Your Agents to Target the Streaming Server

Next you need to configure your agents to target the streaming server directly by using the protocolMapperAgentDefaults.streamServer.host Helm value. The default internal name is "nats".

protocolMapperAgentDefaults:
    streamServer:
        host: nats

The TCP port used for this connection is automatically determined by other configuration, like mTLS settings. However, if for some reason you need to override this, use the Helm value protocolMapperAgentDefaults.streamServer.port.

2. Adding the CA Certificate to Your Agent

To connect a protocol-mapper agent with Connectware 2.0.0, you must either provide the agent with the valid CA certificate for the server certificate in use, or disable verification of TLS certificate validity by setting the environment variable CYBUS_TRUST_ALL_CERTS to true on the agent.

Depending on whether you connect an agent through Connectware's ingress or through the internal network, you need to provide either cybus_ca.crt or shared_yearly_ca.crt. If you want to skip this complexity, use the new cybus_combined_ca.crt file, which includes both CA bundles and works for internal and external connections.

The following examples use the method of configuring all agents inside one connectware-agent installation through the protocolMapperAgentDefaults Helm value context. However, you can also configure this using the protocolMapperAgents Helm value context as described in Configuration Principles for the connectware-agent Helm Chart.

You need to have the CA certificate that you want to add at hand. In this example, we assume that you are using the cybus_combined_ca.crt:

  1. Copy cybus_combined_ca.crt from Connectware:

kubectl cp -n [your namespace] postgresql-0:/connectware_certs/cybus_combined_ca.crt cybus_combined_ca.crt
  2. Add the CA certificate cybus_combined_ca.crt to the Helm values of your connectware-agent installation:

protocolMapperAgentDefaults:
    tls:
        ca:
            certChain: |
                -----BEGIN CERTIFICATE-----
                MIIFpTCCA40CFEQKP621lWyKwv/7bZGbYEoxrLGdMA0GCSqGSIb3DQEBCwUAMIGN
                [skipped lines]
                tTa2qvRLD2J9Eh1KXZ//8IhLc+lIjZsqoPTnhZ7QXZCGwLFdOTEL15mbrgmJOiz/
                lB0RUj8zolJa
                -----END CERTIFICATE-----
                -----BEGIN CERTIFICATE-----
                MIIGATCCA+mgAwIBAgIUCdqCz7EzCbalj4n7qbxZFxi3XdAwDQYJKoZIhvcNAQEL
                [skipped lines]
                ja2TMCBzQSaGyUoLs6Sm2hDD/Y5E6z56Dh7oKQPkoCWjc3+ShF4ilBO9cpyHY0dP
                CcN5u+A=
                -----END CERTIFICATE-----

Alternatively, you can add it using an existing Kubernetes ConfigMap:

kubectl create -n [your-namespace] configmap my-connectware-ca --from-file cybus_combined_ca.crt
protocolMapperAgentDefaults:
    tls:
        ca:
            existingConfigMap: my-connectware-ca
3. (Alternative) Disable TLS Certificate Validation

You can choose to disable TLS certificate validation for agents. This is not recommended, as it weakens security and makes your setup vulnerable to man-in-the-middle attacks. However, it may be acceptable in non-production environments such as development or testing.

This option is only available for agents using username/password authentication. If your agents use mTLS, you must configure proper certificates instead.

protocolMapperAgentDefaults:
    env:
        - name: CYBUS_TRUST_ALL_CERTS
          value: 'true'

7. Upgrading Agents for the Connectware Agent Helm Chart

What You Need to Do

This guide explains how to update agents which use the connectware-agent Helm chart. If you are using agents via Docker, follow the Docker upgrade guide for this part.
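A minimal sketch of the corresponding upgrade command, assuming your agents were installed from the same Helm repository using the connectware-agent chart:

helm upgrade -n <namespace> <agent-installation-name> <repo-name>/connectware-agent -f <values.yaml>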

8. Reinstalling Services

Why the Change?

With Connectware 2.0.0, your services and resources are no longer stored on the service-manager volume, but inside the PostgreSQL database.

What You Need to Do

Reinstalling Your Services

After completing the upgrade, you must reinstall all previously used services. You can do this using your preferred method.

Additionally, there have been changes to the relationships between services. Understanding how these interdependencies behave at runtime is crucial for correct deployment and maintenance.

Install parent services first (recommended): If the service depends on another service (parent/child relationship), install the parent service first. This ensures:

  • Service relations are created during installation.

  • Each service can be installed with targetState=enabled.

Install child services first (alternative): It is possible to install the dependent (child) service first, but this comes with limitations:

  • Service relations are only established when the service is enabled.

  • The dependent (child) service can only be installed with targetState=disabled.

For more details, see Service Dependency Behavior and targetState.

Feature-Specific Upgrade Steps

Only follow these if you use the related features, so they continue working after the upgrade.

1. Permissions and Roles

Why the Change?

Permissions allow administrators to define who can access what resources and what actions they can perform. Each permission represents a specific access right to a resource.

Connectware 2.0.0 introduces new permissions. Because of this, custom roles or specific permissions you have set up might not allow users to do everything they could before the 2.0.0 upgrade.

What You Need To Do

Verifying Permissions
  • Check the permissions of your users. Compare them with the default roles in Connectware 2.0.0 and make any updates needed so your users can continue working without interruptions.

For more information on managing permissions, see Permissions.

2. Custom Connectors

Why the Change?

Connectware has evolved its architecture, removing dependencies like VRPC and improving protocol handling. To ensure compatibility, you must update your custom connector implementations.

What You Need To Do

If you are using custom connectors, follow these steps to make your custom connector compatible with Connectware 2.0.0.

1. Remove VRPC

VRPC is no longer supported in the custom connector environment.

  • Remove all VRPC references in the custom connector code. This includes the import and any usage of VrpcAdapter:

Example

// const { VrpcAdapter } = require('vrpc') <- REMOVE THIS
const Connection = require('./FoobarConnection')
const Endpoint = require('./FoobarEndpoint')

// VrpcAdapter.register(Endpoint, { schema: Endpoint.getSchema() }) <- REMOVE THIS
// VrpcAdapter.register(Connection, { schema: Connection.getSchema() }) <- REMOVE THIS
2. Follow the Directory Naming Conventions
  • When defining the Dockerfile, ensure that the destination path for the copied source files ends in a protocol-specific directory name written entirely in lowercase.

Example

# protocol directory must be lowercase
COPY ./src ./src/protocols/foobar
3. Follow the Schema Naming Conventions
  • The schema $id must match the file name (without the .json).

  • The schema must start with a capital letter, like Foobar.

Example

  • In FoobarConnection.json, the $id must be set as follows:

{
  ...
  "$id": "FoobarConnection"
  ...
}
  • In FoobarEndpoint.json, the $id must be set as follows:

{
  ...
  "$id": "FoobarEndpoint"
  ...
}
4. Schema Versioning

Schemas support versioning through the additional version property, which must be a positive integer greater than zero. If this property is omitted, the default value is 1.

Versioning ensures that only the latest version of a schema is considered active and valid. This means that even though all custom connector instances should run the same version of schemas, the latest version will overwrite any previous version in the Connectware control plane.

Example

  • FoobarConnection.json supporting versioning.

{
  ...
  "$id": "FoobarConnection",
  "version": 3
  ...
}
5. Follow the Source Directory Naming Conventions

Follow the case-sensitive naming conventions based on the protocol name.

  • File names must start with an uppercase protocol name (e.g., Foobar).

  • Connection and endpoint suffixes are mandatory.

  • JS files define classes.

  • JSON files define schemas.

Example

src/
├── FoobarConnection.js
├── FoobarConnection.json
├── FoobarEndpoint.js
└── FoobarEndpoint.json
6. Follow the Class Naming Conventions
  • The class name must match the file name, excluding the .js extension.

  • The class name must start with a capital letter, such as Foobar.

Example

  • In FoobarConnection.js, the class must be:

class FoobarConnection extends Connection { ... }
  • In FoobarEndpoint.js, the class must be:

class FoobarEndpoint extends Endpoint { ... }
7. Class Constructors

Unless you need a specific constructor, there is no need to specify one because it is inherited from the parent class. However, if you need to implement a custom constructor for the Connection or Endpoint classes, preserve the following format:

  • In FoobarConnection.js, the class constructor must be like:

class FoobarConnection extends Connection {
    constructor (params) {
        super(params)
        // custom code
    }
}
  • In FoobarEndpoint.js, the class constructor must be like:

class FoobarEndpoint extends Endpoint {
    constructor (params, dataPlaneConnectionInstance, parentConnectionInstance) {
        super(params, dataPlaneConnectionInstance, parentConnectionInstance)
        // custom code
    }
}
8. Do not Set the _topic Property Manually

The _topic property is now handled automatically. Manually assigning it will cause errors.

The following code is invalid and must be removed since topics are now built internally.

// this is invalid, remove it
this._topic = 'this/is/a/topic'
9. ES Modules Not Supported

The standard JavaScript environment of custom connectors is based on CommonJS modules. ES modules are not supported.

10. TypeScript Configuration

TypeScript is not officially supported in development workflows. However, if you want to use TypeScript and compile it to JavaScript, make sure to configure your tsconfig.json file as follows:

{
  "compilerOptions": {
    ....
    "target": "es2022",    /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
    "lib": ["es6"],        /* Specify a set of bundled library declaration files that describe the target runtime environment. */
    "module": "commonjs",  /* Specify what module code is generated. */
    ....
  },
  "include": ["src/**/*.ts", "src/**/*.json", "src/**/*.js", "src/**/*.d.ts", "test/**/*"]
}

Additionally, the compiled JavaScript output must include an exports.default assignment and the exported class itself. This ensures interoperability with our CommonJS-based module system. The compiled .js file should result in:

class FoobarConnection { ... }
exports.default = FoobarConnection;

3. Systemstate Protocol

Why the Change?

To improve performance and reduce unnecessary messaging load, the Systemstate protocol no longer supports whole-service tracking or redundant status events. This simplifies agent responsibilities and avoids misleading lifecycle signals.

What You Need to Do

If you are using the Systemstate protocol, do the following:

1. Stop Tracking Whole Services
  • Tracking the entire service object is no longer allowed. You must update your connector configuration to track individual resources only (like specific endpoints or connections).

Example

# Before (no longer supported)
serviceEndpoint:
    type: Cybus::Endpoint
    properties:
        protocol: Systemstate
        connection: !ref systemStateConnection
        subscribe:
            resourceId: !sub '${Cybus::ServiceId}'
2. Update Event Handling Logic

The following status events have been removed from Systemstate. If your implementation depends on them (e.g., for health monitoring or automation), you must refactor that logic:

  • subscribed/unsubscribed

  • online/offline

4. Log Monitoring

Why the Change?

With version 2.0.0, several log messages have been corrected to fix spelling mistakes. These changes may affect existing log monitoring configurations.

What You Need to Do

Updating Your Log Monitoring

If you rely on log monitoring, review whether your setup references any of the updated log messages and adjust accordingly.

| Type | Log Level | Original (with typo) | Corrected line |
| --- | --- | --- | --- |
| Log message | info | MS Entra Login was succesful, redirecting to | MS Entra Login was successful, redirecting to |
| Log message | debug | DELETE /:id/tokens sucess for user: '<req.params.id>' | DELETE /:id/tokens success for user: '<req.params.id>' |
| Error message | | Views are found, the restore implenetation do not support views! | Views are found, the restore implementation do not support views! |
| Error message | | query paramter error is not a valid HTTP error code (<req.query.code>) | query parameter error is not a valid HTTP error code (<req.query.code>) |
| Log message | debug | Cleared persistance of: | Cleared persistence of: |
| Error message | warn | HttpNode is configured with method 'GET' but operation 'serverRecieves' (instead of serverProvides) | HttpNode is configured with method 'GET' but operation 'serverReceives' (instead of serverProvides) |
| Log message | | Error when trying to recieve OPC-UA Method details from nodeId : <err.message> | Error when trying to receive OPC-UA Method details from nodeId : <err.message> |
| Log message | warn | tried to pass the value as an INT64 and found no matching convertion | tried to pass the value as an INT64 and found no matching conversion |
| Log message | warn | tried to pass the value as an UINT64 and found no matching convertion | tried to pass the value as an UINT64 and found no matching conversion |
| Log message | debug | Sucessfully subscribed to topic: <mqttOpts.topic>. | Successfully subscribed to topic: <mqttOpts.topic>. |
| Log message | error | error occured during shutting down the server | error occurred during shutting down the server |
| Log message | error | expected payload convertion to fail because given payload was not a JSON notation, but 'err == nil' | expected payload conversion to fail because given payload was not a JSON notation, but 'err == nil' |

5. Heidenhain Agents (Windows)

Why the Change?

For Connectware 2.0.0, the Heidenhain protocol has been updated.

What You Need to Do

Installing the Heidenhain Agent

You must upgrade the Windows-based Cybus Heidenhain Agent to work with Connectware 2.0.0.

  1. Uninstall the existing Heidenhain agent installation from your Windows system.

  2. Install the updated Heidenhain agent. You can find the download link at Heidenhain DNC.

6. Auto-Generated MQTT Topics of Resources

Why the Change?

With Connectware 2.0.0, auto-generated MQTT topics no longer include resource-specific properties. This makes the topic generation more unified and explicit. You must update any service commissioning file that hardcodes those old auto-generated topics.

Example of old behavior

Some auto-generated topics contained property-specific parts:

  • S7: services/myService/pressure/address:DB1,REAL6

  • Modbus: services/myService/current/fc:3/address:7

  • HTTP: services/myService/myEndpoint/get[object Object]

These paths might have been referenced inside Cybus::Mapping resources. Using auto-generated topics inside Cybus::Mapping is not recommended. Instead, use references via !ref (Reference Method).

What You Need to Do

Updating Auto-Generated Topic References

Auto-generated topics no longer include resource-specific properties. They always follow:

<Cybus::MqttRoot>/<serviceId>/<resourceName>

Example of new behavior

  • S7: services/myService/pressure

  • Modbus: services/myService/current

  • HTTP: services/myService/myEndpoint

Procedure

  1. Scan your service commissioning files for any usage of auto-generated topics.

  2. Adapt those references by replacing direct topic strings with !ref references.
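For example, a Cybus::Mapping that previously hardcoded the auto-generated S7 topic from the list above can reference the endpoint instead. This is a sketch; pressure is the hypothetical endpoint resource name from the example topics, and the publish topic is a placeholder:

mapping:
    type: Cybus::Mapping
    properties:
        mappings:
            - subscribe:
                  endpoint: !ref pressure
              publish:
                  topic: 'factory/hall1/pressure'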

For more details, see Reference Method (!ref).
