Upgrading Connectware to 2.0.0 (Docker)

Disclaimer

Some Connectware upgrades require you to follow additional steps beyond the standard upgrade procedure.

When upgrading your Connectware instance, follow the required upgrade path based on your current version:

  • If you are running Connectware version 1.4.1 or below:

    1. First upgrade to version 1.5.0

    2. Then upgrade to version 1.7.0

    3. Finally upgrade to version 2.0.0

  • If you are running a Connectware version between 1.5.0 and 1.6.2:

    1. First upgrade to version 1.7.0

    2. Then upgrade to version 2.0.0

  • If you are running Connectware version 1.7.0 or newer:

    • You can directly upgrade to version 2.0.0

For detailed instructions on each upgrade step, refer to the upgrade guide for the respective version.

Before You Begin

Upgrading to Connectware 2.0.0 introduces significant improvements in performance, scalability, and reliability. However, these changes also come with updated requirements for versions, networking, hardware, and storage.

This guide outlines the prerequisites and known limitations you must consider to ensure a smooth and successful upgrade.

Connectware Version Requirements

To be able to upgrade to Connectware 2.0.0, your Connectware version must be 1.7.0 or above.

If your Connectware installation is below 1.7.0, make sure that you have followed Upgrading Connectware to 1.7.0 (Docker) before upgrading to 2.0.0.

Network Requirements

Why the Change?

With Connectware 2.0.0, some internal components have been updated to improve communication and performance. As a result, the network configuration has changed:

  • Added: TCP/4222 and TCP/4223

  • Removed: TCP/1884 and TCP/8884

What You Need to Do

Updating the Network Ports

Verify that your firewalls and security rules are updated to allow the new ports (TCP/4222 and TCP/4223) and to remove dependencies on the deprecated ones (TCP/1884 and TCP/8884).

This ensures uninterrupted connectivity between your agents and Connectware.
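
For example, on a host that uses firewalld, the rule update could look like the following sketch (firewalld is an assumption; adapt the commands to your firewall tooling):

# Allow the new Connectware 2.0.0 ports
sudo firewall-cmd --permanent --add-port=4222/tcp
sudo firewall-cmd --permanent --add-port=4223/tcp

# Remove rules for the deprecated ports
sudo firewall-cmd --permanent --remove-port=1884/tcp
sudo firewall-cmd --permanent --remove-port=8884/tcp

sudo firewall-cmd --reload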

Hardware Requirements

Why the Change?

Connectware 2.0.0 makes increased use of its PostgreSQL database and adds new components, which requires additional computing power. When planning this upgrade, ensure that your infrastructure can accommodate the increased resource requirements.

What You Need to Do

Updating the Hardware Setup

We recommend adding the following resources to your hardware setup:

  • 7 CPU cores

  • 6 GB of memory

  • 20 Gi of storage

However, these are general guidelines. Check what your specific system needs and make adjustments accordingly.

Storage Requirements

Why the Change?

We have added two new components to Connectware:

  • A streaming server called NATS

  • A service called resource-status-tracking

Alongside other improvements, these additions enable Connectware to scale effectively for much larger deployments.

Known Limitations

  1. Adding Certificates Through Admin UI Not Supported

  • You cannot add certificates to Connectware's CA bundle via the Admin UI.

  • Instead, modify the cybus_ca.crt file directly on the certs volume.

  2. Backup via Admin UI Not Supported

  • The backup functionality through Admin UI is not supported.

  • Instead, create backups of the database by running a pg_dump command on the postgresql container.

  • When running the example command below, make sure that only a single Connectware instance is running on the host. Otherwise, the label filter matches more than one container and you must specify the postgresql container manually.

Example
docker exec \
    $(docker container ls -q -f "label=io.cybus.connectware=core" -f "label=com.docker.compose.service=postgresql") \
    bash -c "pg_dump -U cybus-admin --if-exists -c cybus_connectware" \
    > connectware_database.sql

Upgrade Procedure

Follow this procedure to upgrade your Connectware installation to version 2.0.0. The steps are divided into two parts:

  • Mandatory Upgrade Steps: Required for all installations to ensure a smooth and stable upgrade.

  • Feature-Specific Upgrade Steps: Only needed if you use certain features, so they remain compatible with Connectware 2.0.0.

The following sections provide an overview of all upgrade steps.

Mandatory Upgrade Steps

These steps apply to every Connectware installation upgrading to Connectware 2.0.0. For a detailed guide, see Mandatory Upgrade Steps.

Depending on your setup, you may also need to perform additional Conditional Steps.

  1. TLS Changes: Default behavior on certificate validation has been adjusted.

  2. Update .env Configuration: Remove any obsolete environment variables and update the configuration for changed or new environment variables as necessary.

  3. Upgrading to Connectware 2.0.0: Download and install Connectware 2.0.0.

  4. Updating Agent Configuration: Update your agent configuration to comply with the new requirements.

  5. Upgrading Agents: Upgrade your agents.

  6. Reinstalling Services: This upgrade changes where your services are stored. You will need to reinstall any services after the upgrade.

Feature-Specific Upgrade Steps

Only follow these if you use the related features, so they continue working after the upgrade.

  1. Roles and Permissions: New permissions were added to Connectware. Verify whether your custom roles require updates.

  2. Custom Connectors: Update your custom connector configurations to meet new requirements.

  3. Systemstate Protocol: Update your Systemstate protocol configurations to meet new requirements.

  4. Log Monitoring: Some log messages have changed. If you use log monitoring, you may need to update it.

  5. Heidenhain Agents: Upgrade your Heidenhain agents.

  6. Auto-Generated MQTT Topics of Resources: Topic generation no longer includes resource-specific properties. Update your service commissioning files if you relied on old patterns.

Mandatory Upgrade Steps

These steps are required to upgrade your Connectware installation to Connectware 2.0.0.

1. TLS Changes

Why the Change?

To enhance security by default, Connectware agents now verify TLS certificate chains automatically. This ensures that all components communicate over a valid trust chain, while still giving you the option to keep the old behavior by explicitly disabling TLS verification.

Key Changes

1. Introducing the cybus_combined_ca.crt

Connectware maintains two separate CA chains:

  • External certificates validated by cybus_ca.crt.

  • Internal certificates validated by shared_yearly_ca.crt.

Which CA an agent requires depends on whether it connects from inside or outside the Connectware network.

To simplify configuration, we introduced cybus_combined_ca.crt, a bundle containing both chains, so agents can use a single file without needing to distinguish between internal and external CA certificates.

2. Certificate Chain Verification in Agents

Agents now enforce TLS chain validation by default. Each agent requires access to cybus_combined_ca.crt, available on the certs volume.

  • To revert to the previous behavior (skipping verification), set the environment variable CYBUS_TRUST_ALL_CERTS to true. Note that it has been renamed from TRUST_ALL_CERTS.

3. Configuring Certificate Hostnames

The default Connectware-generated CA includes the hostnames localhost and connectware.

  • To add more hostnames, configure a comma separated list in the environment variable CYBUS_INGRESS_DNS_NAMES.

  • You will also be prompted for these names as part of running the Connectware installer.

4. Renewal of Certificate Chains

With 2.0.0, the internal CA chain is replaced:

  • Certificate Authority renamed from CybusCA to CybusInternalCA.

  • The hostname nats is added as a Subject Alternate Name (SAN) to shared_yearly_server.crt.

The built-in default external CA certificate chain is also replaced.

  • The hostname connectware is added as a SAN to cybus_server.crt.

If you rely on monitoring, custom setups, or modified certificates, adapt your configuration accordingly.

5. Replacing CA Certificate Chain

To replace Connectware’s default external chain with your enterprise-managed CA:

  • Replace cybus_ca.crt with your enterprise CA certificate.

  • Ensure cybus_server.crt and cybus_server.key form a valid key pair, signed by the CA in cybus_ca.crt.

Do not replace the internal CA (shared_yearly_ca.crt).

After replacement:

  1. Restart system-control-server.

  2. Restart all Connectware services.
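
As a minimal sketch, run the following from your Connectware installation directory. The file names enterprise_ca.crt, enterprise_server.crt, and enterprise_server.key are hypothetical placeholders for your enterprise-managed files; the /connectware_certs path inside the system-control-server container is the same path used for copying cybus_combined_ca.crt later in this guide:

# Identify the system-control-server container
SCS_CONTAINER=$(docker container ls -q -f "label=io.cybus.connectware=core" -f "label=com.docker.compose.service=system-control-server")

# Replace the external CA chain and the matching server key pair on the certs volume
docker cp enterprise_ca.crt "$SCS_CONTAINER":/connectware_certs/cybus_ca.crt
docker cp enterprise_server.crt "$SCS_CONTAINER":/connectware_certs/cybus_server.crt
docker cp enterprise_server.key "$SCS_CONTAINER":/connectware_certs/cybus_server.key

# Restart system-control-server, then all Connectware services
docker compose restart system-control-server
docker compose down && docker compose up -d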

2. Update .env Configuration

Why the Change?

Some changes to Connectware require updating your environment variable configuration. Adapt your .env file accordingly.

  • All parameters to tune the inter-service communication have been removed.

Obsolete Environment Variables

Removing Obsolete Environment Variables

Some environment variables are obsolete and have been removed. Remove the following environment variables from your .env file for Connectware:

  • CYBUS_CM_RPC_TIMEOUT

  • CYBUS_ADMIN_WEB_APP_VRPC_TIMEOUT

  • CYBUS_PM_RPC_TIMEOUT

  • CYBUS_SM_RPC_TIMEOUT

  • CYBUS_SCS_RPC_TIMEOUT

  • CYBUS_USE_SERVICES_GRAPH
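
A minimal sketch of this cleanup, assuming your .env file lives in /opt/connectware and you are on a system with GNU sed (back up the file first):

cd /opt/connectware
cp .env .env.bak

# Delete the obsolete variables from .env
sed -i -E '/^(CYBUS_CM_RPC_TIMEOUT|CYBUS_ADMIN_WEB_APP_VRPC_TIMEOUT|CYBUS_PM_RPC_TIMEOUT|CYBUS_SM_RPC_TIMEOUT|CYBUS_SCS_RPC_TIMEOUT|CYBUS_USE_SERVICES_GRAPH)=/d' .env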

New Environment Variables

1. Ingress DNS Name Configuration

With the changed TLS behavior in Connectware, it has become essential to add the DNS names under which Connectware is addressed, for example by agents.

If you are replacing Connectware's default PKI, you have likely already managed this yourself by providing a valid cybus_server.crt containing all Subject Alternate Names (SANs) used within your setup.

If you are using Connectware's default PKI, you can use the new environment variable CYBUS_INGRESS_DNS_NAMES, which is a comma separated list of names that will be added to the default cybus_server.crt.

Hostname Formats

You can include multiple hostnames in the list. The certificate will include all specified names in its SAN section.

The configuration accepts various hostname formats:

  • Wildcards (e.g., *.company.io)

  • Subdomains (e.g., connectware.company.io)

  • Custom hostnames (e.g., localhost)

  • IP addresses (e.g. 192.168.100.42)

Example

CYBUS_INGRESS_DNS_NAMES=connectware.company.io,*.company.io,192.168.100.42

The Connectware installer will also ask you for this value.

3. Preparing the Connectware Upgrade

Why the Change?

Connectware 2.0.0 introduces architectural improvements that require you to remove or adjust certain resources before running the upgrade. This ensures a clean and successful upgrade process.

What You Need to Do

1. Backing Up Your PostgreSQL Database

With Connectware 2.0.0, Connectware uses a new major version of PostgreSQL. You must delete your postgresql volume before upgrading Connectware (this is covered later in this upgrade guide). This requires you to create a backup of your database and restore it after the upgrade.

  1. To create a backup of your database, first identify the correct container from the NAMES column:

docker container ls -f "label=io.cybus.connectware=core" -f "label=com.docker.compose.service=postgresql"

If more than one container is shown, you need to identify the correct container. The prefix of the container is usually the folder name in which your Docker Composition is stored. For example, if you installed it in /opt/connectware, the name of the container would be connectware-postgresql-1.

  2. Create a backup of the database in the file connectware_database.sql, replacing [container-name-from-step-1] with the name of the container identified in step 1.

docker exec \
    [container-name-from-step-1] \
    bash -c "pg_dump -U cybus-admin --if-exists -c cybus_connectware" \
    > connectware_database.sql
  3. Make sure the backup is successful, then store the database file in a secure location.

2. Service-Manager Volume Backup & Removal

The service-manager volume is deprecated and will not be used after the upgrade to 2.0.0.

After upgrading, you can remove this Docker volume. You can identify the volume using this command:

docker volume ls -f "label=com.docker.compose.volume=service-manager"

After the upgrade, the contents formerly stored on this volume are kept in the PostgreSQL database, but they are not migrated automatically. You must reinstall any services that you previously used. See Reinstalling Services.

If you do not have your services stored outside of Connectware, make sure to export your services, or create a backup of your service-manager volume before upgrading.
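
A minimal sketch of such a backup, assuming the volume identified above is named connectware_service-manager:

# Archive the volume contents into the current directory
docker run --rm \
    -v connectware_service-manager:/volume:ro \
    -v "$(pwd)":/backup \
    alpine tar czf /backup/service-manager-backup.tar.gz -C /volume .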

4. Upgrading to Connectware 2.0.0

1. Reviewing the Connectware Changelog

Before upgrading to Connectware 2.0.0, review the changelog to familiarize yourself with new features, bug fixes, and other changes introduced in Connectware 2.0.0.

2. Verifying your Backups

Make sure that you store backups of your setup. This allows you to restore a previous state if necessary.

Your backups must consist of the following files:

  • All Docker volumes

  • Your Connectware database

  • Your .env file

  • All service commissioning files

Depending on your local infrastructure, it may be necessary to back up additional files.

3. Shutting Down Protocol-Mapper Agents

Before running the upgrade, you must stop all connected agents. Any agents that remain active during the upgrade will have to go through the registration process again.

  • Docker Run: To stop agents which were started using docker run, use the docker stop command. If you are not aware of the name these containers use, run the docker ps command to find out (see the sketch after this list).

  • Docker Compose: If your agents are running in Docker Compose, use the docker compose down command to stop them.

  • Agent Helm Chart: You can shut down agents that have been installed via the connectware-agent Helm chart using this command:

kubectl get -n [your-namespace] sts -lapp.kubernetes.io/component=protocol-mapper-agent -o name | xargs -I % kubectl scale -n [your-namespace] % --replicas 0
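
For example, stopping Docker-based agents could look like the following sketch (the container name protocol-mapper-agent is illustrative; use docker ps to find yours):

# Find the names of your agent containers
docker ps

# Stop an agent that was started with docker run (container name is illustrative)
docker stop protocol-mapper-agent

# For agents managed by Docker Compose, run this from the agent's compose directory
docker compose down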
4. Shutting Down Connectware
  • Before running the installer, you must shut down Connectware.

  1. Make sure you enter the directory in which you installed Connectware, where your docker-compose.yaml and .env files are located. This is likely /opt/connectware.

  2. Shut down Connectware:

docker compose down
5. Removing the PostgreSQL Volume
  • Before running the installer, you must remove the postgresql volume.

  1. Identify the correct volume to delete:

docker volume ls -f "label=com.docker.compose.volume=postgresql"

If this shows more than one volume, you must identify the correct volume. The prefix of this volume is usually the folder name in which your Docker Composition is stored. For example, if you installed in /opt/connectware, the name of this volume would be connectware_postgresql.

  2. Delete the postgresql volume:

docker volume rm [volume-identified-in-step-1]
6. Initial Connectware Upgrade
  • To upgrade Connectware to a newer version, follow the steps in the Prepare Installer Script to get the latest installer script. When running the update, select your current Connectware installation directory.

The update will automatically preserve your existing configuration, including your license key and network settings. If you are prompted to enter a license key during the update, this usually means that you have selected the wrong installation directory. In this case, cancel the update and verify you have chosen the correct directory.

If you originally installed Connectware with sudo privileges, make sure that you use sudo again when running the update.

Downgrading to previous Connectware versions is not supported.

Upgrading Connectware in Silent Mode

The installer supports an automated deployment mode that requires no manual intervention. You can activate it by passing -s or --silent together with -d (directory) when running the installation script.

If you need to customize your installation, the script offers several configuration options. Run the installer with --help to view all available parameters.

Example

./connectware-online-installer.sh -s -d <PATH/TO/YOUR/CONNECTWARE/FOLDER>

The installer will tell you that you are ready to run docker compose up now. However, before you do this, there are some additional steps that need to be completed.

7. Running Database Restore & Starting Connectware
  1. Create a file called docker-compose.override.yaml and add the following content:

services:
    admin-web-app:
        profiles:
            - do-not-start
    auth-server:
        profiles:
            - do-not-start
    broker:
        profiles:
            - do-not-start
    connectware:
        profiles:
            - do-not-start
    container-manager:
        profiles:
            - do-not-start
    doc-server:
        profiles:
            - do-not-start
    ingress-controller:
        profiles:
            - do-not-start
    protocol-mapper:
        profiles:
            - do-not-start
    service-manager:
        profiles:
            - do-not-start
    workbench:
        profiles:
            - do-not-start
    resource-status-tracking:
        profiles:
            - do-not-start
    nats:
        profiles:
            - do-not-start

If you are already using a file docker-compose.override.yaml, make sure to temporarily rename this file for the upgrade.

  2. Start the Docker Composition:

docker compose up -d
  3. Once the postgresql container has started, restore the database (reuse the container name previously identified when backing up the database):

cat connectware_database.sql | docker exec -i \
    [container-name] \
    bash -c "psql -U cybus-admin -d cybus_connectware"
  4. Stop the Docker Composition:

docker compose down
  5. Remove docker-compose.override.yaml:

rm docker-compose.override.yaml

If you were already using a file docker-compose.override.yaml, make sure to restore it now.

  6. Start Connectware 2.0.0:

docker compose up -d
  7. Restart Connectware 2.0.0:

Finally, restart Connectware once more to ensure any certificate updates are properly applied:

docker compose down && docker compose up -d

5. Updating Agent Configuration

Why the Change?

This guide explains how to update agents that use Docker. If you are using agents via the connectware-agent Helm chart, refer to the Kubernetes Guide.

With Connectware 2.0.0, the default handling of certificate chain verification has changed. Previously, protocol-mapper agents required explicit configuration to validate peer certificate chains. Now, certificate chain verification is enabled and enforced by default. While you can revert to the old behavior using a configuration switch, we strongly recommend using a proper TLS certificate chain.

You must now provide agents with the CA certificate that signs Connectware's public server certificate (cybus_server.crt).

Additionally, the control-plane-broker has been replaced with a new streaming-based control plane. Along with this change, the configuration values for both the control plane and the data plane have been redesigned. The new values are intended to be generic and resilient against future technology changes. As a result, several environment variables have been deprecated, renamed, or newly introduced.

What You Need to Do

Because Connectware agents are single containers, they can be orchestrated in many ways, which exceeds the scope of this upgrade guide. We provide examples for Docker Compose orchestration and trust that you can adapt these to the orchestrator of your choice. Contact Cybus Support for additional assistance.

1. Adding the CA Certificate to Your Agent

To connect a protocol-mapper agent with Connectware 2.0.0, you must either provide the agent with the valid CA certificate for the server certificate in use, or disable verification of TLS certificate validity by setting the environment variable CYBUS_TRUST_ALL_CERTS to true on the agent.

Whether you connect an agent via Connectware's ingress or the internal network determines whether you need to provide cybus_ca.crt or shared_yearly_ca.crt. However, if you want to avoid this complexity, there is a new file called cybus_combined_ca.crt which includes both CA bundles and allows both internal and external connections.

You need to have the CA certificate that you want to add at hand. In this example, we assume that you are using the cybus_combined_ca.crt:

  1. Identify the correct container from the NAMES column:

docker container ls -f "label=io.cybus.connectware=core" -f "label=com.docker.compose.service=system-control-server"
  2. Copy cybus_combined_ca.crt from Connectware:

docker cp [container-name-from-step-1]:/connectware_certs/cybus_combined_ca.crt cybus_combined_ca.crt
  3. Copy the CA certificate cybus_combined_ca.crt to the directory which contains the docker-compose.yaml file for your agent:

Example using /opt/connectware-agent/ as directory:

cp cybus_combined_ca.crt /opt/connectware-agent/
  4. Mount the CA certificate cybus_combined_ca.crt to the /connectware/certs/ca/ca-chain.pem mount point of your agent by adding a volume in docker-compose.yaml:

Example using /opt/connectware-agent/ as directory:

services:
    protocol-mapper-agent:
        image: registry.cybus.io/cybus/protocol-mapper:2.0.0
        environment:
            CYBUS_AGENT_MODE: distributed
            CYBUS_AGENT_NAME: my-docker-compose-agent
            CYBUS_HOSTNAME_INGRESS: localhost
        volumes:
            - protocol-mapper-agent:/data
            - ./cybus_combined_ca.crt:/connectware/certs/ca/ca-chain.pem # Mount your cybus_combined_ca to the agent
        restart: unless-stopped
        network_mode: host
        hostname: my-docker-compose-agent
volumes:
    protocol-mapper-agent:
2. (Alternative) Disable TLS Certificate Validation

You can choose to disable TLS certificate validation for agents. This is not recommended, as it weakens security and makes your setup vulnerable to man-in-the-middle attacks. However, it may be acceptable in non-production environments such as development or testing.

This option is only available for agents using username/password authentication. If your agents use mTLS, you must configure proper certificates instead.

services:
    protocol-mapper-agent:
        image: registry.cybus.io/cybus/protocol-mapper:2.0.0
        environment:
            CYBUS_AGENT_MODE: distributed
            CYBUS_AGENT_NAME: my-docker-compose-agent
            CYBUS_HOSTNAME_INGRESS: localhost
            CYBUS_TRUST_ALL_CERTS: true # disable certificate validation
        volumes:
            - protocol-mapper-agent:/data
        restart: unless-stopped
        network_mode: host
        hostname: my-docker-compose-agent
volumes:
    protocol-mapper-agent:
3. Updating Environment Variables

Obsolete Environment Variables (Agents)

Some environment variables are obsolete and have been removed. Remove the following environment variables from your agent orchestration:

  • CYBUS_PM_RPC_TIMEOUT

  • CYBUS_CONTROLPLANE_URI

New or Changed Environment Variables (Agents)

The following environment variables have changed. If you had specific configuration for these in the past, update your orchestration accordingly.

For some environment variables, you need to take additional steps depending on your setup. The different scenarios are covered in the following steps.

The following list shows the old environment variable, the new environment variable it maps to, and the required change:

  • (new) CYBUS_DATAPLANE_USE_TLS: Set to true if you want your agents to use TLS encryption for the MQTT data plane.

  • USE_MUTUAL_TLS → CYBUS_USE_MUTUAL_TLS: Set to true if you want your agents to use mTLS authentication for the MQTT data plane.

  • TRUST_ALL_CERTS → CYBUS_TRUST_ALL_CERTS: Set to true if you want your agents to skip TLS certificate validation.

  • CYBUS_DATA_MQTT_HOST → CYBUS_DATAPLANE_HOST: See 3.2 - Directly Targeting MQTT Broker (Scenario A).

  • CYBUS_DATA_MQTT_PORT → CYBUS_DATAPLANE_PORT: See 3.2 - Directly Targeting MQTT Broker (Scenario A).

  • CYBUS_MQTT_HOST → CYBUS_STREAMSERVER_HOST: See 3.3 - Directly Targeting Streaming Server (Scenario B).

  • CYBUS_MQTT_PORT → CYBUS_STREAMSERVER_PORT: See 3.3 - Directly Targeting Streaming Server (Scenario B).

  • CYBUS_MQTT_SCHEME → CYBUS_DATAPLANE_SCHEME and CYBUS_STREAMSERVER_SCHEME: See 3.2 - Directly Targeting MQTT Broker (Scenario A) and 3.3 - Directly Targeting Streaming Server (Scenario B).

3.1 - Updating Connectware Ingress Targeting

Connectware 2.0.0 changes how you address your Connectware instance with agents.

Previously, the environment variable CYBUS_MQTT_HOST was used. Later, CYBUS_HOSTNAME_INGRESS was introduced for targeting the ingress, while CYBUS_MQTT_HOST was used for targeting the control-plane-broker. Additionally, CYBUS_DATA_MQTT_HOST was introduced to control the MQTT broker that the agent connected to as data plane. CYBUS_MQTT_HOST acted as a fallback for all three environment variables.

With the removal of the control-plane-broker, we are simplifying and decoupling the configuration:

  • If you are only using the Connectware ingress for your agents, you must only configure CYBUS_HOSTNAME_INGRESS.

  • If you have a more complex setup that targeted the MQTT data plane broker or the control-plane-broker, use CYBUS_DATAPLANE_HOST and CYBUS_STREAMSERVER_HOST to refine your configuration, as explained in the next steps.

In short, target your Connectware instance using the CYBUS_HOSTNAME_INGRESS environment variable and replace any legacy CYBUS_MQTT_HOST configuration that was not intended to directly target the MQTT data plane broker.

services:
    protocol-mapper-agent:
        image: registry.cybus.io/cybus/protocol-mapper:2.0.0
        environment:
            CYBUS_AGENT_MODE: distributed
            CYBUS_AGENT_NAME: my-docker-compose-agent
            CYBUS_HOSTNAME_INGRESS: localhost # make sure you use CYBUS_HOSTNAME_INGRESS as the general Connectware target for your agent
            CYBUS_DATAPLANE_HOST: broker # use new environment variables for more complex network setups
        volumes:
            - protocol-mapper-agent:/data
            - ./cybus_combined_ca.crt:/connectware/certs/ca/ca-chain.pem
        restart: unless-stopped
        hostname: my-docker-compose-agent
        network_mode: host
volumes:
    protocol-mapper-agent:
3.2 - Directly Targeting MQTT Broker (Scenario A)

When deploying agents in the internal network of Connectware, they are able to directly connect to our MQTT broker instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

The major hindrance is that the hostname "broker", which is used by the MQTT broker on the internal network, is not part of cybus_server.crt by default. To connect agents to this hostname over TLS, you either need to add the hostname "broker" as a Subject Alternate Name (SAN) to the certificate, or set CYBUS_TRUST_ALL_CERTS=true for the agent. The previous steps explained how to add the CA certificate bundle file and how to set the environment variable.

Adding the Hostname to the Default Certificate

If you are using the built-in default certificate for Connectware, you can add the hostname "broker" through the environment variable CYBUS_INGRESS_DNS_NAMES in the .env file of your Connectware installation:

CYBUS_INGRESS_DNS_NAMES=connectware.company.io,broker

It is easiest to add this configuration during your upgrade to Connectware 2.0.0, since the restarts required to activate it are already part of the upgrade procedure. The installer script will ask you for the ingress hostnames. You can also set this via the --ingress-dns-names parameter of the Connectware installer.
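
For example, a non-interactive upgrade that sets the hostnames could look like the following sketch (assuming the parameter accepts the same comma-separated format as CYBUS_INGRESS_DNS_NAMES):

./connectware-online-installer.sh -s -d /opt/connectware --ingress-dns-names "connectware.company.io,broker"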

If you apply this configuration after already upgrading to Connectware 2.0.0, running docker compose up -d on your Connectware installation will cause multiple containers to restart. Once they are ready, restart Connectware again:

docker compose down
docker compose up -d

Configuring Your Agents to Target the MQTT Broker

Next, you must configure your agents to target the MQTT broker directly by using the CYBUS_DATAPLANE_* environment variables. To use TLS encryption for this connection, you must set CYBUS_DATAPLANE_USE_TLS to true and provide the agent with the CA certificate bundle, as explained previously.

Docker Compose Example:

To add a Docker Composition to an existing network, you must add it as an external network. For this, you need to know the name of the network.

This example assumes the name is connectware_cybus. However, you can find it using this command (NAME column):

docker network ls -f "label=com.docker.compose.network=cybus"
services:
    protocol-mapper-agent:
        image: registry.cybus.io/cybus/protocol-mapper:2.0.0
        environment:
            CYBUS_AGENT_MODE: distributed
            CYBUS_AGENT_NAME: my-docker-compose-agent
            CYBUS_HOSTNAME_INGRESS: connectware
            CYBUS_DATAPLANE_HOST: broker
            CYBUS_DATAPLANE_USE_TLS: true
        volumes:
            - protocol-mapper-agent:/data
            - ./cybus_combined_ca.crt:/connectware/certs/ca/ca-chain.pem
        restart: unless-stopped
        hostname: my-docker-compose-agent
        networks:
            - connectware_cybus # name from previous step
        extra_hosts:
            - 'connectware:host-gateway' # this extra host ensures the agent is still able to make HTTP API calls
volumes:
    protocol-mapper-agent:
networks:
    connectware_cybus: # name from previous step
        external: true

The TCP port used for this connection is automatically determined by other configuration such as TLS and mTLS settings. However, if for some reason you need to override this, use the environment variable CYBUS_DATAPLANE_PORT.

3.3 - Directly Targeting Streaming Server (Scenario B)

When deploying agents in the internal network of Connectware, they are able to directly connect to our streaming server control plane instead of going through the Connectware ingress.

This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.

Configuring Your Agents to Target the Streaming Server

Next, you must configure your agents to target the streaming server directly by using the CYBUS_STREAMSERVER_HOST environment variable. The default internal name is "nats".

Docker Compose Example:

To add a Docker Composition to an existing network, you must add it as an external network. For this, you need to know the name of the network.

This example assumes the name is connectware_cybus. However, you can find it using this command (NAME column):

docker network ls -f "label=com.docker.compose.network=cybus"
services:
    protocol-mapper-agent:
        image: registry.cybus.io/cybus/protocol-mapper:2.0.0
        environment:
            CYBUS_AGENT_MODE: distributed
            CYBUS_AGENT_NAME: my-docker-compose-agent
            CYBUS_HOSTNAME_INGRESS: connectware
            CYBUS_STREAMSERVER_HOST: nats
        volumes:
            - protocol-mapper-agent:/data
            - ./cybus_combined_ca.crt:/connectware/certs/ca/ca-chain.pem
        restart: unless-stopped
        hostname: my-docker-compose-agent
        networks:
            - connectware_cybus # name from previous step
        extra_hosts:
            - 'connectware:host-gateway' # this extra host ensures the agent is still able to make HTTP API calls
volumes:
    protocol-mapper-agent:
networks:
    connectware_cybus: # name from previous step
        external: true

The TCP port used for this connection is automatically determined by other configuration like mTLS settings. However, if for some reason you need to override this, use the environment variable CYBUS_STREAMSERVER_PORT.

If you are not using the cybus_combined_ca.crt for your agents, targeting the streaming server requires you to add the shared_yearly_ca.crt, not the cybus_ca.crt.

6. Upgrading Agents

What You Need to Do

This guide explains how to update agents which use Docker Compose. If you are using agents via the connectware-agent Helm chart, follow the Kubernetes upgrade guide for this part.

  1. Ensure that you have followed the previous step to prepare the agents by adjusting their configuration to the changes introduced with Connectware 2.0.0.

  2. Enter the directory in which the docker-compose.yaml file for your agents is stored.

  3. Modify the agent's docker-compose.yaml file and replace the image tag with 2.0.0.

Example

services:
    protocol-mapper-agent:
        image: registry.cybus.io/cybus/protocol-mapper:2.0.0 # update the image tag
        environment:
            CYBUS_AGENT_MODE: distributed
            CYBUS_AGENT_NAME: my-docker-compose-agent
            CYBUS_HOSTNAME_INGRESS: connectware
        volumes:
            - protocol-mapper-agent:/data
            - ./cybus_combined_ca.crt:/connectware/certs/ca/ca-chain.pem
        restart: unless-stopped
        network_mode: host
        hostname: my-docker-compose-agent
volumes:
    protocol-mapper-agent:
  4. Run docker compose up.

7. Reinstalling Services

Why the Change?

With Connectware 2.0.0, your services and resources are no longer stored on the service-manager volume, but inside the PostgreSQL database.

What You Need to Do

Reinstalling Your Services

After completing the upgrade, you must reinstall all previously used services. You can do this using your preferred method.

Additionally, there have been changes to the relationships between services. Understanding how these interdependencies behave at runtime is crucial for correct deployment and maintenance.

Install parent services first (recommended): If the service depends on another service (parent/child relationship), install the parent service first. This ensures:

  • Service relations are created during installation.

  • Each service can be installed with targetState=enabled.

Install child services first (alternative): It is possible to install the dependent (child) service first, but this comes with limitations:

  • Service relations are only established when the service is enabled.

  • The dependent (child) service can only be installed with targetState=disabled.

For more details, see Service Dependency Behavior and targetState.

Feature-Specific Upgrade Steps

Only follow these if you use the related features, so they continue working after the upgrade.

1. Permissions and Roles

Why the Change?

Permissions allow administrators to define who can access what resources and what actions they can perform. Each permission represents a specific access right to a resource.

Connectware 2.0.0 introduces new permissions. Because of this, custom roles or specific permissions you have set up might not allow users to do everything they could before the 2.0.0 upgrade.

What You Need To Do

Verifying Permissions
  • Check the permissions of your users. Compare them with the default roles in Connectware 2.0.0 and make any updates needed so your users can continue working without interruptions.

For more information on managing permissions, see Permissions.

2. Custom Connectors

Why the Change?

Connectware has evolved its architecture, removing dependencies like VRPC and improving protocol handling. To ensure compatibility, you must update your custom connector implementations.

What You Need To Do

If you are using custom connectors, follow these steps to make your custom connector compatible with Connectware 2.0.0.

1. Remove VRPC

VRPC is no longer supported in the custom connector environment.

  • Remove all VRPC references in the custom connector code. This includes the import and any usage of VrpcAdapter:

Example

// const { VrpcAdapter } = require('vrpc') <- REMOVE THIS
const Connection = require('./FoobarConnection')
const Endpoint = require('./FoobarEndpoint')

// VrpcAdapter.register(Endpoint, { schema: Endpoint.getSchema() }) <- REMOVE THIS
// VrpcAdapter.register(Connection, { schema: Connection.getSchema() }) <- REMOVE THIS
2. Follow the Directory Naming Conventions
  • When defining the Dockerfile, ensure that the destination path for the copied source files ends in a protocol-specific directory name written entirely in lowercase.

Example

# protocol directory must be lowercase
COPY ./src ./src/protocols/foobar
3. Follow the Schema Naming Conventions
  • The schema $id must match the file name (without the .json).

  • The schema must start with a capital letter, like Foobar.

Example

  • In FoobarConnection.json, the $id must be:

{
  ...
  "$id": "FoobarConnection"
  ...
}
  • In FoobarEndpoint.json, the $id must be:

{
  ...
  "$id": "FoobarEndpoint"
  ...
}
4. Schema Versioning

Schemas support versioning through the additional version property, which must be a positive integer greater than zero. If this property is omitted, the default value is 1.

Versioning ensures that only the latest version of a schema is considered active and valid. This means that even though all custom connector instances should run the same version of schemas, the latest version will overwrite any previous version in the Connectware control plane.

Example

  • FoobarConnection.json with versioning:

{
  ...
  "$id": "FoobarConnection",
  "version": 3
  ...
}
5. Follow the Source Directory Naming Conventions

Follow the case-sensitive naming conventions based on the protocol name.

  • File names must start with an uppercase protocol name (e.g., Foobar).

  • Connection and endpoint suffixes are mandatory.

  • JS files define classes.

  • JSON files define schemas.

Example

src/
├── FoobarConnection.js
├── FoobarConnection.json
├── FoobarEndpoint.js
└── FoobarEndpoint.json
6. Follow the Class Naming Conventions
  • The class name must match the file name, excluding the .js extension.

  • The class name must start with a capital letter, such as Foobar.

Example

  • In FoobarConnection.js, the class must be:

class FoobarConnection extends Connection { ... }
  • In FoobarEndpoint.js, the class must be:

class FoobarEndpoint extends Endpoint { ... }
7. Class Constructors

Unless you need a specific constructor, there is no need to specify one because it is inherited from the parent class. However, if you need to implement a custom constructor for the Connection or Endpoint classes, preserve the following format:

  • In FoobarConnection.js, the class constructor must be like:

class FoobarConnection extends Connection {
    constructor (params) {
        super(params)
        // custom code
    }
}
  • In FoobarEndpoint.js, the class constructor must be like:

class FoobarEndpoint extends Endpoint {
    constructor (params, dataPlaneConnectionInstance, parentConnectionInstance) {
        super(params, dataPlaneConnectionInstance, parentConnectionInstance)
        // custom code
    }
}
8. Do not Set the _topic Property Manually

The _topic property is now handled automatically. Manually assigning it will cause errors.

The following code is invalid and must be removed since topics are now built internally.

// this is invalid, remove it
this._topic = 'this/is/a/topic'
9. ES Modules Not Supported

The standard JavaScript environment of custom connectors is based on CommonJS modules. ES modules are not supported.
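
For reference, a minimal sketch of the expected CommonJS style (file names reuse the illustrative Foobar examples from above):

// CommonJS (supported)
const Connection = require('./FoobarConnection')

// ES modules (not supported): do not use import/export syntax
// import Connection from './FoobarConnection.js'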

10. TypeScript Configuration

TypeScript is not officially supported in development workflows. However, if you want to use TypeScript and compile it to JavaScript, make sure to configure your tsconfig.json file as follows:

{
  "compilerOptions": {
    ....
    "target": "es2022",    /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
    "lib": ["es6"],        /* Specify a set of bundled library declaration files that describe the target runtime environment. */
    "module": "commonjs",  /* Specify what module code is generated. */
    ....
  },
  "include": ["src/**/*.ts", "src/**/*.json", "src/**/*.js", "src/**/*.d.ts", "test/**/*"]
}

Additionally, the compiled JavaScript output must include an exports.default assignment and the exported class itself. This ensures interoperability with our CommonJS-based module system. The compiled .js file should result in:

class FoobarConnection { ... }
exports.default = FoobarConnection;

3. Systemstate Protocol

Why the Change?

To improve performance and reduce unnecessary messaging load, the Systemstate protocol no longer supports whole-service tracking or redundant status events. This simplifies agent responsibilities and avoids misleading lifecycle signals.

What You Need to Do

If you are using the Systemstate protocol, do the following:

1. Stop Tracking Whole Services
  • Tracking the entire service object is no longer allowed. You must update your connector configuration to track individual resources only (like specific endpoints or connections).

Example

# Before (no longer supported)
serviceEndpoint:
    type: Cybus::Endpoint
    properties:
        protocol: Systemstate
        connection: !ref systemStateConnection
        subscribe:
            resourceId: !sub '${Cybus::ServiceId}'
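
As a hypothetical sketch of the new approach, track an individual resource instead; the resourceId value below is a placeholder, use the actual ID of the endpoint or connection you want to track:

# After (track an individual resource instead of the whole service)
connectionStateEndpoint:
    type: Cybus::Endpoint
    properties:
        protocol: Systemstate
        connection: !ref systemStateConnection
        subscribe:
            resourceId: myService-myConnection # placeholder: ID of the specific resource to track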
2. Update Event Handling Logic

The following status events have been removed from Systemstate. If your implementation depends on them (e.g., for health monitoring or automation), you must refactor that logic:

  • subscribed/unsubscribed

  • online/offline

4. Log Monitoring

Why the Change?

With version 2.0.0, several log messages have been corrected to fix spelling mistakes. These changes may affect existing log monitoring configurations.

What You Need to Do

Updating Your Log Monitoring

If you rely on log monitoring, review whether your setup references any of the updated log messages and adjust accordingly.
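
As a minimal sketch, you could search your monitoring or alerting rules for the old, misspelled strings listed below (the path /etc/monitoring is a hypothetical location for your rule files):

grep -rniE "succesful|sucess|implenetation|paramter|persistance|recieve|convertion|occured" /etc/monitoring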

Each entry shows the message type and log level (where available), followed by the original message (with typo) and the corrected message:

  • Log message (info): "MS Entra Login was succesful, redirecting to" → "MS Entra Login was successful, redirecting to"

  • Log message (debug): "DELETE /:id/tokens sucess for user: '<req.params.id>'" → "DELETE /:id/tokens success for user: '<req.params.id>'"

  • Error message: "Views are found, the restore implenetation do not support views!" → "Views are found, the restore implementation do not support views!"

  • Error message: "query paramter error is not a valid HTTP error code (<req.query.code>)" → "query parameter error is not a valid HTTP error code (<req.query.code>)"

  • Log message (debug): "Cleared persistance of:" → "Cleared persistence of:"

  • Error message (warn): "HttpNode is configured with method 'GET' but operation 'serverRecieves' (instead of serverProvides)" → "HttpNode is configured with method 'GET' but operation 'serverReceives' (instead of serverProvides)"

  • Log message: "Error when trying to recieve OPC-UA Method details from nodeId : <err.message>" → "Error when trying to receive OPC-UA Method details from nodeId : <err.message>"

  • Log message (warn): "tried to pass the value as an INT64 and found no matching convertion" → "tried to pass the value as an INT64 and found no matching conversion"

  • Log message (warn): "tried to pass the value as an UINT64 and found no matching convertion" → "tried to pass the value as an UINT64 and found no matching conversion"

  • Log message (debug): "Sucessfully subscribed to topic: <mqttOpts.topic>." → "Successfully subscribed to topic: <mqttOpts.topic>."

  • Log message (error): "error occured during shutting down the server" → "error occurred during shutting down the server"

  • Log message (error): "expected payload convertion to fail because given payload was not a JSON notation, but 'err == nil'" → "expected payload conversion to fail because given payload was not a JSON notation, but 'err == nil'"

5. Heidenhain Agents (Windows)

Why the Change?

For Connectware 2.0.0, the Heidenhain protocol has been updated.

What You Need to Do

Installing the Heidenhain Agent

You must upgrade the Windows-based Cybus Heidenhain Agent to work with Connectware 2.0.0.

  1. Uninstall the existing Heidenhain agent installation from your Windows system.

  2. Install the updated Heidenhain agent. You can find the download link at Heidenhain DNC.

6. Auto-Generated MQTT Topics of Resources

Why the Change?

With Connectware 2.0.0, auto-generated MQTT topics no longer include resource-specific properties. This makes the topic generation more unified and explicit. You must update any service commissioning file that hardcodes those old auto-generated topics.

Example of old behavior

Some auto-generated topics contained property-specific parts:

  • S7: services/myService/pressure/address:DB1,REAL6

  • Modbus: services/myService/current/fc:3/address:7

  • HTTP: services/myService/myEndpoint/get[object Object]

These paths might have been referenced inside Cybus::Mapping resources. Using auto-generated topics inside Cybus::Mapping is not recommended. Instead, use references via !ref (see Reference Method).

What You Need to Do

Updating Auto-Generated Topic References

Auto-generated topics no longer include resource-specific properties. They always follow:

<Cybus::MqttRoot>/<serviceId>/<resourceName>

Example of new behavior

  • S7: services/myService/pressure

  • Modbus: services/myService/current

  • HTTP: services/myService/myEndpoint

Procedure

  1. Scan your service commissioning files for any usage of auto-generated topics.

  2. Adapt those references by replacing direct topic strings with !ref references, as sketched below.

For more details, see Reference Method (!ref).
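
As a sketch of the recommended approach, reference the endpoint by name in a Cybus::Mapping instead of hardcoding its auto-generated topic (the endpoint name pressure and the publish topic are illustrative):

pressureMapping:
    type: Cybus::Mapping
    properties:
        mappings:
            - subscribe:
                  endpoint: !ref pressure
              publish:
                  topic: 'factory/hall1/pressure'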
