Upgrading Connectware to 2.0.0 (Kubernetes)
How to upgrade Connectware to version 2.0.0 on Kubernetes.
Disclaimer
When upgrading your Connectware instance, follow the upgrade path based on your current version.
For all other version upgrades that are not listed below, you can simply follow the regular Connectware upgrade guide.
If you are on version 1.4.1 or below
Upgrade sequentially: 1.5.0 → 1.7.0 → 2.0.0
If you are between version 1.5.0 and 1.6.2
Upgrade sequentially: 1.7.0 → 2.0.0
If you are on version 1.7.0 or newer (but below 2.0.0)
Upgrade directly to 2.0.0
If you are performing a clean or new installation
No upgrade path required. You can install the latest available version directly.
Detailed instructions on each upgrade step
Before You Begin
Upgrading to Connectware 2.0.0 introduces significant improvements in performance, scalability, and reliability. However, these changes also come with updated requirements for versions, networking, hardware, and storage.
This guide outlines the prerequisites and known limitations you must consider to ensure a smooth and successful upgrade.
Before starting the upgrade, read the entire guide. Some steps require developer work or preparation before the upgrade process begins.
Upgrading to Connectware 2.0.0 requires reinstalling all services. The main benefit of upgrading instead of performing a fresh installation is that it preserves the user database, including Multi-Factor Authentication. If you do not rely heavily on these features, a fresh installation may be the better option.
Even with a fresh installation, you will still need to follow this upgrade guide to update configuration parameters and adapt to the behavioral changes introduced in Connectware 2.0. However, you can skip the multi-step upgrade process itself.
If you are considering a fresh installation, we strongly recommend consulting with the Cybus Customer Support beforehand to confirm whether this is the right approach for your setup.
Connectware Version Requirements
To be able to upgrade to Connectware 2.0.0, your Connectware version must be 1.7.0 or above.
If your Connectware installation is below 1.7.0, make sure that you have followed Upgrading Connectware to 1.7.0 (Kubernetes) before upgrading to 2.0.0.
Network Requirements
Why the Change?
With Connectware 2.0.0, some internal components have been updated to improve communication and performance. As a result, the network configuration has changed:
Added: TCP/4222 and TCP/4223
Removed: TCP/1884 and TCP/8884
What You Need to Do
Updating the Network Ports
Verify that your firewalls and security rules are updated to allow the new ports (TCP/4222 and TCP/4223) and to remove dependencies on the deprecated ones (TCP/1884 and TCP/8884).
This ensures uninterrupted connectivity between your agents and Connectware.
Hardware Requirements
Why the Change?
Connectware 2.0.0 makes increased use of its PostgreSQL database and adds some components. When planning this upgrade, ensure your infrastructure can accommodate the enhanced resource requirements. This upgrade requires additional computing power.
What You Need to Do
Updating the Hardware Setup
We recommend adding the following resources to your hardware setup:
12 CPU cores
11 GB of memory
52 Gi of storage
However, these are general guidelines. Check what your specific system needs and make adjustments accordingly. If you were using the control-plane-broker option, you can offset these additional requirements with the resources it used, since it is being removed in this upgrade.
Storage Requirements
Why the Change?
We have added two new components to Connectware:
A streaming server called NATS
A service called resource-status-tracking
Alongside other improvements, these additions enable Connectware to scale effectively for much larger deployments.
In addition, the latest versions of PostgreSQL and auth-server require updated Kubernetes resource requests and limits to maintain stability and performance under heavier workloads.
What You Need to Do
Adjusting Kubernetes Resource Requests and Limits for Core Microservices
The microservices postgresql, auth-server, nats, and resource-status-tracking now have new or revised Kubernetes resource requests and limits. Make sure to adapt the default values to match your deployment needs.
We recommend beginning with the defaults, monitoring performance metrics, and fine-tuning resource allocations as needed.
To adjust the default values, update the corresponding values in the global.podResources Helm value context.
Example
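A minimal sketch of what this could look like in your values.yaml. The resource figures below are illustrative, not the shipped defaults:

```yaml
global:
  podResources:
    nats:
      resources:
        requests:
          cpu: 500m
          memory: 512Mi
        limits:
          cpu: "1"
          memory: 1Gi
    resourceStatusTracking:
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 256Mi
```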
Known Limitations
Adding Certificates Through Admin UI Not Supported
You cannot add certificates to Connectware's CA bundle via the Admin UI.
Instead, modify the cybus_ca.crt file directly on the certs volume.
Backup via Admin UI Not Supported
The backup functionality through Admin UI is not supported.
Instead, create backups of the database by running a pg_dump command on the postgresql-0 pod.
Upgrade Procedure
Follow this procedure to upgrade your Connectware installation to version 2.0.0. The steps are divided into two parts:
Mandatory Upgrade Steps: Required for all installations to ensure a smooth and stable upgrade.
Feature-Specific Upgrade Steps: Only needed if you use certain features, so they remain compatible with Connectware 2.0.0.
Expand the following sections for an overview of all upgrade steps.
Mandatory Upgrade Steps
These steps apply to every Connectware installation upgrading to Connectware 2.0.0. For a detailed guide, see Mandatory Upgrade Steps.
Depending on your setup, you may also need to perform additional Conditional Steps.
TLS Changes: Default behavior on certificate validation has been adjusted.
Update Helm Values: Remove obsolete Helm values and adjust for changed and new values.
Preparing the Connectware Helm Upgrade: Prepare removal of control-plane-broker and remove PostgreSQL StatefulSet.
Upgrading to Connectware 2.0.0: Download and install Connectware 2.0.0.
Enabling Agents in the Connectware Helm Chart: After upgrading Connectware, you need to go back to your agents to enable their TLS connections.
Updating Helm Values for the Connectware Agent Helm Chart: Update your agent configuration to comply with the updated Helm value configuration.
Upgrading Agents for the Connectware Agent Helm Chart: Upgrade your agents with the connectware-agent Helm chart.
Reinstalling Services: This upgrade changes where your services are stored. You will need to reinstall any services after the upgrade.
Feature-Specific Upgrade Steps
Only follow these if you use the related features, so they continue working after the upgrade.
Roles and Permissions: New permissions were added to Connectware. Verify whether your custom roles require updates.
Custom Connectors: Update your custom connector configurations to meet new requirements.
Systemstate Protocol: Update your Systemstate protocol configurations to meet new requirements.
Log Monitoring: Some logging strings are changed. If you use log monitoring, you may need to update it.
Heidenhain Agents: Upgrade your Heidenhain agents.
Auto-Generated MQTT Topics of Resources: Topic generation no longer includes resource-specific properties. Update your service commissioning files if you relied on old patterns.
Auto-Generated MQTT Users: The behavior of how MQTT users are auto-generated has changed. You may need to update your service commissioning file if you relied on auto-generated MQTT users.
Mandatory Upgrade Steps
These steps are required to upgrade your Connectware installation to Connectware 2.0.0.
1. TLS Changes
Why the Change?
To enhance security by default, Connectware agents now verify TLS certificate chains automatically. This ensures that all components communicate over a valid trust chain, while still giving you the option to keep the old behavior by explicitly disabling TLS verification.
Key Changes
1. Introducing the cybus_combined_ca.crt
Connectware maintains two separate CA chains:
External certificates, validated by cybus_ca.crt.
Internal certificates, validated by shared_yearly_ca.crt.
Which CA an agent requires depends on the hostname through which it connects to Connectware. For example, through the Connectware ingress, or directly to the Control Streaming Server (NATS) through the internal network.
To simplify configuration, we introduced cybus_combined_ca.crt, a bundle containing both chains, so agents can use a single file without needing to distinguish between internal and external CA certificates.
2. Certificate Chain Verification in Agents
Agents now enforce TLS chain validation by default. Each agent requires access to cybus_combined_ca.crt, available on the certs volume.
To revert to the previous behavior (skipping verification), set the environment variable CYBUS_TRUST_ALL_CERTS to true. Note that it has been renamed from TRUST_ALL_CERTS.
3. Configuring Certificate Hostnames
The default Connectware-generated CA includes the hostnames localhost and connectware.
To add more hostnames, configure the Helm value global.ingressDNSNames.
4. Renewal of Certificate Chains
With 2.0.0, the internal CA chain is replaced:
The Certificate Authority is renamed from CybusCA to CybusInternalCA.
The hostname nats is added as a Subject Alternate Name (SAN) to shared_yearly_server.crt.
The built-in default external CA certificate chain is also replaced.
The hostname connectware is added as a SAN to cybus_server.crt.
If you rely on monitoring, custom setups, or modified certificates, adapt your configuration accordingly.
5. Replacing CA Certificate Chain
To replace Connectware’s default external chain with your enterprise-managed CA:
Replace cybus_ca.crt with your enterprise CA certificate.
Ensure cybus_server.crt and cybus_server.key form a valid key pair, signed by the CA in cybus_ca.crt.
Do not replace the internal CA (shared_yearly_ca.crt).
After replacement:
Restart the system-control-server deployment to rebuild and synchronize the combined CA bundle (cybus_combined_ca.crt):
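For example, adjusting the namespace to your installation:

```bash
kubectl rollout restart deployment system-control-server -n <namespace>
```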
Restart all Connectware services.
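One way to do this is to restart all workloads in the Connectware namespace; a sketch, assuming the namespace contains only Connectware Deployments and StatefulSets:

```bash
kubectl rollout restart deployment,statefulset -n <namespace>
```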
2. Update Helm Values
Why the Change?
Some changes in Connectware and its Helm chart require you to adapt the Helm values in your values.yaml file:
The optional control-plane-broker is removed from Connectware.
All parameters to tune the inter-service communication have been removed.
Obsolete Helm Values
Removing Obsolete Helm Values
Some Helm values are obsolete and have been removed. Remove the following Helm values from your values.yaml file for the connectware Helm chart:
global.rpcTimeout
global.adminWebApp.rpcTimeout
global.containerManager.rpcTimeout
global.protocolMapper.rpcTimeout
global.systemControlServer.rpcTimeout
global.serviceManager.rpcTimeout
global.serviceManager.storage
global.controlPlaneBroker
global.protocolMapperAgents[*].controlPlane
global.serviceManager.useServicesGraph
New Helm Values
1. Ingress DNS Name Configuration
With the changed TLS behavior in Connectware, it has become essential to add the DNS names under which Connectware is addressed, for example by agents.
If you are replacing Connectware's default PKI, you can, and likely already do, manage this yourself by providing a valid cybus_server.crt containing all Subject Alternate Names (SANs) used within your setup.
If you are using Connectware's default PKI, you can use the new Helm value global.ingressDNSNames, which is a list of names that will be added to the default cybus_server.crt.
Hostname Formats
You can include multiple hostnames in the list. The certificate will include all specified names in its SAN section.
The configuration accepts various hostname formats:
Wildcards (e.g., *.company.io)
Subdomains (e.g., connectware.company.io)
Custom hostnames (e.g., localhost)
IP addresses (e.g., 192.168.100.42)
Example
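For example, combining the formats above (hostnames are illustrative):

```yaml
global:
  ingressDNSNames:
    - "*.company.io"
    - connectware.company.io
    - localhost
    - 192.168.100.42
```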
2. Proxy Configuration
Connectware's proxy configuration has been improved with version 2.0. Accompanying this, we added Helm values to configure proxy usage. This means that you can no longer configure proxy usage directly through environment variables. If you have been doing this in the past, transfer your configuration to these new Helm values:

| New Helm Value | Type | Default Value | Purpose |
| --- | --- | --- | --- |
| global.proxy.url | string | <none> | Address of the HTTP proxy server to be used |
| global.proxy.exceptions | array | <none> | List of hosts for which the proxy is ignored |
| global.proxy.existingSecret | string | <none> | Name of a Kubernetes Secret that contains the proxy configuration as the keys 'url' and 'exceptions' |
Example
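A sketch with an illustrative proxy address and exception list:

```yaml
global:
  proxy:
    url: http://proxy.company.io:3128
    exceptions:
      - localhost
      - .cluster.local
```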
Example with existing secret
Create your secret using your preferred method, in this example we will use a kubectl create command:
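The secret name below is hypothetical; the keys url and exceptions are the ones described above, though the exact value format for exceptions may differ in your setup:

```bash
kubectl create secret generic connectware-proxy-config \
  --from-literal=url=http://proxy.company.io:3128 \
  --from-literal=exceptions=localhost,.cluster.local \
  -n <namespace>
```

Then reference the secret in your values.yaml:

```yaml
global:
  proxy:
    existingSecret: connectware-proxy-config
```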
3. NATS Configuration
Connectware 2.0.0 introduces NATS as the stream server, primarily used for inter-service communication. The key configuration parameter is global.nats.replicas, which defines the cluster size. Typical values are 3 or 5, with 3 as the default. Increasing this to 5 raises the redundancy level from N+1 to N+2.
The replicas value is critical for the NATS cluster configuration and is shared across multiple Connectware components.
This value can only be set during the initial installation of Connectware and cannot be modified later. Scaling operations on the nats StatefulSet must not be performed.
The following configuration values are available for the NATS streaming server:
| New Helm Value | Type | Default Value | Purpose |
| --- | --- | --- | --- |
| global.nats.replicas | integer | 3 | The number of NATS replicas |
| global.nats.podAntiAffinity | string | soft | The podAntiAffinity behavior for NATS (one of none, soft, hard) |
| global.nats.podAntiAffinityTopologyKey | string | kubernetes.io/hostname | The topology key used for the NATS podAntiAffinity |
| global.nats.labels | object | <none> | A set of labels that will be applied to NATS resources |
| global.nats.annotations | object | <none> | A set of annotations that will be applied to NATS resources |
| global.nats.podLabels | object | <none> | A set of labels that will be applied to NATS pod resources |
| global.nats.podAnnotations | object | <none> | A set of annotations that will be applied to NATS pod resources |
| global.nats.service.labels | object | <none> | A set of labels that will be applied to NATS service resources |
| global.nats.service.annotations | object | <none> | A set of annotations that will be applied to NATS service resources |
| global.podResources.nats.resources | array | For a list of all default values, see default-values.yaml | Kubernetes compute resource requirements and limits |
| global.nats.env | array | <none> | Array containing environment variables as name and value pairs to be applied to the NATS service |
| global.nats.metrics.prometheus.enabled | boolean | false | Enable Prometheus exporter for NATS |
| global.nats.metrics.prometheus.resources | array | For a list of all default values, see default-values.yaml | Kubernetes compute resource requirements and limits |
| global.nats.metrics.prometheus.serviceMonitor.enabled | boolean | false | Enable Prometheus Operator ServiceMonitor for NATS |
| global.nats.metrics.prometheus.serviceMonitor.namespace | string | <none> | Namespace for the Prometheus ServiceMonitor |
| global.nats.metrics.prometheus.serviceMonitor.labels | object | <none> | Labels for the Prometheus ServiceMonitor |
| global.nats.storage.size | string | 16Gi | Define the size of the NATS JetStream volume |
| global.nats.storage.storageClassName | string | <none> | Define a Kubernetes StorageClass that will be used for the NATS JetStream volume |
| global.nats.containerSecurityContext | array | For a list of all default values, see default-values.yaml | Set a container SecurityContext as defined by the Kubernetes API |
4. Resource-Status-Tracking Configuration
In addition to the new stream server, Connectware introduces a second new component called resource-status-tracking. This component allows you to monitor the status of resources created through service commissioning files and enables you to detect deviations in service behavior.
The following configuration values are available for resource-status-tracking:
| New Helm Value | Type | Default Value | Purpose |
| --- | --- | --- | --- |
| global.resourceStatusTracking.replicas | integer | 2 | The number of resource-status-tracking replicas |
| global.resourceStatusTracking.podAntiAffinity | string | soft | The podAntiAffinity behavior for resourceStatusTracking (one of none, soft, hard) |
| global.resourceStatusTracking.podAntiAffinityTopologyKey | string | kubernetes.io/hostname | The topology key used for the resourceStatusTracking podAntiAffinity |
| global.resourceStatusTracking.labels | object | <none> | A set of labels that will be applied to resourceStatusTracking resources |
| global.resourceStatusTracking.annotations | object | <none> | A set of annotations that will be applied to resourceStatusTracking resources |
| global.resourceStatusTracking.podLabels | object | <none> | A set of labels that will be applied to resourceStatusTracking pod resources |
| global.resourceStatusTracking.podAnnotations | object | <none> | A set of annotations that will be applied to resourceStatusTracking pod resources |
| global.resourceStatusTracking.service.labels | object | <none> | A set of labels that will be applied to resourceStatusTracking service resources |
| global.resourceStatusTracking.service.annotations | object | <none> | A set of annotations that will be applied to resourceStatusTracking service resources |
| global.podResources.resourceStatusTracking.resources | array | For a list of all default values, see default-values.yaml | Kubernetes compute resource requirements and limits |
| global.resourceStatusTracking.env | array | <none> | Array containing environment variables as name and value pairs to be applied to the resourceStatusTracking service |
| global.resourceStatusTracking.containerSecurityContext | array | For a list of all default values, see default-values.yaml | Set a container SecurityContext as defined by the Kubernetes API |
5. PostgreSQL Storage Size
In previous releases, the storage volume of the PostgreSQL component was fixed at 1 Gi. With Connectware 2.0.0, the reliance on PostgreSQL has increased, resulting in a new default storage size of 5 Gi. This parameter is now configurable.
We recommend allocating at least 5 Gi. For larger deployments, storage sizes of 20 Gi or more may be appropriate.
You can configure the storage size via the global.postgresql.storage.size Helm value:
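For example, to allocate 20 Gi for a larger deployment:

```yaml
global:
  postgresql:
    storage:
      size: 20Gi
```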
6. Enabling MQTTS for Protocol-Mapper Agents
A new Helm value is available to configure whether a protocol-mapper agent establishes its data plane connection to the Connectware MQTT broker using TLS.
| New Helm Value | Type | Default Value | Purpose |
| --- | --- | --- | --- |
| global.protocolMapperAgents[*].dataPlane.tls | boolean | false | Enable TLS encryption for the agent's MQTT data plane connection |
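A sketch of enabling this for one agent; the agent name is hypothetical and the rest of the entry stays as in your existing protocolMapperAgents configuration:

```yaml
global:
  protocolMapperAgents:
    - name: welding-robots
      dataPlane:
        tls: true
```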
7. Disabling Agents During the Connectware Upgrade
This step prepares agents orchestrated with the connectware Helm chart for disabling during the upgrade. Agents orchestrated by other methods must be shut down separately, as described in 7. Shutting Down Protocol-Mapper Agents.
To prevent agents from requiring re-registration after the upgrade, you must disable them during the upgrade. Once the upgrade is complete, you will re-enable the agents, update their configuration, and provide them with a valid CA certificate.
To disable the agents defined in your Connectware Helm chart, set the Helm value global.protocolMapperAgents and all related values within this context to disabled. You can do this either by commenting out each line with a # or by temporarily removing them from the file.
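For example, a temporarily disabled agent section could look like this in your values.yaml (the agent name is hypothetical):

```yaml
global:
  # Temporarily disabled for the 2.0.0 upgrade:
  # protocolMapperAgents:
  #   - name: welding-robots
  #     ...
```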
3. Preparing the Connectware Helm Upgrade
Why the Change?
Connectware 2.0.0 introduces architectural improvements that require you to remove or adjust certain resources before running the Helm upgrade. This ensures a clean and successful upgrade process.
What You Need to Do
1. Removing the Control-Plane-Broker
The control-plane-broker is deprecated and its associated StatefulSet is no longer used. It will no longer be started after the upgrade.
No action is required in your Helm installation.
Optional: Remove global.controlPlaneBroker from the connectware chart and controlPlaneBrokerEnabled from the connectware-agent chart.
The brokerdata-control-plane-broker-* and brokerlog-control-plane-broker-* PersistentVolumeClaims are not removed automatically. Delete them manually if you want to reclaim the storage.
2. Backing Up Your PostgreSQL Database
With Connectware 2.0.0, Connectware uses a new major version of PostgreSQL. You need to delete your postgresql volume before upgrading Connectware (this is covered later in this upgrade guide). This requires you to create a backup of your database and restore it after the upgrade.
Any modifications made to Connectware after the following database backup will be lost after the Connectware 2.0.0 upgrade. We recommend creating the backup right before upgrading to Connectware 2.0.0.
To create a backup of your database, run the following command.
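A sketch of such a backup, using pg_dumpall to capture all databases and roles; the postgres superuser, namespace, and file name are assumptions to adjust to your setup:

```bash
kubectl exec postgresql-0 -n <namespace> -- \
  pg_dumpall -U postgres > connectware-backup.sql
```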
Make sure the backup is successful, then store the database file in a secure location.
3. Service-Manager Volume Backup & Removal
The service-manager PersistentVolumeClaim (PVC) is deprecated and will be removed automatically during the upgrade to 2.0.0.
Depending on your ReclaimPolicy, this may or may not mean that the volume is being deleted too. If it is not automatically deleted, you will have to manually delete the volume previously associated with the service-manager PersistentVolumeClaim to free up the disk space.
The former contents will in the future be stored in the PostgreSQL database, but will not be automatically migrated. You will need to reinstall any services you used before. See Reinstalling Services.
If you do not have your services stored outside of Connectware, make sure to export your services, or create a backup of your service-manager volume before upgrading.
4. Upgrading to Connectware 2.0.0
Make sure all prior steps are completed before proceeding with the Helm upgrade.
1. Updating Helm Repository Cache
Before performing the upgrade, update the Helm repository cache to ensure the latest Connectware chart version is available.
Run the following command:
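```bash
helm repo update
```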
2. Reviewing the Connectware Changelog
Before upgrading to Connectware 2.0.0, review the changelog to familiarize yourself with new features, bug fixes, and other changes introduced in Connectware 2.0.0.
3. Reviewing the Readme File
Before upgrading to Connectware 2.0.0, review the readme file. The readme may contain important version-specific upgrade instructions.
To open the readme file, run the following command:
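A sketch, assuming the chart is available as cybus/connectware in your configured repositories:

```bash
helm show readme cybus/connectware --version 2.0.0
```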
4. Comparing Helm Configurations between Connectware Versions
With a new Connectware version, there might be changes to the default Helm configuration values. We recommend that you compare the default Helm values of your current Connectware version with the default Helm values of your target Connectware version.
To display the new default values, enter the following command:
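Again assuming the chart reference cybus/connectware:

```bash
helm show values cybus/connectware --version 2.0.0
```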
To display which Connectware default values have changed between your current version and your target version, enter the following command:
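A sketch using bash process substitution; replace <current-version> with your installed version:

```bash
diff <(helm show values cybus/connectware --version <current-version>) \
     <(helm show values cybus/connectware --version 2.0.0)
```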
Example
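Illustrative diff output (line numbers and indentation will differ in your setup):

```
77c77
<     version: 1.7.0
---
>     version: 2.0.0
```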
In this example, only the image version has changed. However, if any of the Helm value changes are relevant to your setup, make the appropriate changes.
To override default Helm values, add the custom Helm value to your local values.yaml file.
5. Adjusting Helm Values
When you have reviewed the necessary information, adjust your configuration in your values.yaml file. Not every upgrade requires adjustments.
If you specified which image version of Connectware to use by setting the Helm value global.image.version, you will need to update it to 2.0.0. Unless you have a specific reason to pin an image version, we recommend not setting this Helm value.
6. Verifying your Backups
Make sure that you store backups of your setup. This allows you to restore a previous state if necessary.
Your backups must consist of the following files:
All Kubernetes PersistentVolumes that Connectware uses
Your Connectware database
Your values.yaml file
All service commissioning files
Depending on your local infrastructure, it may be necessary to back up additional files.
7. Shutting Down Protocol-Mapper Agents
In this step, you need to shut down any agents currently connected to your Connectware instance that are not managed by the connectware chart. Agents orchestrated through the connectware chart have already been prepared for shutdown as part of 7. Disabling Agents During the Connectware Upgrade.
Before running the helm upgrade command, you must stop all connected agents. Agents which remain up during this upgrade run the risk of having to go through the agent registration process again.
Docker Run: To stop agents which were started using docker run, use the docker stop command. If you are not aware of the names these containers use, run the docker ps command to find out.
Docker Compose: If your agents are running in Docker Compose, use the docker compose down command to stop them.
Agent Helm Chart: You can shut down agents that have been installed via the connectware-agent Helm chart using this command:
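A sketch, assuming your agents run as StatefulSets labeled with the Helm release name; adjust the selector and namespace to your installation:

```bash
kubectl scale statefulset --replicas=0 \
  -l app.kubernetes.io/instance=<agent-release-name> -n <namespace>
```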
8. Removing the PostgreSQL StatefulSet
Before running the helm upgrade command, you must remove the postgresql StatefulSet:
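```bash
kubectl delete statefulset postgresql -n <namespace>
```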
9. Initial Connectware Upgrade
You can now start the first of two upgrade processes. This first upgrade run applies the new Connectware 2.0.0 workloads and prepares the system for the required database migration.
To upgrade Connectware, enter the following command:
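A sketch, assuming the release name connectware and the chart reference cybus/connectware; adjust both to your installation:

```bash
helm upgrade connectware cybus/connectware --version 2.0.0 \
  -f values.yaml -n <namespace>
```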
Result: The newly generated workload definitions are applied to your Kubernetes cluster and your Connectware pods are replaced.
10. Shutting Down Connectware
Wait for the system-control-server deployment to contain a ready pod, then shut down Connectware to restore your PostgreSQL database:
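A sketch that scales all Connectware Deployments and StatefulSets to zero, assuming the namespace contains only Connectware workloads:

```bash
kubectl scale deployment,statefulset --all --replicas=0 -n <namespace>
```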
11. Restoring the PostgreSQL Database
Note down the PersistentVolume name for your postgresql-postgresql-0 PersistentVolumeClaim. You will need this name later to make sure the volume is not recycled.
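For example:

```bash
kubectl get pvc postgresql-postgresql-0 -n <namespace> \
  -o jsonpath='{.spec.volumeName}'
```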
Remove the postgresql-postgresql-0 PersistentVolumeClaim:
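```bash
kubectl delete pvc postgresql-postgresql-0 -n <namespace>
```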
Remove the PostgreSQL PersistentVolume. You can skip this step if the volume has been automatically deleted through a reclaim policy, or if you are sure a new volume will be used. If the same volume is reused for postgresql, the upgrade will fail.
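Using the PersistentVolume name you noted down earlier:

```bash
kubectl delete pv <postgresql-pv-name>
```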
Start PostgreSQL:
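```bash
kubectl scale statefulset postgresql --replicas=1 -n <namespace>
```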
Restore your PostgreSQL Database. Wait for the postgresql-0 pod to become ready, then run:
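A sketch matching the pg_dumpall backup above; adjust the user and file name to your backup:

```bash
kubectl exec -i postgresql-0 -n <namespace> -- \
  psql -U postgres < connectware-backup.sql
```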
12. Final Connectware Upgrade after Database Restore
You can start the final upgrade process. This upgrade finalizes the process by starting Connectware with the restored PostgreSQL database.
To upgrade Connectware, enter the following command:
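As before, the release name and chart reference are assumptions to adjust to your installation:

```bash
helm upgrade connectware cybus/connectware --version 2.0.0 \
  -f values.yaml -n <namespace>
```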
Optional: You can use the --atomic --timeout 10m command line switch, which will cause Helm to wait for the result of your upgrade and perform a rollback when it fails. We recommend setting the timeout value to at least 10 minutes, but because the time it takes to complete an upgrade strongly depends on your infrastructure and configuration you might have to increase it further.
Result: The newly generated workload definitions are applied to your Kubernetes cluster and your Connectware pods are replaced.
13. Verifying the Connectware Upgrade
You can monitor the Connectware upgrade progress to verify that everything runs smoothly, to know when the installation is successful, or to investigate potential issues.
Monitoring the Connectware Upgrade
The Connectware upgrade can take a few minutes. To monitor the upgrade process, do one of the following:
To monitor the current status of the upgrade process, enter the following command:
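```bash
kubectl get pods -n <namespace>
```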
To monitor the continuous progress of the upgrade process, enter the following command:
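```bash
kubectl get pods -n <namespace> --watch
```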
To stop monitoring the continuous progress of the upgrade process, press Ctrl+C.
Pod Stages During the Connectware Upgrade
During the Connectware upgrade, the pods go through the following stages:
Terminating
Pending
PodInitializing
ContainerCreating
Init:x/x
Running
When pods reach the STATUS Running, they go through their individual startup before reporting as Ready. To be fully functional, all pods must reach the STATUS Running and report all their containers as ready. This is indicated by them showing the same number on both sides of the / in the column READY.
Example
```
NAME                                     READY   STATUS    RESTARTS   AGE
admin-web-app-7cd8ccfbc5-bvnzx           1/1     Running   0          3h44m
auth-server-5b8c899958-f9nl4             1/1     Running   0          3m3s
broker-0                                 1/1     Running   0          3h44m
broker-1                                 1/1     Running   0          2m1s
connectware-7784b5f4c5-g8krn             1/1     Running   0          21s
container-manager-558d9c4cbf-m82bz       1/1     Running   0          3h44m
doc-server-55c77d4d4c-nwq5f              1/1     Running   0          3h44m
ingress-controller-6bcf66495c-l5dpk      1/1     Running   0          18s
postgresql-0                             1/1     Running   0          3h44m
protocol-mapper-67cfc6c848-qqtx9         1/1     Running   0          3h44m
service-manager-f68ccb767-cftps          1/1     Running   0          3h44m
system-control-server-58f47c69bf-plzt5   1/1     Running   0          3h44m
workbench-5c69654659-qwhgc               1/1     Running   0          15s
```
At this point Connectware is upgraded and started. You can now make additional configurations or verify the upgrade status in the Admin UI.
5. Enabling Agents in the Connectware Helm Chart
When upgrading to Connectware 2.0.0, protocol-mapper agents must be explicitly enabled again. This requires updating Helm values and configuring TLS certificates — or, in less secure setups, choosing to trust all certificates.
The following steps are required:
Update Helm values to their new equivalents.
Configure TLS (recommended) or opt to trust all certificates (not recommended).
Re-run helm upgrade to apply the changes.
Agents connecting to Connectware must either:
Provide a valid CA certificate that matches the server certificate, OR
Skip certificate validation by setting CYBUS_TRUST_ALL_CERTS to true (not recommended).
Which Certificate to Use
The certificate that you provide depends on how the agent connects:
Via Connectware ingress: Use cybus_ca.crt.
Via the internal network: Use shared_yearly_ca.crt.
Simplified option that works for both cases: Use cybus_combined_ca.crt.
Use the table below to decide which option applies to your setup:
Overview of Certificate Behavior
| CA certificate | Value of CYBUS_TRUST_ALL_CERTS | Behavior | Log message during control connection |
| --- | --- | --- | --- |
| Not configured | false | Default: TLS connections, like the control connection to Connectware, try to use the system trust store for certificate validation. This only works when the CAs used in Connectware are signed by a well-known CA authority, or when self-signed CAs were added to the system trust store. | CA certificate not found, using system trusted CAs for NATS connection. |
| Not configured | true | TLS connections, like the control connection to Connectware, do not validate certificates and trust all certificates. | CA certificate not found, Connectware configured for trusting all certificates for NATS connection |
| Configured | true | TLS connections, like the control connection to Connectware, do not validate server certificates and trust any certificate. | CA certificate found, but trusting all certificates for NATS connection |
| Configured | false | Recommended: TLS connections, like the control connection to Connectware, validate server certificates against the configured CA. | CA certificate found, using it for NATS connection with CA verification |
What You Need to Do
1. Updating the Helm Values
You must update the Helm values of your connectware installation again to re-enable the agents: remove the # characters you added, or add the values back to your Helm values file. Then add the cybus_combined_ca.crt or set CYBUS_TRUST_ALL_CERTS to true. If you were directly targeting our MQTT broker or the control-plane-broker before, also move the respective configuration to its new replacement.
The following Helm values have changed. If you had specific configuration for these in the past, update the Helm values accordingly.
For some Helm values, you need to take additional steps depending on your setup. The required steps are covered in the following sections.
| Old Helm Value | New Helm Value | Required Change |
| --- | --- | --- |
| global.protocolMapperAgents[*].mTLS.caChain.cert | global.protocolMapperAgents[*].tls.ca.certChain | Move |
| global.protocolMapperAgents[*].mTLS.caChain.existingConfigMap | global.protocolMapperAgents[*].tls.ca.existingConfigMap | Move |
| global.protocolMapperAgents[*].mqttDataHost | global.protocolMapperAgents[*].dataPlane.host | See 1.1 - Directly Targeting MQTT Broker |
| global.protocolMapperAgents[*].mqttDataPort | global.protocolMapperAgents[*].dataPlane.port | See 1.1 - Directly Targeting MQTT Broker |
| global.protocolMapperAgents[*].mqttHost | global.protocolMapperAgents[*].streamServer.host | See 1.2 - Directly Targeting Streaming Server |
| global.protocolMapperAgents[*].mqttPort | global.protocolMapperAgents[*].streamServer.port | See 1.2 - Directly Targeting Streaming Server |
1.1 - Directly Targeting MQTT Broker
When deploying agents in the internal network of Connectware, they are able to directly connect to our MQTT broker instead of going through the Connectware ingress.
This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.
The major hindrance is that the name "broker", which is used by our MQTT broker on the internal network, is not part of cybus_server.crt by default. To connect agents to this hostname over TLS, you either need to add the hostname "broker" as a Subject Alternate Name (SAN) to the certificate, or set CYBUS_TRUST_ALL_CERTS=true for the agent.
Adding the Hostname to the Default Certificate
If you are using the built-in default certificate for Connectware, you can add the hostname "broker" through the Helm value global.ingressDNSNames:
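```yaml
global:
  ingressDNSNames:
    - broker
```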
It is easiest to add this Helm value before running your upgrade to Connectware 2.0.0, since it will then be activated as part of the upgrade steps in this guide.
If applying this configuration after already upgrading to Connectware 2.0.0, running helm upgrade on your Connectware installation will cause the system-control-server Deployment to restart. Once it is ready again, restart the broker StatefulSet:
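```bash
kubectl rollout restart statefulset broker -n <namespace>
```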
Configuring Your Agents to Target the MQTT Broker
Next you need to configure your agents to target the MQTT broker directly by using the protocolMapperAgents[*].dataPlane.host Helm value:
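A sketch; the agent name is hypothetical and the rest of the entry stays as in your existing configuration:

```yaml
global:
  protocolMapperAgents:
    - name: welding-robots
      dataPlane:
        host: broker
```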
The TCP port used for this connection is automatically determined by other configuration, like TLS and mTLS settings. However, if for some reason you need to override it, use the Helm value protocolMapperAgents[*].dataPlane.port.
1.2 - Directly Targeting Control Connection Streaming Server
When deploying agents in the internal network of Connectware, they are able to directly connect to our streaming server control plane instead of going through the Connectware ingress.
This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.
Configuring Your Agents to Target the Streaming Server
Next you need to configure your agents to target the streaming server directly by using the protocolMapperAgents[*].streamServer.host Helm value. The default internal name is "nats".
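A sketch, again with a hypothetical agent name:

```yaml
global:
  protocolMapperAgents:
    - name: welding-robots
      streamServer:
        host: nats
```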
The TCP port used for this connection is automatically determined by other configuration, like mTLS settings. However, if for some reason you need to override it, use the Helm value protocolMapperAgents[*].streamServer.port.
2. Adding the CA Certificate to Your Agent
You need to have the CA certificate that you want to add at hand. In this example, we assume that you are using the cybus_combined_ca.crt:
1. Copy cybus_combined_ca.crt from Connectware:
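A sketch, assuming the certs volume is mounted at /connectware_certs in the system-control-server pod; adjust the pod and path to your deployment:

```bash
kubectl exec deployment/system-control-server -n <namespace> -- \
  cat /connectware_certs/cybus_combined_ca.crt > cybus_combined_ca.crt
```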
2. Add cybus_combined_ca.crt to Agent Helm Values:
Add the CA certificate to the Helm values of every agent in your connectware installation:
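A sketch with a hypothetical agent name and a truncated certificate body:

```yaml
global:
  protocolMapperAgents:
    - name: welding-robots
      tls:
        ca:
          certChain: |
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
```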
Alternatively you can add it using an existing Kubernetes ConfigMap:
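The ConfigMap name below is hypothetical:

```yaml
global:
  protocolMapperAgents:
    - name: welding-robots
      tls:
        ca:
          existingConfigMap: connectware-ca-bundle
```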
3. (Alternative) Disable TLS Certificate Validation
As an alternative, you can disable TLS certificate validation for agents. This has a negative impact on the security of your TLS connections, allowing for man-in-the-middle attacks, but may be acceptable for development instances or test installations.
This is only possible for agents using the username/password authentication method. If you are using mTLS for your agents, you must configure proper certificates instead.
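A sketch, assuming your agent entries accept an env list of name and value pairs; consult the chart reference for the exact key in your version:

```yaml
global:
  protocolMapperAgents:
    - name: welding-robots
      env:
        - name: CYBUS_TRUST_ALL_CERTS
          value: "true"
```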
4. Run Helm Upgrade Again
After choosing and configuring your method of choice, you can run helm upgrade on your connectware installation again.
To upgrade Connectware, enter the following command:
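As before, the release name and chart reference are assumptions:

```bash
helm upgrade connectware cybus/connectware --version 2.0.0 \
  -f values.yaml -n <namespace>
```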
6. Updating Helm Values for the Connectware Agent Helm Chart
Why the Change?
This guide explains how to update agents that use the connectware-agent Helm chart. If you are using agents via Docker, refer to the Docker Guide.
With Connectware 2.0.0, the default handling of certificate chain verification has changed. Previously, protocol-mapper agents required explicit configuration to validate peer certificate chains. Now, certificate chain verification is enabled and enforced by default. While you can revert to the old behavior using a configuration switch, we strongly recommend using a proper TLS certificate chain.
You now must provide the CA certificate signing Connectware's public server certificate cybus_server.crt to agents using the Helm value protocolMapperAgentDefaults.tls.ca.certChain (renamed from protocolMapperAgentDefaults.mTLS.caChain.cert).
Additionally, the control-plane-broker has been replaced with a new streaming-based control plane. Along with this change, the configuration values for both the control plane and the data plane have been redesigned. The new values are intended to be generic and resilient against future technology changes. As a result, several Helm values have been deprecated, renamed, or newly introduced.
What You Need to Do
1. Updating the Helm Values
Obsolete Helm Values (Connectware-Agent Chart)
Some Helm values are obsolete and have been removed. Remove the following Helm values from your values.yaml file for the connectware-agent Helm chart:
protocolMapperAgentDefaults.controlPlaneBrokerEnabled
protocolMapperAgents[*].controlPlaneBrokerEnabled
protocolMapperAgentDefaults.controlPlane
protocolMapperAgents[*].controlPlane
protocolMapperAgentDefaults.rpcTimeout
protocolMapperAgents[*].rpcTimeout
Changed Helm Values (Connectware-Agent Chart)
The following Helm values have changed. If you had specific configuration for these in the past, update the Helm values accordingly.
For some Helm values, you need to take additional steps depending on your setup. The required steps are covered in the following sections.
| Old Helm Value | New Helm Value | Required Change |
| --- | --- | --- |
| protocolMapperAgentDefaults.mTLS.caChain.cert | protocolMapperAgentDefaults.tls.ca.certChain | Move |
| protocolMapperAgentDefaults.mTLS.caChain.existingConfigMap | protocolMapperAgentDefaults.tls.ca.existingConfigMap | Move |
| protocolMapperAgents[*].mqtt.tls | protocolMapperAgents[*].dataPlane.tls | Move |
| protocolMapperAgents[*].mqtt.dataHost | protocolMapperAgents[*].dataPlane.host | See 1.1 - Directly Targeting MQTT Broker |
| protocolMapperAgents[*].mqtt.dataPort | protocolMapperAgents[*].dataPlane.port | See 1.1 - Directly Targeting MQTT Broker |
| protocolMapperAgents[*].mqtt.controlHost | protocolMapperAgents[*].streamServer.host | See 1.2 - Directly Targeting Streaming Server |
| protocolMapperAgents[*].mqtt.controlPort | protocolMapperAgents[*].streamServer.port | See 1.2 - Directly Targeting Streaming Server |
1.1 - Directly Targeting MQTT Broker
When deploying agents in the internal network of Connectware, they are able to directly connect to our MQTT broker instead of going through the Connectware ingress.
This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.
The major hindrance is that the name "broker", which is used by our MQTT broker on the internal network, is not part of cybus_server.crt by default. To connect agents to this hostname over TLS, you either need to add the hostname "broker" as a Subject Alternate Name (SAN) to the certificate, or set CYBUS_TRUST_ALL_CERTS=true for the agent.
Adding the Hostname to the Default Certificate
If you are using the built-in default certificate for Connectware, you can add the hostname "broker" through the Helm value global.ingressDNSNames:
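```yaml
global:
  ingressDNSNames:
    - broker
```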
It is easiest to add this Helm value before running your upgrade to Connectware 2.0.0, since it will then be activated as part of the upgrade steps in this guide.
If applying this configuration after already upgrading to Connectware 2.0.0, running helm upgrade on your Connectware installation will cause the system-control-server Deployment to restart. Once it is ready again, restart the broker StatefulSet:
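```bash
kubectl rollout restart statefulset broker -n <namespace>
```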
Configuring Your Agents to Target the MQTT Broker
Next you need to configure your agents to target the MQTT broker directly by using the protocolMapperAgentDefaults.dataPlane.host Helm value:
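A sketch of the relevant part of your connectware-agent values.yaml:

```yaml
protocolMapperAgentDefaults:
  dataPlane:
    host: broker
```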
The TCP port used for this connection is automatically determined by other configuration, like TLS and mTLS settings. However, if for some reason you need to override it, use the Helm value protocolMapperAgentDefaults.dataPlane.port.
1.2 - Directly Targeting Streaming Server
When deploying agents in the internal network of Connectware, they are able to directly connect to our streaming server control plane instead of going through the Connectware ingress.
This improves performance and reduces failure points, so if you are running a heavy, critical load, it may be worth the extra configuration.
Configuring Your Agents to Target the Streaming Server
Next you need to configure your agents to target the streaming server directly by using the protocolMapperAgentDefaults.streamServer.host Helm value. The default internal name is "nats".
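A sketch of the relevant part of your connectware-agent values.yaml:

```yaml
protocolMapperAgentDefaults:
  streamServer:
    host: nats
```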
The TCP port used for this connection is automatically determined by other configuration, like mTLS settings. However, if for some reason you need to override it, use the Helm value protocolMapperAgentDefaults.streamServer.port.
2. Adding the CA Certificate to Your Agent
To connect a protocol-mapper agent with Connectware 2.0.0, you must either provide the agent with the valid CA certificate for the server certificate in use, or disable verification of TLS certificate validity by setting the environment variable CYBUS_TRUST_ALL_CERTS to true on the agent.
Depending on whether you connect an agent through Connectware's ingress or through the internal network, you may need to provide either cybus_ca.crt or shared_yearly_ca.crt. If you want to skip this complexity, there is a new file called cybus_combined_ca.crt, which includes both CA bundles and allows internal and external connections.
The following examples use the method of configuring all agents inside one connectware-agent installation through the protocolMapperAgentDefaults Helm value context. However, you can also configure this using the protocolMapperAgents Helm value context as described in Configuration Principles for the connectware-agent Helm Chart.
You need to have the CA certificate that you want to add at hand. In this example, we assume that you are using the cybus_combined_ca.crt:
Copy cybus_combined_ca.crt from Connectware:
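A sketch, assuming the certs volume is mounted at /connectware_certs in the system-control-server pod; adjust the pod and path to your deployment:

```bash
kubectl exec deployment/system-control-server -n <namespace> -- \
  cat /connectware_certs/cybus_combined_ca.crt > cybus_combined_ca.crt
```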
Add the CA certificate cybus_combined_ca.crt to the Helm values of your connectware-agent installation:
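A sketch with a truncated certificate body:

```yaml
protocolMapperAgentDefaults:
  tls:
    ca:
      certChain: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
```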
Alternatively, you can add it using an existing Kubernetes ConfigMap:
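The ConfigMap name below is hypothetical:

```yaml
protocolMapperAgentDefaults:
  tls:
    ca:
      existingConfigMap: connectware-ca-bundle
```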
3. (Alternative) Disable TLS Certificate Validation
You can choose to disable TLS certificate validation for agents. This is not recommended, as it weakens security and makes your setup vulnerable to man-in-the-middle attacks. However, it may be acceptable in non-production environments such as development or testing.
This option is only available for agents using username/password authentication. If your agents use mTLS, you must configure proper certificates instead.
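A sketch, assuming the connectware-agent chart accepts an env list of name and value pairs in the protocolMapperAgentDefaults context:

```yaml
protocolMapperAgentDefaults:
  env:
    - name: CYBUS_TRUST_ALL_CERTS
      value: "true"
```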
7. Upgrading Agents for the Connectware Agent Helm Chart
What You Need to Do
This guide explains how to update agents which use the connectware-agent Helm chart. If you are using agents via Docker, follow the Docker upgrade guide for this part.
To upgrade your agents installed via the connectware-agent Helm chart, see Upgrading the connectware-agent Helm Chart.
8. Reinstalling Services
Why the Change?
With Connectware 2.0.0, your services and resources are no longer stored on the service-manager volume, but inside the PostgreSQL database.
What You Need to Do
Reinstalling Your Services
After completing the upgrade, you must reinstall all previously used services. You can do this using your preferred method:
Via the Admin UI, see Installing Services.
Automatically through a CI pipeline.
Additionally, there have been changes to the relationships between services. Understanding how these interdependencies behave at runtime is crucial for correct deployment and maintenance.
Install parent services first (recommended): If the service depends on another service (parent/child relationship), install the parent service first. This ensures:
Service relations are created during installation.
Each service can be installed with targetState=enabled.
Install child services first (alternative): It is possible to install the dependent (child) service first, but this comes with limitations:
Service relations are only established when the service is enabled.
The dependent (child) service can only be installed with targetState=disabled.
For more details, see Service Dependency Behavior and targetState.
Feature-Specific Upgrade Steps
Only follow these if you use the related features, so they continue working after the upgrade.
1. Permissions and Roles
Why the Change?
Permissions allow administrators to define who can access what resources and what actions they can perform. Each permission represents a specific access right to a resource.
Connectware 2.0.0 introduces new permissions. Because of this, custom roles or specific permissions you have set up might not allow users to do everything they could before the 2.0.0 upgrade.
What You Need To Do
Verifying Permissions
Check the permissions of your users. Compare them with the default roles in Connectware 2.0.0 and make any updates needed so your users can continue working without interruptions.
For more information on managing permissions, see Permissions.
2. Custom Connectors
Why the Change?
Connectware has evolved its architecture, removing dependencies like VRPC and improving protocol handling. To ensure compatibility, you must update your custom connector implementations.
What You Need To Do
If you are using custom connectors, follow these steps to make your custom connector compatible with Connectware 2.0.0.
1. Remove VRPC
VRPC is no longer supported in the custom connector environment.
Remove all VRPC references in the custom connector code. This includes the import and any usage of VrpcAdapter:
Example
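An illustrative sketch of the kind of code to remove; the class name follows the Foobar examples used below:

```javascript
// Remove the import ...
const { VrpcAdapter } = require('vrpc')

// ... and any usage, such as registering connector classes:
VrpcAdapter.register(FoobarConnection)
```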
2. Follow the Directory Naming Conventions
When defining the Dockerfile, ensure that the destination path for the copied source files ends in a protocol-specific directory name written entirely in lowercase.
Example
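A sketch; the base image and directory layout are assumptions. The relevant part is that the destination path ends in the lowercase protocol-specific directory name (here, foobar):

```dockerfile
FROM node:20-alpine
# Destination ends in the lowercase protocol-specific directory name
COPY src/ /app/src/foobar
```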
3. Follow the Schema Naming Conventions
The schema $id must match the file name (without the .json extension).
The schema $id must start with a capital letter, like Foobar.
Example
In FoobarConnection.json, the schema must be like:
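A minimal sketch (additional schema properties omitted):

```json
{
  "$id": "FoobarConnection",
  "type": "object",
  "properties": {}
}
```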
In FoobarEndpoint.json, the schema must be like:
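```json
{
  "$id": "FoobarEndpoint",
  "type": "object",
  "properties": {}
}
```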
4. Schema Versioning
Schemas support versioning through the additional version property, which must be a positive integer. If this property is omitted, the default value is 1.
Versioning ensures that only the latest version of a schema is considered active and valid. This means that even though all custom connector instances should run the same version of schemas, the latest version will overwrite any previous version in the Connectware control plane.
Example
FoobarConnection.json supporting versioning:
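A minimal sketch:

```json
{
  "$id": "FoobarConnection",
  "version": 2,
  "type": "object",
  "properties": {}
}
```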
5. Follow the Source Directory Naming Conventions
Follow the case-sensitive naming conventions based on the protocol name.
File names must start with an uppercase protocol name (e.g., Foobar).
Connection and endpoint suffixes are mandatory.
JS files define classes.
JSON files define schemas.
Example
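For a protocol named Foobar, the source directory would contain:

```
FoobarConnection.js
FoobarConnection.json
FoobarEndpoint.js
FoobarEndpoint.json
```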
6. Follow the Class Naming Conventions
The class name must match the file name, excluding the .js extension.
The class name must start with a capital letter, such as Foobar.
Example
In FoobarConnection.js, the class must be:
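A minimal sketch; the base class and how it is imported depend on the connector framework and are omitted here:

```javascript
// FoobarConnection.js
class FoobarConnection extends Connection { /* ... */ }
module.exports = FoobarConnection
```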
In FoobarEndpoint.js, the class must be:
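```javascript
// FoobarEndpoint.js
class FoobarEndpoint extends Endpoint { /* ... */ }
module.exports = FoobarEndpoint
```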
7. Class Constructors
Unless you need a specific constructor, there is no need to specify one because it is inherited from the parent class. However, if you need to implement a custom constructor for the Connection or Endpoint classes, preserve the following format:
In FoobarConnection.js, the class constructor must be like:
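A sketch; the parameter shape is an assumption:

```javascript
class FoobarConnection extends Connection {
  constructor (params) {
    super(params)
    // custom initialization goes here
  }
}
```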
In FoobarEndpoint.js, the class constructor must be like:
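```javascript
class FoobarEndpoint extends Endpoint {
  constructor (params) {
    super(params)
    // custom initialization goes here
  }
}
```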
8. Do not Set the _topic Property Manually
The _topic property is now handled automatically. Manually assigning it will cause errors.
The following code is invalid and must be removed since topics are now built internally.
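An illustrative sketch of the invalid pattern:

```javascript
// Invalid: _topic is built internally and must not be assigned manually
this._topic = 'services/myService/myEndpoint'
```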
9. ES Modules Not Supported
The standard JavaScript environment of custom connectors is based on CommonJS modules. ES modules are not supported.
10. TypeScript Configuration
TypeScript is not officially supported in development workflows. However, if you want to use TypeScript and compile it to JavaScript, make sure to configure your tsconfig.json file as follows:
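A sketch: module must be commonjs to match the CommonJS requirement above; the remaining options are assumptions:

```json
{
  "compilerOptions": {
    "module": "commonjs",
    "target": "ES2020",
    "esModuleInterop": true,
    "outDir": "dist"
  }
}
```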
Additionally, the compiled JavaScript output must include an exports.default assignment and the exported class itself. This ensures interoperability with our CommonJS-based module system. The compiled .js file should result in:
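A sketch of the expected shape of the compiled file:

```javascript
class FoobarConnection extends Connection { /* ... */ }
// Both the default export and the exported class itself must be present
exports.default = FoobarConnection
module.exports = FoobarConnection
```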
3. Systemstate Protocol
Why the Change?
To improve performance and reduce unnecessary messaging load, the Systemstate protocol no longer supports whole-service tracking or redundant status events. This simplifies agent responsibilities and avoids misleading lifecycle signals.
What You Need to Do
If you are using the Systemstate protocol, do the following:
1. Stop Tracking Whole Services
Tracking the entire service object is no longer allowed. You must update your connector configuration to track individual resources only (like specific endpoints or connections).
Example
2. Update Event Handling Logic
The following status events have been removed from Systemstate. If your implementation depends on them (e.g., for health monitoring or automation), you must refactor that logic:
subscribed/unsubscribed
online/offline
4. Log Monitoring
Why the Change?
With version 2.0.0, several log messages have been corrected to fix spelling mistakes. These changes may affect existing log monitoring configurations.
What You Need to Do
Updating Your Log Monitoring
If you rely on log monitoring, review whether your setup references any of the updated log messages and adjust accordingly.
| Type | Level | Old message | Corrected message |
| --- | --- | --- | --- |
| Log message | info | MS Entra Login was succesful, redirecting to | MS Entra Login was successful, redirecting to |
| Log message | debug | DELETE /:id/tokens sucess for user: '<req.params.id>' | DELETE /:id/tokens success for user: '<req.params.id>' |
| Error message | - | Views are found, the restore implenetation do not support views! | Views are found, the restore implementation do not support views! |
| Error message | - | query paramter error is not a valid HTTP error code (<req.query.code>) | query parameter error is not a valid HTTP error code (<req.query.code>) |
| Log message | debug | Cleared persistance of: | Cleared persistence of: |
| Error message | warn | HttpNode is configured with method 'GET' but operation 'serverRecieves' (instead of serverProvides) | HttpNode is configured with method 'GET' but operation 'serverReceives' (instead of serverProvides) |
| Log message | - | Error when trying to recieve OPC-UA Method details from nodeId : <err.message> | Error when trying to receive OPC-UA Method details from nodeId : <err.message> |
| Log message | warn | tried to pass the value as an INT64 and found no matching convertion | tried to pass the value as an INT64 and found no matching conversion |
| Log message | warn | tried to pass the value as an UINT64 and found no matching convertion | tried to pass the value as an UINT64 and found no matching conversion |
| Log message | debug | Sucessfully subscribed to topic: <mqttOpts.topic>. | Successfully subscribed to topic: <mqttOpts.topic>. |
| Log message | error | error occured during shutting down the server | error occurred during shutting down the server |
| Log message | error | expected payload convertion to fail because given payload was not a JSON notation, but 'err == nil' | expected payload conversion to fail because given payload was not a JSON notation, but 'err == nil' |
5. Heidenhain Agents (Windows)
Why the Change?
For Connectware 2.0.0, the Heidenhain protocol has been updated.
What You Need to Do
Installing the Heidenhain Agent
You must upgrade the Windows-based Cybus Heidenhain Agent to work with Connectware 2.0.0.
Uninstall the existing Heidenhain agent installation from your Windows system.
Install the updated Heidenhain agent. You can find the download link at Heidenhain DNC.
6. Auto-Generated MQTT Topics of Resources
Why the Change?
With Connectware 2.0.0, auto-generated MQTT topics no longer include resource-specific properties. This makes the topic generation more unified and explicit. You must update any service commissioning file that hardcodes those old auto-generated topics.
Example of old behavior
Some auto-generated topics contained property-specific parts:
S7: services/myService/pressure/address:DB1,REAL6
Modbus: services/myService/current/fc:3/address:7
HTTP: services/myService/myEndpoint/get[object Object]
These paths might have been referenced inside Cybus::Mapping resources. Using auto-generated topics inside Cybus::Mapping is not recommended. Instead, use references via !ref (see Reference Method (!ref)).
What You Need to Do
Updating Auto-Generated Topic References
Auto-generated topics no longer include resource-specific properties. They always follow the pattern services/<serviceId>/<resourceName>:
Example of new behavior
S7: services/myService/pressure
Modbus: services/myService/current
HTTP: services/myService/myEndpoint
Procedure
Scan your service commissioning files for any usage of auto-generated topics.
Adapt those references by replacing direct topic strings with !ref references, as sketched below.
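A sketch of a Cybus::Mapping resource that references an endpoint via !ref instead of a hardcoded auto-generated topic (resource names are hypothetical):

```yaml
mapping:
  type: Cybus::Mapping
  properties:
    mappings:
      - subscribe:
          endpoint: !ref pressureEndpoint
        publish:
          topic: factory/hall1/pressure
```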
For more details, see Reference Method (!ref).
7. Auto-Generated MQTT Users
Why the Change?
Before 2.0.0, Connectware created a hidden MQTT user for every installed service. These auto-generated users were only used when the service commissioning file explicitly referenced the pseudo parameter Cybus::MqttUser.
With Connectware 2.0.0, hidden users and groups are created only when the service commissioning file uses the Cybus::MqttUser pseudo parameter. This reduces unused accounts and makes credential usage explicit.
What You Need to Do
Verify Your Service Commissioning Files
If you are using auto-generated MQTT users outside of services (e.g., scripts, dashboards, or other non-commissioning references), migrate to explicit identities:
Create dedicated users with the required roles/permissions. See User Management.
Update your external systems to use the new explicit credentials.