Running Connectware at Scale: 9 Principles

Operational principles for scaling Connectware from first use cases to production.

Connectware is designed to grow with you. A first deployment can start small and deliver value quickly. As that success compounds, scaling becomes less about adding capacity and more about maintaining clarity, reliability, and control.

Growing smoothly requires structure, clear ownership, and operational patterns that hold up as complexity increases.

Why Operational Principles Matter

Decisions made early shape how a deployment evolves. Teams that adopt clear patterns from the start spend less time troubleshooting, onboard new colleagues faster, and can expand to new sites without reinventing their approach.

Overview: The 9 Principles

In short, here are nine principles that keep your data infrastructure scalable and maintainable:

  1. Modular: Keep service commissioning files short and organized by responsibility.

  2. Structured: Organize configurations in a clear, scalable repository.

  3. Consistent: Use naming conventions that humans and automation both understand.

  4. Cohesive: Keep related data together in complete event snapshots.

  5. Automated: Deploy through pipelines, not by hand.

  6. Isolated: Keep development and production environments separate.

  7. Focused: Design pipelines that are fast and modular.

  8. Distributed: Place agents intentionally for load and fault isolation.

  9. Observable: Centralize logs and metrics for fast troubleshooting.

Now, let's look at what each principle means in practice.

1. Structure Service Commissioning Files for Readability and Modularity

Configuration complexity grows quietly until it becomes unmanageable. This principle keeps it in check.

As your Connectware deployment grows, so do your service commissioning files. What starts as a manageable configuration can evolve into a monolith that is hard to review, debug, or safely modify.

The problem is rarely a single bad decision. It is the accumulation of small additions over time. Each new connection, mapping, or endpoint is added where it fits best at the moment, until responsibilities blur and unintended dependencies appear.

The solution: keep service commissioning files short, and organize them by responsibility.

A good rule of thumb is simple: if a service commissioning file takes more than a few minutes to understand, it has likely grown too large.

Short, well-scoped service commissioning files reduce cognitive load and make changes safer. When configurations are decoupled, a small change in one file will not accidentally break something elsewhere.

Practical tips

  • Split large service commissioning files into smaller, purpose-driven units.

  • Use templating and parameterization to avoid repetition.

  • Centralize shared logic so updates propagate automatically.

Think of your configurations as modules with clear boundaries.

Organize your configurations by responsibility. To make the flow from asset to enterprise clear, use the following structure:

  • Asset connection service: One per asset. Reads from and writes to shopfloor assets.

  • Use case mapping service: One per use case. Covers one or multiple data mappings between shopfloor assets and enterprise systems.

  • Enterprise connection service: One per enterprise system. Reads from and writes to enterprise systems.

  • Endpoint services: Optional. Allow for modularity of data endpoints grouped per use case.

This separation lets teams work in parallel, deploy targeted updates, and maintain clear ownership. Document each module's purpose so new team members can navigate the structure quickly.
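
To illustrate the templating tip above, here is a minimal Python sketch that renders one short, single-responsibility asset connection commissioning file per asset from a shared Jinja2 template. The template path, parameter names, and output layout are assumptions for illustration, not Connectware conventions.

  # generate_asset_services.py - a minimal sketch of the templating idea above.
  # The template path, parameters, and output layout are illustrative assumptions.
  from pathlib import Path
  from jinja2 import Template

  TEMPLATE = Path("shared/templates/asset-connection.yml.j2")

  # One parameter set per shopfloor asset; in practice this could live in a
  # per-site parameter file kept under version control.
  ASSETS = [
      {"asset_id": "press01", "line": "line4", "protocol": "opcua", "host": "10.0.4.21"},
      {"asset_id": "press02", "line": "line4", "protocol": "opcua", "host": "10.0.4.22"},
  ]

  def render_asset_services(output_dir: str = "sites/plant-a/asset-connections") -> None:
      """Render one short commissioning file per asset from the shared template."""
      template = Template(TEMPLATE.read_text())
      out = Path(output_dir)
      out.mkdir(parents=True, exist_ok=True)
      for asset in ASSETS:
          rendered = template.render(**asset)
          name = f"svc-{asset['protocol']}-{asset['line']}-{asset['asset_id']}.yml"
          (out / name).write_text(rendered)

  if __name__ == "__main__":
      render_asset_services()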

2. Use a Clear and Scalable Repository Structure

Teams navigate configurations faster when the structure is obvious. This principle ensures clarity from day one.

Where do your configurations live? How do teams find what they need? A clear repository structure answers these questions before they become problems.

A proven approach: use a single repository for your multi-site rollout and create one directory per site. Shared templates live in one place, site-specific configurations live in their own directories. This keeps things organized without creating unnecessary coupling between sites.
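
As an illustration, the following Python sketch encodes one such layout and verifies that every site directory follows it. All directory names are assumptions rather than a required structure, and a check like this could run in CI.

  # check_repo_layout.py - a hypothetical layout check; the directory names
  # below are assumptions for illustration, not a mandated structure.
  import sys
  from pathlib import Path

  REPO = Path(".")
  SHARED_TEMPLATES = REPO / "shared" / "templates"   # shared templates, one place
  SITES = REPO / "sites"                             # one directory per site
  EXPECTED_PER_SITE = ["asset-connections", "mappings", "enterprise-connections"]

  def main() -> int:
      problems = []
      if not SHARED_TEMPLATES.is_dir():
          problems.append(f"missing shared template directory: {SHARED_TEMPLATES}")
      site_dirs = sorted(p for p in SITES.iterdir() if p.is_dir()) if SITES.is_dir() else []
      for site in site_dirs:
          for sub in EXPECTED_PER_SITE:
              if not (site / sub).is_dir():
                  problems.append(f"site {site.name} is missing {sub}/")
      for problem in problems:
          print(problem)
      return 1 if problems else 0

  if __name__ == "__main__":
      sys.exit(main())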

Avoid using branches to represent different sites or environments.

Branches are for versioning over time, not for separating parallel configurations. Using branches for sites quickly leads to merge conflicts, duplicated effort, and automation headaches.

Document your repository layout and conventions. When a new team member joins or a new site comes online, they should be able to navigate the structure without asking for help.

3. Use Consistent and Meaningful Naming Conventions

Names are how humans and automation understand your system at scale. This principle makes that understanding effortless.

Names are an interface. When someone sees svc-plc-line4-press01, they should immediately understand what it refers to. When a script parses topic names, it should find predictable patterns.

Good names convey purpose, location, or ownership without requiring a lookup table.

Define a naming convention that covers everything: services, endpoints, agents, topics, and other resources. Avoid abbreviations that are not universally understood. What is obvious to one team may be cryptic to another.

Document your naming scheme and enforce it. Code reviews catch some inconsistencies; automated validation in your CI/CD pipelines catches the rest. The earlier you catch a naming mistake, the cheaper it is to fix.
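
As a sketch of what such automated validation might look like, the snippet below checks names against a regular expression modeled on the svc-plc-line4-press01 example. The exact pattern is an assumption; encode whatever convention your team has documented.

  # validate_names.py - a minimal sketch of automated naming validation in CI.
  # The pattern matches names like "svc-plc-line4-press01" and is only an example.
  import re
  import sys

  NAME_PATTERN = re.compile(r"^svc-[a-z0-9]+-[a-z0-9]+-[a-z0-9]+$")

  def validate(names: list[str]) -> list[str]:
      """Return the names that violate the documented convention."""
      return [name for name in names if not NAME_PATTERN.match(name)]

  if __name__ == "__main__":
      violations = validate(sys.argv[1:])
      for name in violations:
          print(f"naming violation: {name}")
      sys.exit(1 if violations else 0)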

4. Keep Related Data Together in Complete Event Snapshots

How you shape data at the source determines how much complexity lives downstream. This principle eliminates that complexity before it starts.

A machine produces a critical event: temperature spikes, a part completes, a cycle finishes. That event is not a single number. It is a timestamp, multiple measurements, status flags, and identifiers. All captured at the same moment.

Now consider two ways to publish this data:

  • First approach: Each value goes to its own topic or node. Temperature here, timestamp there, status somewhere else. Downstream, you write code to collect these scattered pieces, match them by time, validate that they belong together, and handle edge cases when values arrive out of order or not at all. Every system consuming this data duplicates this logic.

  • Second approach: All related values travel together as one payload. Downstream systems receive complete, consistent events. No reassembly. No validation. No edge cases.

Publish related data together in a single payload whenever possible.

With OPC UA, subscribe to parent nodes that represent complete objects, not individual values. With MQTT, publish cohesive events on single topics rather than fragmenting them.
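
As a minimal sketch of the second approach over MQTT, the snippet below publishes one complete event as a single JSON payload using the paho-mqtt publish helper. The broker address, topic, and field names are assumptions for illustration.

  # publish_event.py - a minimal sketch of publishing one cohesive event snapshot
  # over MQTT. Broker address, topic, and field names are illustrative assumptions.
  import json
  import time
  import paho.mqtt.publish as publish

  event = {
      "timestamp": time.time(),      # captured once, for the whole snapshot
      "machine_id": "press01",
      "temperature_c": 87.4,
      "cycle_count": 10412,
      "status": "OK",
  }

  # One topic, one payload: downstream consumers receive the complete event
  # and never have to reassemble scattered values.
  publish.single(
      "plant-a/line4/press01/events/cycle-complete",
      payload=json.dumps(event),
      qos=1,
      hostname="broker.example.local",
  )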

When you cannot control the source and data arrives fragmented, handle the reassembly once in Connectware and republish complete snapshots. Use sequence numbers or commit signals to guarantee consistency. Do not force every downstream consumer to solve the same problem.
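
Inside Connectware that reassembly would typically be expressed once as a mapping or rule; the Python sketch below only illustrates the underlying buffering logic, collecting fragments under a shared sequence number and releasing the snapshot once it is complete. The field names and completeness rule are assumptions.

  # reassemble.py - a sketch of the buffering logic only; in practice this lives
  # once in Connectware, not in every consumer. Fragment keys and the
  # completeness rule are illustrative assumptions.
  REQUIRED_FIELDS = {"timestamp", "temperature_c", "cycle_count", "status"}
  buffers: dict[int, dict] = {}   # partial snapshots, keyed by sequence number

  def on_fragment(seq: int, field: str, value) -> dict | None:
      """Add one fragment; return the complete snapshot once all fields arrived."""
      snapshot = buffers.setdefault(seq, {})
      snapshot[field] = value
      if REQUIRED_FIELDS <= snapshot.keys():
          return buffers.pop(seq)   # complete: republish this as one payload
      return None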

Keep machine-level data generic and complete. Build use-case-specific payloads through transformation in Connectware, not at the source. This separation keeps machine configurations reusable while giving you the flexibility to serve different consumers.


5. Adopt a DevOps Mindset

Change must be safe, repeatable, and traceable. Manual deployments cannot deliver this at scale. This principle enforces consistency through automation.

Manual deployments do not scale. Every time someone makes a change by hand, you risk inconsistency, errors, and the loss of institutional knowledge. As your Connectware deployment grows, these risks compound.

Treat your pipelines as the only way to deploy changes to production to avoid system drift.

CI/CD pipelines are about more than efficiency. When every change flows through a pipeline, you gain:

  • Consistency: The same process runs every time, eliminating human error.

  • Traceability: Every change is logged, reviewed, and traceable to its origin.

  • Collaboration: Teams can work in parallel with confidence that integration rules are enforced.

Beyond pipelines, invest in observability from day one. Build debugging and auditing capabilities into your deployment process before you need them. Retrofitting is always harder.

Create runbooks for common scenarios so operational knowledge does not live only in people's heads.


6. Separate Development and Production Environments

Experimentation and stability require different environments. This principle keeps them separate so that one never puts the other at risk.

The pressure to move fast can blur the line between development and production.

Maintain separate Connectware instances for each. Your development environment is where you experiment, break things, and learn. Your production environment is where stability matters. Keep them isolated so that mistakes in one do not affect the other.

Every Connectware license comes with two license keys; leverage them.

Use the same configurations and templates in both environments. Only the environment-specific parameters should differ. This ensures that what you test is what you deploy. Establish a clear promotion workflow: changes move from development to production through a defined, reviewable process.
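
One way to implement that idea is to resolve environment-specific values at deployment time from a per-environment parameter file, while the templates themselves stay identical. The sketch below assumes hypothetical file names and keys.

  # select_params.py - a sketch of environment-specific parameter selection.
  # File names and keys are illustrative assumptions; the service templates
  # themselves stay identical across environments.
  import json
  import os
  from pathlib import Path

  def load_parameters(environment: str | None = None) -> dict:
      """Load the parameter set for the target environment (dev or prod)."""
      environment = environment or os.environ.get("TARGET_ENV", "dev")
      if environment not in ("dev", "prod"):
          raise ValueError(f"unknown environment: {environment}")
      return json.loads(Path(f"parameters/{environment}.json").read_text())

  # Example: the pipeline sets TARGET_ENV; everything else stays the same.
  params = load_parameters()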

For complex deployments, consider adding a staging environment that mirrors production as closely as possible. This gives you a final validation step before changes go live.

7. Keep CI/CD Pipelines Tight and Manageable

Fast, focused pipelines encourage frequent iteration. Slow, monolithic ones create bottlenecks. This principle keeps delivery speed high as systems grow.

A pipeline that deploys everything on every change sounds thorough, until it takes 45 minutes to run and fails unpredictably. Monolithic pipelines become slow, fragile, and frustrating.

Design your pipelines to be modular and configurable.

You do not need a separate pipeline for every possible scenario. Instead, make your pipelines configurable through variables. This keeps your pipeline landscape manageable while still allowing flexibility.

For example, use pipeline variables to select the target environment, the type of service to deploy, or the scope of the deployment. This approach avoids a proliferation of pipelines and makes maintenance easier.

Additionally, consider splitting deployment logic into different Ansible playbooks or scripts, which can be triggered conditionally from your pipeline. This further increases modularity and reusability.

Use change detection so that only the relevant stages run. If a mapping service changes, there is no need to redeploy every connection service.
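
As a sketch of that change detection, the script below asks git which files changed and maps them to the service groups that actually need redeployment. The directory names and diff range are assumptions, and a pipeline variable could pass the target environment or scope the same way.

  # changed_services.py - a sketch of change detection for a pipeline stage.
  # Directory names and the diff range are illustrative assumptions.
  import subprocess

  WATCHED_DIRS = ("asset-connections", "mappings", "enterprise-connections")

  def changed_service_dirs(diff_range: str = "HEAD~1..HEAD") -> set[str]:
      """Return the service groups touched by the latest change."""
      result = subprocess.run(
          ["git", "diff", "--name-only", diff_range],
          capture_output=True, text=True, check=True,
      )
      changed = set()
      for path in result.stdout.splitlines():
          for directory in WATCHED_DIRS:
              if f"/{directory}/" in f"/{path}":
                  changed.add(directory)
      return changed

  if __name__ == "__main__":
      # Deploy only what changed, e.g. skip connection services when only a
      # mapping service was touched.
      print(sorted(changed_service_dirs()))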

Keep execution times short. Fast pipelines encourage frequent commits and rapid iteration. Slow pipelines encourage batching changes together, which increases risk and makes debugging harder.

Where dependencies permit, structure your pipelines to run stages in parallel. This speeds up overall execution and makes better use of your infrastructure.

8. Design a Scalable Agent Architecture

Agents distribute load and risk. This principle makes their placement a deliberate architectural choice that shapes resilience and performance.

Agents are often introduced as a technical necessity to connect assets, but at scale they become an architectural decision. Where agents run and how many are deployed directly affects load distribution, fault isolation, and the ability to grow the system safely.

The primary purpose of agents is to distribute load and risk across your deployment.

Agents offload heavy processing from the main Connectware instance. Instead of routing all data through a central system for transformation, agents can preprocess data locally. For example, use the Rule Engine on agents to filter, aggregate, or normalize data before it reaches the main instance. This reduces network traffic, lowers central processing load, and keeps data paths efficient.
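
In Connectware itself this preprocessing is done with the Rule Engine on the agent; the Python sketch below only illustrates the kind of reduction meant here, collapsing a high-frequency reading into per-window aggregates before anything leaves the agent. The window size and field names are assumptions.

  # edge_aggregate.py - illustrates the kind of local preprocessing meant above;
  # in Connectware this would be configured via the Rule Engine, not hand-written
  # code. Window size and field names are illustrative assumptions.
  from statistics import mean

  WINDOW_SIZE = 10   # e.g. collapse 10 raw samples into 1 forwarded value
  _window: list[float] = []

  def on_raw_sample(value: float) -> dict | None:
      """Buffer raw samples locally; emit one aggregate per full window."""
      _window.append(value)
      if len(_window) < WINDOW_SIZE:
          return None
      aggregate = {"avg": mean(_window), "min": min(_window), "max": max(_window)}
      _window.clear()
      return aggregate   # only this reduced payload leaves the agent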

Agent deployment: close to assets vs. centralized

Agents do not always need to be physically close to the assets they connect to. Scalable load and risk distribution can also be achieved by deploying multiple agents in a centralized environment, such as Kubernetes. Placing agents on hardware that is close to assets is particularly relevant if you want to minimize network dependencies or keep unencrypted data paths short. Otherwise, running agents centrally can provide the same architectural benefits, especially when leveraging container orchestration for scalability and fault isolation.

As your deployment grows, add agents based on clear criteria: increasing processing load, the need to isolate risks from specific assets or locations, or organizational boundaries that require separation. Each agent creates a natural fault isolation boundary. If one agent fails, it affects only the assets it manages, not your entire deployment.

Do not wait for performance problems to appear. Monitor agent performance proactively and scale before issues affect production. A well-designed agent architecture grows smoothly with your deployment rather than becoming a bottleneck.

9. Level Up Monitoring and Logging

When problems occur, centralized observability makes the difference between minutes and hours of troubleshooting. This principle builds that capability from the start.

When something goes wrong at 2 AM, you need answers fast. Centralized logs and metrics make the difference between a quick fix and hours of digging.

Collect logs from all Connectware components and agents in one place.

Define the metrics that matter (message throughput, connection status, service health), and track them consistently. Set up alerting for critical conditions so you detect and address problems before they impact production.
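
If you run companion scripts or custom bridges alongside Connectware, the sketch below shows one way to expose such metrics for scraping with prometheus_client. The metric names and port are assumptions, and the same metrics can feed whatever alerting stack you already use.

  # metrics_sketch.py - a sketch of tracking the metrics named above from a
  # companion script, using prometheus_client. Metric names and the port are
  # illustrative assumptions.
  import time
  from prometheus_client import Counter, Gauge, start_http_server

  MESSAGES_TOTAL = Counter("bridge_messages_total", "Messages forwarded")
  CONNECTION_UP = Gauge("bridge_connection_up", "1 if the connection is healthy")

  def on_message_forwarded() -> None:
      MESSAGES_TOTAL.inc()

  def on_connection_change(healthy: bool) -> None:
      CONNECTION_UP.set(1 if healthy else 0)

  if __name__ == "__main__":
      start_http_server(9100)   # expose /metrics for scraping; alert on it
      while True:
          time.sleep(60)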

Observability is not just for troubleshooting. Retain logs and metrics for long-term trend analysis. This data helps you understand how your system behaves over time, plan capacity, and demonstrate compliance during audits.

If you already have monitoring infrastructure in place, integrate Connectware into it. Consistency across your observability stack makes operations simpler and reduces the number of tools your teams need to learn.

Getting Started

If you are setting up Connectware for the first time, you have a valuable opportunity: build these principles into your deployment from day one. Getting them right early is far easier than retrofitting them later. Use this guide as a checklist during your initial setup.

If you are working with an existing deployment, start with the principles that address your most pressing challenges and build from there. Even adopting a few will make your system easier to operate, scale, and hand off to others.

Either way, the goal is the same: a Connectware deployment that grows with your needs, not one that requires a rewrite every time you add a site or onboard a new team.
