Rule Engine

The Rule Engine is a core component of Connectware's data processing capabilities, offering a flexible approach to handling and transforming data streams. With the Rule Engine, you can apply rules that perform complex data manipulations. You define these rules within the resources section of your service commissioning files. To test data processing rules before deploying them to your live system, you can use the Rule Sandbox.
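
For example, a rule list attached to a Cybus::Mapping resource in a service commissioning file might look like this (a minimal sketch; the topic names and the JSONata expression are illustrative):

```yaml
resources:
  energyMapping:
    type: Cybus::Mapping
    properties:
      mappings:
        - subscribe:
            topic: machines/+/energy
          publish:
            topic: factory/energy/processed
          rules:
            # Rules run top to bottom on every incoming message
            - transform:
                expression: '{ "kwh": $.value / 1000 }'
```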

How Does the Rule Engine Work?

The Rule Engine uses JSONata for data transformation. Additionally, Connectware provides custom rules for specialized data processing and manipulation tasks, extending the core JSONata functionality.

For example, you could collect data from different machines on a specific topic, such as energy consumption, and preprocess this data. The Rule Engine allows you to aggregate, filter, or transform this data before passing it on to other services or applications.

  • Rules are applied directly within Connectware's data processing pipeline, enabling powerful data transformations.

  • They are defined as a list (array) and executed sequentially, from top to bottom, as shown in the sketch after this list.

  • Individual rules may have specific input format requirements and can conditionally pass or block data messages to the next rule, such as with a change-of-value (COV) filter.

  • Each rule is configured as a named object with properties tailored to its specific type.
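
As a sketch of this sequential execution (expressions are illustrative), the following pipeline first filters and then transforms; a message blocked by the filter never reaches the transform rule:

```yaml
rules:
  # Step 1: only messages with status "ok" pass on to the next rule
  - filter:
      expression: '$.status = "ok"'
  # Step 2: runs only for messages the filter passed
  - transform:
      expression: '{ "value": $.value, "quality": "good" }'
```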

Types of Data Processing Rules

The following types of data processing rules are available.

For a detailed description of all Rule Engine parameters, see Data Processing Rules.

  1. Aggregate Messages with the Burst Rule

Consolidate multiple messages into a payload array using the burst rule. You can set this up based on time intervals or a maximum size, which simplifies your data handling and improves transmission efficiency.
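
A burst rule might be configured as follows (the parameter names interval and maxSize follow the description above; verify them against the Data Processing Rules reference):

```yaml
rules:
  # Collect incoming messages into one array, emitted every second
  # or as soon as 10 messages have accumulated, whichever comes first
  - burst:
      interval: 1000   # milliseconds
      maxSize: 10
```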

  2. Aggregate Data with the Collect Rule

Use the collect rule to combine data from multiple endpoints. This rule acts as a Last Value Cache, creating a single object containing the most recent data from each subscribed endpoint or topic. Each data point in the output is identified by a key, which can be either the endpoint itself or a custom label you've defined.
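
A sketch of a collect rule combining two labeled endpoints (the subscribe-list syntax and the labels are illustrative and may differ in detail from your Connectware version):

```yaml
mappings:
  - subscribe:
      - endpoint: ::temperatureEndpoint
        label: temperature
      - endpoint: ::pressureEndpoint
        label: pressure
    rules:
      # Emit one object holding the latest value per label,
      # e.g. { "temperature": ..., "pressure": ... }
      - collect: {}
    publish:
      topic: machine/combined
```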

  3. Apply Change-of-Value (COV) Filtering

Utilize the change-of-value (cov) filter to transmit data only when it has changed compared to the previous message. You can specify a key to monitor for changes and set optional deadband parameters. This helps optimize your data transfer and reduce network bandwidth usage.
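
For example (key and deadband as described above; the values are illustrative):

```yaml
rules:
  # Forward a message only if "temperature" changed by more than 0.5
  # compared to the previously forwarded message
  - cov:
      key: temperature
      deadband: 0.5
```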

  4. Filter Data

Employ filter rules to evaluate incoming data against a JSONata expression. You can choose to forward or block data based on the evaluation result. This allows you to focus on relevant information and reduce data noise.
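
For example, the following sketch forwards a message only when the JSONata expression evaluates to true (the field name and threshold are illustrative):

```yaml
rules:
  # Block all messages with a temperature of 80 or below
  - filter:
      expression: '$.temperature > 80'
```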

  5. Convert Data Formats with the Parse Rule

Employ the parse rule to transform non-JSON input data into a JSON object. You can specify the format of your input data, and the rule will convert it accordingly. This allows you to standardize your data format, making it compatible with other rule operations in your processing pipeline. After parsing, you can easily pass the resulting JSON object to subsequent rules for further processing or analysis.
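
A minimal parse step might look like this (assuming json is among the supported format values; see the Data Processing Rules reference for the full list):

```yaml
rules:
  # Interpret the raw input as JSON so later rules can address its fields
  - parse:
      format: json
```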

  6. Use Context Variables

Store values as context variables for later use within the Rule Engine pipeline. You can use the setContextVars rule to pass data between rule steps or store intermediate results for further processing. This allows you to create complex data processing workflows.
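
A sketch of setting and reusing a context variable (the vars syntax and the $context access path are assumptions to be verified against the Data Processing Rules reference):

```yaml
rules:
  # Remember the raw value for a later rule step
  - setContextVars:
      vars:
        rawValue: '$.value'
  # Reuse the stored variable in a later transformation
  - transform:
      expression: '{ "value": $.value, "raw": $context.vars.rawValue }'
```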

  7. Store Intermediate States with the Stash Rule

Save intermediate states of messages for later reference using the stash rule. You can then use these stashed values in filter or transform rules, giving you additional data management capabilities and enabling more advanced processing techniques.
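
For example (the label is illustrative; how stashed values are referenced in later expressions is described in the Data Processing Rules reference):

```yaml
rules:
  # Save the untouched message under the label "original"
  - stash:
      label: original
  # Later filter or transform rules can compare against the stashed message
  - transform:
      expression: '{ "processed": $.value * 2 }'
```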

  8. Transform Data

Use JSONata expressions in the Rule Engine to transform your data in real-time. This query and transformation language allows you to convert, restructure, and perform calculations on your data, making it easier to integrate and process information from various sources.
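
A transform rule wraps a JSONata expression; for example, converting Fahrenheit to Celsius (the input field name is illustrative):

```yaml
rules:
  - transform:
      expression: |
        {
          "celsius": ($.fahrenheit - 32) * 5 / 9
        }
```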

Use Cases for the Rule Engine

Whether you're working with Industrial IoT sensor data, business analytics, or any other data-intensive application, the Rule Engine provides flexible solutions for common data processing tasks.

Here are some example use cases for the Rule Engine; a combined pipeline sketch follows the list:

  • Data cleansing: Remove invalid or inconsistent data.

  • Data enrichment: Add missing information or context.

  • Data normalization: Standardize data formats.

  • Data filtering: Select specific data based on criteria.

  • Data transformation: Convert data from one format to another.
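
Several of these use cases can be combined in a single rule pipeline. The following is a minimal sketch, assuming illustrative field names and expressions:

```yaml
rules:
  # Cleansing: drop messages without a usable value
  - filter:
      expression: '$.value != null'
  # Normalization and enrichment: coerce to a number and add a unit
  - transform:
      expression: '{ "value": $number($.value), "unit": "kWh" }'
  # Change-based filtering: forward only when the value actually changed
  - cov:
      key: value
```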

Consider the potential impact of rule changes on your data processing pipeline. Test your rules thoroughly to ensure they produce the desired results.

Testing Data Processing Rules

The Rule Sandbox offers a safe and interactive environment for testing and refining your data processing rules. It allows you to experiment with real-time data transformations, visualize the effects of your rules instantly, and debug any issues before deploying to your live system.

For more information, see Rule Sandbox.
