Cybus::Endpoint
An endpoint describes the address of a single data endpoint within the language of a specific protocol. A single endpoint address is always mapped into a single topic of the internal broker.
The topic can either be specified explicitly by using the topic property, or it is auto-generated (see the note below). In a concrete service, the endpoint topic will typically be mapped (using a mapping resource) into an application-specific topic, which is then used by other services such as a dashboard application.
The actual data address within the protocol is specified by the properties below the subscribe, read, or write property of the endpoint. These important properties are thus not immediately below the endpoint but one level lower, namely below subscribe / read / write, which in turn is below the endpoint. These two different levels must not be confused.
Endpoints of type read or write generate an additional topic with a /res suffix (“result”) where the results of the operation are sent, loosely following the JSON-RPC 2.0 Specification.
A read endpoint named myEndpoint will listen for requests on the MQTT topic myEndpoint/req and publish the result as a message to the MQTT topic myEndpoint/res.
A write endpoint named myEndpoint will similarly listen for requests on the MQTT topic myEndpoint/set and publish the result as a message to the MQTT topic myEndpoint/res.
The data message on the result topic will have the following format in the successful case:
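For illustration, a success message might look like the following sketch (the content of result and the exact timestamp resolution depend on the protocol implementation; the values shown here are placeholders):

```json
{
  "id": 42,
  "timestamp": 1700000000000,
  "result": { "value": 23.5 }
}
```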
In this result message, id is the request identifier that was sent in the original request, timestamp is the Unix timestamp of when the result was received, and result is the JSON object with the actual result; its content depends on the concrete protocol implementation.
If there was an error, the resulting JSON object does not contain a result property but instead an error property. The content of the error property also depends on the concrete protocol implementation. Often it is simply a string containing an explanatory error message. Hence, in the error case the data message on the result topic will have the following format:
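For illustration, an error message might look like the following sketch (the error content is protocol-specific; the string shown here is only a placeholder):

```json
{
  "id": 42,
  "timestamp": 1700000000000,
  "error": "Read request timed out"
}
```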
When an endpoint should subscribe to some data point on the device, it should be defined with the subscribe operation. Some protocols support such a subscription directly (e.g. OPC UA), whereas others only support regular polling of the data point from the Connectware side. Depending on the available choices, the actual behaviour can be chosen by the properties in the subscribe section.
If the endpoint is set to polling, there is the choice between specifying an interval or a time schedule expression for polling from the Connectware side.
An interval specifies the waiting time between subsequent polls. There is no guarantee on the exact time interval, only that the average interval should be matched, i.e. if the protocol needed longer for one poll, the next interval will be chosen shorter. Typical observed intervals for a specified interval of 1000 milliseconds are in the range of 950 to 1050 milliseconds, but this also strongly depends on the protocol and device behaviour.
A time schedule expression is specified in the cronExpression syntax, for example "0 * * * *" for “every hour at minute 00”, such as 00:00h, 01:00h, 02:00h, and so on. In this case there is no guarantee on the exact time when data is received, but one polling will be triggered for each time expression match. So you can rely on receiving 24 polling results per day if “once per hour” has been specified in the cronExpression.
For any subscribe endpoint in the protocols where polling is available, you can either specify an interval, or a cronExpression (which takes precedence over the interval property), or neither, in which case interval will be used with its default value.
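As a minimal sketch, a polling subscribe endpoint in a service commissioning file could look as follows. The connection name s7Connection and the address value are illustrative placeholders; the actual address properties depend on the chosen protocol (see the protocol-specific parameters below):

```yaml
temperatureEndpoint:
  type: Cybus::Endpoint
  properties:
    protocol: S7
    connection: !ref s7Connection    # reference to a connection resource defined elsewhere
    subscribe:
      address: DB10,R0               # protocol-specific data address (placeholder)
      interval: 1000                 # poll every 1000 ms; alternatively use cronExpression
```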
The endpoint resource supports the following properties:

protocol (enum, required)
connection (string, required)
subscribe (object, optional*)
read (object, optional*)
write (object, optional*)
rules (object[], optional)
qos (integer, optional)
retain (boolean, optional)
targetState (enum, optional)
topic (string, optional)
agentName (string, obsolete)
buffering (object, optional)
inputBuffering (object, optional)
publishError (boolean, optional)
Identifies the protocol for which a connection should be established.
The protocol property is required, type: enum. The value of this property must be equal to one of the following:
Ads
Bacnet
EthernetIp
Focas
GenericVrpc
Hbmdaq
Heidenhain
Http
InfluxDB
Kafka
Modbus
Mqtt
Mssql
Opcda
Opcua
S7
Shdr
Sinumerik
Sopas
Sql
Systemstate
Werma
The connection property is required, type: string.
One of the subscribe, read, or write properties is required, type: object.
The rules property is optional. You may specify rules here that are applied to the payload before it is first sent to the internal broker. These rules will transform the raw data as received from this protocol and affect all further steps in the processing chain.
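As a sketch, a rules entry could look as follows; the transform rule and its expression are illustrative placeholders, and the full rule syntax is documented with the rules engine:

```yaml
rules:
  - transform:
      expression: |
        { "temperature": value, "unit": "degC" }
```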
MQTT Quality of Service (QoS) for the internal messaging from the endpoint to the internal MQTT broker. If this endpoint runs on an agent, setting this to 1 instead of the default 0 will activate the simple buffering of the MQTT client implementation.
The qos property is optional, type: integer, must be one of 0, 1, 2. Default: 0.
QoS level 2 is most likely not useful in the industry context and is not recommended here.
Whether the last message should be retained (last-value-cached) on the internal MQTT broker. If this endpoint runs on an agent, setting this to true instead of the default false might be useful in certain applications to still have a value on the topic if the agent disconnects. However, in other applications this might not make sense.
The retain property is optional, type: boolean, must be one of true, false. Default: false.
The state this resource should be in after start-up.
The targetState property is optional, type: enum, must be one of enabled, disabled. Default: enabled.
Explicit topic name to which this endpoint address should be mapped. In the default case, the full topic expands to services/<serviceId>/<topic> (see the note on topic prefixing below).
The topic property is optional, type: string.
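A sketch combining several of these optional properties (all values chosen for illustration only; the subscribe address is a placeholder):

```yaml
statusEndpoint:
  type: Cybus::Endpoint
  properties:
    protocol: S7
    connection: !ref s7Connection
    subscribe:
      address: DB10,X0.0       # protocol-specific data address (placeholder)
      interval: 500
    qos: 1                     # enables simple buffering when running on an agent
    retain: true               # keep the last value cached on the internal broker
    targetState: enabled
    topic: machine1/status     # expands to services/<serviceId>/machine1/status by default
```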
Obsolete - this value is no longer being used. The agentName of the referenced connection is always used if this connection and endpoint are being used on an agent instance, separate from the Connectware.
The buffering section can optionally switch on output data buffering on write endpoints. With this feature, write operations are buffered while the connection to the device is lost, in order to avoid data loss.
The buffering mechanism kicks in when a device disconnection is detected and will start buffering any incoming values. After the connection is reestablished, buffered messages will be written to the machine (“flushed”).
The flushing of the buffer is implemented to handle subsequent disconnections during flushing correctly. In such a case newly incoming values will be buffered, too. Once the connection is reestablished again, the flushing will continue where it left off.
By default, this feature is switched off. To enable it, the property enabled must be set to true, and most likely additional properties should be set according to the expected behaviour in the actual application scenario. The supported properties of buffering are:
enabled (default: false): Whether buffering should be enabled or not when the connection to the source device is lost.
keepOrder (default: true): Whether to keep the order of messages when going into redelivery mode after an endpoint came back online.
burstInterval (default: 100): Time in milliseconds to wait between each re-publishing of buffered messages after the connection is re-established.
burstSize (default: 1): The number of messages to send in one burst when flushing the buffer upon re-connection.
bufferMaxSize (default: 100000): The maximum number of messages to be buffered. Older messages are deleted when this limit is reached.
bufferMaxAge (default: 86400, i.e. one day): The number of seconds the buffered data will be kept. If messages have been buffered for longer than this number of seconds, they will be discarded.
It is important to keep a balanced configuration of these properties to avoid potentially unwanted behaviour. For example, if a very large buffer (bufferMaxSize) is configured along with a very slow burstInterval and a small burstSize, flushing the buffer could take very long, and depending on the bufferMaxAge it could be possible for messages to expire. The values should be configured based on the target device capabilities.
The keepOrder property, which is switched on by default, will keep the order of arriving messages while a flush of the buffer is in progress. This will delay newly arriving messages until all the buffered messages have been sent.
For example, if the values 1, 2, 3, 4 were in the buffer, the buffer starts flushing after a reconnection, and the values 5, 6 are received in the meantime, then the machine will receive the values in exactly this order: 1, 2, 3, 4, 5, 6. If this property is set to false and the same scenario is replicated, the order of arrival of the new values is unspecified and the end result would be an interleaved set of values, for example: 1, 5, 2, 3, 6, 4.
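A sketch of a write endpoint with output buffering enabled (the property values are chosen for illustration only, and the protocol-specific address under write is a placeholder):

```yaml
setpointEndpoint:
  type: Cybus::Endpoint
  properties:
    protocol: S7
    connection: !ref s7Connection
    write:
      address: DB10,R4          # protocol-specific data address (placeholder)
    buffering:
      enabled: true             # buffer values while the device is disconnected
      keepOrder: true           # flush buffered values before newly arriving ones
      burstInterval: 200        # wait 200 ms between bursts when flushing
      burstSize: 5              # send 5 messages per burst
      bufferMaxSize: 10000      # keep at most 10000 messages
      bufferMaxAge: 3600        # discard messages buffered for longer than one hour
```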
The input data to each endpoint can optionally be managed through an individual input buffer (also called input queue) to establish fine-grained control over high data rate behaviour. By default, this input buffering is disabled and all input data is handled on the global event queue instead, which works fine as long as there is no risk of out-of-memory exceptions due to unexpectedly slow data processing or forwarding.
When enabling the individual input buffer, the buffer properties determine the behaviour in situations where the input buffer is filling up. The buffer fills up when the message arrival rate is larger than the processing rate or the forwarding (publishing) rate; in other words, when messages arrive faster than they can be processed or forwarded (published). If this situation persists for a longer time, the input buffer will reach its configured capacity limits and arriving messages will be dropped, so that the system does not run into an uncontrollable out-of-memory exception. This is a fundamental and unavoidable property of distributed systems due to their finite resources. The actual behaviour of the input buffer can, however, be adapted to the application scenario by setting the properties in the inputBuffering section (optional).
Supported properties are (all optional):
enabled (type: boolean, default: false): Enable or disable input buffering.
maxInputBufferSize (type: integer, default: 5000): Maximum number of input messages that are queued in the input buffer. Exceeding messages will be discarded. Adjust this to a higher value if you are handling bursty traffic.
maxConcurrentMessages (type: integer, default: 2): Maximum number of concurrently processed messages as long as the input buffer queue is non-empty.
waitingTimeOnEmptyQueue (type: integer, default: 10): Waiting time in milliseconds after the input buffer queue ran empty and before checking again for newly queued input messages. Regardless of this value, while the input buffer queue is non-empty all messages will be processed without waiting time in between until the queue is empty again.
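A sketch of the inputBuffering section as it would appear under the endpoint's properties (values chosen for illustration only):

```yaml
inputBuffering:
  enabled: true                 # use an individual input queue for this endpoint
  maxInputBufferSize: 20000     # allow larger bursts before messages are dropped
  maxConcurrentMessages: 4      # process up to 4 messages concurrently
  waitingTimeOnEmptyQueue: 10   # check again 10 ms after the queue ran empty
```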
Controls whether error information is published in a structured format when an operation fails. When publishError is set to true, errors will be published as a structured object containing both an error code and an error message:
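A sketch of such a structured error, assuming it is published on the endpoint's result topic like the plain-string error shown above; the field names code and message and the values are assumptions for illustration, as the exact content depends on the protocol implementation:

```json
{
  "id": 42,
  "timestamp": 1700000000000,
  "error": {
    "code": "BadNodeIdUnknown",
    "message": "The node id refers to a node that does not exist"
  }
}
```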
The publishError property is optional, type: boolean, must be one of true, false. Default: false. It is compatible with the HTTP and OPC UA protocols.
Without this property, errors may still be published, but only as a simple text string, like: "error": "An error occurred"
* One out of subscribe, read, or write is required.
Reference to a connection resource.
Depending on the protocol type, the subscribe, read, or write property needs the following parameters (properties), which specify the actual data address in the respective protocol:
Ads
Bacnet
EthernetIp
GenericVrpc
Focas
Hbmdaq
Heidenhain
Http
InfluxDB
Kafka
Modbus
Mqtt
Mssql
Opcda
Opcua
S7
Shdr
Sinumerik
Sopas
Sql
Systemstate
Werma
Strictly speaking, the protocol’s properties mentioned here are not properties of the endpoint itself but of the subscribe, read, or write property of the endpoint. In other words, those important properties must appear one level deeper in the YAML file: not directly below the endpoint but below subscribe / read / write, which in turn is below the endpoint. These two different levels must not be confused.
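A sketch illustrating the nesting levels, assuming an OPC UA connection named opcuaConnection; the nodeId value is only an illustrative placeholder for the protocol-specific address properties:

```yaml
readEndpoint:
  type: Cybus::Endpoint
  properties:
    protocol: Opcua
    connection: !ref opcuaConnection
    read:                             # operation level (subscribe / read / write)
      nodeId: ns=1;s=Temperature      # protocol-specific address, one level below read
```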
The rules property is of type array, where each entry is a rule object.
The provided topic name is prefixed with the value of a global parameter, which by default has the value services/<serviceId>, where <serviceId> is replaced with the actual service ID of the current service. Hence, in the default case the full endpoint topic will expand to: services/<serviceId>/<topic>
See the explanation of that global parameter if alternative topic structures are needed.
Providing a custom topic and avoiding an additional mapping resource improves overall performance, as the message has to travel one hop less. Endpoints with custom topics can still be mapped using a regular mapping resource.