.. _user/services/structure/resources/endpoint:

***************
Cybus::Endpoint
***************

An *endpoint* describes the address of a single data endpoint within the language of a specific protocol. A single endpoint address is always mapped into a single topic of the internal broker. The topic can either be specified explicitly by using the `topic`_ property, or it is auto-generated (see note at `topic`_).

In a concrete service, the endpoint topic will typically be mapped (using a :ref:`user/services/structure/resources/mapping` resource) into an application-specific topic, which will then be used by other services such as a dashboard application.

The actual data address of the protocol is specified by the properties below the :ref:`subscribe, read, or write <subscribe>` property of the endpoint. These important properties are not immediately below the *endpoint* but one level lower, namely below *subscribe / read / write*, which in turn is below *endpoint*. These two levels must not be confused.

.. _user/services/structure/resources/endpoint/results:

Operation results
==============================

Endpoints of type `read` or `write` generate an additional topic with a ``/res`` suffix ("result") to which the results of the operation are sent, loosely following the JSON-RPC 2.0 Specification.

- A `read` endpoint named `myEndpoint` will listen for requests on the MQTT topic ``myEndpoint/req``, and will publish the result as a message to the MQTT topic ``myEndpoint/res``.

- A `write` endpoint named `myEndpoint` will similarly listen for requests on the MQTT topic ``myEndpoint/set``, and will publish the result as a message to the MQTT topic ``myEndpoint/res``.

In the successful case, the data message on the result topic will have the following format:

.. code-block:: json
   :linenos:

   {
     "id": 29194,
     "timestamp": 1629351968526,
     "result": { "value": 0 }
   }

In this result message, ``id`` is the request identifier that was sent with the original request, ``timestamp`` is the Unix timestamp of when the result was received, and ``result`` is the JSON object with the actual result; its content depends on the concrete protocol implementation.

If there was an error, the resulting JSON object does not contain a ``result`` property but instead an ``error`` property. The content of the ``error`` property also depends on the concrete protocol implementation. Often it is simply a string containing an explanatory error message. Hence, in the error case the data message on the result topic will have the following format:

.. code-block:: json
   :linenos:

   {
     "id": 29194,
     "timestamp": 1629351968526,
     "error": "Wrong input values"
   }

.. _user/services/structure/resources/endpoint/polling:

Polling interval and Subscribe
==============================

When an endpoint should subscribe to some data point on the device, it should be defined with the `subscribe` operation. Some protocols support such a subscription directly (e.g. OPC UA), whereas others only support regularly *polling* the data point from the Connectware side. Depending on the available choices, the actual behaviour can be chosen by the properties in the :ref:`subscribe <subscribe>` section.

If the endpoint is set to *polling*, there is the choice between specifying an *interval* or a *time schedule expression* for polling from the Connectware side.

- An ``interval`` specifies the waiting time between subsequent polls. There is no guarantee on the exact time interval, only that the time interval will be matched on average, i.e. if the protocol needed longer for one interval, the next one will be chosen shorter.
  Typical numbers for specified time intervals of 1000 milliseconds are actual intervals in the range of 950 to 1050 milliseconds, but this also strongly depends on the protocol and device behaviour.

- A *time schedule expression* is specified in the ``cronExpression`` syntax (see https://github.com/node-cron/node-cron), for example ``"0 * * * *"`` for "every hour at minute 00", such as 00:00h, 01:00h, 02:00h, and so on. In this case there is no guarantee on the exact time when data is received, but one polling will be triggered for each time expression match. So it can be relied on that 24 polling results per day are received if "once per hour" has been specified in the ``cronExpression``.

For any subscribe endpoint in the protocols where polling is available, you can either specify an *interval*, or a *cronExpression* (which takes precedence over the *interval* property), or neither, in which case *interval* will be used with its default value.

Properties
==========

================= ============ ============
Property          Type         Required
================= ============ ============
`protocol`_       ``enum``     **Required**
`connection`_     ``string``   **Required**
`subscribe`_      ``object``   Optional*
read              ``object``   Optional*
write             ``object``   Optional*
`rules`_          ``object[]`` Optional
`qos`_            ``integer``  Optional
`retain`_         ``boolean``  Optional
`targetState`_    ``enum``     Optional
`topic`_          ``string``   Optional
`agentName`_      ``string``   Obsolete
`buffering`_      ``object``   Optional
`inputBuffering`_ ``object``   Optional
================= ============ ============

\* One out of :ref:`subscribe, read, and write <subscribe>` is **required**.

protocol
--------

Identifies the protocol for which a connection should be established.

- is **required**
- type: ``enum``

The value of this property **must** be equal to one of those below:

.. include:: ../../../shared/protocols.rstinc

connection
----------

Reference to a :ref:`user/services/structure/resources/connection` resource.

- is **required**
- type: ``string``

.. _user/services/structure/resources/endpoint/subscribe:
.. _subscribe:

subscribe / read / write
------------------------

- one of these is **required**
- type: ``object``

Depending on the `protocol`_ type, this property needs the following parameters (properties), which specify the actual data address in the respective protocol:

- ``Ads`` :ref:`user/protocols/ads_endpoint`
- ``Bacnet`` :ref:`user/protocols/bacnet_endpoint`
- ``EthernetIp`` :ref:`user/protocols/ethernetIp_endpoint`
- ``GenericVrpc`` :ref:`user/protocols/genericVrpc_connection`
- ``Focas`` :ref:`user/protocols/focas_endpoint`
- ``Hbmdaq`` :ref:`user/protocols/hbmdaq_endpoint`
- ``Heidenhain`` :ref:`user/protocols/heidenhain_endpoint`
- ``Http`` :ref:`user/protocols/http_endpoint`
- ``InfluxDB`` :ref:`user/protocols/influxdb_endpoint`
- ``Kafka`` :ref:`user/protocols/kafka_endpoint`
- ``Modbus`` :ref:`user/protocols/modbus_endpoint`
- ``Mqtt`` :ref:`user/protocols/mqtt_endpoint`
- ``Mssql`` :ref:`user/protocols/mssql_endpoint`
- ``Opcda`` :ref:`user/protocols/opcda_endpoint`
- ``Opcua`` :ref:`user/protocols/opcua/opcuaClient_endpoint`
- ``Profinet`` :ref:`user/protocols/profinet_endpoint`
- ``S7`` :ref:`user/protocols/s7_endpoint`
- ``Shdr`` :ref:`user/protocols/shdr_endpoint`
- ``Sinumerik`` :ref:`user/protocols/sinumerik`
- ``Sopas`` :ref:`user/protocols/sopas_endpoint`
- ``Sql`` :ref:`user/protocols/sql_endpoint`
- ``Systemstate`` :ref:`user/protocols/systemstate_endpoint`
- ``Werma`` :ref:`user/protocols/werma_endpoint`

.. note:: Strictly speaking, the protocol's properties mentioned here are not properties of the endpoint but rather of the :ref:`subscribe / read / write <subscribe>` property of the endpoint. In other words, those important properties must appear one level deeper in the YAML file: not directly below *endpoint*, but below *subscribe / read / write*, which in turn is below *endpoint*. These two levels must not be confused.
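As a minimal sketch of this two-level structure, consider the following `read` endpoint. The connection name and the Modbus address values shown here are hypothetical placeholders; the actual protocol-specific properties must be taken from the respective protocol's endpoint documentation linked above:

.. code-block:: yaml
   :linenos:

   # Hypothetical sketch: the Modbus-specific address properties appear
   # one level below "read", not directly below "properties".
   myReadEndpoint:
     type: Cybus::Endpoint
     properties:
       protocol: Modbus
       connection: !ref myModbusConnection   # hypothetical connection resource
       read:
         fc: 3                               # protocol-specific properties
         address: 0                          # (placeholder values)
         length: 2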
rules
-----

- is optional
- type: ``array`` of :ref:`user/services/structure/resources/rules`

Here you may specify rules that are applied to the payload before it is first sent to the internal broker.

.. note:: These rules transform the raw data as received from this protocol. This affects all further steps in the processing chain.

.. _user/services/structure/resources/endpoint/qos:

qos
---

MQTT Quality of Service (QoS) for the internal messaging from the endpoint to the internal MQTT broker. If this endpoint runs on an :ref:`agent `, setting this to ``1`` instead of the default ``0`` will activate the simple buffering of the MQTT client implementation.

- is optional
- type: ``integer``, must be one of ``0``, ``1``, ``2``
- default: ``0``

.. note:: QoS level 2 is most likely not useful in the industry context and is not recommended here.

retain
------

Whether the last message should be retained (last-value-cached) on the internal MQTT broker. If this endpoint runs on an :ref:`agent `, setting this to ``true`` instead of the default ``false`` might be useful in certain applications, in order to still have a value on the topic if the agent disconnects. In other applications, however, this might not make sense.

- is optional
- type: ``boolean``, must be one of ``true``, ``false``
- default: ``false``

targetState
-----------

The state this resource should be in after start-up.

- is optional
- type: ``enum``, must be one of ``enabled``, ``disabled``
- default: ``enabled``

.. _user/services/structure/resources/endpoint/topic:

topic
-----

Explicit topic name to which this endpoint address should be mapped.

.. note:: The provided topic name is prefixed with the value of the :ref:`Cybus::MqttRoot ` global parameter. This global parameter by default has the value ``services/<serviceId>``, where ``<serviceId>`` is replaced with the actual :ref:`user/services/service-id` of the current service.
Hence, in the default case the full endpoint topic will expand to: ``services/<serviceId>/<topic>``

See the explanation at :ref:`Cybus::MqttRoot ` if alternative topic structures are needed.

Providing a custom topic and avoiding an additional mapping resource improves overall performance, as each message has to travel one hop less. Endpoints with custom topics can still be mapped using a regular mapping (see :ref:`user/services/structure/resources/mapping`).

- is optional
- type: ``string``

agentName
---------

Obsolete - this value is no longer used. If this connection and endpoint run on an :ref:`agent ` instance separate from the Connectware, the agentName of the referenced connection is always used.

.. _user/services/structure/resources/endpoint/buffering:

buffering
---------

The ``buffering`` section can optionally switch on output data buffering on `write` endpoints. With this feature it is possible to enable output buffering for write operations while the connection to the device is lost, in order to avoid data loss.

The buffering mechanism kicks in when a device disconnection is detected, and it will start buffering any incoming values. After the connection is reestablished, the buffered messages will be written to the machine ("flushed"). The flushing of the buffer is implemented to correctly handle subsequent disconnections during flushing. In such a case, newly incoming values will be buffered, too. Once the connection is reestablished again, the flushing will continue where it left off.

By default, this feature is switched off. To enable it, the property ``enabled`` must be set to ``true``, and most likely additional properties should be set according to the expected behaviour in the actual application scenario.

The supported properties of ``buffering`` are:

- ``enabled`` (default: ``false``)
  Whether buffering should be enabled when the connection to the source device is lost.
- ``keepOrder`` (default: ``true``)
  Whether to keep the order of messages when going into redelivery mode after an endpoint came back online.

- ``burstInterval`` (default: ``100``)
  Time in milliseconds to wait between each re-publishing of buffered messages after the connection is re-established.

- ``burstSize`` (default: ``1``)
  The number of messages to send in one burst when flushing the buffer upon re-connection.

- ``bufferMaxSize`` (default: ``100000``)
  The maximum number of messages to be buffered. Older messages are deleted when this limit is reached.

- ``bufferMaxAge`` (default: ``86400``, i.e. one day)
  The number of seconds the buffered data will be kept. If messages have been buffered for longer than this number of seconds, they will be discarded.

.. note:: It is important to keep a balanced configuration of these properties to avoid potentially unwanted behaviour. For example, if a very large buffer (``bufferMaxSize``) is configured along with a very slow ``burstInterval`` and a small ``burstSize``, flushing the buffer could take very long, and depending on ``bufferMaxAge`` messages could expire before they are delivered. The values should be configured based on the capabilities of the target device.

The ``keepOrder`` property, which is switched on by default, will keep the order of arriving messages while a flush of the buffer is in progress. This will delay newly arriving messages until all buffered messages have been sent. For example, if the values `1, 2, 3, 4` are in the buffer, the buffer starts flushing after a reconnection, and the values `5, 6` are received in the meantime, then the machine will get the values in exactly this order: `1, 2, 3, 4, 5, 6`. If this property is set to ``false`` in the same scenario, the order of arrival of the new values is unspecified and the end result would be an interleaved set of values, for example `1, 5, 2, 3, 6, 4`.

.. _user/services/structure/resources/endpoint/inputBuffering:

inputBuffering
--------------

The input data to each endpoint can optionally be managed through an individual input buffer (also called *input queue*) to establish fine-grained control over high-data-rate behaviour. By default, this input buffering is disabled and all input data is handled on the global event queue instead, which works fine as long as there is no risk of out-of-memory exceptions due to unexpectedly slow data processing or forwarding.

When the individual input buffer is enabled, its properties determine the behaviour in situations where the input buffer is filling up. The input buffer fills up when messages arrive faster than they can be processed or forwarded (published). If this situation persists for a longer time, the input buffer will reach its configured capacity limits and arriving messages will be dropped, so that the system does not run into an uncontrollable out-of-memory exception. This is a fundamental and unavoidable property of distributed systems due to their finite resources. However, the actual behaviour of the input buffer can be adapted to the application scenario by setting the properties in the optional ``inputBuffering`` section.

Supported properties are (all optional):

- ``enabled`` (type: `boolean`, default: ``false``)
  Enable or disable input buffering.

- ``maxInputBufferSize`` (type: `integer`, default: ``5000``)
  Maximum number of input messages that are queued in the input buffer. Exceeding messages will be discarded. Adjust this to a higher value if you are handling bursty traffic.

- ``maxConcurrentMessages`` (type: `integer`, default: ``2``)
  Maximum number of concurrently processed messages as long as the input buffer queue is non-empty.
- ``waitingTimeOnEmptyQueue`` (type: `integer`, default: ``10``)
  Waiting time in milliseconds after the input buffer queue has run empty and before checking again for newly queued input messages. Regardless of this value, while the input buffer queue is non-empty, all messages will be processed without any waiting time in between until the queue is empty again.

Examples
========

*Bacnet*

.. code-block:: yaml
   :linenos:

   bacnetSubscribe:
     type: Cybus::Endpoint
     properties:
       protocol: Bacnet
       connection: !ref bacnetConnection
       subscribe:
         objectType: analog-input
         objectInstance: 2796204
         interval: 1000

   # This subscribes to a Bacnet analog input of object instance 2796204

*OPC UA*

.. code-block:: yaml
   :linenos:

   opcuaSubscribeToCurrentServerTime:
     type: Cybus::Endpoint
     properties:
       protocol: Opcua
       connection: !ref opcuaConnection
       subscribe:
         nodeId: i=2258

   # This subscribes to the OPC UA server node that publishes the current time

*MQTT with write buffering enabled*

.. code-block:: yaml
   :linenos:

   writeEndpoint:
     type: Cybus::Endpoint
     properties:
       protocol: Mqtt
       connection: !ref mqttConnection
       buffering:
         enabled: true
         keepOrder: true
         burstInterval: 10
         burstSize: 100
         bufferMaxSize: 20000
         bufferMaxAge: 5000
       write:
         topic: test/write

   # This configures a write endpoint which will buffer up to 20000 messages
   # if the connection is lost, and will publish 100 messages every 10 milliseconds
   # once the connection is reestablished. New incoming messages will be published
   # only after all originally buffered items have been published.
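*HTTP read endpoint with request/result topics*

As a sketch of the request/result pattern described in :ref:`user/services/structure/resources/endpoint/results`. The connection name and the ``read`` properties shown here are hypothetical placeholders:

.. code-block:: yaml
   :linenos:

   readEndpoint:
     type: Cybus::Endpoint
     properties:
       protocol: Http
       connection: !ref httpConnection   # hypothetical connection resource
       read:
         path: /status                   # hypothetical protocol-specific address

   # Publishing a request such as {"id": 1} to this endpoint's .../req topic
   # triggers one read operation. The outcome arrives on the .../res topic,
   # either as {"id": 1, "timestamp": ..., "result": ...} on success,
   # or as {"id": 1, "timestamp": ..., "error": ...} on failure.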