Using Edge to connect your data

You can use our Edge to connect your network and data streams to Wizata. It contains pre-defined connectors for various protocols.

Prerequisite - Configure and set up a device

To start, you need to install and configure an edge device. It can be deployed on-premises or in the cloud, and you can deploy multiple edge devices to suit your architectural needs.

Once your edge hardware or virtual machine is installed, you need to register and set it up.

Consumers & Writers

The edge is configured by defining where data is read from (consumers) and where it is sent (writers).

By default, the edge is able to:

  • write data to its local time-series database and to the cloud platform.
  • read data produced as results by its internal AI/ML Pipeline Engine.

The configuration of additional consumers and writers, as well as triggers, is defined as a JSON structure and synchronised regularly from the cloud to the edge.

Additional consumers and writers can be configured to add new sources and destinations for the data flow.

OPC-UA

You can configure an OPC-UA consumer to poll data from your OPC-UA server at regular intervals.

Consumer

Here are the key configuration components for a consumer:

  • id can be anything but must be unique within the consumers list
  • type must be opc-ua
  • opc_server is the address of your server (e.g. opc.tcp://x.x.x.x:4840/myopc/...)
  • opc_username to set a user name (optional)
  • opc_password to set a password (optional)
  • opc_application_uri can be used to set an application URI.
  • opc_security_string can be used to set a security string.
  • opc_nodes lists each node you want to listen to:
    • node_ids list of node ids (e.g. "ns=3;i=1")
    • children true if you also want to read all sub-nodes
    • poll_interval polling interval in ms
  • prefix is an optional key to prefix all OPC-UA tags with additional information such as a factory or area name.
  • reconnect_interval interval in ms at which the OPC-UA client attempts to reconnect when a failure is detected; it defaults to 5000.
  • mapping can be used to set a dictionary where the key is the OPC tag name and the value is the desired tag name in Wizata (see the additional example after the configuration below).

Here is an example of a complete configuration:

{
    "id": "opc-server-01",
    "type": "opc-ua",
    "opc_server": "opc.tcp://0.0.0.0:4840/freeopcua/server/",
    "opc_nodes": [
      {
        "node_ids": ["ns=3;i=1"],
        "children": true,
        "poll_interval": 1000
      },
      {
        "node_ids": ["ns=4;i=1"],
        "children": true,
        "poll_interval": 1000
      }
    ],
    "prefix": "myprefix_",
    "reconnect_interval": 5000
  }
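
The optional keys described above can be combined in the same consumer object. As an illustration, here is a sketch of a consumer using credentials and a mapping to rename tags; the server address, credentials and tag names are placeholders, not real values:

{
    "id": "opc-server-02",
    "type": "opc-ua",
    "opc_server": "opc.tcp://0.0.0.0:4840/freeopcua/server/",
    "opc_username": "my-user",
    "opc_password": "xxxxxxxxx",
    "opc_nodes": [
      {
        "node_ids": ["ns=3;i=1"],
        "children": false,
        "poll_interval": 1000
      }
    ],
    "mapping": {
      "opc_tag_name_01": "wizata_tag_name_01",
      "opc_tag_name_02": "wizata_tag_name_02"
    }
}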

Each opc_nodes entry will create a dedicated routine that polls data independently.

Writers

To write to OPC-UA, you can use a similar configuration as a writer:

  • id can be anything but must be unique within the writers list
  • type must be opc-ua
  • opc_server is the address of your server (e.g. opc.tcp://x.x.x.x:4840/myopc/...)
  • opc_username to set a user name (optional)
  • opc_password to set a password (optional)
  • opc_application_uri can be used to set an application URI.
  • opc_security_string can be used to set a security string.
  • datapoints is a dictionary where the key is the tag name as emitted by a consumer or pipeline and the value is the OPC node, identified by a node id or tag name.
{
    "id": "opc-server-01",
    "type": "opc-ua",
    "opc_server": "opc.tcp://0.0.0.0:4840/freeopcua/server/",
    "datapoints": {
      "my_tag_01" : "ns=1;s=t|opc_tag_name",
      "my_tag_02" : "ns=1;i=4",
    }
  }

Modbus

You can configure the message client to pull data from a Modbus server accessible to the edge.

  • id can be anything but must be unique within the consumers list
  • type must be modbus
  • ip_address IPv4 address of the Modbus server
  • port TCP port of the Modbus server
  • poll_rate mandatory, number of seconds between two polls
  • unit_id unit identifier (slave id) of your Modbus device
  • datapoints dictionary where the key is the desired datapoint name and the value is an object with the following format:
    • address register address as an integer (converted from the hexadecimal value) in the Modbus record (starts at 0)
    • quantity number of 16-bit registers to fetch (e.g. 2 for a 32-bit value)
    • pack format used to pack the raw registers (e.g. "<HH" for little-endian 32-bit data)
    • unpack format used to unpack the packed bytes (e.g. "<f" for a float)

The Modbus client uses the struct.pack and struct.unpack functions of the Python struct library; please refer to its documentation to find the proper type and format: https://docs.python.org/3/library/struct.html
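
For illustration, here is a minimal Python sketch, independent of the edge, showing how the pack and unpack formats from the example below work together: two 16-bit registers packed with "<HH" are reinterpreted as a little-endian 32-bit float with "<f", and a single register packed with "<H" is read back as a signed 16-bit integer with "<h". The register values are made up, and the word order of 32-bit values may differ depending on your equipment.

import struct

# Two 16-bit registers as returned by a Modbus read (example values).
registers = [0x0000, 0x3F80]
raw = struct.pack("<HH", *registers)      # pack the two words into 4 bytes, little-endian
print(struct.unpack("<f", raw)[0])        # reinterpret the 4 bytes as a float -> 1.0

# A single register holding a signed 16-bit value (example value).
raw = struct.pack("<H", 0xFFFE)           # pack one word into 2 bytes
print(struct.unpack("<h", raw)[0])        # reinterpret as a signed integer -> -2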

Here is an example of what a Modbus configuration could look like (reading a 32-bit float and a signed 16-bit integer from addresses 0 and 2 respectively, in little-endian format):

{
  "id": "modbus-01",
  "type": "modbus",
  "ip_address": "10.0.0.1",
  "port": 502,
  "poll_rate": 60,
  "unit_id": 1,
  "datapoints": {
    "dp_modbus_01": {
      "address": 0,
      "quantity": 2,
      "pack": "<HH",
      "unpack": "<f"
    },
    "mdb_test_test2": {
      "address": 2,
      "quantity": 1,
      "pack": "<H",
      "unpack": "<h"
    }
  }
}

Rabbit MQ

You can configure the message client to pull data from an external Rabbit MQ queue.

  • id can be anything but must be unique within the consumers list, and type must be rabbitmq
  • Configure your connection to Rabbit MQ:
    • RQ_HOST - defines your server domain name
    • RQ_PORT - defines the port to use
    • RQ_USER / RQ_PASS - defines the user name and password
    • RQ_TLS - optional, set it if you don't want to use SSL or want to use a TLS version other than 1.2
    • RQ_VHOST - virtual host name
  • queue - defines the queue to use; if you have multiple queues, you need to create different consumers.
  • function - if necessary, use a custom Python function to convert your data to the Wizata message format (see Connect your data to Wizata to understand the supported formats).

Here's a sample consumer configuration for Rabbit MQ:

{
      "id": "a-unique-id",
      "type": "rabbitmq",
      "RQ_HOST": "server.domain.name",
      "RQ_PORT": 5672,
      "RQ_USER": "your-user",
      "RQ_PASS": "xxxxxxxxx",
      "RQ_TLS": "v1_1",
      "RQ_VHOST": "vhost",
      "queue": "your-queue",
      "function": {
          "filepath": "functions/sample_t.py",
          "function_name": "sample_t"
      }
  }

📘

Troubleshooting note

RQ_PORT number may vary depending on your installation process.

Functions

A custom function can be used to transform your data into the Wizata message format. Some connectors, such as OPC-UA and Modbus, do not require a function as the protocol itself embeds the data format. Please read Time-series data - format & types to learn more about Wizata formats.

Custom functions can be used with the following connectors:

  • Rabbit MQ

Let's take an example:

  • if your data looks like this:
{'t': 1728920878.798966, 'f': 0.6591128532976067, 'h': 'rq-custom-tag-01'}
{'t': 1728920883.817512, 'f': 0.6972545363650098, 'h': 'rq-custom-tag-01'}
{'t': 1728920888.83555, 'f': 0.6168179285479277, 'h': 'rq-custom-tag-01'}
{'t': 1728920893.85878, 'f': 0.5139981833045464, 'h': 'rq-custom-tag-01'}
  • you can use the following code:
def sample_t(payload: dict) -> list:
    from datetime import datetime, timezone

    # Build a Wizata message from the incoming payload:
    # 't' is an epoch timestamp, 'h' the hardware id and 'f' the sensor value.
    message = {
        "Timestamp": datetime.fromtimestamp(payload["t"], tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f") + "+00:00",
        "HardwareId": payload["h"],
        "SensorValue": float(payload["f"])
    }
    return [message]
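
As a quick sanity check, you can run the function on one of the sample messages above in a plain Python session (assuming sample_t is defined or imported); it returns a list containing a single message dict:

payload = {'t': 1728920878.798966, 'f': 0.6591128532976067, 'h': 'rq-custom-tag-01'}

messages = sample_t(payload)
print(messages)
# e.g. [{'Timestamp': '2024-10-14T15:47:58.798966+00:00',
#        'HardwareId': 'rq-custom-tag-01',
#        'SensorValue': 0.6591128532976067}]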

❗️

Important

The function must have one and only one parameter, a dict corresponding to the message received, and must return a list of messages, each message being a dict.
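
As a reference, a custom function therefore follows this minimal skeleton (the function name and the body are illustrative placeholders):

def my_transform(payload: dict) -> list:
    # 'payload' is the message received, as a dict.
    messages = []
    # ... build one or more Wizata message dicts from 'payload' here ...
    return messages  # always return a list of message dicts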