The new Dynamic Selector allows you to query data based on the digital twin structure and datapoint properties, without the need to manually name or list datapoints.

Starting from v11.2, we have also refactored the Twin entity by introducing a new entity: TwinType. With TwinTypes, you can now define your own types of digital twins, enabling greater customization and better alignment with real-world twin configurations.

Additionally, we’ve introduced Extra Properties on Twin entities, providing more flexibility for storing and using custom metadata in your pipeline logic.
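As a rough illustration (plain Python; the dict shape and property names below are hypothetical, not the actual Twin model), Extra Properties behave like a key/value bag of metadata that pipeline logic can read and branch on:

```python
# Hypothetical sketch: twin Extra Properties as a key/value bag that
# pipeline logic can consult (illustrative, not the Wizata data model).
twin = {
    "name": "Kiln 3",
    "twin_type": "kiln",  # a user-defined TwinType (new in v11.2)
    "extra_properties": {"max_temperature": 1450, "line": "A"},
}

def temperature_limit(twin: dict, default: float = 1000.0) -> float:
    """Read a custom metadata value from the twin's extra properties."""
    return twin["extra_properties"].get("max_temperature", default)

print(temperature_limit(twin))  # 1450
```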

For more information, check our dedicated article: Dynamic Selector.

Starting from v11.2, we have made significant improvements to how categories and units are managed across the platform. In addition to serving as metadata that provide context for your datapoints, you can now assign them directly to template properties and even apply unit conversions within your queries and pipeline solutions.

We have added a wide range of new default categories and units ready to be used in your solutions, while still allowing you to create and customize your own, including defining their respective conversion formulas.
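To make the idea concrete, a conversion formula is simply a mapping from a source unit to a target unit. A minimal sketch in plain Python (function names are illustrative; the platform has its own way of declaring conversion formulas):

```python
# Hypothetical sketches of unit-conversion formulas (not the Wizata API):
# each formula maps a value from a source unit to a target unit.
def celsius_to_fahrenheit(value: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return value * 9 / 5 + 32

def bar_to_pascal(value: float) -> float:
    """Convert a pressure from bar to pascal."""
    return value * 100_000

# Example: convert a 25 °C reading before displaying it in °F.
print(celsius_to_fahrenheit(25.0))  # 77.0
```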

For more information, check our dedicated articles on Categories, units and labels and Dynamic Selector.

We have reworked the time interval selector in the Control Panel for both Grafana and Streamlit components, introducing time selection persistence and override capabilities.

You can now override the default relative time, and the override will remain active within the tab context until manually refreshed or cleared using the X button. A subtle yellow highlight indicates when an override is active, improving visibility and control over the selected time range.

Additionally, we have added a new auto-refresh capability for components. The time selector displays a green dot to indicate that live mode is active.

For more information, you can check our detailed article on Interacting with Time Interval selector.

Changelog for release version 11.2

Improvements

  • Add Siemens S7 consumer for Edge data connectivity.
  • Clear search box after selecting a tile.
  • Remove auto-redirect on twin child panel.
  • Add NumPy as default library on script execution.
  • Add script name display on execution failure.
  • Add eventType tag to group system.
  • Support eventType in queries and refactor them.
  • Add button to hide alerting conditional formatting on control panel.

Bug Fixes

  • Fix an issue where navigating on mobile didn't keep the right tab when switching between kilns.
  • Fix a bug where filters closed automatically on mobile after selecting one option.
  • Fix total filter not applying correctly and always showing first 20 results per page.
  • Fix refresh button on Edge Logs not updating timestamps and calling endpoint with same time.
  • Fix missing default tiles on control panel without filters active.
  • Prevent creation of Edge entities using uppercase IDs.
  • Fix a bug when expanding attached datapoints panel on twin view.
  • Fix an issue where experiment couldn't be executed on first run due to disabled button.
  • Fix yellow error popup appearing incorrectly on twin panel options.
  • Fix "Active filters" orange icon showing only for Assets filter and clear button not working.
  • Fix orange icon not displaying for other active filters (favorites, twin type, template, etc.).
  • Fix pipeline image generation error for Wizata library steps.

We have reworked the mobile web display of the UI to properly render the control panel on mobile phones. Navigate to your app with your mobile phone browser and start using it directly from your mobile device.

With release 11.1, we introduce full control over your pipeline runners directly from the AI Lab. You can now see how many runners are linked to your environment, and pause, restart, or force-restart them.

You can consult which Python packages are currently installed on the runners.

You can also install additional Python libraries not included in Wizata runners, including libraries you have developed yourself.

Simply go to AI Lab > Overview, select Custom Requirements, modify the requirements, and then restart the runners.

It is also now possible to skip upserting your function into Wizata and instead reference it directly within your pipeline by adding the library tag. Don't forget to export your function in the __init__.py file of your package.

    {
      "type": "script",
      "config": {
        "library": "my_package",
        "function": "my_function"
      },
      "inputs": [
        { "dataframe": "input_df" }
      ],
      "outputs": [
        { "dataframe": "output_df" }
      ]
    }
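For context, the `library`/`function` pair works only if the function is importable from the package, which is why it must be referenced in the package's `__init__.py` (e.g. `from .my_module import my_function`, where `my_module` is a hypothetical module name). The resolution is conceptually similar to this plain-Python sketch (illustrative, not the actual runner code):

```python
import importlib

def resolve(library: str, function: str):
    """Resolve a function by name from an importable package (illustrative)."""
    module = importlib.import_module(library)
    return getattr(module, function)

# Conceptually, {"library": "my_package", "function": "my_function"}
# resolves like resolve("my_package", "my_function"); here we use the
# standard library as a stand-in example.
print(resolve("math", "sqrt")(9.0))  # 3.0
```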

With release 11.1, we have added dynamic tiles on the control panel, delivering real-time Insights directly to you.

Insights check real-time values against user-defined conditions to show different statuses and alerts. Learn how to configure them and start creating your first Insights.
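Conceptually, an Insight compares a live value against user-defined conditions and yields a status. A plain-Python sketch of that evaluation (the condition shape and field names below are hypothetical, not the platform's schema):

```python
# Hypothetical sketch of evaluating an Insight condition against a
# real-time value (illustrative, not the actual Wizata schema).
def evaluate_insight(value: float, conditions: list[dict]) -> str:
    """Return the status of the first matching condition, else 'ok'."""
    ops = {
        ">": lambda v, t: v > t,
        "<": lambda v, t: v < t,
        ">=": lambda v, t: v >= t,
        "<=": lambda v, t: v <= t,
    }
    for cond in conditions:
        if ops[cond["operator"]](value, cond["threshold"]):
            return cond["status"]
    return "ok"

# Most severe condition first, so it takes precedence.
conditions = [
    {"operator": ">", "threshold": 90.0, "status": "alert"},
    {"operator": ">", "threshold": 75.0, "status": "warning"},
]
print(evaluate_insight(96.0, conditions))  # alert
print(evaluate_insight(80.0, conditions))  # warning
```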

Alongside Insights, we have added a favorites option and different filters to let you organise the tiles on your control panel.

Changelog for release version 11.1

Improvements

  • Add a batch_query option in wizata_dsapi to query large dataset
  • Optimize data storage for Execution logs page
  • Add a native Modbus client for Edge
  • Apply time filters on variables used for Grafana dashboard
  • Add "login assistance" option on user profile for connectivity to Grafana or other iframes within the Control Panel
  • Add a navigate back button on main top bar
  • Add access to data explorer for "operate user" role
  • Improve design of multi-value selector
  • Add support for email alerts on pipeline
  • Add support for Slack and Teams alerts on pipeline using webhooks
  • Add support for auto-refresh interval on components for Grafana dashboards
  • Improve unauthorized error page
  • Add custom permissions for DS API
  • Add rounded intervals and end timestamp for Plotly widget time selection
  • Add license management directly on UI
  • Add a failover queue option for RabbitMQ data connection.
  • Add Wizata native functions to libraries and make them available directly in the Python toolkit
  • Add new fields support on time-series database write endpoint
  • Change default values for experiment execution properties
  • Add "reliability" as field on time-series data structure
  • Add light view mode support for Grafana
  • Add proper management of relative time on time selector of control panel for Grafana support
  • Sort assets properly on control panel based on twin structure
  • Allow pipeline to write data on another influx bucket
  • Allow query to use different bucket through declaring "Data Store"
  • Allow multi-group query from event data
  • Add support for multi-buckets and extra tags querying on DS API queries
  • Improve error handling on triggers

Bug Fixes

  • Fix an issue using wrong behaviour on edit mode of a dashboard in control panel
  • Fix a bug where the API returns an empty list for non-existent datapoints
  • Fix quadratic regression type on data explorer
  • Fix a crash when navigating to control panel with opened properties
  • Fix filters on related page using pagination
  • Fix an issue parsing some pipeline JSON on UI
  • Improve error message for failed import on pipeline
  • Fix get all datapoints search methods on python toolkit
  • Fix an issue with WindRose widget for Influx 2.x
  • Fix an error using the send_alerts method on pipeline
  • Improve stability of triggers over long periods
  • Fix an issue with twin structure parsing on UI selector
  • Fix an issue with redirection on experiment page
  • Hide random appearing checkbox on user edit panel
  • Fix auto-filtering on Plotly widget for Influx 2.x
  • Fix a bug to allow null extra properties on datapoint
  • Fix a bug on .plots() method using new execution id
  • Allow emptying default from/to value on grafana dashboard
  • Fix an issue with access_check function for streamlit
  • Remove 8,000-character restriction on pipeline JSON

We have reimagined and reinvented how to deal with event and batch data within Wizata. This new solution creates the foundation to store event/batch data within Wizata and analyse it. It can be useful to track batch numbers, cluster anomalies and alerts, or simply track working shifts.

The solution has been developed with native support for the time-series database and queries within the data science API. You can query data using the event ID stored within the platform, even when the event happens at different times on different machines.

E.g., query vibration of the same batch across multiple twin motors, using datapoint events to link them.
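The example above can be sketched in plain Python (illustrative data; the actual query goes through the data science API): readings are paired by their shared event/batch ID rather than by timestamp, which differs per machine.

```python
# Vibration readings from two twin motors; the same batch runs at
# different times on each machine, linked by a shared event/batch id.
motor_a = [
    {"time": "2024-01-01T08:00", "batch_id": "B-42", "vibration": 0.31},
    {"time": "2024-01-01T09:00", "batch_id": "B-43", "vibration": 0.28},
]
motor_b = [
    {"time": "2024-01-01T10:30", "batch_id": "B-42", "vibration": 0.35},
    {"time": "2024-01-01T11:30", "batch_id": "B-43", "vibration": 0.30},
]

def join_by_event(left, right, key="batch_id"):
    """Pair readings by event id instead of by timestamp."""
    index = {row[key]: row for row in right}
    return [
        {key: row[key],
         "vibration_a": row["vibration"],
         "vibration_b": index[row[key]]["vibration"]}
        for row in left if row[key] in index
    ]

print(join_by_event(motor_a, motor_b))
```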

Learn more about Events & Batches, added in version 11.0.2.