Starting from v11.4, pipelines can send WhatsApp messages directly to one or more recipients using the new WHATSAPP alert type — no external integrations required.

import wizata_dsapi

def notify_oven_anomaly(context: wizata_dsapi.Context):
    # Send a WhatsApp alert to one or more recipients (E.164 phone numbers).
    context.api.send_alerts(
        recipients=["+1234567890"],
        alert_type=wizata_dsapi.AlertType.WHATSAPP,
        message="Oven temperature exceeded threshold — immediate attention required."
    )

Messages are delivered from the Wizata business account and can be triggered from any pipeline transformation, making it easy to set up real-time operational alerts based on sensor data, model predictions, or custom logic.

For more information, see our dedicated article on Send Alerts by SMS, Emails, Teams, Slack.

Starting from v11.4, Execution Logs have a brand new dedicated page in AI Lab with a significantly improved experience — richer information per entry, powerful filtering, and better performance.

Each log entry now shows a precise timestamp, a color-coded level (info, warning, error), execution status, associated pipeline, template, twin unit, trigger, edge device, and both wait time and execution time.

  • Filter by time range, status, pipeline, trigger, twin, edge device and more
  • Full-text search across log messages
  • Sortable columns and paginated results
  • Execution logs from Edge devices are now synced to the cloud automatically, with efficient management of internet disruptions

For a full walkthrough of all available interactions, check our dedicated documentation: Execution.

Starting from v11.4, you can now build, browse, and manage Pipeline Images directly from the UI — both at the pipeline level and across all your pipelines from a dedicated global view.

📘

A pipeline image is a downloadable package that bundles everything a specific pipeline needs to run: its scripts, trained models, and the pipeline JSON definition itself. This ensures all necessary components are included for seamless execution.

Pipeline-level Image Panel

From any pipeline in AI Lab, you can now open the Images panel to:

  • Build a new image directly from the current pipeline version
  • Browse previously generated images with their date and version tag
  • Copy, download, or delete any existing image

Global Images View

A new Images page under AI Lab → Pipelines gives you a unified view of all images across every pipeline in your environment. From here you can filter, sort, and manage images at scale — with full visibility into the pipeline key, version, and timestamp for each entry.

Action required after environment update

⚠️

If you are using Pipeline Images on Edge devices, you must generate a new image for each affected pipeline after upgrading to v11.4. Existing images built on previous versions are not compatible with the new format and will not execute correctly.

For more information, check our documentation on Pipeline Images.


Starting from v11.4, Edge trigger payloads now use key-based values instead of internal UUIDs for referencing twins and templates. This makes manual configuration significantly easier and reduces the risk of errors.

{
  "interval": 60000,
  "pipelineImageId": "20260305115419.v1_4_3.inferOvenTemperaturePred_311",
  "twin": "case2_oven2",
  "template": "case2DemoOvenTemperaturePred311"
}

UUID-based configurations are deprecated as of this release. We recommend updating your Edge trigger payloads to use keys at your earliest convenience to avoid any disruption in execution.
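When migrating payloads by hand, it can help to sanity-check them before deployment. Below is a minimal sketch of such a check, assuming only the four fields shown in the example above; the helper name `validate_payload` is illustrative, not part of the Wizata API.

```python
import json

# The four keys shown in the example key-based payload above.
REQUIRED = {"interval", "pipelineImageId", "twin", "template"}

def validate_payload(raw: str) -> dict:
    """Parse a key-based Edge trigger payload and check its required keys."""
    payload = json.loads(raw)
    missing = REQUIRED - payload.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not isinstance(payload["interval"], int):
        raise TypeError("interval must be an integer (milliseconds)")
    return payload

payload = validate_payload(json.dumps({
    "interval": 60000,
    "pipelineImageId": "20260305115419.v1_4_3.inferOvenTemperaturePred_311",
    "twin": "case2_oven2",
    "template": "case2DemoOvenTemperaturePred311",
}))
```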

📦

This feature is part of the Closed Loop Automation add-on, available on Professional and Enterprise plans. Learn more about Licensing.

Minor change logs for release version 11.4

Improvements

  • Improve performance of execution logs search and grouped endpoints.
  • Optimize Docker image build size by refactoring some libraries.

Bugs

  • Support empty requirements file instead of "" on custom requirements endpoint.
  • Fix pipeline image datapoints being mutated in cache across executions.
  • Fix triggers with pipeline images not passing required execution keys.
  • Fix event group query failing when no event IDs are specified.
  • Fix inability to select a twin when running a pipeline experiment.
  • Fix upsert_experiment method failing when passing twin hardware ID on creation.
  • Fix PyTorch model upload/download and tensor regression from v11.3.

Minor change logs for release version 11.3.1

Improvements

  • Add support for multiple Python versions on pipeline images
  • Add support for choosing the right Python version for pipeline edge
  • Add the ability to choose model training features from a JSON file inside a pipeline
  • Improve error messages when a model cannot be trained in an inference pipeline
  • Add checkboxes to choose train/plot/write options on experiments
  • Add display of the default Python version on the Trigger and Experiment execute pages

Bug Fixes

  • Fix pipeline queuing management by trigger services with the new Python multi-version support
  • Properly display the Python version used on execution logs
  • Fix collision of the left-menu collapse arrow with the twin selector
  • Fix the left-menu collapse arrow being unclickable in some cases
  • Fix a sizing issue at the bottom of the models page
  • Fix an issue where datapoints were not filtered properly in the Python toolkit
  • Fix an issue selecting the proper Python version when executing a pipeline directly from the API
  • Fix an issue with datapoints being deselected after being attached to a twin in the UI
  • Fix an issue where some twins were created at the root page when no parent was assigned

With our brand new release 11.3, we have added the capability to run multiple pipeline engines with different Python versions.

Wizata previously supported Python 3.9 by default and is now based on Python 3.12 by default. Additionally, special builds with Python 3.11 are also possible.

Check your versions

To check the versions currently deployed in your environment, navigate to AI Lab and check the runner versions:

If you need other Python versions deployed in your environment, please contact our team.

Choose your version - Triggers & API

When using the API, make sure your local Python version matches the runner you want to use, or set the desired version manually on an execution (e.g. execution.version = "3.12").
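One way to guard against a mismatch is to compare the local interpreter against the runner's version before submitting. The sketch below uses only the standard library; the helper name `matches_runner` is illustrative and not part of the Wizata API.

```python
import sys

def matches_runner(required: str) -> bool:
    """Return True when the local interpreter matches the runner's
    major.minor Python version string (e.g. "3.12")."""
    major, minor = (int(part) for part in required.split("."))
    return (sys.version_info.major, sys.version_info.minor) == (major, minor)

# Example: check the local version against a hypothetical 3.12 runner
# before setting execution.version or submitting a pipeline.
local = f"{sys.version_info.major}.{sys.version_info.minor}"
```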

Within a trigger or an experiment in the UI, you can now select a version. If no version is selected, the system uses the default version (3.9 for environments predating 11.3, 3.12 for new environments).

Update your solutions

If you have multiple versions and would like to update your solution pipelines and scripts, please follow this tutorial: Upgrade your solution Python version.

We have improved machine learning model management within our app with many small improvements and two key new features:

  • Models are now versioned: an alias is generated each time a model is trained or uploaded inside the app. You can set the version you want as active, or the app will always take the most recent one.
  • A brand new UI page allows you to browse all models, manage them and upload new ones manually.

Quick Action Points

The update is seamless, but some minor changes are still necessary: if you already use a training script within a pipeline, you need to adapt context.set_model( ... ) so that the model is the only required argument; any extra parameters passed as named parameters will be stored as extra files.
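The adapted calling convention can be sketched with a stand-in Context class, so the example runs without wizata_dsapi installed; the FakeContext class and the scaler parameter are illustrative assumptions, not the real API.

```python
# Stand-in for wizata_dsapi.Context, used only to illustrate the
# new set_model convention: model is the sole required argument,
# and named parameters are stored alongside it as extra files.
class FakeContext:
    def __init__(self):
        self.model = None
        self.extras = {}

    def set_model(self, model, **extra):
        self.model = model
        self.extras = extra  # each named parameter becomes an extra file

ctx = FakeContext()
trained = {"weights": [0.1, 0.2]}           # placeholder for a trained model
ctx.set_model(trained, scaler={"mean": 0})  # "scaler" stored as an extra file
```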

See Model step for a detailed explanation.

To go further

Learn more about model management and the new principles in our new documentation section about Models.

We now officially support PyTorch in AI Lab pipelines and training scripts when running on Python 3.11 or higher.

New capabilities include:

  • Training and using PyTorch models directly inside Script and Model steps
  • Passing and manipulating torch.Tensor objects between pipeline steps
  • Storing PyTorch models through context.set_model()
  • Saving models as full TorchScript .pt objects (scripted modules), ensuring safe, portable, framework-agnostic deployment across all execution environments
  • Support for tensor-based feature extraction, embeddings, neural networks, etc.
  • GPU acceleration where available

No additional configuration is required — simply import PyTorch in your pipeline scripts and start building.
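As a starting point, the snippet below scripts a small module and round-trips it as a full TorchScript object, the format described above. The TempRegressor module is purely illustrative; in a real pipeline the scripted module would be passed to context.set_model() rather than a byte buffer.

```python
import io
import torch
import torch.nn as nn

class TempRegressor(nn.Module):
    """Toy model mapping three sensor features to one prediction."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(3, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

scripted = torch.jit.script(TempRegressor())  # full scripted module
buffer = io.BytesIO()
torch.jit.save(scripted, buffer)              # serialize as TorchScript (.pt)
buffer.seek(0)
restored = torch.jit.load(buffer)             # reload without the class definition
out = restored(torch.zeros(1, 3))
```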


Minor change logs for release version 11.3

Improvements

  • Add support for pandas.Series as a return type of a model
  • Optimize Docker image build size by refactoring some libraries

Bugs

  • Fix an issue with assignment of default values on properties within an experiment
  • Fix an issue with the update button on Twin types
  • Fix an issue that blocked deletion of a unit attached to a template property
  • Fix an issue on group systems queries using some twin hierarchy
  • Fix an issue on data explorer using formula data source
  • Fix an issue when changing smooth to dense on data explorer
  • Fix an issue on business labels endpoints
  • Fix an error on OPC writer within new edge modules
  • Fix an issue listing twin types on digital twin chart