Models

This article provides a generic overview of how models work within Wizata and how the platform handles their lifecycle.

Wizata AI Lab manages machine-learning models as first-class assets across the entire platform.

Models can be created, trained, uploaded, versioned, and deployed using multiple tools, UI sections, and API endpoints.

What is a Model in Wizata?

A model in Wizata represents a machine-learning artifact designed to process industrial time-series data. The platform supports a wide range of algorithms, from classical statistical models to advanced deep-learning architectures.

Quick facts about models within Wizata:

  • Models are stored and exchanged as Python artifacts.
  • Models are trained either from custom code or using the built-in Wizata library.
  • Models can be trained within a Wizata Pipeline, or trained externally and then uploaded manually.
  • Models can be trained under different conditions, for different assets, or for combinations of both.
  • Models are stored in a generic format based on the MLflow format.
  • Additional information regarding models can be stored as metadata or separate files.
  • Models can be versioned, with a specific version set as the active one.
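To illustrate the first point, any serializable Python model object can serve as an artifact. The sketch below uses only the standard library; the `ThresholdModel` class is purely illustrative and is not part of the Wizata library:

```python
import pickle

class ThresholdModel:
    """Illustrative anomaly model: flags values outside a learned range."""
    def fit(self, values):
        self.low, self.high = min(values), max(values)
        return self

    def predict(self, values):
        return [v < self.low or v > self.high for v in values]

# Train on historical sensor readings, then serialize as a Python artifact.
model = ThresholdModel().fit([20.1, 21.5, 19.8, 22.0])
with open("threshold_model.pkl", "wb") as f:
    pickle.dump(model, f)

# The artifact can later be reloaded, e.g. after being uploaded to a platform.
with open("threshold_model.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored.predict([21.0, 25.0]))  # in-range vs. out-of-range readings
```

In practice the serialization format follows MLflow conventions rather than a raw pickle file, but the principle is the same: the trained Python object is the exchanged artifact.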

Model Lifecycle in Wizata

Wizata’s AI Lab is designed as an MLOps platform for Industrial IoT, managing the full lifecycle of machine-learning models across cloud and edge environments.

The goal is to simplify how data scientists and process engineers build, train, deploy, monitor, and maintain models that operate on industrial time-series data.

Below is a high-level overview of this lifecycle and how the platform orchestrates it. A model typically goes through the following stages.

Training a Model

In Wizata, a model is a Python-based machine-learning object. It can originate in different ways depending on the user’s workflow:

Using the Wizata Library (no custom code required)

Users can rely on predefined model templates with standard inputs/outputs designed for industrial time-series tasks. These models can be used directly inside a Pipeline, without writing custom Python classes.

Using Custom Python Logic (optional)

Users may also create fully custom models using any Python framework (scikit-learn, PyTorch, statsmodels, custom classes…).

This code can be executed:

  • in a notebook or external environment, with the artifact then uploaded manually through the UI or code.
  • inside a pipeline defined from the UI or a notebook, in which case the model is managed automatically.

Both approaches produce a Python model object compatible with Wizata.
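As a sketch of the custom-code path, any Python class exposing training and prediction logic can act as such a model object. The class below is illustrative only; Wizata's exact interface requirements are not shown here:

```python
class MovingAverageForecaster:
    """Illustrative custom model: forecasts the next value as the mean
    of the last `window` observations of a time series."""
    def __init__(self, window=3):
        self.window = window
        self.history = []

    def fit(self, series):
        # "Training" here simply retains the most recent observations.
        self.history = list(series)[-self.window:]
        return self

    def predict(self):
        return sum(self.history) / len(self.history)

# Train on a short time series, then query a one-step-ahead forecast.
model = MovingAverageForecaster(window=3).fit([10.0, 12.0, 14.0, 16.0])
print(model.predict())  # mean of the last 3 observations -> 14.0
```

The same object could equally wrap a scikit-learn estimator, a PyTorch module, or a statsmodels result; what matters is that training produces a Python object that can be serialized and later invoked for inference.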

Inference

Once a model is trained and registered, it can be used to generate predictions through Inference Pipelines. Inference in Wizata is fully integrated into the MLOps flow and can run in two environments:

Cloud Deployment

Cloud inference is executed within the application infrastructure. It supports:

  • Manual executions (user actions, API calls, experiments)
  • Automatic executions (triggers, scheduled pipelines, event-based pipelines)

Edge Deployment

Edge inference is executed on industrial PCs, gateways, on-premise data centers, or field devices. It supports:

  • Automatic execution only, based on deployed logic
  • No manual “run once” execution on the edge

Once deployed, edge models run continuously or according to defined rules, without requiring user actions.

Model Selection

At inference time, the platform automatically selects the right model to execute. The selection can depend on the asset, on custom properties such as material recipes, and on the active version.
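This selection logic can be sketched as a registry lookup keyed by asset and property, with an active version per entry. The data structures below are illustrative, not the platform's actual schema:

```python
# Hypothetical registry: one entry per (asset, recipe) pair,
# holding several versions and a pointer to the active one.
registry = {
    ("furnace-1", "recipe-A"): {"versions": {1: "model_v1", 2: "model_v2"},
                                "active": 2},
    ("furnace-1", "recipe-B"): {"versions": {1: "model_v1b"},
                                "active": 1},
}

def select_model(asset, recipe):
    """Return the active model version for a given asset and recipe."""
    entry = registry[(asset, recipe)]
    return entry["versions"][entry["active"]]

print(select_model("furnace-1", "recipe-A"))  # -> model_v2 (active version 2)
print(select_model("furnace-1", "recipe-B"))  # -> model_v1b (only version)
```

Promoting a new version then amounts to updating the `active` pointer, so inference pipelines pick up the new model without being modified.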

Data-Drift Detection & Automatic Retraining

Industrial processes evolve continuously: sensor ranges shift, equipment ages, raw material quality varies, and operating modes change over time.

These changes can reduce a model’s accuracy if not monitored correctly.

Wizata supports an MLOps workflow where customers can create custom pipelines that monitor data, evaluate model quality, and trigger retraining when required.
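A minimal drift check of the kind such a monitoring pipeline might perform is sketched below, using only the standard library. The threshold rule (recent mean deviating from the training-time mean by more than two standard deviations) is an illustrative choice, not a platform default:

```python
import statistics

def drift_detected(baseline, recent, threshold=2.0):
    """Flag drift when the recent mean deviates from the baseline mean
    by more than `threshold` baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(recent) - mu)
    return shift > threshold * sigma

baseline = [20.0, 21.0, 19.5, 20.5, 20.2]  # training-time sensor data
stable   = [20.3, 20.1, 19.9]              # similar distribution
shifted  = [26.0, 27.5, 26.8]              # e.g. after equipment aging

print(drift_detected(baseline, stable))   # False -> keep the current model
print(drift_detected(baseline, shifted))  # True  -> trigger a retraining pipeline
```

In a customer pipeline, the `True` branch would typically launch the training pipeline that produced the model, register the new artifact, and promote it to the active version.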