Write step
The write step writes data back into datapoints and time series. Use it carefully, as it modifies data within the time-series database; for that reason, write steps are deactivated by default in experiment mode.
The config of a write step is a datapoints mapping dictionary between:
- "df.column": "datapoint.hardwareId"
- "df.column": "template.property"
When using a template, the write step uses the template property and registration to map the datapoint. The write step takes the writing time from the timestamp index of your data frame; add a transformation block in combination with relative date variables to manipulate the writing time (e.g. for forecasting).
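As a sketch of the timestamp behavior, the pandas snippet below shifts a data frame's index forward, which mimics what a transformation block with relative date variables would achieve (the column name and offset are hypothetical; the actual shift is done by a transformation block, not by hand):

```python
import pandas as pd

# Hypothetical input: a data frame whose DatetimeIndex provides the
# writing timestamps used by the write step.
df = pd.DataFrame(
    {"sum_all_columns": [1.2, 1.5, 1.1]},
    index=pd.date_range("2024-01-01", periods=3, freq="h"),
)

# Shifting the index forward sketches what a transformation block with
# relative date variables does, e.g. writing forecast values 24h ahead.
forecast_df = df.set_axis(df.index + pd.Timedelta(hours=24))
```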
Here’s an example of a write step config using a mapping based on template properties:
{
  "type": "write",
  "config": {
    "datapoints": {
      "sum_all_columns": "bearing_output"
    }
  },
  "inputs": [
    { "dataframe": "sum_dataframe" }
  ]
}
Alternatively, the mapping between your data frame columns and datapoint hardwareIds from the platform can be done dynamically based on your logic. For this, the config should contain the key map_property_name, whose value is a property name. That property must contain a dict mapping data frame column names (keys) to hardwareIds (values).
Here’s an example of write step config using a dynamic mapping:
{
  "type": "write",
  "config": {
    "map_property_name": "your_property_name"
  },
  "inputs": [
    { "dataframe": "sum_dataframe" }
  ]
}
Don’t forget to pass a property named your_property_name, either via a script step or directly when calling your pipeline, whose value is a dict of column name/hardwareId key-value pairs.
As a reminder, properties are passed as a dict between all pipeline steps. The dict is initialized at pipeline execution and is accessible with context.properties[key].
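For illustration, a script step could populate the mapping property before the write step runs. This is a minimal sketch assuming the pipeline context exposes a dict-like properties attribute, as described above; the Context class here is a stand-in for testing, and the column/hardwareId names are taken from the earlier example:

```python
class Context:
    """Stand-in for the pipeline context (assumption: the real context
    exposes a dict-like `properties` attribute shared between steps)."""
    def __init__(self):
        self.properties = {}

def script_step(context):
    # Keys are data frame column names, values are datapoint hardwareIds.
    # The property name must match the write step's map_property_name.
    context.properties["your_property_name"] = {
        "sum_all_columns": "bearing_output",
    }

ctx = Context()
script_step(ctx)
```

The write step then resolves its column-to-datapoint mapping by looking up your_property_name in the shared properties dict.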