Setting Datapoint Properties
The Datapoints widget is available with SmartServer 2.5 and higher. For SmartServer 3.3 and prior, this widget is called the Datapoint Browser widget.
For SmartServer 3.1 or prior, see Defining Datapoint Properties (Release 3.1).
You can use the Datapoint Properties widget to set datapoint monitoring, logging, and alarming, as well as to copy and clear datapoint definitions. You can also use this widget to define presets and localization settings.
Editing Datapoint Properties
To edit datapoint properties, perform the following steps:
- Open the SmartServer CMS.
- Open the Datapoint Properties widget. Click the Expand button ( ).
- Click the Action button () and then select the Edit action. The Edit Datapoint Properties view appears.
Go to the following sections to edit datapoint properties: Editing Datapoint Properties Information, Editing Monitoring and Logging Configuration, Editing Datapoint Value Alarm Conditions, Using Preset Definitions, or Using Localization Settings.
If you are using Presets and Localization
Using presets and localization in datapoints is optional. A datapoint can have presets only, localization only, both presets and localization, or neither. When both presets and localization are used, the preset map is typically configured based on the localized value. Therefore, when an update is received, the value is localized first and then mapped to a preset string. When a preset string is written, the mapped value is transformed using the revert transformation rule to get the native value.
Presets and localization settings should be defined in the Datapoint Properties widget or the DLA file prior to the deployment of connections. If you configure presets and localization settings in the Datapoint Properties widget or the DLA file after deploying connections, then different connection results will be produced.
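The ordering described above can be illustrated with a short sketch. The Python below is not SmartServer code; the localization rule (multiplier, offset) and the preset map are invented examples that only demonstrate that received updates are localized before preset mapping, and that preset writes are reverted back to native values.

```python
# Illustrative sketch (not SmartServer code) of the ordering described above,
# assuming a simple scale/offset localization and a preset map keyed by local values.

LOCALIZATION = {"multiplier": 1.8, "offset": 32, "precision": 1}   # example rule only
PRESETS = {"OFF": 32.0, "MID": 122.0, "ON": 212.0}                  # preset -> local value

def localize(native):
    """Native -> local value (applied first when an update is received)."""
    local = native * LOCALIZATION["multiplier"] + LOCALIZATION["offset"]
    return round(local, LOCALIZATION["precision"])

def to_preset(native):
    """Localize first, then map the local value to a preset string."""
    local = localize(native)
    for name, value in PRESETS.items():
        if value == local:
            return name
    return None  # no matching preset

def from_preset(preset):
    """Preset string -> local value -> native value (revert transformation)."""
    local = PRESETS[preset]
    return (local - LOCALIZATION["offset"]) / LOCALIZATION["multiplier"]

print(to_preset(100))      # 'ON'  (native 100 -> local 212.0 -> preset 'ON')
print(from_preset("MID"))  # 50.0  (local 122.0 reverted to the native value)
```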
Editing Datapoint Properties Information
To edit datapoint properties information, perform the following steps:
- Go to the Edit Datapoint Properties → Info tab (default) and edit datapoint properties as described below.
- Protocol – read-only field showing the protocol (LON, Modbus, BACnet, LoRaWAN, EnOcean, or IAP)
- Device Type – read-only field with the device type name
- Datapoint XIF Name – read-only field with fully qualified name with block name, block index, and datapoint name
- Initial Value – initial value for inputs only. If localization is defined, the value should be a local value. If this field is left blank, you will see yellow messages at the top of your dashboard. Click the Show button () to view the value for the selected datapoint as shown in the example below.
- Tags – a list of tags as defined in Datapoint Tags, with each tag specified as <tag name>:<tag value>, allowing you to add tag datapoint values that are forwarded to a data analytics application. Click the Add Tag () button to add tags to datapoints. Key and Value fields appear allowing you to add tags.
Tags are available with SmartServer 3.6 and higher.
With SmartServer 4.2 and higher, and for IDL-based drivers only (not LON), you can optionally add DBO device and datapoint tags to ev/data using the tag key prefix eit: , which indicates an event identification tag. This tag feature supports Haystack and Google DBO tags and should not be used with equal signs ( = ) or semicolons ( ; ).
Example:
Datapoint Properties widget example
Key: eit:DBO New Tag, Value: some value
Key: eit:DBO DP Name, Value: Power B
IAP/MQ example of ev/data message, which also shows a device eit: tag defined as:
Key: eit:Device Name, Value: some device name
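As a reading aid, the following sketch shows how such eit: tags might be carried on an ev/data event and picked out by a consumer. The payload structure and field names here are assumptions for illustration only; refer to the IAP/MQ documentation for the actual ev/data message format.

```python
# Hypothetical sketch of an ev/data payload carrying "eit:" tags.
# The payload structure and field names are assumptions for illustration only.
import json

ev_data_message = {
    "deviceName": "some device name",
    "datapoint": "Power B",
    "value": 42.0,
    "tags": {
        "eit:Device Name": "some device name",   # device-level eit: tag
        "eit:DBO New Tag": "some value",          # datapoint-level eit: tags
        "eit:DBO DP Name": "Power B",
    },
}

# A data analytics client could pick out the event identification tags like this:
eit_tags = {k: v for k, v in ev_data_message["tags"].items() if k.startswith("eit:")}
print(json.dumps(eit_tags, indent=2))
```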
- Set the Visible option. The Visible option is related to the setting described in the Viewing / Hiding Datapoints in the Datapoints Widget section in Displaying Datapoint Properties.
- With this option enabled, the datapoint is visible on the Datapoints widget.
- With this option disabled, the datapoint is hidden on the Datapoints widget.
- Set the Provision Initial Value option.
- With this option enabled, the initial value is written to the device when the device is provisioned.
- With this option disabled, writing the initial value to the device is suppressed when the device is provisioned.
- Click Update to save the edits.
Editing Monitoring and Logging Configuration
The Total Monitoring Traffic Indicator on the Datapoint Properties widget displays an estimate of the expected monitoring events per second (EPS) based on the current monitoring configuration. For optimal system operations, keep this number below 40 EPS on a quad core SmartServer IoT (Revision F or later), or 20 EPS on a dual core SmartServer IoT (Revisions A through E). Use the Total Datapoint Properties Parameters button () to display/hide this information, as well as the Total Logged Bytes and Data Annual Log Size, as shown below.
Total Datapoint Properties Parameters button
Total Datapoint Properties Parameters display
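The widget calculates these figures for you; the sketch below only illustrates the underlying arithmetic. The 64-bytes-per-log-entry figure is an assumption, and the CMS's own estimate may be computed differently.

```python
# Rough, illustrative estimate of monitoring traffic and annual log size.
# The 64-bytes-per-log-entry figure is an assumption; the CMS calculation may differ.

def monitoring_eps(poll_intervals):
    """Expected monitoring events per second from a list of poll/update intervals (seconds)."""
    return sum(1.0 / interval for interval in poll_intervals if interval > 0)

def annual_log_bytes(log_interval_s, bytes_per_entry=64):
    """Approximate bytes logged per year for one datapoint log."""
    entries_per_year = (365 * 24 * 3600) / log_interval_s
    return entries_per_year * bytes_per_entry

# Example: 200 datapoints polled every 10 seconds.
eps = monitoring_eps([10] * 200)
print(f"Estimated monitoring traffic: {eps:.1f} EPS")   # 20.0 EPS, the dual core limit

# Example: one datapoint logged at most every 300 seconds (the log 1 default).
print(f"Annual log size per datapoint: {annual_log_bytes(300) / 1e6:.2f} MB")
```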
To edit datapoint properties monitoring and logging configuration, perform the following steps:
Go to the Edit Datapoint Properties → Monitoring and Logging Configuration tab and set the parameters as needed including the following:
- Monitored Yes/No – enables (Yes) or disables (No) monitoring. The default is disabled. The SmartServer monitors updates for a datapoint when monitoring is enabled for the datapoint. For datapoints on BACnet, LON (using IMM), and Modbus devices, polling must also be enabled to periodically read the datapoint value.
- Monitoring Method – provides the ability to set the monitoring method as Event-Driven or Polled. These options are enabled when Monitored is set to Yes and disabled when Monitored is set to No. A Poll Interval should be specified if the Polled option is selected.
If Monitored is enabled and the Event-Driven option is selected, then LON (using DMM) and BACnet datapoints will use the Event-Driven monitoring method. For Modbus and IAP datapoints, the Polled option should always be selected for the monitoring method (Event-Driven does not apply).
Event-Driven – enables LON driver support for a maximum receive time property, which specifies a maximum time period between received updates from a datapoint. The configuration of a receive timeout appears the same as the configuration of a poll interval for polled monitoring, which occurs on a periodic fixed interval. However, for a receive timeout, the SmartServer will only send a poll request for an event-driven datapoint if the receive timeout period expires without a received event-driven update or poll response from a previous poll request.
The use-cases are as follows:
- For a datapoint configured for event-driven monitoring with a configured poll interval where the datapoint never sends an update event, this case will result in periodic polling.
- For a datapoint configured for event-driven monitoring with a configured poll interval where the datapoint sends a heartbeat at an interval less than the receive timeout, a poll will not occur until a heartbeat update is missed.
- For a datapoint configured for event-driven monitoring with a configured poll interval where the datapoint sends occasional update events with no heartbeat, this case will result in periodic polling except immediately after the datapoint sends an event-driven update. An occasional update event is an update event that occurs at intervals that are typically much longer than the heartbeat. For example, this could be the case for a motion detector with no heartbeat. The motion detector may be installed in an area with little activity and as a result the motion detector may only change state once every few hours. The motion detector will mostly be polled, but will provide an immediate update when motion is detected without any unnecessary poll requests.
Polled – enables or disables periodic background polling where the driver periodically polls the datapoint to retrieve the current value of the datapoint.
- Logged Yes/No – enables (Yes) or disables (No) logging. This option is disabled when Monitored is disabled. The default is disabled.
- If Monitored is enabled, then the following fields are available:
Poll Interval (Seconds) – the interval between periodic polls from the IAP server to the endpoint in fractional seconds. The Poll Interval (Seconds) field is disabled when Monitored is disabled, and it is enabled when Monitored is enabled. The default value is 150 seconds for a polled datapoint and null for an event-driven datapoint. When an event-driven option is selected, the Poll Interval acts as a receive timeout and is restarted by any event-driven update or poll response.
Publish Interval (Heartbeat) – the maximum interval between updates from the IAP server to any IAP clients, similar to a heartbeat. You can specify a fractional value such as 0.2 seconds. For a datapoint configured for periodic polling, this is typically a multiple of the polling interval. For a datapoint not configured for periodic polling, the publish interval specifies the maximum interval between updates from the cached datapoint value.
A datapoint update may be published more frequently than specified by the publish interval if either of the following conditions is met:
- The datapoint is polled at a faster rate than the publish interval, and publishing is appropriate based on the minimum publish interval and minimum publish delta value.
- The datapoint is updated by an event-driven update or an on-demand poll, and publishing is appropriate based on the minimum publish interval and minimum publish delta value.
Minimum Publish Interval (Seconds) – minimum interval in seconds between updates from the IAP server to IAP clients. This is a time-based throttle that does not throttle or otherwise limit periodic updates based on a configured publish interval.
Expected Update Interval (Seconds) – expected average number of seconds between updates. If 0, updates are not expected and are not included in traffic estimation. You can specify a fractional value such as 0.5 seconds. Set this parameter to the Min Publish Interval value if the Expected Update Interval is blank and the Min Publish Interval is changed. This value is used for traffic estimation only and does not otherwise impact monitoring.
- Publish Minimum Delta Value – minimum change from the last published value required to publish an update from the IAP server to IAP clients. You can specify a scalar Value, or select Any Change (for any non-zero change) or Always (for all updates).
- Any Change publishes data on any non-zero change subject to the publish timing requirements.
- Always disables the delta value throttle, but does not override the delta time throttle if one is specified.
- Value specifies a scalar value. You must enter a value in the Value field.
If Monitored is enabled and Logged is enabled, then the following fields are available for Log 1, Log 2, and Log 3:
Operational Considerations
If at any time your log size exceeds 8 GB, system performance will be degraded and operational failures may occur. With SmartServer release 3.2 and higher, configurable parameters for annual log size warnings and errors are defined in the com.echelon.cms.global.cfg file. The warning default is 7 GB and the error default is 8 GB. A warning message will appear if you specify logging parameters that will cause the calculated data log size to exceed these parameters.
- Minimum Interval (Seconds) – the minimum interval in seconds between log entries. The default is 300 seconds for log 1, 3600 seconds for log 2, and 43200 seconds for log 3. You can specify a fractional value such as 0.2 seconds, and also as a multiple of the Publish Interval if a Publish Interval greater than 0 is defined. The log interval specifies the minimum interval between log entries, where the first datapoint update after the log interval expires is logged with a timestamp. A datapoint update may be received as a response to a poll, an event-driven update, or a response to an on-demand read request.
- Expected Update Interval (Seconds) – the expected interval between logged values for logs. The default is 300 seconds for log 1, 3600 seconds for log 2, and 43200 seconds for log 3. The expected interval is only used for calculating estimates of log size growth per day or per year with no effect on whether or not a value is logged. For a datapoint configured for periodic polling, the default value for the log expected interval is the log interval.
- Retention Period (Days) – length of time (in days) that logs are stored. Logs are removed after the specified number of days. The default value is 14 days for log 1, 60 days for log 2, and 730 days for log 3.
- Delta Value (Log Minimum Delta Value) – minimum change from the last logged value (not the last received value, unless the last received value is also the last logged value) required to log a datapoint update. The last logged value for each log is the last datapoint value logged for the specific log (1, 2, or 3). For scalar datapoints (e.g., temperature), you can specify a scalar value by selecting Value (the default), or you can select Any Change (for any non-zero change) or Always (for all updates). For structured datapoints or enumerated datapoints, you can only use Any Change or Always; do not use the default Value.
- Any Change logs data on any non-zero change subject to the log timing requirements.
- The Always setting disables the delta value throttle.
- The Value setting (the default) logs data if the new value differs from the previously logged value by at least the value you specify (the delta value). You must always set a value. Do not use the Value setting for structured or enumerated (string) datapoints. For example, if you want to log temperature when the value changes by 1 degree, set Log Minimum Delta Value to Value and set the value to 1, as shown below.
If Log Minimum Delta Value is set to Value (the default), then you must specify a value (the delta value). If you do not set the delta value, then you may see gaps in log data.
A scalar value specifies that a datapoint update will not be logged if the difference between the new value and the last logged value is less than the Log Minimum Delta Value. When enabled, this is a datapoint delta, value-based throttle. Log entries are not throttled based on value if the Log Minimum Delta Value is set to a scalar value (rather than Any Change or Always) that is unspecified or equal to 0. A pseudocode sketch of these interval and delta throttling rules follows this procedure.
- If Monitored is enabled, Polled is enabled, Logged is enabled, the Poll Interval is greater than 0, and the Publish Interval is greater than 0, then the following additional field is available for logs 1, 2, and 3 (otherwise it is greyed out):
- Multiple – logging multiples for logs 1, 2, and 3. Each specifies a multiple of the Publish Interval for the minimum time between logged values. When this value is set, the CMS updates the Minimum Interval to equal the Multiple times the monitoring interval. A Multiple value of 0 indicates always. Set all three logs to the same Multiple value to achieve one logging behavior. Or, set log 2 and log 3 to have a higher Multiple value and higher Retention Period value than log 1.
- If the Poll Interval is less than the Publish Interval, then the default value for the corresponding Log Interval is the Publish Interval times the Multiple minus 50% of the Poll Interval.
- If the Poll Interval is equal to or greater than the Publish Interval, then the default value for the corresponding Log Interval is the Publish Interval times the Multiple, times 80%.
Example: If a datapoint is polled every 10 seconds and the multiple is 6, then the datapoint value is logged at least every 60 seconds. Any positive value specifies that the datapoint is logged for this log level.
- If Logged is enabled, and either Event-Driven monitoring is enabled with a defined Poll Interval, or Polled monitoring is enabled, then three Multiple values are displayed and enabled.
- If the Multiple value is changed, then the Minimum Interval value is updated to Multiple x Poll Interval.
- If Logged is enabled, and either Event-Driven monitoring or Polled monitoring is enabled, then three Minimum Interval values are enabled.
- If the Minimum Interval value is changed and a Multiple value is displayed, and the value is changed to a multiple of the Poll Interval, then the corresponding Multiple value will be updated with the new multiple.
- If the value is changed to a value that is not an integer multiple of the Poll Interval, then the Multiple value will be cleared.
- Click Update to save the settings and return to the Datapoint Properties widget.
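The following sketch summarizes the interval and delta throttling rules described in this procedure. It is an illustrative interpretation, not SmartServer code; the function and parameter names are invented.

```python
# Illustrative sketch of the publish/log throttling rules described above.
# Not SmartServer code; names are invented. An update passes only if BOTH the
# time-based throttle (minimum interval) and the value-based throttle
# (minimum delta) allow it.

def should_publish_or_log(new_value, last_value, elapsed_s,
                          min_interval_s, min_delta):
    """min_delta may be a number, 'any change', or 'always'."""
    # Time-based throttle: too soon since the last published/logged value.
    if min_interval_s and elapsed_s < min_interval_s:
        return False
    # Value-based throttle.
    if min_delta == "always":
        return True                      # delta throttle disabled
    if min_delta == "any change":
        return new_value != last_value   # any non-zero change
    if not min_delta:                    # unspecified or 0: no value throttle
        return True
    return abs(new_value - last_value) >= min_delta

# Example: log temperature only when it moves by at least 1 degree
# and no more often than every 300 seconds.
print(should_publish_or_log(21.4, 20.2, elapsed_s=350, min_interval_s=300, min_delta=1))  # True
print(should_publish_or_log(21.4, 21.0, elapsed_s=350, min_interval_s=300, min_delta=1))  # False (delta too small)
print(should_publish_or_log(25.0, 20.0, elapsed_s=120, min_interval_s=300, min_delta=1))  # False (too soon)
```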
Editing Datapoint Value Alarm Conditions
To edit datapoint value alarm conditions, perform the following steps:
- Go to the Edit Datapoint Properties → Alarm Type Configuration tab.
- Set the Alarmed Yes/No option to enable alarm monitoring for the datapoint. This feature is available with SmartServer release 2.7 or higher.
- Set the other fields as described below:
Alarm Name – a text field where you can enter a name for the alarm type definition. If defined, this name appears in the Alarm Type list. Alarm Name must be unique across all alarm types and across all device types for handling alarm assignments.
High Warning and Low Warning – datapoint limits for triggering warnings, one for a high value warning and one for a low value warning.
High Error and Low Error – datapoint limits for triggering alarms, one for a high value alarm and one for a low value alarm.
High Warning Preset and Low Warning Preset – datapoint presets for triggering warning alarms, one for a high-value warning alarm and one for a low-value warning alarm. This setting requires Alarmed to be Yes; this feature is available with SmartServer release 2.8 or higher.
High Error Preset and Low Error Preset – datapoint presets for triggering error alarms, one for a high-value error alarm and one for a low-value error alarm. This setting requires Alarmed to be Yes; this feature is available with SmartServer release 2.8 or higher.
Refer to the table below for examples of alarm settings:
Alarm | Indication |
---|---|
High Error = 90 | Indicates 90 and above |
High Warning = 75 | Indicates 75 and above (values 75 - 89 in this example) |
Low Warning = 25 | Indicates 25 and below (values 11 - 25 in this example) |
Low Error = 10 | Indicates 10 and below |
Refer to the table below for Alarm State changes (i.e., yellow pop-up warning and alarm emails sent) based on datapoint value changes (assumes you are not clearing the active Alarms):
Value | Error / Alarm Description | Event Description |
---|---|---|
Value = 50 | Starting value, no alarm | No alarm (non-alarm value) |
Value = 60 | No alarm | No alarm (non-alarm value) |
Value = 77 | High Warning Alarm | Alarm email sent |
Value = 100 | High Error Alarm | Alarm email sent |
Value = 80 | Stay in High Error Alarm | No change – High Error is more important than High Warning |
Value = 50 | Stay in High Error Alarm | No change – High Error is more important than non-alarm state |
Value = 20 | Low Warning Alarm | Transitioning from High Error/Warning to a Low Error/Warning; changes Alarm state. Alarm email sent |
Value = 5 | Low Error Alarm | Low Error is more important than Low Warning. Alarm email sent |
Value = 50 | Stay in Low Error Alarm | No change – Low Error is more important than non-alarm state |
Value = 75 | High Warning Alarm | Change – Transitioning from Low Error/Warning to a High Error/Warning; changes Alarm state. Alarm email sent |
Value = 100 | High Error Alarm | Change – High Error is more important than High Warning. Alarm email sent |
Value = 50 | Stay in High Error Alarm | Still in High Error Alarm state |
Alarm considerations:
- You must manually clear an alarm using the Alarms and Events widget.
- You will not receive an alarm state change or email notification when the state changes from alarm to non-alarm.
- You will receive an alarm state change when the state changes from Warning to Error.
- When the alarm state changes from an Error to Warning at the same level (e.g., High Error to High Warning or Low Error to Low Warning), you will not receive an alarm state change or email notification.
- You must clear alarms in High Error or Low Error states prior to the value changing from Error to Warning in order to receive an alarm state change and email notification.
- Alarms in High Error or Low Error states should be investigated before the alarms are cleared.
- When the alarm state changes from Low to High or High to Low, you will receive an alarm state change and email notification.
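The threshold and priority rules shown in the tables and considerations above can be summarized in a short sketch. This is an illustrative interpretation (Error outranks Warning, and an active alarm is held until it is manually cleared or the value crosses to the opposite side), not SmartServer code.

```python
# Illustrative interpretation of the alarm priority rules above (not SmartServer code;
# names are invented). Error outranks Warning, and an active alarm is held until it is
# manually cleared or the value crosses to the opposite (High/Low) side.

NO_ALARM, HIGH_WARNING, HIGH_ERROR, LOW_WARNING, LOW_ERROR = (
    "No alarm", "High Warning", "High Error", "Low Warning", "Low Error")
SEVERITY = {HIGH_WARNING: 1, LOW_WARNING: 1, HIGH_ERROR: 2, LOW_ERROR: 2}

def classify(value, high_error=90, high_warning=75, low_warning=25, low_error=10):
    if value >= high_error:   return HIGH_ERROR
    if value >= high_warning: return HIGH_WARNING
    if value <= low_error:    return LOW_ERROR
    if value <= low_warning:  return LOW_WARNING
    return NO_ALARM

def next_state(current, value):
    new = classify(value)
    if current == NO_ALARM:
        return new
    if new == NO_ALARM:
        return current                                   # held until manually cleared
    if (current in (HIGH_WARNING, HIGH_ERROR)) != (new in (HIGH_WARNING, HIGH_ERROR)):
        return new                                       # High <-> Low crossing changes state
    return new if SEVERITY[new] > SEVERITY[current] else current  # escalation only

# Replays the value sequence from the table above.
state = NO_ALARM
for v in (50, 60, 77, 100, 80, 50, 20, 5, 50, 75, 100, 50):
    state = next_state(state, v)
    print(v, "->", state)
```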
If the datapoint is structured, representing more than one value (multiple fields are separated by a comma), then the following format is used for an expression with n terms:
{ {<term 1>}, {<term 2>}, ... , {<term n>} }
where each term has the following format:
{ "logical":<"always" | "never" | "and" | "or" | "xor" | "nand" | "nor">, "field":<field name (may be hierarchical, separated with ".")>, "comparison":<"<" | "<=" | ">" | ">=" | "==" | "!=">, "value":<scalar value> }
For structured datapoints, perform the following steps:
- Set the Alarm Type Name and enable the Alarmed Yes option.
- Disable the Preset option. Changes to the settings appear on the Edit Datapoint Properties view.
- Set the Logical operator (i.e., always, never, and, or, xor, nand, nor).
The always and never logical operators introduce clauses consisting of a series of terms similar to the following:
{ {<clause 1>}, {<clause 2>}, ... , {<clause n>} }
where each clause has the following format for an expression with n terms:
{ {<term 1>}, {<term 2>}, ... , {<term n>} }
The always logical operator specifies that an alarm must always be generated if the clause is true and is the equivalent of or for all terms until the next always or never logical operator. It is the appropriate logical operator if only one term is specified and for the first term if more than one term is specified. The logical operator is ignored for the first term, or if only one term is specified.
The never logical operator specifies that an alarm should not be generated if the clause is true, which is the equivalent of and not for all terms until the next always or never logical operator.
The xor logical operator specifies that (A xor B) is true when A and B are different. For example: if you have (value > 0) xor (state == 1), then this is true when [(value > 0) and (state != 1)] or when [(value <=0) and (state == 1)].
The nand logical operator specifies that (A nand B) is not (A and B).
The nor logical operator specifies that (A nor B) is not (A or B).
- Set the Field name (i.e., name, which may be hierarchical, separated with ".").
- Set the Comparison (i.e., < , <= , > , >= , == , != ).
- Set the Value (i.e., a scalar value).
- Use the Add button () to add settings as needed.
Example 1: shows an alarm for a SNVT_switch with an ON state and a value greater than 80%:
[ {"logical":"always", "field": "state", "comparison":"==", "value":"ON"}, {"logical":"and", "field": "value", "comparison":">", "value":80} ]
Example 2: shows a combination of logical operators:
[ {"logical":"always", "field": "sensorEnable1", "comparison":"==", "value":"ON"}, {"logical":"and", "field": "temperature1", "comparison":">", "value":80}, {"logical":"always", "field": "sensorEnable2", "comparison":"==", "value":"ON"}, {"logical":"and", "field": "temperature2", "comparison":">", "value":90}, {"logical":"never", "field": "bypassSwitch1", "comparison":"==", "value":"ON"}, {"logical":"or", "field": "bypassSwitch2", "comparison":"==", "value":"ON"} ]
Results:
((sensorEnable1 == "ON") and (temperature1 > 80)) or ((sensorEnable2 == "ON") and (temperature2 > 90)) and not ((bypassSwitch1 == "ON") or (bypassSwitch2 == "ON"))
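As a reading aid for the term format and the examples above, the sketch below evaluates an expression of this form against a structured datapoint value. It is an illustrative interpretation of the always/never/and/or grouping rules, not the SmartServer's evaluator.

```python
# Illustrative evaluator for the term format above (not the SmartServer's implementation).
# "always" starts an alarm clause (ORed with any previous alarm clause); "never" starts
# a suppression clause ("and not"); the remaining operators combine each term with the
# clause built so far.
import operator

COMPARE = {"<": operator.lt, "<=": operator.le, ">": operator.gt,
           ">=": operator.ge, "==": operator.eq, "!=": operator.ne}

def get_field(value, field):
    for part in field.split("."):              # hierarchical fields, e.g. "setting.value"
        value = value[part]
    return value

def evaluate(terms, datapoint):
    alarm, suppress = False, False
    clause, in_never = None, False

    def close():
        nonlocal alarm, suppress
        if clause is None:
            return
        if in_never:
            suppress = suppress or clause      # "never" clauses suppress the alarm
        else:
            alarm = alarm or clause            # "always" clauses raise the alarm

    for term in terms:
        result = COMPARE[term["comparison"]](get_field(datapoint, term["field"]), term["value"])
        logical = term["logical"]
        if logical in ("always", "never"):
            close()
            clause, in_never = result, (logical == "never")
        elif clause is None:
            clause = result                    # logical operator is ignored for the first term
        elif logical == "and":
            clause = clause and result
        elif logical == "or":
            clause = clause or result
        elif logical == "xor":
            clause = clause != result
        elif logical == "nand":
            clause = not (clause and result)
        elif logical == "nor":
            clause = not (clause or result)
    close()
    return bool(alarm) and not bool(suppress)

# Example 1 from above: SNVT_switch in the ON state with a value greater than 80.
terms = [{"logical": "always", "field": "state", "comparison": "==", "value": "ON"},
         {"logical": "and",    "field": "value", "comparison": ">",  "value": 80}]
print(evaluate(terms, {"state": "ON", "value": 90}))   # True
print(evaluate(terms, {"state": "ON", "value": 50}))   # False
```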
- Click Update to save the settings and return to the Datapoint Properties widget.
Using Preset Definitions
From the Edit Datapoint Properties → Preset Definitions tab, you can create, edit, or remove preset definitions as described in the sections that follow.
Creating a Preset Definition
To create a preset definition, perform the following steps:
- Go to the Edit Datapoint Properties → Preset Definitions tab.
- Click the Create button ().
- Enter the preset name and value. (Presets are based on the local values.)
- Click Save. The preset definition appears on the Edit Datapoint Properties view.
- Click Update to save the settings and return to the Datapoint Properties widget.
- Refresh your browser window (Ctrl-F5 in many browsers) to display the updated preset definition.
Editing a Preset Definition
To edit an existing preset definition, perform the following steps:
- Go to the Edit Datapoint Properties → Preset Definitions tab.
- Click the Action button () for the desired preset definition and select the Edit Preset action. The Edit Preset view appears.
- Edit the preset definition as needed.
- Click Save.
- Click Update to save the settings and return to the Datapoint Properties widget.
- Refresh your browser window (Ctrl-F5 in many browsers) to display the updated preset definition.
Copying a Preset Definition
To copy an existing preset definition, perform the following steps.
- Go to the Edit Datapoint Properties → Preset Definitions tab.
- Click the Action button () for the desired preset definition and select the Copy Preset action. The Copy Preset view appears.
- Edit the preset definition as needed.
- Click Save.
- Click Update to save the settings and return to the Datapoint Properties widget.
- Refresh your browser window (Ctrl-F5 in many browsers) to display the updated preset definition.
Removing a Preset Definition
To remove a preset definition, perform the following steps:
- Go to the Edit Datapoint Properties → Preset Definitions tab.
- Click the Action button () for the desired preset definition and select the Remove Preset action.
The preset definition is automatically removed.
To remove multiple preset definitions, follow these steps:
- Click the checkmark for preset definitions to be removed. The checkmarks change from blue to yellow.
- Click the Delete button () at the top right of the Edit Datapoint Properties view.
- Click Update to save the settings and return to the Datapoint Properties widget.
- Refresh your browser window (Ctrl-F5 in many browsers) to display the updated preset definition.
Presets Formulas
These example formulas can be copied into the CMS field.
Datapoint format | Datapoint type | Direction | Formula |
---|---|---|---|
Scalar | SNVT_count | Source | |
Scalar | SNVT_count | Destination | {"$": {"enumeration":{"source": "$", "map": {"ON": 100,"MID": 50, "OFF": 0}}}} |
Structured | SNVT_switch | Source | {"$": {"transform": "$.value == 100 && $.state == 1 ? 'ON' : $.value == 50 && $.state == 1 ? 'MID' : $.value == 0 && $.state == 0 ? 'OFF' : ' ' "}} |
Structured | SNVT_switch | Destination | {"$": {"value":{"enumeration":{"source": "$", "map": {"ON": 100,"MID": 50,"OFF": 0}}},"state":{"enumeration":{"source":"$","map":{"ON": 1,"MID": 1,"OFF": 0}}}},"onlyPreset": true} |
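As a reading aid for the SNVT_switch rows above, the sketch below mimics what the destination enumeration map and the source transform express. It illustrates the intent of those formulas only; it is not the engine that interprets them.

```python
# Illustrative sketch of what the SNVT_switch preset formulas above express
# (not the SmartServer's formula engine).

# Destination direction: map a preset string onto the structured value/state fields.
PRESET_MAP = {"ON": {"value": 100, "state": 1},
              "MID": {"value": 50, "state": 1},
              "OFF": {"value": 0, "state": 0}}

def preset_to_switch(preset):
    return dict(PRESET_MAP[preset])

# Source direction: map a structured value/state update back to a preset string,
# mirroring the ternary transform in the table.
def switch_to_preset(dp):
    if dp["value"] == 100 and dp["state"] == 1: return "ON"
    if dp["value"] == 50 and dp["state"] == 1:  return "MID"
    if dp["value"] == 0 and dp["state"] == 0:   return "OFF"
    return ""     # no matching preset

print(preset_to_switch("MID"))                       # {'value': 50, 'state': 1}
print(switch_to_preset({"value": 100, "state": 1}))  # 'ON'
```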
Using Localization Settings
You can set the localization transformation rules to be used by a datapoint to get the native value. Any simple data type, as well as any field in a structure or union, can be transformed.
To set localization, go to the Edit Datapoint Properties → Localization tab.
The transformation is: value = ( input value [raw value] * multiplier ) + offset, rounded to the specified precision. (A short sketch of this rule follows the examples below.)
Precision rounds half-up values as shown in the following examples:
- Calculated 80.456, with a precision value of 2, is rounded to 80.46.
- Calculated 80.452, with a precision value of 2, is rounded to 80.45.
- Calculated 80.995, with a precision value of 2, is rounded to 81.
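The rule and its half-up rounding can be sketched as follows; the multiplier and offset defaults here are placeholders, and this is not the SmartServer's implementation.

```python
# Illustrative sketch of the localization transformation with half-up rounding.
# Default multiplier/offset values are placeholders only.
from decimal import Decimal, ROUND_HALF_UP

def localize(raw, multiplier=1, offset=0, precision=2):
    """value = (raw * multiplier) + offset, rounded half-up to 'precision' decimals."""
    value = Decimal(str(raw)) * Decimal(str(multiplier)) + Decimal(str(offset))
    quantum = Decimal(1).scaleb(-precision)      # e.g. precision 2 -> 0.01
    return value.quantize(quantum, rounding=ROUND_HALF_UP)

print(localize(80.456))  # 80.46
print(localize(80.452))  # 80.45
print(localize(80.995))  # 81.00 (displayed as 81 in the example above)
```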
Once any of the rules have been changed, click Update and then refresh your browser window (Ctrl-F5 in many browsers) to display the updated localization rules.
To clear localization transformation rules, click the Delete button (), then click Update, and finally refresh your browser window (Ctrl-F5 in many browsers) to display the updated rules.
Localization Formulas
These example formulas can be copied into the CMS field.
Datapoint format | Datapoint type | Direction | Formula |
---|---|---|---|
Scalar | SNVT_count | Source | {"transform": "$ * 1.8 + 32"} |
Scalar | SNVT_count | Destination | {"transform": "$ * 2"} |
Structured | SNVT_switch | Source | {"value":{"transform": "$.value + 10"},"state":{"transform": "$.state"}} |
Structured | SNVT_switch | Destination | {"value":{"transform": "$.value * 2"},"state":{"transform": "$.state"}} |
Structured | SNVT_switch | Destination | {"state":{"transform":"$ ? 1 : 0"},"value":{"transform":"min(max($, 10), 100)"}} |
Copying Datapoint Properties
To copy a datapoint property, perform the following steps:
- Open the SmartServer CMS.
- Open the Datapoint Properties widget. Click the Expand button ( ).
- Click the Action button () and select the Copy action. The Copy Datapoint Properties view appears.
- Select the datapoint property you want to copy. The checkmark changes from blue to yellow.
- Click Copy. A confirmation dialog box appears.
- Click OK.
Clearing Datapoint Properties
You can clear datapoint properties for a single datapoint, multiple datapoints, or all datapoints as described in the sections that follow.
Clearing a Single Datapoint Property
To clear the monitoring, logging, and alarming properties for a single, selected datapoint property, perform the following steps:
- Open the SmartServer CMS.
- Open the Datapoint Properties widget. Click the Expand button ( ).
- Click the Action button () and select the Clear action. A Confirmation box appears.
- Click OK to confirm the clear datapoint properties operation for the selected datapoint.
Clearing Multiple Datapoint Properties
To clear the monitoring, logging, and alarming properties for multiple, selected datapoint properties, perform the following steps:
- Open the SmartServer CMS.
- Open the Datapoint Properties widget. Click the Expand button ( ).
- Click the checkmark for the datapoint properties to be cleared. The checkmark changes from blue to yellow.
- Click the Action button () and select the Clear Selected Datapoint Properties action. A Confirmation box appears.
- Click OK to confirm the clear datapoint properties operation for the selected datapoints.
Clearing All Datapoint Properties
To clear the monitoring, logging, and alarming properties for all datapoint properties, perform the following steps:
- Open the SmartServer CMS.
- Open the Datapoint Properties widget. Click the Expand button ( ).
- Click the Action button () and select the Clear All Datapoint Properties action. A Confirmation box appears.
- Click OK to confirm the clear datapoint properties operation for all datapoints.