How to:

This section is under development.

Modify minimal refresh rate for Grafana

By default, in Grafana 7 the dashboard refresh rate cannot be set lower than 5 s. While in a production environment this is a useful guard against putting undesirably high load on the system, during development and testing faster refresh rates are often preferable.

The minimum refresh interval can be modified in the Grafana configuration file grafana.ini, which might be located in different places depending on your deployment. Refer to this page to find its location: https://grafana.com/docs/grafana/latest/administration/configuration/#config-file-locations
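For reference, the setting lives in the [dashboards] section of grafana.ini; a minimal fragment could look like this (the 1s value is only an example):

```ini
# grafana.ini fragment - [dashboards] section
[dashboards]
# Minimum dashboard refresh interval; accepts Grafana duration values
# such as 200ms, 1s, 5s.
min_refresh_interval = 1s
```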

In the case of Docker, the configuration file is not easily accessible from the host machine. It can instead be copied to the host machine, modified, and then copied back to the container. Example steps are described below:

  • Open command line terminal

  • Copy file from container to the host machine:

    docker cp ogamma-logger_grafana_1:/etc/grafana/grafana.ini .
    
  • Modify the setting min_refresh_interval to the desired minimum value.

  • Copy file back to the container:

    docker cp ./grafana.ini ogamma-logger_grafana_1:/etc/grafana/grafana.ini
    
  • Restart the container.
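The edit step itself can also be scripted. The sketch below works on a local copy of grafana.ini; a minimal sample file is created here only for illustration - against a real deployment, run just the sed line on the file copied from the container:

```shell
# Minimal sample file, for illustration only; use the real grafana.ini
# copied from the container instead of creating this one.
printf '[dashboards]\n;min_refresh_interval = 5s\n' > grafana.ini

# Uncomment the setting (strip a leading ";") and set the desired interval.
# "1s" is just an example value.
sed -i 's/^;\{0,1\}min_refresh_interval *=.*/min_refresh_interval = 1s/' grafana.ini
grep min_refresh_interval grafana.ini
```

After copying the modified file back into the container and restarting it, dashboards can use the faster refresh interval.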

How to enable SQL Server Authentication and TCP in Microsoft SQL Server Express

The ogamma Visual Logger for OPC connects to Microsoft SQL Server (including the Express edition) using SQL Server Authentication mode, over a TCP connection. When you install the Express edition of MS SQL Server, both of these options are disabled by default.

Enabling SQL Server Authentication mode.

To enable SQL Server Authentication mode, Microsoft SQL Server Management Studio can be used (it is installed separately from SQL Server Express). Connect to the SQL Express instance using Windows Authentication mode, then select this connection node in the Object Explorer and open its context menu with a right click. Click the Properties menu command to open the Server Properties dialog window. There, in the left side panel, select the Security page and set the option Server Authentication to SQL Server and Windows Authentication Mode.

You can also enable the default user sa if it was disabled. For that, in the Object Explorer panel expand the Security node, then the Logins node under it, and select the sa node. Open its Properties dialog window (using the context menu), select the Status page, and set the option Login to Enabled.

You can also reset the password for the sa user, or add other users as required.
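If you prefer a query window, the sa login changes can also be made with T-SQL (the password below is a placeholder - substitute your own; note that switching the Server Authentication mode itself is still done in the Server Properties dialog, and the service must be restarted for it to take effect):

```sql
-- Enable the sa login and set a new password (placeholder value):
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = 'YourStrongPassword!';
```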

Enabling TCP connections.

To enable TCP connections, SQL Server Configuration Manager can be used (it should be installed together with MS SQL Express). If you type SQL in the Windows search field, it should appear in the list of found applications.

In its left side panel, find the node SQL Server Network Configuration / Protocols for SQLEXPRESS. If you have multiple instances running, there might be multiple nodes under the SQL Server Network Configuration node - select the desired one.

In the main panel, select the TCP/IP node and open its Properties dialog window. In the Protocol tab page, set the option Listen All to Yes.

In the IP Addresses tab page, scroll down to the IPAll node, clear the TCP Dynamic Ports field, and set TCP Port to 1433 (or another port if 1433 is used by another instance of SQL Server). You can also configure each IP address separately.

To apply the modifications, restart the MS SQL Server service (you can use either SQL Server Configuration Manager or Microsoft SQL Server Management Studio for that).

Enable horizontal scrolling in the Logged Variables table.

In version 2.0.0 and later, horizontal scrolling should be enabled. If it is not, this might be due to cached settings saved by a previous version. To clear the cache in Google Chrome, open the developer tools (Settings / More tools / Developer Tools), select the Application tab, and delete the Local Storage entries for the URL of the ogamma Visual Logger for OPC web GUI.

Upgrade time-series database from InfluxDB 2.0 Beta version to InfluxDB 2.0.3 released version.

Upgrading InfluxDB from a 2.0 beta version to a 2.0.x release version is not automatic and requires some manual steps. This HowTo section describes the upgrade process step by step.

Warning

This HowTo guide is applicable only if the databases are running in Docker containers, and you are using InfluxDB 2.0 beta only to store data, without dashboards, tasks, or alerts. If you need to upgrade and transfer not only data but also all other settings (dashboards, tasks, alerts), refer to the guide provided by InfluxData at https://docs.influxdata.com/influxdb/v2.0/upgrade/v2-beta-to-v2/

  1. Prepare a new docker-compose file which uses the new version 2.0.3 of InfluxDB. You can use the file influxdb2.0.3.yml from the GitHub repository https://github.com/onewayautomation/ogamma-logger (in the folder docker). Start the container and configure the InfluxDB 2.0.3 instance to use the same organization and bucket settings that the OVL was using to store data in the InfluxDB 2.0 Beta version. You can use the configuration web GUI of InfluxDB 2.0.3 to configure these settings; when this docker-compose file is used, it is available at the URL localhost:8016.
  2. In ogamma Visual Logger for OPC, create a new TSDB connection of type “InfluxDb 2.0” for the InfluxDB 2.0.3 release version. Configure all settings (measurement, tags, buckets, etc.), but do not enter the token in the Password field yet; clear it and leave it empty. This is required to prevent ogamma Visual Logger for OPC from connecting to InfluxDB before the upgrade is complete. If you click the Test Connection button, it should show the connection status as OK, because during this test it does not yet try to write data.

Note

The released version 2.0.x of InfluxDB uses port number 8086 by default, not the 9999 used by the beta version. When using the Docker compose file influxdb2.0.3.yml, the host name in the TSDB settings should be set to influxdb2.0.3.

  3. Modify the settings for the instance (via menu Settings/Instances, selecting the current instance and opening it for editing) to use the InfluxDB 2.0.3 database configured in the previous step.

After this, refresh the OVL configuration GUI web page and verify that the Measurement and Tags columns in the Logged Variables table have correct values. If they have incorrect values, revise the Measurement Generation Mode and Tags Generation Mode settings and the JSON field settings in the InfluxDB 2.0.3 configuration settings.

Also, open the Statistics dialog window via menu Tools/Statistics and verify that the TSDB Status field displays a connection error message.

Collected data will be stored in the Local Storage buffer now.

  4. Connect to the InfluxDB 2.0.3 container’s terminal:
docker exec -it ogamma-logger_influxdb2.0.3_1 bash

Here ogamma-logger_influxdb2.0.3_1 is the container name.

  5. Export data from the InfluxDB 2.0 Beta container (connecting to it remotely), and import the data into InfluxDB 2.0.3:
  • Export from InfluxDB 2.0 Beta:

    influx query 'from(bucket: "ogamma") |> range(start: -3y)' -r --host http://influxdb2:9999 -o ogamma -t secrettoken > data.csv
    

    Here the option -o defines the organization and the option -t the authorization token. You might need to adjust these values.

    Note

    Export and import can take significant time if the database contains a large amount of data.

  • Import to the InfluxDB 2.0.3:

    influx write --format csv -o ogamma -b ogamma -f data.csv -t secrettoken
    
  • Exit the container terminal by running the command exit.

  6. Verify that the data was imported into InfluxDB 2.0.3. This can be done by connecting to its GUI and running queries, or via Grafana.
  7. In the ogamma Visual Logger for OPC GUI, set the correct authorization token for the InfluxDB 2.0.3 database.
Verify that data is being stored in the new database: in the Settings/Statistics dialog window, the TSDB Status field should display connected, the number of stored values should be increasing, and the Balance field should show only a small number of values left in the Local Storage.
  8. Stop the InfluxDB 2.0 beta version.

If all previous steps were completed successfully, run the following command on the Docker host machine:

docker-compose -f influxdb2.0.yml down

Note

This command might report the error error while removing network, which can be ignored.

How to solve the issue with duplicate records in SQLite or PostgreSQL/TimescaleDB.

To optimize read access to the time-series values stored in PostgreSQL or TimescaleDB, you might want to create an additional index, say over the sourceid and time columns.

Assuming that the server reports only one value per variable per timestamp, you might want to make this index unique.

This kind of index can be created by running SQL query:

CREATE UNIQUE INDEX values_index_sourceid_time on public."values" (sourceid, time);

In most cases this does not cause problems. But if there are logged variables which do not change often, it is possible to get an error about duplicate records when ogamma Visual Logger for OPC restarts, or when a server connection is deactivated for some time and then re-activated. The reason is that after re-connecting to the server, the server reports the last sampled values for all variables. Because a variable may not have changed since its value was last written to the database, the value reported by the server will be a duplicate. Note that the time column stores the OPC UA Source Timestamp, which is the time when the value last changed in the server, not the time when it was reported.

To eliminate this error case:

  • With a SQLite database, you can change the JSON option insertSqlCommand to INSERT OR REPLACE.
  • With PostgreSQL/TimescaleDB, in version 2.1.0 a new option was added: insertPostFix. If you set it, for the index example given above, to the value ON CONFLICT (sourceid,time) DO NOTHING, the database will ignore duplicate records without reporting errors. To apply this option, also turn on the option applyInsertPostFix.
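As a sketch, the two PostgreSQL/TimescaleDB options could look like this in the connection's advanced JSON settings (the field names are taken from the description above; their exact placement within the JSON document may differ in your version):

```json
{
  "insertPostFix": "ON CONFLICT (sourceid,time) DO NOTHING",
  "applyInsertPostFix": true
}
```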

How to install and run InfluxDB.

Windows 10

  • Open a PowerShell terminal and navigate to the folder where InfluxDB and its databases should be located.
  • Download the open-source version:
wget https://dl.influxdata.com/influxdb/releases/influxdb2-2.0.6-windows-amd64.zip -UseBasicParsing -OutFile influxdb2-2.0.6-windows-amd64.zip
  • Extract into current folder:
Expand-Archive .\influxdb2-2.0.6-windows-amd64.zip -DestinationPath './'
  • Run, with the databases created under the current folder in the sub-folders bolt and engine:
influxd.exe --bolt-path=bolt --engine-path=engine --http-bind-address :8086

Linux (Ubuntu 18.04)

wget https://dl.influxdata.com/influxdb/releases/influxdb2-2.0.7-linux-amd64.tar.gz
tar xvzf ./influxdb2-2.0.7-linux-amd64.tar.gz
cd influxdb2-2.0.7-linux-amd64
./influxd --bolt-path=bolt --engine-path=engine --http-bind-address :8086

InfluxDB should now start, and its web interface should be available at port 8086.
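To keep InfluxDB running across reboots on Ubuntu, the command above can be wrapped in a systemd unit. The sketch below assumes the extracted folder was moved to /opt/influxdb2 (a hypothetical path - adjust it to your layout):

```ini
# /etc/systemd/system/influxdb2.service
[Unit]
Description=InfluxDB 2.x
After=network-online.target

[Service]
# Paths below assume the influxdb2 folder was moved to /opt/influxdb2
WorkingDirectory=/opt/influxdb2
ExecStart=/opt/influxdb2/influxd --bolt-path=bolt --engine-path=engine --http-bind-address :8086
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, run sudo systemctl daemon-reload and sudo systemctl enable --now influxdb2.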

Installing SSL certificates from letsencrypt for InfluxDB (Ubuntu)

Free SSL certificates can be obtained from https://letsencrypt.org/. This can be automated in different ways; the steps for one of them are below.

For more details, refer to the Let's Encrypt and certbot documentation.

The commands below install the certbot application, verify that the certbot service is running, and obtain a certificate for the domain influxdb2.opcfy.io:

sudo apt install -y letsencrypt
sudo systemctl status certbot.timer
sudo certbot certonly --standalone --agree-tos --preferred-challenges http -d influxdb2.opcfy.io

The certificate and key files will be located at:

/etc/letsencrypt/live/influxdb2.opcfy.io/fullchain.pem
/etc/letsencrypt/live/influxdb2.opcfy.io/privkey.pem

By default they will be accessible only to the root user. You will need to either run InfluxDB under the root user, or change the permissions on those files.

Enabling SSL for InfluxDB is simple: you just need to pass the certificate’s public and private key file names on the command line:

./influxd --bolt-path=bolt --engine-path=engine --http-bind-address :8086 --tls-cert=/etc/letsencrypt/live/influxdb2.opcfy.io/fullchain.pem --tls-key=/etc/letsencrypt/live/influxdb2.opcfy.io/privkey.pem

To monitor the size of the InfluxDB database folder, the terminal tool ncdu is convenient. It can be installed with the command sudo apt-get install ncdu.

Running InfluxDB as Azure IoT Edge Module

Once the Azure IoT Edge runtime is installed on the device and provisioned, you can install InfluxDB on it remotely using the Azure Portal.

On the device page, click on the Set Modules tab. In the IoT Edge Modules section, click the Add button and select IoT Edge Module from the dropdown list. As a result, the Add IoT Edge Module page should open.

Here, in the Module Settings tab, enter the following fields:

  • IoT Edge Module Name - arbitrary name of the module.
  • Image URI - set to the full URL of the Docker container image: registry.hub.docker.com/influxdb/influxdb:2.0.

In the Container Create Options tab, enter the following settings:

{
    "Hostname": "influxdb2",
    "Volumes": {
        "/var/lib/influxdb2": {}
    },
    "NetworkDisabled": false,
    "ExposedPorts": {
        "8086/tcp": {}
    },
    "HostConfig": {
        "PortBindings": {
            "8086/tcp": [
                {
                    "HostPort": "8086"
                }
            ]
        },
        "Binds": [
          "/var/influxdb2-data:/var/lib/influxdb2"
        ]
    }
}

Click on the Add button. The new module entry should be added to the list of modules.

Click on the Review + create button to complete the module deployment.

How to quickly set up a test configuration with 100K variables.

To verify that the product meets specific performance requirements, to create test data to store in the database, and to gather metrics such as the disk space required for a given number of stored values and the duration of network interruptions that can be handled, you can run tests using the demo OPC UA Server from Unified Automation, following the steps in this section. The test case setup described below uses 100,000 variables with a 1000 millisecond sampling interval; you can use lower or higher numbers.

In general, our tests show that an average throughput of about 100,000 float values per second can be handled by most modern PCs if InfluxDB is used as the destination time-series database. With a lower number of variables, the sampling interval can be faster, and if data does not change often, more variables can be processed. For example, 10,000 variables with a 100 ms sampling rate put the system under the same load as 200,000 variables with a slower 2000 ms sampling interval.

The Unified Automation demo server has a limit of 100,000 added nodes. If you need to log more than 100,000 variables, you can create a second server node to add more variables.

As a result of the increased load, the server takes longer to process requests from the client, and the communication messages between the client and the server become larger. For that reason, some configuration options on both the client and server side need adjustment, as described below.

  1. Download and install OPC UA C++ Demo Server from Unified Automation web site: https://www.unified-automation.com/downloads/opc-ua-servers.html
  2. Find the configuration file ServerConfig.xml under the default installation folder C:\Program Files (x86)\UnifiedAutomation\UaCPPServer\bin\, and modify the values of the following options:
  • MaxSubscriptionsPerSession: 100
  • Serializer / MaxMessageSize: 33554432
  • Serializer / MaxAlloc: 33554432
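As a sketch, the options above map onto ServerConfig.xml elements roughly as follows (the element names are inferred from the option list above and the exact nesting in your file may differ - locate the existing elements rather than pasting this fragment verbatim):

```xml
<!-- ServerConfig.xml fragment; element nesting inferred, verify against the actual file -->
<MaxSubscriptionsPerSession>100</MaxSubscriptionsPerSession>
<Serializer>
    <MaxMessageSize>33554432</MaxMessageSize>
    <MaxAlloc>33554432</MaxAlloc>
</Serializer>
```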

Restart the OPC UA Server to apply changes.

  3. Increase the number of variables under the folder Objects/Demo/007_Massfolder_Dynamic of the OPC UA C++ Demo Server. By default it has 1000 variables, and more nodes can be added to it at runtime, up to 100,000 variables.

To increase the number of variables, the server has a method AddNodes under the node mentioned above. You can call it using UaExpert, a client application from Unified Automation, available for download at https://www.unified-automation.com/downloads/opc-ua-clients.html

Tip

Calling this method can be automated by adding a methods node in the server’s advanced configuration settings dialog window. Switch the JSON editor to text mode to insert the following JSON member:

{
  "methods": [
    {
      "objectNodeId": "ns=2;s=Demo.Massfolder_Dynamic",
      "methodNodeId": "ns=2;s=Demo.Massfolder_Dynamic.AddNodes",
      "inputArguments": [
        {
          "dataType": "UInt32",
          "value": 10000
        }
      ]
    }
  ]
}

Now the server has the number of variable nodes ready to be logged with ogamma Visual Logger for OPC.

  4. In the ogamma Visual Logger for OPC configuration GUI, add a new server node in the Address Space panel. In the opened OPC UA Server node settings dialog window, enter the endpoint URL for the demo server, and modify the values of the following options:
  • timeout: 5000
  • in the JSON editor dialog window opened by clicking on the Edit Advanced Options button:
    • publishRequestQueueSize : 10
    • delayBeforeReconnect : 5000
    • reconnectInterval : 10000
    • readServerStateInterval : 10000
    • timeoutReadServerState : 5000
    • requestTimeouts / createMonitoredItems : 10000
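Put together, the advanced options above could look like this in the JSON editor (the nesting of requestTimeouts is assumed from the slash notation above - verify against the actual document shown in the editor):

```json
{
  "publishRequestQueueSize": 10,
  "delayBeforeReconnect": 5000,
  "reconnectInterval": 10000,
  "readServerStateInterval": 10000,
  "timeoutReadServerState": 5000,
  "requestTimeouts": {
    "createMonitoredItems": 10000
  }
}
```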

Note

These settings work in our test setup, where ogamma Visual Logger for OPC runs on a workstation with 32 GB RAM, an 8-core CPU, and a 1 TB SSD. You might need to increase these numbers on a PC with lower performance.

  5. Set the logging options which will be used when you add new variables to the Logged Variables table.

When new variables are added to the Logged Variables table, they will have parameters such as Publishing Interval, Sampling Interval, and others, as configured in the currently selected variable group. By default, both Sampling and Publishing Interval are set to 1000 ms in the Default Group. If you need values different from the defaults, you can modify them in the default variable group, or create a new variable group with the desired options and select it.

  6. Now you can browse the OPC UA Server node in the Address Space panel by expanding the server node and its child nodes. But do not expand the node Objects/Demo/007_Massfolder_Dynamic, which now has a large number of variables under it, because rendering them all in the web browser takes too long. Instead, hover the mouse over the folder and open the context menu. It will have one command: Log all first-level children. When you click this menu command, all variables located directly under the folder will be added to the Logged Variables table. It might take from 3 to 10 seconds to add them, depending on the hardware used.

After adding the variables, refresh the configuration GUI web page in the browser. After refreshing, all variables should appear in the Logged Variables table and shortly start receiving value updates.

If they stay in a bad status (red icon in the Status column), you can hover the mouse over the icon in the Status column to display a description of the status code.

If a connection with the server cannot be established, increasing the values of the options described above might help.

  7. You can see some statistical information, such as the total number of data values received from the server, in the Tools/Statistics dialog window.

Tip

You can check the number of currently logged variables at the bottom of the Logged Variables table, to the left of the page navigator. The total numbers of variables with bad and good status are displayed in the Logged Variables table header area.

Tip

Due to the large number of records in the table, the number of pages will be large (for 100,000 variables and 10 records per page, there will be 10,000 pages), so moving to a specific page using the page navigator becomes inefficient.

To quickly move to a specific page, use the page selector located at the top right corner of the Logged Variables table.

How to modify configuration options for a large number of variables.

Sometimes you might need to modify configuration options for a large number of variables - for example, to change the sampling interval for all variables of a specific server.

Our roadmap includes plans to add easy-to-use mass editing features right into the GUI. For now, you can modify settings by changing the configuration database with database management tools.

You can see the type (SQLite or PostgreSQL) and location of the configuration database in the dialog window opened via menu Settings / Connection to the configuration database. The name of the table in the database where Logged Variables are stored is loggingNodes. You can modify it using an SQL database management tool. In the case of SQLite (the default type of the configuration database), you can use SQLite Browser, available for download at https://sqlitebrowser.org/.

For example, to change the Sampling Interval and Queue Size for all variables under the server with id 1, you can execute the following SQL query:

UPDATE loggingNodes SET sampling_interval = 1000, queue_size=11 WHERE server_id = 1;

After making changes, don’t forget to save them (by clicking the Write Changes icon). To apply these changes, you can either restart ogamma Visual Logger for OPC, or de-activate the server node in the Address Space panel and activate it again.
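Before restarting, a quick query in the same tool can confirm that the rows were actually affected (the column names are taken from the update above):

```sql
SELECT COUNT(*) FROM loggingNodes
WHERE server_id = 1 AND sampling_interval = 1000 AND queue_size = 11;
-- should return the number of variables configured for server 1
```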

Import variables from CSV file

When the number of variables to log is large, it might be better to import the list of variables from a CSV file than to select them from the address space and add them manually. This section describes how a list of variables can be imported from a CSV file.

The configuration database can be located in either a SQLite or a PostgreSQL database. You can see the type of the database and its location via menu Settings/Connection to the Configuration Database.

Each database type has its own tools for importing data, and some field data types differ between the tables; therefore, the import procedure is described separately for each database type in the sections below.

In both cases, the table where the list of logged variables is stored is called loggingNodes, so records from the CSV file are imported into this table.

Before importing records into the loggingNodes table, create the OPC UA Server nodes in the Address Space panel. These records will be stored in the table servers in the configuration database. You will need the value of the record identifier column id from this servers table, to set in the server_id column of the CSV file with the list of variables.
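To find the id value to put into the server_id column, you can simply list the rows of the servers table in your database tool; the id column holds the identifier to copy into the CSV file:

```sql
SELECT * FROM servers;
```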

Import into SQLite configuration database.

  • Prepare a CSV file with the list of variables, using this template as a base: template file for SQLite. The following columns should be modified:

    • id - enter a unique integer identifier. If there are currently no variables, start from 1. Otherwise, view the table loggingNodes in the database, find the value of the last record’s id, and start with that value incremented by one. Each subsequent record increments the id by 1, and so on.
    • server_id - value of the id column for the corresponding OPC UA Server record in servers table.
    • display_name - display name for the variable.
    • opc_node_id - OPC UA node id, formatted as text with namespace index and identifier. For example:
      • ns=2;s=Demo.Massfolder_Dynamic.Variable0001 - namespace index 2, string type node identifier Demo.Massfolder_Dynamic.Variable0001.
      • ns=2;i=100 - namespace index 2, integer type node identifier 100.
    • sampling_interval - sampling interval in milliseconds.
    • publishing_interval - publishing interval in milliseconds.
    • queue_size - queue size for the monitored item values.
    • active - set to 1 if the variable values should be logged.

    Other columns can be set to default values as in the template file.
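For a large number of variables, the CSV rows can be generated with a small shell script instead of being typed by hand. This is only a sketch: it emits the columns listed above for variables Variable0001 to Variable0100 of the server with id 1; the real template file may contain additional columns, so merge the generated rows into it rather than importing this output as-is.

```shell
{
  # Header row with the columns described above
  echo 'id,server_id,display_name,opc_node_id,sampling_interval,publishing_interval,queue_size,active'
  id=1
  # Zero-padded suffixes match the demo server's variable names
  for i in $(seq -f '%04g' 1 100); do
    echo "$id,1,Variable$i,ns=2;s=Demo.Massfolder_Dynamic.Variable$i,1000,1000,1,1"
    id=$((id + 1))
  done
} > variables.csv
```

The resulting variables.csv contains the header line plus 100 data rows.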

  • Download and install SQLite DB Browser from https://sqlitebrowser.org/

  • Stop ogamma Visual Logger for OPC.

    Accessing a SQLite database from two processes at the same time can cause errors; therefore, stop the ogamma Visual Logger for OPC process before importing data.

  • Import records from prepared CSV file:

    To import records into the table loggingNodes, open the database with SQLite DB Browser. By default, the configuration database is located in the file ./data/config.db (relative to the working directory of the ogamma Visual Logger for OPC process).

    From the File menu, select the command Import / Table from CSV file:

    Select CSV file with list of variables.

    In the Import CSV file dialog window, make sure that the Table name field is set to loggingNodes, and click the OK button.

    On the warning dialog window, click the Yes button. The records from the CSV file should be imported.

Import into PostgreSQL configuration database.

  • Prepare a CSV file with the list of variables, using this template as a base: template file for PostgreSQL. The following columns should be modified:

    • id - enter a unique integer identifier. If there are currently no variables, start from 1. Otherwise, view the table loggingNodes in the database, find the value of the last record’s id, and start with that value incremented by one. Each subsequent record increments the id by 1, and so on.
    • server_id - value of the id column for the corresponding OPC UA Server record in servers table.
    • display_name - display name for the variable.
    • opc_node_id - OPC UA node id, formatted as text with namespace index and identifier. For example:
      • ns=2;s=Demo.Massfolder_Dynamic.Variable0001 - namespace index 2, string type node identifier Demo.Massfolder_Dynamic.Variable0001.
      • ns=2;i=100 - namespace index 2, integer type node identifier 100.
    • sampling_interval - sampling interval in milliseconds.
    • publishing_interval - publishing interval in milliseconds.
    • queue_size - queue size for the monitored item values.
    • active - set to t (that is boolean true) if the variable values should be logged, or to f, if the variable should not be logged.

    Other columns can be set to default values as in the template file.

  • Have a database management tool available with a feature to import records into database tables. In this manual we are using PgAdmin.

  • In PgAdmin, connect to the PostgreSQL server where the ogamma Visual Logger for OPC configuration database is located. Expand the database and select the table loggingNodes. Using menu Tools/Import/Export, open the Import/Export data dialog window:

  • In the Import/Export field select Import.

  • Open the Select File dialog window. On its toolbar, click the Upload File icon. Double click on the central panel to open the file selector and select the CSV file, or drag and drop the file from the file explorer window into it.

  • The file should be uploaded to the server and listed in the central panel. If it does not appear in the list, click the upload icon one more time. Select the file name and click the Select button.

  • Set the option Header to Yes, and select comma , as the delimiter.

  • Click OK to import records.

    A notification should appear in the PgAdmin GUI, reporting the import result. If there were errors, click the More Details button in the notification message area to see what caused the import failure.

  • Restart ogamma Visual Logger for OPC, or toggle the option Active in the OPC UA Server node settings, to apply the changes.