Upgrading Corinna's Electrical System
with a Raspberry Pi and additional Node-RED software.

The initial document on the Upgrade to Corinna's Electrical System got far too long so I decided to cover the addition of a Gateway to allow remote monitoring using a Raspberry Pi 3B+ on new pages. There are currently two new parts: the straightforward addition of The Raspberry Pi using the Venus OS and The Extension to Venus OS Large with Node-RED to allow the addition of a dedicated Dashboard and additional smart management.

This part, which covers Venus OS Large with Node-RED, is very much a work in progress as it serves as a working diary/log book for the development, with some parts broken out into Appendices.

The initial part covers running Node-RED on a standard Linux machine, which I did during my early development but no longer use as I now have a spare Pi 4 running at home for development. The Pi 3B+ which Victron recommend is still in production but on long delivery - it is possible to use a Pi 4 with Node-RED but it is not trivial at this stage - the procedures are covered in an Appendix.

The selection and addition of relays and their programming under Node-RED is another major piece of work which is covered in an Appendix.

Introduction to Venus OS Large and Node-RED

My current installation of Victron Smart devices on Corinna, and the addition of a Raspberry Pi to provide a gateway for remote monitoring, are covered on the previous page, The Raspberry Pi using the Venus OS. There is an extended build of Venus OS under development called Venus OS Large which adds Node-RED and a Signal K server. To quote Victron Energy:

Node-RED is a tool for connecting hardware devices, APIs and online services in new and interesting ways. It provides a browser-based editor that makes it easy to wire together flows. With it, one can for example program a functionality such as activating a relay based on a temperature measurement. Or make far more complex algorithms, tying relays, measurements, or other data available from Venus OS or elsewhere together. All without having to write real source code, which is what Node-RED calls low-code programming for event-driven applications.

Node-RED features a fully customisable dashboard, viewable in a web browser - both locally but also remotely, via the VRM Servers on the Victron Portal.

The Signal K server is aimed at yachts, and multiplexes data from NMEA0183, NMEA 2000, Signal K and other sensor inputs.

Venus OS Large runs well with Node-RED on my Raspberry Pi 3B+ when installed using the procedure in Venus OS Large. It is still 'Work in Progress' but looks very promising for the future for those with some programming knowledge who wish to tinker or customise their systems and have a sufficiently powerful processor in their Victron controller or a Raspberry Pi. Images are available as beta versions for both. The image I installed and describe in most of this document is Venus OS Large v2.80. The initial install was version v2.80~21-large-23 dated 25 September 2021, which has subsequently been updated and is currently Venus OS Large v2.80~33-large-24. It was reported before I started that these builds worked well and that several users were using them without any issues. The idea of an improved dashboard and extra control from Node-RED was very attractive to me and I tried it out once I had got the basic Raspberry Pi system up and running.

To cut the story short for this introduction: the Raspberry Pi 3B+ has adequate power for what I have tried so far and the following sections describe my journey of exploration; much of it is currently in diary form. In general the use and testing of the Node-RED extension has not affected the basic operation of the Venus OS, data gathering by the Pi and transfer to the Victron Portal, and I have been able to work mostly remotely through the VRM Portal rather than on a very cold Corinna over the winter.

Exploration of Node-Red on a Desktop running Linux Mint (Not essential)

This was carried out when I was trying to understand how Node-RED works and is not required, although it helped me: it took a while to get a grasp of all the Node-RED capabilities and it avoided risking a system that was working well at the time.

I found one could load the basic Node-RED software under Debian/Ubuntu and hence Mint - I am not quite sure how I eventually managed to get it working so I will not repeat it here! It was based on https://nodered.org/docs/getting-started/local. It was not the Victron version with any Victron nodes built in, but it was enough to try out some of their examples, and after a short time the penny dropped as to how it worked; progress was then very rapid and enabled me to write the next section.

How does one use Node-RED

I will try to give a very simple description in the hope it helps others. Node-RED works on the basis of a pictorial workspace where you drag a selection of nodes from a palette on the left of the workspace and connect them together by dragging from a node's output to another's input. What this does is connect a series of messages between outputs and inputs, each of which has a payload. In essence these streams of messages are asynchronous and everything is event driven. To the right of the screen is a multi-purpose sidebar (panel) which can display debug information, help and several other panels useful for configuration and diagnostics, and importantly it allows one to configure a separate Dashboard - a User Interface (ui) to display and control the system. Nodes can generate messages, change messages as they pass through (a function node) and display messages on the dashboard, amongst many other things. The display nodes are very powerful and it is as easy to display a value on a dial or graph as a plain number.

Victron devices such as a Victron Solar Charger are added as nodes and you end up with multiple instances as each input and output from a device is added as a separate node - most nodes only have a single input and/or output so, for example, the output representing solar power is a series of messages every 5 seconds. You add another copy of the Victron Solar node and set it up internally to, say, give the charger state, which will be another series of messages every 5 seconds.
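
As a hedged illustration of the sort of thing a function node can then do with that stream (a sketch only - the rounding and the topic label are my own choices, not part of the Victron nodes):

// Sketch: tidy up the solar power messages from a Victron input node
// before sending them to a dashboard gauge (names are illustrative).
var watts = Number(msg.payload);   // a payload arrives roughly every 5 seconds
msg.payload = Math.round(watts);   // whole watts are enough for a gauge
msg.topic = "Solar Power (W)";     // used as the label by some ui nodes
return msg;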

When one comes to start to use Node-RED one has to understand one fundamental difference from most programming one has done before. Node-RED is about flows. It is about messages with payloads being passed, so in many ways it is transitory. An instrument produces a series of messages at regular intervals rather than a set of variables you work on. A good example is a panel switch node on your dashboard. When you turn it from off to on it sends a single message of true, and a single message of false when you turn it off. It can be used to carry out an action immediately by a device which responds to a single message, such as 'turn inverter on'. But if you want to use that message to change anything in the future, the true or false has to be captured. This takes quite a bit of getting used to the first time.

More recent versions of Node-RED have a feature called 'context' which provides the ability to save a value in the equivalent of a more usual variable and access it again, with various scopes such as the same node, the whole flow or globally, and even to save values on a regular basis to a file to preserve them across restarts. Note that saving and restoring 'context' in a function node requires the get() and set() functions, so you need to read the documentation carefully. Being flow oriented is good in many ways and makes embedded device programming easier, but carrying out more conventional logical actions can seem overly complex initially.
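
As a minimal sketch of the context API in a function node (the variable names are illustrative, and the 'file' store only exists once the settings.js change described later has been made):

// Sketch of the context API in a function node.
// flow.* is shared by nodes on the same flow (tab), global.* by everything.
var count = flow.get('count') || 0;          // in-memory, lost on a restart
flow.set('count', count + 1);

// The third argument names the context store; 'file' survives restarts
// if the localfilesystem store has been enabled in settings.js.
var lastVolts = global.get('shunt_voltage', 'file') || 0;
global.set('last_seen_volts', lastVolts, 'file');

msg.payload = count;
return msg;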

Installation on the Raspberry Pi

It was installed using the procedure in Venus OS Large and installation is very similar to any other offline update. The image I initially installed and described in most of this document was Venus OS Large v2.80~21-large-23 dated 25 September 2021, downloaded for the Raspberry Pi from here. I used a new MicroSD card and bought two identical ones so I could make exact clones for backup. In theory the cards only need to be the same size, but in practice it has been reported that anomalies can occur due to detailed differences in the internal implementation of the cards; I have not had problems and I have now covered cloning, including to different size cards, in an Appendix - Simple cloning of the microSD card as a backup.

There are periodically new versions which have enough differences to make me consider an update but first I need to keep track of exactly what changes I have made and how to reproduce them. I have a list of changes/settings - many (in brackets) can be made in Remote Console once you are connected.

  1. Pair Bluetooth with Pi (seemed to be needed after update) ??
  2. Provide Password for Wifi (Settings -> Wifi) , Survived update
    1. Find and note ip address - the same
  3. Set access level to Superuser (Settings -> General, survived update)
  4. Create a root password. Needs to be re-created after an update to enable SSH and SFTP login.
  5. Enable Remote Support to enable sshd and reverse tunnel on VRM and LAN in Remote Console (Survived update)
  6. Turn Off auto-update (Settings -> Firmware, survived update)
  7. Install update from USB stick
  8. Turn on Node-RED (Venus OS Large Features, Survived update)
  9. Make changes to provide persistence in Node-RED in settings.js (Survived update but what if settings.js changed in update)
  10. Extra nodes in Node-RED added to palette. (Survived update)
    1. node-red-dashboard (essential)
    2. node-red-contrib-victron (essential)
    3. node-red-contrib-ui-artless-gauge (More flexible with linear and 270 deg gauge options)
    4. node-red-contrib-cpu
    5. node-red-contrib-stoptimer
    6. node-red-contrib-timeswitch
    7. node-red-contrib-os
    8. node-red-contrib-simpletime

Installing Node-RED is a two-stage process. First you install Venus OS, which covers the first six steps, then you download the Venus OS Large update onto a USB stick and update just as you would the Venus OS. Steps 9 and 10 should not be needed in subsequent updates; whether the earlier steps are needed again is open, but they should not take very long.

Update: I understand there is a new version 2.90 in beta testing which may have OS Large included as standard; if so it may simplify this procedure.

Using Node-RED with Victron Devices

So let's actually go a bit further and look at what we need for an actual useful task. For example, I want to have a nice display with a 'dial' showing the state of charge of my batteries and another showing the voltage from the Victron Smart Shunt, and make it available on my phone locally and remotely. Once you have Venus OS Large, which is the version of the Venus OS updated to have Node-RED installed along with all the Victron device nodes, this becomes almost trivial, using only half a dozen nodes in total for the two displays. So let's have a look at how it all works. You access Node-RED's workspace and display areas (User Interface or ui) in almost the same way as you access the Venus OS, via a web browser.

The basic OS is accessed by pointing your browser at the IP address of the Raspberry Pi on the local network, or remotely via the Victron portal. To get to Node-RED locally on the same network you just access a specific port, so if the Raspberry Pi is at local address 192.168.1.127 then Node-RED is at 192.168.1.127:1880 and the Dashboard (ui) is at 192.168.1.127:1880/ui. On most local area networks you can just enter venus.local:1880 and venus.local:1880/ui. In the case of the portal, an extra menu item appears to take you to the Node-RED workspace to set it all up, or to the Dashboard to display it.

After you have made any changes in the workspace, locally or remotely, you have to Deploy them before they can be utilised - Deploy is the big red button at the top right on the black title bar and there is also a 'hamburger' menu to the right of it which has many useful items including an Import and Export to save your work and Palette to search for and install extra nodes.

Things I have learnt, mostly the hard way, which were not obvious when I started.

Node-RED runs as a service (a systemd service under Linux) and needs to be restarted after a reboot, or use [sudo] systemctl enable nodered.service under Linux to autostart Node-RED at every boot. This is already set up in Venus OS Large but not for my test setup under Linux Mint.

The dashboard layout is in a tab in the side-panel and it is possible to work top down and set up extra tabs and groups, drag nodes to reorder them and drag them between groups regardless of the flows they come from. Groups are actually useful to improve responsive design. Deleting is done by edit -> delete in the opened panel for Tabs and Groups etc. Double-click the name in a flow's tab to edit it.

Switches are difficult to use as one is dealing with flows of messages; on/off switches, buttons and drop-down menus only provide single messages, which are best held using context variables. Even then the variables do not persist through restarts in the default system, but that can be fixed by a simple addition to the configuration file.

Context needs the get() and set() functions to work in function nodes, so the instructions need to be read carefully. See https://nodered.org/docs/user-guide/context

You can join many flows (streams) into a single ui display node input, such as the four core usages of a processor, and likewise into debug nodes. The flows are always one way.

There are some useful examples of Node-RED with Victron devices at https://github.com/victronenergy/node-red-contrib-victron/wiki/Example-Flows

My initial Node-RED ToDo List

This was my initial wish list of what I want to develop for Corinna, initially tested using the local system under Linux Mint to reduce risk before deploying on the Raspberry Pi on Corinna.

These are covered in detail below.

My Node-RED implementation for Corinna.

The development has been an iterative learning process making use of both a Node-RED system running on a Linux Mint computer and the actual Raspberry Pi on Corinna. I do, however, want to give a more coherent description to make it easier for others to follow and to set up the system they want without the many (interesting) diversions in the path I followed. Being truthful, it was a bottom-up approach without a full appreciation of what could be achieved and how it should be integrated.

Initiation and restarts of Node-RED and Raspberry Pi

One of my concerns from the start was that the Raspberry Pi has no tidy shut down or restart mechanisms in its hardware and depends on the Power Supply being turned on and off. I discussed the problem earlier including the terminal commands which can be used by a user logged in over SSH. Node-RED allows one to get round these problems from the dashboard and the ability to do restarts is a vital part of testing the initialisation of the system, preferably remotely to avoid going to a cold boat in the middle of winter! This was therefore to be one of my first developments for the real system on the Raspberry Pi.

Node-RED provides an Exec node which enables one to execute a single command line, but the shutdown command normally has to be run using sudo or as root. The way the Victron Venus OS is set up you probably end up running as the superuser (root), which means that is not a problem, although running as the superuser does leave one open to security issues. My standard Linux machine does need a fudge so commands can be run with sudo without asking for a password:

Using sudo without a password for specific commands in Linux. Not required on Pi running Venus OS

It is difficult to use Exec nodes with commands requiring sudo if you are using Node-RED installed on a normal Linux machine as a password is required and Node-RED has no mechanism to provide it.

A way round this is to configure the system not to require passwords for specific users and commands. I did that a long time ago for a program called truecrypt so I know it works. It involves the modification of a vital file, /etc/sudoers - get it wrong and you are locked out of the machine permanently as sudo no longer works. The system actually provides a special tool to make the changes and check them before saving - the catch is that on Debian-based systems it opens in a terminal-based editor (nano by default, although it may be vi elsewhere), but only one line has to be added so it is not impossible. In nano the important commands are Ctrl O to write the changes and then Ctrl X to exit. NOTE that the Ctrl keys have different meanings here than in graphical editors so watch what you use - for example paste is Ctrl U. So

sudo visudo

and add a single line at the end of the existing file

ALL ALL=NOPASSWD: /sbin/shutdown,/bin/systemctl
This needs to be absolutely accurate so check and check again then save and exit with Ctrl O and Ctrl X

This gives ALL users access to the shutdown and systemctl commands with sudo but without a password.

I have only used visudo on Debian based Linux distributions but visudo --help should tell you if it is installed and appropriate.

I have added three buttons to my Diagnostic tab on the Dashboard for Restart, Cancel and Shut Down of the computer, with notification before the Shutdown or Restart that one has one minute to cancel the actions in case they have been activated by mistake. This enables me to check how initiation takes place when the machine is booted and the node-red service is started. This is quite important as much is transient and not saved during a restart; the devices should not be affected, but switch settings and modes you have set up are not preserved as they are in volatile memory, and it is not obvious that the initial state of, say, a switch is defined by default.
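
For illustration, the three buttons can feed a single function node ahead of an Exec node; this is a minimal sketch for a standard Linux machine with the sudoers change above (the topics and the one minute grace period are my own choices, and on the Venus OS, where you are already root, the sudo prefix and even the availability of shutdown may differ):

// Sketch: turn dashboard button presses into commands for an Exec node.
// Each button sends msg.topic of "restart", "shutdown" or "cancel".
switch (msg.topic) {
    case "restart":
        msg.payload = "sudo shutdown -r +1";   // reboot after a 1 minute grace period
        break;
    case "shutdown":
        msg.payload = "sudo shutdown -h +1";   // power off after 1 minute
        break;
    case "cancel":
        msg.payload = "sudo shutdown -c";      // cancel a pending shutdown/reboot
        break;
    default:
        return null;                           // ignore anything else
}
return msg;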

An aside on the dashboard: Up to now, I have been referring to the Dashboard as if it were a single panel but it can have any number of tabs; initially everything was mixed together but at an early stage I separated the diagnostic and testing functions onto a separate tab and I then added a third tab for the 12V system, so the main display stays the same to avoid confusing other users, i.e. Pauline. Each tab needs to be restricted in size to fit onto our tablets without the need for scrolling in normal use. As previously mentioned, there are powerful GUI tools to rearrange the dashboard layout which are well worth exploring.

Restarting the Node-RED Daemon as if from a reboot.

It is very important to be able to restart Node-RED whilst testing, so I have added a button to the Diagnostic tab to restart the node-red daemon without a machine restart - hence the inclusion of systemctl as well as shutdown in the modification above to avoid the need for a password with sudo on my testing machine. This was tested on my trial Linux Mint machine and the restart of node-red worked as expected with sudo present, and I thought the job done, but that was not the case for the Venus OS Large implementation on the Pi. What had seemed a trivial task turned into several days of frustration but a much better understanding of the Victron Venus OS.

I was unable to restart Node-RED under Venus OS Large in the same way as on a normal Linux machine. It seems that systemd is not in use in Venus. I did various searches of the Venus documentation and the impressive knowledge base in the community with little success. It rapidly became clear that the management of initialisation and services was not by any of the usual SysVinit, Upstart or systemd that I have grown up with. After spending a lot of time I eventually found that there is another, completely different, collection of tools called daemontools developed by D J Bernstein - see https://cr.yp.to/daemontools.html which is my main source for the following paragraphs. The clues to its use were the svc and svok commands that were in busybox, confirmed by the presence of a folder /service which contains a folder node-red. Put ls /service -l | grep -e node-red in an Exec node if you do not have easy SSH access to check. Further searches including svc found a brief reference in https://github.com/victronenergy/venus/wiki/commandline-introduction which confirmed it was used in Venus OS Large.

Having read some of the documentation I can certainly see the attractions of daemontools for a basic embedded or headless system and it is well matched to the use of busybox. One of its strengths is that services are easy to start and if your daemon dies it is automatically restarted within ~5 secs. Another is that when the system administrator restarts a service under daemontools, the service receives the same fresh process state that it received at boot time, whilst with other systems one may have considerable extra work to clean up environment variables, resource limits, controlling ttys, etc.

Daemontools also has easy, reliable signaling. One can use svc to control your daemon. For example:

svc -u /service/yourdaemon: starts or brings a service back up
svc -d /service/yourdaemon: sends TERM, and leaves the service down
svc -t /service/yourdaemon: sends TERM, and automatically restarts the daemon after it terminates
svc -h /service/yourdaemon: sends HUP (see below)
svc -o /service/yourdaemon: runs the service once

HUP goes back to the days of modems and terminals, standing for 'hangup', and covered the actions when control over a process was lost. It is [extensively] used with daemons but its interpretation is less well defined - in most cases it is interpreted as a request for the daemon to reinitialise itself. The default behaviour for HUP is the same as TERM.

In both the cases of options -t and -h one makes use of the automatic restarting of the daemon with a fresh process state if it is ever found to not be running. I have tried both the -t and -h options for my restart/re-initialisation of Node-RED and both seem to behave the same; both give quite a slow response through the VRM portal and it takes over 20 seconds for the dashboard to come back up. I made a final decision to use -t after using both.
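
The dashboard restart button therefore just passes the svc command to an Exec node; a minimal function node sketch (the topic name is my own choice, and it assumes the /service/node-red folder found above):

// Sketch: emit the daemontools command used to restart Node-RED on Venus OS.
// Wire this between a dashboard button and an Exec node.
if (msg.topic === "restart-nodered") {
    msg.payload = "svc -t /service/node-red";  // TERM then automatic restart
    return msg;
}
return null;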

Other Daemontools command examples and information

What mechanisms does Node-RED provide to help initialisation?

The primary mechanism is arguably the inject node, which can be programmed to inject almost any single message shortly after startup (for example after 0.1 seconds) and can be used to start and initialise flows and set up context variables. It is alleged that the first inject to be run is on the first tab but I have not checked.

switch-ui nodes can be controlled by Node-RED messages as well as from the dashboard. As well as the pass-through of messages, messages with payloads 'false' and 'true' can be used to change the switch, which provides a potential mechanism for initialisation of switches.

It is not trivial, but context variables can be saved to and read from a file to maintain context information and thus retain the switch status through a restart situation. I think this is an essential change. See below.
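
Putting these three mechanisms together, a sketch of an initialisation flow might be an Inject node set to fire once shortly after startup, wired into a function node like the one below, with its output going to the switch-ui node so the switch picks up its saved position. The variable name matches the gate example in Appendix B but is otherwise my own choice, and it assumes the file-based context store described in the next section:

// Sketch: restore a dashboard switch to its last saved position at startup.
// Triggered once by an Inject node configured to fire ~0.1 s after start.
var saved = global.get("cpu_switch", "file");   // saved by the gate function elsewhere
if (saved === undefined) {
    saved = false;                              // first ever run: define a safe default
    global.set("cpu_switch", saved, "file");
}
msg.payload = saved;                            // true/false moves the ui switch
msg.topic = "gate";                             // same topic the switch itself uses
return msg;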

Use of persistent Context Variables and updating Node-RED

Node-RED offers the ability to choose between storing context variables in memory or in a file on the SD Card, but the facility is not enabled by default. Changes have to be made in the settings.js file and this should not be undertaken lightly. The first time I tried on the laptop it completely screwed the system and it would not restart, or more correctly went into a continuous loop of restarts. There are two possible causes I have thought of: firstly, it is possible I had an extra ',' in the file after un-commenting the option internally - it looked wrong; secondly, I may have failed to stop the daemon before editing in the changes. After restoring the original file it did start but the flows were all messed up. I have however made many successful changes when I took more care.

Updating Node-RED (only needed on the laptop): The laptop had an old version of Node-RED so I decided to update the system before making the modifications to enable context. Firstly I needed a newer version of node.js. I followed the instructions at https://github.com/nodesource/distributions/blob/master/README.md and used version 14 as recommended in https://nodered.org/docs/faq/node-versions for a Debian-based distribution. This is the record of how I updated, taken from the terminal history - the KEYRING commands may not be needed:

// Updating node.js to version 14 and nodered to latest

sudo systemctl disable nodered.service
node-red-stop

KEYRING=/usr/share/keyrings/nodesource.gpg
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource.gpg.key | gpg --dearmor | sudo tee "$KEYRING" >/dev/null

curl -fsSL https://deb.nodesource.com/setup_14.x | sudo -E bash -
sudo apt-get install -y nodejs

bash <(curl -sL https://raw.githubusercontent.com/node-red/linux-installers/master/deb/update-nodejs-and-nodered)

node-red-start
....
sudo systemctl enable nodered.service

Editing settings.js - Once more everything was working, so I copied the new code in rather than removing the existing commenting-out, and just added a single extra ',' at the end of the function. The code looked like this when I had finished:

......
// lang: "de",

// Context Storage
// The following property can be used to enable context storage. The configuration
// provided here will enable file-based context that flushes to disk every 30 seconds.
// Refer to the documentation for further options: https://nodered.org/docs/api/context/
//
//contextStorage: {
// default: {
// module:"localfilesystem"
// },
//},

contextStorage: {
default: "memoryOnly",
memoryOnly: { module: 'memory' },
file: { module: 'localfilesystem' }
},


// The following property can be used to order the categories in the editor
.......

This worked on the laptop and the extra option for context storage became available, as well as the update of Node-RED from 1.3.5 to 2.1.4. The Pi at this time had an earlier version, 2.0.5, but Node-RED can only be updated on the Pi with a complete Venus OS Large update.

I can now use the change node to 'save' a payload as a variable in context (flow or global) in either memory or in a file on disk. When using a file there is a local cache which is only written every 30 seconds to save wear and tear on solid state memory. This has enabled me to save the state of my switches between reboots and restarts but is obviously useful for far more than that. Note: Almost all of my flows depend on the file based context storage feature being installed.

Where are Node-RED and the settings.js file installed in Venus on the Pi?

I had a search of the Pi over SSH from the pad and eventually found the home folder for root at /data/home/root; that contained the .node-red folder and hence the settings.js file that I needed to modify. See below for terminal listings using SSH.

root@raspberrypi2:/# cd /data/home
root@raspberrypi2:/data/home# ls -al
drwxr-xr-x 4 root root 4096 Jan 1 1970 .
drwxr-xr-x 13 root root 4096 Nov 28 18:14 ..
drwxr-xr-x 5 root root 4096 Nov 28 12:24 root
drwxr-xr-x 3 vnctunne vnctunne 4096 Nov 28 11:54 vnctunnel

root@raspberrypi2:/# cd /data/home/root
root@raspberrypi2:/data/home/root# ls -al
drwxr-xr-x 5 root root 4096 Nov 28 12:24 .
drwxr-xr-x 4 root root 4096 Jan 1 1970 ..
-rw------- 1 root root 362 Dec 18 12:08 .bash_history
drwxr-xr-x 4 root root 4096 Dec 19 12:08 .node-red
drwxr-xr-x 4 root root 4096 Dec 5 21:20 .npm
drwx------ 2 root root 4096 Nov 28 18:02 .ssh

root@raspberrypi2:/data/home/root# cd .node-red
root@raspberrypi2:/data/home/root/.node-red# ls -al
drwxr-xr-x 4 root root 4096 Dec 19 12:08 .
drwxr-xr-x 5 root root 4096 Nov 28 12:24 ..
-rw-r--r-- 1 root root 26770 Dec 5 21:20 .config.nodes.json
-rw-r--r-- 1 root root 26167 Dec 5 21:20 .config.nodes.json.backup
-rw-r--r-- 1 root root 505 Dec 8 10:10 .config.users.json
-rw-r--r-- 1 root root 503 Dec 8 10:10 .config.users.json.backup
-rw-r--r-- 1 root root 64301 Dec 19 12:08 .flows_raspberrypi2.json.backup
-rw-r--r-- 1 root root 64314 Dec 19 12:08 flows_raspberrypi2.json
drwxr-xr-x 3 root root 4096 Nov 28 12:19 lib
drwxr-xr-x 54 root root 4096 Dec 5 21:20 node_modules
-rw-r--r-- 1 root root 20052 Dec 5 21:20 package-lock.json
-rw-r--r-- 1 root root 324 Dec 5 21:20 package.json
-rw-r--r-- 1 root root 12487 Nov 28 12:19 settings.js

I got a listing of settings.js using cat and used meld to compare it to the one on my Linux machine; there were minor differences due to the different Node-RED versions. Making changes to the settings.js file on the Pi is going to require a terminal editor I can use over SSH. For those of us used to the luxury of graphical text editors such as gedit or xed this is going to be a shock! Update: There is an alternative: connect to the Pi using the Secure File Transfer Protocol (SFTP). Initially I did not realise this was available but it means you can view the file structure as if you were in a file manager and download and upload as required using FTP programs such as Filezilla.

The Venus OS uses busybox to provide a very cut-down set of Linux utilities which mostly have reduced functionality - vi is no exception. vi was THE original Unix editor and almost every Linux system has vi or its advanced version vim. It has a steep learning curve for those used to recent graphical editors but is very effective for making quick changes to files when you are experienced, especially on a headless system such as a server or embedded system where only 'terminal' access is available. I have put a brief introduction to vi in Appendix A for those who do not want to (or cannot) use SFTP.

Updating the settings.js file on the Pi

Having found the location of the file on the Pi, the first thing to do was to test stopping the service prior to making the change and re-enabling it. The first time I did this I logged into the Pi using SSH to stop Node-RED and then used the vi editor to add the extra code to settings.js. The full procedure I used was:

On Laptop log into Pi via SSH (and set up your FTP program)

ssh root@192.168.xx.yyy

then on Pi

svc -d /service/node-red // sends TERM, and leaves the service down - Check
svc -u /service/node-red // starts or brings a service back up - Check back up

Now we can take node-red service back down and open settings.js in vi editor or use an FTP program and local editor.

svc -d /service/node-red // sends TERM, and leaves the service down again
cd /data/home/root/.node-red
vi settings.js

On the laptop I copied the following code to the clipboard with a right-click copy.

contextStorage: {
default: "memoryOnly",
memoryOnly: { module: 'memory' },
file: { module: 'localfilesystem' }
},

Back in vi (on Pi)

Change to insert mode by i

Used cursor keys to reach insertion point (below existing commented out code for contextStorage function), added a few blank lines

moved up to middle of new lines and right click paste to insert extra code from clipboard

checked it all looked OK

Changed back to Command mode by Esc or ctrl [

Exited vi saving output by ZZ

Now started Node-RED up as below - this leaves it running again as a service which is restarted on boot

svc -u /service/node-red

Checked all was back working and typed exit in the terminal to log out of the SSH session

I can now choose whether to use context in local memory or on the disk for each variable I use.

Warning: You should avoid duplicate names for context variables on disk and in memory as this will cause confusion and misunderstandings. Also, one should always use meaningful names.

Update: Using SFTP: An alternative is to connect to the Pi using the Secure File Transfer Protocol (SFTP). Initially I did not realise this was available but it should allow you to download the file to your machine, edit and check it and upload the new version back to the Pi without having to learn about the vi editor. My favourite FTP program is Filezilla for my Linux machine - it is also available for Windows and Apple. I use AndFTP on my Android machines. Both allow you to carry out many useful actions on the remote files including Rename, Delete and changing permissions. Node-RED should be completely stopped when you make changes to its files, as above.

This simple change makes a huge difference to the usability of Node-RED as persistence is now available over both Node-RED and Venus OS restarts, which means complex functions are now possible. Almost all of my flows depend on the feature being installed.

The use of persistence is not as simple as I expected, but eventually I found the best way to set and get variables in a function node is like this:

// Multiplication example to calculate power and save in another context variable

var num = global.get('shunt_voltage', 'file') * global.get('shunt_current', 'file');
global.set('shunt_power', num , 'file');
msg.payload = num
return msg;

Note the unexpected [to me] need for quotes. The context store has to be specified as 'file' if you want persistence; if it is not given, the default is 'memoryOnly' when you are using the settings.js additions given above. I am tempted to change the default. If you use the Change node the work is done for you in the drop-downs. See https://discourse.nodered.org/t/a-guide-to-understanding-persistent-context/ for extra details.
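
If I did change the default, the contextStorage block in settings.js might look like the sketch below - I have not actually adopted this, and you would then need to name 'memoryOnly' explicitly wherever you do not want writes to the card:

// Sketch only: make the persistent file store the default context store.
contextStorage: {
    default: "file",
    file: { module: 'localfilesystem' },
    memoryOnly: { module: 'memory' }
},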

Tip: The debug node has an option at the bottom called node status (32 characters) which displays the first 32 characters of the debug message below it as well as in the debug side pane. This is very convenient during testing so I always use it and have left many of the debug nodes in place with their normal side pane output disabled.

Conclusions - How is the overall Raspberry Pi system with Node-RED working in practice?

This has a repeat of the earlier conclusions on the Pi alone for completeness.

As far as I can tell the Raspberry Pi system is all working as it would with a Victron Venus GX (which has no display), in fact it is better in many ways at a much reduced cost. The major cost has been the VE.Direct to USB isolated cables, which are required, and are more expensive (£27) than the VE.Direct cables to Victron's own Gateways. The Raspberry Pi 'gateway' also has Bluetooth so it can be accessed directly by the SmartConnect App for setting up, a feature which most of the Victron GX devices lack.

So we now have several routes to control and monitor the system and can do it locally and remotely. As far as I can tell there is no difference in functionality between using the SmartConnect App via Bluetooth or the VRM portal, except perhaps in smoothness as there are always some lags through the internet. I find the WiFi gives better coverage through the boat than Bluetooth despite being initially provided by an elderly tethered Samsung Galaxy S3 mini (3g) which lives on a bracket in a window. So we have everything we had before with more local range, even before we started to use the portal for worldwide coverage. I have yet to check the data use fully but I do not need to worry too much in the UK and EU with a GiffGaff Goodybag providing 15 Gbytes of 4g data a month for £10. The initial impression is that it may be ~1 Gbyte/month.

I have not finished exploring the Portal (and it is frequently getting updated and improved) but I find it very easy to watch the Solar and Battery status in real time, plus daily and historic data, from the comfort of the house. The views can be fully customised and the configuration is saved on the Portal. There is an App for Android devices as well as access through a browser. It does not seem to be possible to simply control my devices, to, say, switch the inverter to and from Eco mode, or configure any parameters directly from the Portal, but that can all be done by the SmartConnect App and both can be open at the same time. I can see the logic for separating monitoring from configuration but a few simple switches would be convenient, in particular inverter mode.

I have checked that the data continues to be stored in the Raspberry Pi if the internet connection is broken and then uploads to the portal as soon as it is reconnected. However, if the Pi is rebooted local data not already uploaded does seem to be lost. Even if the internet connection is temporarily broken one can still access the basic dashboard via any machine on the local wifi network (or via a network cable for ultimate reliability onboard) at venus.local.

Overall I am delighted with progress so far and I am gradually building up a comprehensive set of data which is stored on the VRM Portal for at least 6 months and can also be downloaded for additional processing in addition to that gathered earlier via Bluetooth.

The use of the Venus OS Large with Node-RED has added an extra dimension and is my major means of access when onboard or for remote monitoring. Node-RED allows much greater flexibility including 'intelligent' control of the devices via your own configurable dashboards. Node-RED can be accessed and programmed via the VRM portal and I have done all the Node-RED programming and 99% of testing remotely. I implemented remote control of the inverter including an option to automatically return the Inverter to Eco (Search) mode from Continuously On after an hour to save power if one forgets. I have also added an extra 'Tab' to the dashboard to monitor the Pi CPU loads and processor temperature and shut down and restart the Pi and Node-RED itself. And this is early days!
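
As an illustration of the sort of logic involved in that Eco mode time-out (a sketch only: the context names are mine, it assumes another flow records the time and mode when the inverter is switched, and the payload values that select the mode depend on your device and the Victron output node, so check them for your own system):

// Sketch: on each clock tick, drop the inverter back to Eco after an hour of 'On'.
var ON_MODE = 2;                // placeholder values - check what your
var ECO_MODE = 4;               // Victron inverter control node actually expects
var mode = global.get("inverter_mode", "file");
var since = global.get("inverter_on_since", "file") || 0;

if (mode === ON_MODE && (Date.now() - since) > 60 * 60 * 1000) {
    global.set("inverter_mode", ECO_MODE, "file");
    msg.payload = ECO_MODE;     // send on to the node controlling the inverter
    return msg;
}
return null;                    // nothing to do this tick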

Appendices

Appendix A - Using the vi editor in Busybox

One may need to edit a configuration file on the Raspberry Pi using access via SSH. The Pi uses Busybox to provide a very cut-down set of Linux utilities which mostly have reduced functionality - vi is no exception. vi was THE original Unix editor and almost every Linux system has vi or its advanced version vim. It has a steep learning curve for those used to recent graphical editors but is very effective for making quick changes to files when you are experienced, especially on a headless system such as a server or embedded system where only 'terminal' access is available.

General Tutorials on vi:

The version of vi in Busybox is cut down in some facilities but easier to use than I remember in other ways. All the basic operation seems to remain the same except that many :set commands are missing.

Cut and paste via Ctrl C, V and X are not available in vi, but in Busybox vi the cursor movement keys seem to work [almost] as normal in both Command and Insert modes. Right-click cut and paste works internally and to other programs, which is different to what I recall of vim. Centre-click and scroll also seem to work if you have a mouse.

yank, put and delete are the internal equivalents of copy, paste and cut; they work in Command mode and are more powerful than a newcomer to vi might expect as they can be combined with various selectors for complete lines and multiples.

Use under Android: Escape is fundamental so you may need to use the Hacker's Keyboard. JuiceSSH seems to add some useful extras to the keyboard which may make its use possible. In most implementations, including vi, Ctrl [ is equivalent to Esc. The internal yank and put may be better than Android's copy and paste. I have not tried a copy and paste from another program into vi running through JuiceSSH yet.

Appendix B - Working with Switch Nodes

Switches (and hence switch nodes) are going to be a fundamental building block we will use in the Node-RED user interface (ui). Working with flows (messages) and the switch node is rather different to normal programming with variables and needs a little thought as to how to implement it. First let's have a look at the help information for the Switch Node.

The Switch Node

  1. Adds a switch to the user interface.
  2. Each change in the state of the switch will generate a msg.payload with the specified On and Off values.
  3. The On/Off Color and On/Off Icon are optional fields. If they are all present, the default toggle switch will be replaced with the relevant icons and their respective colors.
  4. The On/Off Icon field can be either a Material Design icon (e.g. 'check', 'close') or a Font Awesome icon (e.g. 'fa-fire'), or a Weather icon. You can use the full set of google material icons if you add 'mi-' to the icon name. e.g. 'mi-videogame_asset'.
  5. In pass through mode the switch state can be updated by an incoming msg.payload with the specified values, that must also match the specified type (number, string, etc). When not in passthrough mode then the icon can either track the state of the output - or the input msg.payload, in order to provide a closed loop feedback.
  6. The label can also be set by a message property by setting the field to the name of the property, for example {{msg.topic}}.
  7. If a Topic is specified, it will be added to the output as msg.topic.
  8. Setting msg.enabled to false will disable the switch widget.
  9. If a Class is specified, it will be added to the parent card. This way you can style the card and the elements inside it with custom CSS. The Class can be set at runtime by setting a msg.className string property.


Extracting the relevant bits of information that I use.

So we have to work with one-off messages indicating a change of state from the switch node, and that means that for most purposes one has to use them to set a 'context' variable which can then provide what are thought of as normal switch functions: a steady on/off state for logical decisions and a way of enabling and inhibiting a flow of information. We also need a way to ensure that we maintain the state of the switch through power cycling and restarting, like a physical switch, or at a minimum start in a defined state.

What we end up with is that we use the on/off messages to set a context variable; I have chosen to use a global one although that is a wider scope than I currently need. I am also using a persistent variable (one that is saved periodically in a file so it survives power losses, restarts etc). The 'persistence' facility is not installed by default and a few lines have to be added to a system file (settings.js) to enable it; I have discussed how to do that elsewhere. Once that variable is available it can be used in Function Nodes in many ways including 'gating' a flow of messages. It can also be used to restore the state of the switch after a restart.

I have used two implementations of the principle above. The first uses a Change node to take the true|false outputs from the Switch Node to set a global context variable stored in a file, and a separate Function node to carry out the gating action. The value of the context variable can be injected when Node-RED is restarted to give persistence to the Switch Node. This makes maximum use of built-in facilities but risks an ill-defined state the very first time it is run, as the context variable is not defined until the first change of the Switch is made in the Dashboard. If that worries you an extra Inject can be used to manually input the initial status. I favour the following solution:

If we are clever a single function node can use the messages from the Switch Node to set the context variable and 'gate' all other messages depending on the switch. The value of the context variable can be injected when Node-RED is restarted to give persistence to the Switch Node. So in many cases we can do everything with three nodes: the Switch Node for the user interface on the dashboard, a simple dual-purpose Function Node and an Inject node to set the initial status or restore the status after a restart. The following is the code for a Function Node which I use to control a flow of messages used by my display of the CPU loads and CPU temperature, but it could be any flow. In this case the message topic being set to gate is used to distinguish the messages from the switch from other messages. It has an advantage over the apparently simpler method above in that it predictably covers the first ever run as well as maintaining the switch setting during restarts.

// read gate state or initialize if undefined
var cpu_switch = global.get("cpu_switch" , "file") || false;
// Is the message from the switch?
if(msg.topic === "gate") {
  cpu_switch = msg.payload;
  // store the gate state using global context with persistence
  // ie using 'file' option for global context store
  // and then return null
  global.set("cpu_switch", cpu_switch , "file");
  return null
}
if(cpu_switch === true) {
  // message is from other sources but only pass it on if gate open
  return msg;
}
return null

I have exported the flow as a file named example_switch_flow.json which you can import to see how it all adds together. You should download the file (probably using a right-click menu) and import it into a new flow - if you just click on it the results are unpredictable and will depend on the browser you are using! It requires settings.js to have been modified to allow saving context to a file, and the ui nodes to have been installed using Manage palette -> Install and searching for node-red-dashboard. It gates a timestamp flow from an Inject node to a Debug node as a useful example, as that timestamp flow is often used as a clock tick in a system to, for example, update displays.

If you get this example working and understand what each Node does you are probably over 80% of the way to being able to make practical use of Node-RED.

If you are using a Raspberry Pi (or Victron controller) running Venus OS Large and have a Victron device, the next step is to use a display to look at the output flows from the device. If you are doing your preliminary stages on a normal computer/laptop/Pi running a Linux kernel we need an alternative device to look at. The Node-RED community have provided a node which produces information from a device every controller has: the processor. This is actually a very useful addition to one's system in any case as it enables one to check the processor loads and CPU temperature so you can make sure you are not overloading the system. The CPU Node works on the Pi and on my systems running under Linux Mint, which is a flavour ultimately based on Debian, and is supposed to work much more widely, but I cannot give any guarantees.

Appendix C - Venus OS (including Large) 2.80 Partitioning and File Systems

I had a quick look at the partition and file structure for version 2.73 earlier - this section extends that and will ultimately replace it.

The details of the drive Partitioning and File Systems have evolved over time but some features have remained constant. Victron have always tried to make the updating of file systems as safe and robust as possible, so updates have always been based on the information being completely available from whatever source before the existing system is lost; in particular, an update can never fail because a download from the internet was interrupted. This extends to the Venus OS Large. The drive has two identical partitions for the OS file system so the previous working system can be maintained until the update is completed and one can swap back to the old system if problems arise. This is very sensible and the way I already work for major upgrades to my desktop Linux systems; in fact in some cases I have three partitions available for the root file system as well as a common system for users and two additional partitions for unencrypted and encrypted data shared between users and backed up between machines.

The Venus OS has 4 partitions: one small boot partition which also handles the switching between root file systems, two root file systems of which only one is ever in use (the other is a backup of the previous install) and a data partition which gives persistence between updates. The main space is basically partitioned equally between the two root file systems and the data partition, and symbolic links bring the parts together into a more conventional layout. For example, whilst the home folder would normally be below root it is actually at /data/home/ and user root's home folder is /data/home/root so, for example, the Node-RED configuration is in folder /data/home/root/.node-red. Whilst it may seem confusing, the big advantage is that because /data is on a different partition and filesystem this gives complete independence of the two, and the filesystems can be swapped or updated completely independently of users and data. With careful linking there is no reason why the root file system cannot be read-only for even greater stability and security.

My earlier exploration was via the console and built in commands, I have now looked at the microSD cards in my desktop where I can use tools such as gparted to see the structure in more detail whilst I was backing them up. This has revealed some interesting information about the differences in the deployment between my various versions.


Original Venus OS version 2.73 on 16 Gbyte microSD


Venus OS Large v2.80~21-large-23 installed over 2.73 on 64 Gbyte microSD


Venus OS Large after updating to version 2.80 v2.80~23-large-34

What is shown at the top is the original Venus OS without Node-RED which I had used for a month or so. The partitioning uses the old-fashioned Master Boot Record (MBR) partitioning scheme rather than the newer GUID Partition Table (GPT), which is fine for this level of hardware and allows the 4 primary partitions used here. The ext4 filesystems fill the partitions as normal.

The next stage is my first Venus OS Large. OS Large is installed as an upgrade to an existing system. I installed a fresh version 2.73 (the current version), set it up and updated to the latest Venus OS Large v 2.80 beta available to download and did most of my work with Node-RED. You will note that one of the two partitions used for the root filesystems now has a filesystem which is only about 1.2 Gbytes and is over 80% used.

For information, I have big cards as they were a special Black Friday price on Amazon and cheaper than the smaller cards I had intended to buy! The downside is the extra time to clone them as backups.

After a month it was updated to beta version 2.80 v2.80~23-large-34 and you can see that the original root file system remains and the new version is installed into the other partition (/dev/mmcblkp2), which now contains a tiny filesystem only using a fraction of the available space in the partition but itself 91% full. The lack of headroom is not a problem as the basic root file system is now read-only so does not need to grow between version updates. Now there is obviously more than enough space on my huge 64 Gbyte cards for the data partition, but it is surprising that 60% of the card is unformatted.

I have therefore taken a cloned copy (just in case) of my current Sandisk Extreme Pro 64 Gbyte microSD card and expanded the two root partitions and sized the data partition suitably for shrinking to fit a 32 Gbyte card. It could easily be reduced again for a 16 Gbyte card if required. The changes are almost trivial to make in gparted - it is the backup/cloning which takes the time. The Sandisk Extreme Pro card is supposed to be one of the best and fastest cards when tested in the Pi and was again at a favourable price for the 64 Gbyte size from Amazon, but eventually I may need to use smaller cards.


Changes to partitioning to a more sensible usage of the space
which can also be shrunk or expanded easily for smaller/larger cards

If one wants to reduce the above image for cloning onto a 32 Gbyte card one just saves the first 32 Gbytes with this incantation (all on one line). The 7608 is the number of 4 Mbyte blocks corresponding to a standard 32 Gbyte card:

sudo dd bs=4M count=7608 if=/dev/mmcblk2 | gzip > /media/DATA/shrunk_image_count_7608`date +%d%m%y`.gz

Appendix D - Monitoring the Venus OS using Node-RED

I think that it is important to be able to monitor the system to make sure that it is running reliably and it is well within the resources available. I have several applets on my Linux Mint Systems to enable me to do that and I would like to be able to easily check the Venus OS system on the Raspberry Pi in the same way. The information I believe is useful and I keep continuously available via applets on my Linux home computers is the CPU load, the memory (RAM) usage and the CPU temperature. I also monitor Network Data and when I have a separate Graphics processor I also monitor the GPU temperature on my home machines. I can then see at a glance when I have problems such as memory leaks and it is surprising how often, for example, a browser tab starts using excessive resources to the point the machine is overloaded.

It is very different with the Venus OS. It is an embedded system essentially doing the same thing all the time, so we are not concerned with rogue browser tabs, but we do need to make sure that the processor temperature is sensible as the Raspberry Pi has no active cooling, and we need to keep the processor loads down to save valuable power from our batteries. We need to keep an eye on memory use as memory leaks are unacceptable on a system which is remote and may not be physically accessed for weeks over the winter. We also need to be able to do remote restarts of the Venus OS and Node-RED during development and as a contingency if one identifies a problem building up.

So, right from the start, I wanted to develop ways to monitor and investigate the OS performance and behaviour remotely. Node-RED has a node which executes a terminal command and there are many terminal commands available to monitor the system, so the simplest way to get a quick answer is to put a terminal command into an Exec node, activate it with an Inject node and look at the output in the Debug sidebar with a Debug node - a minute's work! There is a slight catch: only a limited subset of such commands is available through busybox, so you may have to check your favourite commands exist and have the options you require. I have therefore installed busybox on my Linux boxes running Linux Mint so I can check before playing on the Venus OS remotely. So let's look at specifics:

Restarting the Raspberry Pi and Node-RED: I have already covered these and I have them at the top of the Dashboard tab with the various system monitors, which can be switched off to save power if required.

CPU Load and CPU Temperature: These are some of the most important to keep an eye on and they interact. A high CPU load is bad for a real time system and can even lead to unpredictable behaviour if there is the chance of race conditions. A high load leads to high power consumption and rising CPU temperatures - once the temperature reaches about 75 degrees the processor cores are slowed down and the load factor increases. Computers for embedded systems rarely have any cooling like fans so you can end up with much less margin in computer speed than you expect, and then a hot day comes along! I prefer to use code I understand although there are also a number of relevant contributed nodes, like the one for the CPU load I discussed above. I had been using that for a while and had already added it to my palette, so I have compromised and used the extra CPU Node because it did just what I wanted: it extracts and displays not only the processor core loads but also the temperature, and seems to add very little overhead. Otherwise I have gone back to basics. So what else do we need and what are the utilities we can use from a console to get it?

Random Access Memory Usage - This is very important to monitor as the Raspberry Pi 3 only has 1 Gbyte of fast RAM. A swap capability is not active as it would cause large numbers of reads and writes on the SD card and, in the limit, risk premature failure. The level of 'Memory Usage' is not quite as simple as it might seem as the Linux kernel does not waste any apparently free memory: it is automatically allocated to buffers and cache to speed up access, and memory is also used by temporary file systems, so a well set up system should rarely have much 'free' memory. What matters more is how much memory can be made available when cache and buffer storage is recovered by the system, but one must remember that as memory is recovered it is at the expense of buffers and cache, which will slow down the system. My monitoring on the home computers shows a near real time pie chart of the different uses and it is fascinating to see how the kernel is continually optimising as different programs run. That is obviously not appropriate for an embedded system, but I have set up the Node-RED dashboard to have a graph of RAM memory usage as well as an instantaneous display. I use Exec nodes and the free command piped through a grep, followed by splitting the resulting string into an array:

free | grep Mem

myArray = msg.payload.trim().split(/\s+/);
msg.payload = myArray
return msg;

This is my earlier coding for selecting fields from a record: trim() removes any leading or trailing white space and split() then splits on any intermediate white space. The relevant array element is then chosen by its index. Most of the selection in my later coding is done using awk instead.
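As an illustration of working on the array, a Function Node along these lines turns it into a percentage for a gauge - a sketch only, assuming the busybox free layout where the total is the second field and the used figure the third:

// Sketch - after 'free | grep Mem' the fields are: Mem: total used free shared buff/cache available
myArray = msg.payload.trim().split(/\s+/);
mem_percent_used = Math.round(myArray[2] / myArray[1] * 1000) / 10;
global.set("mem_percent_used", mem_percent_used , "file");
msg.payload = mem_percent_used + " % RAM used";
return msg;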

Disk Space: Recall that the Venus OS has 4 partitions: a small boot partition which also handles the switching between root file systems, two root file systems of which only one is ever in use (the other is a backup of the previous install) and a data partition which gives persistence between updates. The main space is basically divided equally between the two root file systems and the data partition, and symbolic links bring the parts together into a more conventional layout. There should never be a shortage of space on the Raspberry Pi when running the Venus OS: microSD cards get cheaper as they get larger, the root file systems are very small, and the data from the devices is not only small but is only buffered long enough to cover any breaks in network coverage before uploading to the Portal, which stores data for months. I have added a readout of disk use for completeness - the root file system is read only so its usage is fixed at ~90%, but it only occupies about one third of its partition. On a 32 Gbyte card the data filesystem usage is well under 1%. An 8 Gbyte SD card would be generous and 4 Gbytes possible. The advantage of large cards is that their life is dramatically increased if there is unused capacity, thanks to the clever wear-levelling algorithms in the cards' controllers. The Exec command used is

df -mT | grep ext4 | grep 'sda2\|home\|root\|data'

The second grep contains the 4 OR options so the code can be used on the Venus OS or on my Linux Mint OS.

The resulting string is split into an array.

myArray = msg.payload.trim().split(/\s+/);
msg.payload = myArray
return msg;

The calculation of the percentage use and saving of variables in context follows.

// Force string coercion to number by *1
global.set("root_free", msg.payload[4]*1 , "file");
global.set("root_used", msg.payload[3]*1 , "file");
root_total = msg.payload[4]*1 + msg.payload[3]*1;
global.set("root_total", root_total , "file");
root_percent_used = msg.payload[3] / root_total * 100;
root_percent_used = Math.round(root_percent_used * 10) / 10 ;
global.set("root_percent_used", root_percent_used , "file");
global.set("home_free", msg.payload[11]*1 , "file");
global.set("home_used", msg.payload[10]*1 , "file");
home_total = msg.payload[11]*1 + msg.payload[10]*1
global.set("home_total", home_total , "file");
home_percent_used = msg.payload[10] / home_total * 100;
home_percent_used = Math.round(home_percent_used * 10) / 10 ;
global.set("home_percent_used", home_percent_used , "file");
return msg;

Note the simple way used for coercion of the strings to numbers by multiplying by 1 - a very fast method in JavaScript.
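For anyone unfamiliar with the idiom, multiplying by one makes JavaScript convert the string before doing the arithmetic, so these two lines are equivalent:

root_used = msg.payload[3] * 1;        // coercion by multiplication
root_used = Number(msg.payload[3]);    // the more verbose equivalent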

Network Address - I have run into problems using my standard tethered connections from mobile phones, which have made it desirable to have some basic network information saved in persistent memory for diagnostic purposes and displayed in the Dashboard. I extract the network address from the ifconfig output in an Exec Node with this command

ifconfig | grep 192.168 | awk '{print substr($2,6); }'
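The output is then tidied and saved by a Function Node something like this (a sketch; the context variable name is mine):

// Sketch - store the extracted address for the Dashboard and for later diagnosis
ip_address = msg.payload.trim();
global.set("ip_address", ip_address , "file");
msg.payload = ip_address;
return msg;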

Signal quality and strength: The details of signal quality and strength are currently in a separate Appendix below as they are more complex.

Current Time: I used the Simpletime Node for time with the following script in a Function Node

current_date_time_str = msg.mydate + " " + msg.mytime
global.set("current_date_time", current_date_time_str , "file");
msg.payload = current_date_time_str;
return msg

Uptime: I use the Uptime Node, which is part of node-red-contrib-os, followed by a Function Node containing the code below and a Set node to set a global persistent context value.

var numb = msg.payload.uptime;
msg.payload = Math.round(numb * 100 / 3600) / 100;
return msg;

Some extra logic displays and saves the last value as well

var uptime = global.get("uptime_hours" , "file") || 0;
if(msg.payload < uptime) {
global.set("last_global_uptime", uptime , "file");
}
return msg

SSID: This proved to be much more difficult than I expected. I eventually found that connmanctl services enabled me to list the services it had available, and somewhere in the small print I found: "The symbols in the output above are: '*' favorite (saved) network, 'A' autoconnectable, 'O' online and 'R' ready. If no letter is shown in the O/R column, the network is not connected. In addition, temporary states include 'a' for association, 'c' configuration and 'd' disconnecting. When any of these three letters are showing, the network is in the process of connecting or disconnecting." I could therefore use grep to select the record with the *AO flags set and then awk to extract the SSID. Note that the code only extracts the first word of an SSID containing spaces.

connmanctl services | grep AO* | grep wifi | awk '{print $2}'

Version Number of OS: This was even more tricky and the only place I could find it was in the boot log where the last few lines are

Sat Feb 5 11:41:08 2022: dbus-daemon[598]: [system] Successfully activated service 'fi.w1.wpa_supplicant1'
Sat Feb 5 11:41:08 2022: Checking available software versions...
Sat Feb 5 11:41:08 2022: Active rootfs: 2
Sat Feb 5 11:41:08 2022: Active version: 20220122213600 v2.80~41-large-25
Sat Feb 5 11:41:20 2022: Backup version: 20211217171415 v2.80~33-large-24

so again a grep and awk extract what I want; I have also used tail to get to the end of the boot log

cat /data/log/boot | tail -n 5 | grep "Active version" | awk '{print $(NF)}'

Note: NF is the number of fields in a record

Data Usage: This has again been interesting as the normal programs are not available, so I had to resort to internet searches and came up with https://serverfault.com/questions/533513/how-to-get-tx-rx-bytes-without-ifconfig which had a number of suggestions including using cat /proc/net/dev which produces a formatted version of this:

Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame comp mcast|bytes packets errs drop fifo colls carrier comp
wlan0: 1491661 25728 0 0 0 0 0 0 14153120 29370 0 0 0 0 0 0
eth0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
lo: 22049815 195673 0 0 0 0 0 0 22049815 195673 0 0 0 0 0 0
ll-eth0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

cat /proc/net/dev | grep wlan0 | awk '{print "Received bytes:",$2,"Transmitted bytes:",$10}' then gives something like this:

Received bytes: 1332540 Transmitted bytes: 14124366

This looked promising and showed that more data was being transmitted than received - this must be because I was testing the use of MQTT. When costing data the sum is what matters, so I changed from using awk to my alternative of splitting the incoming record into an array when determining the data rate, using the following function. Note the initialization to take care of the times when the context variables are not set.

myArray = msg.payload.trim().split(/\s+/);
rxtx = myArray[1] * 1 + myArray[9] * 1;                     // received bytes + transmitted bytes
current_rxtx = rxtx ;
last_rxtx = global.get("current_rxtx", "file") || rxtx ;    // previous total, or the current one if unset
global.set("last_rxtx", current_rxtx , "file");
global.set("current_rxtx", current_rxtx , "file");
delta = current_rxtx - last_rxtx;                           // bytes since the last run
global.set("delta_rxtx", delta , "file");
msg.payload = Math.round(delta/(1024 * 10) * 1000) / 1000 + " KB/s" ;   // the divide by 10 matches the 10 second inject interval
return msg;

The results are interesting: there are occasional big peaks but most of the time it is steady at around 2 kBytes/second. I initially had a graph to explore what was giving the peaks, which seem to be the Deployments in Node-RED! With peaks of 120 kBytes/sec a graph has little long term utility.

I decided it would be more useful to have some daily totals for the current day and the last two days to allow estimates of data costs, so I created some more context variables in a Function Node. Note the initialisation using || 0 as the variables are not set during the first few days.

rxtx_2 = global.get("rxtx_1" , "file") || 0 ;
global.set("rxtx_2", rxtx_2, "file")
rxtx_1 = global.get("rxtx_0" , "file") || 0 ;
global.set("rxtx_1", rxtx_1, "file")
rxtx_0 = global.get("current_rxtx" , "file") || 0 ;
global.set("rxtx_0", rxtx_0, "file")
return msg;

These are then used for the Text Nodes. Note the Inject Node is run at an 11 second interval to keep them asynchronous to everything else, which uses 5 or 10 second intervals. The code is common and unneeded lines can be edited out.

current_rxtx = global.get("current_rxtx", "file") ;
rxtx0 = global.get("rxtx_0", "file");
rxtx1 = global.get("rxtx_1", "file");
rxtx2 = global.get("rxtx_2", "file");
// daily totals
rxtx_today = current_rxtx - rxtx0;
rxtx_yesterdays = rxtx0 - rxtx1;
rxtx_day_before = rxtx1 - rxtx2
// specific to each output
msg.payload = Math.round(rxtx_today/(1024 * 1024) * 1000) / 1000 + " MB";
return msg;

The daily data usage is looking to be about 100 Mbytes, say 3 Gbytes a month, which is well within my 15 Gbytes allowance on my £10 a month GiffGaff Golden Goody Bag. I have yet to check how much is due to the MQTT broker.

[WIP] This write up is ongoing but I show the current status of the dashboard below as of January 2022. It shows that the CPU loads and memory usage are pleasingly low. The disk space figures correspond to my trial partitioning changes for a 32 Gbyte microSD. The CPU temperature is low as the outside temperature (and Corinna) are close to freezing. The system had been restarted approximately 24 hours before this screenshot was taken. I have added indicators of up-time. The qualities of the wireless link have recently been added.


The current state of my display tab for the Raspberry Pi 3B+

Appendix E Useful Code Snippets for Node-RED Exec Nodes

The earlier sections discussed how to obtain various useful information but did not give details of how it was coded into the Node-RED Exec Nodes and subsequently displayed in the user interface (Dashboard). This covers many of the coding techniques I used. It is useful to have a basic knowledge of Linux commands for this section and how to find additional information.

1. Selecting fields and stripping characters from strings using awk

Awk is a very powerful tool and whole books have been written on how to use it. This is just a very simple example which uses the ability of awk to split an input into a series of fields $1 $2 etc, print them to the output, and use a substring function when printing to strip characters from the start and/or finish. This is ideal for selecting information from an Exec Node for further processing, storage or display in Node-RED and is often combined with pipes and grep to select particular lines of output (although awk can also do that itself in a relatively simple manner - see the book link above - but it is more difficult to follow).

Select field(s) from a record (aka word from strings)

$ echo "I am very undecided on how to proceed!" | awk '{print $3,$4,"so help me"}'
very undecided so help me

Note that there must be no spaces either side of the commas.

Note 2: print gives a newline by default; if you want to avoid this, use printf.

Strip characters from start of string in specified field.

One can use substr() to do string slicing eg

$ echo "very undecided!" | awk '{print substr($2, 3); }'
decided!

Strip a character from the end of a string

awk '{if (NR>0) {print substr($2, 1, length($2)-1)}}'

NR is the Number of Records, so we can check we have records present - useful if the input is piped from a previous test.

length($2) gives us the length of the second field; deducting 1 from that strips off the last character.

Example:

$ echo "very undecided!" | awk '{if (NR > 0) {print substr($2, 1, length($2)-1)}}'
undecided

The two can be combined and it is wise to check if it also works in busybox awk if you are testing the embedded version on a different machine.

peter@gemini:~$ busybox echo very undecided! | busybox awk '{if (NR > 0) {print substr($2, 3, length($2)-3)}}'
decided

2. Using Javascript to select fields

We can use the JavaScript split(/\s+/) to divide multiple lines into an array, splitting on any white space including line feeds, and then work on the array elements. It is often useful to remove leading and trailing spaces with trim() first, eg:

myArray = msg.payload.trim().split(/\s+/);
// Note array indices start at 0 so for the second field use myArray[1]
msg.payload = myArray[1];

The function substring(start, end) can be used to extract the part of a string between indices start and end. The end parameter is optional; note the use of length, the length of the string.

let text = "Hello world!";
let result = text.substring(3, text.length - 1);
// result is "lo World"

3. Using Grep in an Exec Node

Grep is very well known and is our most powerful tool for searching for PATTERNs in FILEs (or stdin). Here it is used within an Exec Node, with input piped from a terminal command, to select only those lines (individual records) containing a string. What is less well known is that one can do OR and AND equivalents. Grep can be used on files but here we are using it with input on stdin.

grep AND

There is no actual AND in grep but a simple way is to apply it multiple times

grep charstr1 | grep charstr2

you can also use the -E, --extended-regexp option which apparently interprets PATTERNS as extended regular expressions. I would not like to explain further!

grep -E 'charstr1.*charstr2'

Both versions seem to work under Venus OS with Busybox but I have never used the second version

grep OR

Match lines containing any of a number of strings by

grep -e charstr1 -e charstr2 -e charstr3

or

grep 'chrstr1\|chrstr2\|chrstr3'

Both versions work under Venus with Busybox

Other Grep options I often use

-i Ignore case
-v Select non-matching lines
-w Match whole words only
-r recursive (on files)

These all work in busybox

4. Rounding to 2 decimal places with associated text for display in Dashboard

The reduction from multiple decimal places before output is a common requirement.

//Example from a Function Node in Node-RED
var numb = msg.payload;
numb = Math.round(numb * 100) / 100;
msg.payload = "User Disk " + numb + "% full";
return msg;

5. Context variables

Note Inject, Change, Switch, and Trigger Nodes support Context Variables directly by dropdowns.

Note 2: Use of persistent global context variables, ie those saved to file, requires modifications to the settings.js file in Node-RED.

Set Persistent Global Context Variable example

var cpu_switch = true ;
global.set("cpu_switch", cpu_switch , "file");

Or one can use a Change Node

'Gate' on Global Context Variable in Function Node Example

// Set gate state from global context variable or initialize if undefined
var cpu_switch = global.get("cpu_switch" , "file") || false;
if(cpu_switch === true) { return msg }
return null

Note the format of global.get() and global.set() with single or double quotes

Appendix F - Reducing processor load and number of elements in a Chart Node

There are two useful nodes to reduce the number of messages being sent on.

Another way to even out the processor loads is to run the various message streams at different speeds so they are asynchronous. The Victron devices are fixed in rate but I try to use different intervals in my inject nodes, mostly prime numbers close to the rates I want. The effect is noticeable on the chart output of CPU Load with a reduction in spikes.

The last way is to 'gate' the output to the chart nodes which are only used for diagnostics. I have that built in but it is not obvious the saving is worthwhile.
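For completeness, here is a sketch of how both the gating and the rate limiting could be done in a single Function Node rather than with separate nodes - the chart_switch context variable is hypothetical and the 10 second figure is just an example:

// Sketch - pass a message to the chart only if the gate is on and at most every 10 seconds
var gate = global.get("chart_switch" , "file") || false;     // hypothetical on/off switch
var last = context.get("last_sent") || 0;                    // time of the last message passed on
var now = Date.now();
if (gate === true && (now - last) > 10000) {
    context.set("last_sent", now);
    return msg;
}
return null;                                                 // drop everything else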

Appendix G - Measuring Signal Quality on the Raspberry Pi

The definition and measurement of signal quality is always contentious so this has a separate appendix.

The usual utility for such measurements is ifconfig but that does not give any information on signal strength, quality or noise in the busybox version so we have to go to /proc/net/wireless for information.

$ cat /proc/net/wireless
Inter-| sta-|     Quality      |   Discarded packets        | Missed          | WE
face  | tus | link level noise | nwid crypt frag retry misc | beacon          | 22
wlan0: 0000   67.  -43.  -256    0    0     0    0     33941  0

So what do they mean? The important ones are under the Quality heading:

Signal Strength (quality level in /proc/net/wireless)

Basically, the higher the signal strength (level), the more reliable the connection and higher speeds are possible. The signal strength is specified in dBm (decibels related to one milliwatt). Note Decibels are a logarithmic scale.

Values between 0 and -100 dBm are possible, with values closer to zero being better. So -51 dBm is a better signal strength than -60 dBm. For the Raspberry Pi 3B+ :

-30 dBm probably means the antenna is virtually in contact with the Wifi Router!
-50 dBm is considered an excellent signal strength.
-67 dBm is the minimum signal strength for reliable and relatively fast packet delivery.
-70 dBm is the absolute minimum signal strength for reliable packet delivery.
-90 dBm is very close to the basic noise.

On Corinna the Signal Strength is -50 dBm at ~5m when tethered to my Samsung Galaxy M32 mobile phone through 4 wooden bulkheads and slightly worse at -52 dBm with my old Samsung Galaxy S3 Mini which is also very orientation sensitive.
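Once the level is available in a Function Node (the extraction snippet is at the end of this appendix) the thresholds above translate easily into a plain-language label for the Dashboard - a sketch, with boundaries taken from the list rather than anything official:

// Sketch - classify the signal strength in dBm using the thresholds listed above
var level = msg.payload * 1;
var label;
if (level >= -50)      { label = "Excellent"; }
else if (level >= -67) { label = "Good"; }
else if (level >= -70) { label = "Marginal"; }
else                   { label = "Unreliable"; }
msg.payload = level + " dBm (" + label + ")";
return msg;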

Link Quality

A network can be received with a very good signal strength but not as good a link quality. In simple terms, it means how much of the data you send and receive will make it to the destination in good condition.

The quality indicator includes data like the Bit Error Rate, i.e., the proportion of received bits that have been altered due to noise, interference, distortion, or bit synchronization errors; others are Signal-to-Noise and Distortion Ratio. It is measured as a percentage or on a scale of up to 70. The figure in /proc/net/wireless is on a scale of 70, but it is important to understand that the reading is somewhat subjective and depends on the hardware, the manufacturer's figures and the driver software. So, unlike signal strength, it is harder to say which values are still considered to be OK, but it may still be useful when looking for interference from, for example, a microwave cooker in a house, dimmers on lighting or motors on a boat, and when choosing positions for equipment. I get a typical reading of 83% on Corinna at 5m.

Node-RED Code Snippets for Quality and Signal Strength

Exec Node

cat /proc/net/wireless | grep wlan0

Function Node

myArray = msg.payload.trim().split(/\s+/);
link_level_db = myArray[3]*1;                         // field 4 of the wlan0 line is the level in dBm
global.set("link_level_db", link_level_db , "file");
msg.payload = link_level_db + " dBm"
return msg;
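The link quality can be taken from the same array in a similar way - a sketch, assuming the quality figure is the third field of the wlan0 line and using the scale of 70 discussed above:

// Sketch - field 3 is the link quality on a scale of 70
myArray = msg.payload.trim().split(/\s+/);
link_quality = Math.round(myArray[2] * 1 / 70 * 1000) / 10;
global.set("link_quality_percent", link_quality , "file");
msg.payload = link_quality + " %";
return msg;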

Appendix H

Use of MQTT - Draft post to Victron Community Modifications Space

I have been trying out MQTT and started to write a post looking for answers to various problems I was having. However, as fast as it got written I found answers to the problems which were holding me up, and further questions raised their heads. It is currently my main documentation of what I have found so far! I have added a little formatting but little else.

Assistance with MQTT use, keepalives and Node-RED required

I require some help with how to efficiently set up MQTT for use with node-RED in a system comprising:

My main source of information has been https://github.com/victronenergy/dbus-mqtt and that is the obvious and very comprehensive starting point, but even so it took me a long time to be able to subscribe to and see data from my Victron devices. Even after a lot of background reading and searches of community documents I still have a number of open questions, and too much has initially been done by trial and error, sometimes finding the justification at a later stage. A good general reference source for MQTT brokers is https://mosquitto.org/man/mosquitto_pub-1.html - although Victron may be using a different implementation, the basic options should be identical.

The first action is to turn on the MQTT in the Remote Console via the VRM portal -> Menu -> Services -> MQTT as it is disabled by default. I initially switched on Modbus, MQTT on LAN (SSL) and MQTT on LAN (Plaintext) locally because I found a screen shot of that configuration from a user who seemed to have had some success with MQTT so my first question was: which are actually necessary?

A question: Is the 'Secure MQTT on LAN (SSL)' the only one actually required? Answer: Yes - I have now turned off Modbus and MQTT on LAN (Plaintext) and everything still works as I expected.

The next stage is to download the CA certificate file venus-ca.crt and put it somewhere accessible; I put it in my home folder for simplicity when testing from a terminal.

Note: One needs one's username and password (the ones used to access the VRM portal) - during initial testing these will be entered directly into scripts, so if you are concerned you may need to clear the history file after you have finished.

I did a few quick trials using the information above in Node-RED without success although even more basic tests using a local broker and Node-RED worked.

So my next stage was to follow the suggestion in https://github.com/victronenergy/dbus-mqtt and do some trials using the mosquitto_sub and mosquitto_pub command line tools, which are part of the mosquitto-clients package installed before I started. This was where the problems started. I briefly got to a point where I saw a N/bxxxxxxxxxa/system/0/Serial response to a subscription, which gave hope, and I was able to check that the problems were not due to usernames, passwords, certificates, Portal ID or broker URL by changing each in turn and seeing errors in the terminal!

A question "What should the Client ID be?" My assumption is arbitrary but unique, alphanumeric and without whitespace but does Victron have any conventions?

I eventually got a keepalive working with the following code.

$ while true; do mosquitto_pub -m "" -t 'R/bxxxxxxxxxa/system/0/Serial' -h mqtt20.victronenergy.com -u p@pc.com -P pwpwpwpwpw --cafile venus-ca.crt -p 8883 -d ; sleep 60 ; done

Notes for keep-alive code

ISSUE There is a potential problem in that the suggested use of N/<portal ID>/keepalive seems to be for a local network using the MQTT server built into the Venus OS. Its clever new options will then allow you to restrict the huge flows of data which result from the current policy of publishing all the dbus data to the Victron MQTT servers in parallel with the local broker. Publishing a keepalive on that topic to the mqttxx.victronenergy.com servers does nothing at present and one possibly needs a different mechanism. Publishing to the /system/0/Serial topic as in the command above works fine.

The Code for subscription has been more of a problem as the following test shows:

:~$ mosquitto_sub -v -I nbcorinna -c -t 'N/bxxxxxxxxxxa/solarcharger/289/History/Daily/#' -h mqtt20.victronenergy.com -u p@pc.com -P pwpwpwpw --cafile venus-ca.crt -p 8883
Error: You must provide a client id if you are using the -c option.

I looked at https://mosquitto.org/man/mosquitto_pub-1.html and it seemed the -I option was the problem so I tried the following:

:~$ mosquitto_sub -v -i nbcorinna -c -t 'N/bxxxxxxxxxxa/solarcharger/289/History/Daily/#' -h mqtt20.victronenergy.com -u p@pc.com -P pwpwpwpwpw --cafile venus-ca.crt -p 8883
N/bxxxxxxxxxxa/solarcharger/289/History/Daily/30/TimeInFloat {"value": 275.0}
N/bxxxxxxxxxxa/solarcharger/289/History/Daily/30/TimeInAbsorption {"value": 1.0}
N/bxxxxxxxxxxa/solarcharger/289/History/Daily/30/TimeInBulk {"value": 227.0}
.....
..... ~ 450 lines follow

ISSUE: As can be seen above the option -I should be a -i if one wants persistence from the -c option (see https://mosquitto.org/man/mosquitto_pub-1.html)

I have not determined the effect of running the keepalive on the Raspberry Pi's MQTT server and testing its options to choose the data which is published.

This was now all looking promising as I now understood more of the terms involved, and the transfer to using Node-RED was very quick and simple. As several people have noted it is the keepalive which is crucial, and a basic keepalive was almost trivial to implement - just two nodes are needed when using Node-RED.

There is no separate configuration for communication with the broker in Node-RED; it is done in the first mqtt node you use and is then shared with other nodes you add. (You can add extra brokers as well.) I will add screen shots for the three 'screens' you use to set it up, but having used mosquitto_sub and mosquitto_pub the inputs required were now very obvious. I already had an Inject Node with a blank character string input set to repeat every 60 seconds as a keepalive, so that was the setup complete.

It was also quick to get an mqtt in node (subscription node) running and get the expected stream of messages pouring in when I added a Debug Node. The object delivered is however a little different to what I expected, and the ability of the Debug Node to output the whole object was useful in understanding how to use it. The mqtt in node has a choice of output payload and I tried several. I have initially chosen the parsed JSON output and then use a Function Node to transform it to a value rounded to two decimal places to feed chart, gauge and text nodes in a dashboard. I have also added a Delay Node set up to limit the rate to a maximum of one message every 5 seconds. This means that a typical flow from device to display only needs 4 nodes, with minor changes only between flows. See attachments.
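The Function Node in such a flow is very short; a sketch of the sort of thing required, assuming the parsed JSON output so that the payload is an object with a value property as in the mosquitto_sub output above:

// Sketch - dbus-mqtt messages arrive as objects like {"value": 12.345}
var numb = msg.payload.value;
msg.payload = Math.round(numb * 100) / 100;   // two decimal places for chart, gauge and text nodes
return msg;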

The difficulty comes in finding the 'address' of the parameter you want to subscribe to. There are nearly a thousand to choose from, there are some duplicates, and the designations are not always very obvious. I did a subscribe to everything with a /# subscription and saved it into a file. It was 4600 lines long! I then used sort and uniq to make it more manageable and largely free of duplicates. grep enabled me to remove null values and I was down to under a thousand, half of which were daily solar history readings. From the remainder, I selected a few I might need into a new list of approximately 100, including some possibly useful history.

The huge number of parameters on the dbus, even with only three Victron devices plus the Pi, gives another problem. Every time the keepalive times out, all retained values are removed from the broker by publishing an empty payload. They then need to be resent, after which the messages are retained by the broker, so if you subscribe to the broker you'll always get the last message for each subscribed topic. The size of the bulk send is circa 250 Kbytes even on my small system. There is now a way to specify which parameters should be sent but it does not work on the VictronEnergy broker on the web.

Question: One can see the problem in data flows and the load on the broker with just publishing everything on the dbus. Can somebody point me to the latest thoughts on that?

 

Is the MQTT approach a useful addition? What are the advantages and disadvantages?

Having got MQTT working I now ask myself if it is useful when there are already many ways to access information from and control Victron Devices.

None of these makes a very convincing case to me, other than the daily totals, for use via the Victron broker. The case becomes more balanced if one is on the same local network, although even then I see no reason why both should be used in parallel. I have not however yet investigated the processor and other loads of using the inbuilt Mosquitto MQTT broker in the Venus OS.

I have also looked at the data usage and that was surprising: the steady state data usage with MQTT on my system was an order of magnitude lower than using the Victron VRM, although the VRM would normally only be used for short times.

The bottom line: I am currently not using MQTT

Appendix I - Controlling relays from a Raspberry Pi GPIO

The addition of relays is very important as some of the less smart Victron devices have inputs for remote control switches. It always surprises me that after 50 years or more of solid state electronics, relays are still in common use - in fact their use surprised me over 50 years ago when I started designing equipment for research satellites!

There are several different ways to access GPIOs from programs, but sysfs is a simple one supported by the Linux kernel which makes the devices visible in the file system. Sysfs is a pseudo filesystem provided by the Linux kernel that makes information about various kernel subsystems, hardware devices, and device drivers available in user space through virtual files. GPIO devices appear as part of sysfs so one can experiment from the command line without needing to write any code. For simple applications you can use it this way, or by putting the commands in shell scripts. Before continuing, I should mention that this interface is being deprecated in favour of a new GPIO character device API. The new API addresses a number of issues with the sysfs interface. However, it can't easily be used from the file system like sysfs, so the examples here will use sysfs, which is still going to be supported for some time. In any case, as far as I can tell, the Venus OS only uses sysfs for GPIO control. A basic introduction to controlling the GPIO using sysfs is described at https://raspberry-projects.com.

So what is meant by a pseudo filesystem? There are many devices which are accessible via a mechanism which looks like a folder structure - I found about 60 in /sys/class/. If we look at /sys/class/gpio under the Venus OS 2.80 we find the following list

export
gpio13
gpio19
gpio21
gpio26
gpio5
gpio6
gpiochip0
gpiochip504
unexport

This corresponds to a set of folders for the gpios which are in use and we also find two other items which behave like files, export and unexport. Only a small number of the possible gpio ports are in use and to make say GPIO23 available we need to send 23 to that 'pseudo file' export. We will now find an extra folder gpio23 as below:

root@raspberrypi4:~# echo 23 > /sys/class/gpio/export
root@raspberrypi4:~# ls /sys/class/gpio/gpio23
active_low device direction edge power subsystem uevent value
root@raspberrypi4:~# echo "out" > /sys/class/gpio/gpio23/direction
root@raspberrypi4:~# echo 1 > /sys/class/gpio/gpio23/value
root@raspberrypi4:~# echo 0 > /sys/class/gpio/gpio23/value
root@raspberrypi4:~#

I have then sent "out" to direction to make it an output and 1 to value to turn it on and 0 to turn it off. All quite simple. The use of export and unexport gives a crude mechanism to avoid problems if several programs or users try to access the same GPIO. I have not tried all the various parameters but active_low is useful as some common relays boards are activated with a low input.

I now need to find a GPIO not in use at present for my experiments. https://github.com/kwindrem/RpiGpioSetup/blob/main/FileSets/gpio_list has a list of the pins currently used in Venus OS, including a proposed extension to 6 relay outputs (from 2), 5 inputs and an extra graceful shutdown pin. The following is an extract:

# relays are active HIGH those which exist in my 2.80 large have a *

# Relay 1 Pin 40 / GPIO 21 *
# Relay 2 Pin 11 / GPIO 17
# Relay 3 Pin 13 / GPIO 27
# Relay 4 Pin 15 / GPIO 22
# Relay 5 Pin 16 / GPIO 23
# Relay 6 Pin 18 / GPIO 24
# Digital input 1 Pin 29 / GPIO 05 *
# Digital input 2 Pin 31 / GPIO 06 *
# Digital input 3 Pin 33 / GPIO 13 *
# Digital input 4 Pin 35 / GPIO 19 *
# Digital input 5 Pin 37 / GPIO 26 *
#### Graceful shutdown input
#### Note this input is NOT added to the available I/O used by Venus OS !!!!
# Pin 36 / GPIO 16

Another check is via gpioinfo, a command which uses the new GPIO character device API. This was done with a trial version of OS 2.82 large on my Pi 4 - note that none of the GPIOs are in use apart from the one I am testing.

root@raspberrypi4:~# gpioinfo 0 | grep GPIO
line 4: "GPIO_GCLK" unused input active-high
line 5: "GPIO5" unused input active-high
line 6: "GPIO6" unused input active-high
line 12: "GPIO12" unused input active-high
line 13: "GPIO13" unused input active-high
line 16: "GPIO16" unused input active-high
line 17: "GPIO17" unused input active-high
line 18: "GPIO18" unused input active-high
line 19: "GPIO19" unused input active-high
line 20: "GPIO20" unused input active-high
line 21: "GPIO21" unused input active-high
line 22: "GPIO22" unused input active-high
line 23: "GPIO23" unused input active-high
line 24: "GPIO24" "sysfs" output active-high [used]
line 25: "GPIO25" unused input active-high
line 26: "GPIO26" unused input active-high
line 27: "GPIO27" unused input active-high

There is some documentation in the kernel documentation at https://www.kernel.org/doc/Documentation/gpio/sysfs.txt and https://embeddedbits.org/new-linux-kernel-gpio-user-space-interface/ has an introduction to the differences between the new API and the old deprecated sysfs mechanism. The fact that gpioinfo works shows that support for the new mechanism is compiled into the kernel used by the Venus OS, although different builds have different kernel versions. Current documentation (https://github.com/victronenergy/venus/wiki/bbb-gpio) also implies that IO access in the Venus OS is implemented as sysfs control: /sys/class/gpio.

I also did a check on my Pi 3B+ with OS version 2.80 large using gpioinfo | grep sysfs

line 5: unnamed "sysfs" input active-high [used]
line 6: unnamed "sysfs" input active-high [used]
line 13: unnamed "sysfs" input active-high [used]
line 19: unnamed "sysfs" input active-high [used]
line 21: unnamed "sysfs" output active-high [used]
line 26: unnamed "sysfs" input active-high [used]

Which shows that the 5 standard inputs and the relay output all use sysfs as expected

There is a good interactive pinout guide for the Raspberry Pi showing the pins which are used by convention for certain purposes and those available for general use at https://pinout.xyz/# This shows that most GPIOs have multiple uses. Many of the optional uses are configured by Device Tree overlays. Device Trees are well beyond what I understand well enough to cover here, but they make it possible to support many hardware configurations with a single kernel and without the need to explicitly load or blacklist kernel modules. On the Raspberry Pi, Device Tree usage is controlled from /boot/config.txt. By default, the Raspberry Pi kernel boots with the device tree enabled but only a small set of GPIOs are configured for special purposes - eg use of pins for a serial port. gpioinfo seems to be the way to find out the current status.

In summary it looks like 23, 24 and 25 are suitable for my experimenting with relays and 25 looks best long term.

At this point I could work happily in the terminal to set up and change GPIO pins stage by stage, so it was time to create some scripts which I would then be able to use from Node-RED. The following checks if the GPIO has already been exported and is thus set up; if not, it is exported and set to be an output, and I also add what has been done to a log file. Then the value can be changed and again logged.

#!/bin/bash
# File /data/home/root/.node-red/scripts/gpio24_on.sh

# If gpio folder does not exist then export and set up as output
if [ ! -d /sys/class/gpio/gpio24 ]
then
   echo "24" > /sys/class/gpio/export ;
   echo "out" > /sys/class/gpio/gpio24/direction ;
#    Uncomment if active_low required to activate relay
#    echo 1 > /sys/class/gpio/gpio24/active_low ;
   echo `date` "Enabled gpio24 for output" >> /data/home/root/.node-red/gpio_log.txt;
fi
echo 1 > /sys/class/gpio/gpio24/value ;
echo `date` "Switched gpio24 on" >> /data/home/root/.node-red/gpio_log.txt;

This is called within an Exec Node, driven by a Switch Node with some persistent context variables to maintain the position over restarts. It shows how simple Node-RED can be in use. The blue warning dots below are because I temporarily shifted the flow to get a screen shot and did not want to Deploy it.
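As an indication of what that arrangement does, here is a sketch of a single Function Node doing a similar job - it remembers the last commanded state in persistent context and builds the script path for an Exec Node set to append msg.payload; the gpio24_off.sh script and the variable name are mine, mirroring the gpio24_on.sh above:

// Sketch - remember the relay state over restarts and select the script to run
var state = msg.payload === true;                 // true = relay on, false = relay off
global.set("gpio24_state", state , "file");
msg.payload = state ? "/data/home/root/.node-red/scripts/gpio24_on.sh"
                    : "/data/home/root/.node-red/scripts/gpio24_off.sh";   // hypothetical companion off script
return msg;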

https://tlfong01.blog/2020/06/27/jdvcc-relay/ has wiring diagrams of many relay boards with pictures and includes the cheap ones from Amazon I have been using for tests, namely https://www.amazon.co.uk/YOUMILE-Channel-optocoupler-Support-Trigger/dp/B07TTVYGC8 at £10.99 for 5. I advise derating them and not using these relays for 230 volts.

Appendix J - Kernel Changes required for Venus OS for Raspberry Pi 4 v1.4 and higher.

I am carrying out a number of experiments on interfacing to the GPIO on my Raspberry Pi and I have begun to feel slightly vulnerable as I only have the one, which is at the heart of the solar and power system on Corinna, so I started to look for a spare for experiments. Unfortunately the Raspberry Pi 3B+, which is recommended and which all the software is written for, is no longer available or is on months-long delivery. I found, however, that a couple of people are working on the software for the latest Pi 4B - there was support for the first couple of revisions but the kernel does not support the latest versions 1.4 and 1.5. The current software from Victron uses kernel 4.19 for the Pi 4, which is now very out of date, although other builds have moved to 5.10 which seems to support most of the interfaces on the Pi 4 revs 1.4 and 1.5, and versions have been made available for test.

I therefore decided to try to buy a Raspberry Pi 4B, which is also like finding hen's teeth. I eventually found a 2 Gbyte RAM one in a starter kit with case, microSD, micro HDMI cable and a 3A mains power supply at Pi Hut, where I bought my last Pi 3B+. It was not a ridiculous price as I would have had to buy some of the items in any case.

This worked with trial versions of the Venus OS, normal and large, and the micro HDMI cable allowed me to connect to a standard monitor for initial tests. The two most important sources of information at the start were below, although the links to software cannot be relied on - versions come and go, and the large version disappeared shortly after I collected mine. There are various suggestions and recipes on how to compile a suitable kernel and install it, and you will now find a number of my postings and suggestions and the assistance I was given.

There is also a very good introduction to compiling a kernel by the Raspberry Pi people themselves at

I have patched and built kernels in the distant past on Ubuntu to get round a hardware problem, but I had no idea what was going to be involved on the Pi. The problem is that it is a highly integrated computer featuring a Broadcom system on a chip (SoC) with an integrated ARM-compatible central processing unit (CPU) and on-chip graphics processing unit (GPU). The ARM processor varies between models of the Pi and it is not helped by the fact that the Venus OS is directed towards real time embedded systems with little inbuilt software. The kernels try to avoid wholesale changes as different ARM based processors are used by means of various software such as Device Trees with overlays, and various other software called during the initial boot sequence which very loosely does part of the job of the old CMOS BIOSes and the newer UEFI system. I do not understand the details sufficiently to risk explaining further and making a fool of myself. The problems are compounded by the fact that, in practice, one has to work on a different computer with a different processor so everything has to be cross compiled. https://community.arm.com/oss-platforms/w/docs/525/device-tree gives an indication of how it is all configured.

To cut a long story short, the Victron Community threads above give enough hints and recipes, combined with a couple of days reading, to understand basically what I was trying to do, and this allowed me to cross compile a compatible kernel and insert it into the Venus OS 2.80 designed for the earlier Pi 4B versions 1.1 and 1.2. That was extended to include large versions with Node-RED. Essentially this updates the kernel to the 5.10.y series, which is the minimum level to support the latest processor and peripheral chips on the Pi 4B version 1.3 and higher motherboards. In most Linux distributions the kernels get minor version changes dozens of times during their life and major changes come every couple of years. Support for a very limited number of LTS (Long Term Support) kernels is 6 years. Some recent LTS kernels with their End of Life (EOL) are: 4.19 (EOL Dec 2024), 5.4 (EOL Dec 2025), 5.10 (EOL Dec 2026) and 5.15 (EOL Oct 2023). So an update from 4.19 to 5.10 as a base for the Venus OS is very sensible, providing 4 more years of support, rather than going higher. One should note that the Raspberry Pi's own kernels have jumped to 5.15 and we may yet find some new feature needs it, but currently basing on kernel 5.10.y is sensible.

Linux is hosted on Git and it is easy to select any previous version; all significant releases have a branch and major changes and bug fixes are back-ported during their supported life. I have covered the use of Git elsewhere. To have the whole Git tree for Linux, with a history going back tens of years, is GBytes of download, but it is only needed once as successive commits (patches) can be applied to it easily. The much quicker alternative is to start with a shallow depth of history and download again when needed. Whatever one does, the cloning and the stages of cross compilation take several hours of elapsed time, so one hopes it will not be too long before Victron take on the task. There is no formal support for the Pi but Victron have been extremely supportive of the Open Source movement and of people who wish to push to the limits or just tinker.

This has not cut the story as short as I would have liked, so it is time to get to the nitty gritty and provide recipes for those with a significant (undefined) previous knowledge of Linux who are not frightened of the console (terminal) and have a suitable Linux machine. They are very much based on the work of @Johnny Brusevold and @bathnm and, as I said before, the stages are going to take many hours and currently need to be repeated every time you wish to upgrade, rather than the short upgrade which is probably applied automatically. But it allows use of the Pi 4B so is a very important bridge at present, and essential updates should be rare. There are two sources of a 5.10 kernel: the one provided by Raspberry Pi, and one forked by @bathnm which has a number of the Venus patches applied. I followed the suggestion of @Johnny Brusevold and initially used the one from Raspberry Pi with the minimum modification to get Bluetooth working with the Victron software, as he had provided a comprehensive recipe.

This all assumes you are using a Linux based machine preferably running a Debian based distribution. This could be a Raspberry Pi but it might be a little slow.

The first time, we need to install a few utilities on the Linux machine you will be using; some may already be present but some not, and this will make sure they are all there:

sudo apt install git bc bison flex libssl-dev make libc6-dev libncurses5-dev crossbuild-essential-armhf

Now we clone the kernel branch used in the latest Venus images, namely rpi-5.10.y, from the main Raspberry Pi Linux repository

git clone --depth=1 --branch rpi-5.10.y https://github.com/raspberrypi/linux

The --depth=1 parameter restricts the clone to the most recent commit, saving the download of a vast amount of history which is not required, at the expense of not being able to simply merge the latest commits later.

Victron have made changes to the Bluetooth device handler, so we need to download their latest Bluetooth file smp.c and incorporate it into the kernel source.

The file from their 5.10.y source which works is located here: https://github.com/victronenergy/linux/blob/fb01a308bf550ea244bcf2b465a01a0f19c6dd63/net/bluetooth/smp.c - place it in your home folder then copy it into the kernel code by:

cp -r smp.c linux/net/bluetooth/

cd linux

Tell the system the type of kernel we are making for the relevant ARM processor which is different to the earlier Pi

KERNEL=kernel7l

Start Cross compiling

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- bcm2711_defconfig

The next step is very slow, expect several hours on most machines. The -j 4 parameter will tell the compiler to use four cores (adjust if you have more or less).

make -j 4 ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage modules dtbs

Now we write the Venus image file to our microSD card. The card should contain a recent image for the Pi 4 version 1.1 to 1.2 downloaded from https://updates.victronenergy.com/feeds/venus/release/images/raspberrypi4/ . I use BalenaEtcher on my Linux machines to flash the image. It is an AppImage which does not need installing and is available for Windows and Macs as well as Linux. It is very quick and, importantly, checks the image.

You can now mount your new card - adjust the /dev/sdb1 and dev/sdb2 to match your SD card mount points which may differ, check and check again to prevent overwriting your own files:

mkdir mnt
mkdir mnt/fat32
mkdir mnt/ext4
sudo mount /dev/sdb1 mnt/fat32
sudo mount /dev/sdb2 mnt/ext4

More Cross Compiling

sudo env PATH=$PATH make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- INSTALL_MOD_PATH=mnt/ext4 modules_install

Now we copy the new kernel into place on our SD card

sudo rm mnt/ext4/boot/zImage
sudo cp arch/arm/boot/zImage mnt/ext4/boot/
sudo cp -r arch/arm/boot/dts/bcm2711-rpi-4-b.dtb mnt/fat32/
sudo rm -rf mnt/fat32/overlays/*
sudo cp arch/arm/boot/dts/overlays/{ads7846.dtbo,mcp2515-can0.dtbo,mcp2515-can1.dtbo,pitft28-capacitive.dtbo,pitft28-resistive.dtbo,rpi-display.dtbo,rpi-ft5406.dtbo} mnt/fat32/overlays/
sudo cp arch/arm/boot/dts/overlays/README mnt/fat32/overlays/

Get two more files from the raspberry pi site for part of the boot sequence and copy to the card

wget https://github.com/raspberrypi/firmware/raw/master/boot/fixup4.dat
wget https://github.com/raspberrypi/firmware/raw/master/boot/start4.elf
sudo cp -r {fixup4.dat,start4.elf} mnt/fat32/

and unmount your microSD card.

sudo umount mnt/fat32
sudo umount mnt/ext4

That got me a normal (not large with Node-RED) working Venus OS with everything I need working. Bluetooth, Wifi, Ethernet, USB, GPIO ports, HDMI for monitor etc.

Now the extra steps to change to a Venus OS large with Node-RED.

This stage is a little obscure because of the way the large versions of the Venus OS are created: a normal system is installed and then updated to a large version. No full images for large versions are produced by Victron. I have not fully explored how the update is done internally, but Victron supply .swu files for all the updates which contain compressed images which are installed in turn into one of the two rootfs partitions. The update process also seems to end up with an ext4 file system which does not fill the partition and is read only. I am not sure why the partition is not filled, but it potentially causes a problem if we make independent changes for the new kernel which no longer fit - only about 10% headroom is provided. What follows is again based on the notes of @Johnny Brusevold, which I am expanding as I write up.

So the first thing to do is to adjust the partitions on the target microSD which we have just generated with a standard Venus OS (without Node-RED). Victron provide a script to expand the filesystem, which can be found on your machine at /opt/victronenergy/swupdate-scripts/resize2fs.sh and which will also make it writable. There is an explanation at https://www.victronenergy.com/live/ccgx:root_access

Or you can do what I did, which is to modify it on a separate Linux machine with gparted, which has a graphical interface so you can adjust the sizes of the partitions to be more suitable; that also expands the filesystems to fill the partitions. This is covered in great detail in another Appendix, Venus OS and OS Large Partitioning and File Systems.

All the following assumes you are still in the linux folder.

The next stage extracts the update image and then mounts it as a loop device. cpio -iv extracts files from the archive in verbose mode, then gzip -d does the next decompression, which gives us the image itself that we require.

cpio -iv < ./venus-swu-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-large-XX.swu
gzip -d venus-image-large-raspberrypi4.ext4.gz

That is then mounted along with the two partitions on our repartitioned Venus OS small microSD card.

sudo mount /dev/sdb1 mnt/fat32
sudo mount /dev/sdb2 mnt/ext4s
sudo mkdir /mnt/virtual
sudo mount -o loop venus-image-large-raspberrypi4.ext4 /mnt/virtual

The following basically copies everything in the update image over the ext4 partition, replacing the existing files (which we extended in size), apart from lib/modules which contains the new kernel modules we want to keep, and likewise the compressed kernel zImage. The boot partition is not changed, so I am not sure why we bothered to mount it!

sudo rm -rf /mnt/virtual/boot/zImage*
sudo rm -rf /mnt/virtual/lib/modules/*
sudo cp -r /mnt/virtual/* linux/mnt/ext4/

The following makes an image you can put onto other cards. You may need to adjust for different card sizes. You can skip to the unmounting if an image file is not required. I skipped this as I have my own procedures for making compressed images, covered in Simple cloning of the microSD card as a Backup, so it is untested by me.

sudo dd if=/dev/sdb of=venus-image-raspberrypi4v1.4-vX.XX_XX-Large.rootfs.img bs=1024 count=1627153

sudo pishrink.sh venus-image-raspberrypi4v1.4-vX.XX_XX-Large.rootfs.img

gzip -k9 venus-image-raspberrypi4v1.4-vX.XX_XX.rootfs.img

and unmount your microSD

sudo umount mnt/fat32
sudo umount mnt/ext4

Additional thoughts and inconsistencies:

I seem to have a mix of zimage and zImage which both seem to be compressed versions of the kernel. They are used because, in many cases, it is faster to decompress the kernel than to read it from slow memory, so they are common on embedded systems. But which is used?

The .swu files can update most recent previous versions so must contain [almost] everything. So is it possible to do it the other way round and make a standard modification to the .swu files, rather than having to do the slow cross compiling for every update?

Anomalies including between Pi 3 implementations and Pi 4

Some of the GPIO pins seem to have been set up on the Pi 3 and not on my Pi 4 images

The RAM on my 2 Gbyte Pi 4 is limited to 1 Gbyte - possibly wrong fixup files or overlays, or perhaps a parameter such as "total_mem=1024" (which is alleged to restrict memory on larger Pis) has found its way into /boot/config.txt.

Summary Status of use of Pi 4

I feel more secure now I know I could get a working Pi 4 system as a fallback if the existing Pi 3B+ fails but I am far from ready to consider changing whilst the existing system is working - "If it ain't bust don't fix it" is always a good principle as is "Keeping layers of contingency".

Before you leave

I would be very pleased if visitors could spare a little time to give us some feedback - it is the only way we know who has visited the site, if it is useful and how we should develop its content and the techniques used. I would be delighted if you could send comments or just let me know you have visited by sending a quick Message.

Copyright © Peter & Pauline Curtis
Content revised: 19th May 2022