Diary of System and Website Development
Part 34 (January 2021 ->)

1 January 2021

Change from NTFS to ext4 partition for DATA on Defiant

This turned out to be even easier than expected, although the elapsed times for copying and synchronising were very long. The stages were:

  1. Log into primary user 1000 (pcurtis)
  2. Check that everything on /media/DATA was synchronised onto the LUKS_G drive, using terminal commands such as date && sudo du -h --max-depth 1 "/media/LUKS_G/My Video" | sort -rh and date && ls -Alh "/media/DATA/My Video" to check the requirements of the synchronisation. The date command is there because these checks are done on a regular basis and saved to give an audit trail.
  3. Edit /etc/fstab as root and comment out the line mounting /media/DATA as an NTFS partition
  4. Use gparted to format the partition as ext4 and then label it as DATA - two stages
  5. Find the UUID of the new partition using lsblk -f and/or Properties in gparted
  6. Edit fstab to mount the ext4 drive at /media/DATA using parameters copied from the /home entry and the UUID found above.
  7. Then, and only then, restart, after which an empty /media/DATA should be present.
  8. Set the user and group of /media/DATA to pcurtis (1000) and adm and the permissions to 775 - see the sketch after this list
  9. Copy all the required folders back from LUKS_G (takes hours)
  10. Check ownership, group and permissions and adjust if required, including the .Trash folders (using the standard pre-sync terminal commands)
  11. Run Unison from the primary user pcurtis (1000) to confirm DATA is identical to LUKS_G.
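Step 8 amounts to something like the following minimal sketch, assuming the prime user is uid 1000 and the shared group is adm as above:

# set ownership recursively and open up the mount point permissions
sudo chown -R 1000:adm /media/DATA
sudo chmod 775 /media/DATA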

Job done!

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda3 during installation
UUID=d0b396c2-ff7f-4087-9ddc-1fb56d4a679a / ext4 errors=remount-ro 0 1
# /home was on /dev/sda4 during installation
UUID=99e95944-eb50-4f43-ad9a-0c37d26911da /home ext4 defaults 0 2
# UUID=2FBF44BB538624C0 /media/DATA ntfs nls=utf8,uid=pcurtis,gid=adm,umask=0000 0 0
UUID=82d55e72-bf32-4b4f-9f84-f0a97ce446b7 /media/DATA ext4 defaults 0 2

# swap was on /dev/sdb3 during installation
UUID=178f94dc-22c5-4978-b299-0dfdc85e9cba none swap sw 0 0

The changes (the commented-out ntfs line and the new ext4 line for /media/DATA) lead to:

peter@defiant:~$ lsblk -o NAME,FSTYPE,UUID,SIZE,MOUNTPOINT
NAME   FSTYPE    UUID                                   SIZE MOUNTPOINT
sda                                                   232.9G 
├─sda1 vfat      06E4-9D00                              500M 
├─sda2 ext4      2b12a18d-5f49-49b0-9e1a-b088fd9d7cc1    41G 
├─sda3 ext4      d0b396c2-ff7f-4087-9ddc-1fb56d4a679a    41G /run/timeshift/back
├─sda4 ext4      99e95944-eb50-4f43-ad9a-0c37d26911da 141.6G /home
└─sda5 crypto_LU ae0c3ea4-28b2-42e7-9804-f69a31567659   8.8G 
sdb                                                   931.5G 
├─sdb1 ntfs      269CF16E9CF138BF                       350M 
├─sdb2 ntfs      8E9CF8789CF85BE1                       118G 
├─sdb3 swap      178f94dc-22c5-4978-b299-0dfdc85e9cba  15.6G [SWAP]
├─sdb4                                                    1K 
├─sdb5 ext4      82d55e72-bf32-4b4f-9f84-f0a97ce446b7 698.3G /media/DATA
└─sdb6 crypto_LU 9b1a5fa8-8342-4174-8c6f-81ad6dadfdfd  99.2G 
  └─_dev_sdb6
       ext4      0b72c485-6f2a-4c86-b9b7-c03f01225edb  99.2G /media/VAULT
peter@defiant:~$ 

6 January 2021

Unintended consequences of changes to DATA partition to ext4

The main consequence has been that the Unison synchronisation databases in .unison have been changed and get rebuilt the next time unison is used. This is a slow activity, taking many hours for each synchronisation to a different machine or backup disk, as the whole ~600 Gbytes in use has to be read. At 60 MB/s that is 10,000 seconds, or about 3 hours, which is optimistic for a hard drive - allow double that.

This has led me to run two users simultaneously. I have always been cautious about using the Switch User option rather than logging out of one user before logging into a different user, but it has worked well when rebuilding these databases on Gemini, as they could be set running by the primary user and then I could switch to my own user name. If you plan to switch users a lot you really need more than the 4 Gbytes of memory on Gemini, as the swap space was almost fully used for the activity, as well as all the cache and buffer space. Once the databases are built, even the full synchronisations take only a couple of minutes.

This did mean that the monthly backup took far longer than expected to synchronise between machines and backup drives, as I added an extra synchronisation between drives for completeness. It also needed a lot more checking of changes, as I had done some sorting and rationalisation of the information stored on the DATA drive and moved it around, so folders and data were deleted in some places and added in others.

Would I have made the change if I had realised? Probably yes, as ext4 is a better, more robust file system than ntfs under Linux, and I have found that big transfers under ntfs seem to slow down to a crawl in nemo. It also makes for greater consistency between my machines.

8 January 2021

Draft for Github Issue now actioned

LUKS encrypted removable USB drives not mounting in expected manner. #343

Reproducibility:

The problem occurs most of the time on all four of my machines running Mint 20 Cinnamon 4.6.7 or Mint 20.1 Cinnamon 4.8.4 Beta. All have 2 or more users, and it occurs for all users (encrypted or otherwise) and with all 5 of my backup drives tested. Further information on a sample machine is at the bottom of the posting.

Expected behaviour

Most of the time the actual behaviour after the passphrase is given:

Other experiments

Literature searches:

=================== End of Draft Uploaded ====================================

Investigation of gnome-keyring

peter@defiant:~$ ps -efl | grep gnome-keyring
1 S peter 1419 1 0 80 0 - 78492 - Jan09 ? 00:00:03 /usr/bin/gnome-keyring-daemon --daemonize --login
0 S peter 103621 103163 0 80 0 - 2259 pipe_w 05:43 pts/0 00:00:00 grep gnome-keyring

peter@defiant:~$ grep -r gnome_keyring /etc/pam.d
/etc/pam.d/lightdm:auth optional pam_gnome_keyring.so
/etc/pam.d/lightdm:session optional pam_gnome_keyring.so auto_start
/etc/pam.d/common-password:password optional pam_gnome_keyring.so
/etc/pam.d/cinnamon-screensaver:auth optional pam_gnome_keyring.so
/etc/pam.d/lightdm-greeter:auth optional pam_gnome_keyring.so
/etc/pam.d/lightdm-greeter:session optional pam_gnome_keyring.so auto_start
peter@defiant:~$

peter@defiant:~$ cat /var/log/auth.log | grep gnome-key
peter@defiant:~$

1 February 2021

Monthly Activity Schedule and Checklist [WIP]

Whilst finalising my writing of Grab Bag I decided that I should add a checklist of monthly activities - mainly backing up but also various maintenance activities and checks. The idea was to test and validate the overall procedures at the start of December. It was intended to have sufficient background and supporting information to enable a normal user to carry out the backing up etc. to the level required for the procedures in the Linux Grab Bag page to be used to rebuild a machine when disaster strikes. I found that not to be as easy as I expected, as there were a number of quirks (aka bugs) and potential problems that really needed to be addressed before a satisfactory and foolproof procedure and checklist could be finalised. This led to considerable background work, including developing some scripts for the routine activities, and this section took several additional months to develop before it could be incorporated. In the meantime it was developed in parts 33 and 34 of the Diary.

I went back to basics and looked at what counted as critical. I worked much of my life in the Space Game on satellites, where reliability was paramount; one component in many of the reviews was called a FMECA (Failure Mode Effects and Criticality Analysis), and I used a similar approach to look at the overall system of back-ups and redundancy that was in place, starting with the importance of particular information, including its timeliness. This showed a couple of failure or loss points which would have very serious impacts. It also revealed a false sense of security, as what seemed to be multiple redundancy in many areas was actually quite the opposite, and allowed a single mistake or system error to propagate near instantaneously to all copies.

I am going to look at an example you possibly share. Passwords and passphrases have become much more critical as hacks have increased, so they need to be more complex and difficult to remember and should not be repeated. Use of password managers has become common, and loss of the information in a password manager would certainly count as a critical failure; even a monthly backup could result in loss of access to many things, especially if you routinely change passwords as recommended. A quick check showed we have over 250 passwords in our manager, of which I would guess 200 are current, and a month of changes could easily be over 10. You might think it is not that risky because they are present on several machines, but that is an illusion, as they are linked through the cloud: a mistake on one machine could destroy a key on all machines, and a serious fault could corrupt your database on all machines. This makes the configuration and reliability of one's cloud systems another source of critical single point failures.

So by the end I decided that I should implement the following before coming back to complete a section on a backup schedule:

  1. Back up a number of critical pieces of data on a daily basis, independent of the cloud
  2. Investigate the robustness of pCloud and monitor its performance on a daily basis
  3. Convince myself the procedures for the remainder of the critical information were robust
  4. Understand and work round problems in mounting encrypted drives
  5. Write some scripts to make various backup activities easier and more repeatable

I believe I have now reached that point: all the above have been completed, documented in earlier sections of the diary, and tested and/or are in use.

The following has been transferred from Diary part 33 where it was originally written in late November and has been modified extensively to remove duplication.

Overview

The purpose of this section is to identify all the various areas which need to be backed up on a regular basis and put together a plan and implementation. Most of this has already been covered in

The intention was to generate an extra and explicit monthly activity list which would add routine housekeeping activities to the activities needed to maintain the backups required to easily rebuild a system in the case of a disaster and to keep our various machines synchronised. The idea was to end up with a 'checklist' which could be run monthly; the whole would become another section or appendix in Grab Bag.

One important extra area identified at an early stage was that there are fast-changing pieces of information which are at risk because they are synchronised in the cloud. An error on a single machine could rapidly propagate, and any sense of redundancy from being on multiple machines is illusory. The most important is the data file for Keepass, as that contains the password information for everything. Alongside it is the todo list, not so important but still varying on a daily basis.

Other cloud-based information includes the calendar, address lists and emails. Emails are left on the server so pose less of a problem, and address books and calendars exist in two Google accounts, reducing the risks and making the timescales less urgent. The Thunderbird and Firefox profiles also contain the contacts and calendars as well as bookmarks etc., and are in the home folder, so they are also backed up once a month along with all the user configuration, Desktop etc.

So any plan for backup still has to take into account data changing on a daily basis but synchronised on several machines via the cloud, making it less robust. In addition I have had a number of problems with pCloud Sync.

My solution is to use a layered approach, with a daily local backup on multiple machines within each month before the results are transferred to external drives on a monthly basis. So every day the todo.txt and Keepass2 files are automatically copied to a backup folder with a date appended. They are then pruned so only the most recent daily files are kept, then weekly and monthly ones. The backup folder for these small files lives in the user's home folder, and the folder and all the daily copies are archived monthly as part of the user's home folder.
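A minimal sketch of the daily copy follows; the file locations are illustrative (mine live in Shoebox) and the daily/weekly/monthly pruning, which was developed in part 33, is omitted:

#!/bin/sh
# Copy the fast-changing files to a dated local backup folder - a sketch
BACKUP="$HOME/Backups"
mkdir -p "$BACKUP"
cp "$HOME/Desktop/Shoebox/todo.txt" "$BACKUP/todo_$(date +%Y%m%d).txt"
cp "$HOME/Desktop/Shoebox/keepass.kdbx" "$BACKUP/keepass_$(date +%Y%m%d).kdbx"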

In the end it took a couple of months of development before coming to this final action list; all the underpinning work in developing techniques and writing scripts to make life easier and more predictable is covered in the previous diary part 33.

All the additional software and scripts have been installed on every machine and extensively tested, but not yet written up in the various howto pages.

Monthly Actions List

The following is an action list of Housekeeping Activities that should routinely be run at least monthly.

  1. Check Timeshift is working and there is plenty of space - Timeshift runs in the background but once a month it should be checked and excess manual snapshots pruned. Automatic snapshots are automatically pruned to keep 3 daily, 3 weekly and 3 monthly snapshots. They do not need to be backed up away from the machine - a reinstall is easier.
  2. Check that pCloud is working and AndroidSync, Phonebox and Shoebox are being synced - by checking the dates and/or contents of the daily log files present in those folders.
  3. Check for updates in Applets, Desklets and Themes - now made easy by adding the Spices Update applet.
  4. Check for system updates in Update Manager - apply, reboot the machine, then log back into the normal user (this mounts the user's home folder if encrypted).
  5. Plug in the back-up USB drives, provide the password, and preferably tick the Forget the Password Immediately option rather than the default of Remember the Password until you logout, then keep the drive mounted until you have completely finished. That is best done immediately after the reboot.
  6. Log out, then log in to the prime user pcurtis (id 1000) ready to back up all the other users.
  7. Adjust ownership of DATA and VAULT and any back-up drives in use - this optimises sharing between users during routine use and is essential for synchronisation using Unison between machines and backup drives. There is a script backup_users.sh which should be run as root by sudo sh backup_users.sh in a terminal (there is also often a copy in ~/Desktop/Shoebox/Scripts/).
  8. Create tar archives of the home folders of users - this is done as part of the script backup_users.sh.
  9. The prime user rarely changes, but there is an additional script backup_pcurtis.sh, run as root from any different user, to back up the home folder of the prime user (id 1000) - run it every few months.
  10. Synchronise DATA and VAULT between machines and backup drives using Unison from the admin user id 1000. They need to be synchronised between machines and transferred to the [3] backup drives at least once a month. In practice some synchronisation between machines is needed far more often. This is normally done from gemini, which has more profile files, after it has been backed up itself.
  11. Do not forget to stop and un-mount the backup drives before unplugging them - best done from the Removable Drives applet. Keep one backup drive off site for security and replace the one in the Grab Bag.
  12. Check for new versions of Mint, which come approximately every 6 months; download and add a LiveUSB to the Grab Bag if and when you choose to update.

The above list is more than a simple checklist, as it has been arranged in a very specific order and specifies a number of reboot and login activities which avoid a number of features (a polite term for issues and bugs), making the monthly activities quicker and easier.

Notes:

Scripts used for maintenance and housekeeping

During development of this action list it became clear that use of a few scripts would make life much easier and more predictable. They are covered in detail in an earlier part of the diary, and the action list above depends on their availability. They fall into two classes:

Notes on Implementation [WIP]

There are a number of anomalies in how Mint works which mean that the best way to proceed may not be what one initially thinks is the logical way when dealing with encrypted drives. In particular there are two issues to consider that are taken into account above.

  1. Our backup drives are encrypted with LUKS, and a bug in gnome-keyring means that they are best mounted after a complete reboot, before any changes of user or logouts are made. If not, the Forget the Password Immediately option rather than the default of Remember the Password until you logout must be used.
  2. Remote logins and use of unison to synchronise from another machine to a different user will un-mount VAULT, which is encrypted, as a security measure. This can lose data if a user has open files on VAULT.

Both these issues are understood well enough to work out a procedure to avoid problems. Firstly I am going to divide the machines into two classes:

  1. A 'central machine' which carries out the synchronisations of 'data' to and from all the other machines and the hard backup drives. Although it is often useful to keep a couple of machines in step directly, using a central machine is much easier for a monthly and total synchronisation. This, in my case, is a desktop, less powerful than the other machines but adequate for the job.
  2. A series of other machines, all multi-user, capable of sharing tasks and, if necessary, able to provide a similar environment to a user who would normally use a different machine, whilst on-the-road or in case of a failure.

Initially I had hoped the majority of the monthly backup could be controlled from the central machine without major interruptions to a user on one of the other machines. The issues raised above mean that is only possible for an experienced user prepared to use a few risky workarounds. I am therefore adopting the low-risk and easy to understand procedure above, which avoids rather than works round the problems. The penalty is minimal but does involve a reboot of the 'peripheral' machines before starting if every workaround is to be avoided, and the procedure above has been modified to reflect the change.

Latest Scripts used for maintenance and housekeeping

(Transferred from Diary part 33 for completeness)

During development of the action list it became clear that use of a few scripts would make life much easier and more predictable. They fall into two classes, one of which we have already covered:

It turns out that it is simplest for the user to combine the second pair of activities and backup_users.sh (run as root) currently does both.

The script below has been developed to the point that it completely automates the activities once one has logged into the prime user and has one or more of my backup drives mounted. It now detects the machine name to use in the backup file names, so it is machine independent, and it can easily be edited to add extra drives or change the list.

#!/bin/sh
echo "This script is intended to be run as root from the prime user id 1000 (pcurtis) on $(hostname)"
echo "It expects one of our 6 standard backup drives (D -> I) to have been mounted"
#
echo "Adjusting Ownership and Group of DATA contents"
chown -R 1000:adm /media/DATA
test -d /media/DATA/.Trash-1001 && chown -R 1001:1001 /media/DATA/.Trash-1001
test -d /media/DATA/.Trash-1002 && chown -R 1002:1002 /media/DATA/.Trash-1002
#
echo "Adjusting Ownership and Group of VAULT contents"
test -d /media/VAULT && chown -R 1000:adm /media/VAULT
test -d /media/VAULT/.Trash-1001 && chown -R 1001:1001 /media/VAULT/.Trash-1001
test -d /media/VAULT/.Trash-1002 && chown -R 1002:1002 /media/VAULT/.Trash-1002
#
echo "Adjusting Ownership and Group of any Backup Drives present"
# Now check for most common 2TB backup Drives
test -d /media/LUKS_D && chown -R 1000:adm /media/LUKS_D
test -d /media/SEXT_E && chown -R 1000:adm /media/SEXT_E
test -d /media/SEXT4_F && chown -R 1000:adm /media/SEXT4_F
test -d /media/LUKS_G && chown -R 1000:adm /media/LUKS_G
test -d /media/LUKS_H && chown -R 1000:adm /media/LUKS_H
test -d /media/LUKS_I && chown -R 1000:adm /media/LUKS_I
echo "All Adjustments Complete"
#
echo "Starting Archiving home folders for users peter and pauline to any Backup Drives present"
echo "Be patient, this can take 10 - 40 min"
echo "Note: Ignore any Messages about sockets being ignored - sockets should be ignored!"
#
test -d /media/LUKS_D && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_D/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_D && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_D/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/SEXT_E && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT_E/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/SEXT_E && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/SEXT_E/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/SEXT4_F && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT4_F/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/SEXT4_F && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/SEXT4_F/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/LUKS_G && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_G/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_G && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_G/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/LUKS_H && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_H/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_H && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_H/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
test -d /media/LUKS_I && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_I/backup_$(hostname)_peter_$(date +%Y%m%d).tgz" /home/peter/
test -d /media/LUKS_I && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" -cpPzf "/media/LUKS_I/backup_$(hostname)_pauline_$(date +%Y%m%d).tgz" /home/pauline/
#
echo "Archiving Finished"
echo "List of Archives now present on any backup drives follows, latest at top"
test -d /media/LUKS_D && ls -hst /media/LUKS_D | grep "backup_"
test -d /media/SEXT_E && ls -hst /media/SEXT_E | grep "backup_"
test -d /media/SEXT4_F && ls -hst /media/SEXT4_F | grep "backup_"
test -d /media/LUKS_G && ls -hst /media/LUKS_G | grep "backup_"
test -d /media/LUKS_H && ls -hst /media/LUKS_H | grep "backup_"
test -d /media/LUKS_I && ls -hst /media/LUKS_I | grep "backup_"
#
echo "Summary of Drive Space on Backup Drives"
df -h --output=size,avail,pcent,target | grep 'Avail\|LUKS\|SEXT'
echo "Delete redundant backup archives as required"
exit
#
# 20th January 2021

Notes:

  1. The script sets the ownership to that of the prime user and the group to adm, then corrects the ownership and group of the recycle bins to the current owner to enable trash to work correctly
  2. It adjusts ownership on both DATA and VAULT and creates archives of the home folders of both of the normal users (peter and pauline)
  3. The script has my six most common backup drives hard-wired and will set the ownership etc. on any and all that are mounted, then back up in turn to each
  4. It lists all the backup archives on the drives and the spare space available.

It is currently called backup_users.sh and is in the prime user's (id 1000) home folder; it needs to be made executable and run as root, i.e. by

sudo ./backup_users.sh

You should also back up the prime user, pcurtis (id 1000) in our case, from a different user. The script can be modified to do this as below; it is currently called backup_pcurtis.sh. The setting of permissions is not required, so it is much shorter.

#!/bin/sh
echo "This script is intended to be run as root from any other user and backs up the prime user pcurtis (id 1000) on $(hostname)"
echo "It expects one of our 6 standard backup drives (D -> I) to have been mounted"
#
echo "Starting Archiving home folder for users the prime user pcurtis on $(hostname)"
echo "Be patient, this can take 5 - 20 min"
echo "Note: Ignore any Messages about sockets being ignored - sockets should be ignored!"
#
test -d /media/LUKS_D && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_D/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/SEXT_E && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT_E/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/SEXT4_F && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/SEXT4_F/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/LUKS_G && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_G/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/LUKS_H && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_H/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
test -d /media/LUKS_I && tar --exclude=/home/*/.gvfs --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" --exclude="/home/*/Trash" --exclude="/home/*/PeteA6Camera" -cpPzf "/media/LUKS_I/backup_$(hostname)_pcurtis_$(date +%Y%m%d).tgz" /home/pcurtis/
#
echo "Archiving Finished"
echo "List of Archives now present on any backup drives follows, latest at top"
test -d /media/LUKS_D && ls -hst /media/LUKS_D | grep "backup_"
test -d /media/SEXT_E && ls -hst /media/SEXT_E | grep "backup_"
test -d /media/SEXT4_F && ls -hst /media/SEXT4_F | grep "backup_"
test -d /media/LUKS_G && ls -hst /media/LUKS_G | grep "backup_"
test -d /media/LUKS_H && ls -hst /media/LUKS_H | grep "backup_"
test -d /media/LUKS_I && ls -hst /media/LUKS_I | grep "backup_"
#
echo "Summary of Drive Space on Backup Drives"
df -h --output=size,avail,pcent,target | grep 'Avail\|LUKS\|SEXT'
echo "Delete redundant backup archives as required"
exit
#
# 3 February 2021


5 February 2021

Using Adobe Connect with Linux

The Open University uses Adobe Connect for teaching and interviews. There is no client (app) for Linux, but the specifications say that it ought to work with an HTML client in Chrome or Firefox. Unlike Zoom there is no free version, although the use of the mobile apps appears to give free access to an existing Adobe Connect system.

When we try to join a meeting in either Chrome or Firefox we just get a message saying that Flash needs to be installed. Flash has, of course, now been discontinued.

There are very few posts in the forums involving Linux, but one at https://www.connectusers.com/forums/viewtopic.php?id=25740 gave the following suggestion:

Try adding ?html-view=true (i.e. https://xxxx.adobeconnect.com/room/?html-view=true) to the URL and check again. I had a participant on Linux who ran into the problem as well but was able to call up the room in Firefox when changing the URL.

If there are existing query strings I assume the ? is changed to an & to concatenate it onto the end.

I have tried that on their own examples and it does not solve the problem, possibly because they involve a number of extra queries which invoke extra stages that lose the html-view=true option.

I have, however, had more success with the OU, where I had initial success after being given a 'bare' URL of the form https://ou.adobeconnect.com/rfk1jqcsiuo8/, which worked when changed to https://ou.adobeconnect.com/rfk1jqcsiuo8/?html-view=true.

This gave me hope and I persevered. The way the OU accesses the pages is via a script which hides the URL from sight before connecting, but I could get the URL from a connection from the Android pad and then patch it to have the magic string on the end. That worked even when there were other query strings, provided I changed the ? to an & and added it to the end. This was tried in both Chromium (which I had loaded as part of my tests) and Firefox.

There are still microphone and sound problems to sort out but it is looking more promising.

We used Firefox for a tutorial lasting 2 hours but a serious problem occurred, namely there is a huge memory leak in the Adobe Connect HTML5 client and we had to reload the tab 5 times during the 2 hours. This equated to a leak of circa 6 Gbytes per hour, possibly the largest I have ever seen.

To enable testing to continue I have increased the swap file size five-fold (see below), which can only be a temporary fix. There is nothing specific in the Adobe forums, but such effects have occurred in the past.

7th February 2021

Mounting of LUKS volumes - Bug/Feature of pam_mount also affects Switching Users

I have suffered a problem for a long time: when using unison to synchronise between machines, my LUKS encrypted common partition, which is mounted as VAULT, has been unmounted (closed) when unison is closed. The same goes for any remote login, except when the user accessed remotely and the user on the machine are the same. This is bad news if another user was using VAULT or tries to access it.

I have somewhat belatedly realised this also affects switching between users.

The following is what I previously wrote about the problem:

I finally managed to get sufficient information from the web to understand a little more about how the mechanism (pam_mount) works to mount an encrypted volume when a user logs in remotely or locally. It keeps a running total of the logins in a separate file for each user in /var/run/pam_mount and decrements it when a user logs out. When a count falls to zero the volume is unmounted REGARDLESS of other mounts which are in use with counts of one or greater in their files. One can watch the count incrementing and decrementing as local and remote users log in and out. So one solution is to always keep a user logged in, either locally or remotely, to prevent the count decrementing to zero and automatic unmounting taking place. This is possible, but a remote user could easily be logged out if the remote machine is shut down or other mistakes take place. A local login needs access to the machine and is still open to mistakes. One early thought was to log into the user locally in a terminal then move it out of sight and mind to a different workspace!
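You can inspect the count directly, as pmvarrun keeps it in a file named after the user; a sketch for user pcurtis:

# show the current pam_mount login count
cat /var/run/pam_mount/pcurtis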

The solution I plan to adopt uses a low-level command which forms part of the pam_mount suite. It is called pmvarrun and can be used to increment or decrement the count. If used from the same user it does not even need root privileges. So before I use unison or a remote login to, say, helios as user pcurtis, I do a remote login from wherever using ssh, call pmvarrun to increment the count by 1 for user pcurtis, and exit. The following is the entire terminal activity required:

peter@helios:~$ ssh pcurtis@defiant
pcurtis@defiant's password:

27 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable

Last login: Sat Oct 24 03:54:43 2020
pcurtis@defiant:~$ pmvarrun -u pcurtis -o 1
2
pcurtis@defiant:~$ exit
logout
Connection to defiant closed.
peter@helios:~$

The first time you do an ssh remote login you may be asked to confirm that the 'keys' can be stored. Note how the user and machine change in the prompt.

I can now use unison and remote logins as user pcurtis to machine helios until helios is halted or rebooted. Problem solved!

I tried adding the same line to the end of the .bashrc file in the pcurtis home folder. The .bashrc is run whenever a user opens a terminal and is used for general and personal configuration such as aliases. That works, but it gets called every time a terminal is opened, and I found a better place is .profile, which only gets called at login; the count still keeps increasing, but at a slower rate. You can check the count at any time by:

pmvarrun -u pcurtis -o 0
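For reference, the line added to .profile is just the same increment command; redirecting the output (my own refinement, not essential) keeps the login quiet:

# increment the pam_mount count at login so VAULT stays mounted
pmvarrun -u pcurtis -o 1 > /dev/null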

I got round the problem that I had accidentally closed my VAULT by switching user: an ssh login to the user remounted VAULT, and it was not unmounted when I exited. Fortunately I did not have any files open in VAULT at the time. I now have to think over the consequences, as it is useful to be able to switch users.

My current thinking is to do an ssh login to the user one intends to switch to and use pmvarrun before the switch, as a bodge, but this is not a long-term solution. This bug seems to rule out use of Switch User if a pam-mounted folder is in use.

10th February 2021

Increasing Swap file size

We have been having memory leak problems with Adobe Connect, which was using approximately an additional 6 Gbytes/hour. As a temporary fix whilst investigating, I looked at the existing swap file size and found it was smaller than I would have expected on gemini. I had allowed the Mint installer a free hand and it had allocated a 2 Gbyte file, whilst I would normally have used a partition of at least twice the memory size.

There is a good tutorial on swap at https://itsfoss.com/create-swap-file-linux/. I have however used dd to create swap files, as fallocate must be avoided on ext4 file systems for creating swap since it potentially creates sparse files - see https://askubuntu.com/questions/1017309/fallocate-vs-dd-for-swapfile/. dd can also be used to append to an existing file to increase its size.

I decided the safe way to proceed was to create a new swap file of 8 Gbytes called /swapfile8G, then use it to replace the existing file by renaming when everything was set up. Note that /etc/fstab only contains a filename and path rather than a UUID, as would be the case with a swap partition, so this is valid. The procedure was:

peter@gemini:~$ swapon --show
NAME TYPE SIZE USED PRIO
/swapfile file 2G 3M -2
peter@gemini:~$ sudo swapoff /swapfile
[sudo] password for peter:
peter@gemini:~$ swapon --show
peter@gemini:~$ sudo dd if=/dev/zero of=/swapfile8G count=8 bs=1G
8+0 records in
8+0 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 22.8708 s, 376 MB/s
peter@gemini:~$ sudo mkswap /swapfile8G
Setting up swapspace version 1, size = 8 GiB (8589930496 bytes)
no label, UUID=cbac2848-15d8-4821-bb8d-e972bbecf7e0
peter@gemini:~$ sudo chmod 0600 /swapfile8G
#
# Renamed /swapfile to /swappfile2G and /swapfile8G to /swapfile in nemo via Open as Root
#
peter@gemini:~$ sudo swapon /swapfile
peter@gemini:~$ swapon --show
NAME TYPE SIZE USED PRIO
/swapfile file 8G 0B -2
peter@gemini:~$

I have left the original swap so I can change back once this crisis is over!
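For reference, the swap file entry in /etc/fstab is just the path and is typically of this standard form, which is why the rename trick is valid:

/swapfile none swap sw 0 0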

The disadvantage of having a swap file is that it takes up space in the root partition and is then saved by Timeshift - a double whammy - and it leaves insufficient space to make room for a dual install for Mint major version upgrades.

So I finally reduced back to a 4 Gbyte swap file on gemini and deleted all the other sizes once the initial crisis was over. (13 Feb 2021)

11 February 2021

Adobe Connect - Further testing and experiences

There is a version checker at https://helpx.adobe.com/adobe-connect/connect-downloads-updates.html which showed the OU was due to upgrade to version 11.2 on 20th February, so it was still using 11 when we were using it.

Microphone, Speaker and video setup

The Adobe Connect pre-meeting test checks your computer and network connections, and helps you troubleshoot connection problems before a meeting begins. You can access the pre-meeting test at https://onlineevents.adobeconnect.com/common/help/en/support/meeting_test.htm or, for an end to end test, replace onlineevents with your own identifier, e.g. https://ou.adobeconnect.com/common/help/en/support/meeting_test.htm

These tests showed that the microphone and speakers had to be selected both in Sound Settings and in the Adobe Connect set-up. The microphone level could only be changed in the Sound Settings and, unlike Zoom, did not have an automatic adjustment. In the end we chose the microphone in our webcam, as it had the best background level on Gemini, but used our Sony Bluetooth headset for output. The headset should have a mic but it does not seem to work (or we have never found how to enable it!).

12 February 2021

Prevo X 12 TWS Earpods from 7DayShop

These X 12 TWS earpods are similar to Apple AirPods in appearance and functionality and come with obscure instructions. Looking on the internet it seems that there are many versions of these with different charging arrangements. 7DayShop sell two versions from Prevo.

The original AirPods Pro used the Apple H1 chip, a custom-made chip. The copycat versions of AirPods often use the Qualcomm TWS headphone chip QCC scheme which is also a noise reduction chip. The designation X 12 TWS is used with many implementations and the internal firmware can be upgraded so the functionality may differ.

The Gearbest site seems to have useful information; an introduction point for the similar i12 TWS is https://www.gearbest.com/blog/how-to/i12-tws-operation-instruction-how-to-use-the-i12-tws-earbuds-7725 and another set of information is at https://www.thephonetalks.com/i12-tws-manual/

Firstly one must realise that the earbuds are touch sensitive rather than having a physical button; the sensitive area is marked. Secondly, what happens with the taps is dependent on the software, and Linux and our Android devices respond differently to some of the codes. The Android devices seem to be closest to what is in the various instructions in the box and those I have found on the internet. The earbuds, it seems, can be operated independently as well as together and have at least one microphone, but so far I have not got it to operate, although there is some indication that one is expected under Android, as the BT settings have switches for sound, microphone and phone. The pictures below show the main functions which my Android pad responds to for sound control.

In addition, various sets of instructions indicate there is a 'phone' mode where a single touch can answer, reject or end a call depending on which earbud is touched.

Longer touches also have various functions, which can include activating a voice assistant (~3 secs) and turning on and off (~2 and 5 secs).

Pairing seems to be largely automatic. They pair with each other if removed from the charger together, and with existing pairings when turned on. I found that with Linux I had to frequently remove and recreate the pairing, which is a feature of my other BT headsets and some devices too.

The X 12 TWS is capable of reporting its battery level under Android.

Instructions for [i12] and x12 TWS:

Different versions of firmware may change the actions slightly between manufacturers, even when using the same chips, and the actual actions depend on the software of, for example, the music player. Rhythmbox under Linux does not control well.

Warning: multiple taps may change settings you do not want - 4, 5 or 6 taps, depending on the model, may change the language!

Using X12 TWS for Video conferencing

I have not found a way to use the microphone properly under Linux. There are a number of features of the way BT profiles have historically been implemented which need fudging to use the [low quality mono] microphone intended for phone calls in parallel with high quality audio output; this is done in phones and some desktop systems but not in Linux. See, for example, https://askubuntu.com/questions/354383/headphones-microphone-is-not-working and https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/508522 for some background.

Practical Experience with X 12 TWS

The earpod is fairly easy to pair both under Linux and Android and automatically reconnects when extracted from the charging box under Android. In both cases it can easily be selected for listening to music, and the audio quality is good enough to make listening to classical music a pleasure; it may not be the quality of my Sony headphones but it is much less intrusive. Volume is easy to adjust once one has had some practice to determine the sensitivity and the touch sensitive area.

The concept is however designed round use with phones, with easy switching from music to enable one to answer a call, hang up and have the music continue, which it does well on my phone. It even reads out the number which is calling!

The X12 does not seem to have good control over Rhythmbox; stop/start and track control is not as good as with Android, although volume control works fine.

17th February 2021

Boosting Microphone Levels

After much experimenting with the X12 and my other microphones on the Sony DB-BTN200 headphones, I started to look for ways to boost the microphone level. In the past I used alsamixer, which is installed by default, but it made no difference, so I installed pavucontrol, which is the 'gold standard' control for PulseAudio as used by Mint. It enabled me to boost the level from my existing webcam and the Sony headphones enough to make them much more sensitive in general, and also to adjust for specific programs, which is just what I required.
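pavucontrol is in the standard repositories:

sudo apt install pavucontrol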

20 February 2021

Changes to GitHub Authorisation

GitHub now requires that one uses a Personal Access Token (PAT) to access it through the command line (or with the API). This is in addition to the existing username/password combination, which is still required to log into GitHub on the web. The PAT is 40 hex digits long so is very secure. I suspect that two factor authentication will be required soon to go with the username/password - it is currently only recommended.

It appears you can have several PATs with different authorisation levels and, as a security precaution, GitHub automatically removes personal access tokens that haven't been used in a year. See https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token but note the example token has far fewer options than are now available and required - see my example below.

Creating a token

  1. Log in as usual to GitHub on the web
  2. In the upper-right corner of any page, click your profile photo, then click Settings.
  3. In the left sidebar, click Developer settings.
  4. In the left sidebar, click Personal access tokens.
  5. Click Generate new token.
  6. Give your token a descriptive name.
  7. Select the scopes, or permissions, you'd like to grant this token. To use your token to access repositories from the command line, select repo and workflow.
  8. Click Generate token.
  9. Click to copy the token to your clipboard. For security reasons, after you navigate off the page, you will not be able to see the token again.

Once you have a token, you enter it instead of your password when performing Git operations over HTTPS.

NOTE: Personal access tokens can only be used for HTTPS Git operations. If your repository uses an SSH remote URL, you will need to switch the remote from SSH to HTTPS.
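Switching is straightforward; a sketch with a hypothetical user and repository name:

# check the current remote, then point it at the HTTPS URL
git remote -v
git remote set-url origin https://github.com/username/repository.git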

The PAT can be cached in the same way as the password it replaces, and once created it can be edited on GitHub, even whilst cached, if you have missed a scope you need. I missed workflow and had to add it, having been warned when I tried my first push.

What I currently have for my PAT, called Github Access, is:

6 March 2021

Adding Extra Fonts to Linux so they are accessible from Web browsers, LibreOffice and programs running under Wine.

This is a draft update of a section of common text found in several places, including ubuntu.htm and ouopen.htm, which needs updating to take account of the latest versions of Windows, Ubuntu and Mint; it is now part of a separate and much more comprehensive page - Fonts in Linux and on Web Sites.

Mint (and Ubuntu) offer many extra fonts which can be installed using the Synaptic package manager, including the Microsoft Core Fonts, which are not installed as standard as they are not open source. These fonts are used widely on the web and in older documents and include well known fonts such as Arial and Times New Roman. They can be installed using the ttf-mscorefonts-installer package (use the command line, as you need to accept a licence agreement).
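The installation is a single command, with the licence accepted in the terminal:

sudo apt install ttf-mscorefonts-installer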

I also wanted to install some extra fonts, namely the Nadianne TrueType font I use for invitations, wine labels etc., and the various Wingdings fonts which provide ticks and other symbols used by Pauline for marking Open University eTMA scripts. Nadianne is not a standard Windows font and I think originally came with a printer, but the others are common to Windows and the defaults in Microsoft Office, hence the need to import them for marking OU scripts.

This brings me to a major issue in editing shared files originally created in Microsoft Office. LibreOffice will do a good job of substituting fonts in the document with open source equivalents, but a change of font will change the size and spacing of the text, so the layout will be changed, which may be unacceptable when the documents have to be converted back and returned. The worst problems seem to occur with drawings and mixed drawings and text; we have one example where equations used some overlaid drawings and the meaning was completely changed due to text slippage under brackets - that was obvious, although the cause was not initially. Worse still, text boxes may no longer be large enough to contain the text, and the ends of text strings can be lost, again changing the meaning completely in several cases we have seen. Combined with a double conversion from .docx to .doc and to LibreOffice, which is used by many tutors, including ourselves, for marking, one is no longer sure what one is seeing! This is not a satisfactory situation; one can just imagine the effects in complex technical commercial documents and agreements, even if everybody is using Windows - thank you Bill for yet another setback to the progress of mankind. This means that one needs to add the common fonts used in Office, such as Calibri, which is the default font (starting with Office 2007) for Word, Powerpoint, Excel and Outlook.

There should be no licence issues in using them on a dual-booted machine with a valid copy of Windows/Office, or for viewing or editing existing documents created in Office. If you have doubts, or wish to use them in documents you create, a licence can be purchased from Microsoft. You can find the required fonts in C:\Windows\Fonts. In Windows this is a 'virtual' folder and contains links which Linux does not understand, so you need to copy/paste the fonts you need under Windows to a new folder for future use in Linux.

Having obtained any extra fonts you need, they have to be installed in Linux. There is useful information at https://askubuntu.com/questions/3697/how-do-i-install-fonts but to summarise:

The fonts which are available to all users are stored in folders under /usr/share/fonts, with TrueType fonts in the subfolder /usr/share/fonts/truetype, so type in a terminal:

nemo admin:////usr/share/fonts/truetype

I have created a new folder for my extra fonts, which I call ttf-extra or ttf-all-extra, by a right click -> Create Folder etc.

Drag the extra fonts into the ttf-extra folder from where they were stored.

The folder and the files within it MUST have the permissions and owner set correctly to allow everyone to access them, otherwise you may get some most peculiar errors in Firefox and some other programs. It should be OK if you use the procedure above, but check, just in case, that they are the same as all the other folders and files.
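The same can be done in a terminal; a sketch assuming the new fonts were staged in a hypothetical ~/Downloads/newfonts folder:

sudo mkdir -p /usr/share/fonts/truetype/ttf-extra
sudo cp ~/Downloads/newfonts/*.ttf /usr/share/fonts/truetype/ttf-extra/
# match the permissions of the existing font folders and files
sudo chmod 755 /usr/share/fonts/truetype/ttf-extra
sudo chmod 644 /usr/share/fonts/truetype/ttf-extra/*.ttf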

If the fonts are only required by a single user then creating, if not already present, a folder .fonts in your home directory and copying the fonts into it may be a better solution. It has advantages, as the fonts are retained through a system reinstall. That location is however deprecated, and I have changed to ~/.local/share/fonts, which is now used by Mint and can hold individual fonts or folders of fonts.
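The per-user equivalent needs no root at all; again ~/Downloads/newfonts is a hypothetical staging folder:

mkdir -p ~/.local/share/fonts
cp ~/Downloads/newfonts/*.ttf ~/.local/share/fonts/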

Then alert Mint/Ubuntu that you have added the fonts by typing the following in a terminal:

sudo fc-cache -f -v

This rebuilds the font cache. You may also need to close and reopen programs needing the font, or log out of and back into a user; a reboot is the nuclear option. Recent experience shows that Mint seems to detect changes automatically.

You can check the fonts present with fc-list; the following gives an easy to read listing:

fc-list -f '%{file}\n' | sort

Avoiding Duplicate Font files and use of Updated Font files.

The above procedure has ignored the issue of duplicated fonts. If you just pick up or install extra fonts without thought you will end up with duplicate fonts; this does not seem to crash anything, but I can find nothing in the documentation covering how the font used is chosen.

I, and most people, start by installing and licensing the msttcorefonts font set using the ttf-mscorefonts-installer package to add a number of important but not open source fonts for rendering web pages:

Andale Mono
Arial Black
Arial (Bold, Italic, Bold Italic)
Comic Sans MS (Bold)
Courier New (Bold, Italic, Bold Italic)
Georgia (Bold, Italic, Bold Italic)
Impact
Times New Roman (Bold, Italic, Bold Italic)
Trebuchet (Bold, Italic, Bold Italic)
Verdana (Bold, Italic, Bold Italic)
Webdings

The font files provided date back to 1998 and are missing modern hinting instructions and the full character sets, but they render adequately for web use and are small. However, each version of Windows (and MS Office) has improved versions of many of the same fonts, but just adding the font files indiscriminately from, say, C:\Windows\Fonts will end up with duplicates of Arial, Times New Roman, Verdana and many of the others.

Current Situation (March 2021)

My current solution is to have two sets of all the additional font files I need over and above a basic install - the first has the extra font files needed when the msttcorefonts are installed and is a folder called ttf-extra; the second is self contained, is used without msttcorefonts, and is called ttf-all-extra. My machines may use either approach depending on their requirements.

The set in ttf-extra comprises the extra fonts I need beyond those provided by msttcorefonts, such as Nadianne, the extra files for fonts such as Webdings, and those used as defaults in MS Office such as Calibri. The ttf-extra approach has been in use for many years - mine contains ~53 TrueType font files - and has advantages on Linux-only machines, as you explicitly accept the licence agreement. The font files in ttf-extra are more compact, as they date from the Windows XP / Office 2003 days, and should render faster though less accurately than more recent versions.

The self contained set in ttf-all-extra is however more up-to-date. It starts with the files in the msttcorefonts set, to which I added the files in ttf-extra containing the extra font files as above. I then updated this complete set of font files with the latest versions from Windows 10 Pro, removed any symbolic links and converted all the file names to lower case. This gives me a folder with ~90 TrueType font files. I add the ttf-all-extra folder (which is backed up in My Programs) to either /usr/share/fonts/truetype or, preferably, to ~/.local/share/fonts, depending on whether I want to make it available to all users or just a single user. The msttcorefonts folder is not needed and, if present, should be removed from /usr/share/fonts/truetype to avoid duplication with the fonts in ttf-all-extra.

To Do: Add the logic behind the choice of per-user installation for fonts.

10th March 2021

Commands to tidy up files

Convert files in the current folder to lower case:

for i in $(ls | grep '[A-Z]'); do mv -i "$i" "$(echo "$i" | tr 'A-Z' 'a-z')"; done

From https://linuxconfig.org/rename-all-files-from-uppercase-to-lowercase-characters

Copying files, converting links to the linked files

Use cp with the -L, --dereference option to "always follow symbolic links in SOURCE"

cp -rL ttf-all-extras ttf--unlinked

From the manual page.

22 March 2021

More on substitution fonts

NOTE: This section is now part of a separate and much more comprehensive page - Fonts in Linux and on Web Sites

Amongst the many rabbit holes I have been down whilst looking at font issues I have found a lot of interesting information. Perhaps the biggest issue every different system has is what to do about working with proprietary fonts used in 'documents' in the widest sense, including web sites. Even the most common fonts used are actually proprietary, examples being Times New Roman, Arial and Courier. These were the defaults in the early days of Windows, and their use was licensed through Microsoft as part of Windows. This did not matter in the early days, and nobody noticed when nearly all systems were Windows based, but the situation is very different now, where Apple and Google are big players, mobile devices dominate the commercial market place, and Open Source solutions need to be addressed. This is where font substitution comes in.


Carlito: Google's Carlito font (google-crosextrafonts-carlito) is a modern, sans-serif font, metric-compatible with Microsoft's Calibri font in regular, bold, italic, and bold italic. It has the same character coverage as Calibri. Carlito is the default Calibri replacement in the LibreOffice suite. It can be installed, complete with the configuration to make it a substitution for Calibri, from the package fonts-crosextra-carlito.

Caladea: Google's Caladea font (google-crosextrafonts-caladea) is a modern, friendly serif font, metric-compatible with Microsoft's Cambria font in regular, bold, italic, and bold italic. It has the same character coverage as Cambria. Caladea is the default Cambria replacement in the LibreOffice suite. It can be installed, complete with the configuration to make it a substitution for Cambria, from the package fonts-crosextra-caladea.
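Both are in the standard repositories and bring the substitution configuration with them:

sudo apt install fonts-crosextra-carlito fonts-crosextra-caladea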


Fixing problems with LibreOffice displaying inappropriate Bitmapped versions of fonts such as Calibri

Let's first have a look at the magnitude of the problem: the following piece of text shows what happens when the display reverts to a bitmap versus the proper TrueType rendering using outlines. The problem does not show for all sizes of font, zoom and screen resolution; in my case it is obvious displaying Calibri at 6 and 11.5 point with no zoom in LibreOffice.

This is a problem mentioned on the Ubuntu and other forums. It occurs because certain font sizes and screen resolutions cause the rendering engine to revert to using the embedded bitmap version, which scales badly. It is not obvious why this occurs, and even less obvious why use of an embedded version is allowed when bitmapped fonts are rejected by default. Fortunately the problem is known, and the way to reject embedded bitmaps is simple, only involving adding a few lines of configuration. The simplest way is to add the code for rejection of embedded bitmaps to the code which is already present to reject simple bitmapped fonts.

The file in question is /etc/fonts/conf.avail/70-no-bitmaps.conf (which is actually symlinked into /etc/fonts/conf.d). This symlinking is a simple way to allow the dozens of font configuration options to be enabled and disabled; the enabled ones are automatically brought into /etc/fonts/fonts.conf, which should never be edited directly.
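This means an option can be disabled by removing its symlink from conf.d, and re-enabled by recreating it, leaving the master copy in conf.avail untouched; for example:

# disable the no-bitmaps option
sudo rm /etc/fonts/conf.d/70-no-bitmaps.conf
# re-enable it
sudo ln -s /etc/fonts/conf.avail/70-no-bitmaps.conf /etc/fonts/conf.d/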

So /etc/fonts/conf.avail/70-no-bitmaps.conf has six lines added to become:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <its:rules xmlns:its="http://www.w3.org/2005/11/its" version="1.0">
    <its:translateRule translate="no" selector="/fontconfig/*[not(self::description)]"/>
  </its:rules>
  <description>Reject bitmap fonts</description>
<!-- Reject bitmap fonts -->
  <selectfont>
    <rejectfont>
      <pattern>
        <patelt name="scalable"><bool>false</bool></patelt>
      </pattern>
    </rejectfont>
  </selectfont>
<!-- Also reject embedded bitmap fonts -->
  <match target="font">
    <edit name="embeddedbitmap" mode="assign">
      <bool>false</bool>
    </edit>
  </match>

</fontconfig>

You may need to regenerate the font cache by:

sudo dpkg-reconfigure fontconfig

You will certainly need to close and reopen programs such as LibreOffice Writer to see the changes.

A diagnostic trick is to open an editor such as mousepad like this:

FC_DEBUG=1024 mousepad

This will list all the files that are being scanned for font configuration as it opens; there will be dozens. For some reason it does not work with xed! The following is the result when passed through grep.

peter@defiant:~$ FC_DEBUG=1024 mousepad | grep -i bitmaps
Loading config file from /etc/fonts/conf.d/70-no-bitmaps.conf
Loading config file from /etc/fonts/conf.d/70-no-bitmaps.conf done
Scanning config file from /etc/fonts/conf.avail/70-force-bitmaps.conf
Scanning config file from /etc/fonts/conf.avail/70-force-bitmaps.conf done
Scanning config file from /etc/fonts/conf.avail/70-yes-bitmaps.conf
Scanning config file from /etc/fonts/conf.avail/70-yes-bitmaps.conf done
peter@defiant:~$

The above solution applies to all users. Instead, the changes can be picked up from ~/.config/fontconfig/conf.d/70-no-embedded.conf for a single user; my file is just:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<!-- Reject embedded bitmap fonts -->
  <match target="font">
    <edit name="embeddedbitmap" mode="assign">
      <bool>false</bool>
    </edit>
  </match>
</fontconfig>

This is a more consistent solution if you are adding fonts on a per-user basis.

Changing the default fallback substitution fonts in Linux

See http://eosrei.net/articles/2016/02/changing-default-fallback-subsitution-fonts-linux

LibreOffice Font substitution table

I have not used this, but it is included for reference as it looks like a useful feature.

https://blog.documentfoundation.org/blog/2020/09/08/libreoffice-tt-replacing-microsoft-fonts/

27 March 2021

Google Fonts [WIP]

NOTE: This section is now part of a separate and much more comprehensive page - Fonts in Linux and on Web Sites

Google provide a huge number of fonts licensed under the Open Font License which one can use freely in one's products and projects - print or digital, commercial or otherwise. The only restriction seems to be that you can't sell the fonts on their own. Ubuntu/Mint packages some of these fonts as open source replacements for the Microsoft 'C' font series, which provide most of the Windows and Office default fonts.

Any of these fonts can be very easily added to Mint for use in LibreOffice or more widely. First go to https://fonts.google.com where you will find ~1000 fonts to choose from! There are many easy to use selection tools, and it seems sensible to start by sorting by popularity and reducing the selection by categories to serif, sans serif, monospace etc.
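Installing a downloaded family then follows the per-user pattern described above; a sketch assuming the Roboto family zip (an illustrative choice) has been downloaded from the site:

mkdir -p ~/.local/share/fonts
unzip ~/Downloads/Roboto.zip -d ~/.local/share/fonts/roboto
fc-cache -f -v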
