Sharing, Networking, Backup, Synchronisation and Encryption under Linux Mint and Ubuntu

This page has recently received its first major update for many years, in particular in the Backing Up section, where a new philosophy is laid out and the section is completely restructured. It also takes into account the addition of TimeShift to the standard Mint installs. Overall the emphasis is much more on Linux Mint, in particular the LTS versions (19 and 20).

There is more emphasis on multiple-boot and multiple-user systems and, in particular, on the use of encryption of home folders, automounted encrypted drives, and backup USB drives using LUKS.

The coverage of the Cloud for Backup and Synchronisation has been extended with a new section on pCloud.



This page has been extended from my original page covering standard backing up and now covers the prerequisites of Mounting Drives, File Sharing between Computers running both Linux and Windows, secure access to remote machines and Synchronisation. It also covers other techniques in the armoury for preservation of data, namely separation of system, user data and shared data as well as encryption. When this page was originally written we were using Ubuntu, now we have switched to Mint, which uses much of the infrastructure of Ubuntu, which again is built on Debian. We believe Mint has a better desktop in Cinnamon and is more user friendly. Most of what is written here is compatible with Mint and Ubuntu although some of the facilities in the Nemo file manager are accessed differently or do not exist in Nautilus and the examples will use Xed as a text editor. Mint 19 is based on Ubuntu Bionic 18.04 LTS version and there are significant improvements such as TimeShift but also a number of differences. If you are using earlier Mint versions or Ubuntu versions less than 18.04 there is an older version of the page available.

The ability to back up has always been considered essential to any system. The amount of data involved has increased dramatically from the days when we started this web site 20 years ago - data was preserved on floppy disks and our internal drive was 850 Mbytes. Now our latest laptop has a 2 Tbyte hard drive and 250 Gbyte SSD and backups use 2 Tbyte external USB drives, but we still have important data from 20 years ago accessible and even emails from 1998. 18 years of digital pictures totalling 135,000 take up 200 Gbytes and our music collection another 70 Gbytes. We will not even talk about video!

Backing up takes many forms now that the norm is for machines to be networked. This article covers many ways of achieving redundancy and preserving data. It covers sharing drives between different operating systems on multi-booted computers, networking of both Linux and Windows machines, techniques to separate system and data areas, conventional backing up to internal and external drives and, most important in this time of mobility, synchronisation both between machines and to external hard drives.

File Systems and Sharing

Preliminaries - Background on the Linux File System

I am not going to extol the virtues of Linux, in particular the Debian based distributions, here, but I need to explain a little about the ways that file systems and disks differ in Windows and Linux before talking about backing up. In Windows, physical disk drives and the 'virtual' partitions they are divided up into show up in the top level of the file system as drives with names such as C: - a floppy disk is almost always A: and the system drive is C: . This sort of division does not appear at all in Linux, where you have a single file system starting from root ( / ), which is where the operating system is located and booted. Any additional disk drives which are present, or are 'mounted' at a later time such as a USB disk drive, will by default in most distributions appear in the file tree in /media, and the naming will depend on several things, but expect entries such as /media/cdrom , /media/floppy and /media/disklabel. If you create a special partition for a different part of the filesystem, such as home where all the users live, then it can be 'mounted' at /home. In theory you could mount partitions for all sorts of other parts of the file system. If, for example, you add a new disk and choose to mount a partition to just contain your home directory, it is only the addition of a single line in a file, although you need to exactly copy the old contents to the new partition first - I will cover that in detail later in this article.

There is a nearly perfect separation of the kernel, installed programs and users in Linux. The users each have a folder within the home folder which has all the configuration of the programs they use and their data which is specific to them - the configuration is set up the first time they run a program. A user's folder only contains a few tens of kbytes until programs are used, and all the program settings are hidden (hidden files and folders start with a . dot) and by default do not appear in the file browser until you turn them on (Ctrl h). There are usually a number of folders generated by default, including Desktop, Documents, Music, Pictures, Videos, Templates and Public, which are used by programs as their defaults for user data. This makes backing up very easy - an exact copy of the home folder allows the system to be restored after a complete reload of the operating system and programs. Note the word exact, as the 'copy' has to preserve timestamps, symbolic links and the permissions of all the files - permissions are key to the security of Linux, so special archiving utilities are best employed as a normal copy is usually not good enough.
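The difference between a plain copy and a metadata-preserving one is easy to demonstrate; a small sketch (entirely in /tmp, with made-up file names) shows a plain cp losing the timestamp while archive mode keeps it:

```shell
# Demo: plain cp loses the original timestamp, cp -a (archive mode) preserves
# timestamps, permissions and symlinks - file names here are just examples
mkdir -p /tmp/copydemo && cd /tmp/copydemo
touch -d '2001-01-02 03:04:05' original.txt
cp original.txt plain-copy.txt        # timestamp becomes the time of the copy
cp -a original.txt exact-copy.txt     # timestamp (and permissions) preserved
stat -c '%y %n' original.txt plain-copy.txt exact-copy.txt
```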


Permanently Mounting a shared drive in Ubuntu (Advanced)

If we are going to use a shared drive for data then we must ensure that it is permanently mounted. The mounting points in all major flavours of Linux are defined in a file system table in /etc/fstab, and the convention is that the mount points are in /media. We therefore need to modify /etc/fstab to set up the mount points in /media, and we must also create the folders for them using sudo to make them owned by root or the primary user, and set the permissions so they are accessible to all users for read and write.

A standard approach was to open the file browser as root in a terminal by gksudo nemo; however gksudo is no longer available in the Ubuntu 18.04 and higher code base. Instead nemo has the ability to open any folder as root using the right-click menu, and in this mode shows a very visible red banner saying 'Elevated privileges'.

This allows me to use the graphical file browser to create the folders and set permissions by standard navigation and right-click menus for Create Folder and Properties -> Permissions. It is best to give these folders the same names as those assigned by mounting from 'Places', which are derived from the partition label if it is set – see below. Do not continue to use this root file browser after setting up the shared folder, as running anything as root carries many dangers of accidental damage, although the aware reader will realise that the terminal can be avoided in the next stage by also opening fstab from within the root file browser - but do take care!

You will however have to use a terminal for some of the other actions so I now feel the easiest way is to use a terminal and use the following two commands.

sudo mkdir /media/DRIVE_NAME
sudo chmod 777 /media/DRIVE_NAME
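What the 777 means can be checked in octal with stat; a harmless demo, using a folder in /tmp rather than /media so no sudo is needed:

```shell
# Demo of the permissions set above: 777 = read/write/execute for owner, group and others
mkdir -p /tmp/DRIVE_DEMO
chmod 777 /tmp/DRIVE_DEMO
stat -c '%a %n' /tmp/DRIVE_DEMO    # prints: 777 /tmp/DRIVE_DEMO
```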

I have found that fstab does not like the folder name to include any spaces, even if it is enclosed in quotes, so it is best to use short single-word drive names or to join words with underscores. I also seem to recall an early Windows restriction of 13 characters maximum.

It is desirable to back up /etc/fstab and then make the changes using the editor xed. This is done in a terminal by:

sudo cp /etc/fstab /etc/fstab_backup
xed admin:///etc/fstab

Note the special way that xed is used to get root privileges now that gksudo is no longer available. Before proceeding we need some information on the identifiers for the file systems. The UUID can be found by typing blkid in a terminal – typical output looks like:

pcurtis@matrix:~$ blkid
/dev/sda1: LABEL="VISTA" UUID="2E6121A91F3592E4" TYPE="ntfs"
/dev/sda2: LABEL="HP_RECOVERY" UUID="0B9C003E5EA964B2" TYPE="ntfs"
/dev/sda5: LABEL="DATA" UUID="47859B672A5D9CD8" TYPE="ntfs"
/dev/sdb5: UUID="a7c16746-1424-4bf5-980e-1efdc4500454" TYPE="swap"
/dev/sdb6: UUID="432c69bd-105c-454c-9808-b0737cab2ab3" TYPE="ext4"
/dev/sdb7: UUID="a411101c-f5c6-4409-a3a1-5a66de372782" SEC_TYPE="ext2" TYPE="ext3"

The recommended procedure for modifying /etc/fstab is to use the drive's UUID rather than the device's location, i.e. append lines to /etc/fstab looking like:

# /dev/sda5: LABEL="DATA" UUID="47859B672A5D9CD8" TYPE="ntfs"
UUID=47859B672A5D9CD8 /media/DATA ntfs nls=utf8,uid=pcurtis,gid=pcurtis,umask=0000 0 0

Note – this is for an NTFS partition. It provides a read/write mount (umask=0000 gives read/write for owner, group and everyone else). If you want a read-only mount use umask=0222. The uid=pcurtis and gid=pcurtis are optional and define the user and group for the mount; see below for the reason for defining the user. This does not work for ext3 or ext4 partitions, only vfat and ntfs, and for ext3 and ext4 it seems the only way is to set the permissions after every boot in a script.

After modifying /etc/fstab and rebooting, the chosen 'drives' are mounted and appear on the desktop by default as well as in the file browser - they cannot be unmounted without root privileges, which is just what we want.

Ownership and permissions of Windows type Filesystems mounted by fstab

There is one 'feature' of this way of mounting which seems to be totally universal: only root (the owner) can set the time codes, which means that any files or directories that are copied by a user have the time of the copy as their date stamp.

A solution for a single user machine is to find out your user id and mount the partition with the option uid=user-id; then all the files on that partition belong to you - even the newly created ones. This way, when you copy, you keep the original file date. This is important if you have a file synchronisation program such as Unison which checks file creation and modification dates.

# /dev/sda5 - note this is from a vfat (FAT32) filesystem
UUID=706B-4EE3 /media/DATA vfat utf8,uid=yourusername,gid=yourusername,umask=0000 0 0

You must change yourusername to your own user name.

The uid can also be specified numerically and the first user created has user id 1000.

More about UIDs

This brings us to a point you need to understand for the future, which is that user names are a two-stage process. If I set up a system with the initial user peter when I install, that is actually just an 'alias' for the numeric user id 1000 in the Linux operating system. I then set up a second user pauline, who corresponds to 1001. If I have a disaster and reinstall, and this time start with pauline, she is then 1000 and peter is 1001. When I take my carefully backed-up folders and restore them, all the owners are now incorrect, as ownership uses the underlying numeric value - apart, of course, from where the name is hard coded in scripts etc.

You can check all the relevant information in a terminal by use of id:

pcurtis@defiant:~$ id
uid=1000(pcurtis) gid=1000(pcurtis) groups=1000(pcurtis),4(adm),27(sudo),30(dip)

pcurtis@defiant:~$ id pauline
uid=1002(pauline) gid=1002(pauline) groups=1002(pauline),24(cdrom),27(sudo),30(dip)

So when you install on a new machine you should always use the same username and password as on the machine you wish to 'clone' from and set up any further users in the same order so they have the same numeric ids using the Users and Groups utility

In summary: a user ID (UID) is a unique positive integer assigned within the kernel to each user. Each user is identified to the system by its UID; user names are generally used only as an interface for humans, and in the kernel only UIDs are used. User ids are allocated starting at 1000 in Debian and hence in Mint.
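The name-to-number mapping lives in /etc/passwd and can be inspected directly; a short sketch:

```shell
# Every user name in /etc/passwd is an alias for a numeric UID (the third field)
awk -F: '$1 == "root" {print $3}' /etc/passwd                   # root is always 0
# Ordinary (human) users on Debian/Mint are allocated UIDs from 1000 upwards
awk -F: '$3 >= 1000 && $3 < 65534 {print $1, $3}' /etc/passwd
```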

We will return to this when we consider backups in more detail.

Set up file system to mount an ext4 'DATA' partition to allow sharing by all users using groups. (Advanced)

This is still work in progress and is only required for multiple users or for synchronising between machines.

Drives of type ext3 and ext4 mounted by fstab are by default owned by root, and unlike with an ntfs file system you cannot use mount options to set the owner, group and access rights so that other users can access a shared area.

You can largely get round this by making sure that all the users belong to all the other users' groups, if you are prepared to share everything rather than just the DATA drive.

The alternative is to set the group for all files and folders on the DATA drive so it can be accessed by all users. I chose to use the adm group as most users with administrator rights will be members. An alternative is the sudo group - all sudoers would be able to mount anyway, so there is no point in keeping them out.

I have a script on the Helios which is run at the correct time during a reboot which sets all the owners, groups [and permissions] after ensuring the file system has been mounted. This is very fast on a small SSD and works well.

My script file is /usr/bin/shareDATAadm which must have execute permission set and should contain:

#!/bin/sh
# shareDATAadm - sets ownership, group [and permissions] of the /media/DATA partition mount point
# after a delay to allow the mount to complete.
# sleep 10
chown 1000:adm -R /media/DATA
# chmod -R 664 /media/DATA
exit 0

The best way is to use the power of systemd to create a service to run our script waiting until the partition has been mounted. The initial idea came from: https://forum.manjaro.org/t/systemd-services-start-too-soon-need-to-wait-for-hard-disk-to-mount/37363/4

First we need to find out the mount points of the drives, so the service can be set up to wait for the /media/DATA drive by:

systemctl list-unit-files | grep .mount

In my case the output looked like:

$ systemctl list-unit-files | grep .mount
proc-sys-fs-binfmt_misc.automount static
-.mount generated
boot-efi.mount generated
dev-hugepages.mount static
dev-mqueue.mount static
home.mount generated
media-DATA.mount generated
proc-sys-fs-binfmt_misc.mount static
sys-fs-fuse-connections.mount static
sys-kernel-config.mount static
sys-kernel-debug.mount static
clean-mount-point@.service static
systemd-remount-fs.service static
umountfs.service masked
umountnfs.service masked
umountroot.service masked
umount.target static

and the relevant mount point is media-DATA.mount
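The translation from a mount path to a unit name follows systemd's escaping rules (each / becomes -), and can be checked with the systemd-escape tool rather than grepping the unit list:

```shell
# Convert a mount point path into the corresponding systemd mount unit name
systemd-escape -p --suffix=mount /media/DATA    # prints: media-DATA.mount
```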

We can now create our Unit file for the service which will run our script after the /media/DATA partition has been mounted. So, create a new file in /etc/systemd/system/ as sharedata.service and add the following contents:

[Unit]
Description=Runs script to set owner and group for /media/DATA
Requires=media-DATA.mount
After=media-DATA.mount

[Service]
Type=oneshot
ExecStart=/usr/bin/shareDATAadm

[Install]
WantedBy=multi-user.target

Note: You need a #!/bin/sh or #!/bin/bash in the first line of the script.

Enable the service to be started on bootup by

sudo systemctl enable sharedata.service

You can check it is all working by

$ systemctl status sharedata.service
Loaded: loaded (/etc/systemd/system/sharedata.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2018-07-09 01:43:39 BST; 10min ago
Process: 789 ExecStart=/usr/bin/shareDATAadm (code=exited, status=0/SUCCESS)
Main PID: 789 (code=exited, status=0/SUCCESS)
Jul 09 01:43:01 lafite systemd[1]: Starting Runs script to set owner and group for /media/DATA...
Jul 09 01:43:39 lafite systemd[1]: Started Runs script to set owner and group for /media/DATA.

Note: This procedure of using a script works very well on an SSD, as everything is very fast, but it can take quite a few seconds on a large hard drive with tens of thousands of files - see above. It also requires a reboot to trigger it when changing users or before a synchronisation using Unison.

You can easily disable the service from running every boot by

sudo systemctl disable sharedata.service

In practice, making sure every user is in every other user's groups allows almost everything to work other than some uses of Unison, so with large hard drives it may be better to set the owner and group in a terminal, or by running the above script, just before synchronising with Unison.

Afterword: The use of a systemd service is a very easy way to start any program at startup which requires root access and applies to all users, and it has wide applicability.

Ownership and Permissions of USB Drives with Unix filesystems (ext3, ext4, jfs, xfs etc)

The mount behaviour of Unix file systems such as ext*, jfs and xfs is different from Windows file systems such as NTFS and FAT32. When a USB drive with a 'Windows' type file system, which has no internal knowledge of each folder's and file's permissions, is plugged in, it is auto-mounted with the owner and group corresponding to the user who mounted it. Unix filesystems have the permissions built in and don't (and shouldn't) get their ownership and permissions changed simply by mounting them. If you want the ownership/permissions to be different, then you change them, and the changes persist across unplug/replugs.

Defined Mounting of USB Drives with Non Unix filesystems or mounting early in the boot process using fstab

It is sometimes necessary to mount a USB drive early in the boot process, but this procedure should only be used for drives which are permanently connected, as the boot process will halt and need intervention if the drive is not present. Even so, it does allow a mount with defined ownership and permissions at an early enough stage for programs started automatically and run in the background as daemons to use them.
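If there is any chance the drive will be absent at boot, the standard mount options nofail and x-systemd.device-timeout avoid the halt described above; a hypothetical fstab line (the UUID, mount point and user name are placeholders, not from a real system):

```shell
# Hypothetical fstab entry for a USB drive that may not always be present:
# 'nofail' lets the boot continue without it, and the device timeout limits the wait
# UUID=XXXX-XXXX /media/USBDATA vfat utf8,uid=username,gid=username,umask=0000,nofail,x-systemd.device-timeout=10 0 0
```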

Change auto-mount point for USB drives back to /media

Ubuntu (and therefore Mint) have changed the mount points for USB drives from /media/USB_DRIVE_NAME to /media/USERNAME/USB_DRIVE_NAME. One can change the behaviour by using a udev feature in Ubuntu 13.04 and higher based distributions (needs udisks version 2.0.91 or higher). This has been tested with the latest Mint 19.

Create and edit a new file /etc/udev/rules.d/99-udisks2.rules

sudo xed /etc/udev/rules.d/99-udisks2.rules

and cut and paste into the file:

ENV{ID_FS_USAGE}=="filesystem|other|crypto", ENV{UDISKS_FILESYSTEM_SHARED}="1"

then activate the new udev rule by restarting or by

sudo udevadm control --reload

When the drives are now unplugged and plugged back in they will mount at /media/USB_DRIVE_NAME.

Accessing Files on Windows Machines over the Windows Network

I originally considered this would be an important step, but the order of difficulty was not what I expected - I could access files on Windows machines over the network immediately, although the creation date is only displayed correctly for files on NTFS partitions. I have not yet found a workaround, or any reference to the problem, despite extensive web searches. You will usually be asked for a password the first time.

Accessing Files on your Linux machine from Windows machines (and other Linux machines on your local network).

This is rarely required with Mint as Nemo has the ability to mount a remote file system built in. It can be accessed from the File menu as Connect to Server. You should use SSH for security, and the server should be hostname.local. You will often get a series of checks and questions, but it will work in the end!

Warning - the Samba GUI does not seem to work under Mint 19

If you wish to use Samba to share with Windows see the previous version of the page for Ubuntu Xenial LTS

Printing over the Network

This was again much easier than I expected. Use System -> Administration -> Printing -> New Printer -> Network Printer: Windows Printer, which takes one on to identify the printer on the network, with a selection procedure covering a huge range of printers with drivers available, including the members of the Epson Stylus series - I have done this with many printers including an Epson C66 and a WiFi-linked Epson SX515.

Secure access to a Remote File System using SSH and SSHFS (Advanced)

No longer used by me, if you need it, follow up on the previous version of this page

Using ‘Connect to Server’ in the file browser with the SSH option

An Alternative way to mount a secure remote file system is to open the nemo file browser then File -> Connect to Server and select SSH from the drop down menu. Fill in the server name and other parameters as required. This way will give you a mount on the desktop and you can unmount by a right click -> Unmount Volume. You still need SSH on both machines.

If this fails with a timeout error try deleting or renaming ~/.ssh/known_hosts - you will get a message saying that it is a new host you are connecting to and everything will then work. It seems to be caused by the IP address changing for the remote hostname and you not getting the warning message.

Backing up

Overall Backup Philosophy for Mint

My thoughts on Backing Up have evolved considerably over time and now take much more into account the use of several machines and sharing between them and within them giving redundancy as well as security of the data. They now look much more at the ways the backups are used, they are not just a way of restoring a situation after a disaster or loss but also about cloning and sharing between users, machines and multiple operating systems. They continue to evolve to take into account the use of data encryption.

So firstly lets look at the broad areas that need to be backed up:

  1. The Linux operating system(s), mounted at root. This area contains all the shared built-in and installed applications but none of the configuration information for the applications or the desktop manager, which is specific to users. Mint has a built-in utility called TimeShift which is fundamental to how potential regressions are handled - it does everything required for this area's backups and can also be used for cloning. TimeShift will be covered in detail in a separate section.
  2. The Users' Home Folders, which are folders mounted within /home and contain the configuration information for each of the applications, as well as that for the desktop manager which is specific to users, such as themes, panels, applets and menus. Each also contains all the data belonging to a specific user, including the Desktop and the standard folders such as Documents, Video, Music and Photos. It will probably also contain the email 'profiles' if an SSD is in use. This is the most challenging area with the widest range of requirements, so it is the one covered in the greatest depth here.
  3. Shared DATA. The above covers the minimum areas, but I have an additional DATA area which is available to all operating systems and users and is periodically synchronised between machines as well as being backed up. This is kept independent and has a separate mount point. In the case of machines dual booted with Windows it uses a file system format compatible with Windows and Linux, such as NTFS. The requirement for easy and frequent synchronisation means Unison is the logical tool for DATA between machines, with associated synchronisation to a large USB hard drive for backup. Unison is covered in detail elsewhere in this page.
  4. Email and Browser (profiles). I am also going to mention email specifically, as it has particular issues: it needs to be collected on every machine, as well as on pads and phones, and some track kept of replies regardless of source. All incoming email is retained on the servers for months if not years, and all outgoing email is copied either to a separate account accessible from all machines or, where that is not possible automatically such as on Android, a copy is sent back to the sender's inbox. Thunderbird has a self-contained 'profile' where all the local configuration and the filing system for emails is retained, and that profile, along with the matching one for the Firefox browser, needs to be backed up; how depends on where they are held. The obvious places are the DATA area, allowing sharing between operating systems and users, or each user's home folder, which offers more speed if an SSD is used and better security if encryption is implemented. I use both.

Physical Implications of Backup Philosophy - Partitioning

I am not going to go into this in great depth as it has already been covered in other places but my philosophy is:

  1. The folder containing all the users home folders should be a separate partition mounted as /home. This separates the various functions and makes backup, sharing and cloning easier.
  2. There are advantages in having two partitions for Linux systems so new versions can be run for a while before committing to them. A separate partition for /home is required if different systems are going to share it.
  3. When one has an SSD the best speed will result from having the linux systems and the home folder using the SSD especially if the home folders are going to be encrypted.
  4. Shared DATA should be in a separate partition mounted at /media/DATA. If one is sharing with a Windows system it should be formatted as ntfs which also reduces problems with permissions and ownership with multiple users. DATA can be on a separate slower but larger hard drive.
  5. If you have an SSD, swapping should be minimised and the swap partition should be on a hard drive if one is available, to maximise SSD life.
  6. Encryption should be considered on laptops which leave the home. Home folder encryption and encrypted drives are both possible and one may wish to allocate space for an encrypted partition - in our systems it is mounted at /VAULT. It is especially important that email is in an encrypted area.

The Three Parts to Backing Up

1. System Backup - TimeShift - Scheduled Backups and more.

TimeShift is now fundamental to the update manager philosophy of Mint and makes backing up the Linux system very easy. To quote: "The star of the show in Linux Mint 19, is Timeshift. Thanks to Timeshift you can go back in time and restore your computer to the last functional system snapshot. If anything breaks, you can go back to the previous snapshot and it's as if the problem never happened. This greatly simplifies the maintenance of your computer, since you no longer need to worry about potential regressions. In the eventuality of a critical regression, you can restore a snapshot (thus canceling the effects of the regression) and you still have the ability to apply updates selectively (as you did in previous releases)." The best information I have found about TimeShift and how to use it is by the author.

TimeShift is similar to applications like rsnapshot, BackInTime and TimeVault, but with different goals. It is designed to protect only system files and settings. User files such as documents, pictures and music are excluded. This ensures that your files remain unchanged when you restore your system to an earlier date. Snapshots are taken using rsync and hard links. Common files are shared between snapshots, which saves disk space. Each snapshot is a full system backup that can be browsed with a file manager. TimeShift is efficient in its use of storage, but it still has to store the original and all the additions/updates over time. The first snapshot seems to occupy slightly more disk space than the root filesystem, and six months of additions added approximately another 35% in my case. I run with a root partition / and separate partitions for /home and DATA. Using TimeShift means that one needs to allocate roughly twice the storage one would have expected the root file system to grow to.
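The space saving from hard links can be seen in a miniature sketch of the rsync technique TimeShift uses (the paths here are throwaway examples in /tmp, and rsync must be installed):

```shell
# Two 'snapshots' of the same source: the second uses --link-dest so unchanged
# files become hard links to the first snapshot and occupy no extra space
mkdir -p /tmp/snapdemo/source
echo "an unchanged system file" > /tmp/snapdemo/source/file.txt
rsync -a /tmp/snapdemo/source/ /tmp/snapdemo/snapshot1/
rsync -a --link-dest=/tmp/snapdemo/snapshot1 /tmp/snapdemo/source/ /tmp/snapdemo/snapshot2/
# The same inode number in both snapshots confirms they share one copy on disk
stat -c '%i %n' /tmp/snapdemo/snapshot1/file.txt /tmp/snapdemo/snapshot2/file.txt
```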

In the case of the Defiant the root partition has grown to about 11 Gbytes and 5 months of TimeShift added another 4 Gbytes, so the partition with the /timeshift folder needs to have at least 22 Gbytes spare if one intends to keep a reasonable span of scheduled snapshots over a long time period. After three weeks of testing Mint 19 my TimeShift folder had reached 21 Gbytes for an 8.9 Gbyte system!

These space requirements for TimeShift obviously have a big impact on the partition sizes when one sets up a system. My Defiant was set up to allow several systems to be employed with multiple booting. I initially had the timeshift folder on the /home partition, which had plenty of space, but that does not work with a multiboot system sharing the /home folder. Fortunately two of my partitions for Linux systems are plenty big enough for use of TimeShift, and the third, which is 30 Gbytes, is acceptable if one is prepared to prune the snapshots occasionally. With Mint 20 and a lot of installed programs I suggest the minimum root partition is 40 Gbytes.

Cloning your System using TimeShift

TimeShift can also be used for 'cloning', as you can choose which partition you restore to. For example, I recently added an SSD to the Defiant: I just created the partition scheme on the SSD, took a fresh snapshot and restored it to the appropriate partition on the SSD. It will not be available until Grub is alerted by a sudo update-grub, after which it will be in the list of operating systems available at the next boot. Assuming you have a separate /home it will continue to use the existing one, and you will probably want to also move the home folder - see the previous section on Moving a Home Folder to a dedicated Partition or different partition (Expert level) for the full story.

Warning about deleting system partitions after cloning.

When you come to tidy up your partitions after cloning you must do a sudo update-grub after any partition deletions, before any reboot. If Grub cannot find a partition it expects, it hangs and you will not be able to boot your system at all; you will drop into a grub rescue prompt.

I made this mistake and used the procedure in https://askubuntu.com/questions/493826/grub-rescue-problem-after-deleting-ubuntu-partition by Amr Ayman and David Foester, which I reproduce below:

grub rescue > ls
(hd0) (hd0,msdos5) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1) (hd1) (hd1,msdos1)
grub rescue > ls (hd0,msdos1) # try to recognize which partition is this
grub rescue > ls (hd0,msdos2) # let's assume this is the linux partition
grub rescue > set root=(hd0,msdos2)
grub rescue > set prefix=(hd0,msdos2)/boot/grub # or wherever grub is installed
grub rescue > insmod normal # if this produced an error, reset root and prefix to something else ..
grub rescue > normal

For a permanent fix run the following after you successfully boot:

sudo update-grub
sudo grub-install /dev/sdX

where /dev/sdX is your boot drive.

It was not a pleasant activity and had far too much trial and error so make sure you do update-grub.

2. Users - Home Folder Archiving using Tar.

Tar is a very powerful command-line archiving tool, around which many of the GUI tools are based, and it should work on most Linux distributions. In many circumstances it is best to use it directly to back up your system. The resulting files can also be accessed (or created) by the Archive Manager, reached by right-clicking on a .tgz, .tar.gz or .tar.bz2 file. Tar is an ideal way to back up many parts of a system, in particular one's home folder. A big advantage of tar is that (with the correct options) it is capable of making copies which preserve all the linkages within the folders - simple copies do not preserve symlinks correctly, and even archive copies (cp -aR mybackup) are not as good as a tar archive.

The backup process is slow (15 minutes plus) and the file will be several Gbytes for the simplest system. After it is complete the file should be moved to a safe location, preferably a DVD or external device. If you want higher compression the command tar cvpjf mybackup.tar.bz2 can be used in place of tar cvpzf mybackup.tgz. This uses bzip2 to do the compressing (the j option); it takes longer but gives a smaller file.
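The z/j trade-off can be tried on any folder; a small throwaway example (the names and paths are illustrative, and bzip2 must be installed for the j option):

```shell
# Create some compressible test data, then archive it with gzip (z) and bzip2 (j)
mkdir -p /tmp/compressdemo && seq 1 20000 > /tmp/compressdemo/numbers.txt
tar cpzf /tmp/demo.tgz     -C /tmp compressdemo
tar cpjf /tmp/demo.tar.bz2 -C /tmp compressdemo
ls -l /tmp/demo.tgz /tmp/demo.tar.bz2   # bzip2 is slower but usually smaller
```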

You can access parts of the archive using the GUI Archive Manager by right clicking on the .tgz file - again slow on such a large archive.

Tar, in the simple way we will be using it, takes a folder and compresses all its contents into a single 'archive' file. With the correct options this can be what I call an 'exact' copy where all the subsidiary information such as timestamp, owner, group and permissions is stored without change. Symbolic (soft) links and hard links can also be retained. Normally one does not want to follow a link out of the folder and put all of the target into the archive, so one needs to take care.

We want to back up each user's home folder so it can be easily replaced on the existing machine or on a replacement machine. The ultimate test is: can one back up the user's home folder, delete it (renaming is safer) and restore it exactly, so the user cannot tell in any way? The home folder is, of course, continually changing when the user is logged in, so backing up and restoring should really be done when the user is not logged in, i.e. from a different user, a LiveUSB or from a console. Our systems also retain the first installed user for such administrative activities.

You can create a basic user very easily and quickly using Users and Groups by Menu -> Users and Groups -> Add Account, Type: Administrator ... -> Add -> Click Password to set a password (otherwise you can not use sudo)

Firstly we must consider what is arguably the most fundamental decision about backing up: the way the location is specified when we create the tar archive and when we extract it - the two must usually combine to restore the folder to the same place. If we store absolute paths we must extract accordingly; if we store relative paths the same applies. So we will always consider pairs of commands, one to create and one to extract, depending on which style we chose.
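Tar can list an archive's contents without touching the filesystem, so you can see which path style an archive uses before deciding how to extract it. A small sandbox demonstration (all the paths under /tmp are purely illustrative):

```shell
set -e
mkdir -p /tmp/pathdemo/home/user1
echo hello > /tmp/pathdemo/home/user1/file.txt

# Absolute paths stored (Method 1 style - note the P option)
tar cPzf /tmp/pathdemo/abs.tgz /tmp/pathdemo/home/user1

# Relative paths stored (Method 2 style - run from the parent folder)
(cd /tmp/pathdemo && tar czf rel.tgz home/user1)

tar tzf /tmp/pathdemo/abs.tgz | head -n 1   # starts with /tmp/...
tar tzf /tmp/pathdemo/rel.tgz | head -n 1   # starts with home/user1/
```

Listing with `tar tzf` is always safe, so it is a good habit before extracting an unfamiliar archive.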

Method 1 has absolute paths and shows home when we open the archive with just a single user folder below it. This is what I have always used for my backups and the folder is always restored to /home on extraction.

sudo tar --exclude=/home/*/.gvfs cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/

sudo mv -T /home/user1 /home/user1-bak
sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /

Method 2 shows the users folder at the top level when we open the archive. This is suitable for extracting to a different partition or place but here the extraction is back to the correct folder.

cd /home && sudo tar --exclude=user1/.gvfs cvpzf "/media/USB_DATA/mybackup2.tgz" user1

sudo tar xvpfz "/media/USB_DATA/mybackup2.tgz" -C /home

Method 3 shows the folders within the users folder at the top level when we open the archive. This is also suitable for extracting to a different partition or place and has been added to allow backing up and restoring encrypted home folders where the encrypted folder may be mounted to a different place at the time

cd /home/user1 && sudo tar --exclude=.gvfs cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" .

sudo tar xvpfz "/media/USB_DATA/mybackupuser1method3.tgz" -C /home/user1

These are all single lines if you cut and paste.
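Before trusting any of these command pairs with a real home folder, it is worth rehearsing the full cycle in a throwaway directory. A sandbox round-trip in the style of Method 2 (every path and name here is hypothetical):

```shell
set -e
base=/tmp/tartest                         # throwaway sandbox, not a real /home
mkdir -p $base/home/user1/docs
echo data > $base/home/user1/docs/a.txt
ln -s docs/a.txt $base/home/user1/alink   # a symlink, to check it survives

# Back up, rename the original out of the way, restore, then compare
(cd $base/home && tar cpzf $base/bk.tgz user1)
mv -T $base/home/user1 $base/home/user1-bak
tar xpzf $base/bk.tgz -C $base/home
diff -r $base/home/user1 $base/home/user1-bak && echo "restored exactly"
```

If `diff -r` reports nothing and the symlink is still a symlink, the archive/restore pair is doing its job.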


Archive creation options: The options used when creating the archive are: c create archive, v verbose mode (you can leave this out after the first time), p retain permissions, P do not strip the leading slash, z gzip the archive and f file output. Then follows the name of the file to be created, mybackup.tgz, which in this example is on an external USB drive called 'USB_DATA' - the backup name should include the date for easy reference. Next is the directory to back up. Next are the objects which need to be excluded - the most important of these is your backup file itself if it is in your /home area (not needed in this case), or the archive would be recursive! It also excludes the folder (.gvfs) which is used dynamically by a file mounting system and is locked, which stops tar from completing. Problems with files which are in use can be avoided by creating another user and doing the backup from that user - overall that is a cleaner way to work.

Other exclusions: There are other such files and folders to avoid saving including the cache area for the pCloud cloud service as it will be best to let that recreate and avoid potential conflicts. ( --exclude="/home/*/pCloudDrive" --exclude="/home/*/.pcloud" )

Archive Restoration uses the options x extract, v verbose, p retain permissions, f from file and z gzip. This will take a while. The "-C /" ensures that the directory is Changed to a specified location; in case 1 this is root, so the files are restored to their original locations. In case 2 you can choose, but it is normally /home. Case 3 is useful if you mount an encrypted home folder independently of login using ecryptfs-recover-private --rw, which mounts to /tmp/user.8random8

tar options style used: The initial set of options is in the old option style; old options are written together as a single clumped set, without spaces separating them or dashes preceding them, and appear first on the command line after the tar program name. The one exception is that recent versions of tar (>1.28) require exclusions to come immediately after the tar program name. Mint 20 had version 1.30 as of November 2020.

Higher compression: If you want to use a higher compression method the option -j can be used in place of the -z option, and .tar.bz2 should be used in place of .tgz for the backup file extension. This will use bzip2 to do the compressing. This method takes longer but gives a smaller file - I have never bothered so far.

Deleting Files: If the old system is still present note that tar only overwrites files; it does not delete files from the old version which are no longer needed. I normally restore from a different user and rename the user's home folder before running tar as above; when I have finished I delete the renamed folder. This needs root/sudo and the easy way is to right click on a folder in Nemo and 'Open as Root' - make sure you use a right click delete to avoid the files going into a root deleted items folder.

Deleting Archive files: If you want to delete the archive file then you will usually find it is owned by root, so make sure you delete it in a terminal - if you use a root browser then it will go into a root Deleted Items folder which you cannot easily empty, so it takes up disk space forever. If this happens then read http://www.ubuntugeek.com/empty-ubuntu-gnome-trash-from-the-command-line.html and/or load the trash-cli command line trash package using the Synaptic Package Manager and type

sudo trash-empty
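If you prefer not to install anything, the root trash can also be emptied by hand. Per the freedesktop trash layout it normally lives under /root/.local/share/Trash (a sketch - check that path exists on your system first):

```shell
# Empty root's trash manually: files/ holds the deleted items themselves,
# info/ holds the matching .trashinfo metadata records
sudo rm -rf /root/.local/share/Trash/files/* /root/.local/share/Trash/info/*
```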

Alternative to the multiple methods above - use --strip=n option

The tar manual contains information on an option to strip given number of leading components from file names before extraction namely --strip=number

To quote

For example, suppose you have archived whole `/usr' hierarchy to a tar archive named `usr.tar'. Among other files, this archive contains `usr/include/stdlib.h', which you wish to extract to the current working directory. To do so, you type:

$ tar -xf usr.tar --strip=2 usr/include/stdlib.h

The option --strip=2 instructs tar to strip the two leading components (`usr/' and `include/') off the file name.

If you add the --verbose (-v) option to the invocation above, you will note that the verbose listing still contains the full file name, with the two removed components still in place. This can be inconvenient, so tar provides a special option for altering this behavior: --show-transformed-names.


This should allow an archive saved with the full path information (as we do in Method 1) to be extracted with the /home/user information stripped off. I have only done partial testing but the following seems to work and provides Method 4:

Method 4 This should also enable one to restore to an encrypted home folder where the encrypted folder may be mounted to a different place at the time by ecryptfs-recover-private --rw

sudo tar --exclude=/home/*/.gvfs cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/

sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" --strip=2 --show-transformed-names -C /home/user1

It is likely new versions of the following sections may only use the existing Method 1 combined with use of --strip=n during extraction
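A quick sandbox check of how --strip behaves (the folder names are arbitrary):

```shell
set -e
mkdir -p /tmp/stripdemo/home/user1 /tmp/stripdemo/dest
echo hello > /tmp/stripdemo/home/user1/note.txt

# Archive with two leading path components, then strip them off on extraction
(cd /tmp/stripdemo && tar czf a.tgz home/user1)
tar xzf /tmp/stripdemo/a.tgz --strip=2 -C /tmp/stripdemo/dest
cat /tmp/stripdemo/dest/note.txt    # the file lands directly in dest/
```

The member home/user1/note.txt loses its two leading components and is extracted as note.txt inside the -C directory.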

Archiving a home folder and restoring

Everything has really been covered above so this is really just a slight expansion of the above for a specific case and the addition of some suggested naming conventions.

This uses Method 1 where all the paths are absolute so the folder you are running from is not an issue. This is the method I have always used for my backups so it is well proven. The folder is always restored to /home on extraction so you need to remove or preferably rename the users folder before restoring it. If a backup already exists delete it or use a different name. Both creation and retrieval must be done from a different or temporary user to avoid any changes taking place during the archive operations.

sudo tar --exclude=/home/*/.gvfs cvpPzf "/media/USB_DATA/backup_machine_user_a_$(date +%Y%m%d).tgz" /home/user_a/

sudo mv -T /home/user_a /home/user_a-bak
sudo tar xvpfz "/media/USB_DATA/backup_machine_user_YYYYmmdd.tgz" -C /

This is usually the best and most flexible way to create a backup and the resulting archive can be used for almost anything with appropriate use of the --strip=n option when extracting. All my backups to date have used it.

Note the automatic inclusion of the date in the backup file name and the suggestion that the machine and user are also included.
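The naming convention can be generated rather than typed each time; a minimal sketch where the user name is a placeholder and the machine name is taken from hostname:

```shell
# Build a backup file name of the form backup_machine_user_YYYYmmdd.tgz
machine=$(hostname)
user=user_a                                        # placeholder user name
backup="backup_${machine}_${user}_$(date +%Y%m%d).tgz"
echo "$backup"
```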

Cloning between machines and operating systems using a backup archive (Advanced)

It is possible that you want to clone a machine, for example when you buy a new one. It is usually easy if you have the home folder on a separate partition, the user you are cloning was the first user installed, and you make the new username the same as the old - I have done that many times. There is however a catch to watch for: user names are a two stage process. If I set up a system with the user peter when I install, that is actually just an 'alias' for the numeric user id 1000 in the Linux operating system. I then set up a second user pauline, who corresponds to 1001. If I have a disaster and reinstall, and this time start with pauline, she is then 1000 and peter is 1001. If I now restore my carefully backed up folders, all the owners etc. are incorrect, because ownership uses the underlying numeric value - quite apart, of course, from anywhere the name is hard coded in scripts.

You can check all the relevant information for the machine you are cloning from in a terminal by use of id:

pcurtis@mymachine:~$ id
uid=1000(pcurtis) gid=1000(pcurtis) groups=1000(pcurtis),4(adm),6(disk),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),29(audio),30(dip),44(video),46(plugdev),104(fuse),108(avahi-autoipd),109(avahi),110(netdev),112(lpadmin),120(admin),121(saned),122(sambashare),124(powerdev),128(mediatomb)

So when you install on a new machine you should always use the same username and password as on the original machine and then create an extra user with admin (sudo) rights for convenience for the next stage. Change to your temporary user, rename the first user's folder (you need to be root) and replace it from the archived folder from the original machine. Now log in to the user again and that should be it. At this point you can delete the temporary user. If you have multiple users to clone, the user names must obviously be the same and, more importantly, the numeric ids must be the same, as that is what is actually used by the kernel; the username is really only a convenient alias. This means that the users you clone must always be installed in the same order on both machines or operating systems so they have the same numeric UIDs.

So we first make a backup archive in the usual way and take it to the other machine or switch to the other operating system and restore as usual. It is prudent to backup the system you are going to overwrite just in case.

So first check the id on both machines for the user(s) by use of

id user

If and only if the ids are the same can we proceed

On the first machine and from a temporary user:

sudo tar --exclude=/home/*/.gvfs cvpPzf "/media/USB_DATA/mybackup1.tgz" /home/user1/

On the second machine or after switching operating system and from a temporary user:

sudo mv -T /home/user1 /home/user1-bak # Rename the user's folder.
sudo tar xvpfz "/media/USB_DATA/mybackup1.tgz" -C /


Moving a Mint Home Folder to a different Partition (Expert level)

This is actually more complex than any of the other sections, as you are moving information to an initial mount point and then changing the automatic mounting so it becomes /home and covers up the previous information, which then has to be deleted from a LiveUSB or other system. I have recently added an SSD to the Defiant computer and the same procedure applies to moving the home folder /home from the existing hard drive partition to a partition on the faster SSD.

The problem of exactly copying a folder is not as simple as it seems - see https://stackoverflow.com/questions/19434921/how-to-duplicate-a-folder-exactly. You not only need to preserve the contents of the files and folders but also the owner, group, permissions and timestamps. You also need to be able to handle symbolic links and hard links. I initially used a complex procedure using cpio but am no longer convinced that covers every case, especially if you use Wine, where .wine is full of links and hard coded scripts. The stackoverflow thread has several sensible options for 'exact' copies. I also have a well proven way of creating and restoring backups of home folders exactly using tar, which has advantages as we would create a backup before proceeding in any case!

When we back up normally (Method 1 above) we use tar to create a compressed archive which can be restored exactly, and to the same place; even during cloning we are still restoring the user home folders to be under /home. If you are moving to a separate partition you want to extract to a different place, which will become /home eventually after the new mount point is set up in the file system mount point list in /etc/fstab. It is convenient to always use the same backup procedure, so you need to get at least one user in place in the new home folder before changing the mount point. I am not sure I trust any of the copy methods for my real users, but I do believe it is possible to move a basic user (created only for the transfer) that you can use to do the initial login after changing the location of /home, and can then use to extract all the real users from their backup archives. The savvy reader will also realise you can use Method 2 (or 4) above to move them directly to the temporary mount point, but what is written here has stood the test of time.

An 'archive' copy using cp is good enough in the case of a very basic user which has recently been created and little used, such a home folder may only be a few tens of kbytes in size and not have a complex structure with links:

sudo cp -ar /home/basicuser /media/wherever

The -a option is an archive copy preserving most attributes and the -r is recursive to copy sub folders and contents.
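You can convince yourself that an archive copy keeps the attributes with a quick timestamp comparison (GNU stat and touch; the paths are throwaway):

```shell
set -e
mkdir -p /tmp/cpdemo/src
echo x > /tmp/cpdemo/src/f
touch -d '2020-01-02 03:04:05' /tmp/cpdemo/src/f   # give the file a known old date

cp -ar /tmp/cpdemo/src /tmp/cpdemo/dst
# Modification times should be identical after an archive copy
[ "$(stat -c %Y /tmp/cpdemo/src/f)" = "$(stat -c %Y /tmp/cpdemo/dst/f)" ] \
    && echo "timestamps preserved"
```

A plain `cp -r` without -a would stamp the copy with the current time instead.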

So the procedure to move /home to a different partition is, in outline: back up all the real users with tar, create and log in to a basic temporary user, copy that basic user's folder to the new partition, change the mount point in /etc/fstab so the new partition becomes /home, reboot and log in as the basic user, restore the real users from their archives, and finally delete the old home folders (from a LiveUSB or other system) to recover the space.

Example of changing file system table to auto-mount different partition as /home

I will give an example of the output of blkid and the contents of /etc/fstab after moving my home folder to the SSD drive, highlighting the changes in red. Note this is under Mint 19, including the invocation for the text editor.

pcurtis@defiant:~$ blkid
/dev/sda1: LABEL="EFIRESERVED" UUID="06E4-9D00" TYPE="vfat" PARTUUID="333c558c-8f5e-4188-86ff-76d6a2097251"
/dev/sda2: LABEL="MINT19" UUID="e07f0d65-8835-44e2-9fe5-6714f386ce8f" TYPE="ext4" PARTUUID="4dfa4f6b-6403-44fe-9d06-7960537e25a7"
/dev/sda3: LABEL="MINT183" UUID="749590d5-d896-46e0-a326-ac4f1cc71403" TYPE="ext4" PARTUUID="5b5913c2-7aeb-460d-89cf-c026db8c73e4"
/dev/sda4: UUID="99e95944-eb50-4f43-ad9a-0c37d26911da" TYPE="ext4" PARTUUID="1492d87f-3ad9-45d3-b05c-11d6379cbe74"
/dev/sdb1: LABEL="System Reserved" UUID="269CF16E9CF138BF" TYPE="ntfs" PARTUUID="56e70531-01"
/dev/sdb2: LABEL="WINDOWS" UUID="8E9CF8789CF85BE1" TYPE="ntfs" PARTUUID="56e70531-02"
/dev/sdb3: UUID="178f94dc-22c5-4978-b299-0dfdc85e9cba" TYPE="swap" PARTUUID="56e70531-03"
/dev/sdb5: LABEL="DATA" UUID="2FBF44BB538624C0" TYPE="ntfs" PARTUUID="56e70531-05"
/dev/sdb6: UUID="138d610c-1178-43f3-84d8-ce66c5f6e644" SEC_TYPE="ext2" TYPE="ext3" PARTUUID="56e70531-06"
/dev/sdb7: UUID="b05656a0-1013-40f5-9342-a9b92a5d958d" TYPE="ext4" PARTUUID="56e70531-07"
/dev/sda5: UUID="47821fa1-118b-4a0f-a757-977b0034b1c7" TYPE="swap" PARTUUID="2c053dd4-47e0-4846-a0d8-663843f11a06"
pcurtis@defiant:~$ xed admin:///etc/fstab

and the contents of /etc/fstab after modification

# <file system> <mount point> <type> <options> <dump> <pass>

UUID=e07f0d65-8835-44e2-9fe5-6714f386ce8f / ext4 errors=remount-ro 0 1
# UUID=138d610c-1178-43f3-84d8-ce66c5f6e644 /home ext3 defaults 0 2
UUID=99e95944-eb50-4f43-ad9a-0c37d26911da /home ext4 defaults 0 2
UUID=2FBF44BB538624C0 /media/DATA ntfs defaults,umask=000,uid=pcurtis,gid=46 0 0
UUID=178f94dc-22c5-4978-b299-0dfdc85e9cba none swap sw 0 0

In summary: there are many advantages in having one's home directory on a separate partition, but overall this change is not a procedure to be carried out unless you are prepared to experiment a little. It is much better to get it right and create one when installing the system.

Cloning into a different username - Not Recommended but somebody will want to try!

I have also tried to clone into a different username but do not recommend it. It is possible to change the folder name and set up all the permissions, and everything other than Wine should be OK on a basic system. The .desktop files for Wine contain the user name hard coded, so these will certainly need to be edited, and all the configuration for the Wine programs will be lost. You will also have to change any of your scripts which have the user name 'hard coded'. I have done it once but the results were far from satisfactory. If you want to try, you should do the following before you log in for the first time after replacing the home folder from the archive. Change the folder name to match the new user name; the following commands then set the owner and group to the new user and give standard permissions for all the files other than .dmrc, which is a special case.

sudo chown -R eachusername:eachusername /home/eachusername
sudo chmod -R 755 /home/eachusername
sudo chmod 644 /home/eachusername/.dmrc

This needs to be done before the username is logged into for the first time, otherwise many desktop settings will be lost and the following warning message appears.

User's $HOME/.dmrc file is being ignored. This prevents the default session and language from being saved. File should be owned by user and have 644 permissions.
User's $HOME directory must be owned by user and not writable by others.

If this happens it is best to start again, remembering that the archive extraction does not delete files, so you need to get rid of the folder first!
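A quick way to spot-check the ownership and permissions set above before that first login (the username is a placeholder, and the stat format options are GNU):

```shell
# Show owner:group and octal permissions; expect eachusername:eachusername
# with 755 on the home folder and 644 on .dmrc
stat -c '%U:%G %a' /home/eachusername /home/eachusername/.dmrc
```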

What about backing up encrypted home folders?

This is covered at the end of the Encrypting Your Home Folder section.

3. DATA Synchronisation and Backup - Unison

This is a long section because it contains a lot of background, but in practice I just have one central machine with Unison set up to offer a set of profiles, each of which will check and then do all the synchronisations to a drive or machine very quickly. I do it once a month as a backup, and when required if, for example, I need to edit on different machines.


So how does it work? Linux has a very powerful tool available called Unison to synchronise folders, and all their subfolders, either between drives on the same machine or across a local network using a secure transport called SSH (Secure Shell). At its simplest you can use a Graphical User Interface (GUI) to synchronise two folders which can be on any of your local drives, a USB external hard drive or on a networked machine which also has Unison and SSH installed. Versions are even available for Windows machines, but one must make sure that the Unison version numbers are compatible, even between Linux versions. That has caused me a lot of grief in the past and was largely instrumental in causing me to upgrade some of my machines from Mint 19.3 to Mint 20 earlier than I would have done.

If you are using the graphical interface, you just enter or browse for two local folders and it will give you a list of differences and recommended actions which you can review and it is a single keystroke to change any you do not agree with. Unison uses a very efficient mechanism to transfer/update files which minimises the data flows based on a utility called rsync. The initial Synchronisation can be slow but after it has made its lists it is quite quick even over a slow network between machines because it is running on both machines and transferring minimum data - it is actually slower synchronising to another hard drive on the same machine.

The Graphical Interface (GUI) has become much more comprehensive than it was when I started and can handle most of the options you may need; however it does not allow you to save the configurations to a different name. You can however find them very easily, as they are all stored in a folder in your home folder called .unison, so you can copy and rename them to allow you to edit each separately - for example, you may want similar configurations to synchronise with several hard drives or other machines. The format is so simple and obvious that I often edit them before saving.

For more complex synchronisation with multiple folders and perhaps exclusions you set up a more complex configuration file for each Synchronisation Profile and then select and run it from the GUI as often as you like. It is easier to do than describe - a file to synchronise my four important folders My Documents, My Web Site, Web Sites, and My Pictures is only 10 lines long and contains under 200 characters yet synchronises 25,000 files!

The review list when you run Unison is intelligent: if you have made a folder full of sub folders of pictures whilst you are away, it only shows the top level folder which has to be transferred - which is a good job, as we often come back with several thousand new pictures! You have the option to put off, agree or reverse the direction of any files with differences. Often the differences are in the metadata such as date or permissions rather than the file contents, and there is enough information to resolve those differences.

Both Unison and SSH are available in the Mint repositories but need to be installed using System -> Administration -> Synaptic Package Manager and search for and load the unison-gtk and ssh packages or they can be directly installed from a terminal by:

sudo apt-get install unison-gtk ssh

The first is the GUI version of Unison, which also installs the terminal version if you need it. The second is a meta package which installs the SSH server and client, blacklists and various library routines. A third package, winbind, may be required to allow ssh to use host names rather than just absolute IP addresses if you do not have an entirely Linux network.

Unison is then accessible from the Applications Menu and can be tried out immediately. There is one caution - it is initially configured so that the date of any transferred file is the current date ie the file creation/modification date is not preserved although that can easily be set in the configuration files - the same goes for preserving the user and group which is less essential than the file dates.

There is, however, an important issue which must be taken into account when synchronising and preserving the file dates and times. Users can work on files owned by others whose group they are in, except for one thing: only the owner or root can change the time stamp, so synchronisation fails if the time stamp needs to be updated on files you do not own. I have got round the problems of shared folders by making sure all users are in the groups of the other users, but that is not sufficient for synchronisation, so I carry out all synchronisations from the first user installed (id 1000) and set the owner of all files to that user (id 1000) and the group to adm, which all users with sudo rights are automatically in. This can be done very quickly with recursive commands in the terminal for the folders DATA and VAULT on the machine and on the USB drives which are plugged in (assuming they are formatted with a Linux file system). The old fashioned Windows file systems can be mounted with the same user 1000 and group adm in a different way.

The owner, group and permissions are best set in a terminal; the File Manager looks as if it could be used via the 'set permissions for all enclosed files' option, but the recursion down the file tree does not seem to work in practice. So, for example, to set the user, group and permissions for DATA do:

sudo chown -R 1000:adm /media/DATA
sudo find /media/DATA -type d -exec chmod 775 {} +   # directories need the execute bit to be entered
sudo find /media/DATA -type f -exec chmod 664 {} +

which only takes seconds on an SSD and appears instantaneous after the first time, when only a few need changing.

Windows Files Systems (fat32, vfat and ntfs) (Most users ignore this section)

The old fashioned Windows file systems can be given the same user 1000 and group adm by using mount options.

Ownership of fat and ntfs drives mounted permanently using fstab: One 'feature' of mounting using the File System Table in the usual way is that the owner is root, and only the owner (root) can set the time codes - this means that any files or directories that are copied by a user have the time of the copy as their date stamp, which can cause problems with Unison when synchronising.

A solution for a single user machine is to find out your user id and mount the partition with option uid=user-id, then all the files on that partition belong to you - even the newly created ones. This way when you copy you keep the original file date.

# /dev/sda5
UUID=706B-4EE3 /media/DATA vfat iocharset=utf8,uid=yourusername,umask=000 0 0

In the case of multiple user machines you should not mount at boot time and instead mount the drives from Places.

The uid can also be specified numerically and the first user created has user id 1000.

SSH (Secure Shell)

In the introduction I spoke of using the ssh protocol to synchronise between machines.

When you set up the two 'root' directories for synchronisation you get four options when you come to the second one - we have used the local option but if you want to synchronise between machines you select the SSH option. You then fill in the hostname which can be an absolute IP address or a hostname. You will also need to know the folder on the other machine as you can not browse for it. When you come to synchronise you will have to give the password corresponding to that username, often it asks for it twice for some reason.

Setting up ssh and testing logging into the machine we plan to synchronise with.

I always check that ssh has been correctly set up on both machines and initialise the connection before trying to use Unison. In its simplest form ssh allows one to log in, using a terminal, on a remote machine. Both machines must have ssh installed (and the ssh daemons running, which is the default after you have installed it). The first time you use SSH to connect to a machine you will get some warnings that it cannot authenticate the connection, which is not surprising, and it will ask for confirmation - you have to type yes rather than y. It will then tell you it has saved the authentication information for the future, and you will get a request for a password, which is the start of your log in on the other machine. After providing the password you will get a few more lines of information and be back to a normal terminal prompt, but note that it is now showing the name of the other machine. You can enter some simple commands such as a directory list (ls) if you want.

pcurtis@defiant:~$ ssh lafite.local
The authenticity of host 'lafite.local (' can't be established.
ECDSA key fingerprint is SHA256:khcDIB60+p0dFGoH0BQCVdqOLX3m0LGYauS+656tqpU.
Are you sure you want to continue connecting (yes/no)? yes
Added the host to the list of known hosts (/home/pcurtis/.ssh/known_hosts).
pcurtis@lafite.local's password: ***********

packages can be updated.
0 updates are security updates.

Last login: Sun Aug 5 07:06:23 2018 from
pcurtis@lafite:~$ exit
Connection to lafite.local closed.

This is how computing started, with dozens or even hundreds of users logging into machines less powerful than ours via a terminal to carry out all their work.

The hostname resolution seems to work reliably with Mint 20 on my network; however if username@hostname does not work, try username@hostname.local. If neither works you will have to use the numeric IP address, which can be found by clicking the network manager icon in the tool tray -> Network Settings -> the settings icon on the live connection. The IP addresses can vary if the router is restarted, but can often be fixed in the router's internal setup - that is another story.

More complex profiles for Unison with notes on the options:

The profiles live in /home/username/.unison and can be created/edited with xed.

# Profile to synchronise from triton to netbook vortex-ubuntu
# with username pcurtis on vortex-ubuntu
# Note: if the hostname is a problem then you can also use an absolute address
# such as on vortex-ubuntu
# Roots for the synchronisation
root = /media/DATA
root = ssh://pcurtis@vortex-ubuntu//media/DATA
# Paths to synchronise
path = My Backups
path = My Web Site
path = My Documents
path = Web Sites
# Some typical regexps specifying names and paths to ignore
ignore = Name temp.*
ignore = Name *~
ignore = Name .*~
ignore = Name *.tmp
# Some typical Options - only times is essential
# When fastcheck is set to true, Unison will use the modification time and length of a
# file as a ‘pseudo inode number’ when scanning replicas for updates, instead of reading
# the full contents of every file. Faster for Windows file systems.
fastcheck = true
# When times is set to true, file modification times (but not directory modtimes) are propagated.
times = true
# When owner is set to true, the owner attributes of the files are synchronized.
# owner = true
# When group is set to true, the group attributes of the files are synchronized.
# group = true
# The integer value of this preference is a mask indicating which permission bits should be synchronized other than set-uid.
perms = 0o1777

The above is a fairly comprehensive profile file to act as a framework and the various sections are explained in the comments.

Using Public Key Authentication with ssh, sshfs and unison to avoid password requests

This needs domain name resolution to be working reliably - if you need more information see the earlier version of the page.
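In outline, the standard OpenSSH tools do the whole job; a hedged sketch where the key type and host name are examples, not a prescription:

```shell
# Generate a key pair (accept the default location; a passphrase is optional),
# copy the public half to the remote machine, then log in without a password
ssh-keygen -t ed25519
ssh-copy-id pcurtis@lafite.local
ssh pcurtis@lafite.local
```

Once the key is in place, Unison profiles using ssh:// roots stop prompting for a password as well.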


VeraCrypt (a current and improved fork of TrueCrypt) - very useful

I have used TrueCrypt on all my machines and, despite its well documented and sudden withdrawal by the authors, it is still well regarded and considered safe by many - see https://www.grc.com/misc/truecrypt/truecrypt.htm. There are many conspiracy theories for its sudden withdrawal, based round the fact that the security services could not crack it. Fortunately it has now been forked and continues open source, with enhanced security, as VeraCrypt. There is a transcript of a podcast by Steve Gibson which covers security testing and his views on changing to VeraCrypt at https://www.grc.com/sn/sn-582.htm, and he now supports the shift. VeraCrypt is arguably now both the best and the most popular disk encryption software across all machines and I have shifted on most of mine. VeraCrypt can continue to use TrueCrypt vaults and also has a slightly improved format of its own which addresses one of the security concerns. Changing format is as simple as changing the vault password - see this article.

TrueCrypt and its replacement VeraCrypt create a virtual disk with the contents encrypted into a single file, a disk partition or removable media such as a USB stick. The encryption is all on the fly: you have a file, you mount it as a disk, and from then on it is used just like a real disk with everything decrypted and re-encrypted invisibly in real time. The virtual drive is unmounted automatically at shutdown, and one should have closed all the open documents using the virtual drive by that point, just as when you shut down normally. The advantage is that the files are never copied onto a real disk, so there are no shadows or temporary files left behind and one does not have to do a secure delete.

TrueCrypt and its replacement VeraCrypt obviously install deep into the operating system in order to encrypt and decrypt invisibly on the fly. In the past this meant the software was specific to a Linux kernel and had to be recompiled/reinstalled every time the kernel was updated. Fortunately it can now be downloaded as an installer in both 32 and 64 bit versions – make sure you get the correct version.

The VeraCrypt installers for Linux are now packed into a single compressed file, typically veracrypt-1.21-setup.tar.gz. Just download it, double click to open the archive, drag the appropriate installer, say veracrypt-1.21-setup-gui-x64, to the desktop, double click it, then click 'Run in Terminal' to run the installer script.

The Linux version of Vera/TrueCrypt has a GUI interface almost identical to that in Windows. It can be run from the standard menu, although with Cinnamon you may need to do a Cinnamon restart before it is visible. It can also be run by just typing veracrypt in a terminal. It opens virtual disks which are placed on the desktop. Making new volumes (encrypted containers) is now trivial – just use the wizard. This is now a very refined product under Linux.

The only drawback I have found is that one has to have administrative privileges to mount one's volumes. This means that one is asked for one's administrative password on occasions as well as the volume password. There is a way round this by granting additional 'rights', specific to just this activity, to a user (or group) by additions to the sudoers file. There is information on the sudoers file and editing it at:


Because sudo is such a powerful program you must take care not to put anything incorrectly formatted in the file. To prevent any incorrect formatting getting into the file you should edit it using the command visudo run as root or by using sudo ( sudo visudo ). On Mint and Ubuntu visudo opens the file in nano, a simple terminal editor, so it is fortunate we only have a single line to add!

You launch visudo in a terminal

sudo visudo

There are now two ways to proceed. If you have a lot of users then it is worth creating a group called veracrypt and adding all the users that need VeraCrypt to that group; you then use the sudoers file to give group members the 'rights' to use it without a password. See:


If you only have one or two users then it is easier to give them individual rights by adding a line to the configuration file: launch visudo in a terminal and append one of the following lines. The first is for a single user (replace USERNAME with your username), the second for a group called veracrypt, and the last is the brute force option which gives everyone access:

USERNAME ALL = (root) NOPASSWD:/usr/bin/veracrypt
%veracrypt ALL = (root) NOPASSWD:/usr/bin/veracrypt
ALL ALL=NOPASSWD:/usr/bin/veracrypt

Type the line carefully and CHECK - there is no cut and paste into visudo.

Make sure there is a return at the end.

Save with Ctrl-O and exit with Ctrl-X - if there are errors there will be a message in the terminal and a request for what to do.

I have used it both the simple way and by creating a group.
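As an aside, a slightly safer variant of the single-user route is to put the rule in a drop-in file under /etc/sudoers.d and syntax-check it with visudo before installing it. A sketch (USERNAME and the temporary filename are placeholders):

```shell
# Build the rule in a temporary file first (replace USERNAME).
echo 'USERNAME ALL = (root) NOPASSWD:/usr/bin/veracrypt' > /tmp/veracrypt-rule

# visudo -c -f syntax-checks a candidate file without touching the
# live configuration.
sudo visudo -c -f /tmp/veracrypt-rule

# Install it only once the check passes; sudoers.d files should be
# mode 0440.
sudo install -m 0440 /tmp/veracrypt-rule /etc/sudoers.d/veracrypt
```

This avoids ever opening the main sudoers file, and a broken drop-in can simply be deleted.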

Encrypting your home folder - new

The ability to encrypt your home folder has been built into Mint for a long time and it is an option during installation for the initial user. It is well worth investigating if you have a laptop, but there are a number of considerations, and it becomes far more important to back up your home folder in the mounted (un-encrypted) form to a securely stored hard drive, as it is possible to get locked out in a number of less obvious ways, such as changing your login password incorrectly.

There is a passphrase generated for the encryption which can in theory be used to mount the folder, but the forums are full of issues with fewer solutions! You should generate and record it for each user by running ecryptfs-unwrap-passphrase, as shown below.


Now we will find there is considerable confusion over what is being produced and what is being asked for in many of the ecryptfs options and utilities, as they will request your passphrase in order to give you your passphrase! I will try to explain. When you login as a user you have a login password or passphrase. The folder is actually encrypted with a much longer, randomly generated, passphrase which is looked up when you login with your login password; that is what you are being given, and what is needed if something goes dreadfully wrong. These are [should be] kept in step if you change your login password using the GUI Users and Groups utility, but not if you do it in a terminal. It is often unclear which password is required, as both are often just referred to as 'the passphrase' in the documentation.

Encrypting an existing user's home folder.

It is possible to encrypt an existing user's home folder provided there is at least 2.5 times the folder's size available in /home - a lot of workspace is required and a backup is made.

You also need to do it from another user's account. If you do not already have one, an extra basic user with admin (sudo) privileges is required, and that user must be given a password, otherwise sudo cannot be used.

You can create this basic user very easily and quickly using Users and Groups: Menu -> Users and Groups -> Add Account, set Type to Administrator, provide a username and full name -> Create -> highlight the user and click Password to set a password, otherwise you cannot use sudo.

Restart and log in to your new basic user. You may get errors if you just log out, as the gvfs file system may still have files in use.

Now you can run this command to encrypt a user:

sudo ecryptfs-migrate-home -u user

You'll have to provide your user account's login password. After you do, your home folder will be encrypted and you will be presented with some important notes. In summary, the notes say:

  1. You must log in as the other user account immediately – before a reboot!
  2. A copy of your original home directory was made. You can restore the backup directory if you lose access to your files. It will be of the form user.8random8
  3. You should generate and record the recovery passphrase (aka Mount Passphrase).
  4. You should encrypt your swap partition, too.

The highlighting is mine and I reiterate: you must log out and log in to the user whose account you have just encrypted before doing anything else.

Once you are logged in you should also create and save somewhere very safe the recovery phrase (also described as a randomly generated mount passphrase). You can repeat this any time whilst you are logged into the user with the encrypted account like this:

user@lafite ~ $ ecryptfs-unwrap-passphrase
Passphrase:
user@lafite ~ $

Note the confusing request for a 'Passphrase' - what is required is your login password/passphrase. This will not be the only case where you are asked for a passphrase which could be either your login passphrase or your mount passphrase! The mount passphrase is important - it is what actually unlocks the encryption. There is an intermediate stage when you login to your account where your login password is used to temporarily regenerate the actual mount passphrase. This linkage needs to be updated if you change your login password, and for security reasons this is not done if you change your login password in a terminal using passwd user, which could be done remotely. If you get the two out of step the mount passphrase may be the only way to retrieve your data, hence its great importance. It is also required if the system is lost and you are accessing backups on disk.

The documentation in various places states that the GUI Users and Groups utility updates the linkage between the login and mount passphrases, but I have found that the password change facility is greyed out in Users and Groups for users with encrypted home folders. In a single test I used just passwd from the actual user and that did seem to update both; everything kept working and allowed me to login after a restart.

Mounting an encrypted home folder independently of login.

A command line utility, ecryptfs-recover-private, is provided to mount the encrypted data, but it currently has several bugs when used with the latest Ubuntu or Mint.

  1. You have to specify the path rather than let the utility search.
  2. You have to manually link keychains with a magic incantation which I do not completely understand namely sudo keyctl link @u @s after every reboot. A man keyctl indicates that it links the User Specific Keyring (@u) to the Session Keyring (@s). See https://bugs.launchpad.net/ubuntu/+source/ecryptfs-utils/+bug/1718658 for the bug report

The following is an example of using ecryptfs-recover-private and the mount passphrase to mount a home folder as read/write (--rw option), doing a ls to confirm and unmounting and checking with another ls.

pcurtis@lafite:~$ sudo keyctl link @u @s
pcurtis@lafite:~$ sudo ecryptfs-recover-private --rw /home/.ecryptfs/pauline/.Private
INFO: Found [/home/.ecryptfs/pauline/.Private].
Try to recover this directory? [Y/n]: y
INFO: Found your wrapped-passphrase
Do you know your LOGIN passphrase? [Y/n] n
INFO: To recover this directory, you MUST have your original MOUNT passphrase.
INFO: When you first setup your encrypted private directory, you were told to record
INFO: your MOUNT passphrase.
INFO: It should be 32 characters long, consisting of [0-9] and [a-f].

Enter your MOUNT passphrase:
INFO: Success! Private data mounted at [/tmp/ecryptfs.8S9rTYKP].
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP
Desktop Dropbox Pictures Templates
Documents Videos Downloads Music Public
pcurtis@lafite:~$ sudo umount /tmp/ecryptfs.8S9rTYKP
pcurtis@lafite:~$ sudo ls /tmp/ecryptfs.8S9rTYKP

The above deliberately took the long way rather than use the matching LOGIN passphrase as a demonstration.

I have not yet bothered with encrypting the swap partition as it is rarely used if you have plenty of memory and swappiness set low, as discussed earlier.
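Checking the current swappiness value is a one-liner; the kernel default is usually 60 and a common desktop choice is to set vm.swappiness=10 in /etc/sysctl.conf:

```shell
# Print the current swappiness; lower values make the kernel less
# eager to push memory out to the swap partition.
cat /proc/sys/vm/swappiness
```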

Once you are happy you can delete the backup folder to save space. Make sure you Delete it (right click -> Delete) if you use nemo, and do so as root - do not risk it ending up in a root trash, which is a pain to empty!

Feature or Bug - home folders remain mounted (unencrypted) after logout?

In the more recent versions of Ubuntu and Mint the home folders remain mounted after logout. This also occurs if you login from a console or remotely over SSH. This is useful in many ways, and you are still fully protected if the machine is off when it is stolen. You have little protection in any case if it is turned on and just suspended. Some people however logout and suspend expecting full protection, which is not the case. In exchange it makes backing up and restoring a home folder easier.

Backing up an encrypted folder.

A tar archive can be generated from a mounted home folder in exactly the same way as before, as the folder stays unencrypted when you change user to ensure the folder is static. If that were not the case you could use a console (Ctrl-Alt-F2) to login, then switch back to the GUI with Ctrl-Alt-F7, or login via SSH, to make sure it was mounted to allow a backup. Either way it is best to log out at the end.

Warning: I have found dangers in using a console to login to a different user when folders are already mounted - I have not lost data, but it becomes very confusing as some folders seem to be mounted multiple times.

Another, and arguably better, alternative is to mount the user's folder via ecryptfs-recover-private and backup using Method 3 from the mount point like this:

sudo ecryptfs-recover-private --rw /home/.ecryptfs/user1/.Private

cd /tmp/ecryptfs.8S9rTYKP && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs

Restoring to an encrypted folder - Untested

Mounting via ecryptfs-recover-private --rw seems the most promising way but is not tested yet. The mount point corresponds to the user's home folder (see example above), so you have to use Method 3 (or 4) to create and retrieve your archive in this situation, namely:

cd /home/user1 && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=user1/.gvfs
# or
cd /tmp/ecryptfs.8S9rTYKP && sudo tar cvpzf "/media/USB_DATA/mybackupuser1method3.tgz" . --exclude=.gvfs

sudo tar xvpfz "/media/USB_DATA/mybackupuser1method3.tgz" -C /tmp/ecryptfs.randomst

These are all single lines if you cut and paste. The . (dot) means everything at that level goes into the archive.

Solving Problems with Dropbox after encrypting home folders

The day after I encrypted the last home folder I got a message from Dropbox saying that in 3 months' time they would only support ext4 folders under Linux, and that encryption would not be supported. They also noted the folders should be on the same drive as the operating system.

My solution has been to move the Dropbox folders to a new ext4 partition on the SSD. What I actually did was to make space on the hard drive for a swap partition and move the swap off the SSD to make space for the new partition. It is more sensible to have the swap on the hard drive as it is rarely used and, when it is, it tends to reduce the life of the SSD. Moving the swap partition needed several steps, and some had to be repeated for both operating systems to avoid errors in booting. The stages in summary were:

  1. Use gparted to make the space by shrinking the DATA partition by moving the end
  2. Format the free space to be a swap partition.
  3. Right click on the partition to turn it on by swapon
  4. Add it in /etc/fstab using blkid to identify the UUID so it will be auto-mounted
  5. Check you now have two swaps active by cat /proc/swaps
  6. Reboot and check again to ensure the auto-mount is correct
  7. Use gparted to turn off swap on the SSD partition - Rt Click -> swapoff
  8. Comment out the SSD swap partition in /etc/fstab to stop it auto-mounting
  9. Reboot and check only one active partition by cat /proc/swaps
  10. Reformat the ex swap partition to EXT4
  11. Set up a mount point in /etc/fstab of /media/DROP; set the label to DROP
  12. Reboot and check it is mounted and visible in nemo
  13. Get to a root browser in nemo and set the owner of /media/DROP from root to 1000, group to adm, and allow rw access to everyone.
  14. Create folders called user1, user2 etc in DROP for the dropbox folders to live in. It may be possible to share a folder but I did not want to risk it.
  15. Move the dropbox folders using dropbox preferences -> Sync tab -> Move: /media/DROP/user1
  16. Check it all works.
  17. Change folders in KeePass2, veracrypt, jdotxt and any others that use dropbox.
  18. Repeat from 15 for other users.

Dropbox caused me a lot of time-wasting work but it did force me to move the swap partition to the correct place.
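Steps 5, 6 and 9 above rely on counting the entries in /proc/swaps; a small sketch of that check (the file always carries one header line):

```shell
# /proc/swaps has one header line, so the number of active swap
# areas is the line count minus one.
active=$(($(wc -l < /proc/swaps) - 1))
echo "Active swap areas: $active"
```

After step 4 this should report two, and after step 8 it should be back to one.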

Encryption using LUKS (dm-crypt LUKS)

I currently have an ext4 partition of 8 Gbytes mounted at /media/SAFE on the Defiant which I intend to convert to a LUKS encrypted Partition which I believe will be acceptable for Dropbox.

First we need to know details of the partition using blkid

pcurtis@defiant:~$ blkid
/dev/sda3: LABEL="MINT183" UUID="749590d5-d896-46e0-a326-ac4f1cc71403" TYPE="ext4" PARTUUID="5b5913c2-7aeb-460d-89cf-c026db8c73e4"
/dev/sda4: UUID="99e95944-eb50-4f43-ad9a-0c37d26911da" TYPE="ext4" PARTUUID="1492d87f-3ad9-45d3-b05c-11d6379cbe74"
/dev/sda5: LABEL="SAFE" UUID="1b77be28-65f5-49ad-8264-3614b9b275b3" TYPE="ext4" PARTUUID="7ad6cb0d-db2c-4dca-ad8a-4978786c02bf"

The first thing to do before making any changes is to check for auto-mounting in /etc/fstab and remove entries if required. It is also prudent to do a sudo update-grub after any changes made to partitioning. If you try to boot with a non-existent file system in fstab the machine hangs, as I have found to my cost. This is my /etc/fstab file:

# <file system> <mount point> <type> <options> <dump> <pass>
UUID=e07f0d65-8835-44e2-9fe5-6714f386ce8f / ext4 errors=remount-ro 0 1
# UUID=138d610c-1178-43f3-84d8-ce66c5f6e644 /home ext3 defaults 0 2
UUID=99e95944-eb50-4f43-ad9a-0c37d26911da /home ext4 defaults 0 2
UUID=2FBF44BB538624C0 /media/DATA ntfs defaults,umask=000,uid=pcurtis,gid=46 0 0
UUID=178f94dc-22c5-4978-b299-0dfdc85e9cba none swap sw 0 0

In this case there was nothing to do as I had not got round to auto-mounting in fstab.

One can over-write the partition with random data if you want to be totally sure all information is lost. There are doubts this is effective on an SSD due to the way the data is shuffled by the controller to reduce wear. To my mind you have to really need to hide something to go through these procedures.

shred --verbose --random-source=/dev/urandom --iterations=3 /dev/sda5

Note: This is a long job even on a small partition, and I actually used only a single iteration, just to test the procedure.
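The same options work on an ordinary file, which is a quick way to rehearse the procedure without tying up a whole partition (the SSD caveat above applies equally to single files):

```shell
# Create a throwaway file standing in for something sensitive.
f=$(mktemp)
echo "secret" > "$f"

# Overwrite once with random data, then truncate and delete it
# (--remove is the long form of -u).
shred --random-source=/dev/urandom --iterations=1 --remove "$f"

# The file should now be gone.
[ ! -e "$f" ] && echo "shredded"
```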

Now we can create a cryptographic device mapper device in LUKS encryption mode. The magic incantation which follows is very secure but slow, as it has a high iteration time of 5 seconds, which is also the time taken to prepare for mounting. On lower powered machines which are not laptops at high risk I reduce it to as low as 1000 (1 sec) and also use shorter keys.

sudo cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-random luksFormat /dev/sda5

We now have an encrypted partition, but it has no file system, so we first open ('unlock') the device by:

sudo cryptsetup open --type luks /dev/sda5 sda5e

and now create an ext4 file system in order to write encrypted data that will be accessible through the device mapper name. The name sda5e is the device mapper name and is only used here to format the partition. The -L option labels the file system within the LUKS partition so it is accessible with a meaningful name when it is mounted.

sudo mkfs.ext4 /dev/mapper/sda5e -L VAULT

If we now look in gparted we find we have a partition with a filesystem described as [Encrypted] ext4, and if we look in nemo we will find a new device described as a 9.4 GB Volume; if we click on it we will be taken to a screen asking for the pass phrase and we can mount it. At this point you will see the label. Before being able to add files we may need to set its permissions - I set owner root and group adm, with read and write, as users with admin rights are in group adm.

Important: There is one more important step and that is to create a backup header file for security - lose that vulnerable bit of the system and you are completely stuffed.

sudo cryptsetup -v luksHeaderBackup /dev/sda5 --header-backup-file LuksHeaderBackup.bin

and put it and perhaps a copy somewhere very safe.

Auto-mounting our LUKS partition

We now have two basic choices. I have tried both and both have advantages and disadvantages.

  1. Auto-mount at boot needs a keyfile created which is used in addition to the Pass Phrase to unlock the drive. This is arguably the best way if you have somewhere already encrypted to store it otherwise this has no security at all. So this is good for drives when you already have an encrypted root (you can't easily have an encrypted /boot) and possibly an encrypted /home might serve depending on timing.
  2. Auto-mount at login uses a utility you have to install (pam_mount) which works best for local logins which are what most users will do. You have to use the same Pass Phrase as the user login. LUKS has 8 slots for Pass Phrases so it will work with up to 8 users with different logins and if the login passpahrase changes you must change the matching one on the LUKS volume.

This is what to do in more detail for each option.

1. Auto-Mount a LUKS Partition at System Boot

First create a random keyfile

sudo dd if=/dev/urandom of=/etc/luks-keys/disk_secret_key bs=512 count=8

Notes: The folder /etc/luks-keys has to exist already (create it first with sudo mkdir if necessary), otherwise dd complains - and use less obvious names!

Now we add the keyfile to LUKS

sudo cryptsetup luksAddKey /dev/sda5 /etc/luks-keys/disk_secret_key

You can see how many slots are in use by:

sudo cryptsetup luksDump /dev/sda5 | grep "Key Slot"

and you can see that we have used 2 of 8 slots, one with the Pass Phrase and the second a keyfile.

pcurtis@defiant:~$ sudo cryptsetup luksDump /dev/sda5 | grep "Key Slot"
Key Slot 0: ENABLED
Key Slot 1: ENABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED
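To turn that listing into a bare count, the same grep can be extended with -c; sketched here against the sample output above rather than a live device (on a real system you would pipe sudo cryptsetup luksDump /dev/sda5 into it):

```shell
# Sample key-slot listing, copied from the luksDump output above.
dump='Key Slot 0: ENABLED
Key Slot 1: ENABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED'

# Count the slots in use.
echo "$dump" | grep -c ': ENABLED'    # prints 2
```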

We now need to create a mapper for the LUKS device that can be referenced in fstab. This is all very easy as there is a built-in table for the mappings. Open /etc/crypttab by:

xed admin:///etc/crypttab

and then add a line like this:

sda5_crypt /dev/sda5 /etc/luks-keys/disk_secret_key luks

or, better still, use the UUID of the device:

sda5_crypt /dev/disk/by-uuid/1b77be28-65f5-49ad-8264-3614b9b275b3 /etc/luks-keys/disk_secret_key luks

This tells the system that /etc/luks-keys/disk_secret_key should be used instead of password entry to unlock the drive.

Note: /etc/crypttab did not exist in my system and I had to create it

You can now mount in nemo without a password but we still need the last step of adding the device mapper to the file system table if we want to auto-mount at boot with all the other file systems.

xed admin:///etc/fstab

and add a line like this at the end of /etc/fstab:

/dev/mapper/sda5_crypt /media/SAFE ext4 defaults,noauto 0 0

to give

# <file system> <mount point> <type> <options> <dump> <pass>

UUID=e07f0d65-8835-44e2-9fe5-6714f386ce8f / ext4 errors=remount-ro 0 1
# UUID=138d610c-1178-43f3-84d8-ce66c5f6e644 /home ext3 defaults 0 2
UUID=99e95944-eb50-4f43-ad9a-0c37d26911da /home ext4 defaults 0 2
UUID=2FBF44BB538624C0 /media/DATA ntfs defaults,umask=000,uid=pcurtis,gid=46 0 0
UUID=178f94dc-22c5-4978-b299-0dfdc85e9cba none swap sw 0 0
/dev/mapper/sda5_crypt /media/SAFE ext4 defaults,noauto 0 0

And that's it - you should have your encrypted LUKS volume mounted at /media/SAFE.

You may need to set the permissions and group of /media/SAFE

I have tried this and it all works, but I have no secure encrypted place for the keyfile.

2. Auto-Mount a LUKS Partition at User Login.

First we need to install the utility libpam-mount, a one-off:

sudo apt-get install libpam-mount

Now we need to add the login password(s) to LUKS by:

sudo cryptsetup luksAddKey /dev/sda5

This will ask for any existing passphrase, then let you enter a new one with verification.

You can have a total of 8 passphrases and keyfiles so this could cover 7 users plus your original Pass Phrase.

Now we edit the pam_mount configuration file at /etc/security/pam_mount.conf.xml

xed admin:///etc/security/pam_mount.conf.xml
and the book says to add something like <volume fstype="crypt" path="/dev/sda5" mountpoint="/media/VAULT" /> after the line <!-- Volume definitions --> and that is it - no need even to modify the file system table!

<!-- Volume definitions -->
<volume fstype="crypt" path="/dev/sda5" mountpoint="/media/VAULT" />

<!-- pam_mount parameters: General tunables -->

I rapidly found that was not the whole story: two problems showed up, which is why the simple form above should not be relied on.

So find the UUID of the device by use of blkid

peter@defiant:~$ blkid
/dev/sda1: LABEL="EFI" UUID="06E4-9D00" TYPE="vfat" PARTUUID="333c558c-8f5e-4188-86ff-76d6a2097251"
/dev/sdb6: UUID="9b1a5fa8-8342-4174-8c6f-81ad6dadfdfd" TYPE="crypto_LUKS" PARTUUID="56e70531-06"

so the addition to that I am using in /etc/security/pam_mount.conf.xml is:

<!-- Volume definitions -->

<volume fstype="crypt" path="/dev/disk/by-uuid/9b1a5fa8-8342-4174-8c6f-81ad6dadfdfd" mountpoint="/media/VAULT" user="*" />

<!-- pam_mount parameters: General tunables -->

Note the addition is a single line

There is some further information on persistent mounting at Disk_Encryption_User_Guide How_will_I_access_the_encrypted_devices_after_installation which seems to justify my solution and persistent_block_device_naming has an alternative.

This is basically what I initially did with a small LUKS partition but I plan to either encrypt my whole DATA partition with LUKS or increase the size of the LUKS partition (VAULT) to contain all the sensitive data including Dropbox and the Thunderbird and Firefox profiles.

User Login Password Changes when using pam_mount

If you change a login password you also have to change the matching password slot in LUKS, by first unmounting the encrypted partition then using a combination of:

sudo cryptsetup luksAddKey /dev/sda5
sudo cryptsetup luksRemoveKey /dev/sda5
# or
sudo cryptsetup luksChangeKey /dev/sda5

otherwise you will be using the previous password.

Warning: Never remove every password or you will never be able to access the LUKS volume.

Changing label of LUKS filesystem

Firstly, this is about the label of the filesystem, which will only appear when the LUKS partition is mounted. You will not see it when unmounted, so it is secure.

You cannot change the label in gparted or nemo or any other GUI I have found. The normal way in a terminal to set a label is e2label /dev/sdxx LABEL, but here we need to find the mapped device name.

So I looked for the mapped device on my pam_mount system in /dev, expecting it to be something like /dev/mapper:

pcurtis@lafite:~$ ls -l /dev | grep -i map
drwxr-xr-x 2 root root 80 Sep 6 07:37 mapper

Which was not what I expected, so I had a look a level down:

pcurtis@lafite:~$ ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Sep 6 07:36 control
lrwxrwxrwx 1 root root 7 Sep 6 07:37 _dev_sda3 -> ../dm-0

Now we know that it is a link called /dev/mapper/_dev_sda3

I did not need to set a new label but I checked the existing label by:

pcurtis@lafite:~$ sudo e2label /dev/mapper/_dev_sda3
[sudo] password for pcurtis:

I have set the label on another machine and it all works fine using

pcurtis@defiant:~$ ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Sep 4 17:19 control
lrwxrwxrwx 1 root root 7 Sep 6 12:08 _dev_sda5 -> ../dm-0
pcurtis@defiant:~$ sudo e2label /dev/mapper/_dev_sda5 VAULT
[sudo] password for pcurtis:

Conclusions on LUKS Encryption

The encrypting of a partition using LUKS and the various ways of auto-mounting were much easier to understand and implement than I expected.

Link Dropbox to folder in LUKS encrypted folder

Having moved the Dropbox folder to an encrypted partition, one has a different and tedious path. Linking is good for programs using Dropbox, but for some reason it does not show the fancy sync icons if you look at the folder through the link.

ln -s /media/DROPBOX/Dropbox_pcurtis/Dropbox /home/pcurtis/Dropbox

And now all the programs can see it as it was!
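The ln -s pattern is easy to rehearse with throwaway directories before touching the real folders; readlink confirms where a link points:

```shell
# Throwaway stand-ins (hypothetical paths) for the real Dropbox
# folder on the encrypted partition and the user's home directory.
target=$(mktemp -d)
home=$(mktemp -d)

# Link the folder into place, as done above with ln -s.
ln -s "$target" "$home/Dropbox"

# readlink shows where the symbolic link points.
[ "$(readlink "$home/Dropbox")" = "$target" ] && echo "link OK"

rm -rf "$target" "$home"
```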

Encrypt USB Drives and Sticks

IMPORTANT NOTE: I have set up my system to mount 'removable' drives at /media/label rather than /media/username/label, so you may need to make some changes below to reflect that, or do the same - I find the common mount much better on a multi-user machine, so I will include instructions here.

Change auto-mount point for USB drives back to /media (Better for Multiple Users)

Ubuntu (and therefore Mint) changed the mount points for USB drives from /media/USB_DRIVE_NAME to /media/USERNAME/USB_DRIVE_NAME in Ubuntu 13.04. This seems logical as it makes clear who mounted the drive and has permissions to modify it as one switches users, but when users share information it is intrusive. I have always continued to mount mine at /media/USB_DRIVE_NAME. One can change the behaviour by using a udev feature in Ubuntu 13.04 and higher based distributions (needs udisks version 2.0.91 or higher).

Create and edit a new file /etc/udev/rules.d/99-udisks2.rules

xed admin:///etc/udev/rules.d/99-udisks2.rules

and cut and paste into the file the rule commonly used for this purpose, which tells udisks2 to mount filesystems at the shared location:

ENV{ID_FS_USAGE}=="filesystem|other|crypto", ENV{UDISKS_FILESYSTEM_SHARED}="1"

then activate the new udev rule by restarting or by

sudo udevadm control --reload

When the drives are now unplugged and plugged back in they will now mount at /media/USB_DRIVE_NAME

Encrypting a USB stick with a LUKS container.

This is a summary of the sequence for a stick which has a partition at /dev/sdb1.

First we create the LUKS container. Note the iteration time has been set to 2 seconds, and that affects mounting times, which unfortunately scale with processor speed, so take that into account if you have a mix of very fast and very slow machines - perhaps choose an acceptable figure on your slowest machine and it will be quick on the others. The key size could also be reduced to 256, which makes little difference in practice.

sudo cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 2000 --use-random luksFormat /dev/sdb1

Now we open it with a temporary mapper name:

sudo cryptsetup open --type luks /dev/sdb1 sdb1e
Now we can format it to ext4 and add the Label

sudo mkfs.ext4 /dev/mapper/sdb1e -L LUKS_4

If we fail to add the label, or want to change it, we need to find out what the mapped device used by the system is called when the stick is plugged in or mounted by nemo.

pcurtis@lafite:~$ ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Sep 8 18:16 control
lrwxrwxrwx 1 root root 7 Sep 8 23:00 _dev_nvme0n1p3 -> ../dm-0
lrwxrwxrwx 1 root root 7 Sep 8 23:00 _dev_sda3 -> ../dm-1
lrwxrwxrwx 1 root root 7 Sep 8 23:00 luks-f43d424d-61c8-467b-9307-c054ac0d1086 -> ../dm-2

and now we know the device is /dev/mapper/luks-f43d424d-61c8-467b-9307-c054ac0d1086, which is a link to /dev/dm-2, so we can set or change the label by:

sudo e2label /dev/mapper/luks-f43d424d-61c8-467b-9307-c054ac0d1086 USB4
# or
sudo e2label /dev/dm-2 USB4

Once we have the label set and the container mounted we can set the permissions, owner and group for the USB stick - these seem to be retained. They are set to the 'primary' owner 1000 and group adm, which all my users belong to.

sudo chown -R 1000:adm /media/USB4 && sudo chmod -R 770 /media/USB4
and as confirmation let's have a look at /media, where the various partitions are mounted.

pcurtis@defiant:~$ ls -l /media
total 20
drwxrwxrwx 1 pcurtis plugdev 8192 Sep 9 04:38 DATA
drwxrwxr-x 8 pcurtis adm 4096 Sep 7 15:07 DROPBOX
drwxrwx--- 3 pcurtis adm 4096 Sep 8 18:13 USB4
drwxrwx--- 9 pcurtis adm 4096 Sep 8 11:16 VAULT

The end result is that one is asked for the pass phrase when the stick is plugged in, or you can mount it in nemo if it was in when the system was booted. You can also unmount it before unplugging.
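The permissions set earlier with chmod 770 can be confirmed with stat; a sketch on a throwaway directory standing in for the mount point:

```shell
# Throwaway directory standing in for the mount point /media/USB4.
d=$(mktemp -d)

# Apply the same mode as used above: read/write/execute for owner
# and group only.
chmod 770 "$d"

# stat prints the octal mode so the result can be confirmed.
stat -c '%a' "$d"    # prints 770

rm -rf "$d"
```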

NOTE: The auto-mounting on plugin and use of nemo may be limited to later kernel versions above 4.1.

Important: There is one more important step and that is to create a backup header file for security - lose that vulnerable bit of the system and you are completely stuffed. An example follows:

sudo cryptsetup -v luksHeaderBackup /dev/sda5 --header-backup-file LuksHeaderBackup_LUKS_4.bin

and put it and perhaps a copy somewhere very safe.

Note: One can use blkid to find the device if you do this at a later stage. Sometimes blkid does not seem to show LUKS drives, and it is often easier to fire up gparted to find the device in the form /dev/sda5.

Bug when mounting USB Backup Drives Encrypted using LUKS in a multiuser environment.

This problem is easy to miss whilst testing as it only appears when one is frequently switching users. Unfortunately that is exactly the situation when doing back-ups to an encrypted drive. The following procedure gets round the problem, which seems to be caused by the gnome-keyring getting confused in a multiuser environment, by avoiding using it! You should never need to permanently save the passphrase, which is bad for security, and saving it for the session is only useful if you need to keep removing and replacing the drive, which is unlikely.

Auto-Mounting USB Backup Drives Encrypted using LUKS

All the USB Backup drives we currently have in use have been encrypted with LUKS and need a passphrase when they are mounted (plugged in).

There is a bug (or feature) when mounting under the latest versions of Mint. It occurs if you have been switching users before plugging in the LUKS encrypted drive. It is easily avoided by:

  1. Rebooting before plugging in the drive then keeping it mounted through any subsequent changes of user.
  2. or Mounting at any time using the 'Forget the Password Immediately' option rather than the default of 'Remember the Password until you logout', and again keeping it mounted until you have completely finished.

If you make a mistake you will get an error message. The drive will be unlocked but the only way to use it will be to open the file manager and click on it under Devices, which will mount it and make it accessible for use.

Remember - you should always eject/un-mount the drive using the Removable Drives applet in the tray when you have completely finished. This may need your normal user password if you are logged into a different user - you never need the passphrase for the drive to lock it.

Transfer speed penalty for LUKS Encrypted USB Drives

One would expect the encryption process to slow down data transfer rates but in practice the penalty has been much less than I had expected. The reason is that most modern CPUs come with hardware-based AES support built in. Intel calls this feature "AES-NI" (shown in lscpu as "aes"), and it allows reaching 2–3 GB/s rates for AES decryption.

My lscpu (filtered) shows:

peter@defiant:~$ lscpu | grep -i aes
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts md_clear flush_l1d

We can run a benchmark to see how efficient the conversion alone (not overall drive performance) is for my 8 thread Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz for aes-xts 256b encryption and decryption used for LUKS by:

peter@defiant:~$ cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1 1263344 iterations per second for 256-bit key
PBKDF2-sha256 1458381 iterations per second for 256-bit key
PBKDF2-sha512 1106092 iterations per second for 256-bit key
PBKDF2-ripemd160 845625 iterations per second for 256-bit key
PBKDF2-whirlpool 669588 iterations per second for 256-bit key
argon2i 4 iterations, 1048576 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
argon2id 5 iterations, 1048576 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
# Algorithm | Key | Encryption | Decryption
aes-cbc 128b 625.9 MiB/s 2497.2 MiB/s
serpent-cbc 128b 84.3 MiB/s 523.9 MiB/s
twofish-cbc 128b 167.7 MiB/s 328.2 MiB/s
aes-cbc 256b 467.8 MiB/s 1980.5 MiB/s
serpent-cbc 256b 83.2 MiB/s 526.1 MiB/s
twofish-cbc 256b 164.7 MiB/s 330.1 MiB/s
aes-xts 256b 1672.2 MiB/s 1651.6 MiB/s
serpent-xts 256b 517.9 MiB/s 501.8 MiB/s
twofish-xts 256b 325.2 MiB/s 321.3 MiB/s
aes-xts 512b 1361.8 MiB/s 1381.0 MiB/s
serpent-xts 512b 496.1 MiB/s 502.1 MiB/s
twofish-xts 512b 323.2 MiB/s 321.5 MiB/s

It is clear this is not going to slow down transfers to and from a USB 3 disk drive with typical speeds of 100 MiB/s but will have a small impact on internal SSD drives. In my case only the root and home partitions are using an SSD.

Using the built-in Disks utility benchmarks I could actually see no difference on the SSD, both recording reads of 555 MB/sec. On USB drives I was seeing circa 50 MB/sec on both LUKS and straight EXT4 drives on my Defiant with USB 2 and 3 ports, whilst I could get 100 MB/sec on the Helios with its USB 3 port.

If you want to read more about the overheads of using "eCryptfs-based home directory encryption" and "LUKS based full disk encryption" options available to Linux Ubuntu and Mint users see the comprehensive benchmarking by Phoronix at https://www.phoronix.com/scan.php?page=article&item=ubuntu-1804-encrypt&num=1

Conclusions on the speed penalty for LUKS Encrypted USB Drives. The overall conclusion I have is that the speed cost in using LUKS encrypted drives, even with its 256b aes-xts encryption, is too small to be a consideration with external USB drives; in fact it seems too small to measure on my machines. The differences between apparently identical specification ports seem to be far more significant and I have found twofold differences between two USB 3 ports on different machines running identical versions of Linux.

Encrypting information on existing backup drives

Little of my backup information is sensitive - I do not care about 200 Gbytes of boring pictures or Terabytes of videos. So I add a VeraCrypt (ex TrueCrypt) volume of between 8 and 256 Gbytes to each existing hard drive. That gives plenty of room for sensitive files at one end and partition backups at the other. Partition/disk encryption is not as well proven as VeraCrypt, which is a fork of TrueCrypt. One can not easily auto-mount a VeraCrypt volume but that is a small price for reliability. A small volume can be hidden to look like a video file. Using a keyfile gives ultimate security over a stolen drive if one keeps it in an encrypted folder on one's machines.

Finding filenames which are too long for ecryptfs (used for home folder encryption)

ecryptfs puts a limit on the maximum length of a filename. This statement, based closely on one by Dustin Kirkland, one of the authors and the current maintainer of the eCryptfs userspace utilities, at https://unix.stackexchange.com/questions/32795, explains the problem:

Linux has a maximum filename length of 255 characters for most filesystems (including EXT4), and a maximum path of 4096 characters.

eCryptfs is a layered filesystem. It stacks on top of another filesystem such as EXT4, which is actually used to write data to the disk. eCryptfs always encrypts file contents, but it can optionally encrypt (obscure) filenames.

If filenames are not encrypted, then you can safely write filenames of up to 255 characters and encrypt their contents, as the filenames written to the lower filesystem will simply match. While an attacker would not be able to read the contents of index.html or budget.xls, they would know what file names exist. That may (or may not) leak sensitive information depending on your use case.

If filenames are encrypted, things get a little more complicated. eCryptfs prepends a bit of data on the front of the encrypted filename, such that it can identify encrypted filenames definitively. Also, the encryption itself involves "padding" the filename.

Empirically, it has been found that filenames longer than 143 characters require more than 255 characters to encrypt. So the eCryptfs upstream developers typically recommend limiting filenames to ~140 characters.

I have found that a few of our files can not be transferred into the encrypted file system because of this limit, so it is good to identify and shorten them. You can do this in the shell. find will give you a list of all files, recursing down into the subdirectories starting at dot (the current directory), then awk can tell you the lengths. This clever command outputs the length of the base name and the full original name.

sudo find . | awk '{base=$0;sub(/.*\//,"",base);x=length(base);if(x>133)print $0,x,base}'

The awk gets the whole line in $0, then copies it to base. It then strips off everything up to the last slash and re-saves that as base. Then it gets the length of base and, if greater than 133, prints the files close to or over the limit. Thanks to Mark Setchell for this masterpiece of compact coding, which goes through a complete home folder in too short a time to measure! The sudo is my addition to cope with a couple of cache folders where the permissions prohibit access.
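
As a quick sanity check of the command, you can try it on a scratch directory containing one deliberately long (150-character) filename:

```shell
# Create a throwaway directory with a 150-character filename and run the check
mkdir -p /tmp/lentest && cd /tmp/lentest
touch "$(printf 'a%.0s' $(seq 1 150))"
find . | awk '{base=$0;sub(/.*\//,"",base);x=length(base);if(x>133)print $0,x,base}'
# the 150-character name is flagged; '.' itself is too short to be printed
```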


Secure deletion of data

Encrypting data is very important but one also needs a way to erase files without traces. It is no good being able to encrypt a file if you can not delete the original or working copies.

Secure Delete Files

shred: Linux has a built-in command shred which does a multiple-pass overwrite of the selected data, to make a read based on residual information at the edges of the magnetic tracks almost impossible, before the file is deleted. This is not foolproof for all file systems and programs, as temporary copies may be made and modern file systems do not always write data in the same place; however, on an ext2, ext3 or ext4 system with the default settings in Ubuntu Linux it is acceptable.

shred -u sensitive.text

Do a man shred to find out more.
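
A minimal demonstration on a throwaway file - shred overwrites it (three passes by default) and the -u option then removes it:

```shell
echo "secret data" > /tmp/sensitive.txt
shred -u /tmp/sensitive.txt
ls /tmp/sensitive.txt 2>/dev/null || echo "file gone"   # prints: file gone
```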

System Housekeeping to reduce TimeShift storage requirements

The new update philosophy by Mint of presenting every possible update and using TimeShift if there are problems may lead to a more secure system but does lead to a large build up of cached downloads of updates in the apt cache and of kernels. There are possibilities for some automatic reduction in cached downloads but Kernels have to be deleted manually.

Another area where there can be large amounts of little used data is in log files which are normally transient in the filesystem but have long term copies in TimeShift. On one of my machines I had about 7 Gbytes of system and systemd log files in /var/log . A good way to look for problems is to use the built in Disk Usage Analyser in the menu which allows one to zoom in to find unexpectedly large folders. The main areas to look into are:

  1. The apt cache of downloaded updates ~ 1 Gbyte
  2. Old unused kernels ~ 450 Mbytes per kernel and ~ 5 kernels
  3. Systemd Journal Files ~ 3 Gbytes on one of my machines.
  4. Log Files ~ 2.1 Gbytes reduction on one machine - should be less as I also found there was a problem with log file rotation at the time.

Overall I quadrupled the spare space on the Defiant from 4.2 Gbytes to 16.4 Gbytes on a 42 Gbyte root partition and that should improve further as the excess rolls out of the saved snapshots.

The last two may have been higher than would be expected as there was a problem with log file rotation at the time. You need not make the changes for 4. Log files unless you find you frequently have large files; it is also much more complex, involving extensive use of the terminal.

We will now look at the most likely problem areas in detail:

1. The apt cache of downloaded updates.

All updates which are downloaded are, by default, stored in /var/cache/apt/archive until they are no longer current, ie there is a more up-to-date version which has been downloaded. As time goes on and more and more updates take place this will grow in size, and TimeShift will not only back them up but also the ones which have been replaced. Fortunately it is easy to change the default to delete downloaded files after installation, which will also save space in TimeShift. The only reason to retain them is to copy them to another machine to update it without an extra download, which is most appropriate if you are on mobile broadband; the default can be changed when you need to do that and the files cleared manually before TimeShift finds them.

The best and very simple way is to use the Synaptic Package Manager

Synaptic Package Manager -> Settings -> Preferences -> Tick box for 'Delete download files after installation' and click 'Delete Cached Package Files'.

This freed 1.0 Gbyte in my case and they will slowly fall out of the TimeShift Snapshots - a good start

2. Out of date Kernels


Update Manager -> View -> Linux Kernels - You will get a warning which you continue past to a list of kernels.

I had 7 installed. You need at least one well-tried earlier kernel, perhaps 2, but not 7. So I deleted all but three of the earlier kernels. You have to do each one separately and it seems that it updates the grub menu automatically. Each kernel takes up about 72 Mbytes in /boot, 260 Mbytes in /lib/modules and 140 Mbytes in /usr/src as kernel headers, so removing 4 old kernels saved a useful 1.9 Gbytes.

There is a script being considered for 19.1 https://github.com/Pjotr123/purge-old-kernels-2 which may be used to limit the number of kernels.

3. Limiting the maximum systemd journal size

The journal is a special case of a raw log file type and comes directly from systemd. I sought information on limiting the size and found https://unix.stackexchange.com/questions/130786/can-i-remove-files-in-var-log-journal-and-var-cache-abrt-di-usr which gave a way to check and limit the size of the systemd journal files in /var/log/journal/

You can query using journalctl to find out how much disk space it's consuming by:

journalctl --disk-usage
You can control the total size of this folder using a parameter in your /etc/systemd/journald.conf . Edit from the file manager (as root) or from the terminal by:

xed admin:///etc/systemd/journald.conf

and set SystemMaxUse=100M

Note - do not forget to remove the # from the start of the line!
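
The relevant part of /etc/systemd/journald.conf then looks like this (all other lines can stay commented out with their defaults):

```
[Journal]
SystemMaxUse=100M
```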

After you have saved it it is best to restart the systemd-journald.service by:

sudo systemctl restart systemd-journald.service
This immediately saved me 3.5 Gbytes and that will increase as Timeshift rolls on.

4. Limiting the size of Log files [Advanced]

The system keeps a lot of log files which are also saved by TimeShift and these were causing me to run out of space on my Defiant. The main problem was identified by running the Disk Usage Analyser as being in the /var/log area, where about 7 Gbytes was in use! Of this, 3.6 Gbytes was in /var/log/journal/, which I have already covered, and the single syslog file was over 1.4 Gbytes. In contrast the Lafite had only 385 Mbytes in journal, 60 Mbytes in the cups folder and 57 Mbytes in syslog out of a total of 530 Mbytes. I have a very easy way to pick out the worst offenders using a single terminal command which looks for the largest folders and files in /var/log where they are all kept. You may need to read man du to understand how it works! It needs sudo as some of the log files have unusual owners.

sudo du -hsxc --time /var/log/* | sort -rh | head -30

which on my system now produces

pcurtis@defiant:~$ sudo du -hsxc --time /var/log/* | sort -rh | head -20
217M 2018-08-31 05:11 total
128M 2018-08-31 05:11 /var/log/journal
28M 2018-06-14 12:02 /var/log/installer
27M 2018-08-31 05:11 /var/log/syslog
20M 2018-08-31 05:11 /var/log/kern.log
6.1M 2018-08-31 04:48 /var/log/syslog.1
3.1M 2018-08-31 05:11 /var/log/Xorg.0.log
1.6M 2018-08-31 05:00 /var/log/timeshift
480K 2018-08-26 16:00 /var/log/auth.log.1
380K 2018-07-28 11:27 /var/log/dpkg.log.1
336K 2018-08-26 11:15 /var/log/boot.log
288K 2018-08-30 04:06 /var/log/apt
288K 2018-08-16 08:12 /var/log/lastlog
192K 2018-08-26 11:37 /var/log/kern.log.1
176K 2018-08-26 16:02 /var/log/samba
172K 2018-08-30 04:06 /var/log/dpkg.log
152K 2018-08-31 05:00 /var/log/cups
148K 2018-06-29 03:40 /var/log/dpkg.log.2.gz
128K 2018-08-26 16:02 /var/log/lightdm
120K 2018-08-26 11:37 /var/log/wtmp

Note: The above listing does not distinguish between the folders (where the total of all subfolders is shown) and single files. For example journal is a folder and syslog and kern.log are files. We have considered journal already and one should note that although journal is in /var/log it is NOT rotated by logrotate. installer, timeshift and Xorg are further special cases.

When I started I had two huge log files (syslog and kern.log) which did not seem to be rotating out and totalled nearly 3 Gbytes. The files are automatically regenerated so I first checked there were no errors filling them continuously, by using tail to watch what was being added and to try to understand what had filled them:

tail -fn100 /var/log/syslog

tail -fn100 /var/log/kern.log

The n100 means the initial 'tail' is 100 lines instead of the default 10. I could see nothing obviously out of control so I finally decided to delete them and keep a watch on the log files from now on.

Note: Although the system should be robust enough to regenerate the files, I have seen warnings that it is better to truncate. This ensures the correct operation of all programs still writing to the file in append mode (the typical case for log files).

sudo truncate -s 0 /var/log/syslog
sudo truncate -s 0 /var/log/kern.log
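
The effect of truncation is easy to demonstrate on a scratch file - the content goes but the file itself remains, so programs appending to it are unaffected:

```shell
echo "some old log lines" > /tmp/demo.log
truncate -s 0 /tmp/demo.log
stat -c '%s' /tmp/demo.log   # prints 0 - the file still exists but is empty
```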

Warning: If your log files are huge there is probably a reason. Something is not working properly and is filling them. You should try to understand why before just limiting the size or deleting. See https://askubuntu.com/questions/515146/very-large-log-files-what-should-i-do. In my case the main reason was that they had not been rotating for about 6 weeks but just increasing, although there were other errors I found.

Once you are sure you do not have a problem you can avoid future problems by limiting the size of each log file by editing /etc/logrotate.conf and adding a maxsize 10M parameter.

xed admin:///etc/logrotate.conf

From man logrotate

maxsize size: Log files are rotated when they grow bigger than size bytes even before the additionally specified time interval (daily, weekly, monthly, or yearly). The related size option is similar except that it is mutually exclusive with the time interval options, and it causes log files to be rotated without regard for the last rotation time. When maxsize is used, both the size and timestamp of a log file are considered.

If size is followed by k, the size is assumed to be in kilobytes. If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G are all valid
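
After editing, the size-related part of /etc/logrotate.conf might read like this (weekly and rotate 4 are the existing Mint/Ubuntu defaults; maxsize is the added line):

```
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# rotate earlier if a log file exceeds 10 MB
maxsize 10M
```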

My Note: The maxsize parameter only comes into play when logrotate is called which is currently once daily so its effect is limited unless you increase the frequency.

The easiest way to increase the frequency is to move the call of logrotate from /etc/cron.daily to /etc/cron.hourly.

If you look at the bottom of /etc/logrotate.conf you will see it has the potential to bring in configuration files for specific logs which inherit these parameters but can also overrule them and add additional parameters - see networkworld article 3218728 - so we also need to look at the specific configuration files in /etc/logrotate.d. As far as I can see none of them use any of the size parameters, so they should all inherit from /etc/logrotate.conf and one should only need to add a parameter if you want a different or lower value. /etc/logrotate.d/rsyslog is the most important to check as it controls the rotation of syslog and many of the other important log files such as kern.log.

cat /etc/logrotate.d/rsyslog

Most of the other file names in /etc/logrotate.d are obvious except for cups which seems to be handled by /etc/logrotate.d/cups-daemon .

Last rotation times are a very valuable diagnostic and are available by:

cat /var/lib/logrotate/status

The following is a good overall check to do periodically to see if you have succeeded:

sudo du -hsxc --time /var/log/* | sort -rh | head -20 && journalctl --disk-usage && date

Replacing Dropbox with pCloud


Dropbox has got difficult to use on Linux systems with encryption and has now limited the number of clients to 3, so I have been looking for a replacement both for common file storage between machines and also to synchronise my todo list and password manager. Many of the cloud offerings do not work with Linux or have other disadvantages.

pCloud appeared to offer a similar sort of implementation to Dropbox on computers and compatibility with Android, so I registered and started some tests. Initially everything seemed to be easy but since then I have found that there are a number of important differences. I still believe that pCloud will be the solution, to the extent that I have a paid subscription for 500 Gbytes of storage, but it has needed more work to get to an overall solution which will continue to work well offline yet allow reasonably automatic operation when connected via a wifi network or mobile internet.

In the case of Dropbox the local folder on a computer is the same size as the Cloud folder and everything is available offline and automatically synchronised when online. pCloud is very different - the Cloud folders can be much larger than the local storage, and files are only downloaded when required and online, unless a specific synchronisation is set up between a pCloud folder and a local folder. This has many advantages on a machine with limited storage, for example one using an SSD, but needs an extra step if you need offline local access. Setting up a synchronised folder like Dropbox is however only a few minutes of work for each user, and you can have lots of them whilst, I believe, Dropbox is limited to one in the free version.

Using pCloud On Linux Computers

Install pCloud by downloading the Linux client from pcloud.com - it is a single self-contained executable (AppImage) which just needs to be marked as executable and run.

The pCloud virtual drive (which looks like a folder) is called pCloudDrive and is in your home folder. Files and folders can be dragged and dropped into it and are automatically synchronised with pCloud.

To get a folder which is available offline (in this example called AndroidSync for reasons which will become obvious) one needs an extra stage: setting up a sync (folder pair) in the pCloud client linking a local folder to a folder in the Cloud.

The folder AndroidSync in your home folder will now always be the same as the one in the Cloud, and files added or deleted in one will be added or deleted in the other. For more information see https://www.pcloud.com/help/drive-help-center/whats-the-difference-between-pcloud-drive-and-pcloud-sync but note that the right click menus do not seem to exist under Linux so you have to use the method above to create the offline folder.

Now, when you have no Internet connection, you will still be able to work with all your files in the local folder offline. Once your connection is restored, pCloud Drive will update (synchronise) these folders. That way, you can be sure that you always have the latest version of your data.

I use AndroidSync primarily for sharing between Linux machines and my Android phones and tablets - see below - so it is kept small.

I also have a much larger shared folder which is used just between my Linux machines - this is my replacement for my Dropbox folder and it would even be possible to call the offline folder Dropbox! In my case the offline folder for sharing between my Linux machines is called Shoebox. I do not know of any limits on how many folders you can share as offline folders using pCloud.

Using pCloud under Android

The pCloud App

There is a pCloud App which offers several useful facilities such as automatic photo uploads but does not allow you to set up offline folders (but that was also the case for Dropbox). Both Simpletask and keypass2android have Dropbox support built in but currently do not support pCloud, so a separate way to obtain an offline folder is required.

The FolderSync App

I have now found an Android App called FolderSync which enables one to sync to a long list of different cloud services including pCloud, Mega, Onedrive, Google Drive and Dropbox. It has been around for a long time and seems to have a good reputation. Once one has got used to it, it makes it very easy to set up syncs, both one- and two-way, between a folder on the Android machine and one in the cloud, and has great flexibility.

You can set up multiple sync pairs and there are a huge number of options you can switch on and off or set. The synchronisation can be on a timed basis for each sync pair or on demand, and can also be an instant sync on the paid version. It can also be restricted to only carry out scheduled syncs when one is using wifi, and the wifi network can be chosen or blocked to prevent use of tethered wifi from another phone. The direction of the sync can be chosen (one way, two way etc). I have not tried it but it looks as if you can simultaneously set up syncs to several cloud services.

Hint: One setting I missed initially was the switch to sync the deleting of files which was off by default and that meant that files I deleted kept coming back shortly afterwards from another device - very confusing.

Hint 2: There is an option to turn on Wifi when doing syncs which seems to even override Aircraft mode which again can surprise one.

Example Programs and other uses of Foldersync

Dropbox has been so popular in its free version that many programs set up direct access through its Application Programming Interface (API), including programs I use such as keypass2android and simpletask. It is likely that now Dropbox has so many restrictions, programs will start to support other Cloud services as well. In the meantime they need to be set up to use local folders which are then synchronised by FolderSync.

I covered above how to set up a sync pair in FolderSync from folder AndroidSync to a similar folder in the internal memory for use by keypass2android and simpletask.

Using keypass2android with FolderSync: I just had to change the location in keypass2android from a direct connection to Dropbox to a subfolder of AndroidSync. Note you have to use the menu to allow one to find and access the system 'drive' in keypass2android.

Using Simpletask with FolderSync was more difficult as I was using the version that is configured to use Dropbox and I had to change to one called Simpletask Cloudless (which seems back to front) so I will include a quote from the author which helps make sense of this:

"Instead of including all kinds of cloud providers, the Simpletask Cloudless app will put your todo list on internal storage in /data/nl.mpcjanssen.simpletask. You can the use external applications such as Foldersync or Bittorrent sync to sync to a large collection of cloud offerings or own machines."

Initially I created a sync pair between AndroidSync/todo and /data/nl.mpcjanssen.simpletask on the phone. Interestingly, Simpletask Cloudless actually suggests use of FolderSync. Then I found this in one of the reviews:

"You can open [and use] a different todo-file from the 3-dot overflow menu (Open todo file). So easiest way is to move/copy the todo file to the desired spot and open it from the menu."

This implies that any file can be opened and a separate sync is not required but it took a long time for me to find out how the file selection was done. The tiny .. below the existing path can be clicked to move up a folder level and then clicks on folders and files bring you back down. I removed the extra sync pairing and used the selection mechanism instead to get to /AndroidSync/todo

Photographs and Video taken on Android Phones and Tablets: These are always a problem to get off a phone and onto one's computer. The pCloud App offers the ability to automatically upload all pictures on the phone. This gave me a problem on the pad as it found every folder of pictures, including a local copy of our web site with 30,000 images, which was definitely not what I wanted to upload. It is probably ideal for a phone used in a normal way and you can also set it to only upload new pictures.

I have found it better to just set up a sync pair using FolderSync to upload photos and that is what I have done on my phone. It was quite slow the first time as the pictures are quite large. I have set it to synchronise every 6 hours, and also an instant sync now I have the Pro version, and obviously only on wifi. I have also made it a one-way sync with delete off.

Other uses of fully synchronised Folders (Phonebox etc): I have set up a small routinely synchronised two way folder which I have called Phonebox. This was initially intended for OCR 'scans' and the resulting text as the Text Fairy App works like a dream on the Samsung Galaxy A6 but has gained various other sub folders for shared documents and transfers.

Uses of one way Synchronised Folders (My Website and Downloads): I am starting to set up local copies of My Website on the phones and tablets synchronised from the master local copy on the computer so they are always up-to-date. These are one way to avoid any risks of accidental changes or deletions of the master. Another use I am making of FolderSync is for journal subscriptions which I routinely download to the Pad as PDFs. I have set up a one way upload of the Android Download folder on the Pad to pCloud so I can also archive and read them on the computers via pCloud.

Backup of Pictures: I have been exploring keeping a synchronised copy of My Pictures on pCloud but I quickly discovered a limitation which is not specific to pCloud. My trial was of my pictures for the first 7 months of 2019, which comprised 6400 pictures and 24 Gbytes and took 36 hours to upload due to having ADSL-style broadband with a relatively slow upload speed; during that time the rest of internet access slowed to a crawl as, I assume, handshakes were delayed. The answer is to limit upload (and download?) speed in Settings when one needs rapid internet access, but I have yet to investigate the auto setting or fixed settings. I seemed to achieve a very steady 200 Kbytes/sec upload with BT Broadband.

Data usage of pCloud and FolderSync

This is a major consideration when one is away from home and using mobile data.

pCloud Data Usage:

There is no way to limit the flow of data within the pCloud application so transfers of large amounts of data to pCloud need to be done using wifi. For example, if you want to add your local copy of your website to pCloud as a synced folder you will incur a data transfer of at least the size of that folder. Our website has 46,000 files and is over 1 Gbyte so I would not want to do it using mobile internet where I have 3 Gbytes a month. With large transfers (in file numbers or overall size) to the Cloud, pCloud actually recommend you set up a synced folder pair for the initial transfer rather than use drag and drop, so I set up the machine end, then the sync pair, and left it to it at home where it took several hours on an ADSL wifi connection where the upload rate is limited.
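
Before committing a large folder as a sync pair it is worth checking its size and file count from the terminal; a sketch using a scratch directory (substitute the real local copy of your website):

```shell
# Throwaway example - replace /tmp/site-demo with the folder you plan to sync
mkdir -p /tmp/site-demo && touch /tmp/site-demo/index.html
find /tmp/site-demo -type f | wc -l   # number of files
du -sh /tmp/site-demo                 # total size on disk
```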

Once the setting up of the folder pair and initial transfer has been completed I have not been able to detect any significant data transfers unless files are changed or new files added. When files are changed the synchronisation is via rsync, which is clever and only transfers the changes rather than the whole file, so the overall overhead is small. I have been watching my Network Usage Monitor Applet (NUMA) and I can only detect a total flow of less than 100 kbytes per hour, of which pCloud contributes an unknown amount.

FolderSync Data Usage:

It is worth looking at the FolderSync help files at www.tacit.dk/foldersync/help which are very good and will help you understand all the settings and see potential ways to reduce mobile data usage.

Ways to reduce mobile data use of FolderSync

FolderSync offers a large number of ways to control the scheduling of synchronisations and what specific data is synchronised, the most important being:

The Folderpairs screen shows for each pair: a summary of the information above, the last sync time and next if timed, the status of the last sync and a button for a manual sync. It also shows the direction of the sync (To local folder, To remote folder and Two-way). This is useful as I find I have to make changes depending on circumstances and urgency for a synchronisation and it is easy to lose track of any settings you have modified.

Ways to measure the data used by an Android App.

The Android system keeps a number of useful totals of mobile data use, including by an individual app. Use Settings -> Apps, select the App, and at the top of the list under Usage is Mobile data, which in my case for FolderSync was 32.65 MB of 3.89 GB used since 18 April (as of 19th July), ie less than 1% of my total mobile usage in a period where we were away for over half the time.

Settings -> Connections -> Data usage -> Mobile data usage gives you a view of data usage over time and again there is a breakdown by app of the monthly totals which is even more useful. Clicking again gives a breakdown into foreground and background usage.

Free and paid (Pro) Versions of the pCloud and FolderSync software.

Free and paid versions of FolderSync. There are a number of reasons why I paid my £2.79 for the Pro version. Firstly the adverts are intrusive, but more importantly Pro enables the instant sync option, which is useful for keeping everything in sync when changes are being made on several machines. It also allows filters to be set up. It is well worth while as one purchase covers use on all machines installed with the same Google user.

pCloud Pro. The free version comes with up to 10 Gbytes, but what is not said is that you have to earn some of them, so in practice it is 7 Gbytes. It may be worth paying for extra storage as, unlike Dropbox, one can use pCloud as a huge extra virtual drive whenever you are connected, without having to match the size of the Cloud storage to local storage, so keeping access to large numbers of pictures on machines with small SSDs becomes feasible. There is a lifetime option which is more costly than an external hard drive but not ridiculous, nor are the monthly charges compared to Dropbox. I did my initial testing with the free version but quickly took up a special offer of 500 Gbytes for just over £20 per year.

Update added 28 May 2020.

Uninstalling pCloud and Removing and Re-making Synced Folders. Requirements for work-space and cache.

I had problems on one of the machines where pCloud stopped synchronising and displayed an exclamation mark in the blue icon.

This led me to do a search for files containing [conflicted], but there were few conflicted files and that did not seem to be the problem. I then progressively removed the sync folder links and found that removing the sync to my website cured the problem; the other folders synced without any obvious problems when they were re-synced - just the updates correctly propagating. I did not remake My Website as I wished to back that up first.

I found no instructions on the pCloud website about uninstalling. An internet search found https://askubuntu.com/questions/1041015/how-do-i-uninstall-pcloud-client-properly, which states that pCloud indicate one should delete the following files:

there are also:

I also ended up contacting pCloud support and they offered the same suggestion of doing a complete reinstall, but that did not solve it. I tried deleting the sync, deleting the local version of My Web Site (having backed it up to an archive) and remaking the sync to download it again. This stopped short of completion every time. I used the advanced tab to ignore a few folders and that got further.

I finally realised that pCloud reserves workspace on the drive in use as well as its main cache, and reducing that reservation showed it to be the cause, as the sync immediately downloaded more.

The drive was very full and contained several years of pictures as well as the local website, both needed for updating the web site. Reducing the workspace could only be a temporary measure to demonstrate the cause of the problem, so I was forced to make partition changes, which fortunately went well and gave me another useful 12 Gbytes.
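If you hit similar symptoms, it is worth checking pCloud's local footprint against the free space on the drive before resorting to repartitioning. A rough sketch follows - the ~/.pcloud location matches the files mentioned in the uninstall discussion above but is an assumption, so check your own system:

```shell
# Size of the pCloud cache directory, if present (the path is an
# assumption - the client may use a different location on your install)
du -sh ~/.pcloud 2>/dev/null || echo "no ~/.pcloud directory found"

# Free space remaining on the filesystem holding your home directory
df -h --output=avail,target "$HOME" | tail -n 1
```

If the free space is close to the size pCloud reserves as workspace, syncs can stall well before the drive reports itself as full.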

Summary and Conclusions on pCloud

If the pCloud and FolderSync Pro combination proves over time to work reliably and do what I have said above, it will be significantly better and more flexible than Dropbox for me, and it offers significantly more free storage. At the time this was last updated I had been using them for 3 months, much of the time using mobile phone connections.

The comparative reviews always place pCloud very high, with the only shortfall being in the facilities available for collaborative working. There are few other obvious choices if you are using Linux: Dropbox (3 devices only in the free version), pCloud, Mega, Google Drive (but no official Linux client) and OneDrive (again no official Linux client). I currently have it installed on 4 Linux machines, mostly with several users, two Android phones and one Android pad.

FolderSync has been around for many years and also seems to have a high reputation, and I am continually finding additional uses for its flexible shared folders. Much of the time I have been using mobile internet, and the many options have allowed me to control data usage well using a mixture of manual and automatic synchronisation. The sheer number of settings available could however be confusing to a new user, although I find the main tabs display the important settings you have made.

Link to W3C HTML5 Validator Copyright © Peter & Pauline Curtis
Fonts revised: 28th April, 2021