Diary of System and Website Development
Part 31 (January 2019 - December 2019)

April 2019

Replacing Dropbox with pCloud

Introduction

Dropbox has become difficult to use on Linux systems with encryption and has now limited the number of clients to 3, so I have been looking for a replacement, both for common file storage between machines and also to synchronise my todo list and password manager. Many of the cloud offerings do not work with Linux or have other disadvantages.

pCloud appeared to offer a similar sort of implementation to Dropbox on computers and compatibility with Android, so I registered and started some tests. Initially everything seemed easy but since then I have found that there are a number of important differences. I still believe that pCloud will be the solution, to the extent that I have a paid subscription for 500 Gbytes of storage, but it has needed more work to get to an overall solution which will continue to work well offline yet allow reasonably automatic operation when connected via a Wifi network or mobile internet.

In the case of Dropbox the local folder on a computer is the same size as the Cloud folder and everything is available offline and automatically synchronised when on line. pCloud is very different - the Cloud folders can be much larger than the local storage folders and files are only downloaded when required and on-line, unless a specific synchronisation is set up for a pCloud folder. This has many advantages on a machine with limited storage, for example one using an SSD, but it needs an extra step if you need offline local access. Setting up a synchronised folder like Dropbox is however only a few minutes of work for each user and you can have lots of them whilst, I believe, Dropbox is limited to the one in the free version.

Using pCloud On Linux Computers

Install pCloud by:
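In outline, and as a sketch rather than a recipe since the details change with client versions: the Linux client is downloaded from pcloud.com as a single AppImage-style executable (a file simply called pcloud at the time of writing) which just needs to be made executable and run.

chmod +x ~/Downloads/pcloud     # make the downloaded file executable (path and filename are examples)
~/Downloads/pcloud              # run it and log in to your pCloud account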

The pCloud virtual drive (which looks like a folder) is called pCloudDrive and is in your home folder. Files and folders can be dragged and dropped into it and are automatically synchronised with pCloud.

To get a folder which is available offline (in this example called AndroidSync for reasons which will become obvious) one needs an extra stage.
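In outline, and from memory so the exact wording may differ between client versions: open the pCloud preferences from its tray icon, go to the Sync tab, choose to add a new sync, select a local folder (here ~/AndroidSync) and the pCloud folder it should mirror, and save.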

The folder AndroidSync in your home folder will now always be the same as the one in the Cloud, and files added or deleted in either will be added or deleted in the other. For more information see https://www.pcloud.com/help/drive-help-center/whats-the-difference-between-pcloud-drive-and-pcloud-sync but note that the right click menus do not seem to exist under Linux so you have to use the method above to create the offline folder.

Now, when you have no Internet connection, you will still be able to work with all your files in the local folder offline. Once your connection is restored, pCloud Drive will update (synchronise) these folders. That way, you can be sure that you are always working with the latest version of your data.

I use AndroidSync primarily for sharing between Linux machines and my Android phones and tablets - see below - so it is kept small.

I also have a much larger shared folder which is used just between my Linux machines - this is my replacement for my Dropbox folder and it would even be possible to call the offline folder Dropbox! In my case the offline folder for sharing between my Linux machines is called Shoebox. I do not know of any limits on how many folders you can share as offline folders using pCloud.

Using pCloud under Android

The pCloud App

There is a pCloud App which offers several useful facilities such as automatic photo uploads but does not allow you to set up offline folders (but that was also the case for Dropbox). Both Simpletask and Keepass2Android have Dropbox support built in but currently do not support pCloud, so a separate way to obtain an offline folder is required.

The FolderSync App

I have now found an Android App called FolderSync which enables one to sync to a long list of different cloud services including pCloud, Mega, Onedrive, Google Drive and Dropbox. It has been around for a long time and seems to have a good reputation. Once one has got used to it, it makes it very easy to set up syncs, both one and two way, between a folder on the Android machine and one in the cloud, and it has great flexibility.

You can set up multiple sync pairs and there are a huge number of options you can switch on and off or set. The synchronisation can be on a timed basis for each sync pair or on demand. It can also be an instant sync in the paid version. It can also be restricted to only carry out scheduled syncs when one is using wifi, and the wifi can be chosen or blocked to prevent use of tethered wifi from another phone. The direction of the sync can be chosen (one way, two way etc). I have not tried it but it looks as if you can simultaneously set up syncs to several cloud services.

Hint: One setting I missed initially was the switch to sync the deletion of files, which is off by default; that meant files I deleted kept coming back shortly afterwards from another device - very confusing.

Hint 2: There is an option to turn on Wifi when doing syncs which seems even to override Aircraft mode, which again can surprise one.

Example Programs and other uses of Foldersync

Dropbox has been so popular in its free version that many programs set up direct access through its Application Programming Interface (API), including programs I use such as Keepass2Android and Simpletask. It is likely that now Dropbox has so many restrictions programs will start to support other Cloud services as well. In the meantime they need to be set up to use local folders which are then synchronised by FolderSync.

I covered above how to set up a sync pair in FolderSync from the folder AndroidSync to a similar folder in the internal memory for use by Keepass2Android and Simpletask.

Using Keepass2Android with FolderSync: I just had to change the location in Keepass2Android from a direct connection to Dropbox to a subfolder of AndroidSync. Note you have to use the menu in Keepass2Android to allow one to find and access the system 'drive'.

Using Simpletask with FolderSync was more difficult as I was using the version that is configured to use Dropbox and I had to change to one called Simpletask Cloudless (which seems back to front), so I will include a quote from the author which helps make sense of this:

"Instead of including all kinds of cloud providers, the Simpletask Cloudless app will put your todo list on internal storage in /data/nl.mpcjanssen.simpletask. You can the use external applications such as Foldersync or Bittorrent sync to sync to a large collection of cloud offerings or own machines."

Initially I created a sync pair between AndroidSync/todo and /data/nl.mpcjanssen.simpletask on the phone. Interestingly, Simpletask Cloudless actually suggests use of FolderSync. Then I found this in one of the reviews:

"You can open [and use] a different todo-file from the 3-dot overflow menu (Open todo file). So easiest way is to move/copy the todo file to the desired spot and open it from the menu."

This implies that any file can be opened and a separate sync is not required, but it took a long time for me to find out how the file selection was done. The tiny .. below the existing path can be clicked to move up a folder level and then clicks on folders and files bring you back down. I removed the extra sync pairing and used the selection mechanism instead to get to /AndroidSync/todo.

Photographs and Video taken on Android Phones and Tablets: These are always a problem to get off a phone and onto one's computer. The pCloud App offers the ability to automatically upload all pictures on the phone. This gave me a problem on the pad as it found every folder of pictures, including a local copy of our web site with 30,000 images, which was definitely not what I wanted to upload. It is probably ideal for a phone used in a normal way and you can also set it to only upload new pictures.

I have found it better to just set up a sync pair using FolderSync to upload photos and that is what I have done on my phone. It was quite slow the first time as the pictures are quite large. I have set it to synchronise every 6 hours, and also an instant sync now I have the Pro version, and obviously only on wifi. I have also made it a one way sync with delete off.

Other uses of fully synchronised Folders (Phonebox etc): I have set up a small routinely synchronised two way folder which I have called Phonebox. This was initially intended for OCR 'scans' and the resulting text, as the Text Fairy App works like a dream on the Samsung Galaxy A6, but it has gained various other sub folders for shared documents and transfers.

Uses of one way Synchronised Folders (My Website and Downloads): I am starting to set up local copies of My Website on the phones and tablets, synchronised from the master local copy on the computer so they are always up-to-date. These are one way to avoid any risk of accidental changes or deletions of the master. Another use I am making of FolderSync is for journal subscriptions which I routinely download to the Pad as PDFs. I have set up a one way upload of the Android Download folder on the Pad to pCloud so I can also archive and read them on the computers via pCloud.

Backup of Pictures: I have been exploring keeping a synchronised copy of My Pictures on pCloud but I quickly discovered a limitation which is not specific to pCloud. My trial was of my pictures for the first 7 months of 2019, which comprised 6400 pictures and 24 Gbytes and took 36 hours to upload, due to having ADSL style broadband with a relatively slow upload speed, during which time the rest of my internet access slowed to a crawl as, I assume, handshakes were delayed. The answer is to limit upload (and download?) speed in Settings when one needs rapid internet access, but I have yet to investigate the auto setting or fixed settings. I seemed to achieve a very steady 200 Kbytes/sec upload with BT Broadband.
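As a sanity check on those numbers: 24 Gbytes is roughly 25 million Kbytes, and at 200 Kbytes/sec that is about 125,000 seconds, or around 35 hours, which matches the 36 hours observed.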

Data usage of pCloud and FolderSync

This is a major consideration when one is away from home and using mobile data.

pCloud Data Usage:

There is no way to limit the flow of data within the pCloud application so transfers of large amounts of data to pCloud need to be done using wifi. For example, if you want to add your local copy of your website to pCloud as a synced folder you will incur a data transfer of at least the size of that folder. Our website has 46,000 files and is over 1 Gbyte so I would not want to do it using mobile internet where I have 3 Gbytes a month. With large transfers (in file numbers or overall size) to the Cloud, pCloud actually recommend you set up a synced folder pair for the initial transfer rather than use drag and drop, so I set up the machine end, then the sync pair, and left it to it at home where it took several hours on an ADSL Wifi connection where the upload rate is limited.
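Before setting up such a sync it is worth checking what you are committing to; the size and file count of a folder can be found with, for example (the path is just an illustration):

du -sh ~/website                   # total size of the folder
find ~/website -type f | wc -l     # number of files it contains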

Once the setting up of the folder pair and initial transfer has been completed I have not yet been able to detect any significant data transfers unless files are changed or new files added. When files are changed the synchronisation is via rsync, which is clever and only transfers the changes rather than the whole file, so the overall overhead is small. I have been watching my Network Usage Monitor Applet (NUMA) and I can only detect a total flow of less than 100 kbytes per hour, of which pCloud contributes an unknown amount.
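If a per-process view is wanted rather than an overall total, one possibility is the nethogs utility, which shows the bandwidth used by each process; it needs installing and running as root:

sudo apt-get install nethogs
sudo nethogs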

FolderSync Data Usage:

It is worth looking at the FolderSync help files at www.tacit.dk/foldersync/help which are very good and will help you understand all the settings and see potential ways to reduce mobile data usage.

Ways to reduce mobile data use of FolderSync

FolderSync offers a large number of ways to control the scheduling of synchronisations and what specific data is synchronised, the most important being:

The Folderpairs screen shows for each pair: a summary of the information above, the last sync time and next if timed, the status of the last sync and a button for a manual sync. It also shows the direction of the sync (To local folder, To remote folder and Two-way). This is useful as I find I have to make changes depending on circumstances and urgency for a synchronisation and it is easy to lose track of any settings you have modified.

Ways to measure the data used by an Android App.

The Android system keeps a number of useful totals of mobile data use, including by an individual app. Use Settings -> Apps, select the App, and at the top of the list under Usage is Mobile data, which in my case for FolderSync was 32.65 MB of the 3.89 GB used since 18 April when checked on 19th July, ie less than 1% of my total mobile usage in a period where we were away for over half the time.

Settings -> Connections -> Data usage -> Mobile data usage gives you a view of data usage over time and again there is a breakdown by app of the monthly totals which is even more useful. Clicking again gives a breakdown into foreground and background usage.

Free and paid (Pro) Versions of the pCloud and FolderSync software.

Free and paid versions of FolderSync. There are a number of reasons why I have paid my £2.79 for the Pro version. Firstly the adverts are intrusive but, more importantly, it enables the instant sync option which is useful to keep everything in sync when changes are being made on several machines. It also allows filters to be set up. It is well worth while as it covers use on all machines with the same Google account used to install it.

pCloud Pro. The free version comes with up to 10 Gbytes but what is not said is that you have to earn some of them, so in practice it is 7 Gbytes. It may be worth paying for extra storage as, unlike Dropbox, one can use pCloud as a huge extra virtual drive whenever you are connected, without having to match the size of Cloud storage to local storage, so keeping access to large numbers of pictures on machines with small SSDs becomes feasible. There is a lifetime option which is more costly than an external hard drive but not ridiculous, nor are the monthly charges compared to Dropbox. I did my initial testing with the free version but quickly took up a special offer of 500 Gbytes for just over £20 per year and, as the year progressed, took up a Black Friday 75% discount offer for a lifetime subscription on 500 Gbytes.

Uninstalling pCloud and Removing and Re-making Synced Folders. Requirements for work-space and cache. Added 28 May 2020.

I had problems on one of the machines where pCloud stopped synchronising and displayed an exclamation mark in the blue icon.

This led me to do a search for files containing [conflicted] but there were few conflicted files and that did not seem to be the problem. I then progressively removed the sync folder links and found that removing the sync to My Website cured the problem; the other folders synced when they were re-synced without any obvious problems - just the updates correctly propagating. I did not remake My Website as I wished to back that up first.

I found no instructions on the pCloud website on uninstalling. An internet search found https://askubuntu.com/questions/1041015/how-do-i-uninstall-pcloud-client-properly which states that pCloud indicate one should delete the following files:

there are also:

I also ended up contacting pCloud support and they offered the same suggestion of doing a complete reinstall, but that did not solve it. I tried deleting the sync, deleting (having backed up to an archive) the local version of My Web Site and remaking the sync to download. This stopped short of completion every time. I used the advanced tab to ignore a few folders and that got further. I finally realised that pCloud reserves workspace on the drive in use as well as its main cache, and reducing that showed it to be the cause as it immediately downloaded more.

The drive was very full and contained several years of pictures as well as the local website, both needed for updating the web site. Reducing the workspace could only be a temporary solution to demonstrate the cause of the problem, so I was forced to make partition changes which fortunately went well and gave me another useful 12 Gbytes.
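Two quick checks which make this sort of problem easier to spot are the free space on the partition and the size of the pCloud cache, which I believe lives in ~/.pcloud by default:

df -h ~              # free space on the partition holding the home folder
du -sh ~/.pcloud     # size of the pCloud cache (assuming the default location)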

Summary and Conclusions

If the pCloud and FolderSync Pro combination proves over time to work reliably and does what I have said above, it will be significantly better and more flexible than Dropbox for me and offers significantly more free storage. At the time this was last updated I had been using them for 8 months, much of the time using mobile phone connections.

The comparative reviews always place pCloud very high, with the only shortfall being in the facilities available for collaborative working. There are few other obvious choices if you are using Linux: Dropbox (3 clients only in the free version), pCloud, Mega, Google Drive (but no official Linux client) and Onedrive (again no official Linux client). I currently have it installed on 4 Linux machines, mostly with several users, two Android phones and one Android pad.

FolderSync has been around for many years and also seems to have a high reputation, and I am continually finding additional uses for the flexible use of shared folders. Much of the time I have been using mobile internet and the many options have allowed me to control data usage well using a mixture of manual and automatic synchronisation. The sheer number of settings available could however be confusing to a new user, although I find the main tabs display the important settings you have made.

3 August 2019

Git - Planning for and working from several machines

This has been transferred from Part 30 following considerable additional work on my second machine which has made me examine and revise my philosophy for handling branches once PRs have been generated and implemented, and emphasises the use of a version number. My original philosophy was to not destroy information, ie branches, until absolutely necessary, but I have realised that can be confusing and unnecessary given the way the update cycle works with Github and it is better to delete them as soon as possible, especially if you use more than one machine. Most of the contents of this section have been transferred to other pages but this keeps all the threads together.

When working on Github with applets you never merge your branch into your master as it goes via your online repository (origin); the pull request (PR) is made from origin and is then fetched back from the main repository (upstream). If you want to see what you have merged at a later stage you look at the commit into the main repository, or your local repository which is in step with it. The problem with this is that it can take many days before your PR is accepted and merged and you can fetch and merge the change - until then you need the branches. You will receive an email that the merge has been completed which will have links to Github which you can follow to confirm the pull request (PR) is complete and will often contain a statement that your remote branch is no longer required and a button to delete it.

Firstly there are several initial states when one starts to work on an alternative machine:

  1. Git has never been used and needs to be installed and configured.
  2. Git is installed and configured but the particular repository has not been cloned onto the machine.
  3. The machine has been used with the repository at some time in the past, or the whole home folder has been transferred and git has been installed.

In the first and second cases the procedures have been covered already under starting using git with the difference that one may already have remote branches on origin rather than an unchanged fork. You will have no local branches and no way of telling what was on the other machine which means you need to have procedures which make sure you never end up working on the same applets on both machines and certainly not with the same branch names.

The third case is the most likely and is even worse from the point of view of overlaps, as you will have an out of date local master which will probably have old local branches and you will also probably have stale remote branches - a stale branch is one where there has been a completed PR. You can find out the remote branches, but not anything about PRs and their status, by:

git fetch origin
git branch -r

But you still have to be certain that they have been pulled and merged. The easy way is to look on-line at Github at the list of your branches (ie at origin) where you can select active, stale etc. to view. Otherwise one has to look for a commit into upstream master.
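One way to look for that commit from the terminal is to search the upstream log for the applet name, for example (using an applet name which appears later in this section):

git fetch upstream
git log --oneline upstream/master | grep 'batterymonitor@pdcurtis'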

All the information in stale branches has been transferred via the PR and subsequent merge into the upstream master back to ones own master branch. Once the PR has been merged the matching local branch holds no additional information as the PR and any comments and revisions are retained and accessible on Github.

At any time an excellent way to get an overview is to use gitk --all which shows all the local and remote branches and how they all link together in a 'graphic' way. You can search for strings so you can easily trace how an applet has been updated. Remember that not all changes will be yours as some will be translations, and a team member may also have occasionally made minor coding changes to avoid problems with a new Cinnamon version. It is a very good way to find your last update and version number.

So what does all this mean for the best way to work to make it easy to keep track of the development and to switch machines?

Branch names, Commit messages and PR titles. There is a requirement from Mint that the applet name is used in the PR title and subsequent commits so they are easily tied to a particular applet. I add a version number as well so, for example, the branch will be batterymonitor@pdcurtis-1.3.9 and the first commit will have a message like batterymonitor@pdcurtis v 1.3.9 Enhance Audible Warnings. Any subsequent commits add an extra level but when they are finally squashed for the PR you get back to the higher level.

Applets do not have to have a version number but again I use them so everything is linked together. The version number goes in metadata.json as an extra line like:

"version": "1.3.9",

and is then displayed in About... along with the other applet details.

Deleting Local and Remote branches

After a development is complete one will need to tidy up and make sure that your changes have been merged back into your master; the local branch you have used for the development can then usually be deleted. If you are doing a collaborative development using Github your changes are usually incorporated via a Pull Request from a branch on your Github repository (which is a fork of the repository you are submitting the changes to). This means that you may need to not only delete your local branch but also the remote branch used to generate the Pull Request.

Deleting the remote branch is not as trivial as one would expect and the 'definitive' answer on stackoverflow on how to delete a git branch both locally and remotely has been visited nearly 4 million times! This article is so popular as the syntax is even more obscure than the rest of git and has changed with version. I will assume that everyone reading this has at least git version 1.7.0. I have already covered deleting a local branch, and forcing a delete if changes have not been merged, above but will repeat here for completeness some of the most useful commands when starting on an alternative machine.

To delete a single local branch use:

git branch -d branch_name

Note: The -d option only deletes the branch if it has already been fully merged. You can also use -D, which deletes the branch "irrespective of its merged status." Your local branches are usually not merged locally because they go via a PR, so the -D is almost always needed.

Delete Multiple Local Branches

git branch | grep 'string' | xargs git branch -d

where string is common to all the branch names - mine mostly have pdcurtis in them.

You may need to force the delete by:

git branch | grep 'string' | xargs git branch -D

Delete Remote Branch

As of Git v1.7.0, you can delete a remote branch using

git push origin --delete <branch_name>

This is all you need to do if you are just using a single machine for development, but if you are using several you may need an additional stage on the other machines in order to locally remove stale branches that no longer exist on the remote. Run this on all other machines:

git fetch --all --prune

to propagate changes.

Also if you have created a Pull Request on Github you will find that after it has been merged there is a tempting button to delete the branch. I often use it or sometimes use other mechanisms in github to delete my remote branches. This is perfectly acceptable but the local machines do not know and will have stale branches which you need to remove at some point on all your local machines, as above, by:

git fetch --all --prune

You can also use:

git remote update --prune origin
git remote update --prune upstream

Transferring an 'active' branch to a different machine

You should try to only work on a branch on a single machine. It is possible to continue work having transferred files from one machine to the other or via a remote branch but it should be the exception.

If you have to work on a remote branch started on a different machine

I have recently had to change machine and had an out of date (by 6 months) version of the git local repository, and I needed to modify a branch I had already pushed and started a PR from before going away with a different machine.

I updated my master branch from upstream by the usual

git checkout master
git fetch upstream
git merge upstream/master
git status

I next tidied up by deleting all old local branches to avoid any confusion as above.

I tidied up the remote branches as above by deleting all stale (already merged by a PR) branches on github and pruned.

Now the important part is to get the remote branch and set it up so it is tracked (ie it can be automatically pushed to and pulled from). I looked at https://stackoverflow.com/questions/9537392/git-fetch-remote-branch. What seems to have worked for me is the simplest way, namely

git fetch origin
git branch -r
git checkout remotebranchname

as below

peter@lafite:~/cinnamon-spices-desklets$ git branch -r
origin/F0701
origin/HEAD -> origin/master
origin/googleCalendar-l10n-fix
origin/master
origin/netusage-1.0.3
origin/netusage-1.0.4
origin/netusage@30yavash.com-issue_279
origin/simple-system-monitor-1.0.0
upstream/F0701
upstream/googleCalendar-l10n-fix
upstream/master
peter@lafite:~/cinnamon-spices-desklets$ git checkout netusage-1.0.4
Branch 'netusage-1.0.4' set up to track remote branch 'netusage-1.0.4' from 'origin'.
Switched to a new branch 'netusage-1.0.4'
peter@lafite:~/cinnamon-spices-desklets$

Note: This recognises that it is a remote branch and sets up the tracking. This was using a recent git version 2.17.1, which is still in the Mint 19.2 repositories on 10 Aug 2019, but there are rumours that this no longer works with even more recent versions, which take you to a detached HEAD state instead. Use instead:

git checkout --track origin/remotebranchname

which will end up with a local branch set up to track the remote branch.
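On newer git versions (2.23 onwards) the equivalent, as far as I can tell, is:

git switch --track origin/remotebranchname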

Note: This section has now been integrated into various parts of the Git page, which also has an annex covering this subject.

 

23 August 2019

Ongoing work - Unusual Situations and Recovery from Mistakes

I am currently revising An Introduction to Git and GitHub for Cinnamon Applet Development and this section has been extracted and is largely rewritten.

This section is now primarily concerned with identifying and extracting oneself from unusual situations, mainly recovery from mistakes. Git is designed to make losing information very difficult and there is no command which is the equivalent of undo. It concentrates on the workflow used for applet and desklet development for Cinnamon using Github although much is applicable to one's own developments using Git and Github.

The development environment and workflow for applets is slightly unusual so I will lay it out again for completeness. All the applets are kept in a common Cinnamon repository on Github which is forked by every applet author, who then has his own local repository. The local repository is updated from the main Cinnamon repository, which is called upstream, and your remote repository (called origin) is updated by pushes from your local repository. The master branches of all three repositories should always be kept in step by this cycle of fetch and merge from upstream to local and subsequent push from local to origin.

Any development is done in a local topic-branch which, when complete, is pushed to origin (your remote) and a Pull Request generated which brings it back into upstream, from where it is fetched and merged into your local master branch. This cycle brings the changes into your master without your having to merge the branch which contained them. Once one is sure the cycle is complete it is best to delete the local and remote branches. So the 'steady state' has upstream/master, master and origin/master identical, possibly with a few local topic-branches with ongoing work. Changes are never merged directly (locally) into your local master branch. It is important that the three repositories are always kept in step and one of the following sections deals with checking whether they are in step and how to correct the situation with minimum impact. There is one final twist to the updating and that is to do with your topic branches - it is usual to also keep the branch updated so the branch is from a commit close to or at HEAD before pushing to origin. This means even recent erroneous commits may end up in your branch as well as master.

There are several common ways to make mistakes and I am sure I will add others to the list which follows:

  1. The first mistake is in a commit and you want to undo the commit, change the message or split it up.
  2. The second is to forget to checkout master before doing a fetch and merge from upstream, which introduces commits into your topic-branch so you can no longer push it to origin for a clean pull request.
  3. The third is to make and commit changes in the working directory to master rather than your topic-branch.
  4. or even to merge the topic branch into your master

They can all be retrieved but the sooner you find out the better. It is permissible to make changes and tidy up in a topic-branch which has not been pushed to origin, but changes which have been pushed are accessible to others, including collaborators who may have already based work on them, and changing them is undesirable even if you notify your collaborators. However in the workflow used here origin is just a fork used as a mechanism to push changes to update your applet, so one can be a little more flexible in the ways one hides one's mistakes as it is unlikely anybody is actually using that fork directly!

Checking and re-synchronising the local master, origin/master and upstream/master

The most important thing is to always keep the cycle of upstream/master, your local master and origin/master in synchronisation, although lags are inevitable.

There is fortunately a form of the log command which enables one to compare branches and which can be used for this checking. To understand it see the Pro Git book and search for "double dot". It uses a .. range specification which resolves a range of commits that are reachable from one commit but aren't reachable from another.

You can, for example, ask Git to show you a log of just those commits with master..topic-branch – that means "all commits reachable by topic-branch that aren't reachable by master" – by

git log master..topic-branch

or in our case where we are looking for changes between upstream/master and master

git fetch upstream
git log --oneline HEAD..upstream/master

Note the fetch upstream to get the latest changes and the --oneline option to log to reduce the output. You can also use --pretty=oneline to give the full hash.

:~/cinnamon-spices-applets$ git log --oneline HEAD..upstream/master
18da10c8 (upstream/master) mem-monitor-text: issue 2476, fix RAM occupation calculation (#2548)
cb544d49 weather@mockturtl: New updated fi.po file in Finnish. (#2559)
be1ad798 hwmonitor@sylfurd - Major changes (#2557)
e8095840 [temperature@fevimu] Option added: Change color when the temperature becomes High or Critical (#2555)
81a7187b [temperature@fevimu] pot file updated - French translation: fr.po created (#2554)
c99ce37e Spices update fi (#2553)
897591e1 Make compatible with vertical panels (#2552)
2a096009 SpicesUpdate@claudiux v3.0.4 (#2547)
3c7b407f Fixes #2545 (Cinnamon 2.8 to 3.6) (#2546)
:~/cinnamon-spices-applets$

The commits shown will disappear when one does a merge of upstream/master.

What we are really interested in is the other way round as we want to check that we have no commits which are not from upstream/master

git fetch upstream
git log --oneline upstream/master..HEAD

which should be empty

:~/cinnamon-spices-applets$ git log --oneline upstream/master..HEAD
:~/cinnamon-spices-applets$

If you have commits, the only simple way out is going to be a hard reset to the previous commit followed by a fresh fetch and merge of upstream/master, which will lose all work since that point, both the accidental work in master and possibly the branches starting after that commit. See below for more details of reset and also ways to recover a deleted branch.

But before going into that let's look also at the next step of the cycle, which is from the local master to origin/master

~/cinnamon-spices-applets$ git log --oneline HEAD..origin/master
:~/cinnamon-spices-applets$ git log --oneline origin/master..HEAD

Here again there should be no differences after you have done a push. If there are you will have to force a push to overwrite the existing origin/master branch by:

git push -f origin master:master

-f is the force flag. Normally, some checks are applied before it is allowed to push to a branch. The -f flag turns off all checks. origin is the name of the remote where to push. master:master means: push my local branch master to the remote branch master. The general form is localbranch:remotebranch. This is another way to delete a branch on the remote: in that case, you push an empty local branch to the remote, thus deleting it:

git push origin :remote_branch_to_be_deleted

Types of reset and their uses

git reset is a powerful tool but one of the few commands where one can destroy data. It has three main modes.

git reset --soft <ID>
git reset --mixed <ID> (the default)
git reset --hard <ID>

<ID> will often be relative to HEAD, ie HEAD~ or HEAD~4.

The three options differ in the way they handle the index and working directory.

The soft option leaves the working directory and staging area (index) unchanged. If we look at git reset --soft HEAD~, it moves HEAD back to point to the previous commit with working directory and staging area unchanged, so you can change what you plan to stage and re-commit.

git reset --mixed (the default) leaves the working directory unchanged but removes staged files from the index so you can make changes, restage and commit. This is as close as you can get to an undo of a commit and also allows you to squash several commits if you go back a number of stages in a branch (but note an interactive rebase has more flexibility and is the preferred option in a topic-branch).

git checkout topic-branch
git reset HEAD~4

This undoes the last 4 commits in topic-branch, leaving all their changes in the working directory ready to be re-committed as one, ie squashed together.
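To complete the squash you then re-stage and commit the combined changes as a single commit; a minimal sketch (the message is only an example in the style used above):

git add -A
git commit -m "batterymonitor@pdcurtis v 1.3.9 combined changes"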

A hard reset also clears the working directory so it is one of the few ways you can destroy data, as there is no record of uncommitted data. It is however a useful way to correct your master if it gets out of step with upstream by

git reset --hard SHA1_prior_to_error
git fetch upstream
git merge upstream/master

This may leave recently deleted branches dangling and/or they may have extra commits to remove, depending on where the branch was located.

Recover a deleted branch

You should be able to do git reflog and find the SHA1 for the commit at the tip of your deleted branch, then just git checkout [SHA1]. And once you're at that commit, you can just git checkout -b [branchname] to recreate the branch from there. See https://stackoverflow.com/questions/3640764/can-i-recover-a-branch-after-its-deletion-in-git for more details. This can be compressed to:

git checkout -b <branch> <SHA1>

I have tried this and it is easiest to pipe into grep with part of the branch name or commit message to cut down the search for the latest SHA1. The following is a recovery of a branch I had routinely deleted 28 days before. See below for details:

:~/cinnamon-spices-applets$ git reflog | grep batterymonitor@pdcurtis-1.3.8
bad1c721 HEAD@{22}: checkout: moving from batterymonitor@pdcurtis-1.3.8 to master
66942db0 HEAD@{23}: rebase -i (finish): returning to refs/heads/batterymonitor@pdcurtis-1.3.8
bad1c721 HEAD@{28}: checkout: moving from master to batterymonitor@pdcurtis-1.3.8

:~/cinnamon-spices-applets$ git checkout 66942db0

Note: checking out '66942db0'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b <new-branch-name>

HEAD is now at 66942db0 batterymonitor@pdcurtis v 1.3.8 Change location of temporary files to home folder

:~/cinnamon-spices-applets$ git checkout -b recovered
Switched to a new branch 'recovered'

:~/cinnamon-spices-applets$ git checkout master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
peter@helios:~/cinnamon-spices-applets$ git branch
batterymonitor@pdcurtis-1.3.9
* master
netusagemonitor@pdcurtis-3.3.0.1
recovered
vnstat@linuxmint.com-1.0.2.1

The recovered branch was back where it was previously 'located' not at HEAD.

This may help if you have ended up by deleting branches when doing a git reset --hard to get your local master in step with upstream master after making errors. But it is also possible that differences in local master may make it impossible to do a fast forwards merge for a PR from the recovered branch.

So it is always safer to check your local master has not diverged from origin master before doing a rebase which moves a branch up to HEAD in anticipation of a push to origin ready for a PR.

Undo last Commit.

If you want to re-work the last commit - assuming it has not been already pushed to origin:

git reset HEAD~

This will undo the commit, restore the index to the state it was in before that commit, and leave the working directory with the changes uncommitted, so one can fix whatever needs to be modified before committing again.

If you have pushed it, or for more information, see https://stackoverflow.com/questions/19859486/how-to-un-commit-last-un-pushed-git-commit-without-losing-the-changes

Use of interactive rebase: Another powerful tool for changing commits is an interactive rebase (git rebase --interactive), which I have already covered for squashing commits before making a PR here. This enables you to squash commits together, reorder and merge them, and reword messages in a series of commits.
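For example, to rework the last four commits on the current topic-branch (git then opens an editor in which you mark each commit as pick, squash, reword and so on):

git rebase -i HEAD~4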

Help: I have merged upstream or master into a branch without meaning to

This is a very easy mistake to make when you intend to update your master from upstream and forget to checkout master. It usually causes a single commit in your topic branch which you want to get rid of. Most of the obvious ways lose the changes in your working folder as well as stepping back, but the following leaves any non-conflicting changes in your working directory. So what I do is to make a copy, outside of git, of the relevant part of the local repository which has the changes, as a belt and braces fall back, and then do:

git reset HEAD~1

The default option for reset is --mixed which does not make any non-essential changes in your working directory, so most times you just get rid of the commit and leave the working directory intact. If you find it has modified a file you have your backup to put the changes back.

There is an alternative way, covered later, if you realise whilst the commit is taking place.

Modifications made in the wrong branch

We have already discussed how to remove modifications/commits from master or a topic-branch by git reset HEAD~ but not how to move them to a different branch. To a large extent it depends on the sort of modification you have been making. It is unlikely you have made many modifications on the same file in the wrong branch without noticing you were editing a file which had already been modified. I have usually made the mistake when finishing a job off by updating change logs, help files and version numbers.

I have not used it yet but it seems stash should be a way to save any un-committed modifications currently in the working directory (WD) which can then be reapplied in a different branch. The extra commits can be removed using git reset, which leaves the WD untouched, before using git stash which saves the modifications on a stack and clears them from the WD. You can then change to the correct branch and reapply them, hopefully without conflicts, with a git stash apply.
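A minimal sketch of that procedure, assuming a single accidental commit and that the changes belong in topic-branch - untested, as noted below:

git reset HEAD~1          # undo the accidental commit, leaving its changes in the working directory
git stash                 # save the uncommitted changes on the stash stack and clean the working directory
git checkout topic-branch # switch to the branch the changes should have been made on
git stash apply           # reapply the changes, resolving any conflicts if they arise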

I will add more after I use this procedure.

More about Stashing in General

Often, when you’ve been working on part of a project, you want to switch branches for a bit to work on something else but you don’t want to do a commit of half-done work just so you can get back to this point later. The answer is the git stash command. Stashing takes the dirty state of your working directory – that is, your modified tracked files and staged changes – and saves it on a stack of unfinished changes that you can reapply at any time, and cleans the working directory.

You can save a stash on one branch, switch to another branch later, and try to reapply the changes. You can also have modified and uncommitted files in your working directory when you apply a stash – Git gives you merge conflicts if anything no longer applies cleanly.

You can have multiple stashes. To see which stashes you’ve stored, you can use git stash list and then use git stash apply stash@{n} to select stash n.
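For example:

git stash list               # lists entries as stash@{0}, stash@{1}, ... with the branch each was made on
git stash apply stash@{1}    # reapply a particular stash, leaving it on the stack
git stash drop stash@{1}     # remove it from the stack once you are happy it has been applied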

You can find out much more and see examples in Pro Git by Scott Chacon by a search for "Stashing and Cleaning". I keep a pdf of Pro Git, which is licensed under the Creative Commons Attribution Non Commercial Share Alike 3.0 license, permanently on my desktop.

Reminder about difftool and meld

One can use difftool which is set up to call meld to check (and even correct) the unmerged set of changes by:

git difftool HEAD

And a good way of seeing what you have currently changed in a branch when everything is up-to-date is

git difftool master

Reminder about gitk

It is useful to be able to see all the branches by:

gitk --all &

I use gitk whenever using git to visualise what I am doing and gitk is part of git so is always up-to-date.

4th September 2019

Howto handle various freezes of Linux, Mint, Cinnamon, Programs and Applets

The following sections have been extracted from cinnamon.htm for updating

Terminate an Unresponsive Programs using xkill

When one is experimenting with new software there is always a risk of programs freezing. Xkill is a tool for terminating misbehaving or unresponsive programs and is part of the X11 utilities pre-installed in Ubuntu and Linux Mint .

Use Alt+F2 to bring up a run box (or open a terminal) and enter xkill and return to turn the cursor to an X-sign, move the X-sign and drop it into a program interface with a left click to terminate the unresponsive program, or cancel the X-sign with a right-click.

One can easily add a shortcut key to launch xkill with the steps below.

  1. Go to Menu > Preferences > Keyboard.
  2. Under the Shortcuts tab, click the "+" button or "Add Custom Shortcut" to create a custom shortcut.
  3. Enter xkill to both the Name and Command boxes and click the Add button.
  4. Click on unassigned at the xkill row in the Keyboard Shortcuts window (unassigned is then changed to Pick an accelerator...).
  5. Press a new key combination, e.g. Ctrl+Alt+X (Pick an Accelerator ... is then changed to Ctrl+Alt+X).

Xkill is ready for use. Press the above key combination to turn the cursor to an X-sign, move the X-sign and drop it into a program interface to terminate the unresponsive program, or cancel the X-sign with a right-click.

Restarting Cinnamon

One sometimes finds that the panel has an unexpected appearance after moving icons around or loading new applets - this can usually be sorted out by restarting Cinnamon. There are several ways to do this:

  1. Right click on the empty section of the panel -> Troubleshoot -> Restart Cinnamon
  2. Alt+F2 brings up a 'Run Box'; enter r and return.
  3. Add the Restart-Cinnamon applet (restart-cinnamon@kolle) to the panel. I always have this during applet development.

Re-starting a Linux Mint or Cinnamon System without Rebooting

The Ctrl+Alt+Delete shortcut key in Linux Mint brings you a menu to log out of your system.

The Ctrl+Alt+Backspace shortcut key in Linux Mint and Cinnamon immediately takes you back to a log-in screen without the need to reboot the system - this often still works with a frozen system but you will lose unsaved data and it will upset firefox and other open programs.

Open a Virtual Console - often still available on frozen system

Ctrl+Alt+F1 through Ctrl+Alt+F6 will open Virtual Consoles with a login prompt in place of your graphic interface which will enable you to log into an existing user in terminal mode.

Ctrl+Alt+F7 returns to your graphic interface (X session). Some systems use a different console for the graphic interface - the command who will show all logged in users so it can be used to find it on your system.
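For example (hypothetical output - the entry with (:0) is the graphic session, here on tty7):

peter@defiant:~$ who
peter    tty7         2019-09-04 09:15 (:0)
peter    tty2         2019-09-04 10:02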

A Virtual Console login will often still be available when the graphic interface has frozen.

Whilst still in the virtual console, Ctrl+Alt+Del should shut down and reboot the machine.

Once logged in, you can now type systemctl reboot or systemctl poweroff to reboot or shutdown losing any open work. This assumes you are using a recent system with systemd initiation. You can also try sudo shutdown -h now

Frozen System - When all else fails.

If you have really made a big mistake or are using beta software it is possible to freeze a system; often this is because one has lost communication, and it is best to shut down the file system in an orderly manner rather than just press reset, even with the advanced file systems in Linux. The following is a technique which is active within most Linux kernels - it needs good dexterity but usually works.

Hold down Alt and SysRq (may be labelled Print Screen or PrtScn on your keyboard) together then add R, E, I, S, U and B in sequence - this needs long fingers or an assistant but does shut down the file systems and reboots cleanly.

There is some explanation in the Free Software Magazine at How to close down GNU/Linux safely after a system freeze with the SysRq key and there is a good Linux Mint Forum post What to do with frozen desktop. I often leave out the E and I.

Note: It is alleged that holding down the Alt key and continuing to hold it, then pressing and releasing the SysRq (PrtScn) key, then entering r e i s u b while continuing to hold the Alt key down, is sufficient and avoids dislocated fingers.

Note 2: Some keyboards need Ctrl+Alt+SysRq to be held.

18th September 2019

Creating LiveUSBs

Download the ISO file

The first activity is to download the .iso file ready to burn to a USB stick (or DVD). The download page is https://linuxmint.com/download.php and this has lots of useful links to keep you busy whilst the file is downloading; you want the Cinnamon version and probably the 64-bit build these days. If you have a legacy machine then Xfce or Lubuntu is a good choice, with fewer bells and whistles but much faster on an old machine - see my separate page on Lubuntu. The download (1.8 Gbytes for Cinnamon) takes me about 13 minutes with BT fibre broadband.

Checking your ISO file or LiveUSB

All the write-ups suggest you check the downloaded ISO file's MD5 checksum to make sure that not a single bit is in error in the ~1800 Mbytes you have downloaded - I tend not to bother as I have never had a problem and there are easy ways to check everything is perfect. The easiest way is to use the bottom entry in the grub menu, "Check the integrity of the medium", the first time you use the LiveUSB. This runs checks on every file on the medium against a list within a file on the medium.
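If you do want to check the download itself, a minimal sketch, assuming the checksum file published alongside the ISO (sha256sum.txt for recent Mint releases) has been downloaded into the same folder as the ISO:

sha256sum -c sha256sum.txt --ignore-missing    # reports OK for the ISO you have; other entries are skipped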

If you have a Linux system or another LiveUSB system you can open the LiveUSB in your file browser. You will see a file called MD5SUMS which contains a list of all the MD5 checksums and instructions on how to verify manually using md5sum -c MD5SUMS

Whilst displaying the folder in Nemo right click in blank space and 'Open in a terminal' then do:

md5sum -c MD5SUMS

You should get hundreds of lines ending with OK and finally a line saying "WARNING: 6 lines are improperly formatted". This is OK as those are the first 6 lines, which contain the instructions!

Creating a Basic (no persistence) LiveUSB

The Linux Mint installation guide at https://linuxmint-installation-guide.readthedocs.io/en/latest/burn.html recommends using the Mint USB Image Writer if you have a running Mint system, or a program called Etcher by Balena (https://www.balena.io/etcher/) if you are running Windows, Apple or other Linux distributions. I have tried Etcher and it is very simple and fast and I have tested it under Mint as well. The Linux version is a self contained AppImage which you download as a zip file, extract to your home folder, make executable and run by double clicking. The only caution is that it needs zenity installed before you run it under Linux by:

sudo apt-get install zenity

There are very full instructions and lots of screenshots at https://linuxhint.com/install_etcher_linux/ and many other places. It is simple and fast (10:35 including validation time of 1:00) for my Mint 19.2 ISO on a slow USB2 drive.

Booting your LiveUSB

We are almost there. We now have the LiveUSB and we just have to persuade the machine to boot from it. Some computers require you to hold down or press a key to give you a menu of boot choices; the best place to find this information is in your computer's user manual or the manufacturer's website. Common keys to try - Toshiba, IBM and others: press F12 while booting to get to the boot menu and choose CD-ROM or your USB drive. HP, Asus and others: press the TAB key while booting and select CD-ROM or your USB drive from the boot menu. HP: press F12 while booting to get to the boot menu and select the boot media. My Defiant uses F7 and the Helios F10. The options usually flash up on the screen at the start of the boot process but you do not usually have time to catch them the first time!

Older machines will need you to enter the BIOS (Basic Input Output System), often also called CMOS setup. The most common way to enter the BIOS is to press the DELETE key when the computer is first booted (this seems to be becoming standard). On other systems it could be a different key, or combination of keys, like ESC, F1, F2 (Toshiba), F9 (HP), F10, Ctrl-Esc, Alt-Esc, Ctrl-Alt-Esc, Ctrl-Alt-Enter, Ins or even others. You might have to press, press and hold, or press multiple times. The best way to find out the details is to look in the user's manual or search the manufacturer's website. Tip: If your computer is new and you are unsure of what key to press when it is booting, try pressing and holding one or more keys on the keyboard. This will cause a stuck key error, which may allow you to enter the BIOS setup.

Once in the BIOS setup you then have to navigate the very basic menus using the instructions at the bottom of the screen until you find the boot order and change it so that the USB or DVD is first. Then exit, saving your change. (You may want to change it back after you have finished experimenting as it is easy to leave a USB stick or DVD in the drive.)

It is possible that one cannot even start up with a LiveUSB or LiveDVD with some motherboard, graphics board, BIOS and SATA drive configurations. This is unlikely and has only happened to me once, and all is not lost as there are various options at GRUB boot time which can be appended to the startup string. Google for your combination as a start, as such conditions are very hardware specific and somebody will have solved the problem!

Changes for a system with Windows installed.

Turn Off Secure Boot and Fast Boot BIOS Options. New machines may well be set up for Secure Boot and that must be turned off before you can boot from a LiveUSB; whilst doing so you should also turn off Fast Boot, especially if you have a Windows machine which will be dual booted. Both are done in the BIOS and you will have to use the manual, the internet or just search for where the options are hidden!

Disable Fast Startup within Windows: Fast Startup is a Windows 8 and 10 feature designed to reduce the time it takes for the computer to boot up from being fully shut down. However, it prevents the computer from performing a full regular shutdown and will therefore cause problems in a dual boot system and should be inhibited. You can switch fast start-up off in the Windows Control Panel -> Power Options -> Choose what power buttons do -> Change settings that are currently unavailable -> uncheck Fast startup -> Save Changes.

Never use hibernation before booting into a different operating system; at best you will lose data.

Creating a Persistent LiveUSB

Basic and Persistent LiveUSBs

There are two sorts of LiveUSB, those with and without 'Persistence'. So what is 'Persistence' and why is it important for LiveUSBs? Wikipedia indicates: "Persistence – in computer science refers to the characteristic of data that outlives the execution of the program that created it. Without this capability, data only exists in volatile memory, and will be lost when the memory loses power, such as on computer shutdown." A persistent Linux LiveUSB install saves data changes back to the USB storage device instead of leaving the information in RAM. This data can then be used again on subsequent boots, even when booting from different machines.

In practice there are some limitations in saving system software, but the latest versions of Ubuntu and Mint maintain system configuration changes such as setting up Wifi, allow you to load applets and programs, and update system and application software - I have even done a full update successfully.

You can even take the stick to a completely different machine, plug it in and continue working. There are implementations which allow you to have several different systems on the same [large] USB stick and choose when you boot up - perfect for comparing systems and demonstrating. The limitations come when one is updating or loading new drivers, such as those for video cards, or making changes within the kernel, certainly where a re-boot may be required. It is also not advised to do a Mint install to disk from some implementations of persistence.

Advantages of a persistent LiveUSB:
Disadvantages of a persistent LiveUSB:
Additional information I have gleaned:

Persistence is built into the Linux Kernel but the methods forked many years ago and are not interchangeable between distributions; even the kernel arguments differ between Debian and various Debian based distributions.

Applet Development using a persistent LiveUSB

For me it is important in applet development as I can have a number of LiveUSBs with different versions of Mint/Cinnamon to check that changes do not affect earlier versions and, more importantly, to allow me to work with alpha and beta versions to ready applets for the next version. I often install git so I can do development on a LiveUSB.

How does one create a LiveUSB with persistence

This used to be easy using Unetbootin, which offered the option of up to 4 Gbytes of persistent storage for Ubuntu based ISOs, but persistence no longer seems to work with Unetbootin and the latest versions of Ubuntu or Mint. I looked around long and hard and it is difficult to get a good description of how persistence is implemented and how to create a LiveUSB with persistence. It seems that the best way to start is with a program called mkUSB which is basically a series of scripts which work under Linux, including from a LiveUSB.

Using mkUSB to flash a persistent LiveUSB

mkUSB runs under Linux so you either need an existing Linux system installed on a machine or you need to install it on a LiveUSB and then flash another USB stick with the persistent system; fortunately even most laptops have two USB ports.

First you have to install the mkUSB application from a PPA. The following needs to be copied as a single line into a terminal to install it:

sudo add-apt-repository ppa:mkusb/ppa && sudo apt-get update && sudo apt-get install --install-recommends mkusb mkusb-nox usb-pack-efi zenity

You can now start mkUSB from the menu and go through many screens.

  1. Select Yes to use version DUS
  2. Enter sudo password.
  3. Acknowledge overwrite warning.
  4. Select option “i” Install (Make a boot device)
  5. Select "p" 'Persistent live' - only Debian and Ubuntu
  6. Navigate to select source ISO;
  7. Select target drive from list; MAKE SURE IT IS YOUR USB DRIVE
  8. Select upefi usb-pack-efi (Default Grub from ISO File)
  9. Select % of remaining space for persistence (I use 50% so there is some space left for storage and transfers).
  10. Read the warnings, double check the installation target, select Go and then click “Go”.
  11. There will be a long period whilst files are copied with a progress bar showing
  12. The final stage of the creation is to flush the file system buffers to the USB drive.
  13. When the process has completed you will see a dialog with the phrase “Work done” highlighted in green. Click the “OK” button.
  14. If any other dialogs appear, close them by clicking on the “Quit” button.

It all looks quite complex but it is actually quite quick and obvious when one comes to do it. The first two of the following links have nice walk-throughs with screenshots which make it more obvious.

Using a LiveUSB created by mkUSB

It is now time to use the Persistent LiveUSB created by mkUSB.

The boot sequence will be slightly different to the basic LiveUSB as it will have a GRUB menu with a number of options:

The "Persistent Live to RAM" should be faster in operation if you have plenty of memory as the "ISO" is copied into memory for much faster access but slow to start due to the copy - several minutes on my Chillblast Helios.

Unfortunately the Grub options do not have a 'check integrity of medium' option so one should really check the MD5 checksum when one downloads the ISO.
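Checking is quick from a terminal; compare the output against the checksum published on the download page (the file names below are only illustrative):

md5sum linuxmint-19.2-cinnamon-64bit.iso
sha256sum ubuntu-18.04.3-desktop-amd64.iso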

 

25th September 2019

Domain name server changes

My hosting for pcurtis.com has had to be changed to a different server (jakarta) because the original server has become short of space.

This has resulted in problems with secure POP3, SMTP and FTP, and TSOhost told me I had to change the DNS servers from ns0.freezone.co.uk and ns1.freezone.co.uk to:

ns1.vidahost.com
ns2.vidahost.com

After a period to allow for DNS lookup migration, which can take up to 24 hours, I got the following from an nmap default scan:

peter@defiant:~$ nmap pcurtis.com

Starting Nmap 7.60 ( https://nmap.org ) at 2019-09-25 18:06 BST
Nmap scan report for pcurtis.com (87.247.245.150)
Host is up (0.014s latency).
rDNS record for 87.247.245.150: jakarta.footholds.net
Not shown: 928 filtered ports, 61 closed ports
PORT STATE SERVICE
21/tcp open ftp
25/tcp open smtp
80/tcp open http
110/tcp open pop3
143/tcp open imap
443/tcp open https
465/tcp open smtps
587/tcp open submission
993/tcp open imaps
995/tcp open pop3s
3306/tcp open mysql

Nmap done: 1 IP address (1 host up) scanned in 3.73 seconds
peter@defiant:~$
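A quicker way to confirm that the name server change itself has propagated is to query DNS directly (a sketch using dig rather than output I captured at the time):

# List the authoritative name servers currently being returned for the domain
dig NS pcurtis.com +short

# Check what a particular public resolver is handing out while propagation completes
dig @8.8.8.8 A pcurtis.com +short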

The change also means I can once more get full SSL security on my POP, IMAP and SMTP as well as getting my FTP to pcurtis.com working again. The required settings to start resolving from the new server "jakarta.footholds.net" are:

TSOhost Email Client settings on the jakarta.footholds.net server using SSL are now:

Username: full email account
Password: email account's password.

Incoming Server: jakarta.footholds.net
IMAP Port: 993 or POP3 Port: 995
SSL:On

Outgoing Server: jakarta.footholds.net
SMTP Port: 465
SSL:On

IMAP, POP3, and SMTP all require authentication set.

TSOhost cPanel FTP client on jakarta.footholds.net server:

Host: 87.247.245.150
Username: "pcurtiscom" and cPanel password
Port: 21

As the port has changed incoming mail filters have to be reset or deleted.

Password protect directories in cPanel hosting

If you want to restrict access to directories on your website, you can use your hosting account's built-in password protection feature, which provides a simple login box.

To password protect directories
  1. In the cPanel -> Files area click Directory Privacy.
  2. Select a directory to open (not the directory you want to use), and then click Go.
  3. Click the directory you want to use.
  4. Select Password protect this directory.
  5. Enter a Name for the protected directory, and then click Save.

After protecting the directory, you still need to add the users that can log in to it.

To create users
  1. Complete the fields in the Create User section.
  2. Click Save.

Visitors now have to enter a username and password combination to view the content.
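Behind the scenes cPanel is using Apache basic authentication (an htpasswd file and an .htaccess); setting up the equivalent by hand from a shell would look something like the following sketch - the paths are hypothetical, as cPanel chooses its own locations for the password files:

# Create the password file and the first user (you are prompted for the password)
htpasswd -c /home/pcurtiscom/.htpasswds/access/passwd someuser

# Protect the directory by dropping an .htaccess into it (hypothetical paths again)
cat > /home/pcurtiscom/public_html/access/.htaccess <<'EOF'
AuthType Basic
AuthName "Protected Directory"
AuthUserFile /home/pcurtiscom/.htpasswds/access/passwd
Require valid-user
EOF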

25th September 2019

[WIP] Setting up for securing the site (HTTPS access)

I have been looking at getting a certificate for a while but until recently one could not be used with a shared hosting service. Whilst sorting out my change of server I found some documentation on the TSOhost site which indicated the constraint had been removed both for their legacy shared hosting sites (with cPanel) and the newer Cloud based sites.

Reference info from cPanel -> Server information (on left of panel) which is useful because many techniques seem to depend on the cPanel and/or Apache version.

Hosting Package Professional Hosting
Server Name jakarta
cPanel Version 82.0 (build 16)
Apache Version 2.4.41
PHP Version 7.1.32
MySQL Version 10.3.18-MariaDB
Architecture x86_64
Operating System linux
Shared IP Address 87.247.245.150
Path to Sendmail /usr/sbin/sendmail
Path to Perl /usr/bin/perl
Perl Version 5.16.3
Kernel Version 4.19.44

There seem to be a number of steps, choices and options for getting an SSL Certificate.

Redirection to https: from http:

The next important issue is how to get HTTP traffic redirected to the “secure,” or HTTPS version of the URL?

  1. .htaccess redirection
  2. cPanel redirection (probably uses .htaccess redirection set up by wizard)

Redirection of both http://website.com and http://www.website.com to https://www.website.com needs to be implemented.

.htaccess redirection

Some reference and other links

Background searches have found:

# Forcing www in the URL

RewriteEngine On
RewriteCond %{HTTP_HOST} ^example.com
RewriteRule (.*) http://www.example.com/$1 [R=301,L]

# Removing www in the URL

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www.example.com
RewriteRule (.*) http://example.com/$1 [R=301,L]

Forcing the domain to serve securely using HTTPS (for any site)

The following forces any http request to be rewritten using https. For example, the following code forces a request to http://example.com to load https://example.com. It also forces directly linked resources (images, css, etc.) to use https:

RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301,NE]

If this isn't working for you, first check your line endings. Copy/paste from your web browser into a text editor may not work right, so after pasting into your text editor you should delete each line break and add it back in (line break = return key).

RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [OR,NC]
RewriteCond %{HTTPS} off
RewriteRule ^ https://www.domain.com%{REQUEST_URI} [NE,R=301,L]

Following looks the best but may need ...[OR] 443 or leave out .... 80 and [NE,R=301,L] at end

RewriteEngine On
RewriteCond %{SERVER_PORT} 80
RewriteCond %{HTTP_HOST} ^example\.com$ [OR]
RewriteCond %{HTTP_HOST} ^www\.example\.com$
RewriteRule ^(.*)$ https://www.example.com/$1 [R,L]

Note: There may be special rules for wordpress sites.

Initial disastrous test

RewriteEngine On
RewriteCond %{HTTP_HOST} ^pcurtis.com/access$ [NC,OR]
RewriteCond %{HTTP_HOST} ^www.pcurtis.com/access$ [NC]
RewriteRule ^(.*)$ http://www.pcurtis.com/access/$1 [R=301,L]

This loops because the L flag does not break out of processing - the rewrite system restarts from the top so it trips again and again. It is OK with a change to https: or to a different location.

Wordpress code ???

# BEGIN WordPress
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPress

Rewrite conditions which seem useful and frequently quoted.

RewriteCond %{SERVER_PORT} 80

RewriteCond %{SERVER_PORT} 443

RewriteCond %{HTTPS} off

RewriteCond %{HTTPS} on

RewriteCond %{HTTP:X-Forwarded-Proto} =https

RewriteCond %{ENV:HTTPS} on

RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

Interim proposal

RewriteEngine On

RewriteCond %{SERVER_PORT} 80
RewriteCond %{HTTP_HOST} ^pcurtis.com/access$ [NC,OR]
RewriteCond %{HTTP_HOST} ^www.pcurtis.com/access$ [NC]
RewriteRule ^(.*)$ https://www.pcurtis.com/$1 [R=301]

RewriteCond %{SERVER_PORT} 443
RewriteCond %{HTTP_HOST} ^pcurtis.com$ [NC]
RewriteRule ^(.*)$ https://www.pcurtis.com/$1 [R=301,L]

The first block does most of the work as it rewrites all http calls to https and also ensures they all have a www subdomain. The requirement for port 80 ensures that it does not loop and keep trying to rewrite to the same value.

The second catches any calls on port 443 (ie https calls) to example.com and rewrites them to https://www.example.com. This may be a luxury as one can live with external calls to https: always having to be to www.example.com

The NC flag makes the test Not Case dependent (case insensitive).

.htaccess "flow" considerations

The flow is more complex than I initially thought and it was very easy to end up with loops. The L (last) flag in a RewriteRule does terminate the steady processing down the file but, to quote the Apache documentation:

The [L] flag causes mod_rewrite to stop processing the rule set. In most contexts, this means that if the rule matches, no further rules will be processed. Use this flag to indicate that the current rule should be applied immediately without considering further rules.

If you are using RewriteRule in either .htaccess files or in <Directory> sections, it is important to have some understanding of how the rules are processed. The simplified form of this is that once the rules have been processed, the rewritten request is handed back to the URL parsing engine to do what it may with it. It is possible that as the rewritten request is handled, the .htaccess file or <Directory> section may be encountered again, and thus the ruleset may be run again from the start. Most commonly this will happen if one of the rules causes a redirect - either internal or external - causing the request process to start over.

It is therefore important, if you are using RewriteRule directives in one of these contexts, that you take explicit steps to avoid rules looping, and not count solely on the [L] flag to terminate execution of a series of rules.

An alternative flag, [END], can be used to terminate not only the current round of rewrite processing but prevent any subsequent rewrite processing from occurring in per-directory (htaccess) context. This does not apply to new requests resulting from external redirects.

In effect you often only finish after a pass reaches the bottom without any rule being activated that causes a redirect. I had a lot of looping until I understood that.

Testing .htaccess changes

I set up a couple of pages with a series of links between them to be able to test, but found that changes to .htaccess were not being recognised immediately or at all, and finally realised this was probably due to some form of caching. Clearing caches seemed to help but I still had a number of strange effects in both Firefox and Opera. Private Browsing is the next thing to try.
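An alternative which avoids browser caching entirely is to look at the raw response headers from a terminal with curl (a sketch - the URLs are just illustrative):

# Show the status line and headers of the first response only
curl -sI http://www.pcurtis.com/access/

# Follow the whole redirect chain and report the final status code and URL
curl -sIL -o /dev/null -w 'final: %{http_code} %{url_effective}\n' http://pcurtis.com/access/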

The other difficulty is that the Rewrite engine only kicks in once the HTTP request has been received - which means you need a certificate in order for the client to set up the connection to send the request over! It means that testing the other way round, https to http, does not work.

Working test of .htaccess in folder access

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteCond %{HTTP_HOST} ^www.pcurtis.com [NC]
RewriteRule (.*) http://pcurtis.com/access/$1 [R=301,L]

Changing 'off' to 'on' inhibits the forcing of non-www and allows tests of the other HTTPS detection conditions.

21st October 2019

Contact Forms

I found that my contact forms were not working following the server upgrade. They had not been changed for over 10 years and had a lot of general code as they served several sites, so it took a lot of rationalisation and testing before I eventually found that a PHP function used in the form handler to check that the email was in a valid format had been removed from PHP version 7.x, which the new server uses. The function call was

elseif(!eregi("^[[:alnum:]][a-z0-9_.-]*@[a-z0-9.-]+\.[a-z]{2,4}$", $email)) { echo "Bad"}

Alternatives to this function apparently include preg_match() with the i (PCRE_CASELESS) modifier, but I just removed the check as there was already a check before the form was sent.
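For reference, the same pattern moved across to preg_match() can be tried out from a terminal without touching the form handler (a sketch, not code I have put back into the site):

# The old POSIX-style pattern as a PCRE with the case-insensitive modifier
php -r 'var_dump((bool) preg_match("/^[[:alnum:]][a-z0-9_.-]*@[a-z0-9.-]+\.[a-z]{2,4}$/i", $argv[1]));' test@example.com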

I have taken the opportunity to change the contact and feedback mechanism from a popup to a responsive page, which is much better for mobile devices, and likewise the script handler is now a standard page rather than a response in the popup. I have added options to Cancel and Return to Previous Page to the initial page (contact.htm) and Go Back to Original Page to the response page (contact_response.php). These both use code which manipulates history like this:

<button onClick="history.go(-2)">Go Back to original page</button>

It was a major task to change all the calls from popups like

<a class="howtobar" href="javascript:fbpopitup('contact.htm')" title="Feedback and Contact Form with Email Link" >Peter and Pauline Curtis</a><br>

This was especially true as many used a feedback identifier after a $ string and needed individual editing. In total the site has ~820 main pages and most needed to be changed.

Website Search

Whilst doing this work I also improved the Search Page layout and changed it to HTML5.

I also updated the robots.txt pages which control the indexing of pcurtis.com and uniquelynz.com to remove most indexing of pages in folders - primarily used for images, scripts etc.
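The rules themselves are simple; a hypothetical fragment of the kind of entries involved (not the actual file, and the folder names are only illustrative), added from a terminal, would be:

# Append a couple of illustrative exclusion rules to robots.txt
cat >> robots.txt <<'EOF'
User-agent: *
Disallow: /images/
Disallow: /scripts/
EOF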

I also created templates for use by freefind for the 'results' pages, which I had never used before. This means the return pages have similar 'branding' to other pages. Use of linked .css files was allowed but not linked .js files so the format is slightly simpler.

<!-- This is not a website file but is uploaded as a template to freefind for the search results -->
<!DOCTYPE html>
<html lang="en-GB">
<head>
<base href="http://www.pcurtis.com">
<meta charset="utf-8">
<link rel="stylesheet" href="responsive.css">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>::title::</title>
</head>

<body class="plaingreyback">
<div class="srchlimitwidth960">
<table style="width:100%; padding-top:2px;">
<tr>
<td class ="srchtitle" > Site Search Results</td>
</tr>
</table>
<div style=" padding-left:6px; padding-right:5px;">
::content::
</div>
</div>
</body>

</html>

The pages are different for pcurtis.com and uniquelynz.com

Broken Links

I have been using an Online Broken Link Checker on the site and found that many of the older pages had a lot of stale links as well as a few mistakes in interpage links. UNZ has been completely cleared, all internal page links on pcurtis.com have been corrected and eventually I have dealt with all the other broken links.

I have not removed the links completely but changed the href=" in each to TITLE=" which means the links will appear when one hovers and can be easily found by a case sensitive search. This is not perfect as it throws errors that href is not present and one may also have two titles. I eventually added a pale red background via a global edit.

href="javascript:void(0)" class="brokenlink" TitlE="Broken Link -

W3 HTML5 web page validation

To quote:

The Nu Html Checker (v.Nu) is an ongoing experiment in better HTML checking, and its behavior remains subject to change. In particular, because new types of error checks continue to be actively added to the checker, there is no guarantee provided that if the checker reports zero errors for a particular document at one point in time, it will report zero errors for that same document at some later point in time.

I found several new errors were giving warnings on many pages and I have done some comprehensive global edits. I checked all the recent pages to do with travel and once the new warnings had been globally removed I still had to hand correct a small number of other errors such as extra </p> tags.

Some changes to remember:

In total 776 pages have been modified in some way!

Google Search Console

Google Search Console is a free service offered by Google that helps one monitor, maintain, and troubleshoot one's site's presence in Google Search results. You don't have to sign up for Search Console to be included in Google Search results, but Search Console helps you understand and improve how Google sees your site.

Search Console offers tools and reports for the following actions:

It provides this wide range of tools and other diagnostics once you are identified as the owner of a site by uploading a file it supplies to the site to prove ownership. The responsiveness check is one I have used in the past but there are many others.

27th October 2019

Photographing the Northern Lights

There are considerable constraints when photographing the Aurora. You are photographing a very faint phenomenon which may be at the limit of the eye or sometimes only visible on a photograph. You need a stable platform so you can get long exposures, ie a tripod or some rest is essential as exposures of several seconds may be required. One must be able to set a manual fixed focus to infinity as autofocus needs plenty of light and will hunt. You must also turn off any automatic stabilisation as again it may go unstable in very low light. The lens needs to be wide open - say below f4.0 - and the ISO (film speed equivalent) as high as possible without the result being so grainy as to be useless - on modern cameras 800 ASA should be fine and up to 3200 ASA may help get a picture. You must use a remote control or a timer to avoid disturbing the camera and to avoid vibration, even with a tripod. I would normally not state the obvious but I have been repeatedly blinded by people who think a flash will help or, almost as bad, a focus assist light - Turn Them Off and turn down the screen brightness.

Most compact cameras lack the sort of manual control required for good Auroral pictures but may have 'modes' such as 'night mode', 'low light' or even 'fireworks' which give you a chance under good conditions. Phones now have amazing cameras but little control, however there are Apps which provide alternative camera interfaces which enable you to exploit their full capabilities. I have the Open Camera App on my Android Samsung A6 which gives the ability to select manual or infinite focus, exposure, aperture and ISO (film speed equivalent), which gets one into the right range for a backup camera. Screen Apps which allow you to turn the screen brightness down to lower levels to maintain your night vision are also desirable.

You will see that stars are clearly visible on most Aurora photographs. A good test is to see if you can get a set up which enables you to get a clear picture of the star field. I could see far more stars than with my naked eye on a picture using a timer and 4 seconds exposure, f3.5, ISO 400 with Manual focus set to infinity and wide angle (Autofocus off, AF assist off and ISO extended in the menus) on my Panasonic Lumix DMC TZ80, an up-market compact camera, even on an evening with a full moon and high ambient lighting and whilst only using a very basic free tripod. One last point is that camera and phone batteries lose capacity rapidly as you approach zero centigrade so keep them warm, but also watch out for condensation!

Android Camera and Screen Apps

Android Camera Apps were recommended by the ship's photographers for extending the capabilities of phone and tablet cameras for the Aurora. These need a recent version of Android (6 or greater) for full functionality as they offer use of the new camera interface. The Camera App from the phone manufacturer will be optimised for your particular camera and may be best for point and shoot, whilst the specialised Camera Apps are essential to reach the camera's limits.

I have the Open Camera App loaded in parallel with the standard Samsung Camera App and it gives far more options including manual settings with ASA to 1600 and time to 1/10 second - these vary between devices which provide information on their capabilities to the App. Open Camera is very good for normal photography especially when you have time. You can bracket exposures and can easily use various ways to increase the dynamic range in pictures. Open Camera is also very fast for point and shoot. I have not tried all the options but it fares best in user feedback as well as being free.

It is worth looking at some of the many reviews of Camera Apps such as https://www.xda-developers.com/best-android-camera-app/ - a site which I have used for other things for many years. A number of Camera Apps keep coming up including Footej which looks promising but I have not tested it.

Another worthwhile type of App to install is some form of screen dimmer; ideally one should use a red light to maintain night vision, a fact well known to pilots and sailors whose instrument lights are usually red. I have an App called Twilight installed which primarily uses a blue filter but the adjustments such as colour temperature allow one to boost red and also get a very low level of light. Overall it seems very flexible and the automatic changes with sunrise and sunset mean it also helps solve the blue light problem. Screens emit blue light that tricks our brains into thinking we should be awake, and various studies show that using gadgets in the evening affects both the quantity and quality of our sleep.

There are a number of similar Camera and Screen Apps for iPhones and probably for Windows based phones but I have no personal experience of them.

3rd - 7th November 2019

Panic Stations - Mobile Browsers using WebKit looping

Whilst using Mobile Opera on my Samsung Galaxy A6, pages on our website were going into an endless loop.

To be more specific, when the phone orientation was changed from vertical to horizontal (portrait to landscape) the page continuously kept reloading, which could only be interrupted by returning to a vertical view. It was confirmed that this was also true with other WebKit based rendering engines in Chrome and the Samsung browser, but again only in landscape view, and it did not occur when browsers were set to display the 'desktop site'.

I have a common method of responding to orientation changes and large changes of width by reloading the page - this is actually very quick as everything is in cache - and this has been in use for nearly 5 years with no problems, so this was a mystery. It took a long time to sort out exactly what was happening although the code involved looked fairly simple and foolproof. The way it is implemented is covered in depth in my page on Mobile Friendly Responsive Web Site Design in the Responding Automatically to Orientation and Window Size Changes section. In fact the page on Using Lightbox Style Overlays to Display Web Images had some diagnostic information available which gave some clues in its inconsistency.

The code which triggers reloads when there are large changes of width or an orientation change which changes the width was:

window.addEventListener("orientationchange", responsiveReload);
window.addEventListener("resize", responsiveReload);

var twoCols = isTwoCols();
var threeCols = isThreeCols();
var fourCols = isFourCols();

function responsiveReload() {
 // Check for a change in the columns required before a reload
  if( twoCols == !isTwoCols() || threeCols == !isThreeCols() || fourCols == !isFourCols() ) {
    twoCols = isTwoCols();
    threeCols = isThreeCols();
    fourCols = isFourCols();
    if( isFirefox() ) {
      location.href=location.pathname
      return true;
    } else if (isApple()){
      location.reload(false) // setTimeout(location.reload(false), 1000);
      return true;
    } else {
      history.go(0)
    }
  }
}

Before I go into the solution further I think a little explanation about how modern web pages load is in order. Firstly I need to remind people that when you come to the screens on mobile devices there is effectively a mapping between what one writes and the screen, because many upmarket phones have a 1920 x 1080 high resolution screen, or in some premium phones even higher, and if you rendered in the normal way for that screen you would need a magnifying glass to read the text. This mapping, called a viewport, is specific to the device and is called into use by a meta statement in the <head> of the html page of <meta name="viewport" content="width=device-width, initial-scale=1"> which gives a screen size for my Samsung Galaxy A6 of 360 x 740 instead of the real size of 720 x 1480, which makes it quite readable. When that meta statement is in place every interaction uses the scaled figures.

Now let's come to pages which use JavaScript. Mostly the JavaScript is in separate files which are again added in the <head> or at the end of the html code in the <body>. The JavaScript files mostly consist of functions which are called from the html, internally or run by events. Any code not in functions is run at the time the file is loaded, ie before the rendering in the body if it is in the <head>. If several includes are present they are run in order. In our case there are several include files for both JavaScript and also for Cascading Style Sheets which need to be present before the page is rendered.


<script src="js/jquery-1.11.0.min.js"></script>
<script src="js/lightbox.min.js"></script>
<link rel="stylesheet" href="css/lightbox.css">
<link rel="stylesheet" href="responsive.css">
<script src="rbox.js"></script>

The first two .js files are used by the Lightbox code, as is one of the style sheets; the others are mine for responsive rendering. The code in all of these should be available ready for the code in the <body> to be rendered, and one would expect the code in them to be run before the html is rendered. I only had a small amount of initialisation in rbox.js and there was no indication it was not being run - in fact I had carried out tests to check and it was definitely being run before there was any rendered page displayed.

Now let's look back at the code for the function that would reload the page if required. That had a simple mechanism to make sure it was only called when required, ie when the width of the page had changed between 2, 3 and 4 column widths. The function was event driven, ie it was called by event 'listeners' which called it when one of two events took place - a change of orientation or a resize. Orientation changes should be rare but resizes are common and many browsers fire them multiple times, so the code only passed an event on if the column 'flags' changed, which has always worked. The initial column flags were set up as initial conditions when the page loaded and were then updated whenever a change was detected - actually this never took place in practice as the page was reloaded and they were re-initialised. I tried multiple ways to de-bounce the events but it was much later, when I put in various checks using alerts displaying values at various points, that the problem became obvious. The values extracted from the machine when the JavaScript was loaded were actually incorrect - probably the scaling via the viewport was incorrect, but even that did not totally explain the facts. For example, the values used for initialisation actually showed the screen to be 4 columns wide when it should have been 3, as I discovered when I looked at the flags with an alert. But by then I had tried lots of other possible solutions including different reload techniques and debouncing of the events.

So I have actually done two things - any viewport meta statements have been moved ahead of the JavaScript, which is sensible but did not provide a full solution, and the critical change is to only load the eventListeners after the page has been rendered and do the initialisation at that point. Different generations and versions of browsers offer different 'events' to say a page has loaded; the newer ones offer a much better event saying the page has been rendered but before all the associated images and css files are loaded, which can take a significant time, so the old page loaded event (load) is not so useful. I have two different event listeners and a timer for belt and braces and I block extra calls in the first function called. That then calls the actual responsiveReload function where the checks for a change in required width inhibit everything else until a change is actually needed.

So the new code is:


document.addEventListener('DOMContentLoaded', addListeners);
window.addEventListener('load', addListeners);
setTimeout("addListeners()", 5000);

var twoCols = isTwoCols();
var threeCols = isThreeCols();
var fourCols = isFourCols();
var eventCount = 0;
var totalEventCount = 0; // Only used for diagnostics
var listenersActive = false;

function addListeners() {
// Values at initial loading of JS may be suspect so redo
   twoCols = isTwoCols();
   threeCols = isThreeCols();
   fourCols = isFourCols();
//make sure listeners not loaded multiple times
   if (listenersActive == false) {
      window.addEventListener("orientationchange", countEvents);
      window.addEventListener("resize", countEvents);
      listenersActive = true;
   }
}

// The two following functions together were included to check and debounce multiple events at an early stage. Probably no longer required but adds a level of extra contingency.

function countEvents() {
   totalEventCount = totalEventCount + 1;
//only start timerloop if first event or event Counter has been run back down.
   if (eventCount == 0) {
      eventCount = eventCount + 1;
      timerLoop();
   }
}

// Runs responsiveReload only once per cycle and only if started by an event
// Stops when eventCounter = 0 which allows it to be re-tripped when event takes place via countEvents

function timerLoop() {
   if (eventCount > 0) {
eventCount = eventCount - 1 ;
   // Timer compromise between quick detection and processor usage for loop
   // delay for starting reload probably redundant.
      setTimeout(responsiveReload, 500);
      setTimeout(timerLoop, 2000);
   }
}

function responsiveReload() {
// Check for a change in the columns required before a reload
   if( twoCols == !isTwoCols() || threeCols == !isThreeCols() || fourCols == !isFourCols() ) {
      twoCols = isTwoCols();
      threeCols = isThreeCols();
      fourCols = isFourCols();

      if( isFirefox() ) {
         location.href=location.pathname; // Firefox
         return true;
      } else if (isApple() && isMobile() ){
         location.reload(false) // Mobile WebKit
         return true;
      } else if (isApple() && !isMobile() ){
         location.reload(false) // Desktop WebKit (Safari)
         return true;
      } else {
         history.go(0) // Any other browser
      }
   }
}

So why did it take so long to find and fix:

I have also been through the remaining code and checked that every initialisation of a variable was not using derived values and replaced any that were with functions delivering current values - score, one significant case and one other check updated. I also optimised the display for screens between 320 and 360 pixels in hpop/hbox.

Future - I believe some of the code is redundant but I have not removed it on the principle "If it works don't fix it". At the end of several days work the real solution is probably only 20 lines of code, although I also moved the viewport meta statement in 800 files, mostly by repeat edits but manually checked.
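As a cross-check that no page was missed, something along these lines lists any page where a script include still comes before the viewport meta statement (a rough sketch assuming one of each per file and run in a folder of pages - not the method I actually used):

for f in *.htm; do
  s=$(grep -n -m1 '<script src=' "$f" | cut -d: -f1)
  v=$(grep -n -m1 'name="viewport"' "$f" | cut -d: -f1)
  # flag files where the first script include appears before the viewport meta
  if [ -n "$s" ] && [ -n "$v" ] && [ "$s" -lt "$v" ]; then echo "$f"; fi
done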

It is clearly preferable to avoid the use of page reloading completely and the new page Next Generation Site Layout - Media Queries replaces Responsive Reload covers how I set about doing that, using Media Queries to adapt the CSS for different screen widths. That page, rather like the Lightbox page, has acted as a development and trial page rather than putting the diary at risk.

Before you leave

I would be very pleased if visitors could spare a little time to give us some feedback - it is the only way we know who has visited the site, if it is useful and how we should develop its content and the techniques used. I would be delighted if you could send comments or just let me know you have visited by sending a quick Message.


Copyright © Peter & Pauline Curtis
Content revised: 7th July, 2020