Moving to new Virtual (really) Private Server provider

With all the post-Snowden privacy concerns I recently started to consider how much of my data was being stored on US servers, and hence fell under US legislation. It didn’t really surprise me that the answer was ‘quite a lot’, when you consider GMail, Drive, Dropbox, GitHub, Squarespace etc, but what was the alternative?

I’ve hosted my own systems, such as Joomla, Drupal, Subversion and Redmine, on Virtual Private Servers (VPS) running Linux for years, but some time ago I scaled this back in favour of the above services. This was not a cost issue – the two servers I had cost less than £10 a month – but I’m not really an infrastructure guy, and configuring and patching servers and applications is not really what I do. With all this mass snooping going on, though, I thought that maybe I should at least take a look at moving my data back under my control.

I’d previously watched a Pluralsight course called Mastering Your Own Domain which presented a couple of interesting applications:

  • OwnCloud – a Dropbox alternative
  • GitLab – as the name suggests, a self-hosted source control system similar to GitHub

There were a number of other very interesting points made during the course so if you have access to Pluralsight then I highly recommend watching it.

Having all but stopped using the two VPS that I had, it was quite simple for me to re-image one of them and have a play with the above applications – and I must say I was very impressed (still am). But then came the bombshell: my VPS provider emailed out of the blue to say that they were merging with a US company with immediate effect (immediate meaning just that!). Now, the servers were still hosted in the UK, but did they come under US legislation? If the NSA requested all of my data (or all of the data on the appliance that my VPS was running on), would it simply be handed over? If so, what was the point in effectively moving from one US service to another?

I raised a support ticket and received a reply saying:

“.. buying a service from a non-US company gives you very little protection in any case..” 

“.. Remember GCHQ in the UK were collecting lots of data on behalf of the NSA, and vice versa”

Not very reassuring I think you will agree, so I pressed the point and asked them straight out:

“If you received a request for my data (or all data from an appliance) from the US authorities, would it be handed over?”

I received no response and the ticket remained open until I ultimately closed my account a few months later.

With my investigations into OwnCloud and GitLab now well underway, and being happy with how they were panning out, I decided to look for an alternative provider. Taking everything into account, I knew that the new service would have to be provided by a company based in neither the US nor the UK.

After quite a bit of searching around I finally decided on Bahnhof – a company based in Sweden which also hosted WikiLeaks. Their Privacy Policy says it all really. Yes, they cost a little more than my previous provider, but not excessively so.

I have been running my new server for a couple of months now and have had zero issues. OwnCloud and GitLab are running fine – there are a few configuration niggles to iron out (which reminded me why I started using hosted services in the first place) but on the whole I’m happy.

I’ve also configured both services to run under SSL, using SSL Labs to help me tune the configuration and gain an A+ rating. Not bad for a Developer 🙂
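For a quick command-line check of what a server will actually negotiate – only a rough supplement to the SSL Labs report, and the hostname below is just a placeholder – something like this does the job:

# Show the protocol and cipher the server negotiates (hostname is a placeholder)
openssl s_client -connect cloud.example.com:443 -servername cloud.example.com < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'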

The whole process, and the ongoing commitment to maintaining these servers, has reinforced to me that privacy is something that takes work. It’s all too easy to do nothing and say “I’ve got nothing to hide” – but as Edward Snowden says:

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”

Recovering deleted files on Fedora 15 using the photorec utility

So here is the scenario: I’d taken some photos of a special occasion and wanted to copy them to my USB key so that my girlfriend could take them to be printed. Simple stuff, huh? Well, normally yes, but the last time I gave my girlfriend the USB key, instead of finding just the handful of photos she expected, there were dozens of them displayed when she plugged it into the kiosk.

These were of course in the hidden .Trash folder on the device – I hadn’t thought to empty the recycle bin. Well, I was not going to be caught out like that again, so this time I deleted everything on the key, opened the Trash folder and clicked ‘Empty Trash’. Now all I had to do was drag the photos from the camera and drop them onto the USB key. A fraction of a second too late I realised that I was not dropping the files into the USB key folder but into the Trash instead!

My heart missed a beat until I thought to myself “no matter, I’ll just click the Restore button and put them back”. It was then that I saw that while the files had disappeared from the camera they had not reappeared in the Trash folder. Using Nautilus and the Terminal I trawled the system looking for the missing files but to no avail – they were gone.

OMG, I was the only one taking photos and now I’d taken the only copies I had and trashed them – what the hell was I going to do? I knew what my girlfriend would do if she found out!!! Well, as it happened, my salvation was just a yum install away.

The answer to my prayers was a package called testdisk, which contains a utility called photorec – and it saved my bacon.
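Getting hold of it on Fedora is a single command:

# Install the testdisk package, which provides the photorec utility
sudo yum install testdisk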

The photorec utility is run as root from the command line with a simple sudo photorec, which brings up the following interface.

Using the arrow keys, select the appropriate drive (the above screenshot shows my XD card selected) and press Enter to move on to the partition type screen (below). Now, the XD card can be read by both my Linux and Windows systems, so I selected the Intel/PC partition option and that worked well for me. If you think your drive may have a different format then I suggest finding out before progressing any further.

Make your selection and press enter to continue.

This next screen was a bit confusing to me as it didn’t really say what I was expected to specify. I opted to ignore the ‘No Partition’ option and again, this seemed to work for me.

Make your selection and press enter to continue.

Next we need to specify the filesystem type (not the same thing as the partition type selected earlier). Knowing it was not ext2/ext3 (they are Linux filesystems and, as already mentioned, my Windows system can read the card with no problem), I opted for ‘Other’.

Make your selection and press enter to continue.

The next screen asks where to scan for the missing files: ‘Free’ will look at the unallocated space on the drive while ‘Whole’ will look at the entire drive. To be on the safe side I opted for the latter, but then I was only scanning a 1GB card; if you have a bigger drive then the ‘Free’ option (the unallocated space is, after all, where you hope your files are) may well suffice.

Make your selection and press enter to continue.

Finally we are asked where we want photorec to save any recovered files. Navigate through the file structure using the arrow keys to select and enter to open folders until you are in the desired location (I created a new folder called recovery for my files). Press ‘C’ to continue.

Photorec will now scan the device and save any recovered files to the location specified. It took about 3 minutes to scan my 1GB XD card.

With the process complete it was time to open the recovery folder I’d created and see if, by some miracle, this free utility had saved my bacon. I was certainly relieved to find the photos that I thought I’d lost, and quickly copied them over to my NAS (and Dropbox, just to be sure).

The recovery folder did not just contain these photos though – as you can see from the above screenshot, it contained hundreds of files. Now, I fully expect that some of these files will be unreadable because some sectors of the card have been reused since they were deleted, but as no writes had been committed to the card since my little mishap I was confident that my photos would be OK. Thankfully I was right, and as requested in the above screen I will be making a donation very soon.

So if you ever find yourself in the same position, fear not – help is at hand. Open Source to the rescue once more.

Backing up and Restoring to Dropbox on Ubuntu Server 10.04 LTS

In my previous couple of posts I’ve been configuring Dropbox on my Virtual Private Server running a headless installation of Ubuntu 10.04. The main reason for this was to enable me to use it for storing backups without having to rely on the Dropbox Web API, which has proven to be somewhat flaky – or at least that’s my experience. With the Dropbox daemon now installed, configured and running, all I need is a script to perform the actual backups and save them in the appropriate location. Initially I’ll do this using a bash script, but it is my intention to look at using Python when I have some time on my hands.

Investigations

The first thing I need to do is back up the MySQL database, which can be achieved quite easily via the mysqldump command.

mysqldump -u'username' -p'password' onthefence_blog > otfd_db_backup.sql

Note that there are no spaces between the -u/-p switches and their values (which are enclosed in single quotes) – this is deliberate!

This will create a script, called otfd_db_backup.sql, which contains the SQL to create all of the tables and insert the data into them – in short, a script that you can run to completely recreate the database. There are a multitude of options that can be specified but for my purposes I don’t really need any of them.

So with the database backed up I now need to take a snapshot of the blog itself, i.e. the WordPress installation. I decided to create a compressed tarball using the following command:

tar -czf otfd_blog_backup.tar.gz /var/www/onthefence_blog

So those are my commands; now I need to put them into a script that can be run on a cron schedule. I also want to add datestamps to the generated files and to email myself either a success or a failure message, along with a log file containing additional details.

Creating a datestamp is a simple matter of creating a variable holding the formatted date, which can then be injected into filenames and the log file.
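Something along these lines does the job – the exact format string is an assumption here, based on the 26.06.2011-style filenames used later in the restore:

# Build the datestamp used in the filenames and log (format assumed from the restore section below)
today=$(date +%d.%m.%Y)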

echo "On The Fence Development Backup" > otfd_blog_backup_$today.log

The $today reference will be replaced with the dynamic value generated above.

Next on the list was to send an email depending on the success or failure of the backup – I’ve already configured Exim4 and Mutt to handle emailing from the server, so all I need to do is check the result of each command and act accordingly. If either of the main commands fails then an email needs to be sent with details of the problem so that I can take the appropriate action.

When a command is run, the resulting exit code can be checked via the $? variable. In addition, anything sent to the stderr stream, i.e. an error, can be redirected to the log file by appending 2>>otfd_blog_backup_$today.log to the command line.

Developing the Script

The resulting mysqldump section of the script looks like this:

echo "Backing up MySQL Database" >> otfd_blog_backup_$today.log
mysqldump -u'username' -p'password' onthefence_blog > [path to dropbox folder]/onthefence_$today.sql 2>>otfd_blog_backup_$today.log

if [ $? != 0 ]; then
        echo "Blog Database Backup Failed." >> otfd_blog_backup_$today.log
        # Need to Send out EMail
        echo "On The Fence Development Blog Backup Failed" | mutt -s "Blog Backup Failed" someone@somewhere.com -a otfd_blog_backup_$today.txt
        exit 1
fi

If there is a problem running the mysqldump command then the error message is sent to the log file, which is then added as an attachment to a notification email. I also exit the script (with an exit code of 1) as there is little point completing the backup if any part of it fails.

The tarball command looks pretty similar, although I have decided to cd into the /var/www folder before running it. This is to prevent the var and www folders from appearing in the archive itself, as I just wanted the main installation folder, not its parents. A personal preference, but I think it’s cleaner to just back up what you actually want to be able to restore.

# Now Create a Tarball of the Blog Folder
echo "Creating Tarball of Blog Folder" >> otfd_blog_backup_$today.log
cd /var/www
tar -czf onthefence_$today.tar.gz onthefence 2>>otfd_blog_backup_$today.log

if [ $? != 0 ]; then
        echo "Blog Tarball Generation Failed." >> otfd_blog_backup_$today.log
        # Need to Send Out EMail
        echo "On The Fence Development Blog Backup Failed" | mutt -s "Blog Backup Failed" someone@somewhere.com -a otfd_blog_backup_$today.txt
        exit 1
fi

If the execution makes it past both of these checks then I assume that the backup has been successful and send myself an email with the log file attached.
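The success case is essentially the same mutt invocation as above with a different subject line – something like this (the recipient address is a placeholder, as before):

# Everything worked - send a confirmation email with the log file attached
echo "On The Fence Development Blog Backup Succeeded" | mutt -s "Blog Backup Succeeded" someone@somewhere.com -a otfd_blog_backup_$today.log
exit 0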

After making the script executable I created a simple cron job which runs at 00:01 on a Sunday morning – so far everything seems to be working well.
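For reference, the crontab entry for 00:01 every Sunday looks something like this – the path to the script is just an example:

# m h dom mon dow command
1 0 * * 0 /home/user/scripts/otfd_blog_backup.sh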

I’ve attached the full script (with my specific details removed) here.

Restoring from the backup

As I’ve already posted – a backup is only a backup when you can restore from it. So, can I recreate my blog on a new server using the files that I’ve produced above? Well, yes I can, and it’s remarkably easy to do so.

I’ve got another VPS which I use for testing and have also configured it to run the Dropbox daemon on the same account as the live server – so the backups created above will be synced to the test server and I can access them quite easily.

After copying the backup files out of the Dropbox folder I extracted the tarball and then moved the resulting folder to /var/www:

tar -zxvf otfd_blog_backup_26.06.2011.tar.gz
mv onthefence /var/www

Next I needed to create a database in MySQL, along with a user with the appropriate permissions and the same username/password as on the live server (although I could instead have changed the values in the wp-config.php file in the WordPress installation).

mysql> CREATE DATABASE onthefence_blog;
mysql> CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT ALL ON onthefence_blog.* TO 'user'@'localhost' IDENTIFIED BY 'password';
mysql> FLUSH PRIVILEGES;
mysql> exit

With the database created I imported the backup using the following command (entering my password when prompted):

mysql -u my_username -p database_name < path_to_dump_file
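With the names used in this post, that works out as something like the following – the dump filename is an assumption based on the naming used in the backup script:

mysql -u user -p onthefence_blog < onthefence_26.06.2011.sql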

I then navigated to http://testserverip/onthefence_blog and, lo and behold, there was my blog. It should be noted that clicking on any of the links actually took me to the corresponding page on the live site, but this is due to WordPress storing the live site’s domain name within the database. The fact is it’s restored, up and running.
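If you wanted the restored copy’s links to point at the test server instead, updating the siteurl and home options is usually enough for a quick check – this assumes WordPress’s default wp_ table prefix:

# Point the restored install at the test URL (assumes the default wp_ table prefix)
mysql -u user -p onthefence_blog -e "UPDATE wp_options SET option_value='http://testserverip/onthefence_blog' WHERE option_name IN ('siteurl','home');"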

Now, I could create a script for the restoration as well, but frankly it’s so straightforward I don’t see too much value in doing so – I have a list of more pressing things to look at first!

Next job, now that I can back up the contents of my server: get my Subversion server up and running and schedule its backups on a nightly basis – but only if there have been commits since the last one.

Configuring Dropbox on an Ubuntu Server – Pt 2

In Part 1 I clarified a few points in the Dropbox tutorial for getting the client running on a headless Linux server. While I had it running, though, it was ‘locked’ to an SSH session, i.e. close the session and the Dropbox daemon also stopped. The same tutorial provides links to sample init.d files which can be used to start the Dropbox daemon on boot. This works well enough but there were a couple of things that, again, could have done with a bit more detail.

The Wiki post provides links to start-up files for Ubuntu, Fedora and Gentoo – I’ll obviously be using the Ubuntu version here. I simply copied and pasted the script into nano and changed the DROPBOX_USERS line to read:

DROPBOX_USERS="root"

I also took the ‘header’ information from the bottom of the page and pasted it after line one.

This file was then saved to /etc/init.d/dropbox and made executable by running:

sudo chmod +x /etc/init.d/dropbox

Now we need to create the init script links, and fortunately there is a helper script that we can use:

sudo update-rc.d dropbox defaults

Finally, back at the original Dropbox Wiki article, there is an upstart script that needs to be saved to /etc/init/dropbox.conf.
 
Job done – all we need to do now is start the Dropbox daemon, which is a simple matter of running the following command in a terminal (I did this as root).
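With the init script saved to /etc/init.d/dropbox this is most likely just the following (the upstart job saved above could be used instead with sudo start dropbox):

sudo /etc/init.d/dropbox start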
 
I ran a quick test by cd’ing to the Dropbox folder and creating a simple text file and saving it. Navigating to the Dropbox site confirmed that the file had been synced successfully.
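The test itself amounts to something like this – the folder location assumes Dropbox was set up under /root, in line with the DROPBOX_USERS="root" setting above:

# Create a test file in the Dropbox folder, then check it appears on the Dropbox website
cd /root/Dropbox
echo "sync test" > sync_test.txt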
 
So, with the Dropbox daemon now running and syncing as expected, all I need to do is create a script that will perform the required operations, i.e. backing up the MySQL database and creating a tarball of the blog folder itself. This script will be run on a schedule using cron.