AppSettings in Xamarin.Forms

If you have used ASP.NET in recent years you will probably be familiar with the appsettings.json file and its associated environment-specific overrides, e.g. appsettings.Development.json and appsettings.Production.json.

Essentially these allow developers to define multiple configuration settings which will be swapped in based on the environment in play. Common settings are stored in the main appsettings.json file while, for instance, API endpoint URLs for development and production deployments are stored in the environment-specific files.

At runtime, any values specified in the environment-specific version of the file override those in the common version.

Simple – it works well and we use it all the time without thinking about it. But what about Xamarin.Forms? It doesn’t have such a mechanism out of the box, so how do we achieve the same thing and prevent accidentally publishing an app to the App/Play Stores that is pointing at your development/staging servers?

The Problem

Previously I have tried a number of approaches and while they mainly worked there were always shortcomings which meant that I couldn’t really rely on them.

One used a pre-deployment step in Visual Studio to copy the appropriate file, based on the build configuration, from one location to another where it would be picked up by the application’s startup code. This worked fine when running on Windows but not when building on the Mac (using Visual Studio for Mac) because that uses the cp command and not copy.

Yes, I could have created an alias from copy to cp but what about when I want to configure a Continuous Integration build on Azure DevOps?

Another approach had me creating Operating System specific script files, .bat on Windows and .sh on the Mac, and again using a pre-build task to run the appropriate script (executing the extension-less command would run the appropriate version on each platform). But passing arguments for the build configuration was clunky and again Azure DevOps pipelines didn’t really seem to want to play ball – maybe Microsoft is a bit cautious about letting anyone execute scripts on their Azure servers 😉

Well, after deploying and testing the wrong version of an app onto my phone and scratching my head for twenty minutes I decided enough was enough and using the first approach above as a base I have come up with a simple mechanism which does the job.

The (my) Solution

The solution I’m running with requires a configuration file to be created for each build configuration, e.g. Debug and Release (by default), each of which is configured as an ‘Embedded Resource’.

A static property is added to the App class which implements the Singleton pattern to load and return an instance of a class that represents the contents of the configuration file – I’m using Newtonsoft.Json to deserialise the file and hydrate an instance of the class.

Add some Conditional Compilation Symbols to the mix and we are just about there.

If you would rather look at the code in Visual Studio then you can download it here.

Enough talk – let’s get to it 😉

To the Code

I’ve created a basic Xamarin.Forms project using Visual Studio 2019 Community Edition, stripped out all the default stuff and added in a single Content Page, ViewModel, Class and two Configuration Files:

The Solution (in all its glory)

The appsettings.debug.json file has the following content:

The appsettings.debug.json file
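The screenshot of the file isn’t reproduced here, but a minimal version might look something like this (the WelcomeText property name is assumed from the rest of the post and the value is just illustrative):

```json
{
  "WelcomeText": "Hello from the DEBUG configuration!"
}
```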

You probably don’t need any clues to the content of the other version 😉

The only thing to remember here is to ensure that you set the ‘Build Action’ for these files as ‘Embedded Resource’:

Setting Build Action on .json files

The AppSettings.cs class is a simple POCO with a single property which corresponds to the json files:

The AppSettings.cs POCO class file (yes – I could remove the redundant ‘using’ statements!)
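The screenshot isn’t reproduced here, but a minimal sketch of that class might look like this (the WelcomeText property name is assumed from the rest of the post):

```csharp
namespace AppSettingsPoc.Configuration
{
    // Simple POCO mirroring the structure of the appsettings.*.json files.
    // Newtonsoft.Json will map the "WelcomeText" json property to this by name.
    public class AppSettings
    {
        public string WelcomeText { get; set; }
    }
}
```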

Now, when a Xamarin.Forms app starts up the App class is, for all intents and purposes, the entry point and is generally available to the Content Pages throughout the app. So this is a good place to expose our AppSettings:

App.xaml.cs

Notice how the property getter will instantiate the AppSettings instance if it is not already in place.

Also notice the use of the conditional compilation statements (#if .. #else .. #endif) in the LoadAppSettings method. This is where the ‘magic’ happens. I know that some people may shy away from this approach but this is the way I’ve gone for now.

Basically the LoadAppSettings method will read in the specified file depending on which build configuration is in play at the time. The file is deserialised into an instance of AppSettings and the backing field updated.

As the .json files are Embedded Resources we can address them using their fully qualified resource names, noting that the name is made up of the project’s default namespace (AppSettingsPoc), the folder containing the files (Configuration) and the actual filenames. Yours will be different for sure – just remember how it’s composed.
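Pulling those pieces together, the relevant parts of App.xaml.cs might look something like this – a sketch only, with the resource names assumed to follow the AppSettingsPoc.Configuration pattern described above:

```csharp
using System.IO;
using System.Reflection;
using Newtonsoft.Json;
using Xamarin.Forms;

namespace AppSettingsPoc
{
    public partial class App : Application
    {
        private static AppSettings _appSettings;

        // Singleton-style accessor - loads the settings on first use.
        public static AppSettings AppSettings
        {
            get
            {
                if (_appSettings == null)
                {
                    LoadAppSettings();
                }
                return _appSettings;
            }
        }

        private static void LoadAppSettings()
        {
            // The 'magic' - pick the embedded resource for the current build configuration.
#if RELEASE
            var resourceName = "AppSettingsPoc.Configuration.appsettings.release.json";
#else
            var resourceName = "AppSettingsPoc.Configuration.appsettings.debug.json";
#endif
            var assembly = typeof(App).GetTypeInfo().Assembly;

            // The file is an Embedded Resource, so read it from the assembly manifest.
            using (var stream = assembly.GetManifestResourceStream(resourceName))
            using (var reader = new StreamReader(stream))
            {
                _appSettings = JsonConvert.DeserializeObject<AppSettings>(reader.ReadToEnd());
            }
        }

        // ... constructor, OnStart etc. omitted ...
    }
}
```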

For the conditional compilation to work we need to specify the appropriate symbols (the ‘RELEASE’ text in the above code).

To do this, Right-Click on the shared project (the one with the App.xaml file) and select ‘Properties’:

Project properties for AppSettingsPoc (shared project)

Select the ‘Build’ tab from the left-hand side and set the configuration to ‘Release’ if it’s not already.

In the ‘Conditional Compilation Symbols’ field enter ‘RELEASE’ (or whatever you want to call it – just match it up with what you use in the App.xaml.cs file). If there are other values already present then append this new one to the end, delimited with a semi-colon.

So, we have the configuration files and we are loading them and making them available to the application. Now we just need to consume the data, and for this I’m using a ViewModel, HomeViewModel.cs, which will be the Binding Context for our Page.

The ViewModel class is another simple POCO with a single property which reads the WelcomeText from the AppSettings instance via the App class:

The HomeViewModel.cs POCO class – with redundant ‘using’ statements removed 😉
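Again, the screenshot isn’t reproduced here, but the class might be as simple as this sketch (names assumed from the rest of the post):

```csharp
namespace AppSettingsPoc.ViewModels
{
    // Reads the welcome message from the AppSettings instance exposed by the App class.
    public class HomeViewModel
    {
        public string WelcomeText => App.AppSettings.WelcomeText;
    }
}
```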

Finally, we just need to bind this property to a UI element on our Page:

HomePage.xaml
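The markup itself isn’t shown here, but the key parts might look something like this sketch (the namespaces and element layout are assumptions):

```xml
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:viewmodels="clr-namespace:AppSettingsPoc.ViewModels"
             x:Class="AppSettingsPoc.HomePage">
    <ContentPage.BindingContext>
        <viewmodels:HomeViewModel />
    </ContentPage.BindingContext>
    <StackLayout VerticalOptions="Center">
        <Label Text="{Binding WelcomeText}" HorizontalOptions="Center" />
    </StackLayout>
</ContentPage>
```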

Notice that in the above markup I’m setting the BindingContext to an instance of the HomeViewModel. As with many things in Xamarin.Forms, there are numerous ways of achieving this.

The label has a Text property which I’ve bound to the WelcomeText property (which will be on the ViewModel remember) and that’s about it.

If I run the app in Debug mode I will see the message as read from the appsettings.debug.json file. Run it in Release mode and it will be the message from the appsettings.release.json file:

In Summary

The solution presented above requires a small amount of setup and then it’s pretty much fire-and-forget. If you need a new setting then update your AppSettings.cs class, add the values to your .json files and you are good to go.

Need a new build configuration, say Staging? No problem. Just create the new .json file (remembering to set its build action to ‘Embedded Resource’), add a new ‘STAGING’ symbol to the project properties, update the LoadAppSettings method to check for it and you are done. Simple as that really.

Now, some may say that Conditional Compilation is bad and that using Reflection is the work of the Devil. But frankly – I’m quite pragmatic about these things and if it works and doesn’t have a dreadful code smell then I’m not going to lose too much sleep over that.

Roll my own DDNS? Why not!

Update: Well, there’s certainly more to Dynamic DNS than meets the eye – who knew.

Investigations led to the decision that I should put my hand in my pocket and spend my time better elsewhere.

Not that I opted for the overpriced offering from Oracle – I signed up with NoIp instead.

Like many devs I have been known to host websites and services behind my home broadband router and therefore needed a Dynamic DNS resolver service of some description. But in my recent moves to limit my reliance on third-party services – including those provided by Google – I wanted to see what would be involved in creating my own service.

Why would I want to roll my own?

Over the last few years I’ve moved my hosted websites outside of my home network and onto services offered by Digital Ocean so I was only really using my DDNS provider for a single resource – my Synology NAS.

Now, in the past I’ve used DynDNS (an Oracle product) and while I’ve had no issues with the service it’s not what you could call cheap – currently starting at $55 a year. When a previous renewal came through, and after reviewing what I was using it for, I decided to let it expire and do without the access to the NAS from outside my network.

Recently though I’ve been using OwnCloud (a Dropbox-like system), hosted on Digital Ocean, to replace some of the functions I used to use the Synology for. I don’t really want to use Dropbox or Google Drive or similar offerings as I want to keep my data under my control. With the desktop application running I was able to access and edit files from my multiple systems while only actually having a single source of those files, i.e. OwnCloud.

The only downside I’ve encountered was with the mobile app – which I wanted to use to back up the photos taken on my phone in the same way that the Google Photos app does (because I want to store my own data, remember!). Well, the app just refuses to do this without manual intervention (and even then it’s buggy), which kind of defeats the object.

Then, while listening to the Security Now podcast I heard Steve Gibson talking about his quest to find a suitable file syncing setup. He discussed the pros and cons of different systems and finally opted for Dropbox – which I don’t want to use. Then – out of the blue – the podcast host, Leo Laporte, mentioned ‘Synology Drive’ and his initial description seemed to tick all my boxes … plus I already have a Synology NAS.

I’ve had a look at the capabilities of the system and it seems to do what I want it to do but of course I now have the problem of accessing this from outside my network – I need a Dynamic DNS resolver service of some description. Sure, I could just put my hand in my pocket and pay for one but where is the fun in that?

OK, what’s the plan?

Right, what does a Dynamic DNS resolver do (or what do I think it does anyway)?

Well, in my experience when I was configuring the DNS for my domain I simply pointed the resource, e.g. a website, at the DynDNS service and it would redirect requests to my home broadband IP address. Simple huh?

But how does the DynDNS service know my home IP address and what happens when it changes?

Well, my router has the capability to integrate with these services and notify them when the IP address changes.

So I need to deploy something which will redirect requests for my Synology based services to my home IP address and a mechanism to keep the IP address up to date. How hard can it be?

I plan to create and deploy a simple ASP.NET Core MVC application with a GET and a POST endpoint. The GET will accept incoming requests and perform the redirection while the POST will be used to keep the IP address updated.
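As a rough sketch of that idea (the route, controller name and in-memory storage are my own assumptions, not a finished design):

```csharp
using Microsoft.AspNetCore.Mvc;

namespace DdnsPoc.Controllers
{
    [Route("[controller]")]
    public class RedirectController : Controller
    {
        // A real implementation would persist this somewhere; static state is just for the sketch.
        private static string _currentIpAddress = "0.0.0.0";

        // GET: redirect incoming requests to the home IP address.
        [HttpGet]
        public IActionResult Get()
        {
            return Redirect($"http://{_currentIpAddress}/");
        }

        // POST: called on a schedule from inside the home network - the source
        // address of the request *is* the home IP address, so no lookup is needed.
        [HttpPost]
        public IActionResult Post()
        {
            _currentIpAddress = HttpContext.Connection.RemoteIpAddress?.ToString();
            return Ok(_currentIpAddress);
        }
    }
}
```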

Proof of Concept

So, step one is a proof of concept – a super simple MVC application, deployed to my .NET Core enabled Digital Ocean Droplet, which will perform basic redirection to a hard-coded IP address.

I’m not expecting too many problems with this but the one thing that springs to mind is how will it handle HTTPS requests?

If all goes well with step one, the next step will be to set the IP address and keep it up to date. For this I plan to use the Synology’s ability to run Python scripts on a schedule (say every 30 minutes) which will call the POST endpoint. The plan is for the endpoint to resolve the source IP address from the request, rather than the Synology having to resolve it first.

Clearly this will not be as polished a service as the one offered by Oracle but it will use existing resources, so there is no additional cost to be incurred.

Watch this space 🙂

Updating an end of life application

A while ago I posted that the FillLPG for Android application was, in a word, Dead! But in a few days’ time users will notice a new version, 2.0.28.5, hitting the Play Store, as well as in-app notifications to update – so what gives?

Have I changed my mind about withdrawing support for this app? No, I haven’t. Essentially my hand has been forced by Google’s recent decision to deprecate some of the functionality I was using to fetch nearby places as part of the ‘Add New Station’ wizard, as well as the new requirement for 64-bit support – the latter being little more than a checkbox and nothing that most users will ever notice.

Removal of the Places Picker

Prior to the update, when adding a new station a user could specify the location in one of two ways:

  • Select from a list of locations provided by the Google Places API and flagged as ‘Gas Stations’
  • Open a ‘Place Picker’ in the form of a map and drop a pin on the desired location

It is the second option which is now going away – and there is nothing I can do about it. Google are pulling it and that’s that.

The Place Picker was added as I noticed that a nearby ‘Flogas’ centre, where you could buy cylinders of LPG and also refill your vehicle’s tank, was not on the list returned by Google Places. Using the Picker it was possible to zoom and pan in order to locate the desired location. It was also useful if you weren’t actually present at the station you wanted to add, i.e. it wasn’t nearby!

So where does that leave users (apart from heading to the Play Store to complain and leave a 1-star review!)? Well, if the desired location is not displayed in the list then they will need to head over to the website and add the station there. Not as convenient as using the app maybe, but not rocket science either.

But why bother if the app is ‘Dead’?

Well, after viewing some in-app analytics I found that on average around 25 new stations were added via the app each month, so there is a chance that users would bump up against this problem when trying to open the Place Picker – chances are that the app would simply crash out.

If the ‘Add New Station’ feature wasn’t being used I’d probably just have removed the menu option and have done with it.

But enough users were using it, so I set aside a couple of hours to investigate and remove the Place Picker and its associated code while leaving the ‘Nearby List’ in place – even if it will only contain locations flagged by Google Places as being ‘Gas Stations’.

In Summary

In this instance the required update was not too onerous – just a couple of hours of work really. However, this may not always be the case.

Android is always evolving and Google could well make changes to other areas of functionality that could adversely affect the application and require a lot more development effort, e.g. forcing a migration to the new version of the Maps API.

In that instance it would probably be the end of the road for the app and it would either have to hobble along or be pulled from the Play Store altogether.

For now though, it’s business as usual 🙂

Getting to grips with OzCode

When I was at NDC London in January I watched a demonstration of the OzCode extension for Visual Studio. Not only was it well presented but it highlighted some of the pinch points we all have to tolerate while debugging.

In return for a scan of my conference pass, i.e. my contact details, I received a whopping 35% discount on a licence – and I was so impressed that I pulled out my wallet (actually the company wallet!) without even completing the 30-day trial.

While I don’t use all of the features every day there are a few that I use all the time – the first one is called ‘Reveal‘.

Consider the following situation:

But I already knew this was a list of view models!

At this breakpoint I’m looking at a collection of View Models – but I knew that already, so what value am I getting from this window? There are over 600 records here – do I have to expand each one to find what I’m looking for? What if one has a null value that is causing my problem – how will I find it?

Well, I could obviously write a new line after the breakpoint which would give me all of the items with a null value for, say, the Name property. But to do that I need to stop the debugging session, write the new line, restart the application and perform whatever steps I need to get back to the above scenario.
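That throwaway debugging line would be simple enough LINQ – something along these lines (the CustomerViewModel type and its Name property are hypothetical stand-ins for whatever is in the collection):

```csharp
using System.Collections.Generic;
using System.Linq;

public class CustomerViewModel
{
    public string Name { get; set; }
}

public static class DebugHelpers
{
    // Returns just the items with a null Name - the sort of line you'd
    // normally add temporarily while chasing a bug, then have to remember to remove.
    public static List<CustomerViewModel> WithNullName(IEnumerable<CustomerViewModel> items)
        => items.Where(x => x.Name == null).ToList();
}
```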

Ok, that’s not going to kill me but it’s wasted time and I have to remember to remove the debugging line once I’m done with it.

Using the OzCode Reveal feature I can specify which properties I want to be displayed instead of the actual, fully qualified, class name.

By expanding any of the records to view the actual property data it is possible to select which ones I want to see – which are important to me at this time – by clicking the star.

Select the properties you want to see

Now when I expand the list I see this instead:

Much more useful

These selections are persisted across sessions and can be changed whenever – maybe I want the email addresses, cities or countries next time – not a problem.

But what about nested properties? Well, that’s not a problem either – just drill in and star the properties you want to see and they will be displayed at the appropriate level in the tree as below:

Here the users first and last names are selected as well as the start date of their subscription

There’s a lot more to OzCode than this and as it becomes more embedded in the way I work I’ll post more about how it has helped me.

NDC London – A Little Review

So, the last developers’ conference (if you could call it that) I went to was way back in 2001, when we had WAP phones and if you had a Pentium 4 computer you were some super-techno hacker.

Well, times have changed somewhat; we now have smartphones with more memory than we had hard drive space back then and I’m writing this post on a workstation with 8 cores and 32GB RAM (and this isn’t even the cutting edge). Add to that the cloud, driverless cars and social networks with billions of users (for better or worse) and we have come a hell of a long way.

Well, I decided to bite the bullet and shell out for a super early-bird ticket and get myself up to London for a few days to Geek-Out.

I had a clear idea in my head about what I wanted to achieve from the three days and planned to not only attend sessions about the new whizzy stuff like Blazor and ASP.NET Core 2.2 but also some of the more mature technologies and concepts – if you read my tweets from the event I think you’ll see the scope of the tech covered. I think it was a little light on mobile development but if there were any more sessions covering that I think I would have had a hard time selecting which ones to go to.

Some of the sessions were presented by the likes of Scott Hanselman, Troy Hunt and Jon Skeet, through to those I’d never heard of but who presented some engaging (and enlightening) content. I don’t regret a single session choice and came out of each of them with something to follow up on.

The exhibitors were varied and interesting, with the likes of DevExpress, whose tools I’ve used for over a decade (and who know the proper strength for a real coffee), and JetBrains, along with OzCode (who proved that debugging doesn’t have to suck – to the degree that I bought a licence without completing the 30-day trial) and Twilio.

Although the weather wasn’t great and my flight home was nearly cancelled because of snow, I enjoyed my three days rubbing shoulders with celebrity and grass-roots developers alike.

I have to say that almost 20 years between conferences is far too long – especially in this industry – and I’ll certainly be considering the next NDC (probably in London – but with Porto and Barcelona events, who knows).

The videos of the sessions are now on YouTube and I will be re-watching a few of these to refresh my memory. I was taking notes but I was also careful not to get so absorbed in them that I missed the actual content being delivered!

FillLPG for Android is dead .. long live FillLPG for Android

TLDR; The FillLPG for Android app has reached end of life and no further development or maintenance will be undertaken. It will remain in the Play Store for as long as the third-party FillLPG website remains online but I have no control over that.

The FillLPG for Android app was initially developed to scratch an itch – I had an LPG powered vehicle (I don’t anymore) and had stumbled on to the FillLPG website. I responded to a request from the website owner for someone to write a mobile app and thought it would be a good little project to get started in this arena.

It has proved itself to be just that; initially written in Java and then migrated to C# using Xamarin, this little app has helped me keep my skill levels up – or at least it did.

Background

First, a little background on how the app actually works (don’t panic, nothing too technical).

One thing to bear in mind here is that the FillLPG website is nothing to do with me or On The Fence Development. I have no access to the source code, database or server it is running on. I only have access to a specified data feed from the site.

The Android app contacts the FillLPG website for all its data and also sends updates back to the main website. The website database is what we call ‘the single source of truth’ and does not, in any way belong to me or On The Fence Development Ltd.

The app cannot display data that may exist on the website but which the website does not expose via the data feed, e.g. opening hours. This feature has been requested many, many times and I have passed these requests to the FillLPG website developer but the data has never been included in the payload sent to the app. If it’s not in the payload, the app can’t display it – simple as that.

So, basically the app is just a portal into selected data from the main website – that’s it. It fetches data from it and posts data back to it.

Now, the FillLPG website also has an ‘Ideas’ board where users can send feature requests etc and this was quite active with some useful ideas coming in, including the ability to indicate stations that were Disabled Friendly or had ‘Safefill’ facilities. Great ideas but unfortunately they were never implemented by the website so the app cannot display this information.

Previously Planned Development

In a previous post I mentioned starting development of a new version of the app, and even an iOS version being in the pipeline. Well, I’m sorry to say that the investigations into this have partly led me to the decision to cease development altogether.

The problem is that I do not control the data or the service that provides it. On top of that there have been numerous unsuccessful attempts to contact the FillLPG website developer over the past year or two. How can I commit time to the development of apps which are heavily reliant on a system I have no control over and which could shut down at any time?

I was considering the creation of an ‘intermediate’ webservice that would pull data from the FillLPG website in the same way that the app currently does. This would be stored in a new database under my control. I could then add new data fields for the features requested on the Ideas board. The app would then connect to this new webservice for its data.

The problem here is that this is not my data to do as I please with – that’s not the agreement I have with the developer of the FillLPG website. I also would not have user data so if the FillLPG website went away then I’d still have problems.

The straw that broke the camel’s back

My decision to cease any further development and maintenance was reinforced last night when I received an email from a disgruntled user:

Hi, now 6 weeks or so since I last contacted you regarding FillLpg app, and nothing seems to have changed, still “unable to connect to FillLpg Web Servers” whenever you try to update. Seems that the app is getting useless as it now quite out-of-date given the number of lpg stations closing down, shame really, it WAS pretty good and useful.

Update [22/12/2018]: I think it is only fair to point out that this user subsequently contacted me to let me know that he had located the problem and that it was nothing to do with the app. It turns out it was AdGuard which was blocking the app’s access to the internet. A quick configuration update and he was back up and running.

This is typical of the emails I receive regarding the app and reviews submitted to the Play Store – although some of the 8500+ active users do take the time to leave a positive review (but not many).

Receiving 1 star reviews for problems that I have no control over is very frustrating but then I suppose the app is out there and the Play Store has a mechanism for review/feedback so it takes the hit, not the website that is feeding it or the FillLPG community that is maintaining the data.

Going forward

Consider what would happen if the FillLPG site was taken offline – permanently or temporarily. The app would have no access to the database to fetch or update station details.

As it stands the domain registration only runs until mid-June next year so if the current owner doesn’t renew it then the site will essentially fall off the internet. If that comes to pass then I will pull the app from the store as there would clearly be no point in its existence.

For now there are over 1700 stations reaching from the UK into Europe and beyond and over 500 prices have been updated in the last 30 days. People are clearly finding the app useful. So the app will remain in the Play Store and any future reviews will just get a link to this post. Whether they read it or not is down to them.

Your app does not support Android Pie!

I receive a lot of SPAM regarding the FillLPG mobile app, mainly offering to increase my install numbers and ratings, but today a new approach appeared in my Inbox.

Screenshot showing email in mobile client

Hi there,

I was reviewing your app FillLPG – LPG Station Finder, and it looks great. But it appears the app does not support new Android Pie.  Without it, more than half of the users cannot use the app properly. I am an app developer and I can update your app in a couple of days if you want.

Thanks

Jon

It may not be obvious but this is just SPAM, sent by a bot, and I doubt that anyone called ‘Jon’ is actually involved in the process, let alone reviewing my app.

The reason I can be so sure of that is threefold:

  1. The app does support Android Pie – or rather Android Pie supports it! As it stands the app doesn’t use any features that require this level of the SDK so doesn’t need any special coding. I have a Google Pixel 2 XL, running Android Pie, and it’s been running the app fine for months.
  2. The claim that ‘more than half of the users cannot use the app properly‘ is obviously targeted at people who are not in the mobile development space. Looking at the Android distribution dashboard it is clear that Android Pie is still in its infancy – with a distribution of less than 0.1% it doesn’t even make it onto the chart! Not surprising when only a small handful of phones, including the Pixel 2/XL, are actually running it.
  3. Finally, the email says that the app ‘looks great’ – which, I’m afraid to say, it doesn’t! It looks dated and is in need of an overhaul (which is on the cards in the New Year). Maybe the email is just trying to appeal to my ego – but then again I doubt it.

So, yet another scam email trying to fleece the unwary of ‘a couple of days’ of development time for some questionable updates along with, undoubtedly, the loss of control of their source code and God only knows what hidden costs.

I obviously didn’t respond but there are plenty of apps out there that have been written for companies with no internal development resource and these are the real targets of this type of scam. I’ve written apps for a number of clients in this situation.

Using the ‘scatter-gun’ approach they are bound to find a few unwary app owners out there who may just be taken in by this. And herein lies the problem – to make this kind of thing worthwhile when the response will be quite low, the returns need to be high from each victim!

Caveat Emptor!

PersonalEncrypter – New and Improved

A couple of months ago I updated the PersonalCLI project to remove information leakage in the form of the original filename of the encrypted data. In my original post about this project I alluded to updating it to move away from the Microsoft encryption libraries and to implement the libSodium library instead.

Well, I’m happy to say that I’ve found some time to do this and moved to .NET Standard at the same time – yep, the future is here.

The initial project was really a quick and dirty exercise to demonstrate the ease with which these applications can be developed. It didn’t really lend itself well for extension – it was only meant to be a proof of concept after all.

In the new project I have created a class library to handle the mechanics of the encryption and a separate project for the CLI. There is also a skeleton project for a UWP desktop application which I’m currently working on (and committed in error – but hey, I’m only human). The UWP application will initially be aimed at Windows 10 but the ultimate aim is for a version of this project to be installable on just about any platform, e.g. Mac, Linux, iOS and Android.

Note: The libSodium project uses a different encryption algorithm to the Microsoft libraries so it will require a new key pair to be generated.

So, I’m now working on the UWP application, which may even find its way onto the Microsoft Store (if they’ll have it) – after all, that’s the aim isn’t it – to get encryption into the hands of the masses so that they can have some semblance of privacy.

I’ll be pushing updates to the public Github repo and announcing major milestones here so …… watch this space!

A New Freelance Project – and a Chance to Use .Net Core on Linux

I was recently approached by a website owner who had seen the FillLPG for Android application and wanted a similar mobile application, i.e. an app which displays a number of points of interest on a map and allows its users to select a location and have further details displayed.

With my experience with FillLPG I was happy enough that I would be able to create applications to render the map with the points displayed in the appropriate location. The big question is – where is the data coming from?

The current situation

The website in question is fairly dated – in the order of 10 years old – written from scratch in PHP (not built on something like Drupal or Joomla) and backed by a MySQL database.

The existing database is somewhat badly structured (in terms of today’s programming practices) with no foreign keys, endless bit fields for HasThis and HasThat properties and a disjointed table structure (properties which you would think would reside in one table are in fact in another – manually linked by an id value).

There is no API that I can access from the mobile application – it’s just not there and there is no resource to create one.

The way forward

So, how do I access the data and return it in a sensible format?

After a bit of thought I decided that the best option would be for me to create a WebApi project for the mobile apps to access. This project will access a separate MySQL database (with a structure more in line with what I need) which will be populated & updated by a utility program that I will also develop and execute on a regular basis (this is not highly volatile information that changes frequently).

So why .NET Core? You don’t need that to develop a WebApi project!

Glad you asked and my initial reply would probably be – ‘Why not?’. As a contractor I feel it is vital to keep your axe sharp and your skills up to date. I’m starting to see a few contracts now for .NET Core so it makes sense to keep my skills relevant.

After some careful analysis I decided that the most cost-effective hosting for this solution was on a Linux-based server. Yes, I know I can do that on Microsoft Azure, but there are far cheaper services out there offering Linux hosting, so I’m going to use one of those.

Now, the only reason I can even consider using a Linux host is because of .NET Core. This allows me to develop using .NET technologies and C# – my development stack of choice.

But would it allow me to do what I intended to do? Could I create a WebAPI application to allow the mobile applications to access the data? What about the ‘Data Shuttle’ utility that will populate and maintain the data between the website database and the new WebAPI one?

Well, I’m happy to say that the answer to those questions is yes, it will – and it did.

I’m writing this post after developing the initial server-side components, i.e. the Data Shuttle and WebAPI, and everything is working well – you would not know from the outside that the endpoints are not hanging off an Azure instance.

There were some pain points along the way and I’ve not answered all of my questions quite yet – things like logging and error notification – but everything I need for a Minimum Viable Product (MVP) is now in place from a server-side perspective.

I have a handful of posts drafted at the moment which will dive deeper into the development for this project but here are a handful of links that you may find helpful in getting started:

 

The Personal Encryptor 1.0 Released

 

Following on from my post about the UK Government’s campaign to erode our privacy by demanding that tech companies put back doors in their encrypted products, I have created a simple utility to demonstrate how easy it is for a reasonably competent developer to create their own encryption tool using standard development tools and libraries.

Now, I’m not expecting the UK Government to take a blind bit of notice, but the fact is that encryption is out there – it’s only mathematics after all – and it’s not going away. You cannot feasibly make maths illegal – although the US did classify encryption as a weapon until 2000 (and to some degree still does).

Anyway, following my commitment to watch at least one Pluralsight course a month during 2018 I opted for Practical Cryptography in .NET by Stephen Haunts to give myself some suitable background.

The course runs a minute under four hours and took me a couple of evenings to get through. Cryptography is not the most stimulating subject, but Stephen did his best to keep the information flowing. At times I did feel frustrated at how he seemed to labour some points, but the upshot is that by doing this the information did seem to get through and stick. During the course he slowly increased the complexity, developing and enhancing C# code to demonstrate the principles.

It is this code which I have used as a base to create the ‘Personal Encryptor’ (hereafter referred to as PE) – a command line application that can be used to generate encryption keys, encrypt and, of course, decrypt data into files that can be safely sent over the Internet. Only someone with the required public and private keys will be able to decrypt the file and view the original data.

I’ll probably put another post together shortly diving a bit deeper into the functionality and explaining the contents of the output file – but I highly recommend you watch the above course, as Stephen knows the subject inside out and does a great job of explaining it.

Why would I need/want to encrypt a file?

Imagine the following scenario:

Alice and Bob want to exchange private messages with each other; maybe they are planning a surprise birthday party or sharing ideas about a new business venture. Whatever the messages contain, they are Alice and Bob’s business and nobody else’s.

  1. Alice and Bob both download the PE application and copy it to a location on their Windows PC (Mac version coming soon).
  2. They then use the utility to generate a Public and Private key pair – which will create two XML files.
  3. They send each other their PUBLIC keys (these are just XML files and can be freely sent over the Internet or via email).
  4. Both Alice and Bob copy their PRIVATE keys to a safe location (maybe a secure USB key – or a normal USB key which is stored in a safe).

Now Alice wants to encrypt a file – a PowerPoint presentation for their new product – and send it to Bob:

  1. Alice uses the PE application to encrypt the file using Bob’s PUBLIC key.
  2. The PE application digitally signs the encrypted data using Alice’s PRIVATE key.
  3. A text file is created containing the encrypted data and everything needed to verify that the contents have not been tampered with and to confirm that Alice encrypted it.
  4. Alice then emails the file to Bob as she normally would if she was sending a photo of her cat!
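Under the hood, steps 1 and 2 amount to hybrid encryption followed by signing – something along these lines. This is a simplified sketch of the technique from the course, not PE’s exact code, and the key file names are illustrative:

```csharp
using System.IO;
using System.Security.Cryptography;

class EncryptAndSign
{
    static void Main()
    {
        byte[] data = File.ReadAllBytes("presentation.pptx");
        byte[] cipherText, iv, encryptedSessionKey, signature;

        // 1. Encrypt the file itself with a fresh, random AES session key.
        using (var aes = Aes.Create())
        {
            iv = aes.IV;
            using (var enc = aes.CreateEncryptor())
                cipherText = enc.TransformFinalBlock(data, 0, data.Length);

            // ...then encrypt that session key with Bob's PUBLIC key (OAEP padding).
            using (var rsa = new RSACryptoServiceProvider())
            {
                rsa.FromXmlString(File.ReadAllText("bob.public.xml"));
                encryptedSessionKey = rsa.Encrypt(aes.Key, true);
            }
        }

        // 2. Sign the cipher text with Alice's PRIVATE key.
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.FromXmlString(File.ReadAllText("alice.private.xml"));
            signature = rsa.SignData(cipherText, SHA256.Create());
        }

        // cipherText, iv, encryptedSessionKey and signature are then packaged
        // into the text file that Alice emails to Bob (step 3).
    }
}
```

The AES/RSA split is what makes this practical: RSA can only encrypt a few hundred bytes directly, so it protects the small session key while AES does the heavy lifting on the file itself.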

Bob receives the message and downloads the encrypted file to his computer.

  1. Bob uses PE to decrypt the file by specifying the location of his PRIVATE key and Alice’s PUBLIC key.
  2. The PE utility will check the digital signature using Alice’s PUBLIC key to confirm that it was signed with her PRIVATE key.
  3. It will then check the integrity of the package to ensure that it has not been tampered with in transit.
  4. If all is well then the PE application will decrypt the file and write the contents out to a location that Bob has specified.
  5. Bob can now enjoy Alice’s PowerPoint presentation.
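Bob’s side mirrors the sender’s: check the signature first, then unwrap the session key and decrypt. Again, this is a simplified sketch using the same hypothetical file names and package fields as the encryption example:

```csharp
using System.IO;
using System.Security.Cryptography;

class VerifyAndDecrypt
{
    // cipherText, iv, encryptedSessionKey and signature are read from the package file.
    static void Run(byte[] cipherText, byte[] iv, byte[] encryptedSessionKey, byte[] signature)
    {
        // Steps 2/3: verify the signature with Alice's PUBLIC key before doing anything else.
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.FromXmlString(File.ReadAllText("alice.public.xml"));
            if (!rsa.VerifyData(cipherText, SHA256.Create(), signature))
                throw new CryptographicException("Signature check failed – do not decrypt.");
        }

        // Step 4: recover the AES session key with Bob's PRIVATE key...
        byte[] sessionKey;
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.FromXmlString(File.ReadAllText("bob.private.xml"));
            sessionKey = rsa.Decrypt(encryptedSessionKey, true);
        }

        // ...and decrypt the file to the location Bob specified.
        using (var aes = Aes.Create())
        {
            aes.Key = sessionKey;
            aes.IV = iv; // the IV travels in the package in the clear – it is not secret
            using (var dec = aes.CreateDecryptor())
                File.WriteAllBytes("presentation.pptx",
                    dec.TransformFinalBlock(cipherText, 0, cipherText.Length));
        }
    }
}
```

Note the order: the signature is verified before any decryption is attempted, so a tampered or mis-signed package is rejected outright.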

Of course, if Alice (or Bob) just wanted to encrypt a file for their own personal use and not for sharing, it is perfectly feasible to provide their own Private AND Public keys to encrypt the data. The same keys will then be required to decrypt it.

And that’s it, privacy restored/reclaimed.

I can now safely download my LastPass vault in plain text, encrypt it and save it to any cloud drive I like, secure in the knowledge that, as long as my private key remains under my control, nobody can decrypt it to access its contents. Nothing illegal there – these are passwords to legitimate sites (Amazon, Pluralsight, Microsoft, Apple etc.) and need to be protected. A valid use of The Personal Encryptor.

Going Forward

Yes, at the moment it requires the users to have some familiarity with the Command Line but this project was always intended to be a proof of concept. The original aim was to explore encryption to enable me to implement it in an existing mobile Chat application.

Creating a simple GUI would certainly be possible – a simple WinForms or WPF application to collect file paths and call out to the command line utility shouldn’t take too long for a competent developer to write. That said, I’m probably going to focus my attention elsewhere.

While using the Microsoft libraries is perfectly valid in my opinion, I am aware that many people will wince just a little bit. With this in mind I intend to investigate using the libsodium crypto library, which Steve Gibson is such a fan of – and that’s good enough for me.

You can download the latest version of the Personal Encryptor application by clicking here. Alternatively you can download the full source from Github.