Online Tool: UnminifyCode

If you’re a web developer, regardless of what programming language you use, you’ll be familiar with minified CSS, Javascript and HTML files. For the uninitiated, these are files which have had unnecessary whitespace, line breaks and formatting removed, with variable/function names shortened where applicable.

While this results in files that are difficult for humans to read, browsers are still able to load and parse the data (unless the minification process has been a bit heavy-handed).

These files are normally used in preference over the unminified versions because of the reduced file size – making for quicker page load times.

That’s all well and good, but what happens when all you have is a minified file, from say a third party library, and you want to edit or, in the case of Javascript, add breakpoints to debug one of these files?

Now, I’ve already reviewed an online ‘Unminifier’ but that doesn’t mean it’s your only choice – as we know, the Internet is full of choices.

Enter UnminifyCode – a simple website which will take your minified file contents and present it in a more readable format, adding the whitespace and formatting back in. It obviously won’t try to rename any variables or functions that were shortened during the minification process.

Simply copy the file contents and paste it into the large text box and click ‘Unminify’ – I’ve used moment.min.js here for demo purposes (I know that the unminified version is easily downloaded)

Minified moment.min.js (yes, I know you can get the unminified file – this is just a demo)

After a short delay the nicely formatted code will be presented and you can just copy it and paste it as you see fit.

Unminified version ready for review

Comparing the output of each tool, the only difference appears to be a bit of whitespace – nothing that would affect the validity of the content itself.

So there you have it, you wait for an unminification tool and then two come along 😉

Roll my own DDNS? Why Not?!

Update: Well, there’s certainly more to Dynamic DNS than meets the eye – who knew?

My investigations led to the decision that I should put my hand in my pocket and spend my time better elsewhere.

Note that I didn’t opt for the overpriced offering from Oracle, signing up with NoIp instead.

Like many devs I have been known to host websites and services behind my home broadband router and therefore needed a Dynamic DNS resolver service of some description. But in my recent moves to limit my reliance on third-party services – including those provided by Google – I wanted to see what would be involved in creating my own service.

Why would I want to roll my own?

Over the last few years I’ve moved my hosted websites outside of my home network and onto services offered by Digital Ocean so I was only really using my DDNS provider for a single resource – my Synology NAS.

Now, in the past I’ve used DynDNS (an Oracle product) and while I’ve had no issues with the service it’s not what you could call cheap – currently starting at $55 a year. When a previous renewal came through, and after reviewing what I was using it for, I decided to let it expire and do without the access to the NAS from outside my network.

Recently though I’ve been using OwnCloud (a Dropbox-like system), hosted on Digital Ocean, to replace some of the functions I used to use the Synology for. I don’t really want to use Dropbox, Google Drive or similar offerings as I want to keep my data under my control. With the desktop application running I was able to access and edit files from my multiple systems while only actually having a single source of those files, i.e. OwnCloud.

The only downside I’ve encountered was with the mobile app – which I wanted to use to back up the photos taken on my phone in the same way that the Google Photos app does (because I want to store my own data, remember!). Well, the app just refuses to do this without manual intervention (and even then it’s buggy), which is kind of defeating the object.

Then, while listening to the Security Now podcast I heard Steve Gibson talking about his quest to find a suitable file syncing setup. He discussed the pros and cons of different systems and finally opted for Dropbox – which I don’t want to use. Then – out of the blue – the podcast host, Leo Laporte, mentioned ‘Synology Drive’ and his initial description seemed to tick all my boxes … plus I already have a Synology NAS.

I’ve had a look at the capabilities of the system and it seems to do what I want it to do but of course I now have the problem of accessing this from outside my network – I need a Dynamic DNS resolver service of some description. Sure, I could just put my hand in my pocket and pay for one but where is the fun in that?

OK, what’s the plan?

Right, what does a Dynamic DNS resolver do (or what do I think it does anyway)?

Well, in my experience when I was configuring the DNS for my domain I simply pointed the resource, e.g. a website, at the DynDNS service and it would redirect requests to my home broadband IP address. Simple huh?

But how does the DynDNS service know my home IP address and what happens when it changes?

Well, my router has the capability to integrate with these services and notify them when the IP address changes.

So I need to deploy something which will redirect requests for my Synology based services to my home IP address and a mechanism to keep the IP address up to date. How hard can it be?

I plan to create and deploy a simple ASP.NET Core MVC application with a GET and a POST endpoint. The GET will accept incoming requests and perform the redirection while the POST will be used to keep the IP address updated.
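As a rough sketch of the plan (the route names, storage and validation here are placeholders of my own, not final code):

```csharp
using Microsoft.AspNetCore.Mvc;

public class DdnsController : Controller
{
    // In reality this would be persisted somewhere; a static field does for the sketch.
    private static string _currentIp = "0.0.0.0";

    // GET: redirect incoming requests to the last known home IP address.
    [HttpGet("/")]
    public IActionResult Forward() => Redirect($"http://{_currentIp}/");

    // POST: called on a schedule from the NAS; the source address of the
    // request itself gives us the current home IP, so no payload is needed.
    [HttpPost("update")]
    public IActionResult Update()
    {
        _currentIp = HttpContext.Connection.RemoteIpAddress?.ToString() ?? _currentIp;
        return Ok(_currentIp);
    }
}
```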

Proof of Concept

So, step one is a proof of concept – a super simple MVC application, deployed to my .NET Core enabled Digital Ocean Droplet, which will perform basic redirection to a hard-coded IP address.

I’m not expecting too many problems with this but the one thing that springs to mind is how it will handle HTTPS requests.

If all goes well with step one, the next step will be to set the IP address and keep it up to date. For this I plan to use the Synology’s ability to run Python scripts on a schedule (say every 30 minutes) which will call the POST endpoint. The plan is for the endpoint to resolve the source IP address from the request, rather than the Synology having to resolve it first.
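Something along these lines is what I have in mind for the Synology side – the endpoint URL and payload shape are placeholders, since none of this exists yet:

```python
import json
import urllib.request

UPDATE_URL = "https://ddns.example.com/update"  # placeholder endpoint

def build_update_request(url: str, hostname: str) -> urllib.request.Request:
    """Build the POST request; the server works out the caller's IP itself,
    so the body only needs to say which hostname to update."""
    body = json.dumps({"hostname": hostname}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Run on a schedule (e.g. every 30 minutes) from the NAS task scheduler.
    request = build_update_request(UPDATE_URL, "nas.example.com")
    with urllib.request.urlopen(request, timeout=30) as response:
        print("Update response:", response.status)
```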

Clearly this will not be as polished a service as the one offered by Oracle but it will use existing resources, so there is no additional cost to be incurred.

Watch this space 🙂

Online Tool of the Month –

Minification and bundling of Javascript and CSS files is obviously a good idea when you are deploying your websites to production – but if you want to use a third-party, minified resource and want/need to look at the unminified version, it can be a bit of a pain.

I recently purchased a theme for a website which came as a set of CSS, Javascript and image files. There were a number of pages which demonstrated the theme and it was pretty much what I wanted – but not quite, I needed to make a few very minor changes.

These changes were limited to styles which specified ‘hero blocks’ with image backgrounds. I didn’t need a block with an aircraft in the background – I needed one with a camper van.

Now I could have simply renamed my images to match the existing assets, but when they are called something like ‘airplane_x500.png’ I don’t think that’s a great idea. I certainly would not do it in code – create a class file called airplane.cs and implement a camper van inside it!

So, I needed to update the CSS – and herein lies the rub. The CSS file in question was minified (others did have the original version in the bundle – this one did not).

I could have contacted the designer and requested the unminified asset but I really didn’t want to go through the hassle of this with no guarantee of receiving the file.

Enter a simple website which does exactly what it says it’s going to do – take minified resources and reverse the minification process.

The UI is pretty simple and self explanatory – simply paste your minified text into the central text area….

… click the Unminify button …

Job Done 🙂

Online Tool of the Month – SelfSignedCertificate

In this day and age everything needs to be encrypted to prevent nefarious access to our data – whether it’s our banking or medical records, our online email inboxes or our browsing and searching habits.

So, when developing websites or APIs I always start with an SSL enabled configuration – and in Visual Studio that’s a pretty easy thing to do, it’s just a checkbox really.

When deploying websites to production servers I, like millions of others, use LetsEncrypt to generate and renew my SSL certificates.

But what about that gap between Development and Production? I am of course talking about ‘Test Servers’.

I’m currently working on a few ASP.NET Core projects that will ultimately be deployed to Linux servers and in order to test this type of deployment I normally like to use a Virtual Machine (VM). This gives me good flexibility and allows me to deploy to locally hosted systems with the ability to easily roll back configuration changes etc. through the use of snapshots.

But what about the SSL certificate? I can’t use LetsEncrypt because the VM won’t be externally accessible for their system to interrogate and validate. What I need is a Self Signed Certificate that I can install on my development system and the VM.

With a Self Signed Certificate in place I can deploy and test my software using SSL thus ensuring that the Test environment matches Production and that there are no sneaky bugs loitering around.

Now – you can use a PowerShell command like this:

New-SelfSignedCertificate -DnsName "mysite.local" -CertStoreLocation "cert:\LocalMachine\My"  # the domain is a placeholder – use your own

Or an OpenSSL command like this (if you have OpenSSL installed):

openssl req -x509 -newkey rsa:4096 -sha256 -nodes -keyout mysecurecertificate.key -out mysecurecertificate.crt -subj "/CN=mysite.local" -days 600  # the CN is a placeholder – use your own domain

But do you really want to have to look up all the command line parameters every time you need a certificate? Of course not.

Enter the Self Signed Certificate Generator, which will generate a suitable certificate with a single click of the mouse.

Simply navigate to the site, enter the domain you want to generate the certificate for and click the big green Generate button.

The tool will then display a page allowing you to download the certificate and key files as well as providing additional information about configuring the Apache web server to use the new certificate and how to generate the same certificate from the command line (if you really wanted to).

Now, assuming that you have configured your system to recognise the domain, e.g. by editing your hosts file, then even though your web browser will complain that it cannot verify the origin of the certificate, you should be able to navigate to your website using HTTPS and it should work just fine.
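For example, a hosts file entry mapping a test domain to a local VM might look like this (the IP address and domain are placeholders):

```
# /etc/hosts on Linux/macOS, or C:\Windows\System32\drivers\etc\hosts on Windows
192.168.56.10    mysite.local
```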

If you really wanted to keep your browser happy you can export the certificate via the browser and import it into your local certificate store. There are plenty of tutorials out there on how to do this but this one is as good as any other I’ve found.

So there you have it – generating Self Signed Certificates the easy way.

Online Tool of the Month – Real Favicon Generator

Following the migration of this website to WordPress I installed the ‘redirection’ plugin in an effort to handle the difference in the url structure used by Squarespace (my previous host/platform). This plugin allows me to redirect users using the old Url to the correct content and also see any 404 errors for resources I may have missed.

While reviewing this 404 log I noticed a few rather odd entries, such as

  • apple-touch-icon-precomposed.png and
  • apple-touch-icon.png.

Knowing that I never had such content on the site before, I did a bit of Googling and found that these are favicons (the little square logos that reside on your browser tabs) that Apple’s Safari browser will attempt to load along with your page.
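One way to stop Safari guessing is to declare the icons explicitly in the page head – the paths below are the conventional ones Safari probes for:

```html
<link rel="apple-touch-icon" href="/apple-touch-icon.png">
<link rel="shortcut icon" href="/favicon.ico">
```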

Now that I knew what the problem was, how do I fix it? I mean, I had a favicon on the site – why couldn’t Safari just use that one?

After a bit more searching I found the Real Favicon Generator site where I was able to enter the Url of my site and it would generate a report on what it found (or didn’t find, as the case may be).

This is actually for another site I manage but the result is similar to when I ran it on this site

Now, that’s a lot of red but fear not – help is at hand. As the site’s name alludes, it is also able to generate the appropriate icons. But how will I get these into WordPress? It only has the facility to specify a single favicon file.

Well – as well as the testing of the site and generation of the icons, there is also a WordPress plugin which makes things super easy.

With the plugin installed, navigate to the Appearance menu and select ‘Favicon’.

Select your desired image – they recommend 260x260px but mine was 150x150px and it looks fine for my purposes.

Click Generate and you will be taken to an editor page which allows you to tweak a few colours and settings – I didn’t bother with this, I just clicked the ‘Generate your Favicons and HTML code’ button at the bottom of the page.

And that’s it – I re-scanned the site and (unsurprisingly enough) everything was now green – and the 404 errors are no longer appearing in the redirection logs.

A New Freelance Project – and a Chance to Use .Net Core on Linux

I was recently approached by a website owner, who had seen the FillLPG for Android application and wanted a similar mobile application, i.e. an app which displayed a number of points of interest on a map and allowed its users to select a location and have further details displayed.

With my experience with FillLPG I was happy enough that I would be able to create applications to render the map with the points displayed in the appropriate locations. The big question was – where is the data coming from?

The current situation

The website in question is fairly dated, in the order of 10 years old, written from scratch in PHP (not something like Drupal or Joomla) and backed by a MySQL database.

The existing database is somewhat badly structured (in terms of today’s programming practices), with no foreign keys, endless bit fields for HasThis and HasThat properties and a disjointed table structure (properties which you would think would reside in one table are in fact in another – manually linked by an id value).

There is no API that I can access from the mobile application – it’s just not there and there is no resource to create one.

The way forward

So, how do I access the data and return it in a sensible format?

After a bit of thought I decided that the best option would be for me to create a WebApi project for the mobile apps to access. This project will access a separate MySQL database (with a structure more in line with what I need) which will be populated & updated by a utility program that I will also develop and execute on a regular basis (this is not highly volatile information that changes frequently).
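The WebApi side of that plan might look something like this – a sketch only, with an invented model and route rather than the real schema:

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

public class PointOfInterest
{
    public int Id { get; set; }
    public string Name { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

[Route("api/[controller]")]
public class PointsController : Controller
{
    // In the real project this would query the restructured MySQL database
    // that the Data Shuttle keeps up to date.
    [HttpGet]
    public IEnumerable<PointOfInterest> Get()
    {
        return new[]
        {
            new PointOfInterest { Id = 1, Name = "Sample point", Latitude = 51.5, Longitude = -0.1 }
        };
    }
}
```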

So why .NET Core? You don’t need that to develop a WebApi project!

Glad you asked and my initial reply would probably be – ‘Why not?’. As a contractor I feel it is vital to keep your axe sharp and your skills up to date. I’m starting to see a few contracts now for .NET Core so it makes sense to keep my skills relevant.

After some careful analysis I decided that the most cost-effective hosting for this solution was on a Linux-based server. Yes, I know I can do that on Microsoft Azure but there are far cheaper services out there offering Linux hosting, so I’m going to use one of those.

Now, the only reason I can even consider using a Linux host is because of .NET Core. This allows me to develop using .NET technologies and C# – my development stack of choice.

But would it allow me to do what I intended to do? Could I create a WebAPI application to allow the mobile applications to access the data? What about the ‘Data Shuttle’ utility that will populate and maintain the data between the website database and the new WebAPI one?

Well, I’m happy to say that the answer to that question is yes, it will – and it did.

I’m writing this post after developing the initial server-side components, i.e. the Data Shuttle and WebAPI, and everything is working well – you would not know from the outside that the endpoints are not hanging off an Azure instance.

There were some pain points along the way and I’ve not answered all of my questions quite yet – things like logging and error notification – but everything I need for a Minimum Viable Product (MVP) is now in place from a server-side perspective.

I have a handful of posts drafted at the moment which will dive deeper into the development for this project but here are a handful of links that you may find helpful in getting started:


Online Tool of the Month – BuiltWith

Have you ever looked at a website and thought ‘I wonder what that’s written in’? Well, even if you haven’t I certainly have which is why I was interested to hear about BuiltWith.

Simply put, BuiltWith puts websites under the microscope and produces a detailed report about what it sees.

From Webservers and SSL Certificate providers through programming languages, Javascript libraries, content management systems and advertising platforms it sees it all. The produced report contains links to allow you to see technology trends across the Internet which may well assist with infrastructure decisions for your next project.

Pointing BuiltWith at this site I was able to see not only the current technologies listed, including current version numbers where applicable, but also that 56 technologies had been removed since July 2012 (I migrated from SquareSpace to WordPress fairly recently).

Registering for a free account would allow me to see much more detail and view historical data but for now I have found the open version sufficient for my needs.


Online Tool of the Month –

Browser compatibility is a pain – and that’s a fact. But when your client says, “we need to support IE9 and upwards” you can feel the dread come over you.

Well, fear not as help is at hand in the form of a CDN-hosted service which will serve up a custom set of polyfills to fill those functionality gaps in whatever browser makes the call.

Need some of those ‘Math’ or ‘console’ functions that are so readily available in non-IE browsers? Well, this service has your back.

Add a single line of code to your markup and voila – you’re good to go. And before you start panicking about large request sizes, remember that the CDN will tailor the response to only those features that the current browser is lacking, which is pretty neat.
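That single line is just a script tag placed before your own scripts – the URL below is a placeholder for the service’s CDN endpoint:

```html
<!-- Request the polyfill bundle before any script that relies on the gaps being filled -->
<script src="https://cdn.example-polyfill-service.io/v2/polyfill.min.js"></script>
```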

From the feature list on their site it is easy to see what is included natively in each browser and what can be polyfilled.

But there’s more – what if you only wanted a single feature in IE9/10/11? Even though the service will return a tailored set of polyfills, it is possible to view the detection and polyfill scripts for any feature by clicking on the ‘hamburger’ menu to the right of the feature – this takes you to the appropriate location in the GitHub repo (yep, this is all open source stuff).

The downside to using a CDN though is that if it goes down (and it did once for a client of mine) then it could leave your site somewhat ‘hobbled’, depending on the features you were relying on.

Stackify Prefix – first thoughts

Listening to one of my favourite podcasts (.Net Rocks) I heard a plug for the Stackify Prefix tool, which claims to help the developer fix problems before anyone else sees them – a bold claim. Well, as I am currently working on a greenfield development project I decided to give it a whirl – it’s free after all, so why not?

Now I was not expecting to find too much wrong with the application and thankfully I was right – but I was getting errors.

The highlighted call is to a WebAPI method from an AngularJS controller (a JavaScript file on the client) and, as you can see from the right-hand pane, it does succeed. In fact the data is returned as I’d expect and the application works without any issue. So why is Prefix flagging this?

Well, looking at the stack trace a little more carefully I see that the exception is being raised by the XmlMediaTypeFormatter when it is creating its default serialiser. But the WebAPI is returning JSON, so why is it spinning up an XML serialiser?

Well, my WebAPI endpoint took this form:
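The screenshot of the endpoint hasn’t survived, but it took roughly this shape (the controller, repository and property names here are invented – the key detail is the anonymous object handed to Ok()):

```csharp
public class SummaryController : ApiController
{
    private readonly IRepository _repository; // placeholder dependency

    public SummaryController(IRepository repository)
    {
        _repository = repository;
    }

    public IHttpActionResult GetSummary(int id)
    {
        var item = _repository.Find(id);

        // An anonymous object assembled on the fly - this is what the
        // XmlMediaTypeFormatter later fails to build a serialiser for.
        return Ok(new { item.Id, item.Name, item.Total });
    }
}
```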

The problem is the line where I’m returning the OK status with the required content – an anonymous object that I’ve just put together on the fly. The WebAPI is configured to accept the ‘application/json’ header and to use an appropriate JSON Formatter – which it does.

The problem is that I still have the default XML Formatter in the list and for some reason the framework is trying to use it to serialize my anonymous object – and failing (silently).

So all I need to do is remove the Formatter during the WebAPI registration – within WebApiConfig.cs in the App_Start folder.
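The screenshot is missing here too, but the change itself is the standard one-liner in WebApiConfig.Register – only the Formatters line is the fix; the rest of the method is whatever your project already contains:

```csharp
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // ... existing routes and configuration ...

        // Remove the default XML formatter so the framework never attempts
        // to build an XML serialiser for the anonymous response objects.
        config.Formatters.Remove(config.Formatters.XmlFormatter);
    }
}
```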

Now this was fairly trivial but a bug is a bug and as we know – exceptions are expensive. They take time to raise while they pull the required information together and work their way through to the calling code – which in this instance appeared to simply discard it. A small performance hit but if this scaled then it could have become a bigger problem in the future – and probably harder to find.

Prefix highlighted it straight away and the issue is now fixed. It never made it to production, in fact it never made if off my desk – it was Pre-Fixed!

StructureMap and MVC5 – Resolving the resolution exception for IUserStore

I’m currently working on an MVC5 website which uses StructureMap for Inversion of Control operations. Everything was working fine but then I started to get exceptions when I ran the application and attempted to log in (I’m not sure if Update 3 was responsible for surfacing this problem but it certainly worked without any problems previously).

Now, clicking ‘Continue’ in Visual Studio (or pressing F5) allowed the application to continue with no further issues but I wanted to get to the bottom of this issue – I certainly didn’t want this coming back at me when the site goes live.

Clicking on ‘View Detail’ I found the following error message which seemed surprisingly informative;

No default Instance is registered and cannot be automatically determined for type ‘IUserStore<ApplicationUser>’

There is no configuration specified for IUserStore<ApplicationUser>

1.) new ApplicationUserManager(*Default of IUserStore<ApplicationUser>*)
2.) MyDiveLogs.ApplicationUserManager
3.) Instance of MyDiveLogs.ApplicationUserManager
4.) new AccountController(*Default of ApplicationUserManager*, *Default of ApplicationSignInManager*)
5.) MyDiveLogs.Controllers.AccountController
6.) Instance of MyDiveLogs.Controllers.AccountController
7.) Container.GetInstance(MyDiveLogs.Controllers.AccountController)

So – what does that mean?

Well, in a nutshell it means that StructureMap could not determine how to create a concrete instance of IUserStore, which is required to create an ApplicationManager instance which is in turn required to create the AccountController.

This is caused by StructureMap being ‘greedy’ – it will attempt to use the constructor with the most parameters. In the case of the AccountController it will ignore the parameterless constructor and try to use the one that requires an ApplicationUserManager (the root of my issue) and an ApplicationSignInManager.

Having trawled around the web trying to locate a resolution I’ve found a couple of options;

  • Tell StructureMap to use the parameterless constructor instead or
  • Tell StructureMap how to instantiate the instances correctly

The first option is a simple matter of adding StructureMap’s [DefaultConstructor] attribute to the AccountController’s parameterless constructor as below;

[DefaultConstructor]
public AccountController()
{
}

This works fine and if you are worried about how the AccountController will function without being given an ApplicationUserManager and ApplicationSignInManager to use then fret not – take a look at its UserManager and SignInManager properties. The getter will perform a null check and create the appropriate instance as required.
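For reference, this is the shape of those properties in the stock MVC5/Identity template (paraphrased from the template, not copied from my project):

```csharp
private ApplicationUserManager _userManager;

public ApplicationUserManager UserManager
{
    get
    {
        // The null check: fall back to the OWIN context when nothing was injected.
        return _userManager ?? HttpContext.GetOwinContext().GetUserManager<ApplicationUserManager>();
    }
    private set { _userManager = value; }
}
```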

Interestingly, this helps us with the second option for resolving this problem, i.e. telling StructureMap how to create the instances for us.

Locate the IoC.cs class (mine was in a folder called DependancyResolution in the main MVC project) and update the Initialize method as below (note that you may have to import some namespaces)

public static IContainer Initialize() {
    return new Container(c => {
        // ... existing registrations, if any, stay as they were ...

        // ADD THESE LINES
        c.For<System.Data.Entity.DbContext>().Use(() => new ApplicationDbContext());
        c.For<Microsoft.Owin.Security.IAuthenticationManager>().Use(() => HttpContext.Current.GetOwinContext().Authentication);
    });
}


And that’s it – you should now be able to access Actions within the AccountController without this exception being thrown.