Blog

Taking control of my Domain

Some time ago I was watching a Pluralsight course called ‘Master Your Domain’ where Rob Conery explained how to break your reliance on the major service providers for email, source code, blogs and file-sharing and create your own domain to host your data.

Following the course I started hosting my own Git server, Blog and File Sharing service, but Email… well, that was too big a step for me to take at the time. However, times change, and when I started experiencing issues with my email that was the trigger to take the plunge.

What was the problem with Google Mail?

When GMail moved to Inbox I have to say I was less than impressed. For my personal email it was fine – I didn’t mind moving to ‘Inbox Zero’ – but, call me a dinosaur, it just jarred with me when it came to my business account.

Now, I really didn’t want the hassle of moving email providers, so, as many of the ‘problems’ were to do with the Inbox application, I decided to use an alternative email client called WeMail on my Android phone, and this served me very well for a couple of years.

Recently, however, I started noticing multiple ‘draft’ messages hanging around (basically snapshots of messages taken as I typed, even after they had been sent) and issues with message signatures – sometimes they were added, sometimes not. Was this a problem with WeMail or with Google? Who knows.

What about Google Docs?

I was also not overly impressed to see that Google had blocked ‘large numbers’ of users from accessing their files on Google Docs. Admittedly Google were trying to filter out malicious content, but the fact remains that they were scanning everybody’s files for what they deemed to be inappropriate. What if content in my files triggered a false positive and they blocked my access to an important document? And what about my email? I have all sorts of information in there, from company accounts and confidential client discussions to inane conversations with recruiters and colleagues.

Making the move

After deciding to make the move I had a look at what I actually stored on Google services and what needed to be migrated.

Obviously there were a couple of gigabytes of email, but I also had a lot of stuff in Google Docs – from invoices to memes. Where was I going to put it all?

Files and Stuff

As already mentioned above, following Rob Conery’s course I had configured my own Dropbox-like File Sharing service using OwnCloud and this had been running fine for a while now. I had the server installed on a Raspberry Pi sitting on top of a 250GB SSD in an external drive enclosure. With the appropriate DNS configurations and port forwarding on my router this configuration worked well for me, allowing me to share folders with clients for them to upload large media files and letting me transfer files between my Windows workstation & laptops as well as my iMac and MacBook Pro.

Email

As Rob mentions in his course, it’s really not viable to host your own email these days. Messages coming from unknown sources are blacklisted by many services in an effort to reduce the level of spam reaching our inboxes. For this I needed to find an alternate provider; one that provided me with all the features I already had (spam protection, mobile access etc) but with some increased confidence over the privacy of my inbox.

In the course Rob recommends Fastmail and, after reading their Privacy Policy, I was happy to give them a try – they offer a 30-day free trial. I had actually tried them before, but not ‘in anger’ as it were, i.e. I created an account, sent test messages, added appointments etc. but never actually used it on a daily basis.

After exporting my Calendar and Contacts from GMail I set about the import process from within Fastmail. The process itself was pretty straightforward, with clear instructions and troubleshooting advice. I experienced no real problems, but I’m sure that Fastmail support would have been on the case if I had.

The only ‘grumble’ I had at the time was that my Gmail data was imported into a folder called ‘migrated’ – I was expecting my Gmail Inbox messages to appear in my new Inbox. This caused a bit of consternation at the time but looking at it now I’m not so sure it’s a problem – all the data is there and I can easily move things around if I so desire.

Re-configuring my DNS to redirect email to the Fastmail servers was also straightforward and I’m happy to say that a couple of weeks into my trial I’m very happy with the service I’m receiving so will definitely be signing up to the full plan.

So what about Backup?

So I now have my email hosted successfully and my files are back under my control, so we’re all good, yes?

Well not quite.

One of the things we don’t really think about is that, on top of storing all our information and making it available to us online, Google are actually backing this stuff up. If one server were to fail completely then the data is ‘simply’ pulled from another and we never know there was a problem.

Well, the data is now sitting on a drive in my office – what happens if it fails, or the office burns down? How will I get that data back? I need a regular, offsite backup.

The answer was fairly simple and conforms with my need to keep my information private.

I had previously bought a Mac Mini for developing my Xamarin iOS applications (it was later replaced with an iMac), so I fired it up and installed the OwnCloud client onto it. This was set to sync everything to its local drive – and yes, it’s still sitting in my office, so at this point I’ve gained nothing.

I then signed up for a SpiderOak account – initially 250GB, though they later increased this to 400GB – using their 21-day trial. Their ‘SpiderOak One’ client was then installed onto the Mac Mini and configured to back up everything in the OwnCloud sync folder.

I’ve also installed the One client on my workstation and mounted a couple of folders from my Synology NAS onto the Mac Mini for good measure. So far I have backed up almost 100GB of data, so there is plenty of headroom for future expansion.

Going Forward

OK, some of you may be asking about the cost of all this and, yes, there is some additional outlay – my Google Apps account was created when they were free and, to their credit, Google have honoured this long after they started charging for new accounts. But the cost to the business is minimal – and even as a personal user it’s certainly not prohibitive.

The backup solution I have in place does have its downsides – we had a power cut here a while back and I totally forgot to reboot the Mac Mini, so there were no backups for a while.

But the fact is that I now have control over my data and if this takes a little more work and expense then such is life.

Online Tool of the Month – unixtimestamp.com

As developers we know that one of the biggest problems when working with data is handling dates. It seems such a simple thing, yet many of us fall foul of formatting issues and time zone offsets – and that’s when the date is actually human-readable!

I remember when I first encountered a Unix Timestamp in the ‘LastUpdated’ column of a database table and thinking, ‘what the hell is that?!’.

Bear in mind that this was back in the days before the internet and before I had any real experience with Unix/Linux systems – every Windows application I had seen prior to this had its dates stored in the DD/MM/YYYY hh:mm:ss format.

Being faced with something like ‘1519571567’ was a bit confusing to say the least – unless you know that it represents ‘the number of seconds since midnight (UTC) on 1st January 1970’, where would you even start?

Even now that I know what this is, it’s always a bit of a pain point when I’m developing or debugging something with a date in this format (see the quick C# conversion after this list):

  • What time does the timestamp in that database field represent?
  • What time is it now expressed as a unix timestamp?
  • etc
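For what it’s worth, modern .NET makes both conversions a one-liner. A quick illustrative snippet (nothing to do with any particular project):

using System;

class TimestampDemo
{
    static void Main()
    {
        // From a Unix timestamp (seconds since midnight UTC, 1st January 1970) to a readable date
        DateTimeOffset when = DateTimeOffset.FromUnixTimeSeconds(1519571567);
        Console.WriteLine(when.UtcDateTime);    // 25/02/2018 15:12:47

        // And back again - the current time expressed as a Unix timestamp
        long now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
        Console.WriteLine(now);
    }
}

Handy for quick checks, but when you just want an answer without opening an IDE, a browser-based tool is quicker – which is where the site below comes in.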

Over the years I’ve come to fall back on Dan’s Tools Epoch Unix Time Stamp Converter.

The fairly simple interface is great as it doesn’t get in your way and lets you do what you came to do.

To the left of the screen is the current timestamp (or at least the timestamp from when the page was loaded) and how this would be displayed if translated to a number of other standards.

To the right is where you can either enter a date (using clearly marked fields to prevent ambiguity) or a unix timestamp.

Entering the timestamp from a few lines above and clicking ‘Convert’ generated the following:

While entering the date I’m currently writing this blog post returns the following:

This is essentially a copy of the output in the left-hand panel but displaying the results for the entered value. The value is also converted into a number of different formats, e.g. ISO 8601 & RFC 2822.

I’ve found this to be a helpful little tool when working with Unix timestamps – and be sure to check out the other date-related utilities in the top menu bar, e.g. Timezone Converter and Week of the Year.

The Personal Encryptor 1.0 Released


Following on from my post about the UK Government’s campaign to erode our privacy by demanding that tech companies put back doors in their encrypted products, I have created a simple utility to demonstrate how easy it is for a reasonably competent developer to create their own encryption tool using standard development tools and libraries.

Now, I’m not expecting the UK Government to take a blind bit of notice but the fact is that encryption is out there, it’s only mathematics after all, and it’s not going away. You cannot feasibly make maths illegal – although the US did classify encryption as a weapon until 2000 (and to some degree still does).

Anyway, following my commitment to watch at least one Pluralsight course a month during 2018 I opted for Practical Cryptography in .NET by Stephen Haunts to give myself some suitable background.

The course was a minute under four hours and took me a couple of evenings to get through. Cryptography is not the most stimulating subject, but Stephen did his best to keep the information flowing. At times I felt frustrated at how he seemed to labour some points, but the upshot is that by doing this the information did seem to get through and stick. During the course he slowly increased the complexity, developing and enhancing C# code to demonstrate the principles.

It is this code which I have used as a base to create the ‘Personal Encryptor’ (hereafter referred to as PE) – a command line application that can be used to generate encryption keys, encrypt and, of course, decrypt data into files that can be safely sent over the Internet. Only someone with the required public and private keys will be able to decrypt the file and view the original data.

I’ll probably put another post together shortly diving a bit deeper into the functionality and explaining the contents of the output file – but I highly recommend you watch the above course, as Stephen knows the subject inside out and does a great job of explaining it.

Why would I need/want to encrypt a file?

Imagine the following scenario:

Alice and Bob want to exchange private messages with each other; maybe they are planning a surprise birthday party or sharing ideas about a new business venture. Whatever the messages contain, they are Alice and Bob’s business and nobody else’s.

  1. Alice and Bob both download the PE application and copy it to a location on their Windows PC (Mac version coming soon).
  2. They then use the utility to generate a Public and Private key pair – which will create two XML files (there’s a small code sketch of this step just after this list).
  3. They each send each other their PUBLIC keys (this is just an XML file and can be freely sent over the Internet or via Email).
  4. Both Alice and Bob copy their PRIVATE keys to a safe location (maybe a secure USB key – or a normal USB key which is stored in a safe).
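Incidentally, generating a key pair like this takes very little code with the standard .NET cryptography classes. The snippet below is just an illustrative sketch – the file names and key size are my own choices here, not necessarily what PE uses:

using System.IO;
using System.Security.Cryptography;

class KeyPairGenerator
{
    static void Main()
    {
        // Create a new 2048-bit RSA key pair
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            // The private key (which includes the public parts) - keep this file somewhere safe
            File.WriteAllText("AlicePrivateKey.xml", rsa.ToXmlString(true));

            // The public key - this is the file that can be freely emailed to other people
            File.WriteAllText("AlicePublicKey.xml", rsa.ToXmlString(false));
        }
    }
}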

Now Alice wants to encrypt a file – a PowerPoint presentation for their new product – and send it to Bob (there’s a rough code sketch of these steps after the decryption list below):

  1. Alice uses the PE application to encrypt the file using Bob’s PUBLIC key.
  2. The PE application Digitally Signs the encrypted data using Alice’s PRIVATE key.
  3. A text file is created containing the encrypted data and everything needed to verify the contents has not been tampered with and to confirm that Alice encrypted it.
  4. Alice then emails the file to Bob as she normally would if she was sending a photo of her cat!

Bob receives the message and downloads the encrypted file to his computer.

  1. Bob uses PE to decrypt the file by specifying the location of his PRIVATE key and Alice’s PUBLIC key.
  2. The PE utility will check the digital signature using Alice’s PUBLIC key to confirm that it was signed with her PRIVATE key.
  3. It will then check the integrity of the package to ensure that it has not been tampered with in transit.
  4. If all is well then the PE application will decrypt the file and write the contents out to a location that Bob has specified.
  5. Bob can now endure – sorry, enjoy – Alice’s PowerPoint presentation.
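For anyone curious about what the encryption and signing steps look like in code, the usual pattern for this kind of tool is hybrid encryption: a random AES session key encrypts the data, the recipient’s RSA public key encrypts that session key, and the sender’s RSA private key signs the result. The sketch below is my own simplified illustration of that pattern, not the actual PE source:

using System.Security.Cryptography;

// Everything needed to reconstruct and verify the original file at the other end
class EncryptedPacket
{
    public byte[] EncryptedData;
    public byte[] Iv;
    public byte[] EncryptedSessionKey;
    public byte[] Signature;
}

class HybridEncryptionSketch
{
    // Encrypts 'data' for the recipient and signs it with the sender's key.
    // The RSA instances would be loaded from the public/private key XML files.
    public static EncryptedPacket EncryptAndSign(byte[] data, RSA recipientPublic, RSA senderPrivate)
    {
        using (var aes = Aes.Create())
        {
            // 1. Encrypt the data itself with a random AES session key
            byte[] encryptedData;
            using (var encryptor = aes.CreateEncryptor())
            {
                encryptedData = encryptor.TransformFinalBlock(data, 0, data.Length);
            }

            // 2. Encrypt the AES session key with the recipient's RSA public key (OAEP padding)
            byte[] encryptedSessionKey = recipientPublic.Encrypt(aes.Key, RSAEncryptionPadding.OaepSHA1);

            // 3. Sign the encrypted data with the sender's RSA private key
            byte[] signature = senderPrivate.SignData(encryptedData, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            return new EncryptedPacket
            {
                EncryptedData = encryptedData,
                Iv = aes.IV,
                EncryptedSessionKey = encryptedSessionKey,
                Signature = signature
            };
        }
    }
}

Decryption is simply the reverse: verify the signature with the sender’s public key, decrypt the session key with the recipient’s private key, then decrypt the data with AES.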

Of course, if Alice (or Bob) just wanted to encrypt a file for their own personal use and not for sharing, it is perfectly feasible to provide their own Private AND Public keys to encrypt the data. Both keys will then be required to decrypt it.

And that’s it, privacy restored/reclaimed.

I can now safely download my Lastpass vault in plain text, encrypt it and save it to any cloud drive I like, secure in the knowledge that, as long as my private key remains under my control, nobody can decrypt it to access its contents. Nothing illegal there – these are passwords to legitimate sites (Amazon, Pluralsight, Microsoft, Apple etc.) and need to be protected. A valid use of The Personal Encryptor.

Going Forward

Yes, at the moment it requires users to have some familiarity with the Command Line, but this project was always intended to be a proof of concept. The original aim was to explore encryption to enable me to implement it in an existing mobile Chat application.

Creating a simple GUI would certainly be possible – a simple Winforms or WPF application to collect file paths and call out to the command line utility shouldn’t take too long for a competent developer to write. That said, I’m probably going to focus my attention elsewhere.

While using the Microsoft libraries is perfectly valid in my opinion, I am aware that many people will wince just a little bit. With this in mind I intend to investigate using the libSodium Crypto Library which Steve Gibson is such a fan of – so that’s good enough for me.

You can download the latest version of the Personal Encryptor application by clicking here. Alternatively you can download the full source from Github.

Parsing Console Application Arguments using CommandLineParser

When we open Visual Studio and click File > New we are greeted with a huge list of project templates to choose from. Now and then we may opt for a simple Console Application for a quick one-off utility, e.g. post-processing some .csv files or images.

Similarly, we have all used command line utilities which require numerous arguments and switches to ‘tune’ exactly what we want them to do, e.g. git or tar.

Well I’m looking to create a Command Line utility that will allow users to encrypt and decrypt textual messages.

Why? I hear you ask – well, check out my thoughts on the UK Government’s attempts to get WhatsApp to create a backdoor.

The utility will allow users to:

  • Generate Public/Private Key Pairs
  • Encrypt textual messages, packaging them for sending to the intended recipient
  • Decrypt packaged messages

Each action will require/allow a number of parameters to be set, specifying key bit lengths, key names, output locations and of course the message to be encrypted or decrypted.

Something like this:

C:\>Encryptor generateKeys -n Daves -o C:\Users\Dave\MyKeys

which would create a pair of key files called DavesPrivateKey.xml and DavesPublicKey.xml in C:\Users\Dave\MyKeys.

But what is the best way to do this?

We could handle it in the Program.Main method by locating arguments and setting their values, like this
(Gist: https://gist.github.com/OnTheFenceDevelopment/d08bb935661c91d3a11dac50daacbb09)
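The gist isn’t reproduced here, but the manual approach generally ends up looking something like the following – purely illustrative, and the -k switch for the key length is just my guess at a name:

using System;

class Program
{
    static int Main(string[] args)
    {
        // Crude manual parsing - walk the arguments looking for known switches
        string keyName = null, outputFolder = null;
        int keyLength = 2048;

        for (int i = 0; i < args.Length - 1; i++)
        {
            switch (args[i])
            {
                case "-n": keyName = args[i + 1]; break;
                case "-o": outputFolder = args[i + 1]; break;
                case "-k": keyLength = int.Parse(args[i + 1]); break;   // blows up if the value isn't numeric
            }
        }

        if (keyName == null || outputFolder == null)
        {
            Console.WriteLine("Usage: Encryptor generateKeys -n <name> -o <output folder> [-k <key length>]");
            return 1;
        }

        Console.WriteLine($"Generating a {keyLength}-bit key pair '{keyName}' in {outputFolder}...");
        return 0;
    }
}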

But that’s a bit nasty: it relies on each parameter having a value and on it being specified in the correct way to allow casting etc. There has to be a better way… well, there is, and it comes in the form of a NuGet package called, suitably enough, CommandLineParser.

Now, I didn’t find the documentation (the ReadMe.md on the project’s GitHub page) that helpful and it took a lot of trial and error to work out how to use it for my needs. When I got to grips with it, though, I was impressed.

After installing CommandLineParser via NuGet I needed to create a new class to define each operation, i.e. GenerateKeys, Encrypt and Decrypt.

Each class is decorated with a ‘Verb’ attribute – this defines the action – while the properties are decorated with ‘Option’ attributes.
(Gist: https://gist.github.com/OnTheFenceDevelopment/30640f89914a961bf0d2f81995e36c19)

So the above code defines the Generate Keys action and provides options for Key Length, Name and Output Folder. Notice that the Key Length will default to 2048 bits if not specified by the user. It is also possible to make options mandatory and the operation will fail if they are not specified.
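If you’ve not used CommandLineParser before, such a class looks roughly like this – a sketch along the same lines as the gist above; the property and switch names are mine, and the Default property name can vary between package versions:

using CommandLine;

[Verb("generatekeys", HelpText = "Generate a Public/Private key pair.")]
public class GenerateKeysOptions
{
    // Defaults to 2048 bits if the user doesn't specify a value
    [Option('k', "keylength", Default = 2048, HelpText = "Key length in bits.")]
    public int KeyLength { get; set; }

    [Option('n', "name", Required = true, HelpText = "Name used for the key pair files.")]
    public string Name { get; set; }

    [Option('o', "output", Required = true, HelpText = "Folder to write the key files to.")]
    public string OutputFolder { get; set; }
}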

When the Console Application project was created, a Program.cs class was added containing a single method called Main. I’ve modified the signature to return an integer instead of void, as I think it is good practice to return exit codes from a command line utility.

(Gist: https://gist.github.com/OnTheFenceDevelopment/2725e47e2482535a681e2b794901fcc0)
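Again the gist itself isn’t shown here, but the wiring in Main looks something like this sketch – assuming EncryptOptions and DecryptOptions classes defined in the same way as GenerateKeysOptions above:

using CommandLine;

class Program
{
    static int Main(string[] args)
    {
        // Parse the arguments against the three verb classes and dispatch to the matching handler,
        // returning whatever exit code that handler produces
        return Parser.Default.ParseArguments<GenerateKeysOptions, EncryptOptions, DecryptOptions>(args)
            .MapResult(
                (GenerateKeysOptions opts) => GenerateKeyPair(opts),
                (EncryptOptions opts) => Encrypt(opts),
                (DecryptOptions opts) => Decrypt(opts),
                errs => 1);    // parsing failed - the parser has already written the help text to the console
    }

    // Each handler returns an exit code (0 for success)
    static int GenerateKeyPair(GenerateKeysOptions options) { /* ... */ return 0; }
    static int Encrypt(EncryptOptions options) { /* ... */ return 0; }
    static int Decrypt(DecryptOptions options) { /* ... */ return 0; }
}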

To test the configuration I can right-click on the project file, open the Properties page and click on the ‘Debug’ option. Enter suitable values into the ‘Command Line Arguments’ as below

and then run the application (I had a breakpoint on the return statement within the GenerateKeyPair method).

If the parser fails for any reason then it will automatically generate the required output and error ‘gracefully’.

Try dropping the value for one of the options and rerunning the application – below I dropped the DavesMessage value from the -n parameter:

Obviously I want to add another couple of Option configurations and this is easy enough to do – take a look at the sample app in the GitHub repo, in particular the Program.cs file, to see how this is done. Alternatively you can keep track of this application when it hits GitHub, as I will be open-sourcing the code.

WhatsApp – a Haven for Paedophiles and Terrorists?

Yep – thought that would get your attention!

It’s headlines like this that the UK Government (and the press) are throwing around in order to drum up support for one of the most intrusive and privacy damaging campaigns to date.

The premise is that bad people use these services, which make heavy use of encryption to keep messages private, and by doing so hamper the security services who can no longer access private information in order to monitor them and stop them from doing bad things.

Now I’m not denying that these bad people do use WhatsApp (and similar applications) to enable them to communicate without their messages being intercepted. But I use WhatsApp and so do my wife and kids and we are not bad people. If WhatsApp are expected to put a backdoor into their systems to allow access to the content by so-called ‘authorised agencies’ then what about our privacy?

When I discuss this with people many will say, “well, if you’re not doing anything wrong then what’s the problem?”. However, when I ask them for their email and social media passwords they are somewhat reluctant to hand them over – “but if you are not doing anything wrong then why do you care?”, I ask.

The answer is simple: their email and social media feeds are private and none of my business. Just because something is private does not mean it’s illegal or even wrong – just private.

We may be discussing our medical history, financial details, travel plans or just what time we will be home for tea but that’s our business, it’s private and nobody else’s business except ours and whoever we’re talking to.

So while I am willing to accept that bad people use these platforms in an effort to hide their activities, I’m pretty sure that they make up a tiny percentage of the 1,000,000,000 (and increasing) WhatsApp users. Do we all have to give up our right to privacy for the sake of these people and will it even make a difference?

The Snoopers Charter

In 2016 the Investigatory Powers Act, or Snoopers Charter as it was dubbed, was passed into Law and with it the privacy of every UK citizen was eroded a little more.

Did you know that under this legislation your Internet Service Provider now has to keep your browsing history for 12 months and provide it on demand to authorised agencies?

If you did then you may have assumed that as long as you are not “doing anything wrong” then you have nothing to worry about as the Police and Security Services are only looking for bad guys.

Well, did you also know that on the list of agencies that can access these records are:

  • HMRC (the tax man)
  • The Department of Work and Pensions
  • The Department of Transport!
  • The Welsh Ambulance Services and National Health Service Trust!!
  • The Food Standards Agency!!!

Now what on earth do the Food Standards Agency need with my internet browsing history? What possible use could it be to them?

If the UK Government were to enforce a backdoor into WhatsApp and other platforms like it – who would be able to access the information and how secure would it be?

But that’s not all. If the Government weakens encryption and demands backdoors be created in otherwise secure systems, who knows who can gain access to the information that was once protected?

If SSL certificates (which put the padlock in your browser’s address bar to indicate that the page is secure) become less secure, how safe are you when you are accessing your online banking or shopping on Amazon?

The truth of the matter is that if the UK Government gets its way it’s not really them that we have to worry about – it’s the hackers. They will have a field day with all this insecure data flying over the wire. All it would take is one poorly implemented backdoor and then all bets are off. If Government agencies cannot even secure their own data, what chance do they have of securing the keys to ours?

A Developer’s Viewpoint

So, apart from being a UK citizen, what has this got to do with me and why am I ranting about it?

Well, as a developer I know that writing a chat application is not really that hard – in fact I recently read a book which guided the reader through cross-platform Xamarin development, and the target project was a cross-platform chat application. Moreover, the source code is actually on GitHub so there’s a starting point right there.

Currently that XamChat application stores and sends data in plain text, so it is neither secure nor private. But how difficult would it be to upgrade the app to use encryption? Even though I am not a cryptographer by any stretch of the imagination, I’m guessing not that hard at all.

And that’s the point – if I can do this then any reasonably competent developer could do it too. If the UK Government were to make it unattractive for the bad guys to use secure apps like WhatsApp then there is nothing stopping them from writing their own end-to-end encrypted messaging system using state-of-the-art encryption that cannot be broken with today’s technology.

Meanwhile the rest of us will be using insecure systems that leak information and make us vulnerable to malicious hackers keen to exploit these weaknesses, gather personal information and use it to their own ends.

Going Forward

In an effort to prove my point, I’m going to take a run at this. Ultimately I’m going to see just how hard it is to bolt encryption into the XamChat application.

I’m not expecting (or intending) to create a WhatsApp killer or even anything that polished – just something usable to prove the point.

First thing to do is to get up to speed on encryption, especially in .NET. There’s a four-hour course on Pluralsight, so I can kill two birds with one stone: honour my commitment to watch one Pluralsight course a month and create a command line application to generate encryption keys and encrypt & decrypt text data in preparation for creating SecureXamChat.

Edit – 15th Feb 2018: Subsequent to me posting this there was a great article in The Guardian which (obviously) makes a much better job of getting the point across and is well worth a read.

So, what will 2018 be the year of?

They say that life is what you make it, so it’s time to make some resolutions… yes?

Well, if John Sonmez from Simple Programmer is to be believed – maybe not!

I receive regular email updates from the Simple Programmer website and the one I received on 27th December caused me to stop and think.

Probably based on one of John’s blog posts from 2016, the subject of the email was ‘Don’t make resolutions this New Year, make a commitment’. Now, I initially thought that these amounted to the same thing but changed my mind after reading the parting shot of the email, which read:

Let me put it this way, when you need to take a taxi to the airport, do you want your taxi driver to resolve to be there at 8:30 AM or do you want him to commit to being there at that time?

The answer is obvious (hopefully) so I’ve decided to make some commitments for 2018:

  • I will watch at least one Pluralsight course a month
    • My technology focus will be .NET Core, Azure, ReactJs
  • I will watch at least one Xamarin University session (attending those required to maintain my certification)
  • I will blog twice a month (not including the Online Tool of the Month posts)
    • To keep me honest I will probably post findings from my Pluralsight courses and Xamarin investigations (proving that I’ve actually honoured the above commitments)
    • Other topics will include Privacy and Encryption which seem to be bad words these days

So that’s what I will commit to this year – maybe I’ll be in a position to commit to more but I’ll review my progress mid-2018 and see how I’m doing.

Online Tool of the Month – BuiltWith

Have you ever looked at a website and thought ‘I wonder what that’s written in’? Well, even if you haven’t, I certainly have, which is why I was interested to hear about BuiltWith.

Simply put, BuiltWith puts websites under the microscope and produces a detailed report about what it sees.

From web servers and SSL certificate providers through programming languages, JavaScript libraries, content management systems and advertising platforms – it sees it all. The report contains links that let you see technology trends across the Internet, which may well assist with infrastructure decisions for your next project.

Pointing BuiltWith at this site, I could see not only the current technologies listed, including version numbers where applicable, but also that 56 technologies had been removed since July 2012 (I migrated from SquareSpace to WordPress fairly recently).

Registering for a free account would allow me to see much more detail and view historical data but for now I have found the open version sufficient for my needs.


Ditching AntiVirus

Just like us, as computers get old they tend to slow down. It’s a fact of life pure and simple.

With computers it tends to be due to the hardware not keeping up with the new requirements of today’s applications (just try running later Windows or Office on a Pentium 4 and you’ll see what I mean). We tend to put up with the slow down until something finally gives out, a hard-drive or motherboard for instance, and then we buy a new one.

Well my Windows 10 development workstation was slowing down and while it’s a few years old now, it is still a pretty high spec – i7-3770 with 32GB RAM and SSDs – this thing used to fly.

But recently it was noticeable that it was taking longer to boot, applications like Visual Studio and SQL Management Studio seems to struggle to load and surfing the web was a bit of a grind.

I decided to reinstall from the ground up and make sure that I didn’t install anything that I didn’t really need for development (like Steam!). I also decided that I was not going to reinstall my AntiVirus!!!

“Oh My God!” – I hear you shout. Are you insane? Don’t you know how many viruses there are out there and how quickly your system could be compromised?

Well, no, I’m not insane (or at least I don’t think so) and yes, I do know that there are a lot of viruses out there, but I’m not doing this without due thought and advice. I also (probably) wouldn’t have considered junking it unless it had crossed the line in a number of areas.

Why do I think it’s a good idea to run without Anti-Virus?

I listen to a number of Podcasts and one of them is Security Now from the Twit Network. When someone with the knowledge, experience and understanding that Steve Gibson has says that he doesn’t use a third party Antivirus then there must be something in it.

What does Steve use? Well, as he is running Windows 7, Steve is using the built-in Security Essentials (it’s Defender in Windows 10). Yep – he’s using what comes in the box! And the reason for that is that third-party Anti-Virus is incredibly invasive and has to inject code deep into the Operating System. This, perversely, increases the attack surface for malicious code. Bugs in products like Symantec/Norton have exposed users to a greater risk of infection while they believed themselves to be safe. I’m not even going to start talking about Kaspersky!

In the 10 years or so that I’ve been using my current Anti-Virus application, Avast, I’ve only had about half a dozen warnings about suspect files – and there is no reason to believe that Defender would not have detected the same files, or that they were even actually malicious (I get a number of false-positive alerts when I’m compiling code in Visual Studio – and I don’t write viruses!). I tend not to surf around in the darker parts of the web and am pretty careful about what I install.

So, I’m not running without Anti-Virus – just without third party Anti-Virus.

What lines did Avast cross to push me down this road?

Well, there are a couple of reasons really:

Recently it has been getting in the way of my work.

Running a WebAPI application in IIS on the workstation and accessing it from the iPhone simulator on the iMac was never a problem. So when I started getting ‘Failed Connection’ errors I assumed it was a configuration issue or a coding error. After an hour or so of debugging I found that Avast was blocking requests to IIS – which it had never done before. Turning the firewall off confirmed the problem – I just had to remember to do it again the next time I accessed the WebAPI from another system.

Other applications failed to start with the Avast firewall engaged (when they had played well together in the past) and efforts to resolve the problem by Repair/Reinstall all failed.

But the big thing that did it for me? The real big step over that line we call privacy was when I logged onto my internet banking and Avast displayed this:

Now call me a member of the tin-helmet brigade if you like but when I access my online banking over a secure connection I find it a bit disconcerting when something says “I can see what you are doing!”.

It was a reminder to me that, like most (all?) third-party AV products out there, Avast can intercept and analyse traffic being sent over a secure connection through my browser. To do so it has to install a trusted root certificate on my computer, which means it can act as a ‘man in the middle’ – intercepting my traffic, checking it and then passing it on.

And it’s the man in the middle part combined with the increased attack surface and buggy applications part that worries me and that’s why I’ll be sticking with Defender for now.

Why install racing harnesses in your car when the built-in seat belts will keep you just as safe in normal use?


Online Tool of the Month – QuickType

I was recently working on a freelance project which required interaction with a 3rd party webservice that returned a JSON result.

While connecting to the service and fetching the data was a fairly trivial task, looking at the data being returned it was clear that a lot of POCO/DTO classes were going to be required.

Obviously these are easy to create but they are time consuming and prone to the odd typo. I then remembered a labour saving service called QuickType which would take the returned JSON and generate the C# code for me – I just needed to pull it into my project.

When you navigate to the QuickType site you are presented with some sample input data in the left panel and the resulting C# code in the main panel. There are also some options in the panel to the top right that we will need to tweak.

In the screenshot below I’ve pasted in some JSON which was generated by another online service called, oddly enough, JSON Generator (I’ll probably feature this tool next month). I’ve also used the ‘Target Language’ dropdown to select C#, specified a Namespace and collection type to use (List<T>).


As you can see (notwithstanding the truncated lines, which are an artefact of my screen capture tool), not only has QuickType generated the main Person class, it has also detected and created the ‘Friend’ class, along with some serialisation and conversion classes (the methods inside which can be extracted and used as required – or discarded).

It has also added JsonProperty attributes to the properties (assuming NewtonSoft.Json) and provided usage details
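To give a rough idea of the output, the generated classes look something like this (hand-typed here for illustration, so the property names reflect my sample JSON rather than QuickType’s exact output):

using System.Collections.Generic;
using Newtonsoft.Json;

public class Person
{
    [JsonProperty("name")]
    public string Name { get; set; }

    [JsonProperty("age")]
    public long Age { get; set; }

    [JsonProperty("friends")]
    public List<Friend> Friends { get; set; }
}

public class Friend
{
    [JsonProperty("id")]
    public long Id { get; set; }

    [JsonProperty("name")]
    public string Name { get; set; }
}

// Usage: var person = JsonConvert.DeserializeObject<Person>(json);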


Now obviously there is no voodoo at work here – creating these classes is pretty trivial – but this is a great time saving tool which can free up your development time to actually develop instead of churning out this mundane code.

Online Tool of the Month – Polyfill.io

Browser compatibility is a pain – and that’s a fact. But when your client says, “we need to support IE9 and upwards” you can feel the dread come over you.

Well, fear not, as help is at hand in the form of Polyfill.io, a CDN-hosted service which will serve up a custom set of polyfills to fill those functionality gaps in whatever browser makes the call.

Need some of those ‘Math’ or ‘console’ functions that are so readily available in non-IE browsers? Well, Polyfill.io has your back.

Add a single line of code to your markup and voila – you’re good to go. And before you start panicking about large request sizes, remember the CDN will tailor the response to only those features that the current browser is lacking, which is pretty neat.
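At the time of writing that single line looked something like the one below – check the Polyfill.io site for the current URL and the optional ‘features’ query string:

<!-- Served from the CDN; the response is tailored to whichever browser requests it -->
<script src="https://cdn.polyfill.io/v2/polyfill.min.js"></script>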

From the feature list on their site it is easy to see what is included natively in each browser and what can be polyfilled.

But there’s more – what if you only wanted a single feature, say ‘Function.name’, in IE9/10/11? Even though the service will return a tailored set of polyfills, it is possible to view the detection and polyfill scripts for any feature by clicking on the ‘hamburger’ menu to the right of the feature – this takes you to the appropriate location in the GitHub repo (yep, this is all open-source stuff).

The downside to using a CDN, though, is that if it goes down (and it did once for a client of mine) it could leave your site somewhat ‘hobbled’, depending on the features you were relying on.