End of the road for 8yo Workstation?

Back in 2013 I bought myself a shiny new, custom-built workstation from Scan Computers. Costing around £1,300, it was a pretty decent spec for the time:

  • Intel i7-3770 3.4GHz CPU
  • 32 GB RAM
  • Nvidia GeForce GTX 650Ti
  • 250 GB SSD + 500GB spinning rust HDD

By today's standards this is probably pretty lame, but it certainly knocked my old PC into a cocked hat! It could drive three monitors and had all the speed I needed, with a good amount of headroom.

That said, it still runs pretty well today (some 8 years later) and has served me very well throughout my contracting/freelance work. Today it’s my daily driver while I’m working from home.

Booting from cold to logged in and ready to work takes around 25 seconds (pretty slow in these days of instant gratification, but it takes me that long to pour a coffee so it's not a problem) and I've never really had any major speed issues with anything I've thrown at it… until recently, that is.

Over the past few weeks I've experienced numerous reboots without any warning whatsoever. Fortunately I'm working via a Remote Desktop Connection to my work PC, so all that happened was my connection dropped and I lost nothing. Frustrating nonetheless.

After trawling through the event logs etc. and finding nothing, I wondered if this was the end of the road for my trusty workstation – was the hardware starting to show its age and let me down?

If so, could I use the MacBook Pro or Surface Pro instead? Or was I looking down the barrel of speccing up a new system and seeing if I could justify the cost, considering I'm not working for myself anymore?

For some reason the first thing I thought about was the CPU – maybe it was showing its age and I was over-revving it (even though my work PC would have been doing all the hard work when I experienced the reboots!). So after looking around for some recommended utilities I installed 'Core Temp' and fired it up – the aim being to monitor the CPU temperature to see if it was rising to critical levels and causing the system to 'panic'!

Now, the screenshot to the right isn’t from the initial monitoring but what it looks like now and if you know anything about CPU temperatures (I didn’t until I looked it up) you’ll know that this is running within the normal range. This wasn’t the case a day or so ago!

When I initially opened Core Temp the workstation's four cores were running in the low 80s (°C) – a quick Google told me this wasn't good, and while it may not have been my problem it was certainly something that needed attention.

Now, I’ve hoovered out my case a number of times over the years, it’s sitting on the floor under my desk after all, but I was certainly surprised to find that I couldn’t actually see the fins in my CPU heatsink from above.

After removing the heatsink and fan assembly and separating the two, this is what I found!

Now that’s pretty disgusting, but in my defence that’s not just my skin cells – my home office is in the loft room and recently we’ve been in and out of the attic spaces to update the insulation and board it out for storage – this all creates a certain amout of dust which my workstation has been doing a good job of collecting (on the CPU and as it happens the GPU heatsinks).

After a good clean and a fresh layer of thermal paste my temperatures are now back in the normal range (as shown in the above screenshot from Core Temp) and I've not suffered a reboot since.

So while I can breathe easy now, this has been a warning shot across my bow – this thing isn't going to last forever after all.

The experience has prompted me to review the contents of this system to see how badly impacted I would be if (when?) it goes bang for good – and the results were encouraging.

There isn’t really anything on here that I couldn’t access from the Mac or Surface. Pretty much everything is either in the cloud or on my Synology NAS (which is also backed up on AWS S3).

All in all I’m happy that I don’t have to put my hand in my pocket for a new system and that the current one is not a single point of failure – I would still be able to call on one of my other systems and get back up and running in pretty much no time.

Happy Days 🙂

Help – A Covid-19 Tracker app has been secretly installed on my Phone!

On LinkedIn the other day I noticed a post claiming that Covid-19 tracker apps had been installed on everybody’s smart phone and that we were all being tracked.

Ok, so let's ignore the fact that we can be tracked via our phones anyway and look at what the fuss is about.

If I open up the settings on my OnePlus 6T Android phone I see a new item – ‘COVID-19 exposure notifications’. This is what some people are losing their minds about right now.

I on the other hand am not. Why not?

Well, as a developer (web and mobile) I listen to numerous podcasts with presenters and guests who know what they are talking about, and one such episode focused on the collaboration between Google and Apple to create the foundations for Covid-19 tracking apps and roll them out to their phones during a standard update cycle.

In this episode, which non-techies will find boring as hell, security expert Steve Gibson analyses the technology and gives it the thumbs up from a security and privacy point of view. This is a man I trust when it comes to these things.

But here’s the thing that you may have missed – this is the foundation for app development, it is NOT a tracking app itself.

If people bothered to tap on the above settings option and read the details – instead of taking screenshots and posting them on LinkedIn (and probably Facebook and Twitter) claiming that something underhanded is going on – they would have seen that this functionality is only activated when a compatible application is installed.

There’s other information there too which may have reassured them (or not) and a link for them to find out even more about this functionality.

In case you are concerned that someone could create an app and secretly access this functionality for nefarious reasons: not just any Tom, Dick or Harry can knock up a Covid-19 tracking app using this API.

Only third party companies affiliated with a public health authority or government can use it and that will be tightly controlled by Apple and Google.

So what do I do now?

Frankly, nothing. There is nothing you can do to remove this nor is there any need for you to do so. If you don’t install a tracking app, this functionality won’t be activated on your phone. Simple as that.

If you don’t trust Google or Apple (or your Government) then you probably want to think about changing your phone to something a bit less “smart”.

As an aside….

Above I linked out to the Security Now podcast with Steve Gibson where he gave the thumbs up to the technology.

A later podcast is entitled “Contact Tracing Apps RIP” where he explains that while the technology is sound, the system will probably not work in the real world.

Why is this?

Well, research suggests that for the system to work effectively it would need at least 80% of the population to install it on their devices – and if the above hysteria is anything to go by, that’s just not going to happen.

I’ll leave it there….

Covid19 – A Privacy Warning

In these weeks of lockdown in the UK due to Covid-19 there have been a number of incidents of the police overstepping their powers;

The police chief was forced to u-turn on his threat, while the forces involved in the other two incidents say that the officers were 'well intentioned but over zealous' – but to my mind, that's not the point.

The point is that there will always be people in a position of authority or power who overstep their remit – and when it comes to our privacy, that's not a good thing.

Continue reading “Covid19 – A Privacy Warning”

NDC London – A Little Review

So, the last developers conference (if you could call it that) I went to was way back in 2001 when we had WAP phones and if you had a Pentium 4 computer you were some super-techno hacker.

Well, times have changed somewhat; we now have smartphones with more memory than we had hard drive space back then, and I'm writing this post on a workstation with 8 cores and 32GB RAM (and this isn't even cutting edge). Add to that the cloud, driverless cars and social networks with billions of users (for better or worse) and we have come a hell of a long way.

Well, I decided to bite the bullet and shell out for a super early-bird ticket and get myself up to London for a few days to Geek-Out.

Continue reading “NDC London – A Little Review”

The Scourge of Email Click-Bait

We all get some spam in our inboxes – despite the best efforts of our email hosts, be they Google or otherwise. But another type of message is starting to gain traction, and I receive a number of these a week now – normally from recruiters, it has to be said. They are akin to the click-bait links you see all over the web (you know, the ones that normally end with 'you'll never guess what happens next').

So, what am I talking about? Well, from this morning's inbox we have 'Exhibit 1':

I’ve blurred the sender (although as I type I don’t really know why) but the subject line starts ‘Re:’ which would indicate that this is a reply to an email that I’ve sent – standard email client functionality. But I’ve never emailed (or even heard of) the sender or their company.

It's just a ruse to get me to click on the message and read what they have to say – because the premise is that we have done business in the past.

Continue reading “The Scourge of Email Click-Bait”

On The Fence Development – What’s All That About Then?

I've been contracting for over seven years now and during that time I've had a number of clients, friends and fellow contractors ask me "…why 'On The Fence'? What's that all about?".

Ignoring the fact that the blog I initially hosted on this domain was about my experiences with Linux and Open Source while working day to day as a .NET Developer using Windows, I think that the name fits – it’s all about not putting all your eggs in one basket as it were.

I think there is quite a lot of ground between being a 'Jack of All Trades' and a 'One Trick Pony', and as a contractor I think that's a good place to be.

Continue reading “On The Fence Development – What’s All That About Then?”

Taking control of my Domain

Some time ago I was watching a Pluralsight course called ‘Master Your Domain’ where Rob Conery explained how to break your reliance on the major service providers for email, source code, blogs and file-sharing and create your own domain to host your data.

Following the course I started hosting my own Git server, blog and file-sharing service, but email… well, that was too big a step for me to take at the time. However, times change, and when I started experiencing issues with my email that was the trigger for me to take the plunge.

What was the problem with Google Mail?

When GMail moved to Inbox I have to say I was less than impressed. For my personal email it was fine – I didn’t mind moving to ‘Inbox Zero’ but, call me a dinosaur, it just jarred with me when it came to my business account.

Now, I really didn’t want the hassle of moving email providers so as many of the ‘problems’ were to do with the Inbox application I decided to use an alternative email client called WeMail on my Android phone and this served me very well for a couple of years.

Recently, however, I started noticing multiple 'draft' messages hanging around (basically 'snapshots' of messages as I was typing that had then been sent) and issues with message signatures – sometimes they were added, sometimes not. Was this a problem with WeMail or Google? Who knows.

What about Google Docs?

I was also not overly impressed to see that Google had blocked ‘large numbers’ of users from accessing their files on Google Docs. Admittedly Google were trying to filter out malicious content but the fact remains that they were scanning everybody’s files for what it deemed to be inappropriate. What if content in my files triggered a false positive and they blocked my access to an important document? What about my email? I have all sorts of information in there from company accounts and confidential client discussions to inane conversations with recruiters and colleagues.

Making the move

After deciding to make the move I had a look at what I actually stored on Google services and what needed to be migrated.

Obviously there were a couple of gigabytes of email, but I also had a lot of stuff in Google Docs – from invoices to memes – where was I going to put it all?

Files and Stuff

As already mentioned above, following Rob Conery’s course I had configured my own Dropbox-like File Sharing service using OwnCloud and this had been running fine for a while now. I had the server installed on a Raspberry Pi sitting on top of a 250GB SSD in an external drive enclosure. With the appropriate DNS configurations and port forwarding on my router this configuration worked well for me, allowing me to share folders with clients for them to upload large media files and letting me transfer files between my Windows workstation & laptops as well as my iMac and MacBook Pro.


As Rob mentions in his course, it’s really not viable to host your own email these days. Messages coming from unknown sources are blacklisted by many services in an effort to reduce the level of spam reaching our inboxes. For this I needed to find an alternate provider; one that provided me with all the features I already had (spam protection, mobile access etc) but with some increased confidence over the privacy of my inbox.

In the course Rob recommends Fastmail and reading their Privacy Policy I was happy to give them a try – they offer a 30 day free trial and I did give them a try previously but not ‘in anger’ as it were, i.e. I created an account and sent test messages, added appointments etc but never actually used it on a daily basis.

After exporting my calendar and contacts from GMail I set about the import process from within Fastmail. The process itself was pretty straightforward, with clear instructions and troubleshooting advice. I experienced no real problems, but I'm sure that Fastmail support would have been on the case if I had.

The only ‘grumble’ I had at the time was that my Gmail data was imported into a folder called ‘migrated’ – I was expecting my Gmail Inbox messages to appear in my new Inbox. This caused a bit of consternation at the time but looking at it now I’m not so sure it’s a problem – all the data is there and I can easily move things around if I so desire.

Re-configuring my DNS to redirect email to the Fastmail servers was also straightforward and I’m happy to say that a couple of weeks into my trial I’m very happy with the service I’m receiving so will definitely be signing up to the full plan.

So what about Backup?

So I now have my email hosted successfully and files are back under my control so we’re all good yes?

Well not quite.

One of the things we don't really think about is that on top of storing all our information and making it available to us online, Google are actually backing this stuff up. If one server were to totally fail then the data is 'simply' pulled from another and we never know there was a problem.

Well, the data is now sitting on a drive in my office – what happens if it fails, or the office burns down? How will I get that data back? I need a regular, offsite backup.

The answer was fairly simple and conforms with my need to keep my information private.

I had previously bought a Mac Mini for developing my Xamarin iOS applications (it was later replaced with an iMac), so I fired it up and installed the OwnCloud client onto it. This was set to sync everything to its local drive – and yes, it's still sitting in my office, so at this point I've gained nothing.

I then signed up for a SpiderOak account – initially 250GB but they later increased this to 400GB – using their 21 day trial. Their ‘SpiderOak One‘ client was then installed onto the Mac Mini and configured to backup everything in the OwnCloud sync folder.

I've also installed the One client on my workstation and mounted a couple of folders from my Synology NAS onto the Mac Mini for good measure. I have backed up almost 100GB of data so far, so there is plenty of headroom for future expansion.

Going Forward

Ok, some of you may be asking about the cost of all this and yes there is some additional outlay – my Google Apps account was created when they were free and to their credit Google have honoured this long after charging for new accounts. But the cost to the business is minimal – and even as a personal user it’s certainly not prohibitive.

The backup solution I have in place does have its downsides – we had a power cut here a while back and I totally forgot to reboot the Mac Mini, so there were no backups for a while.

But the fact is that I now have control over my data and if this takes a little more work and expense then such is life.

WhatsApp – a Haven for Paedophiles and Terrorists?

Yep – thought that would get your attention!

It’s headlines like this that the UK Government (and the press) are throwing around in order to drum up support for one of the most intrusive and privacy damaging campaigns to date.

The premise is that bad people use these services, which make heavy use of encryption to keep messages private, and by doing so hamper the security services who can no longer access private information in order to monitor them and stop them from doing bad things.

Now I’m not denying that these bad people do use WhatsApp (and similar applications) to enable them to communicate without their messages being intercepted. But I use WhatsApp and so do my wife and kids and we are not bad people. If WhatsApp are expected to put a backdoor into their systems to allow access to the content by so-called ‘authorised agencies’ then what about our privacy?

When I discuss this with people, many will say "well, if you're not doing anything wrong then what's the problem?". However, when I ask them for their email and social media passwords they are somewhat reluctant to hand them over – "but if you're not doing anything wrong then why do you care?", I ask.

The answer is simple: their email and social media feeds are private and none of my business. Just because something is private does not mean it's illegal or even wrong – just private.

We may be discussing our medical history, financial details, travel plans or just what time we will be home for tea but that’s our business, it’s private and nobody else’s business except ours and whoever we’re talking to.

So while I am willing to accept that bad people use these platforms in an effort to hide their activities, I’m pretty sure that they make up a tiny percentage of the 1,000,000,000 (and increasing) WhatsApp users. Do we all have to give up our right to privacy for the sake of these people and will it even make a difference?

The Snoopers Charter

In 2016 the Investigatory Powers Act, or Snoopers Charter as it was dubbed, was passed into Law and with it the privacy of every UK citizen was eroded a little more.

Did you know that under this legislation your Internet Service Provider now has to keep your browsing history for 12 months and provide it on demand to authorised agencies?

If you did then you may have assumed that as long as you are not “doing anything wrong” then you have nothing to worry about as the Police and Security Services are only looking for bad guys.

Well, did you also know that on the list of agencies that can access these records are:

  • HMRC (the tax man)
  • The Department of Work and Pensions
  • The Department of Transport!
  • The Welsh Ambulance Services and National Health Service Trust!!
  • The Food Standards Agency!!!

Now what on earth do the Food Standards Agency need with my internet browsing history? What possible use could it be to them?

If the UK Government were to enforce a backdoor into WhatsApp and other platforms like it – who would be able to access the information and how secure would it be?

But that’s not all. If the Government weakens encryption and demands backdoors be created in otherwise secure systems, who knows who can gain access to the information that was once protected?

If SSL certificates (which put the padlocks on your browsers address bar to indicate that the page is secure) become less secure, how safe are you when you are accessing your online banking or shopping on Amazon?

The truth of the matter is that if the UK Government gets its way it's not really them we have to worry about – it's the hackers. They will have a field day with all this insecure data flying over the wire. All it would take is one poorly implemented backdoor and then all bets are off. If Government agencies cannot even secure their own data, what chance do they have of securing the keys to ours?

A Developers Viewpoint

So, apart from being a UK citizen, what has this got to do with me and why am I ranting about it?

Well, as a developer I know that writing a chat application is not really that hard – in fact I recently read a book which guided the user through cross-platform Xamarin development and the target project was a cross platform chat application. Moreover, the source code is actually on Github so there’s a starting point right there.

Currently that XamChat application stores and sends data in plain text, so it's neither secure nor private. But how difficult would it be to upgrade the app to use encryption? Even though I am not a cryptographer by any stretch of the imagination, I'm guessing not that hard at all.
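To give a flavour of just how little code is involved, here's a minimal sketch using .NET's built-in AES support. The class and method names are mine, not from XamChat, and it deliberately ignores the genuinely hard parts of a real messaging system (no authentication/MAC, and no key exchange or management):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Minimal sketch: symmetric encryption of a chat message using the
// framework's built-in AES implementation. Illustrative only - there is
// no MAC/authentication and no key exchange, which a real secure
// messaging system would need.
static class MessageCrypto
{
    public static byte[] Encrypt(string plainText, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        using (var encryptor = aes.CreateEncryptor(key, iv))
        {
            var bytes = Encoding.UTF8.GetBytes(plainText);
            return encryptor.TransformFinalBlock(bytes, 0, bytes.Length);
        }
    }

    public static string Decrypt(byte[] cipherText, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        using (var decryptor = aes.CreateDecryptor(key, iv))
        {
            var bytes = decryptor.TransformFinalBlock(cipherText, 0, cipherText.Length);
            return Encoding.UTF8.GetString(bytes);
        }
    }
}
```

A couple of dozen lines to encrypt and decrypt a message – the framework does all the heavy lifting.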

And that's the point – if I can do this then any reasonably competent developer could do it too. If the UK Government were to make it unattractive for the bad guys to use secure apps like WhatsApp then there is nothing stopping them from writing their own end-to-end encrypted messaging system using state-of-the-art encryption that cannot be broken with today's technology.

Meanwhile the rest of us will be using insecure systems that leak information and make us vulnerable to malicious hackers keen to exploit these weaknesses, gather personal information and use it to their own ends.

Going Forward

In an effort to prove my point, I'm going to take a run at this. Ultimately I'm going to see just how hard bolting encryption into the XamChat application really is.

I’m not expecting (or intending) to create a WhatsApp killer or even anything that polished – just something usable to prove the point.

First thing to do is to get up to speed on encryption, especially in .NET. There's a four-hour course on Pluralsight, so I can kill two birds with one stone: honouring my commitment to watch one Pluralsight course a month, and creating a command-line application to generate encryption keys and encrypt & decrypt text data in preparation for creating SecureXamChat.

Edit – 15th Feb 2018: Subsequent to me posting this there was a great article in The Guardian which (obviously) makes a much better job of getting the point across and is well worth a read.

So, what will 2018 be the year of?

They say that life is what you make it, so it's time to make some resolutions… yes?

Well, if John Sonmez from Simple Programmer is to be believed – maybe not!

I receive regular email updates from the Simple Programmer website and the one I received on 27th December caused me to stop and think.

Probably based on one of John's blog posts from 2016, the subject of the email was 'Don't make resolutions this New Year, make a commitment'. Now, I initially thought that these amounted to the same thing, but changed my mind after reading the parting shot of the email, which read:

Let me put it this way, when you need to take a taxi to the airport, do you want your taxi driver to resolve to be there at 8:30 AM or do you want him to commit to being there at that time?

The answer is obvious (hopefully) so I’ve decided to make some commitments for 2018:

  • I will watch at least one Pluralsight course a month
    • My technology focus will be .NET Core, Azure, ReactJs
  • I will watch at least one Xamarin University session (attending those required to maintain my certification)
  • I will blog twice a month (not including the Online Tool of the Month posts)
    • To keep me honest I will probably post findings from my Pluralsight courses and Xamarin investigations (proving that I’ve actually honoured the above commitments)
    • Other topics will include Privacy and Encryption which seem to be bad words these days

So that’s what I will commit to this year – maybe I’ll be in a position to commit to more but I’ll review my progress mid-2018 and see how I’m doing.

Ditching AntiVirus

Just like us, as computers get old they tend to slow down. It’s a fact of life pure and simple.

With computers it tends to be due to the hardware not keeping up with the new requirements of today’s applications (just try running later Windows or Office on a Pentium 4 and you’ll see what I mean). We tend to put up with the slow down until something finally gives out, a hard-drive or motherboard for instance, and then we buy a new one.

Well my Windows 10 development workstation was slowing down and while it’s a few years old now, it is still a pretty high spec – i7-3770 with 32GB RAM and SSDs – this thing used to fly.

But recently it was noticeable that it was taking longer to boot, applications like Visual Studio and SQL Management Studio seemed to struggle to load, and surfing the web was a bit of a grind.

I decided to reinstall from the ground up and make sure that I didn't install anything I didn't really need for development (like Steam!). I also decided that I was not going to reinstall my anti-virus!

“Oh My God!” – I hear you shout. Are you insane? Don’t you know how many viruses there are out there and how quickly your system could be compromised?

Well, no, I'm not insane (or at least I don't think so) and yes, I do know that there are a lot of viruses out there, but I'm not doing this without due thought and advice. I also (probably) wouldn't have considered junking my anti-virus unless it had crossed the line in a number of areas.

Why do I think it’s a good idea to run without Anti-Virus?

I listen to a number of Podcasts and one of them is Security Now from the Twit Network. When someone with the knowledge, experience and understanding that Steve Gibson has says that he doesn’t use a third party Antivirus then there must be something in it.

What does Steve use? Well, as he is running Windows 7, Steve is using the built-in Security Essentials (it's Defender in Windows 10). Yep – he's using what comes in the box! And the reason is that third-party anti-virus is incredibly invasive and has to inject code deep into the operating system. This, perversely, increases the attack surface for malicious code. Bugs in products like Symantec/Norton have exposed users to a greater risk of infection while they believed themselves to be safe. I'm not even going to get started on Kaspersky!

In the 10 years or so that I've been using my current anti-virus application, Avast, I've only had about half a dozen warnings about suspect files – and there is no reason to believe that Defender would not have detected the same files, or that they were actually malicious (I get a number of false-positive alerts when I'm compiling code in Visual Studio – and I don't write viruses!). I tend not to surf around in the darker parts of the web and am pretty careful about what I install.

So, I’m not running without Anti-Virus – just without third party Anti-Virus.

What lines did Avast cross to push me down this road?

Well, there are a couple of reasons really:

Recently it has been getting in the way of my work.

Running a WebAPI application in IIS on the workstation and accessing it from the iPhone simulator on the iMac was never a problem. So when I started getting 'Failed Connection' errors I assumed it was a configuration issue or a coding error. After an hour or so of debugging I found that Avast was blocking requests to IIS – which it had never done before. Turning the firewall off confirmed the problem – I just had to remember to do it again when I next accessed the WebAPI from another system.

Other applications failed to start with the Avast firewall engaged (when they had played well together in the past) and efforts to resolve the problem by Repair/Reinstall all failed.

But the big thing that did it for me? The real big step over that line we call privacy was when I logged onto my internet banking and Avast displayed this:

Now call me a member of the tin-helmet brigade if you like but when I access my online banking over a secure connection I find it a bit disconcerting when something says “I can see what you are doing!”.

It was a reminder that, like most (all?) third-party AV products out there, Avast can intercept and analyse traffic being sent over a secure connection through my browser. To do so it has to install a trusted root certificate on my computer, which means it can act as a 'man in the middle' – intercepting my traffic, checking it and then passing it on.

And it's the man-in-the-middle part, combined with the increased attack surface and buggy applications, that worries me – and that's why I'll be sticking with Defender for now.

Why install racing harnesses in your car when the built-in seat belts will keep you just as safe in normal use?


Windows 10 (and 7) Built-In MD5 Checksum Calculator

I recently paved my main development workstation after it started misbehaving (slow start up, some applications not opening consistently etc) and am trying to be careful about what I install on it going forward.

Previously I had all manner of applications, games (including Steam) and utilities installed, and the chances of finding what was causing the problems were pretty remote. There could, of course, have been multiple culprits.

Today I needed to install MySQL Workbench so I headed off to download it and noticed the MD5 checksum beneath the link. Now, I don’t always check these and maybe this is why my workstation ended up in a bit of a mess. But with a view to keeping this system as clean as I can I decided to make a point going forward of checking these checksums when they are available.

The “problem” is which utility do you use to calculate the checksum of the downloaded file?

If you Google for ‘MD5 checker’ you will see a number of utilities and while I have no reason to doubt the integrity of any of these I stopped short of installing any of them.

Obviously each download was accompanied by its MD5 checksum so that I could verify the file, but after freely installing all manner of utilities in the past I was a little bit wary this time around.

Now, MD5 is not a new thing and you would think that Windows 10 would have some form of utility built in that would calculate the hash – and there is. Apparently it is also available in Windows 7 but I no longer have any systems running Win7 so I cannot verify that.

Open a command prompt and enter the following:

CertUtil -hashfile <path to file> MD5

Depending on the size of the file it may take a few seconds to run the calculation but if successful the MD5 hash will be displayed as below.

It is also possible to generate checksums for other hash algorithms by replacing the MD5 parameter used above with any of the following (note that if you don’t specify a value then SHA1 is used by default):

  • MD2
  • MD4
  • MD5
  • SHA1
  • SHA256
  • SHA384
  • SHA512

So, if all you need is to determine the checksum of a downloaded file then there really isn’t any reason to install yet another utility to do so.
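As an aside, the same check is easy to script if you'd rather automate the comparison than eyeball two hex strings – for example with .NET's built-in MD5 class (a sketch: the command-line arguments and class name here are my own invention):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

// Computes the MD5 hash of a file and compares it against the published
// checksum. The file path and expected hash are supplied on the command
// line, e.g.: Md5Check.exe mysql-workbench.msi 0cc175b9c0f1b6a8...
class Md5Check
{
    static void Main(string[] args)
    {
        var path = args[0];
        var expected = args[1];

        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
        {
            // Hash the file and format the result as lower-case hex.
            var hex = BitConverter.ToString(md5.ComputeHash(stream))
                                  .Replace("-", "")
                                  .ToLowerInvariant();
            Console.WriteLine(hex == expected.ToLowerInvariant()
                ? "Checksum OK"
                : "MISMATCH: computed " + hex);
        }
    }
}
```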

Stackify Prefix – first thoughts

Listening to one of my favourite podcasts (.NET Rocks) I heard a plug for the Stackify Prefix tool, which claims to help the developer fix problems before anyone else sees them – a bold claim. Well, as I am currently working on a greenfield development project I decided to give it a whirl – it's free after all, so why not.

Now I was not expecting to find too much wrong with the application and thankfully I was right – but I was getting errors.

The highlighted call is to a WebAPI method from an AngularJS controller (a JavaScript file on the client) and as you can see from the right hand pane it does succeed. In fact the data is returned as I’d expect and the application works without any issue. So why is Prefix flagging this?

Well, looking at the stack trace a little more carefully I see that the exception is being raised by the XmlMediaTypeFormatter when it is creating its default serialiser. But the WebAPI is returning JSON, so why is it spinning up an XML serialiser?

Well, my WebAPI endpoint took this form:


The problem is in the return statement, where I’m returning the OK status with the required content – an anonymous object that I’ve just put together on the fly. The WebAPI is configured to accept the ‘application/json’ header and to use an appropriate JSON formatter – which it does.
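The endpoint was shaped something like the sketch below – controller, repository and property names here are placeholders, not the actual project code:

```csharp
public class StationsController : ApiController
{
    [HttpGet]
    public IHttpActionResult GetNearby(double lat, double lng)
    {
        var stations = _repository.FindNearby(lat, lng);

        // An anonymous object assembled on the fly - this is what the
        // XmlMediaTypeFormatter choked on while building its serialiser.
        return Ok(new { Count = stations.Count, Items = stations });
    }
}
```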

The problem is that I still have the default XML formatter in the formatter collection, and the framework is also trying to use it to serialise my anonymous object – and failing (silently).

So all I need to do is remove the XML formatter during the WebAPI registration – within WebApiConfig.cs in the App_Start folder.
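The change amounts to a single line in the registration method – a sketch of the relevant part of WebApiConfig.cs (your file will contain other route/configuration code too):

```csharp
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        // Remove the default XML formatter so only the JSON formatter
        // is ever asked to serialise responses.
        config.Formatters.Remove(config.Formatters.XmlFormatter);
    }
}
```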


Now, this was fairly trivial, but a bug is a bug and, as we know, exceptions are expensive. They take time to raise while they pull the required information together and work their way through to the calling code – which in this instance appeared to simply discard them. A small performance hit, but at scale it could have become a bigger problem – and probably a harder one to find.

Prefix highlighted it straight away and the issue is now fixed. It never made it to production – in fact it never made it off my desk. It was Pre-Fixed!

Another year over, what have I learned?

It’s been an interesting year all things considered: I’ve had a couple of good contracts as well as some time ‘between contracts’ (see my last post for more details on that). When I was not actually engaged on a contract I was able to pick up some freelance work, invest time in personal projects and watch numerous Pluralsight courses to keep my skills current. All in all it was a pretty good year.

In January I declared 2014 ‘The Year of the Mobile’ and invested a lot of time (and money) into mobile development using the Xamarin platform. The FillLPG app is now in Beta release and I’m getting some good feedback from the small team of testers. I also have a mini-project which should hopefully see my first iPhone application in the App Store.

In 2015 I am hoping to continue with my mobile development and to get FillLPG onto the iPhone/iPad. I do, however, realise that not everyone is looking for mobile developers, so I need to keep my web development skills sharp as well.

I’ve also had an idea for an online service which will require the development of a website – so this should enable me to demonstrate MVC/WebAPI skills. At the same time the site may generate an amount of income, from user subscriptions and maybe Google Ads (shudders).

Now, I’m not one for making New Year Resolutions so I’ll call this my 2015 Personal Development Plan.

  • Watch at least 1 Pluralsight course a month, 2 if time allows.
  • Contribute to at least 2 Open Source projects (probably via GitHub)
  • Release the Xamarin implementation of FillLPG for Android
  • Release first iOS application – basic application
  • Begin development of FillLPG for iOS
  • Design, develop and release subscription based website
  • Blog more!

So, once I’ve shaken off this cold – and the hangover from seeing in the New Year – it looks like I will be pretty busy.

The Ups and Downs of being a Contractor

As with most careers, being a contractor has its ups and downs, and I have certainly had my share of both over the last couple of months.

When my last contract came to an end I entered the limbo land that is ‘between contracts’. Due to the nature of contracting many clients require someone who is immediately available, or at least within a week or so. Look further than a couple of weeks ahead and you are probably going to be passed over for someone who can get their feet under the desk much quicker.

I had not been looking for a follow-on contract because I already had a holiday booked and, knowing that I would have minimal phone & email access for a couple of weeks, I had decided to leave that until I returned to the UK. I then gave myself a couple of weeks ‘off’ during which time I would continue work on the Xamarin implementation of the FillLPG for Android application and hit Pluralsight to keep my axe sharp and my skills current.

This worked out quite well and I secured a contract with a previous client pretty quickly – but this was subsequently withdrawn as their client put the brakes on the project. Disappointing but that’s how it goes sometimes. At this point I was approaching the end of the two weeks I’d given myself to find a new contract and for the first time in three and a half years I was not actually billing any time at the end of a month. Still – something else would come along….

And it did, in the form of a potential 12 month contract which required SC clearance (which I have). The location was a comfortable commute and the day rate was pretty good too. At the start of October 2014 I was offered the contract (subject to references) which I duly accepted. It turned out that the client was using an external resourcing company and they required 3 years of references – which as a contractor amounts to quite a few people to contact. Fortunately my recruitment agent dealt with most of this for me and after completing numerous forms I was finally given a start date.

But the story does not end there – more’s the pity. While driving to the client site I received a phone call (yes, I have hands-free in the car) advising me that the external resourcing company had not processed my security clearance and I would not be allowed on site. This was disappointing, but it turned out that while my SC clearance was valid it had been deactivated because I had not been in a security-cleared role for more than 12 months – who knew.

Worse was to come, as the process of reactivating my clearance relied on a clunky website that frequently displayed the good old ‘Yellow Screen of Death’ but eventually served up a PDF form to be completed, followed by a 10-day validation process. This would be more billable time passing me by. I was able to pick up some freelance work, make good progress with my Xamarin development and watch a number of Pluralsight courses, so it was not as if I was sitting around watching daytime TV.

The validation process ended up taking almost 3 weeks and I was eventually given another start date and my recruitment agent was just waiting for the purchase order so that he could generate the contract.

The start date arrived but I still didn’t have a contract to sign. Finally, at 2:00pm I was advised that the client had reorganised resources and didn’t need my services anymore (apparently I was not the only contractor who was cut loose).

So between the incompetence of the external resourcing company and the unprofessional behaviour of the end client, who had expressed their desire to get me on site as soon as possible, I had essentially ‘lost’ over £6500 in billable time. To make matters worse, I had been contacted about numerous other contracts while I was waiting, but due to the nature of this one I had decided to wait it out.

But it’s not been all bad. During the last month I have been able to migrate a local church website from an aging version of Joomla to a slick Squarespace site and secure a maintenance contract to administer and maintain it. I’ve also migrated their creaking database (created by a parishioner) to an MVC4 web application and added new, desperately needed features that the previous implementation could not support. The FillLPG for Android application has been written using Xamarin, has gone into Beta testing and should be released before Christmas. I’ve also watched a number of Pluralsight courses to help keep my skills current and improve my chances of securing interesting contracts in the future.

They say ‘You Can’t Keep a Good Man Down’ and as I type I have an interview for a Bristol based contract later today and have a number of other irons in the fire so things are looking up.

Fixing Xamarin COMPILETODALVIK build error in VS2013

After clearing the decks of some other freelancing projects I’ve had on the go recently I’ve been able to get back to looking at my FillLPG application that I’ve decided to rewrite from scratch using Xamarin.

It was time to start working on the UI and, with the new Xamarin.Forms library promising cross platform UI development, I thought I’d create a quick project to compare the old approach with the new (and maybe write a blog post or two about it). However, I ran into problems very quickly and thought I’d pass on my experience.

I decided to create a Solution with Android, iOS and Portable Class library projects. The application would open with a Map (Google Maps on Android and Apple Maps on iOS) and would have a menu to open a standard login page. Now, in order to display a map on Android you need to reference the Google Play Services library and make some changes to the AndroidManifest.xml file to configure the required permissions etc – and this is where the problems started.

I’d created the Solution, added the Android and PCL projects and used the Component Store to include the Google Play Services component. When I attempted to build the solution I was greeted with a rather unhelpful error message.

At this point I’ve not written a line of code – but the solution won’t build!

The issue appeared to be caused by the presence of the Google Play Services component, and it’s worth bearing in mind that this comes in the form of a Java .jar file (why wouldn’t it – that’s the native language of Android after all). As part of the compilation a Java process is spun up and the .jar is loaded and (presumably) compiled. The Java process has an amount of memory allocated to it, but the default heap size (whatever that happens to be) appears to be too small and a java.lang.OutOfMemoryError is thrown.

After quite a lot of Googling I found two solutions that worked for me.

The first approach involved simply adding a new Environment Variable to increase the Java heap size on a machine-wide basis, i.e. for all Java processes that run on your system.

Variable: _JAVA_OPTIONS (the leading _ is deliberate)

Value: -Xmx1g

This may be what you want but if, like me, you split your development between multiple systems or are working in a team of developers then this may not be the optimal solution.

The second solution requires opening up the .csproj file for the Android project and making a small adjustment. This has the benefit of being part of the project, so if it is checked into source control it will be set for all development systems it is checked out on.

Open up the .csproj file in an XML editor (if you’re doing this in Visual Studio you’ll need to unload the project first) and locate the JavaMaximumHeapSize element. Then update it as follows:
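The relevant fragment looks something like this – a sketch, not the full project file. The element typically lives in each configuration’s PropertyGroup, and you may need to add it if it is empty or absent:

```xml
<!-- Inside the Android project's .csproj (MSBuild format) -->
<PropertyGroup Condition=" '$(Configuration)' == 'Debug' ">
  <!-- Allow the Java/Dalvik compilation step a 1GB heap -->
  <JavaMaximumHeapSize>1G</JavaMaximumHeapSize>
</PropertyGroup>
```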


Both of these options increase the heap size to 1GB which should be more than enough.

With that fixed I can get back to the Xamarin.Forms sample project and get back to work.