Unit Testing with the Xamarin.Forms MessagingCenter

While we all know that Test Driven Development (TDD) is a good idea, in practice it’s not always viable. It could be a time constraint, a resource issue or the project just doesn’t warrant it.

While TDD may only sometimes be an option, unit tests themselves should really be considered a must. They will save you a lot of time in the long run and, while they may not prevent you from going grey (ask me how I know), they will reduce your stress levels when bugs rear their ugly heads.

So, whether you write your tests before you write the code or vice-versa, if you are developing production code you really should write tests to cover it.

Now, one of the requirements for unit testing is the ability to mock out a component's dependencies so that you are only testing the component itself.

Normally you would use the Dependency Injection pattern to help develop loosely coupled systems but Xamarin.Forms has a few components which can be a fly in the ointment – one of these is the MessagingCenter.

MessagingCenter allows you to send messages from one component to another using the publish-subscribe pattern to do so in a loosely coupled manner.

It’s built into Xamarin.Forms and is very easy to use.

On one side of the process you will send (publish) your message;

MessagingCenter.Send(this, "ForceLogout", message);

On the other side you will subscribe to the message;

MessagingCenter.Subscribe<WelcomePage, string>(
    this, "ForceLogout", async (sender, arg) =>
    {
        await ForceLogout(arg);
    });

The problem is that MessagingCenter cannot be mocked and injected into your class – so how do you unit test it?

Well, the answer is embarrassingly straightforward – you may already be using a similar approach to test for events being raised (I know I am).

Basically, all we need to do is to subscribe to the message in our test, setting a flag when the message is received;

// Arrange
var messageSent = false;

MessagingCenter.Subscribe<WelcomePage, string>(
    this, "ForceLogout", (sender, arg) =>
    {
        messageSent = true;
    });

// Act
.
.
// Assert
Assert.IsTrue(messageSent);

It really is as simple as that. While we can’t mock out the MessagingCenter, we can simply subscribe to the message we are interested in – just as we would in the application code itself.
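
One refinement worth making is to unsubscribe once the test is done so that the handler doesn’t hang around and interfere with other tests. A minimal sketch of a complete test, assuming NUnit and a hypothetical LoginViewModel whose Logout method sends the message, might look like this;

[Test]
public void Logout_SendsForceLogoutMessage()
{
    // Arrange
    var messageSent = false;
    MessagingCenter.Subscribe<LoginViewModel, string>(
        this, "ForceLogout", (sender, arg) => messageSent = true);

    try
    {
        // Act - Logout() is expected to call MessagingCenter.Send(this, "ForceLogout", ...)
        var viewModel = new LoginViewModel();
        viewModel.Logout();

        // Assert
        Assert.IsTrue(messageSent);
    }
    finally
    {
        // Tidy up so the subscription doesn't leak into other tests
        MessagingCenter.Unsubscribe<LoginViewModel, string>(this, "ForceLogout");
    }
}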

IR35 2020 – Thoughts from the Coal Face

I’ve been contracting for over eight years now and in that time I’ve been careful to ensure that, to the best of my abilities, I operate in a manner that places me outside of the IR35 legislation. That is, to provide a service to my clients and not to be seen as an employee.

Currently it is my responsibility to determine the employment status of a role with regard to IR35. I do this by having contracts independently reviewed to ensure that they comply with the legislation and by taking steps to ensure that the actual working conditions reflect service provision rather than employment.

If I get it wrong then it is down to me to justify my determination, in court if need be, and pay any unpaid taxes should I be unable to do so.

However, in April 2020 that determination could largely be taken out of my hands and placed in those of the fee payer, e.g. the client if I’ve been engaged directly, or otherwise the recruitment agency that has facilitated the engagement.

HMRC have decided to make this change, stating their belief that most contractors are incorrectly self-declaring themselves as being outside of IR35 and avoiding paying the correct level of tax.

They have not been able to substantiate these claims, despite repeated calls to do so – but that’s not the reason for this post.

There are a number of problems with this seemingly subtle shift in responsibilities but, as I see it, the main one is that a great many end clients have no concept of IR35 or how to interpret it. Until now they’ve really not had to worry about it – the contractor turned up, did the work and left again. The client paid the contractor’s Limited Company or the recruitment agency an agreed fee and life was simple.

From April 2020, fee payers that meet the size and turnover criteria will need to determine whether IR35 applies and, if so, deduct tax at source prior to paying the contractor’s company. However, the IR35 legislation is complex and confusing – with HMRC losing more cases than it wins it is clear that they don’t fully understand it either.

If, in the eyes of HMRC, the fee payer gets the IR35 determination wrong and has deemed a contract to be outside of the legislation instead of inside, then they will be responsible for paying the overdue tax – which may not be trivial. So what do we think clients are going to do? Are they going to invest the time in understanding IR35 so that they can defend an ‘outside’ determination, or just take the ‘easy’ route?

Well, in spite of HMRC saying it wouldn’t happen, many major clients, particularly in the banking sector, have already stated that they will either not be engaging with contractors after April 2020 or will be considering ALL contract positions to be within IR35. That way they will stay on the right side of HMRC and everything is simple again.

The problem is that while HMRC may be happy with this, after all they will be receiving additional revenue from these blanket IR35 determinations, many contractors are not.

Without getting into the nitty-gritty of IR35 it is not easy to convey the problems that this will create for contractors, but I think that there is an opportunity to educate clients here – to make them aware of the difference between contractors, who provide a business-to-business service, and their permanent employees.

Provision of Services – A ‘Tradesperson’

First of all imagine that you run a company and want to freshen the office up a little bit.

You contact a decorating company, explain your requirements and agree on a price and timescale for the work. The company duly sends a suitably qualified decorator to your site, you tell them what you want done and they get on and do it.

When they are done they leave and if you are happy with the work you pay the invoice.

So, what have you done here?

You have engaged with a company to provide you with a service. They have sent one (or more) of their employees to carry out the work and then invoiced you for that work.

The key here is that the company has provided you with a service – you have not directly employed the person who arrived to do the work. You don’t need to pay their taxes or into their pension. If they are unavailable the company can send someone else, suitably qualified of course, to continue with the work.

If the work was scheduled to take a week and it’s finished in three days then you are not obliged to provide additional work to fill the time – and neither is the decorator or company obliged to accept it if you do. The engagement was for a set piece of work and once that is done the engagement is over.

Now, I think that’s pretty straightforward and nobody should have a problem with that. But the thing is that the provision of service is somewhat clear cut; you have engaged with a company who provide services that your company is not skilled in.

Provision of Services – Contractors

Imagine now that you run a company which has a Software Development aspect to it and you need some additional resource to fulfil a project on time.

You reach out, directly or via an agency, to a company providing contracted software development services, explain your requirements and agree on a price and timescale for the work.

The contractor arrives on site, is told what needs to be done and gets on and does it.

Once the work is completed there is no expectation that further work needs to be offered or accepted and the engagement is over.

During the engagement, should the contractor fall ill there is no expectation that their services will be invoiced unless they are able to send a suitable replacement (at the contracting company’s cost) to continue with the work.

Does this sound familiar? It should do – it’s pretty much the same as the previous example with the decorator.

The difference is that the service being provided is something that your company already does and the contractor is probably expected to work within an existing team.

This is where the lines start to blur and the grey areas start to appear.

Why does this make a difference?

With a contractor more embedded in the business and its team it is easy for the client to see them as an employee and attempt to treat them as such.

I’ve had instances where a client will attempt to dictate my working hours (despite my working a standard 7.5-hour day) and my dress code (I was wearing a polo shirt with my company branding).

Would the client feel able to do this with their decorator? I think not.

If they arrive at 9 and leave at 3 but get the work done to the required standard and complete the job on time then surely that’s what matters. If the quality isn’t there or the job isn’t finished then that’s another matter altogether.

The client wouldn’t expect to be able to dictate the dress code for these external workers, or when they take their lunch breaks so why expect to do it with a contractor?

I may well sound like a bit of a diva here – I mean, who do I think I am? But this is the reality of the matter. The client has engaged my company, directly or indirectly, to provide a service – I am not their employee.

Now, in order to integrate into a team a contractor will normally work similar hours but their hours should only really be dictated by their ability to access the building (assuming they are working on site).

I will obviously make efforts to integrate with the team and working conditions but this is in order to maximise the value of the services I provide – I’m kinda old school like that.

The Take Away

I believe that unless clients can understand this concept then they will have little chance of being able to make an accurate determination of employment status.

If they see contractors as employees, albeit temporary ones, then they will simply not grasp the importance of determining an accurate employment status with regard to IR35.

It may well be that a particular role is correctly deemed to be within IR35 and if this is the case then that’s fine.

However, clients should avoid taking the stance that all contracts are within IR35 – doing so will not only damage the contracting industry in the UK but the client will also severely hamper their ability to engage with an experienced, temporary and flexible workforce to deliver their projects and products.

AppSettings in Xamarin.Forms

If you have used ASP.NET in recent years you will probably be familiar with the appSettings.json file and its associated, build-specific transformations, e.g. appSettings.development.json and appSettings.release.json.

Essentially these allow developers to define multiple configuration settings which will be swapped out based on the build configuration in play. Common settings are stored in the main appSettings.json file while, for instance, API Endpoint Urls for development and production deployments are stored in the development and release files.

At compile-time any values specified in the deployment version of the file overwrite those in the common version.

Simple – it works well and we use it all the time without thinking about it. But what about Xamarin.Forms? It doesn’t have such a mechanism out of the box, so how do we achieve this and prevent accidentally publishing an app to the App/Play Stores that points to your development/staging servers?

The Problem

Previously I have tried a number of approaches and while they mainly worked there were always shortcomings which meant that I couldn’t really rely on them.

One used a pre-deployment step in Visual Studio to copy the appropriate file, based on the build configuration, from one location to another where it would be picked up by the applications startup code. This worked fine when running on Windows but not when building on the Mac (using Visual Studio for Mac) because it uses the cp command and not copy.

Yes, I could have created an alias from copy to cp but what about when I want to configure a Continuous Integration build on Azure DevOps?

Another approach had me creating Operating System specific script files, .bat on Windows and .sh on the Mac, and again using a pre-build task to run the appropriate script (executing the extension-less command would run the appropriate version on each platform). But passing arguments for the build configuration was clunky and again Azure DevOps pipelines didn’t really seem to want to play ball – maybe Microsoft is a bit cautious about letting anyone execute scripts on their Azure servers 😉

Well, after deploying and testing the wrong version of an app onto my phone and scratching my head for twenty minutes I decided enough was enough and using the first approach above as a base I have come up with a simple mechanism which does the job.

The (my) Solution

The solution I’m running with requires a configuration file to be created for each build configuration, e.g. Debug and Release (by default), which are configured as ‘Embedded Resources’.

A static property is added to the App class which implements the Singleton pattern to load and return an instance of a class that represents the contents of the configuration file – I’m using Newtonsoft.Json to deserialise the file and hydrate an instance of the class.

Add some Conditional Compilation Symbols to the mix and we are just about there.

If you would rather look at the code in Visual Studio then you can download it here.

Enough talk – let’s get to it 😉

To the Code

I’ve created a basic Xamarin.Forms project using Visual Studio 2019 Community Edition, stripped out all the default stuff and added in a single Content Page, ViewModel, Class and two Configuration Files:

The Solution (in all its glory)

The appsettings.debug.json file has the following content:

The appsettings.debug.json file
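
In essence it’s nothing more than a tiny JSON document along these lines – the actual message text is just an example;

{
  "WelcomeText": "Hello from the Debug configuration!"
}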

You probably don’t need any clues to the content of the other version 😉

The only thing to remember here is to ensure that you set the ‘Build Action’ for these files as ‘Embedded Resource’:

Setting Build Action on .json files

The AppSettings.cs class is a simple POCO with a single property which corresponds to the json files:

The AppSettings.cs POCO class file (yes – I could remove the redundant ‘using’ statements!)
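
With those redundant usings removed it boils down to something like this (I’ve assumed the class sits in the root AppSettingsPoc namespace);

namespace AppSettingsPoc
{
    public class AppSettings
    {
        public string WelcomeText { get; set; }
    }
}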

Now, when a Xamarin.Forms app starts up the App class is, for all intents and purposes, the entry point and is generally available to the Content Pages throughout the app. So this is a good place to expose our AppSettings:

App.xaml.cs
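
Stripped back to the relevant parts, it looks something along these lines – the resource file names follow my AppSettingsPoc.Configuration folder structure (more on that below) so yours will differ;

using System.IO;
using System.Reflection;
using Newtonsoft.Json;
using Xamarin.Forms;

namespace AppSettingsPoc
{
    public partial class App : Application
    {
        private static AppSettings _appSettings;

        // Lazily loads the settings the first time they are requested
        public static AppSettings AppSettings
        {
            get
            {
                if (_appSettings == null)
                {
                    LoadAppSettings();
                }

                return _appSettings;
            }
        }

        public App()
        {
            InitializeComponent();
            MainPage = new HomePage();
        }

        private static void LoadAppSettings()
        {
#if RELEASE
            var fileName = "AppSettingsPoc.Configuration.appsettings.release.json";
#else
            var fileName = "AppSettingsPoc.Configuration.appsettings.debug.json";
#endif
            var assembly = typeof(App).GetTypeInfo().Assembly;

            // The .json files are Embedded Resources, so they are read from the assembly itself
            using (var stream = assembly.GetManifestResourceStream(fileName))
            using (var reader = new StreamReader(stream))
            {
                var json = reader.ReadToEnd();
                _appSettings = JsonConvert.DeserializeObject<AppSettings>(json);
            }
        }
    }
}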

Notice how the property getter will instantiate the AppSettings instance if it is not already in place.

Also notice the use of the conditional compilation statements (#if .. #else .. #endif) in the LoadAppSettings method. This is where the ‘magic’ happens. I know that some people may shy away from this approach but this is the way I’ve gone for now.

Basically the LoadAppSettings method will read in the specified file depending on which build configuration is in play at the time. The file is deserialised into an instance of AppSettings and the local variable updated.

As the .json files are Embedded Resources we can address them using their fully qualified names, noting that the name is made up of the overall namespace (AppSettingsPoc), the folder containing the files (Configuration) and the actual filenames. Yours will be different for sure – just remember how it’s put together.

For the conditional compilation to work we need to specify the appropriate symbols (the ‘RELEASE’ text in the above code).

To do this, Right-Click on the shared project (the one with the App.xaml file) and select ‘Properties’:

Project properties for AppSettingsPoc (shared project)

Select the ‘Build’ tab from the left hand side and set the configuration to ‘Release’ if it’s not already.

In the ‘Conditional Compilation Symbols’ field enter ‘RELEASE’ (or whatever you want to call it – just match it up with what you use in the App.xaml.cs file). If there are other values already present then tag this new one to the end, delimiting it with a semi-colon.

So, we have the configuration files, we are loading them and making them available to the application. Now we just need to consume the data and for this I’m using a ViewModel, HomeViewModel.cs, which will be the Binding Context to our Page.

The ViewModel class is another simple POCO with a single property which reads the WelcomeText from the AppSettings instance via the App class:

The HomeViewModel.cs POCO class – with redundant ‘using’ statements removed 😉
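
In essence it’s just this (I’ve assumed the ViewModel lives in a ViewModels folder/namespace);

namespace AppSettingsPoc.ViewModels
{
    public class HomeViewModel
    {
        // Reads the value from the settings exposed by the App class
        public string WelcomeText => App.AppSettings.WelcomeText;
    }
}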

Finally, we just need to bind this property to a UI element on our Page:

HomePage.xaml
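
A cut-down version of the page looks something like this – the exact namespaces will depend on your project layout;

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:viewModels="clr-namespace:AppSettingsPoc.ViewModels"
             x:Class="AppSettingsPoc.HomePage">

    <ContentPage.BindingContext>
        <viewModels:HomeViewModel />
    </ContentPage.BindingContext>

    <StackLayout VerticalOptions="Center">
        <Label Text="{Binding WelcomeText}"
               HorizontalOptions="Center" />
    </StackLayout>

</ContentPage>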

Notice that in the above markup I’m setting the BindingContext to an instance of the HomeViewModel. As with many things in Xamarin.Forms, there are numerous ways of achieving this.

The label has a Text property which I’ve bound to the WelcomeText property (which will be on the ViewModel remember) and that’s about it.

If I run the app in Debug mode I will see the message as read from the appsettings.debug.json file. Run it in Release mode and it will be the message from the appsettings.release.json file:

In Summary

The solution presented above requires a small amount of setup and then it’s pretty much fire-and-forget. If you need a new setting then update your AppSettings.cs class, add the values to your .json files and you are good to go.

Need a new build configuration, say Staging? No problem. Just create the new .json file (remembering to set its build action as ‘Embedded Resource’), add a new ‘STAGING’ symbol to the project properties, update the LoadAppSettings method to check for it and you are done. Simple as that really.
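
The conditional in LoadAppSettings then ends up looking something like this (file names as per the earlier sketch);

#if RELEASE
            var fileName = "AppSettingsPoc.Configuration.appsettings.release.json";
#elif STAGING
            var fileName = "AppSettingsPoc.Configuration.appsettings.staging.json";
#else
            var fileName = "AppSettingsPoc.Configuration.appsettings.debug.json";
#endif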

Now, some may say that Conditional Compilation is bad and that using Reflection is the work of the Devil. But frankly – I’m quite pragmatic about these things and if it works and doesn’t have a dreadful code smell then I’m not going to lose too much sleep over that.

Roll my own DDNS? Why Not?!

Update: Well, there’s certainly more to Dynamic DNS than meets the eye – who knew.

Investigations led to the decision that I should put my hand in my pocket and spend my time better elsewhere.

Not that I opted for the overpriced offering from Oracle though – I signed up with NoIp instead.

Like many devs I have been known to host websites and services behind my home broadband router and therefore needed a Dynamic DNS resolver service of some description. But in my recent moves to limit my reliance on third-party services – including those provided by Google – I wanted to see what would be involved in creating my own service.

Why would I want to roll my own?

Over the last few years I’ve moved my hosted websites outside of my home network and onto services offered by Digital Ocean so I was only really using my DDNS provider for a single resource – my Synology NAS.

Now, in the past I’ve used DynDNS (an Oracle product) and while I’ve had no issues with the service it’s not what you could call cheap – currently starting at $55 a year. When a previous renewal came through, and after reviewing what I was using it for, I decided to let it expire and do without the access to the NAS from outside my network.

Recently though I’ve been using OwnCloud (a Dropbox-like system) hosted on Digital Ocean to replace some of the functions I used to use the Synology for. I don’t really want to use Dropbox, Google Drive or similar offerings as I want to keep my data under my control. With the desktop application running I was able to access and edit files from my multiple systems while only actually having a single source of those files, i.e. OwnCloud.

The only downside I’ve encountered was with the Mobile app – which I wanted to use to backup the photos taken on my phone in the same way that the Google Photos app does (because I want to store my own data remember!). Well, the app just refuses to do this without manual intervention (and even then it’s buggy) which is kind of defeating the object.

Then, while listening to the Security Now podcast I heard Steve Gibson talking about his quest to find a suitable file syncing setup. He discussed the pros and cons of different systems and finally opted for Dropbox – which I don’t want to use. Then – out of the blue – the podcast host, Leo Laporte, mentioned ‘Synology Drive’ and his initial description seemed to tick all my boxes … plus I already have a Synology NAS.

I’ve had a look at the capabilities of the system and it seems to do what I want it to do but of course I now have the problem of accessing this from outside my network – I need a Dynamic DNS resolver service of some description. Sure, I could just put my hand in my pocket and pay for one but where is the fun in that?

OK, what’s the plan?

Right, what does a Dynamic DNS resolver do (or what do I think it does anyway)?

Well, in my experience when I was configuring the DNS for my domain I simply pointed the resource, e.g. a website, at the DynDNS service and it would redirect requests to my home broadband IP address. Simple huh?

But how does the DynDNS service know my home IP address and what happens when it changes?

Well, my router has the capability to integrate with these services and notify them when the IP address changes.

So I need to deploy something which will redirect requests for my Synology based services to my home IP address and a mechanism to keep the IP address up to date. How hard can it be?

I plan to create and deploy a simple ASP.NET Core MVC application with a GET and a POST endpoint. The GET will accept incoming requests and perform the redirection while the POST will be used to keep the IP address updated.
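
As a rough first-cut sketch of the sort of thing I have in mind (controller and route names are placeholders and there’s no authentication on the POST yet);

using Microsoft.AspNetCore.Mvc;

[Route("[controller]")]
public class DdnsController : Controller
{
    // For the proof of concept this can start out as a hard-coded value
    private static string _currentIpAddress = "0.0.0.0";

    // GET: redirect any incoming request to the current home IP address
    [HttpGet]
    public IActionResult Get()
    {
        return Redirect($"http://{_currentIpAddress}");
    }

    // POST: called on a schedule from the home network to keep the IP address up to date.
    // The address is resolved from the incoming request rather than being passed in.
    [HttpPost]
    public IActionResult Post()
    {
        _currentIpAddress = HttpContext.Connection.RemoteIpAddress?.ToString();
        return Ok(_currentIpAddress);
    }
}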

Proof of Concept

So, step one is a proof of concept – a super simple MVC application, deployed to my .NET Core enabled Digital Ocean Droplet, which will perform basic redirection to a hard-coded IP address.

I’m not expecting too many problems with this but the one thing that springs to mind is how will it handle HTTPS requests?

If all goes well with step one, the next step will be to set the IP address and keep it up to date. For this I plan to use the Synology’s ability to run Python scripts on a schedule (say every 30 minutes) which will call the POST endpoint. The plan being for the endpoint to be able to resolve the source IP address from the request, rather than the Synology having to resolve it first.

Clearly this will not be as polished a service as the one offered by Oracle but it will use existing resources, so there is no additional cost to be incurred.

Watch this space 🙂

Sorry Facebook – that’s enough

Sometime ago I deleted my personal Facebook profile and found that I really didn’t miss it – at all. Quite liberating in fact – you should try it. My departure pre-dated the Cambridge Analytica scandal and was more to do with the garbage that ended up in my feed than anything else.

That said, recent reports of plain text passwords and other dubious operating tactics of Facebook would have seen me making the same decision to get off the platform.

But – there was a problem! I had a Facebook business page and it needed an active user account to be associated with it.

I duly created a new profile and locked down the permissions/privacy settings as hard as I could and then associated the page with that account – leaving me free to delete my personal one.

The business page was little more than a presence and while I posted links to it via Buffer I didn’t really use it to engage with any potential clients. The engagement on the links I did post was pretty low (double figures or lower) and it was really an aide-mémoire for me if nothing else – a reading list if you like.

Recently the news has been peppered with stories of Facebook and their handling/selling of personal information as well as some shocking security issues including the storage of plain text passwords – as a developer I don’t know why this would ever be seen as a good idea.

So the question I asked myself was;

“Do I want to be associated with a company that operates in this manner?”

It didn’t take long to come to the conclusion that I didn’t.

It’s not that I’m a high flying operation or the next up and coming big deal – I’m absolutely certain that Facebook won’t notice my departure on any level whatsoever.

It’s not that any of my clients (past, existing or future) would make any judgement on me for being on Facebook (it’s not that I’m advocating animal research after all).

It’s just that I don’t want to be reliant on an unreliable platform run by people I frankly don’t trust. Just because I don’t pay for the service doesn’t mean that they can expect to do as they please – not to me anyway.

So that’s that – the social link in the sidebar will link to this post and all my Buffered posts also went to Twitter so I’ve lost nothing at all. I will be looking at creating a ‘Link Directory’ page here but haven’t decided on a plugin yet.

Updating an end of life application

A while ago I posted that the FillLPG for Android application was, in a word, Dead! But in a few days’ time users will notice a new version, 2.0.28.5, hitting the Play Store as well as in-app notifications to update – so what gives?

Have I changed my mind about withdrawing support for this app? No, I haven’t. Essentially my hand has been forced by Google’s recent decision to deprecate some of the functionality I was using to fetch nearby places as part of the ‘Add New Station’ wizard, as well as by the requirement for 64-bit support – the latter being little more than a checkbox and nothing that most users will ever notice.

Removal of the Places Picker

Prior to the update, when adding a new station a user could specify the location in one of two ways;

  • Select from a list of locations provided by the Google Places API and flagged as ‘Gas Stations’
  • Open a ‘Place Picker’ in the form of a map and drop a pin on the desired location

It is the second option which is now going away – and there is nothing I can do about it. Google are pulling it and that’s that.

The Place Picker was added as I noticed that a nearby ‘Flogas’ centre, where you could buy cylinders of LPG and also refill your vehicle’s tank, was not on the list returned by Google Places. Using the Picker it was possible to zoom and pan in order to locate the desired location. It was also useful if you weren’t actually present at the station you wanted to add, i.e. it wasn’t nearby!

So where does that leave users (apart from heading to the Play Store to complain and leave a 1 star review!) – well, if the desired location is not displayed in the list then they will need to head over to the website and add the station there. Not as convenient as using the app maybe but not rocket science either.

But why bother if the app is ‘Dead’?

Well, after viewing some in-app analytics I found that on average around 25 new stations were added, via the app, each month so there is a chance that users would bump up against this problem when trying to open the Place Picker – chances are that the app could simply crash out.

If the ‘Add New Station’ feature wasn’t being used I’d probably just have removed the menu option and have done with it.

But enough users were using it so I set aside a couple of hours to investigate and remove the Place Picker and its associated code while leaving the ‘Nearby List’ in place – even if it will only contain locations flagged by Google Places as being ‘Gas Stations’.

In Summary

In this instance the required update was not too onerous – just a couple of hours of work really; however, this may not always be the case.

Android is always evolving and Google could well make changes to other areas of functionality that could adversely affect the application and require a lot more development effort, e.g. forcing a migration to the new version of the Maps API.

In that instance it would probably be the end of the road for the app and it would either have to hobble along or be pulled from the Play Store altogether.

For now though, it’s business as usual 🙂

Getting to grips with OzCode

When I was at NDC London in January I watched a demonstration of the OzCode extension for Visual Studio. Not only was it well presented but it highlighted some of the pinch points we all have to tolerate while debugging.

In return for a scan of my conference pass, i.e. my contact details, I received a whopping 35% discount off a licence and, without even completing the 30-day trial, I was so impressed that I pulled out my wallet (actually the company wallet!).

While I don’t use all of the features every day there are a few that I use all the time – the first one is called ‘Reveal‘.

Consider the following situation:

But I already knew this was a list of view models!

At this breakpoint I’m looking at a collection of View Models – but I knew that already, so what value am I getting from this window? There are over 600 records here – do I have to expand each one to find what I’m looking for? What if one has a null value that is causing my problem – how will I find it?

Well, I could obviously write a new line after the breakpoint which will give me all of the items with a null value for, say, the name property. But to do that I need to stop the debugging session, write the new line, restart the application and perform whatever steps I need to perform to get back to the above scenario.
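
Something along these lines, assuming the collection is called viewModels and the property in question is Name (and that System.Linq is in scope);

var missingNames = viewModels.Where(vm => string.IsNullOrEmpty(vm.Name)).ToList();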

Ok, that’s not going to kill me but it’s wasted time and I have to remember to remove the debugging line once I’m done with it.

Using the OzCode Reveal feature I can specify which properties I want to be displayed instead of the actual, fully qualified, class name.

By expanding any of the records to view the actual property data it is possible to select which ones I want to see – which are important to me at this time – by clicking the star.

Select the properties you want to see

Now when I expand the list I see this instead:

Much more useful

These selections are persisted across sessions and can be changed whenever I like – maybe I want the email addresses, cities or countries next time – not a problem.

But what about nested properties? Well, that’s not a problem either – just drill in and star the properties you want to see and they will be displayed at the appropriate level in the tree as below:

Here the users first and last names are selected as well as the start date of their subscription

There’s a lot more to OzCode than this and as it becomes more embedded in the way I work I’ll post more about how it has helped me.

Hey Dude – where’s my online tool?

Over the past 18 months I’ve been posting details of various online tools that I’ve encountered – but not this month I’m afraid.

Why? Well, frankly I’ve run out! I’ve got nothing!

I’ve been hard at work but haven’t actually found the need for an online tool that I haven’t used before.

Now that’s not to say that there aren’t any tools out there – it’s just that I didn’t need them, not this month anyway.

Next month may be different but I think it’s probably best to drop the ‘of the month’ banner and just categorise them as ‘Online Tool’.

If you know of a tool that I’ve not featured and think that I, as a .NET/Xamarin developer, may find it useful then feel free to use the contact page to point me in its direction.

NDC London – A Little Review

So, the last developers conference (if you could call it that) I went to was way back in 2001 when we had WAP phones and if you had a Pentium 4 computer you were some super-techno hacker.

Well, times have changed somewhat; we now have smartphones with more memory than we had hard drive space back then and I’m writing this post on a workstation with 8 cores and 32GB of RAM (and this isn’t even the cutting edge). Add to that the cloud, driverless cars and social networks with billions of users (for better or worse) and we have come a hell of a long way.

Well, I decided to bite the bullet and shell out for a super early-bird ticket and get myself up to London for a few days to Geek-Out.

I had a clear idea in my head about what I wanted to achieve from the three days and planned to attend not only sessions about the new whizzy stuff like Blazor and ASP.NET Core 2.2 but also some covering the more mature technologies and concepts – if you read my tweets from the event I think you’ll see the scope of the tech covered. I think it was a little light on mobile development, but if there had been any more sessions covering that I would have had a hard time selecting which ones to go to.

Sessions were presented by everyone from the likes of Scott Hanselman, Troy Hunt and Jon Skeet through to speakers I’d never heard of but who presented some engaging (and enlightening) content. I don’t regret a single session choice and came out of each of them with something to follow up on.

The exhibitors were varied and interesting, with the likes of DevExpress, whose tools I’ve used for over a decade (and who know the proper strength for a real coffee), and JetBrains, along with OzCode (who proved that debugging doesn’t have to suck – to the degree that I bought a licence without even finishing the 30-day trial) and Twilio.

Although the weather wasn’t great and my flight home was nearly cancelled because of snow, I enjoyed my three days rubbing shoulders with celebrity and grass-roots developers alike.

I have to say that almost 20 years between conferences is far too long – especially in this industry and I’ll certainly be considering the next NDC (probably in London – but with Porto and Barcelona events, who knows).

The videos of the sessions are now on YouTube and I will be re-watching a few of these to refresh my memory. I was taking notes but I was also being careful not to get too absorbed in them so that I missed the actual content being delivered!

Online Tool of the Month – converter.telerik.com

Telerik shouldn’t be a new name to you but if it is then I suggest you head on over to their main site, https://www.telerik.com, and have a look.

This post relates to their Code Converter utility, which I found a need for recently while maintaining a client’s VB.NET application.

Although I used to develop mainly in VB.NET I am now predominantly working in C# and as anyone who has used both languages will tell you, flipping between one and the other is not exactly painless.

On this particular occasion I had an idea of how I could fix the issue with the client’s application code – but my idea was in C# – enter the Telerik Code Converter.

The UI is simple – two text panes, one on the left for the code you want to convert and the other for the conversion output, a button to switch between converting from C# or VB.NET and a button to perform the conversion.

Simply set the appropriate ‘from’ language, enter the code to convert and then click the big red ‘Convert Code’ button. The converted code is displayed in the right-hand pane.

The tool is quite forgiving and won’t complain about missing brackets or semi-colons but remember – ‘Garbage In – Garbage Out’.