Getting Started with Visual Studio 2019 Android Navigation Drawer template

So, I’ve had an idea for another privacy-focused application, this time aimed at mobile devices – Android in particular (I know that Apple are a little touchy about encryption apps – maybe I’ll venture into iOS at a later date).

Notwithstanding my desire to keep my skills up to date, I knew that the project I have in mind would require a lot of platform-specific logic. While Xamarin Forms can handle this, I preferred to take the hit, roll my sleeves up and opt for a native Android project instead – and that’s where the trouble/fun started.

If you go through the ‘New Project’ process below you will end up with an application which will look something like the one above;

Yep – just what I needed, an application with a slide-out menu. Now all I need to do is replace the default options with my own and then open the appropriate views when they are clicked – what could be easier?

Well, when you venture into the code you will find the following event handler within the MainActivity class:

Hmm – so no actual code in place for when an option is selected then!? Running the app and selecting various options confirms this is the case. So how hard can it be … really?

In the past I’ve used the Xamarin Forms Drawer Navigation model to create this sort of interface so was a little out of touch with the way native Android was doing things. I knew that I was probably looking at using Fragments, rather than Activities, to act as the content views but had never really used them.

Fragment is Obsolete – eh, what?

I know enough about Fragments to know that I needed a Fragment class and an associated Layout file (the view if you like).

I created a Fragments folder and, using the Add > New Item option, added a Fragment class.

The class was duly created and Visual Studio displayed a green squiggly under the Fragment base class reference. No big deal – we’ve all seen this before, probably nothing too worrying;

Oh! It would appear that Fragment is deprecated – what the what?

This was rabbit hole number 1 – if Fragment is deprecated what do I use instead?

After a couple of hours trawling the Internet and generally getting frustrated I began to piece together fragments of information (pun intended) which led me to a solution, actually a couple of them.

  1. Drop the ‘Compile Target’ from 9.0 (Pie) to 8.1 (Oreo) where the Fragment class is not marked as obsolete.
  2. Reference the Support Library implementation of the Fragment class instead.

Right, option 1 isn’t really an option unless you are happy to tether your application to Oreo. This may be fine for, say, a proof of concept but not for a production application. An app compiled against Oreo should run fine on Pie and above – but your mileage may vary, so make sure you check that out!

Option 2 should not have come as a surprise to you or anyone else familiar with Android development – very few apps these days can be written without using the Support Libraries.

The complication came because when you search the Internet for information on the Android Support Libraries you will run into posts about AndroidX and Jetpack – which are essentially the new Support Library implementations. Trying to tie these back to Xamarin proved to be excessively time consuming and in the end I found that their implementations, e.g. NuGet packages, are still in pre-release and documentation is sparse, so I was reluctant to use them unless I really had no option. At this point I was considering using the Xamarin Forms option instead – but where’s the “fun” in that..?

The fix for my Fragment was to simply update the Android.App using statement to target the Support library instead:
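The change itself is a one-liner – a sketch, assuming a generated fragment class (the MyApp.Fragments namespace is just a placeholder):

```csharp
// Before – Fragment resolved to the deprecated Android.App.Fragment
// using Android.App;

// After – target the Support Library implementation instead
using Android.Support.V4.App;

namespace MyApp.Fragments
{
    // Fragment now resolves to Android.Support.V4.App.Fragment
    public class Fragment1 : Fragment
    {
    }
}
```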

No more green squiggly line and the application builds just fine – all I need now is to create the Layout and write the code to swap out the Fragments based on the selected option from the menu in the slide out drawer – simple…..!

Wiring Up the Fragments

If you look at the structure of the solution that the template generated you will see that the content displayed in the main area of the UI is contained in a Layout view called content_main (Resources > layout > content_main.xml).

Opening this up we can see that this layout consists of a RelativeLayout with a TextView inside it.

Fairly straightforward stuff here – but how do we swap our fragments in and out?

Well, as it stands – we can’t..! What we need to do is to change the RelativeLayout to a FrameLayout, give it an Id and remove the TextView (as it won’t be needed).
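The reworked content_main.xml ends up looking something like this (fragment_container is the Id I’ve chosen – yours can be anything, just use it consistently):

```xml
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/fragment_container"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```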

Now we have a container for our fragments we need to respond to the menu selections, inflate the appropriate fragment and ‘inject’ it into our FrameLayout.

I have created three fragment classes with associated layouts – these are pretty basic with the layouts containing a single TextBox each and the corresponding classes just inflating them. You can download the finished solution from the link in the resource section at the bottom of this post.
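By way of illustration, a fragment class along these lines is all that’s needed (the class and layout names here are placeholders – see the downloadable solution for the real thing):

```csharp
using Android.OS;
using Android.Support.V4.App; // the Support Library Fragment, not the obsolete Android.App one
using Android.Views;

namespace NavigationDrawerDemo.Fragments
{
    public class Welcome : Fragment
    {
        public override View OnCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState)
        {
            // Inflate this fragment's layout into the supplied container
            return inflater.Inflate(Resource.Layout.fragment_welcome, container, false);
        }
    }
}
```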

The first thing we need to do is to set the initial layout state, and for this we will display the ‘Welcome’ fragment as our ‘Home/Landing Page’.

To do this we will need a FragmentManager, in particular – a Support Fragment Manager. Fortunately our generated MainActivity inherits from AppCompatActivity so it already has a property containing an instance of this class.

To set our initial layout we will need to create and commit a transaction via the Fragment Manager – don’t panic, it’s three lines of code which needs to be added to the OnCreate method of the MainActivity.

        protected override void OnCreate(Bundle savedInstanceState)
        {
            // ... existing template code ...

            var welcomeTransaction = SupportFragmentManager.BeginTransaction();
            welcomeTransaction.Add(Resource.Id.fragment_container, new Welcome(), "Welcome");
            welcomeTransaction.Commit();
        }

This will add an instance of the Welcome fragment to the FrameLayout – essentially pushing the view onto its internal stack.

Now, I’ve seen a couple of trains of thought as to how to handle displaying the appropriate transaction based on the menu selection and they each have pros and cons;

  1. Add all of the required fragments to the FrameLayout, showing and hiding them as required (the FrameLayout will store the fragments in a Stack)
  2. Swap out fragments on demand so that the FrameLayout only ever contains a single child view.

With option 1 all of the fragments are loaded into the FrameLayout we are using as a Fragment Container with all but the initial view, in our case the Welcome fragment, hidden as part of the transaction. The upside of this approach is that it will maintain the state of each fragment even when it is hidden. The downside is that you will need to keep track of the currently displayed fragment so that it can be hidden when another menu option is selected and of course there is the memory consumption to think about – but this may not be a concern depending on your requirements.

Because my app is aimed at a persons privacy I don’t want all of the state to be kept in memory – when a menu option is selected I want the current fragment to be destroyed. With this in mind I’ll be implementing Option 2 and replacing the fragments as required so that the FrameLayout stack only ever contains a single fragment.

Scrolling down to the OnNavigationItemSelected handler we can now add our code to swap out the fragments (I’ve just updated the Gallery and Slideshow options here)

            else if (id == Resource.Id.nav_gallery)
            {
                var menuTransaction = SupportFragmentManager.BeginTransaction();
                menuTransaction.Replace(Resource.Id.fragment_container, new Fragment1(), "Fragment1");
                menuTransaction.Commit();
            }
            else if (id == Resource.Id.nav_slideshow)
            {
                var menuTransaction = SupportFragmentManager.BeginTransaction();
                menuTransaction.Replace(Resource.Id.fragment_container, new Fragment2(), "Fragment2");
                menuTransaction.Commit();
            }

So, almost the same three lines as when we added the Welcome fragment (so a refactoring target) but this time we are replacing the contents of the FrameLayout (not replacing the layout itself, just its child views).
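As a sketch of that refactoring (the helper name is my own invention), the repeated three lines could be pulled into a single method on MainActivity:

```csharp
// Hypothetical helper – swaps whatever is in the container for the given fragment
private void ShowFragment(Android.Support.V4.App.Fragment fragment, string tag)
{
    var transaction = SupportFragmentManager.BeginTransaction();
    transaction.Replace(Resource.Id.fragment_container, fragment, tag);
    transaction.Commit();
}

// Usage within OnNavigationItemSelected:
// else if (id == Resource.Id.nav_gallery)
//     ShowFragment(new Fragment1(), "Fragment1");
```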

The result is that when the menu is opened (using the hamburger button or swiping from the left edge of the screen) and the Gallery or Slideshow option is clicked, the appropriate fragment will be loaded into the FrameLayout, replacing the one that was already there.

In Summary

Most apps need some form of menu and, while the NavigationDrawer template provides all the nice UI, it does leave the developer having to trawl the internet to work out what to do next.

Even though I knew that fragments would be required there was very little in the way of documentation to be found, specifically about extending this template. Add this to the rabbit hole that is AndroidX/JetPack and it can become frustrating for developers new to Xamarin development.

While this post walks you through one implementation there are doubtless others out there. If you feel I’ve missed something then either leave a comment below or head over to the Github repository to post an issue or submit a pull request (whichever works best for you).


Full Source Code

IR35 – Living with a Broken Promise

Well I guess it’s old news now, although it was quite foreseeable, but despite a pre-election promise the Conservatives have reneged on their commitment to review the IR35 legislation. Instead they will review the process for rolling the changes into the private sector – not the same thing at all.

Instead of me going over old ground, take a look at my previous IR35 post, which was published prior to the election (and its broken promises).

In the weeks that have followed Twitter has been ablaze with tweets tagged with #IR35 – many are mine. There is a lot of anger out there and our worst fears, that end clients would take the ‘easy option’ and just stop using contractors altogether, have come to pass (despite HMRC saying it wouldn’t).

Take a look at the site and you’ll see the extent of the problem that is unfolding.

Many contractors are being let go and the work is being farmed out overseas. Those clients choosing to keep their flexible workforce are either making blanket assessments that all roles are ‘Inside IR35’ (as they see it safer that way) or using the HMRC CEST tool with limited understanding on how to answer the questions it asks – this will normally result in an ‘Inside IR35’ determination.

Many will point at the CEST results and declare that most historic roles should have been deemed ‘Inside IR35’, so the system is working correctly and we have all been fleecing the system for years.

There is no doubt that some contractors, knowingly or otherwise, have been operating on the wrong side of IR35 – it would be foolish of me to say otherwise. But why is it that whenever a minority are found to be bending or breaking the rules everyone has to suffer the consequences?

So where does that leave me?

Well, I guess I’m lucky in that I’m in the process of securing a role with a client that will be exempt from the changes – they are a start-up and satisfy the ‘Small Business’ definition. This means that I’m responsible for making the IR35 determination. I’ve discussed this with the client and we are agreed that I am not an employee and that I’m providing a service via a business-to-business arrangement.

The project involves the development of a viable proof of concept and the contract will last 6 months.

Maybe the dust will have settled a bit by August 2020 and clients will realise that banning PSCs or imposing blanket ‘Inside IR35’ decisions isn’t working for them.

I’m sure that the government (small ‘g’ was deliberate!) won’t have changed their stance – they will more than likely spin whatever happens to their advantage, that’s what they do.

But it all comes down to this – what happens if, when this contract comes to an end, all the suitable roles are ‘Inside IR35’? What will I do then?

Well, it’s simple – if I cannot find an ‘Outside IR35’ role or secure a role with an exempt company, I’ll have to look for a permanent role instead and close down my Personal Service Company.

I will not work ‘Inside IR35’ as a ‘No Rights Employee’ – period. It won’t happen, ever!

Dave Carson (On The Fence Development) January 2020

This would be a shame and frankly it makes little sense for the government (small ‘g’ is deliberate remember) to stand there and accept that.

I pay more tax as a contractor than I would as a permanent employee – many contractors are the same. So by forcing us out of business there will be an inevitable loss in revenue – where is the sense in that?

Unit Testing with the Xamarin.Forms MessagingCenter

While we all know that Test Driven Development (TDD) is a good idea, in practice it’s not always viable. It could be a time constraint, a resource issue or the project just doesn’t warrant it.

While TDD may sometimes be an option, unit tests themselves should really be considered a must. They will save you a lot of time in the long run and, while they may not prevent you from going grey (ask me how I know), they will reduce your stress levels when bugs raise their ugly heads.

So, whether you write your tests before you write the code or vice-versa, if you are developing production code you really should write tests to cover it.

Now, one of the requirements for unit testing is the ability to mock out a component’s dependencies so that you are only testing the component itself.

Normally you would use the Dependency Injection pattern to help develop loosely coupled systems but Xamarin.Forms has a few components which can be a fly in the ointment – one of these is the MessagingCenter.

MessagingCenter allows you to send messages from one component to another using the publish-subscribe pattern to do so in a loosely coupled manner.

It’s built into Xamarin.Forms and is very easy to use;

On one side of the process you will send (publish) your message;

MessagingCenter.Send(this, "ForceLogout", message);

On the other side you will subscribe to the message;

 MessagingCenter.Subscribe<WelcomePage, string>
  (this, "ForceLogout", async (sender, arg) => {
    await ForceLogout(arg);
  });

The problem is that MessagingCenter cannot be mocked and injected into your class – so how do you unit test it?

Well, the answer is embarrassingly straightforward – you may already be using a similar approach to test for events being raised (I know I am).

Basically, all we need to do is to subscribe to the message in our test, setting a flag when the message is received;

// Arrange
var messageSent = false;
MessagingCenter.Subscribe<WelcomePage, string>
  (this, "ForceLogout", (sender, arg) => messageSent = true);

// Act
// ... call the code that is expected to send the "ForceLogout" message ...

// Assert
Assert.IsTrue(messageSent);

It really is as simple as that. While we can’t mock out the MessagingCenter, we can simply subscribe to the message we are interested in – just as we would in the application code itself.

IR35 2020 – Thoughts from the Coal Face

I’ve been contracting for over eight years now and in that time I’ve been careful to ensure that, to the best of my abilities, I operate in a manner that places me outside of the IR35 legislation. That is, to provide a service to my clients and not to be seen as an employee.

Currently it is my responsibility to determine the employment status of a role with regards IR35. I do this by having contracts independently reviewed to ensure that they comply with the legislation and take steps to ensure that the actual working conditions are in accordance with service provision rather than employment.

If I get it wrong then it is down to me to justify my determination, in court if need be, and pay any unpaid taxes should I be unable to do so.

However, in April 2020 that determination could largely be taken out of my hands and placed in those of the fee payer, e.g. the client if I’ve been engaged directly, or otherwise a recruitment agency that has facilitated the engagement.

HMRC have decided to make this change, stating their belief that most contractors are incorrectly self-declaring themselves as being outside of IR35 and avoiding paying the correct level of tax.

They have not been able to substantiate these claims, despite repeated calls to do so – but that’s not the reason for this post.

There are a number of problems with this seemingly subtle shift in responsibilities but, as I see it, the main one is that a great many end clients have no concept of IR35 or how to interpret it. Until now they’ve really not had to worry about it – the contractor turned up, did the work and left again. The client paid the contractor’s Limited Company or the recruitment agency an agreed fee and life was simple.

From April 2020, fee payers that meet the size and turnover criteria will need to determine whether IR35 applies and, if so, deduct tax at source prior to paying the contractor’s company. However, the IR35 legislation is complex and confusing – with HMRC losing more cases than it wins, it is clear that they don’t fully understand it either.

If, in the eyes of HMRC, the fee payer gets the IR35 determination wrong and has deemed a contract to be outside of the legislation instead of inside, then they will be responsible for paying the overdue tax – which may not be trivial. So what do we think clients are going to do? Are they going to invest the time into understanding IR35 so that they can defend an ‘outside’ determination, or just take the ‘easy’ route?

Well, in spite of HMRC saying it wouldn’t happen, many major clients, particularly in the banking sector, have already stated that they will either not be engaging with contractors after April 2020 or will be considering ALL contract positions to be within IR35. That way they will stay on the right side of HMRC and everything is simple again.

The problem is that while HMRC may be happy with this, after all they will be receiving additional revenue from these blanket IR35 determinations, many contractors are not.

Without getting into the nitty-gritty of IR35 it is not easy to see the problems that this will create for contractors, but I think that there is an opportunity to educate the clients here – to make them aware of the difference between contractors, who provide a business-to-business service, and their permanent employees.

Provision of Services – A ‘Tradesperson’

First of all imagine that you run a company and want to freshen the office up a little bit.

You contact a decorating company, explain your requirements and agree on a price and timescale for the work. The company duly sends a suitably qualified decorator to your site, you tell them what you want done and they get on and do it.

When they are done they leave and if you are happy with the work you pay the invoice.

So, what have you done here?

You have engaged with a company to provide you with a service. They have sent one (or more) of their employees to carry out the work and then invoiced you for that work.

The key here is that the company has provided you with a service – you have not directly employed the person who arrived to do the work. You don’t need to pay their taxes or into their pension. If they are unavailable the company can send someone else, suitably qualified of course, to continue with the work.

If the work was scheduled to take a week and it’s finished in three days then you are not obliged to provide additional work to fill the time – and neither is the decorator or company obliged to accept it if you do. The engagement was for a set piece of work and once that is done the engagement is over.

Now, I think that’s pretty straightforward and nobody should have a problem with that. But the thing is that the provision of service is somewhat clear cut; you have engaged with a company who provide services that your company is not skilled in.

Provision of Services – Contractors

Imagine now that you run a company which has a Software Development aspect to it and you need some additional resource to fulfil a project on time.

You reach out, directly or via an agency, to a company providing contracted software development services, explain your requirements and agree on a price and timescale for the work.

The contractor arrives on site, is told what needs to be done and gets on and does it.

Once the work is completed there is no expectation that further work needs to be offered or accepted and the engagement is over.

During the engagement, should the contractor fall ill, there is no expectation that their services will be invoiced unless they are able to send a suitable replacement (at the contracting company’s cost) to continue with the work.

Does this sound familiar? It should do – it’s pretty much the same as the previous example with the decorator.

The difference is that the service being provided is something that your company already does and the contractor is probably expected to work within an existing team.

This is where the lines start to blur and the grey areas start to appear.

Why does this make a difference?

With a contractor more embedded in the business and its team it is easy for the client to see them as an employee and attempt to treat them as such.

I’ve had instances where a client will attempt to dictate my working hours (despite working a standard 7.5 hours) and my dress code (I was wearing a polo shirt with my company branding).

Would the client feel able to do this with their decorator? I think not.

If they arrive at 9 and leave at 3 but get the work done to the required standard and complete the job on time then surely that’s what matters. If the quality isn’t there or the job isn’t finished then that’s another matter altogether.

The client wouldn’t expect to be able to dictate the dress code for these external workers, or when they take their lunch breaks so why expect to do it with a contractor?

I may well sound like a bit of a diva here – I mean, who do I think I am? But this is the reality of the matter. The client has engaged my company, directly or indirectly, to provide a service – I am not their employee.

Now, in order to integrate into a team a contractor will normally work similar hours but their hours should only really be dictated by their ability to access the building (assuming they are working on site).

I will obviously make efforts to integrate with the team and working conditions but this is in order to maximise the value of the services I provide – I’m kinda old school like that.

The Take Away

I believe that unless clients can understand this concept then they will have little chance of being able to make an accurate determination of employment status.

If they see contractors as employees, albeit temporary, then they will simply not grasp the importance of determining an accurate employment status with regards IR35.

It may well be that a particular role is correctly deemed to be within IR35 and if this is the case then that’s fine.

However, clients should avoid taking the stance that all contracts are within IR35 – doing so will not only damage the contracting industry in the UK, but the client will also severely hamper their ability to engage with an experienced, temporary and flexible workforce to deliver their projects and products.

AppSettings in Xamarin.Forms

If you have used ASP.NET in recent years you will probably be familiar with the appSettings.json file and its associated build-specific transformations, e.g. appSettings.development.json and appSettings.release.json.

Essentially these allow developers to define multiple configuration settings which will be swapped out based on the build configuration in play. Common settings are stored in the main appSettings.json file while, for instance, API Endpoint Urls for development and production deployments are stored in the development and release files.

At compile-time any values specified in the deployment version of the file overwrite those in the common version.

Simple – it works well and we use it all the time without thinking about it. But what about Xamarin.Forms? It doesn’t have such a mechanism out of the box, so how do we achieve this and prevent accidentally publishing an app to the App/Play Stores which is pointing at your development/staging servers?

The Problem

Previously I have tried a number of approaches and while they mainly worked there were always shortcomings which meant that I couldn’t really rely on them.

One used a pre-deployment step in Visual Studio to copy the appropriate file, based on the build configuration, from one location to another where it would be picked up by the application’s startup code. This worked fine when running on Windows but not when building on the Mac (using Visual Studio for Mac) because it uses the cp command and not copy.

Yes, I could have created an alias from copy to cp but what about when I want to configure a Continuous Integration build on Azure DevOps?

Another approach had me creating Operating System specific script files, .bat on Windows and .sh on the Mac, and again using a pre-build task to run the appropriate script (executing the extension-less command would run the appropriate version on each platform). But passing arguments for the build configuration was clunky and again Azure DevOps pipelines didn’t really seem to want to play ball – maybe Microsoft is a bit cautious about letting anyone execute scripts on their Azure servers 😉

Well, after deploying and testing the wrong version of an app onto my phone and scratching my head for twenty minutes I decided enough was enough and using the first approach above as a base I have come up with a simple mechanism which does the job.

The (my) Solution

The solution I’m running with requires a configuration file to be created for each build configuration, e.g. Debug and Release (by default), which are configured as ‘Embedded Resources’.

A static property is added to the App class which implements the Singleton pattern to load and return an instance of a class that represents the contents of the configuration file – I’m using Newtonsoft.Json to deserialise the file and hydrate an instance of the class.

Add some Conditional Compilation Symbols to the mix and we are just about there.

If you would rather look at the code in Visual Studio then you can download it here.

Enough talk – let’s get to it 😉

To the Code

I’ve created a basic Xamarin.Forms project using Visual Studio 2019 Community Edition, stripped out all the default stuff and added in a single Content Page, ViewModel, Class and two Configuration Files:

The Solution (in all its glory)

The appsettings.debug.json file has the following content:

The appsettings.debug.json file
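By way of illustration, mine looks something like this (the WelcomeText value is just a placeholder):

```json
{
  "WelcomeText": "Hello from the Debug configuration!"
}
```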

You probably don’t need any clues to the content of the other version 😉

The only thing to remember here is to ensure that you set the ‘Build Action’ for these files as ‘Embedded Resource’:

Setting Build Action on .json files

The AppSettings.cs class is a simple POCO with a single property which corresponds to the json files:

The AppSettings.cs POCO class file (yes – I could remove the redundant ‘using’ statements!)
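A sketch of the class (the property name is assumed to match the key in the .json files):

```csharp
namespace AppSettingsPoc.Configuration
{
    // Simple POCO mirroring the structure of the appsettings.*.json files
    public class AppSettings
    {
        public string WelcomeText { get; set; }
    }
}
```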

Now, when a Xamarin.Forms app starts up the App class is, for all intents and purposes, the entry point and is generally available to the Content Pages throughout the app. So this is a good place to expose our AppSettings:


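A sketch of what that looks like (the resource names follow the AppSettingsPoc.Configuration convention described below):

```csharp
using System.IO;
using System.Reflection;
using Newtonsoft.Json;
using AppSettingsPoc.Configuration;

namespace AppSettingsPoc
{
    public partial class App : Xamarin.Forms.Application
    {
        private static AppSettings _appSettings;

        // Lazy, Singleton-style access – the instance is created on first use
        public static AppSettings AppSettings =>
            _appSettings ?? (_appSettings = LoadAppSettings());

        private static AppSettings LoadAppSettings()
        {
#if RELEASE
            var fileName = "AppSettingsPoc.Configuration.appsettings.release.json";
#else
            var fileName = "AppSettingsPoc.Configuration.appsettings.debug.json";
#endif
            // Embedded Resources are read from the assembly, not the file system
            var assembly = typeof(App).GetTypeInfo().Assembly;
            using (var stream = assembly.GetManifestResourceStream(fileName))
            using (var reader = new StreamReader(stream))
            {
                return JsonConvert.DeserializeObject<AppSettings>(reader.ReadToEnd());
            }
        }
    }
}
```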
Notice how the property getter will instantiate the AppSettings instance if it is not already in place.

Also notice the use of the conditional compilation statements (#if .. #else .. #endif) in the LoadAppSettings method. This is where the ‘magic’ happens. I know that some people may shy away from this approach but this is the way I’ve gone for now.

Basically the LoadAppSettings method will read in the specified file depending on which build configuration is in play at the time. The file is deserialised into an instance of AppSettings and the local variable updated.

As the .json files are Embedded Resources we can address them using their fully qualified names, noting that the name is made up of the overall namespace (AppSettingsPoc), the folder name containing the files (Configuration) and the actual filenames. Yours will be different for sure – just remember how it’s composed.

For the conditional compilation to work we need to specify the appropriate symbols (the ‘RELEASE’ text in the above code).

To do this, Right-Click on the shared project (the one with the App.xaml file) and select ‘Properties’:

Project properties for AppSettingsPoc (shared project)

Select the ‘Build’ tab from the left hand side and set the configuration to ‘Release’ if it’s not already.

In the ‘Conditional Compilation Symbols’ field enter ‘RELEASE’ (or whatever you want to call it – just match it up with what you use in the App.xaml.cs file). If there are other values already present then tag this new one to the end, delimiting it with a semi-colon.

So, we have the configuration files, we are loading them and making them available to the Application. Now we just need to consume the data, and for this I’m using a ViewModel, HomeViewModel.cs, which will be the Binding Context for our Page.

The ViewModel class is another simple POCO with a single property which reads the WelcomeText from the AppSettings instance via the App class:

The HomeViewModel.cs POCO class – with redundant ‘using’ statements removed 😉
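Something along these lines (the ViewModels namespace is just my placeholder):

```csharp
namespace AppSettingsPoc.ViewModels
{
    // Exposes the configured welcome message for data binding
    public class HomeViewModel
    {
        // Reads from the AppSettings singleton exposed on the App class
        public string WelcomeText => App.AppSettings.WelcomeText;
    }
}
```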

Finally, we just need to bind this property to a UI element on our Page:


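A minimal sketch of the page markup (the page class name and namespaces are placeholders):

```xml
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:vm="clr-namespace:AppSettingsPoc.ViewModels"
             x:Class="AppSettingsPoc.HomePage">
    <ContentPage.BindingContext>
        <vm:HomeViewModel />
    </ContentPage.BindingContext>
    <StackLayout VerticalOptions="Center">
        <!-- WelcomeText comes from the ViewModel, which reads it from AppSettings -->
        <Label Text="{Binding WelcomeText}" HorizontalOptions="Center" />
    </StackLayout>
</ContentPage>
```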
Notice that in the above markup I’m specifying the BindingContext to be an instance of the HomeViewModel. As with many things in Xamarin.Forms, there are numerous ways of achieving this.

The label has a Text property which I’ve bound to the WelcomeText property (which will be on the ViewModel remember) and that’s about it.

If I run the app in Debug mode I will see the message as read from the appsettings.debug.json file. Run it in Release mode and it will be the message from the appsettings.release.json file:

In Summary

The solution presented above requires a small amount of setup and then it’s pretty much fire-and-forget. If you need a new setting, update your AppSettings.cs class, add the values to your .json files and you are good to go.

Need a new build configuration, say Staging? No problem. Just create the new .json file (remembering to set its build action as ‘Embedded Resource’), add a new ‘STAGING’ symbol to the project properties, update the LoadAppSettings method to check for it and you are done. Simple as that really.
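For example, the conditional block in LoadAppSettings might grow like this (a sketch, assuming a STAGING symbol defined on the new configuration):

```csharp
#if RELEASE
        var fileName = "AppSettingsPoc.Configuration.appsettings.release.json";
#elif STAGING
        var fileName = "AppSettingsPoc.Configuration.appsettings.staging.json";
#else
        var fileName = "AppSettingsPoc.Configuration.appsettings.debug.json";
#endif
```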

Now, some may say that Conditional Compilation is bad and that using Reflection is the work of the Devil. But frankly – I’m quite pragmatic about these things and if it works and doesn’t have a dreadful code smell then I’m not going to lose too much sleep over that.

Roll my own DDNS? Why Not!

Update: Well, there’s certainly more to Dynamic DNS than meets the eye – who knew.

Investigations led to the decision that I should put my hand in my pocket and spend my time better elsewhere.

Note that I didn’t opt for the overpriced offering from Oracle, signing up with NoIp instead.

Like many devs I have been known to host websites and services behind my home broadband router and therefore needed a Dynamic DNS resolver service of some description. But in my recent moves to limit my reliance on third-party services – including those provided by Google – I wanted to see what would be involved in creating my own service.

Why would I want to roll by own?

Over the last few years I’ve moved my hosted websites outside of my home network and onto services offered by Digital Ocean so I was only really using my DDNS provider for a single resource – my Synology NAS.

Now, in the past I’ve used DynDNS (an Oracle product) and while I’ve had no issues with the service it’s not what you could call cheap – currently starting at $55 a year. When a previous renewal came through, and after reviewing what I was using it for, I decided to let it expire and do without the access to the NAS from outside my network.

Recently though I’ve been using OwnCloud (a Dropbox-like system), hosted on Digital Ocean, to replace some of the functions I used to use the Synology for. I don’t really want to use Dropbox, Google Drive or similar offerings as I want to keep my data under my control. With the desktop application running I was able to access and edit files from multiple systems while only actually having a single source of those files, i.e. OwnCloud.

The only downside I’ve encountered was with the Mobile app – which I wanted to use to backup the photos taken on my phone in the same way that the Google Photos app does (because I want to store my own data remember!). Well, the app just refuses to do this without manual intervention (and even then it’s buggy) which is kind of defeating the object.

Then, while listening to the Security Now podcast I heard Steve Gibson talking about his quest to find a suitable file syncing setup. He discussed the pros and cons of different systems and finally opted for Dropbox – which I don’t want to use. Then – out of the blue – the podcast host, Leo Laporte, mentioned ‘Synology Drive’ and his initial description seemed to tick all my boxes … plus I already have a Synology NAS.

I’ve had a look at the capabilities of the system and it seems to do what I want it to do but of course I now have the problem of accessing this from outside my network – I need a Dynamic DNS resolver service of some description. Sure, I could just put my hand in my pocket and pay for one but where is the fun in that?

OK, what’s the plan?

Right, what does a Dynamic DNS resolver do (or what do I think it does anyway)?

Well, in my experience when I was configuring the DNS for my domain I simply pointed the resource, e.g. a website, at the DynDNS service and it would redirect requests to my home broadband IP address. Simple huh?

But how does the DynDNS service know my home IP address and what happens when it changes?

Well, my router has the capability to integrate with these services and notify them when the IP address changes.

So I need to deploy something which will redirect requests for my Synology based services to my home IP address and a mechanism to keep the IP address up to date. How hard can it be?

I plan to create and deploy a simple ASP.NET Core MVC application with a GET and a POST endpoint. The GET will accept incoming requests and perform the redirection while the POST will be used to keep the IP address updated.
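As a minimal sketch of the plan, here is the state those two endpoints would share. The controller plumbing is omitted so the snippet stays self-contained, and all names are illustrative – in the real application this class would sit behind an ASP.NET Core controller whose GET action returns a redirect and whose POST action calls Update with the caller’s remote address:

```csharp
// Hypothetical core of the redirector service. The GET endpoint would
// call TargetFor() and issue a redirect; the POST endpoint would call
// Update() with HttpContext.Connection.RemoteIpAddress.
public class RedirectorState
{
    // The most recently reported home broadband IP address
    public string CurrentIp { get; private set; } = "0.0.0.0";

    // Invoked from the POST endpoint; the server resolves the caller's
    // IP itself, so the Synology never needs to know its own address
    public void Update(string callerIp) => CurrentIp = callerIp;

    // Invoked from the GET endpoint to build the redirect target
    public string TargetFor(string path) => $"http://{CurrentIp}{path}";
}
```

The open question flagged below – HTTPS – bites here: a plain redirect to a bare IP address will not carry a valid certificate for the original host name.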

Proof of Concept

So, step one is a proof of concept – a super simple MVC application, deployed to my .NET Core enabled Digital Ocean Droplet, which will perform basic redirection to a hard-coded IP address.

I’m not expecting too many problems with this but the one thing that springs to mind is how will it handle HTTPS requests?

If all goes well with step one, the next step will be to set the IP address and keep it up to date. For this I plan to use the Synology’s ability to run Python scripts on a schedule (say every 30 minutes) which will call the POST endpoint. The plan being for the endpoint to be able to resolve the source IP address from the request, rather than the Synology having to resolve it first.
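The scheduled script could be as simple as the sketch below – the endpoint URL and auth header are assumptions on my part, since the real service would define its own:

```python
# Hypothetical updater for the Synology scheduled task. Because the
# server resolves the caller's public IP from the incoming connection,
# the request needs no body at all.
import urllib.request

UPDATE_URL = "https://example.com/ddns/update"   # hypothetical POST endpoint
API_KEY = "change-me"                            # hypothetical shared secret

def build_update_request(url: str, api_key: str) -> urllib.request.Request:
    """Build the POST request that tells the service 'this is my IP now'."""
    return urllib.request.Request(
        url, data=b"", headers={"X-Api-Key": api_key}, method="POST"
    )

# On the Synology this would run on a schedule, e.g.:
#   urllib.request.urlopen(build_update_request(UPDATE_URL, API_KEY))
```

The shared-secret header is there so that only the NAS can update the address – without some form of authentication anyone could point the redirect at their own IP.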

Clearly this will not be as polished a service as the one offered by Oracle but it will use existing resources, so there is no additional cost to be incurred.

Watch this space 🙂

Sorry Facebook – that’s enough

Sometime ago I deleted my personal Facebook profile and found that I really didn’t miss it – at all. Quite liberating in fact – you should try it. My departure pre-dated the Cambridge Analytica scandal and was more to do with the garbage that ended up in my feed than anything else.

That said, recent reports of plain text passwords and other dubious operating tactics of Facebook would have seen me making the same decision to get off the platform.

But – there was a problem! I had a Facebook business page and it needed to be associated with an active user account.

I duly created a new profile and locked down the permissions/privacy settings as hard as I could and then associated the page with that account – leaving me free to delete my personal one.

The business page was little more than a presence and while I posted links to it via Buffer I didn’t really use it to engage with any potential clients. Looking at the engagement on the links I did post, it was pretty low (double figures or lower) and the page was really an aide-mémoire for me if nothing else – a reading list if you like.

Recently the news has been peppered with stories of Facebook and their handling/selling of personal information as well as some shocking security issues including the storage of plain text passwords – as a developer I don’t know why this would ever be seen as a good idea.

So the question I asked myself was;

“Do I want to be associated with a company that operates in this manner?”

It didn’t take long to come to the conclusion that I didn’t.

It’s not that I’m a high flying operation or the next up and coming big deal – I’m absolutely certain that Facebook won’t notice my departure on any level whatsoever.

It’s not that any of my clients (past, existing or future) would make any judgement on me for being on Facebook (it’s not that I’m advocating animal research after all).

It’s just that I don’t want to be reliant on an unreliable platform run by people I frankly don’t trust. Just because I don’t pay for the service doesn’t mean that they can do as they please – not with me anyway.

So that’s that – the social link in the sidebar will link to this post and all my Buffered posts also went to Twitter so I’ve lost nothing at all. I will be looking at creating a ‘Link Directory’ page here but haven’t decided on a plugin yet.

Updating an end of life application

A while ago I posted that the FillLPG for Android application was, in a word, Dead! But in a few days’ time users will notice a new version hitting the Play Store, as well as in-app notifications to update – so what gives?

Have I changed my mind about withdrawing support for this app – no, I haven’t. Essentially my hand has been forced by Google’s recent decision to deprecate some of the functionality I was using to fetch nearby places as part of the ‘Add New Station’ wizard as well as requiring 64 bit support – the latter being little more than a checkbox and nothing that most users will ever notice.

Removal of the Places Picker

Prior to the update, when adding a new station a user could specify the location in one of two ways;

  • Select from a list of locations provided by the Google Places API and flagged as ‘Gas Stations’
  • Open a ‘Place Picker’ in the form of a map and drop a pin on the desired location

It is the second option which is now going away – and there is nothing I can do about it. Google are pulling it and that’s that.

The Place Picker was added as I noticed that a nearby ‘Flogas’ centre, where you could buy cylinders of LPG and also refill your vehicle’s tank, was not on the list returned by Google Places. Using the Picker it was possible to zoom and pan in order to locate the desired location. It was also useful if you weren’t actually present at the station you wanted to add, i.e. it wasn’t nearby!

So where does that leave users (apart from heading to the Play Store to complain and leave a 1 star review!) – well, if the desired location is not displayed in the list then they will need to head over to the website and add the station there. Not as convenient as using the app maybe but not rocket science either.

But why bother if the app is ‘Dead’?

Well, after viewing some in app analytics I found that on average around 25 new stations were added, via the app, each month so there is a chance that users would bump up against this problem when trying to open the Place Picker – chances are that the app could simply crash out.

If the ‘Add New Station’ feature wasn’t being used I’d probably just have removed the menu option and have done with it.

But enough users were using it so I set aside a couple of hours to investigate and remove the Place Picker and its associated code while leaving the ‘Nearby List’ in place – even if it will only contain locations flagged by Google Places as being ‘Gas Stations’.

In Summary

In this instance the required update was not too onerous – just a couple of hours of work really, however, this may not always be the case.

Android is always evolving and Google could well make changes to other areas of functionality that could adversely affect the application and require a lot more development effort, e.g. forcing a migration to the new version of the Maps API.

In that instance it would probably be the end of the road for the app and it would either have to hobble along or be pulled from the Play Store altogether.

For now though, it’s business as usual 🙂

Getting to grips with OzCode

When I was at NDC London in January I watched a demonstration of the OzCode extension for Visual Studio. Not only was it well presented but it highlighted some of the pinch points we all have to tolerate while debugging.

In return for a scan of my conference pass, i.e. my contact details, I received a whopping 35% discount off a licence and, without even completing the 30-day trial, I was so impressed that I pulled out my wallet (actually the company wallet!).

While I don’t use all of the features every day there are a few that I use all the time – the first one is called ‘Reveal’.

Consider the following situation:

But I already knew this was a list of view models!

At this breakpoint I’m looking at a collection of View Models – but I knew that already, so what value am I getting from this window? There are over 600 records here – do I have to expand each one to find what I’m looking for? What if one has a null value that is causing my problem – how will I find it?

Well, I could obviously write a new line after the breakpoint which will give me all of the items with a null value for say the name property. But to do that I need to stop the debugging session, write the new line, restart the application and perform whatever steps I need to perform to get back to the above scenario.
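For reference, the throwaway line I’m describing would be something like the helper below – the view model and its property names are hypothetical, standing in for whatever is actually in the collection:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical view model - property names are illustrative
public record MemberViewModel(string Name, string Email);

public static class DebugHelpers
{
    // The temporary line I'd otherwise have to add after the breakpoint,
    // then remember to delete once the debugging session is over
    public static List<MemberViewModel> WithMissingName(IEnumerable<MemberViewModel> items) =>
        items.Where(vm => string.IsNullOrEmpty(vm.Name)).ToList();
}
```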

Ok, that’s not going to kill me but it’s wasted time and I have to remember to remove the debugging line once I’m done with it.

Using the OzCode Reveal feature I can specify which properties I want to be displayed instead of the actual, fully qualified, class name.

By expanding any of the records to view the actual property data it is possible to select which ones I want to see – which are important to me at this time – by clicking the star.

Select the properties you want to see

Now when I expand the list I see this instead:

Much more useful

These selections are persisted across sessions and can be changed whenever – maybe I want the email addresses, cities or countries next time – not a problem.

But what about nested properties? Well, that’s not a problem either – just drill in and star the properties you want to see and they will be displayed at the appropriate level in the tree, as below:

Here the users first and last names are selected as well as the start date of their subscription

There’s a lot more to OzCode than this and as it becomes more embedded in the way I work I’ll post more about how it has helped me.

Hey Dude – where’s my online tool?

Over the past 18 months I’ve been posting details of various online tools that I’ve encountered – but not this month I’m afraid.

Why? Well, frankly I’ve run out! I’ve got nothing!

I’ve been hard at work but haven’t actually found the need for an online tool that I haven’t used before.

Now that’s not to say that there aren’t any tools out there – it’s just that I didn’t need them, not this month anyway.

Next month may be different but I think it’s probably best to drop the ‘of the month’ banner and just categorise them as ‘Online Tool’.

If you know of a tool that I’ve not featured and think I, as a .NET / Xamarin developer, may find useful then feel free to use the contact page to point me in its direction.