AppSettings in Xamarin.Forms

If you have used ASP.NET in recent years you will probably be familiar with the appSettings.json file and its associated, build-specific transformations, e.g. appSettings.development.json and appSettings.release.json.

Essentially these allow developers to define multiple configuration settings which will be swapped out based on the build configuration in play. Common settings are stored in the main appSettings.json file while, for instance, API endpoint URLs for development and production deployments are stored in the development and release files.

At compile-time any values specified in the deployment version of the file overwrite those in the common version.

Simple – works well and we use it all the time without thinking about it. But what about Xamarin.Forms? It doesn’t have such a mechanism out of the box, so how do we achieve this and prevent accidentally publishing an app to the App/Play Stores that points at your development/staging servers?

The Problem

Previously I have tried a number of approaches and while they mainly worked there were always shortcomings which meant that I couldn’t really rely on them.

One used a pre-deployment step in Visual Studio to copy the appropriate file, based on the build configuration, from one location to another where it would be picked up by the application’s startup code. This worked fine when running on Windows but not when building on the Mac (using Visual Studio for Mac), which uses the cp command and not copy.

Yes, I could have created an alias from copy to cp but what about when I want to configure a Continuous Integration build on Azure DevOps?

Another approach had me creating operating system specific script files, .bat on Windows and .sh on the Mac, and again using a pre-build task to run the appropriate script (executing the extension-less command would run the appropriate version on each platform). But passing arguments for the build configuration was clunky and again Azure DevOps pipelines didn’t really seem to want to play ball – maybe Microsoft is a bit cautious about letting anyone execute scripts on their Azure servers 😉

Well, after deploying and testing the wrong version of an app onto my phone and scratching my head for twenty minutes I decided enough was enough and using the first approach above as a base I have come up with a simple mechanism which does the job.

The (my) Solution

The solution I’m running with requires a configuration file to be created for each build configuration, e.g. Debug and Release (by default), which are configured as ‘Embedded Resources’.

A static property is added to the App class which implements the Singleton pattern to load and return an instance of a class that represents the contents of the configuration file – I’m using Newtonsoft.Json to deserialise the file and hydrate an instance of the class.

Add some Conditional Compilation Symbols to the mix and we are just about there.

If you would rather look at the code in Visual Studio then you can download it here.

Enough talk – let’s get to it 😉

To the Code

I’ve created a basic Xamarin.Forms project using Visual Studio 2019 Community Edition, stripped out all the default stuff and added in a single Content Page, ViewModel, Class and two Configuration Files:

The Solution (in all its glory)

The appsettings.debug.json file has the following content:

The appsettings.debug.json file
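
The original post showed this as a screenshot, so here is a minimal sketch of the content, assuming the single WelcomeText setting bound later in the post (the message text itself is illustrative):

{
    "WelcomeText": "Hello from the Debug configuration!"
}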

You probably don’t need any clues to the content of the other version 😉

The only thing to remember here is to ensure that you set the ‘Build Action’ for these files as ‘Embedded Resource’:

Setting Build Action on .json files

The AppSettings.cs class is a simple POCO with a single property which corresponds to the json files:

The AppSettings.cs POCO class file (yes – I could remove the redundant ‘using’ statements!)
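
The screenshot isn’t reproduced here, so this is a reconstruction (minus those redundant ‘using’ statements); the namespace is my assumption and the WelcomeText property matches the binding used later in the post:

namespace AppSettingsPoc
{
    public class AppSettings
    {
        // Corresponds to the WelcomeText value in the appsettings .json files.
        public string WelcomeText { get; set; }
    }
}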

Now, when a Xamarin.Forms app starts up the App class is, for all intents and purposes, the entry point and is generally available to the Content Pages throughout the app. So this is a good place to expose our AppSettings:

App.xaml.cs
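
Again, the original was a screenshot; the following is a sketch of the relevant parts. The member names are my own reconstruction, but the resource names follow the project structure described in this post:

using System.IO;
using System.Reflection;
using Newtonsoft.Json;
using Xamarin.Forms;

namespace AppSettingsPoc
{
    public partial class App : Application
    {
        // Constructor and lifecycle methods omitted for brevity.

        private static AppSettings _appSettings;

        // Lazily loads the settings on first access (the Singleton pattern mentioned above).
        public static AppSettings AppSettings
        {
            get
            {
                if (_appSettings == null)
                {
                    LoadAppSettings();
                }

                return _appSettings;
            }
        }

        private static void LoadAppSettings()
        {
            // The build configuration decides which embedded file is read.
#if RELEASE
            const string fileName = "AppSettingsPoc.Configuration.appsettings.release.json";
#else
            const string fileName = "AppSettingsPoc.Configuration.appsettings.debug.json";
#endif
            var assembly = typeof(App).GetTypeInfo().Assembly;

            using (var stream = assembly.GetManifestResourceStream(fileName))
            using (var reader = new StreamReader(stream))
            {
                _appSettings = JsonConvert.DeserializeObject<AppSettings>(reader.ReadToEnd());
            }
        }
    }
}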

Notice how the property getter will instantiate the AppSettings instance if it is not already in place.

Also notice the use of the conditional compilation statements (#if .. #else .. #endif) in the LoadAppSettings method. This is where the ‘magic’ happens. I know that some people may shy away from this approach but this is the way I’ve gone for now.

Basically the LoadAppSettings method will read in the specified file depending on which build configuration is in play at the time. The file is deserialised into an instance of AppSettings and the backing field updated.

As the .json files are Embedded Resources we can address them using their fully qualified resource names, noting that the name is made up of the overall namespace (AppSettingsPoc), the folder containing the files (Configuration) and the actual filenames. Yours will be different for sure – just remember how it’s composed.

For the conditional compilation to work we need to specify the appropriate symbols (the ‘RELEASE’ text in the above code).

To do this, Right-Click on the shared project (the one with the App.xaml file) and select ‘Properties’:

Project properties for AppSettingsPoc (shared project)

Select the ‘Build’ tab from the left hand side and set the configuration to ‘Release’ if it’s not already.

In the ‘Conditional Compilation Symbols’ field enter ‘RELEASE’ (or whatever you want to call it – just match it up with what you use in the App.xaml.cs file). If there are other values already present then tag this new one to the end, delimiting it with a semi-colon.
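
If you prefer to see (or edit) this directly, the symbol ends up in the project file as a DefineConstants entry, something along these lines:

<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">
  <DefineConstants>TRACE;RELEASE</DefineConstants>
</PropertyGroup>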

So, we have the configuration files, we are loading them and making them available to the application. Now we just need to consume the data, and for this I’m using a ViewModel, HomeViewModel.cs, which will be the Binding Context for our Page.

The ViewModel class is another simple POCO with a single property which reads the WelcomeText from the AppSettings instance via the App class:

The HomeViewModel.cs POCO class – with redundant ‘using’ statements removed 😉
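
A sketch of the class (the namespace is my assumption):

namespace AppSettingsPoc.ViewModels
{
    public class HomeViewModel
    {
        // Reads the welcome message from the settings exposed by the App class.
        public string WelcomeText => App.AppSettings.WelcomeText;
    }
}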

Finally, we just need to bind this property to a UI element on our Page:

HomePage.xaml
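
A reconstruction of the markup rather than the original screenshot – the essentials are the BindingContext and the bound Label (the page class name and layout are my assumptions):

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:viewModels="clr-namespace:AppSettingsPoc.ViewModels"
             x:Class="AppSettingsPoc.HomePage">

    <ContentPage.BindingContext>
        <viewModels:HomeViewModel />
    </ContentPage.BindingContext>

    <StackLayout VerticalOptions="CenterAndExpand">
        <Label Text="{Binding WelcomeText}"
               HorizontalOptions="Center" />
    </StackLayout>
</ContentPage>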

Notice that in the above markup I’m specifying the BindingContext as an instance of the HomeViewModel. As with many things in Xamarin.Forms, there are numerous ways of achieving this.

The label has a Text property which I’ve bound to the WelcomeText property (which will be on the ViewModel remember) and that’s about it.

If I run the app in Debug mode I will see the message as read from the appsettings.debug.json file. Run it in Release mode and it will be the message from the appsettings.release.json file.

In Summary

The solution presented above requires a small amount of setup and then it’s pretty much fire-and-forget. If you need a new setting then update your AppSettings.cs class, add the values to your .json files and you are good to go.

Need a new build configuration, say Staging? No problem. Just create the new .json file (remembering to set its build action as ‘Embedded Resource’), add a new ‘STAGING’ symbol to the project properties, update the LoadAppSettings method to check for it and you are done. Simple as that really.
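
The LoadAppSettings change is then just another branch in the conditional block – something like this, following the reconstructed code from earlier:

#if STAGING
            const string fileName = "AppSettingsPoc.Configuration.appsettings.staging.json";
#elif RELEASE
            const string fileName = "AppSettingsPoc.Configuration.appsettings.release.json";
#else
            const string fileName = "AppSettingsPoc.Configuration.appsettings.debug.json";
#endif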

Now, some may say that Conditional Compilation is bad and that using Reflection is the work of the Devil. But frankly – I’m quite pragmatic about these things and if it works and doesn’t have a dreadful code smell then I’m not going to lose too much sleep over that.

Roll my own DDNS? Why not?!

Update: Well, there’s certainly more to Dynamic DNS than meets the eye – who knew.

Investigations led to the decision that I should put my hand in my pocket and spend my time better elsewhere.

Not that I opted for the overpriced offering from Oracle – I signed up with NoIp instead.

Like many devs I have been known to host websites and services behind my home broadband router and therefore needed a Dynamic DNS resolver service of some description. But in my recent moves to limit my reliance on third-party services – including those provided by Google – I wanted to see what would be involved in creating my own service.

Why would I want to roll my own?

Over the last few years I’ve moved my hosted websites outside of my home network and onto services offered by Digital Ocean so I was only really using my DDNS provider for a single resource – my Synology NAS.

Now, in the past I’ve used DynDNS (an Oracle product) and while I’ve had no issues with the service it’s not what you could call cheap – currently starting at $55 a year. When a previous renewal came through, and after reviewing what I was using it for, I decided to let it expire and do without the access to the NAS from outside my network.

Recently though I’ve been using OwnCloud (a Dropbox-like system) hosted on Digital Ocean to replace some of the functions I used to use the Synology for. I don’t really want to use Dropbox, Google Drive or similar offerings as I want to keep my data under my control. With the desktop application running I was able to access and edit files from multiple systems while only actually having a single source of those files, i.e. OwnCloud.

The only downside I’ve encountered was with the mobile app – which I wanted to use to back up the photos taken on my phone in the same way that the Google Photos app does (because I want to store my own data, remember!). Well, the app just refuses to do this without manual intervention (and even then it’s buggy), which kind of defeats the object.

Then, while listening to the Security Now podcast I heard Steve Gibson talking about his quest to find a suitable file syncing setup. He discussed the pros and cons of different systems and finally opted for Dropbox – which I don’t want to use. Then – out of the blue – the podcast host, Leo Laporte, mentioned ‘Synology Drive’ and his initial description seemed to tick all my boxes … plus I already have a Synology NAS.

I’ve had a look at the capabilities of the system and it seems to do what I want it to do but of course I now have the problem of accessing this from outside my network – I need a Dynamic DNS resolver service of some description. Sure, I could just put my hand in my pocket and pay for one but where is the fun in that?

OK, what’s the plan?

Right, what does a Dynamic DNS resolver do (or what do I think it does anyway)?

Well, in my experience when I was configuring the DNS for my domain I simply pointed the resource, e.g. a website, at the DynDNS service and it would redirect requests to my home broadband IP address. Simple huh?

But how does the DynDNS service know my home IP address and what happens when it changes?

Well, my router has the capability to integrate with these services and notify them when the IP address changes.

So I need to deploy something which will redirect requests for my Synology based services to my home IP address and a mechanism to keep the IP address up to date. How hard can it be?

I plan to create and deploy a simple ASP.NET Core MVC application with a GET and a POST endpoint. The GET will accept incoming requests and perform the redirection while the POST will be used to keep the IP address updated.
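
As a rough sketch of the idea (the controller, route and member names are all mine, and there’s no authentication on the update endpoint yet – that would need addressing before going live):

using Microsoft.AspNetCore.Mvc;

namespace HomeDdns.Controllers
{
    [Route("")]
    public class RedirectController : Controller
    {
        // In-memory for the proof of concept; a real version would persist this.
        private static string _currentIp = "0.0.0.0";

        // GET: redirect any incoming request to the current home IP address.
        [HttpGet]
        public IActionResult Get()
        {
            return Redirect($"http://{_currentIp}/");
        }

        // POST: called on a schedule from inside the home network. The caller's
        // source address becomes the new home IP, so the client never needs to
        // work out its own external address.
        [HttpPost("update")]
        public IActionResult Update()
        {
            _currentIp = HttpContext.Connection.RemoteIpAddress?.ToString() ?? _currentIp;
            return Ok(_currentIp);
        }
    }
}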

Proof of Concept

So, step one is a proof of concept – a super simple MVC application, deployed to my .NET Core enabled Digital Ocean Droplet, which will perform basic redirection to a hard-coded IP address.

I’m not expecting too many problems with this but the one thing that springs to mind is how will it handle HTTPS requests?

If all goes well with step one, the next step will be to set the IP address and keep it up to date. For this I plan to use the Synology’s ability to run Python scripts on a schedule (say every 30 minutes) which will call the POST endpoint. The plan is for the endpoint to resolve the source IP address from the request, rather than the Synology having to resolve it first.

Clearly this will not be as polished a service as the one offered by Oracle but it will use existing resources, so there is no additional cost to be incurred.

Watch this space 🙂

Sorry Facebook – that’s enough

Sometime ago I deleted my personal Facebook profile and found that I really didn’t miss it – at all. Quite liberating in fact – you should try it. My departure pre-dated the Cambridge Analytica scandal and was more to do with the garbage that ended up in my feed than anything else.

That said, recent reports of plain text passwords and other dubious operating tactics of Facebook would have seen me making the same decision to get off the platform.

But – there was a problem! I had a Facebook business page and it needed to be associated with an active user account.

I duly created a new profile and locked down the permissions/privacy settings as hard as I could and then associated the page with that account – leaving me free to delete my personal one.

The business page was little more than a presence and while I posted links to it via Buffer I didn’t really use it to engage with any potential clients. Engagement with the links I did post was pretty low (double figures or lower) and it was really an aide-mémoire for me if nothing else – a reading list if you like.

Recently the news has been peppered with stories of Facebook and their handling/selling of personal information as well as some shocking security issues including the storage of plain text passwords – as a developer I don’t know why this would ever be seen as a good idea.

So the question I asked myself was:

“Do I want to be associated with a company that operates in this manner?”

It didn’t take long to come to the conclusion that I didn’t.

It’s not that I’m a high-flying operation or the next up-and-coming big deal – I’m absolutely certain that Facebook won’t notice my departure on any level whatsoever.

It’s not that any of my clients (past, existing or future) would make any judgement on me for being on Facebook (it’s not that I’m advocating animal research after all).

It’s just that I don’t want to be reliant on an unreliable platform run by people I frankly don’t trust. Just because I don’t pay for the service doesn’t mean that they can expect to do as they please – not to me anyway.

So that’s that – the social link in the sidebar will link to this post and all my Buffered posts also went to Twitter so I’ve lost nothing at all. I will be looking at creating a ‘Link Directory’ page here but haven’t decided on a plugin yet.

Updating an end of life application

A while ago I posted that the FillLPG for Android application was, in a word, Dead! But in a few days’ time users will notice a new version, 2.0.28.5, hitting the Play Store as well as in-app notifications to update – so what gives?

Have I changed my mind about withdrawing support for this app – no, I haven’t. Essentially my hand has been forced by Google’s recent decision to deprecate some of the functionality I was using to fetch nearby places as part of the ‘Add New Station’ wizard as well as requiring 64 bit support – the latter being little more than a checkbox and nothing that most users will ever notice.

Removal of the Places Picker

Prior to the update, when adding a new station a user could specify the location in one of two ways:

  • Select from a list of locations provided by the Google Places API and flagged as ‘Gas Stations’
  • Open a ‘Place Picker’ in the form of a map and drop a pin on the desired location

It is the second option which is now going away – and there is nothing I can do about it. Google are pulling it and that’s that.

The Place Picker was added as I noticed that a nearby ‘Flogas’ centre, where you could buy cylinders of LPG and also refill your vehicle’s tank, was not on the list returned by Google Places. Using the Picker it was possible to zoom and pan in order to locate the desired location. It was also useful if you weren’t actually present at the station you wanted to add, i.e. it wasn’t nearby!

So where does that leave users (apart from heading to the Play Store to complain and leave a 1 star review!) – well, if the desired location is not displayed in the list then they will need to head over to the website and add the station there. Not as convenient as using the app maybe but not rocket science either.

But why bother if the app is ‘Dead’?

Well, after viewing some in-app analytics I found that on average around 25 new stations were added, via the app, each month so there is a chance that users would bump up against this problem when trying to open the Place Picker – chances are that the app could simply crash out.

If the ‘Add New Station’ feature wasn’t being used I’d probably just have removed the menu option and have done with it.

But enough users were using it so I set aside a couple of hours to investigate and remove the Place Picker and its associated code while leaving the ‘Nearby List’ in place – even if it will only contain locations flagged by Google Places as being ‘Gas Stations’.

In Summary

In this instance the required update was not too onerous – just a couple of hours of work really. However, this may not always be the case.

Android is always evolving and Google could well make changes to other areas of functionality that could adversely affect the application and require a lot more development effort, e.g. forcing a migration to the new version of the Maps API.

In that instance it would probably be the end of the road for the app and it would either have to hobble along or be pulled from the Play Store altogether.

For now though, it’s business as usual 🙂

Getting to grips with OzCode

When I was at NDC London in January I watched a demonstration of the OzCode extension for Visual Studio. Not only was it well presented but it highlighted some of the pinch points we all have to tolerate while debugging.

In return for a scan of my conference pass, i.e. my contact details, I received a whopping 35% discount off a licence, and I was so impressed that I pulled out my wallet (actually the company wallet!) without even completing the 30 day trial.

While I don’t use all of the features every day there are a few that I use all the time – the first one is called ‘Reveal’.

Consider the following situation:

But I already knew this was a list of view models!

At this breakpoint I’m looking at a collection of View Models – but I knew that already, so what value am I getting from this window? There are over 600 records here – do I have to expand each one to find what I’m looking for? What if one has a null value that is causing my problem – how will I find it?

Well, I could obviously write a new line after the breakpoint which would give me all of the items with a null value for, say, the Name property. But to do that I need to stop the debugging session, write the new line, restart the application and perform whatever steps I need to get back to the above scenario.
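
Something along these lines – assuming, for illustration, a viewModels collection and a Name property:

// Temporary debugging aid (requires System.Linq; names are illustrative).
var missingNames = viewModels.Where(vm => vm.Name == null).ToList();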

Ok, that’s not going to kill me but it’s wasted time and I have to remember to remove the debugging line once I’m done with it.

Using the OzCode Reveal feature I can specify which properties I want to be displayed instead of the actual, fully qualified, class name.

By expanding any of the records to view the actual property data it is possible to select which ones I want to see – which are important to me at this time – by clicking the star.

Select the properties you want to see

Now when I expand the list I see this instead:

Much more useful

These selections are persisted across sessions and can be changed whenever – maybe I want the email addresses, cities or countries next time – not a problem.

But what about nested properties? Well, that’s not a problem either – just drill in and star the properties you want to see and they will be displayed at the appropriate level in the tree as below:

Here the user’s first and last names are selected, as well as the start date of their subscription

There’s a lot more to OzCode than this and as it becomes more embedded in the way I work I’ll post more about how it has helped me.

Hey Dude – where’s my online tool?

Over the past 18 months I’ve been posting details of various online tools that I’ve encountered – but not this month I’m afraid.

Why? Well, frankly I’ve run out! I’ve got nothing!

I’ve been hard at work but haven’t actually found the need for an online tool that I haven’t used before.

Now that’s not to say that there aren’t any tools out there – it’s just that I didn’t need them, not this month anyway.

Next month may be different but I think it’s probably best to drop the ‘of the month’ banner and just categorise them as ‘Online Tool’.

If you know of a tool that I’ve not featured and think I, as a .NET / Xamarin developer, may find useful then feel free to use the contact page to point me in its direction.

NDC London – A Little Review

So, the last developers’ conference (if you could call it that) I went to was way back in 2001, when we had WAP phones and if you had a Pentium 4 computer you were some super-techno hacker.

Well, times have changed somewhat; we now have smartphones with more memory than we had hard drive space back then and I’m writing this post on a workstation with 8 cores and 32GB RAM (and this isn’t even the cutting edge). Add to that the cloud, driverless cars and social networks with billions of users (for better or worse) and we have come a hell of a long way.

Well, I decided to bite the bullet and shell out for a super early-bird ticket and get myself up to London for a few days to Geek-Out.

I had a clear idea in my head about what I wanted to achieve from the three days and planned to attend sessions not only about the new whizzy stuff like Blazor and ASP.NET Core 2.2 but also about some of the more mature technologies and concepts – if you read my tweets from the event I think you’ll see the scope of the tech covered. It was a little light on mobile development but if there had been any more sessions covering that I would have had a hard time selecting which ones to go to.

Some of the sessions were presented by the likes of Scott Hanselman, Troy Hunt and Jon Skeet, through to those I’d never heard of but who presented some engaging (and enlightening) content. I don’t regret a single session choice and came out of each of them with something to follow up on.

The exhibitors were varied and interesting, with the likes of DevExpress, whose tools I’ve used for over a decade (and who know the proper strength for a real coffee), and JetBrains, along with OzCode (who proved that debugging doesn’t have to suck, to the degree that I bought a licence without completing the 30 day trial) and Twilio.

Although the weather wasn’t great and my flight home was nearly cancelled because of snow, I enjoyed my three days rubbing shoulders with celebrity and grass-roots developers alike.

I have to say that almost 20 years between conferences is far too long – especially in this industry – and I’ll certainly be considering the next NDC (probably in London – but with Porto and Barcelona events, who knows).

The videos of the sessions are now on YouTube and I will be re-watching a few of these to refresh my memory. I was taking notes but I was also being careful not to get too absorbed in them so that I missed the actual content being delivered!

Online Tool of the Month – converter.telerik.com

Telerik shouldn’t be a new name to you but if it is then I suggest you head on over to their main site, https://www.telerik.com, and have a look.

This post relates to their Code Converter utility, which I found a need for recently while maintaining a client’s VB.NET application.

Although I used to develop mainly in VB.NET I am now predominantly working in C# and as anyone who has used both languages will tell you, flipping between one and the other is not exactly painless.

On this particular occasion I had an idea how I could fix the issue with the client’s application code – but my idea was in C# – enter the Telerik Code Converter.

The UI is simple – two text panes, one on the left for the code you want to convert and the other for the conversion output, a button to switch between converting from C# or VB.NET and a button to perform the conversion.

Simply set the appropriate ‘from’ language, enter the code to convert and then click the big red ‘Convert Code’ button. The converted code is displayed in the right-hand pane.

The tool is quite forgiving and won’t complain about missing brackets or semi-colons but remember – ‘Garbage In – Garbage Out’.

The Scourge of Email Click-Bait

We all get some SPAM in our Inboxes – despite the best efforts of our email hosts, be they Google or otherwise. But another type of message is starting to gain traction and I receive a number of these a week now – normally from recruiters it has to be said – and they are akin to the click-bait links you see all over the web (you know, the ones that normally end with ‘you’ll never guess what happens next’).

So, what am I talking about? Well, from this morning’s Inbox we have ‘Exhibit 1’:

I’ve blurred the sender (although as I type I don’t really know why) but the subject line starts ‘Re:’ which would indicate that this is a reply to an email that I’ve sent – standard email client functionality. But I’ve never emailed (or even heard of) the sender or their company.

It’s just a ruse to get me to click on the message and read what they have to say – because the premise is that we have done business in the past.

Now, I may be getting old but I know when I don’t know someone, and I’ve never heard of this guy. Add to that the fact that I can see the initial content of the email, and that I have never, ever hired C# developers, and it was pretty clear what this was – basically just SPAM sent by a low-end recruiter (not tarring all with the same brush here, I deal with many good ones) in an effort to appear to be known to me and to have an understanding of my requirements – neither of which is true.

It’s really no better than the email below it – which did slip through the SPAM filter.

The thing is this is not limited to low-end recruiters, I’m seeing this all the time now. Is this how people and companies think they can get an edge these days?

OK, maybe it’s not really a scourge but it’s certainly a bit on the sly and underhanded side of the wire.

Online Tool of the Month – base64-image.de

I recently had a need to create HTML email templates for a project which needed to include a couple of images, e.g. the company logo and a feature image. I have of course done this in the past and just linked to the required assets on the hosting website – but this time it was different.

This time the website was potentially not accessible from outside of the local intranet and the requirement was to embed the images directly into the message.

One way to do this is to encode the images to a base64 string and use the resulting text, with a ‘data:image/jpeg;base64,’ prefix, in the src attribute of the image tag – something like this:

<img src="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEAYABgAAD/4QBoRXh....." />

Now all I needed to do was encode the images.

I could of course have written a quick app to get the image bytes and encode them as a base64 string – but why reinvent the wheel.
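
For the record, had I written it, it would be little more than a couple of lines (the file path here is illustrative):

// Read the image bytes and Base64-encode them (requires System and System.IO).
var base64 = Convert.ToBase64String(File.ReadAllBytes(@"C:\images\logo.jpg"));
var imgSrc = $"data:image/jpeg;base64,{base64}";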

A quick Google search took me to https://www.base64-image.de/ where you can simply drag and drop your images and it will generate the base64 string for you – even providing options to copy preformatted text to use in image tags and CSS background urls.

When it comes to using these encoded strings there is something that you should be aware of and that is file size.

I ran into problems with some email clients not rendering the message properly because, with the additional embedded images, it was too large. I had to resize the images a bit and then ensure that I enabled ‘Image Optimisation’ during the encoding process to get them as small as possible without losing too much quality.