NDC London – A Little Review

So, the last developers conference (if you could call it that) I went to was way back in 2001 when we had WAP phones and if you had a Pentium 4 computer you were some super-techno hacker.

Well, times have changed somewhat; we now have smartphones with more memory than we had hard drive space back then and I’m writing this post on a workstation with 8 cores and 32GB RAM (and this isn’t even the cutting edge). Add to that the cloud, driverless cars and social networks with billions of users (for better or worse) and we have come a hell of a long way.

Well, I decided to bite the bullet and shell out for a super early-bird ticket and get myself up to London for a few days to Geek-Out.

I had a clear idea in my head about what I wanted to achieve from the three days and planned to attend sessions covering not only the new whizzy stuff like Blazor and ASP.NET Core 2.2 but also some of the more mature technologies and concepts – if you read my tweets from the event I think you’ll see the scope of the tech I covered. I think it was a little light on mobile development, but if there had been any more sessions covering that I think I would have had a hard time selecting which ones to go to.

The sessions were presented by everyone from the likes of Scott Hanselman, Troy Hunt and Jon Skeet through to speakers I’d never heard of, but who presented some engaging (and enlightening) content. I don’t regret a single session choice and came out of each of them with something to follow up on.

The exhibitors were varied and interesting, with the likes of DevExpress, whose tools I’ve used for over a decade (and who know the proper strength for a real coffee), and JetBrains, along with OzCode (who proved that debugging doesn’t have to suck – to the degree that I bought a licence without even trying the 30-day trial) and Twilio.

Although the weather wasn’t great and my flight home was nearly cancelled because of snow, I enjoyed my three days rubbing shoulders with celebrity and grass-roots developers alike.

I have to say that almost 20 years between conferences is far too long – especially in this industry – and I’ll certainly be considering the next NDC (probably in London, but with Porto and Barcelona events, who knows).

The videos of the sessions are now on YouTube and I will be re-watching a few of them to refresh my memory. I was taking notes, but I was also careful not to get so absorbed in them that I missed the actual content being delivered!

The Scourge of Email Click-Bait

We all get some SPAM in our Inboxes – despite the best efforts of our email hosts, be they Google or otherwise. But another type of message is starting to gain traction and I receive a number of these a week now – normally from recruiters, it has to be said – and they are akin to the Click-Bait links you see all over the web (you know, the ones that normally end with ‘you’ll never guess what happens next’).

So, what am I talking about? Well, from this morning’s Inbox we have ‘Exhibit 1’;

I’ve blurred the sender (although as I type I don’t really know why) but the subject line starts ‘Re:’ which would indicate that this is a reply to an email that I’ve sent – standard email client functionality. But I’ve never emailed (or even heard of) the sender or their company.

It’s just a ruse to get me to click on the message and read what they have to say – because the premise is that we have done business in the past.

Now, I may be getting old but I know when I don’t know someone – and I’ve never heard of this guy. Add to that the fact that I can see the initial content of the email and that I have never, ever hired C# developers and it was pretty clear what this was – basically just SPAM sent by a low-end recruiter (not tarring all with the same brush here, I deal with many good ones) in an effort to appear to be known to me and to have an understanding of my requirements – neither of which is true.

It’s really no better than the email below it – which did slip through the SPAM filter.

The thing is, this is not limited to low-end recruiters – I’m seeing it all the time now. Is this how people and companies think they can get an edge these days?

OK, maybe it’s not really a scourge, but it’s certainly a bit on the sly and underhanded side of the wire.

On The Fence Development – What’s All That About Then?

I’ve been contracting for over seven years now and during that time I’ve had a number of clients, friends and fellow contractors ask me “…why ‘On The Fence‘? What’s that all about??”.

Ignoring the fact that the blog I initially hosted on this domain was about my experiences with Linux and Open Source while working day to day as a .NET Developer using Windows, I think that the name fits – it’s all about not putting all your eggs in one basket as it were.

I think that there is quite a lot of ground between being a ‘Jack of All Trades’ and a ‘One Trick Pony’, and as a Contractor I think that somewhere in between is a good place to be.

Some will disagree; John Sonmez of Simple Programmer certainly advocates specialising, and he retired in his 30s so maybe I’m the one who’s wrong here. But while I can see the merits of this approach and it worked out well for John, I don’t think that being ‘the expert in the Xamarin.Forms Grid control‘ is going to get me that far (that’s not to say that the Grid control is a trivial thing of course).

I also don’t think that trying to be a Guru in Desktop, Web and Mobile development is viable either. We all know that this would be virtually impossible to achieve with the technology shifting under us all the time.

I do however think that to be a viable option as a contractor you need a good foundation of knowledge, a spread of skills across various technologies and a desire to learn as you go (let’s face it – nobody can know it all).

So that’s what I aim for.

I have experience with Desktop development using WinForms and WPF, and am starting to look at UWP.

On the Web development front I am currently working with a client who has an ASP.NET WebForms application and another using MVC, while a freelance project has me ramping up on ASP.NET Core MVC development on a Linux host.

With Mobile development I am all Xamarin, whether it’s the native flavours for Android and iOS or Xamarin.Forms. Hooking these up to an Azure (or ASP.NET Core WebAPI) backend is also within my skill set.

I’m always looking to keep my skills up to date, while able to support existing deployments using older technologies – and even migrate them forwards should that be the desire.

Taking control of my Domain

Some time ago I was watching a Pluralsight course called ‘Master Your Domain’ where Rob Conery explained how to break your reliance on the major service providers for email, source code, blogs and file-sharing and create your own domain to host your data.

Following the course I started hosting my own Git server, Blog and File Sharing service but Email … well, that was too big a step for me to take at that time. However, times change and when I started experiencing issues with my email that was the trigger for me to take the plunge.

What was the problem with Google Mail?

When GMail moved to Inbox I have to say I was less than impressed. For my personal email it was fine – I didn’t mind moving to ‘Inbox Zero’ – but, call me a dinosaur, it just jarred with me when it came to my business account.

Now, I really didn’t want the hassle of moving email providers so as many of the ‘problems’ were to do with the Inbox application I decided to use an alternative email client called WeMail on my Android phone and this served me very well for a couple of years.

Recently however, I started noticing multiple ‘draft’ messages hanging around (basically ‘snapshots’ of messages as I was typing which had then been sent) and issues with message signatures – sometimes they were added, sometimes not. Was this a problem with WeMail or with Google? Who knows.

What about Google Docs?

I was also not overly impressed to see that Google had blocked ‘large numbers’ of users from accessing their files on Google Docs. Admittedly Google were trying to filter out malicious content, but the fact remains that they were scanning everybody’s files for what they deemed to be inappropriate. What if content in my files triggered a false positive and they blocked my access to an important document? What about my email? I have all sorts of information in there from company accounts and confidential client discussions to inane conversations with recruiters and colleagues.

Making the move

After deciding to make the move I had a look at what I actually stored on Google services and what needed to be migrated.

Obviously there were a couple of gigabytes of email but I also had a lot of stuff in Google Docs – from invoices to memes – where was I going to put all this stuff?

Files and Stuff

As already mentioned above, following Rob Conery’s course I had configured my own Dropbox-like File Sharing service using OwnCloud and this had been running fine for a while now. I had the server installed on a Raspberry Pi sitting on top of a 250GB SSD in an external drive enclosure. With the appropriate DNS configurations and port forwarding on my router this configuration worked well for me, allowing me to share folders with clients for them to upload large media files and letting me transfer files between my Windows workstation & laptops as well as my iMac and MacBook Pro.
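For anyone thinking of doing something similar, the moving parts boil down to a couple of entries like these – the hostname and addresses here are purely illustrative, not my actual setup:

cloud.example.com.    A    203.0.113.10     ; DNS record pointing at the router’s public IP
; router: forward external TCP 443 to 192.168.1.50:443 (the Pi running OwnCloud)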

Email

As Rob mentions in his course, it’s really not viable to host your own email these days. Messages coming from unknown sources are blacklisted by many services in an effort to reduce the level of spam reaching our inboxes. For this I needed to find an alternative provider; one that provided me with all the features I already had (spam protection, mobile access etc.) but with some increased confidence in the privacy of my inbox.

In the course Rob recommends Fastmail and after reading their Privacy Policy I was happy to give them a try – they offer a 30-day free trial. I had actually tried them previously, but not ‘in anger’ as it were: I created an account, sent test messages, added appointments etc. but never actually used the service on a daily basis.

After exporting my Calendar and Contacts from GMail I set about the import process from within Fastmail. The process itself was pretty straightforward with clear instructions and troubleshooting advice. I experienced no real problems but I’m sure that Fastmail support would have been on the case if I had.

The only ‘grumble’ I had at the time was that my Gmail data was imported into a folder called ‘migrated’ – I was expecting my Gmail Inbox messages to appear in my new Inbox. This caused a bit of consternation at the time but looking at it now I’m not so sure it’s a problem – all the data is there and I can easily move things around if I so desire.

Re-configuring my DNS to redirect email to the Fastmail servers was also straightforward and I’m happy to say that a couple of weeks into my trial I’m very happy with the service I’m receiving so will definitely be signing up to the full plan.
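For reference, ‘redirecting email’ essentially means swapping the domain’s MX records over to Fastmail’s servers – something along these lines, although you should check their documentation for the current hostnames and priorities:

example.com.    MX    10    in1-smtp.messagingengine.com.
example.com.    MX    20    in2-smtp.messagingengine.com.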

So what about Backup?

So I now have my email hosted successfully and my files are back under my control, so we’re all good, yes?

Well not quite.

One of the things we don’t really think about is that, on top of storing all our information and making it available to us online, Google are actually backing this stuff up. If one server were to totally fail then the data is ‘simply’ pulled from another and we never know there was a problem.

Well, the data is now sitting on a drive in my office – what happens if it fails, or the office burns down? How will I get that data back? I need a regular, offsite backup.

The answer was fairly simple and conforms with my need to keep my information private.

I had previously bought a Mac Mini for developing my Xamarin iOS applications, later replaced by an iMac, so I fired it up and installed the OwnCloud client onto it. This was set to sync everything to its local drive – and yes, it’s still sitting in my office so at this point I’ve gained nothing.

I then signed up for a SpiderOak account – initially 250GB but they later increased this to 400GB – using their 21-day trial. Their ‘SpiderOak One‘ client was then installed onto the Mac Mini and configured to back up everything in the OwnCloud sync folder.

I’ve also installed the One client on my workstation and, for good measure, mounted a couple of folders from my Synology NAS onto the Mac Mini. So far I have backed up almost 100GB of data, so there is plenty of headroom for future expansion.

Going Forward

OK, some of you may be asking about the cost of all this and yes, there is some additional outlay – my Google Apps account was created when they were free and, to their credit, Google have honoured this long after charging for new accounts. But the cost to the business is minimal – and even as a personal user it’s certainly not prohibitive.

The backup solution I have in place does have its downsides – we had a power cut here a while back and I totally forgot to reboot the Mac Mini, so there were no backups for a while.

But the fact is that I now have control over my data and if this takes a little more work and expense then such is life.

WhatsApp – a Haven for Paedophiles and Terrorists?

Yep – thought that would get your attention!

It’s headlines like this that the UK Government (and the press) are throwing around in order to drum up support for one of the most intrusive and privacy damaging campaigns to date.

The premise is that bad people use these services, which make heavy use of encryption to keep messages private, and by doing so hamper the security services who can no longer access private information in order to monitor them and stop them from doing bad things.

Now I’m not denying that these bad people do use WhatsApp (and similar applications) to enable them to communicate without their messages being intercepted. But I use WhatsApp and so do my wife and kids and we are not bad people. If WhatsApp are expected to put a backdoor into their systems to allow access to the content by so-called ‘authorised agencies’ then what about our privacy?

When I discuss this with people many will say “well, if you’re not doing anything wrong then what’s the problem?”. However, when I ask them for their email and social media passwords they are somewhat reluctant to hand them over – “but if you are not doing anything wrong then why do you care?”, I ask.

The answer is simple: their email and social media feeds are private and none of my business. Just because something is private does not mean it’s illegal or even wrong – just private.

We may be discussing our medical history, financial details, travel plans or just what time we will be home for tea but that’s our business, it’s private and nobody else’s business except ours and whoever we’re talking to.

So while I am willing to accept that bad people use these platforms in an effort to hide their activities, I’m pretty sure that they make up a tiny percentage of the 1,000,000,000 (and increasing) WhatsApp users. Do we all have to give up our right to privacy for the sake of these people and will it even make a difference?

The Snoopers’ Charter

In 2016 the Investigatory Powers Act, or Snoopers’ Charter as it was dubbed, was passed into law and with it the privacy of every UK citizen was eroded a little more.

Did you know that under this legislation your Internet Service Provider now has to keep your browsing history for 12 months and provide it on demand to authorised agencies?

If you did then you may have assumed that as long as you are not “doing anything wrong” then you have nothing to worry about as the Police and Security Services are only looking for bad guys.

Well, did you also know that on the list of agencies that can access these records are:

  • HMRC (the tax man)
  • The Department of Work and Pensions
  • The Department of Transport!
  • The Welsh Ambulance Services and National Health Service Trust!!
  • The Food Standards Agency!!!

Now, what on earth do the Food Standards Agency need with my internet browsing history? What possible use could it be to them?

If the UK Government were to enforce a backdoor into WhatsApp and other platforms like it – who would be able to access the information and how secure would it be?

But that’s not all. If the Government weakens encryption and demands backdoors be created in otherwise secure systems, who knows who can gain access to the information that was once protected?

If SSL certificates (which put the padlock in your browser’s address bar to indicate that the page is secure) become less secure, how safe are you when you are accessing your online banking or shopping on Amazon?

The truth of the matter is that if the UK Government gets its way it’s not really them that we have to worry about – it’s the hackers. They will have a field day with all this insecure data flying over the wire. All it would take is a poorly implemented backdoor and then all bets are off. If Government agencies cannot even secure their own data, what chance do they have of securing the keys to ours?

A Developer’s Viewpoint

So, apart from being a UK citizen, what has this got to do with me and why am I ranting about it?

Well, as a developer I know that writing a chat application is not really that hard – in fact I recently read a book which guided the reader through cross-platform Xamarin development and the target project was a cross-platform chat application. Moreover, the source code is actually on GitHub so there’s a starting point right there.

Currently that XamChat application stores and sends data in plain text, so it is neither secure nor private. But how difficult would it be to upgrade the app to use encryption? Even though I am not a cryptographer by any stretch of the imagination I’m guessing not that hard at all.

And that’s the point – if I can do this then any reasonably competent developer could do it too. If the UK Government were to make it unattractive for the bad guys to use secure apps like WhatsApp then there is nothing stopping them from writing their own end-to-end encrypted messaging system using state-of-the-art encryption that cannot be broken with today’s technology.

Meanwhile the rest of us will be using insecure systems that leak information and make us vulnerable to malicious hackers keen to exploit these weaknesses, gather personal information and use it to their own ends.

Going Forward

In an effort to prove my point, I’m going to take a run at this. Ultimately I’m going to see just how hard bolting encryption into the XamChat application really is.

I’m not expecting (or intending) to create a WhatsApp killer or even anything that polished – just something usable to prove the point.

First thing to do is to get up to speed on encryption, especially in .NET. There’s a 4-hour course on Pluralsight, so I can kill two birds with one stone: honour my commitment to watch one Pluralsight course a month and create a command line application to generate encryption keys and encrypt & decrypt text data, in preparation for creating SecureXamChat.
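To give a flavour of what that utility might look like, here’s a minimal sketch of an AES encrypt/decrypt round-trip using nothing but the System.Security.Cryptography classes built into .NET. To be clear, this is illustrative only (the class name and message are made up) – there’s no key exchange, key storage or ciphertext authentication here, and SecureXamChat will need all of those:

using System;
using System.IO;
using System.Security.Cryptography;

class CryptoSketch
{
    static void Main()
    {
        // Aes.Create() generates a random key and IV for us
        using (var aes = Aes.Create())
        {
            byte[] cipherText;

            // Encrypt a message into a byte array
            using (var ms = new MemoryStream())
            {
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                using (var writer = new StreamWriter(cs))
                {
                    writer.Write("What time will you be home for tea?");
                }
                cipherText = ms.ToArray();
            }

            // Decrypt it again with the same key and IV
            using (var ms = new MemoryStream(cipherText))
            using (var cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Read))
            using (var reader = new StreamReader(cs))
            {
                Console.WriteLine(reader.ReadToEnd()); // round-trips the original message
            }
        }
    }
}

If a handful of lines like that covers the mechanics, the hard part is clearly the key management – which is where the Pluralsight course comes in.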

Edit – 15th Feb 2018: Subsequent to me posting this there was a great article in The Guardian which (obviously) makes a much better job of getting the point across and is well worth a read.

So, what will 2018 be the year of?

They say that life is what you make it, so time to make some resolutions … yes?

Well, if John Sonmez from Simple Programmer is to be believed – maybe not!

I receive regular email updates from the Simple Programmer website and the one I received on 27th December caused me to stop and think.

Probably based on one of John’s blog posts from 2016, the subject of the email was ‘Don’t make resolutions this New Year, make a commitment’. Now, I initially thought that these amounted to the same thing but changed my mind after reading the parting shot of the email, which read:

Let me put it this way, when you need to take a taxi to the airport, do you want your taxi driver to resolve to be there at 8:30 AM or do you want him to commit to being there at that time?

The answer is obvious (hopefully) so I’ve decided to make some commitments for 2018:

  • I will watch at least one Pluralsight course a month
    • My technology focus will be .NET Core, Azure, ReactJs
  • I will watch at least one Xamarin University session (attending those required to maintain my certification)
  • I will blog twice a month (not including the Online Tool of the Month posts)
    • To keep me honest I will probably post findings from my Pluralsight courses and Xamarin investigations (proving that I’ve actually honoured the above commitments)
    • Other topics will include Privacy and Encryption which seem to be bad words these days

So that’s what I will commit to this year – maybe I’ll be in a position to commit to more but I’ll review my progress mid-2018 and see how I’m doing.

Ditching AntiVirus

Just like us, as computers get old they tend to slow down. It’s a fact of life pure and simple.

With computers it tends to be due to the hardware not keeping up with the requirements of today’s applications (just try running a recent version of Windows or Office on a Pentium 4 and you’ll see what I mean). We tend to put up with the slowdown until something finally gives out, a hard drive or motherboard for instance, and then we buy a new one.

Well my Windows 10 development workstation was slowing down and while it’s a few years old now, it is still a pretty high spec – i7-3770 with 32GB RAM and SSDs – this thing used to fly.

But recently it was noticeable that it was taking longer to boot, applications like Visual Studio and SQL Management Studio seemed to struggle to load and surfing the web was a bit of a grind.

I decided to reinstall from the ground up and make sure that I didn’t install anything that I didn’t really need for development (like Steam!). I also decided that I was not going to reinstall my AntiVirus!!!

“Oh My God!” – I hear you shout – “Are you insane? Don’t you know how many viruses there are out there and how quickly your system could be compromised?”

Well, no I’m not insane (or at least I don’t think so) and yes, I do know that there are a lot of viruses out there, but I’m not doing this without due thought and advice. I also (probably) wouldn’t have considered junking it unless it had crossed the line in a number of areas.

Why do I think it’s a good idea to run without Anti-Virus?

I listen to a number of podcasts and one of them is Security Now from the TWiT network. When someone with the knowledge, experience and understanding that Steve Gibson has says that he doesn’t use a third party Antivirus then there must be something in it.

What does Steve use? Well, as he is running Windows 7, Steve is using the built-in Security Essentials (it’s Defender in Windows 10). Yep – he’s using what comes in the box! And the reason for that is that third party Anti-Virus is incredibly invasive and has to inject code deep into the Operating System. This, perversely, increases the attack surface for malicious code. Bugs in products like Symantec/Norton have exposed users to a greater risk of infection while users believed themselves to be safe. I’m not even going to get started on Kaspersky!

In the 10 years or so that I’ve been using my current Anti-Virus application, Avast, I’ve only had about half a dozen warnings about suspect files – and there is no reason to believe that Defender would not have detected the same files, or that they were actually malicious in the first place (I get a number of false positive alerts when I’m compiling code in Visual Studio – and I don’t write viruses!). I tend not to surf around in the darker parts of the web and am pretty careful about what I install.

So, I’m not running without Anti-Virus – just without third party Anti-Virus.

What lines did Avast cross to push me down this road?

Well, there are a couple of reasons really:

Recently it has been getting in the way of my work.

Running a WebAPI application in IIS on the workstation and accessing it from the iPhone simulator on the iMac was never a problem. So when I started getting ‘Failed Connection’ errors I assumed it was a configuration issue or a coding error. After an hour or so of debugging I found that Avast was blocking requests to IIS – which it had never done before. Turning the firewall off confirmed the problem – I just had to remember to do it again when I next accessed the WebAPI from another system.

Other applications failed to start with the Avast firewall engaged (when they had played well together in the past) and efforts to resolve the problem by Repair/Reinstall all failed.

But the big thing that did it for me? The real big step over that line we call privacy was when I logged onto my internet banking and Avast displayed this:

Now call me a member of the tin-helmet brigade if you like but when I access my online banking over a secure connection I find it a bit disconcerting when something says “I can see what you are doing!”.

It was a reminder to me that, like most (all?) third-party AV products out there, Avast can intercept and analyse traffic being sent over a secure connection through my browser. To do so it has to install a trusted root certificate on my computer, which means it can act as a ‘man in the middle’ – intercepting my traffic, checking it and then passing it on.

And it’s the man in the middle part combined with the increased attack surface and buggy applications part that worries me and that’s why I’ll be sticking with Defender for now.

Why install racing harnesses in your car when the built-in seat belts will keep you just as safe in normal use?

Windows 10 (and 7) Built-In MD5 Checksum Calculator

I recently paved my main development workstation after it started misbehaving (slow start up, some applications not opening consistently etc) and am trying to be careful about what I install on it going forward.

Previously I had all manner of applications, games (including Steam) and utilities installed and the chances of finding what was causing the problems were pretty remote. There could of course be multiple culprits.

Today I needed to install MySQL Workbench so I headed off to download it and noticed the MD5 checksum beneath the link. Now, I don’t always check these and maybe this is why my workstation ended up in a bit of a mess. But with a view to keeping this system as clean as I can, I decided to make a point of checking these checksums going forward, whenever they are available.

The “problem” is which utility do you use to calculate the checksum of the downloaded file?

If you Google for ‘MD5 checker’ you will see a number of utilities and while I have no reason to doubt the integrity of any of these I stopped short of installing any of them.

Obviously each download was accompanied by its MD5 checksum so that I could verify the file, but after freely installing all manner of utilities in the past I was a little bit wary this time around.

Now, MD5 is not a new thing and you would think that Windows 10 would have some form of utility built in that would calculate the hash – and there is. Apparently it is also available in Windows 7 but I no longer have any systems running Win7 so I cannot verify that.

Open a command prompt and enter the following:

CertUtil -hashfile <path to file> MD5

Depending on the size of the file it may take a few seconds to run the calculation but if successful the MD5 hash will be displayed as below.
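For example, running it against a hypothetical download (the path and hash here are made up for illustration, and the exact output format varies a little between Windows versions):

C:\>CertUtil -hashfile C:\Downloads\mysql-workbench.msi MD5
MD5 hash of C:\Downloads\mysql-workbench.msi:
9d 37 7b 10 ce 77 8c 09 d0 94 1b 0a 84 0f 7b 4f
CertUtil: -hashfile command completed successfully.

The result can then simply be compared against the checksum published on the download page.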

It is also possible to generate checksums for other hash algorithms by replacing the MD5 parameter used above with any of the following (note that if you don’t specify a value then SHA1 is used by default):

  • MD2
  • MD4
  • MD5
  • SHA1
  • SHA256
  • SHA384
  • SHA512

So, if all you need is to determine the checksum of a downloaded file then there really isn’t any reason to install yet another utility to do so.

Stackify Prefix – first thoughts

Listening to one of my favourite podcasts (.NET Rocks) I heard a plug for the Stackify Prefix tool, which claims to help the developer fix problems before anyone else sees them – a bold claim. Well, as I am currently working on a greenfield development project I decided to give it a whirl – it’s free after all, so why not.

Now I was not expecting to find too much wrong with the application and thankfully I was right – but I was getting errors.

The highlighted call is to a WebAPI method from an AngularJS controller (a JavaScript file on the client) and as you can see from the right hand pane it does succeed. In fact the data is returned as I’d expect and the application works without any issue. So why is Prefix flagging this?

Well, looking at the stack trace a little more carefully I see that the exception is being raised by the XmlMediaTypeFormatter when it is creating its default serialiser. But the WebAPI is returning JSON, so why is it spinning up an XML serialiser?

Well, my WebAPI endpoint took this form:

https://gist.github.com/OnTheFenceDevelopment/c8bc6729ad09d3b60579f191ffafeec7.js
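In case the embedded gist doesn’t render, the action looked something like this – a simplified reconstruction with illustrative names, not the actual project code:

using System.Web.Http;

public class StatusController : ApiController
{
    [HttpGet]
    public IHttpActionResult GetStatus()
    {
        // ... gather the data requested by the AngularJS controller ...

        // Return OK with an anonymous object put together on the fly –
        // this is the line that upsets the XmlMediaTypeFormatter
        return Ok(new { Status = "Running", Count = 42 });
    }
}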

The problem is with the return statement, where I’m returning the OK status with the required content – which is an anonymous object that I’ve just put together on the fly. The WebAPI is configured to accept the ‘application/json’ header and to use an appropriate JSON Formatter – which it does.

The problem is that I still have the default XML Formatter in the list and for some reason the framework is trying to use it to serialize my anonymous object – and failing (silently).

So all I need to do is remove the Formatter during the WebAPI registration – within WebApiConfig.cs in the App_Start folder (see the sketch below).

https://gist.github.com/OnTheFenceDevelopment/5441c335465d87e2c9addb19d29eed2c.js
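Again, in case the gist doesn’t render, the change boils down to a single line in the standard WebApiConfig template (a sketch, not the full file):

using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // ... attribute routing, route maps and other configuration ...

        // Remove the default XML formatter so the framework no longer tries
        // (and silently fails) to serialise responses as XML
        config.Formatters.Remove(config.Formatters.XmlFormatter);
    }
}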

Now this was fairly trivial but a bug is a bug and as we know – exceptions are expensive. They take time to raise while they pull the required information together and work their way through to the calling code – which in this instance appeared to simply discard it. A small performance hit but if this scaled then it could have become a bigger problem in the future – and probably harder to find.

Prefix highlighted it straight away and the issue is now fixed. It never made it to production, in fact it never made it off my desk – it was Pre-Fixed!

Another year over, what have I learned?

It’s been an interesting year all things considered, I’ve had a couple of good contracts as well as some time ‘between contracts’ (see my last post for more details on that). The time when I was not actually engaged on a contract I’ve been able to pick up some freelance work and invest some time on personal projects and watching numerous Pluralsight courses to keep my skills current. All in all it was a pretty good year.

In January I declared 2014 ‘The Year of the Mobile’ and invested a lot of time (and money) into mobile development using the Xamarin platform. The FillLPG app is now in Beta release and I’m getting some good feedback from the small team of testers. I also have a mini-project which should hopefully see my first iPhone application in the App Store.

In 2015 I am hoping to continue with my Mobile development and am hoping to get FillLPG onto the iPhone/iPad. I do however realise that not everyone is looking for mobile developers so I need to keep my web development skills sharp as well.

I’ve also had an idea for an online service which will require the development of a website – so this should enable me to demonstrate MVC/WebAPI skills. At the same time the site may generate some income, from user subscriptions and maybe Google Ads (shudders).

Now, I’m not one for making New Year Resolutions so I’ll call this my 2015 Personal Development Plan.

  • Watch at least 1 Pluralsight course a month, 2 if time allows.
  • Contribute to at least 2 Open Source projects (probably via GitHub)
  • Release the Xamarin implementation of FillLPG for Android
  • Release first iOS application – basic application
  • Begin development of FillLPG for iOS
  • Design, develop and release subscription based website
  • Blog more!

So, once I’ve shaken off this cold – and the hangover from seeing in the New Year – it looks like I will be pretty busy.