Intellisense not working via COM

I recently needed to write a class in C# that would be accessed by a VB6 application – something quite new to me.

The existing VB6 application (which is in the process of being rewritten in C#) currently uses OLE Automation to generate MS Word documents based on templates. While this approach worked, it was pretty slow. Having made the decision that the new version of the application would be capable of generating Word 2007 documents, it was decided that the resulting class should be accessible to the VB6 application via COM Interop.

The process of exposing a C# class via COM is well documented and is largely a matter of setting a couple of project-level options. To test the concept (which, as already mentioned, was new to me) I created a simple class in C# with a public method to create a text file.

With the class compiled I opened up VB6 (it’s been a while!) and after adding the COM class as a reference was pleased to see that I could create an instance of it as normal:

Dim engine As OpenXMLDocumentEngine.Engine
Set engine = New OpenXMLDocumentEngine.Engine

The problem arose when I attempted to call one of the COM Class methods – there was no Intellisense (at least nothing showing my test method). However, entering the method name manually and running the code resulted in the test file being created so the handles were there.

I Googled the problem and many solutions involved creating and implementing a public interface for the C# class. However, this approach did not work for me. After a lot of hunting around I found a solution that did, and it was as simple as adding a reference to the InteropServices namespace and a class attribute setting the ClassInterfaceType to AutoDual.

using System.Runtime.InteropServices;

namespace OpenXMLDocumentEngine
{
    [ClassInterface(ClassInterfaceType.AutoDual)]
    public class Engine
    {
        ...
    }
}

A quick recompile and behold full method level Intellisense in VB6:


Upgrading to Windows 7 (Dual Boot with Ubuntu)

I’ve run Vista for about a year now, and I’m one of those people that hasn’t really had any problems with it – frankly I’m not sure what all the fuss was about. Yeah, I was happy with XP and only upgraded to Vista because my new employer was using it.

So why was I looking to upgrade to Windows 7 now..? Well, there were a number of reasons but the most pressing was that Vista was starting to get on my nerves. For some reason it had started to run painfully slowly. This was mainly due to the fact that the hard drive would be chattering away for a good 20 minutes after I had logged in – now that’s infuriating! What the hell was it doing? Well I didn’t really have the drive to spend hours looking for the source so had just lived with it.


When Windows 7 hit Release Candidate I thought – “What the Hell”. If it all went to hell in a wheel barrow then I could either restore my XP system with the CD that came with the laptop or install Vista from the CD I had bought for the upgrade (OEM version purchased with a new HDD).


In a previous post I explained how I configured the laptop to dual-boot with Vista/Ubuntu, and when I took the decision to upgrade the Vista part of this configuration I had a few alarm bells going off in my head. Would it adversely affect my Ubuntu installation (apart from overwriting the grub bootloader)? Would I lose all my documents, music and photos?


Well, it was as good a time as any to do some housekeeping on my systems to delete old stuff and backup the rest. So after my little spring clean I downloaded the latest Release Candidate (7100), burnt the iso to a CD and kicked off the setup process.


The first part of the process was to perform a compatibility check which advised me that Skype, SQL Server 2005 and my Sony Ericsson PC Suite may have problems running on the new operating system. No big deal, I knew that Skype had a fix for this and was pretty sure that Microsoft would sort out SQL Server. As for the mobile phone software – well I don’t use it that much and if a new version was not available then it was no big deal.


The rest of the installation took about 4 hours and restarted the system a number of times – no interaction needed from me so that was good.


When the installation was complete I booted into Windows 7 and had a nose about – and although it’s early days I’m quite impressed. Impressed enough to buy it when it’s released..? Well let’s not get ahead of ourselves here :-).


Ok, so now I had a system running Windows 7 and an Ubuntu installation that I could not access – the grub bootloader had been overwritten (which I knew would happen before I started).


So, how could I reinstall grub to get my dual boot back?


Well, the Ubuntu Documentation has an article on just this process, which worked like a charm for me. I used the first method (with the Live CD) and within 5 minutes I was booting into Ubuntu (which, to my relief, was still there).


Tweeting my External IP Address from Ubuntu Server

In my previous post I described the problems I encountered while trying to configure my Ubuntu Server to be able to send emails via the command line (in my case it was actually via a script). The reason I wanted to do this was so that I could run the script on a scheduled basis to check my external IP address and notify me when it changed.

Why? Well my ISP has provided me with a dynamic IP address which changes on a periodic basis – not that I normally notice. But if I want to be able to administer my Ubuntu Server from outside my local network, i.e. over the Internet via SSH, then I need to know the outward-facing IP address of my router.

I had already found a script to do this and tweaked it a little to run under a non-admin user but while it could detect the change I needed some mechanism for it to tell me. After a fruitless evening trying to set up email I gave up and decided to use the Twitter API instead.


Now there are a few downsides to using Twitter:

  • The data is on someone else’s server – i.e. Twitter’s. This is not a major problem, but what if Twitter decided to start charging for their service, or were (for some reason) forced to take it down altogether? (Unlikely, I know.)
  • Username/Passwords are sent over the wire in plain text (at least in my implementation they are)
  • I may not want to stick around on Twitter forever (the SPAM is driving me nuts and although I have protected my updates it does defeat the point a little bit IMHO)

 Having said that – it is so damn simple it makes no sense not to have a go 🙂


The logic of the script is this:

  1. Store the current IP address in a file
  2. Resolve the external IP address using an external lookup service
  3. Compare the value with that stored in the IP address file
  4. If the IP addresses are different, update the file and send a Tweet

Simple eh?


Well first of all I had to get my Twitter script sorted. My previous post links to the site where I found this script but I’ll post it here for reference:

curl --basic --user "username:password" --data-ascii 
      "status=`echo $@|tr ' ' '+'`" 
          "" -o /dev/null

Note: this is all on one line, and you will obviously need to specify your own Twitter username and password.
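The only non-obvious part of the one-liner is the status parameter: the backticks run echo $@ | tr ' ' '+', which joins together whatever arguments the script was called with and swaps the spaces for ‘+’ signs so the status string survives as form data. A quick, self-contained illustration (the sample arguments here are just for demonstration):

```shell
# Simulate calling the script as: twitter External IP Address has changed
set -- External IP Address has changed

# This is what the backtick expression inside the curl command expands to
STATUS="status=`echo $@ | tr ' ' '+'`"
echo "$STATUS"    # status=External+IP+Address+has+changed
```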


With that in place (saved as /usr/bin/twitter) I just needed my IP Checking script and here is the script I used:

#!/bin/sh
IPFILE=$HOME/.known_ip          # file holding the last known address
CURRENT_IP=$(wget -q -O - "")   # URL of a ‘what is my IP’ service goes here

KNOWN_IP=""
if [ -f "$IPFILE" ]; then
        KNOWN_IP=$(cat "$IPFILE")
fi

if [ "$CURRENT_IP" != "$KNOWN_IP" ]; then
        echo "$CURRENT_IP" > "$IPFILE"
        twitter "External IP Address has changed to $CURRENT_IP"
fi

Running the following command will make the script (called checkipaddress) executable:

sudo chmod +x checkipaddress
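If you want to sanity-check the effect of the +x bit, a quick experiment with a stand-in script shows the idea (the /tmp path and echoed message here are just for illustration):

```shell
# Create a stand-in script, make it executable, then run it directly
printf '#!/bin/sh\necho IP check ran\n' > /tmp/checkipaddress
chmod +x /tmp/checkipaddress
/tmp/checkipaddress    # prints: IP check ran
```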

I decided to create a new Twitter account for my Ubuntu Server and to protect its updates – no offence, but I didn’t think that my external IP address was a good thing to be broadcasting to the world. As my own updates are also protected (I was just getting too much annoying SPAM) I’m happy that my IP address is as secure as it needs to be.


Running the checkipaddress script does what it says on the tin: it checks the IP address and Tweets me if it is different from the last time it checked, so I can now SSH into my Ubuntu Server when I like. Simple!


Now I didn’t come up with the checkipaddress script myself – although it’s not difficult – but I cannot remember where I copied it from. If I remember then I’ll post the link; if you know then post a comment and I’ll update the post.


Command Line EMail on Ubuntu Server Failed – Tweeting Instead

Now that my Ubuntu Server is up and running and configured for the network I want to be able to remotely access it via SSH. Now that’s easy, even for me: I have OpenSSH installed and my router configured, and it works like a charm. The problem is that I don’t have a static IP address from my ISP, so periodically I will be assigned a new one – so how do I know what it is at any point in time? Answer: I don’t. So after browsing around I found a script that would resolve my external IP address and email it to me. Brilliant! Everything was working up to the point where it needed to send the email.

Well I didn’t think it would be that difficult – I just wanted to be able to configure the system so that it could send me an email via the script. I found a few tutorials on the web and decided to follow one which boasted that the setup could be completed in just 5 steps. There was even a comment from someone saying ‘Thanks, it worked where others didn’t’ – well, it didn’t for me. Why is it so difficult?
Once I had tweaked the script to include details for the appropriate SMTP server etc. I could not get past an annoying ‘Invalid HELO message’ error. Using the same settings in my email client proved that they were valid.
At one point I thought I had it working, i.e. the command didn’t result in a nasty error and an email duly appeared in my Inbox – success? Not quite; it appeared to come from ‘Unknown Sender’ even though I had configured a From address. I tweaked the script and was promptly returned to the ‘Invalid HELO message’ error that had been greeting me for the previous couple of hours.
I trawled the Web for answers but the results were far too cryptic for my frame of mind, and in the end I decided that it was too close to midnight to be banging my head on the wall and called it a night.
This morning on the way into work I was thinking about alternative approaches, knowing that I would have to revisit the whole email server setup process in the future, but for now I just wanted a working solution. Then it hit me – Twitter!
I know that Twitter has a RESTful API and with that in mind I knew it could not be that difficult – at least compared to my efforts last night with mailx. As it turns out typing “command line twitter ubuntu” into Google gave me the answer – use Curl!
curl --basic --user "USERNAME:PASSWORD" --data-ascii
      "status=`echo $@|tr ' ' '+'`"
           "" -o /dev/null
This should all be on one line; insert your own Twitter username and password – I’ve split it here to prevent it scrolling off the page.
Now I just have to create an account for my Ubuntu Server (protecting the updates of course), install curl, create the script and I’m away. Once this is sorted I’ll be able to update the script to resolve the External IP address and configure it as an hourly cron job.
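For the hourly schedule mentioned above, a single crontab entry is enough – something along these lines, added via crontab -e (the path below is an assumption about where the checkipaddress script ends up living):

```
# run the IP address check at the top of every hour
0 * * * * /usr/local/bin/checkipaddress
```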
I know I will have to revisit the configuration of the email system – afterall it’s not as if it’s a totally new technology. I knew that this process would not be straightforward and that I would hit obstacles like this along the way so I’m not that surprised, although I am still frustrated.
If anyone has any input, help or advice then please leave a comment.
Update: I’ve resolved this issue now – see the next post.

Remotely Connecting to Ubuntu Server

In a previous post I managed to get my Ubuntu Server test system connected to my home network and the Internet (at least from the inside looking out). If you read the post then you will know that because my house is almost 100% wireless I needed to move the system into the hallway to be next to the router in order to physically connect.

Now although I have a very nice hallway I don’t fancy sitting in it for hours with a keyboard on my lap. I also don’t really want to be running wires around the house and as I think that the system will end up in the garage I need to be able to connect remotely, from my Ubuntu or Windows systems.
Although it’s been a while since I used any flavour of Linux (and even then my use was pretty limited) I knew that remote connections were the way to go and remembered that OpenSSH was a good tool for the job.
After checking that it was not already installed by running:
dpkg --get-selections | grep openssh-server
I went about the installation by running:
sudo apt-get install openssh-server
followed by:
sudo /etc/init.d/ssh start
to get things going.
So far so good, I now have OpenSSH installed and running – how do I connect to it?
Well I had already booted into Ubuntu (9.04 – Jaunty) on my laptop so it was just a matter of running: [see note at end of post] 
ssh dilbert@ 
[dilbert being the username I provided during the installation of Ubuntu Server; the part after the @ is the server’s IP address].
I was prompted for the password for the dilbert user and I was in … but I was not quite finished.
By default OpenSSH accepts remote logins from the root user – not a good idea! So while I was connected I edited the configuration file:
sudo vi /etc/ssh/sshd_config
and set PermitRootLogin to no.
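For reference, the directive in /etc/ssh/sshd_config looks like this once changed:

```
# disallow direct root logins over SSH
PermitRootLogin no
```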
All that’s needed now is to restart OpenSSH so that the new settings take effect:
sudo /etc/init.d/ssh restart
So there it is – I now have my Ubuntu Server set up so that I can access it remotely from my laptop when I boot into Ubuntu. But what if I’ve booted into Windows? Well I’ve heard that PuTTY is a good tool for connecting to SSH from Windows so I’ll have to check that out.
The fact that I was able to run SSH from my Ubuntu laptop means that the client must have been installed already – although I can’t remember installing it, so it must be there by default. If it’s not installed on your setup you will need to run
sudo apt-get install openssh-client



Configuring Networking on Ubuntu Server After Installation

I recently bought an ‘old’ PC to use as a test system, running XP for my .NET development and Ubuntu Server (Jaunty 9.04) for my investigations into Linux. I have a few spare drives kicking around so having a totally separate installation of XP and Ubuntu would be a doddle.

I fitted an additional 20GB drive and for now I’m content to open the side and swap the cables until I get around to sorting out a suitable boot loader.
The installation progressed without any problems until I reached the network configuration. As the router is in the hall and my usual machine connects via wireless, I didn’t have a network socket nearby – we only have one wired PC in the house. I opted for the ‘Setup Networking Later’ option and the process completed without any further problems. Now, as a Windows Luddite, how do I configure the network without a GUI..?
Well the first step was to get the box connected to the router – which acts as the DHCP server for my network – i.e. move it into the hall.
After logging in the first thing to do is to configure the network card – this is done by running the following command:
sudo vi /etc/network/interfaces
which will open the file in the ‘vi’ editor.
Navigate to the end of the file and then press ‘i’ (which will put vi into Insert mode) and add the following lines:
auto eth0
iface eth0 inet dhcp
Now press the escape key to take vi out of Insert mode. So far so good but now we need to write the changes to disk and exit the editor. To write the updated file back to disk simply enter :w and press enter. Now to exit the vi editor enter :q and press enter.
So what have we just done? Well the file contains details of all network devices connected to the system and specifies how they are configured.
The first line we added auto eth0 tells the system to start this adapter, eth0, when it boots up. The next line iface eth0 inet dhcp tells the system to query DHCP for an IP address for this adapter.
Now we just need to restart the networking service to apply the new settings. To do that run the following command:
sudo /etc/init.d/networking restart
To check that all is well, run ifconfig and review the resulting output for details of the eth0 adapter, which should include an IP address as assigned by the router.
Once I had completed this I was able to ping both the router and an external site, confirming that I had Internet access.

ASP.NET Date Validation

A simple requirement at first sight, but not one with a simple solution. You have an ASP.NET page which allows the user to specify a date which is then used as an input parameter for an SQL Stored Procedure. What is stopping the user from entering ‘Hello World’ and submitting it?

Answer, nothing unless you configure some sort of validation.
So what sort of validator do you use? At first sight there does not appear to be a suitable candidate, but to enforce a specific format, e.g. dd/mm/yyyy, the RegularExpressionValidator could be called into play. This will stop the user from entering data in the wrong format, but what about invalid dates such as 32/12/07 or 29/02/07? How do you make sure that the date is a real one?
This is where I was a little while ago, and I decided to opt for the CustomValidator, writing my own server-side and client-side validation functions that cast the user input to a DateTime type, catching any exceptions.
While this worked I was not too happy with the operation of the code. Then I watched a PodCast from dnrTV where Peter Blum demonstrated some lesser known features of the ASP.NET validator controls. He used a CompareValidator to check for a valid date – a CompareValidator??!!
I normally associate the CompareValidator with checking one value against another, e.g. Password and Confirm Password. So how does it validate a date?
  • Simply add a CompareValidator and set its ControlToValidate property to the textbox containing the date.
  • Now locate the Operator property and select DataTypeCheck from the dropdown list.
  • Finally set the Type property to Date and you’re done.
So there you have it – no need for custom code on Server or Client side.

Reading an RSS Feed with C# and Python

When I started this site I had a project in mind that would download Podcasts as they were posted and maintain the content of my MP3 player so that I didn’t have to do it myself. Well since then I have lost my iTunes virginity and while it doesn’t do everything that I wanted (like telling me that a new episode has been downloaded) it does automatically download and delete them once I’ve watched/listened to them.

But just because I don’t need to develop a complete application there is still an itch to scratch here – a few of them in fact.
  • How do I download an RSS stream? (It’s not just podcasts that use them.)
  • How do I parse the resulting XML?
  • How do I download a file and store it locally?
  • And how do I do all this in both C# and Python?
Well, this post will answer the first two questions using C# with LINQ, and Python with its XML libraries.
Using C#
First of all let’s take a look at using C# and LINQ to pull an RSS feed down and read the resulting XML, outputting the results to a console application.
Using Visual Studio 2008 create a C# Console application – it should present you with a default Program.cs file with a few namespace imports and a Main method.
Before we can start we need to import the .NET Framework LINQ to XML library. To do this, add the following line under the last using statement at the top of the file.
using System.Xml.Linq;
With that in place we can now declare an XDocument specifying the URL to the feed we want to download.
Add the following line to the Main method:
XDocument feedXML = XDocument.Load("");
That single line pretty much takes care of downloading the XML (I’ve targeted the 20-show feed from DotNetRocks); all we need to do now is parse the content.
Now I could write a simple class and fill a generic collection and then do a foreach over the collection … but this is .NET 3.5 and I have implicit types and LINQ at my disposal.
The following simple LINQ query will populate a collection of XElements:
var feeds = from feed in feedXML.Descendants("item")
            select new
            {
                Title = feed.Element("title"),
                Link = feed.Element("link"),
                Date = feed.Element("pubDate"),
                Description = feed.Element("description")
            };
Each element has Title, Link, Date and Description properties which will show up in intellisense.
A simple foreach statement over this collection, again using implicit typing, will provide access to the content:
foreach (var feed in feeds)
{
    Console.WriteLine("Title: {0}, Date: {1}", feed.Title.Value, feed.Date.Value);
}
So that’s how it can be done in a handful of lines using C#, so how about Python?
Using Python
Now I’m not going to go into project structure here (two reasons, I’m new to Python and I’m in the process of changing from Eclipse to Python Machine as my IDE of choice – more on that in another post), I’m just going to review the code required to do the job.
Just as with C# there is a library to do much of the grunt work leaving us to add a handful of lines of code. The library in question is urllib. With this imported it is just a matter of navigating to the RSS feed and reading it into a file.
This is the code I used for downloading the RSS to a local file:
import urllib
file = urllib.urlopen("")
text = file.read()
fileObj = open("feed.xml","w")
fileObj.write(text)
Line 2 creates a ‘file-like’ object that is connected to the specified network resource, in this case the RSS feed for FLOSS Weekly on the TWiT network. This object exposes a number of methods for reading the content of the resource including .read() which I use in line 3 to simply read the contents into a text object. Lines 4 and 5 simply create a filesystem object which creates a file called feed.xml and writes the contents of the text object into it. 
So now we have the contents of the RSS feed we still need to parse it – but we don’t have LINQ, how will we cope?
Well as luck would have it Python is Open Source and there are plenty of libraries out there that will provide the required functionality. I’ve chosen feedparser for this exercise – why, because it was written with this task in mind and it does exactly what it says it does.
The code required to provide a similar output to the C# example is:
import feedparser
feed = feedparser.parse('feed.xml')
print feed["channel"]["title"]

for item in feed["entries"]:
    encLocation = item["enclosures"][0]["href"]
    encLength = item["enclosures"][0]["length"]
    if encLength.isdigit():
        print encLocation, int(encLength)/(1024*1024), "MB"
    else:
        print encLocation, "N/A"
Now I could try to explain all of this but the documentation is pretty good itself, at least this will provide a starting point. Essentially the feed.xml document is opened and the channel title is printed out. After this a for loop is executed to iterate over the entries and output the required information. I’ve added a little check to ensure that a missing ‘length’ value does not throw an exception.
So it is clear that with a similar number of lines of code it is possible to download and parse an RSS feed from a remote computer using either C# or Python. While C# has the weight of Microsoft behind it and the functionality of LINQ, Python has the Open Source community and the functionality of … well whatever the community decides it needs.

Upgrading Ubuntu Intrepid to Jaunty

When it comes to upgrading Operating Systems I’m not known as an early adopter, I normally wait a while for others to have the headache of encountering and resolving problems. However, in a moment of madness I decided to upgrade my fully functional Ubuntu 8.10 (Intrepid Ibex) installation to 9.04 (Jaunty Jackalope).

I’ve read a few blog posts where users have upgraded and then found that their sound no longer works or that their display crashes or won’t hit the previous resolution, so I made sure that I had a backup of my /home directory and exported a list of my installed applications.
The backup was performed using rsync and the following command:
rsync -av /home/dave "/media/FREECOM HDD/ubuntu"
Note that the quotes were required because when my external HDD connects it is given a name with a space in it!
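The same quoting rule is easy to demonstrate in isolation: without the quotes the shell would split the path into two separate arguments. (The path below is just a stand-in for the real mount point.)

```shell
# A destination path containing a space must be quoted
DEST="/tmp/FREECOM HDD/ubuntu"
mkdir -p "$DEST"                 # one argument, space and all
[ -d "$DEST" ] && echo "backup destination exists"
```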
I pinched a tip from the LifeHacker site to use a CLI command to generate a text file containing a list of my installed applications. This file could be used, again via the CLI, to reinstall the applications should the worst happen.
The command was:
sudo dpkg --get-selections "*" > current-installations.txt
This file was then copied to the USB drive containing the backup. Apparently, to reinstall from this file I would just need to run the following commands:
sudo apt-get update
dpkg --set-selections < current-installations.txt
apt-get -u dselect-upgrade
I say apparently because my upgrade went without a hitch 🙂
I opened the Update Manager and sure enough was prompted that a Distribution Upgrade was available. Before progressing with the upgrade I thought it prudent to install the handful of Updates that were listed as well.
Once the updates were installed I took a deep breath and went for it.
After downloading the Upgrade Installer I was prompted that the download would take in excess of 3 hours – not totally unexpected, but this would tie up my laptop all night, so I went to the gym instead (I’ve got a holiday coming up and I don’t want Greenpeace trying to roll me back into the sea!).
The process still had about an hour to go when I went to bed, so I expected it to be finished when I came down this morning – which it sort of was… It was waiting for me to respond to a prompt! It also indicated that there was another hour to go … great.
The prompt was to determine what I wanted to do with the grub menu.lst file. I run the laptop as a Dual-Boot with Vista on the other partition (yeah, yeah, I need it for my day job!) so as I didn’t really know what it would do to my settings I opted for the ‘leave it alone’ option.
I was prompted twice more for the same thing which was frustrating as I was getting ready for work and started to think that when I got home I would be faced with a prompt saying ‘Are you sure?’ or similar. As it happened the installation completed within about 5 minutes (not the hour it originally indicated).
Once the reboot had completed I logged in (via the new swishy login screen) and gave it a quick once over. Everything was there (despite the installation removing numerous obsolete packages) and appeared to be working normally.
The only thing I noticed was that my Eclipse IDE failed to load my PyDev perspective when I opened it for the first time after the upgrade. I loaded it manually and it seems to be fine now.
So for me, the upgrade process was a piece of cake, if a bit lengthy.

Request format is unrecognized for URL unexpectedly ending in …

Another thing that you would think would be straightforward but turned out to be quite frustrating.

I have written Web Services in the past without any problems but recently when I was writing one specifically for Excel (2007) I could not get it to link properly.
Running the web service in VS2005 resulted in the test page we all know and love and the Invoke button worked fine! Taking the URL and pasting it into Excel resulted in a “Request format is unrecognized for URL unexpectedly ending in…” error!
It turns out that this is by design – sort of. Basically, the HTTP GET and HTTP POST protocols are disabled by default (which was not the case in .NET 1.0). Enabling them is a simple matter of adding the following to the system.web section of the web.config:
<webServices>
  <protocols>
    <add name="HttpGet"/>
    <add name="HttpPost"/>
  </protocols>
</webServices>
TaDa! it worked.
If you want some more information then see the Knowledge Base article [KB819267].

Calling Code-Behind Method from JavaScript

While this page is still popular I have blogged about an alternate method for sending data from client to server using Javascript. You may want to check that out while you are here.

It shouldn’t be too hard, should it? You want to use some Javascript on the client side to call a method in the Code-Behind of your aspx page – how hard can that be? Surely it must be possible.

I was developing a DotNetNuke module (so the code below is VB.NET and not C#) which would allow the user to search for locations using Virtual Earth and store the latitude and longitude (as well as the zoom level) in a database.
The problem is that the Virtual Earth API is a Javascript (and hence Client-Side) technology and in the normal scheme of things cannot access .NET code on the server. So how do you do it? How do you get a Javascript function to call a Server-Side method – Is it even possible?
Well yes it is – and it ain’t that difficult either 🙂
Here is a sample that I used:
In the .aspx source code (actually in the <head> section of the page):
function geoLock()
{
    var loc = map.GetCenter();
    var args = loc.Longitude + ":" + loc.Latitude + ":" + map.GetZoomLevel();
    performGeoLock(args, '');
    return false;
}

function geoLockComplete(result)
{
    if (result == 1)
        document.getElementById("geoLockResult").innerHTML = 'GeoLock Successful';
    else
        document.getElementById("geoLockResult").innerHTML = 'GeoLock Failed';
}
I also had a button with the onclick event set to onclick=’geoLock()’
In the Code Behind I had the following:
After the Page/Class definition add the following interface implementation:
Implements System.Web.UI.ICallbackEventHandler
In the Page Load:
' Create Callback Reference
Dim cbReference As String
cbReference = Page.ClientScript.GetCallbackEventReference(Me, "arg", "geoLockComplete", "context")

' Create Callback Script
Dim callbackScript As String = ""
callbackScript += "function performGeoLock(arg, context) { " + cbReference + "} ;"
' Register the Script
Page.ClientScript.RegisterClientScriptBlock(Me.GetType(), "performGeoLock", callbackScript, True)
You will also need to implement the RaiseCallbackEvent and GetCallbackResult methods.
Public Sub RaiseCallbackEvent(ByVal eventArgument As String) Implements System.Web.UI.ICallbackEventHandler.RaiseCallbackEvent
    ' [Your Logic in Here]
    ' Need to Set Value to be passed back to Client Script
    result = GeoBlogController.performGeoLock(newGeoLock)
    ' Note: result is an Int32 with Page Level Scope
End Sub

Public Function GetCallbackResult() As String Implements System.Web.UI.ICallbackEventHandler.GetCallbackResult
    ' Pass Back Info Value
    Return result
End Function

And that’s it.
When the user clicks the button the client-side geoLock() method is called.
It in turn creates an args variable (which in this case holds a colon separated list of values) and calls the performGeoLock() callback method.
The code in the Code Behind ‘RaiseCallbackEvent’ method is executed performing some logic and setting the value of a return variable that will be passed back to the client script via the GetCallbackResult method.
Finally the client side method specified in the CallBackReference (in this case geoLockComplete) is called and passed the return value from the Code Behind methods.
I hope you followed all that (easier to do than to describe), but if not then there are articles out there that provide both explanation and example.
As promised in the comments I’ve created a sample Visual Studio solution to demonstrate this code in action. Note that the sample project was written in VS2010 using C# (not VB.NET as above).

Problems with the WAMP Stack

Had an interesting problem at work today trying to configure the WAMP Stack on a Vista PC. I needed this configuration to evaluate the Joomla CMS application for an internal project. I had some minor experience with Joomla and the LAMP stack on an Ubuntu Linux system, but as it didn’t fit my needs I never got very far into it. Now I had a chance to continue my climb up the learning curve and get paid for it – neat huh!

WAMP is to Windows what LAMP is to Linux: basically the installation of the Apache web server, the MySQL database and the PHP (or Perl/Python) programming language.

I downloaded WAMP from

and, after stopping my local instance of IIS to prevent any port 80 clashes, did a basic Next/Next/Next type installation which resulted in a new icon in the System Tray (see below) – but all was not well!


When I tried to kick off the Joomla installation it complained that it did not have any MySQL support – kinda serious for a CMS, I suppose.

I hadn’t checked that the installation was successful, but WAMP had not complained and all the services appeared to be running. Clicking on the System Tray icon and opening phpMyAdmin (which should allow administration of the MySQL database), all I got was an error – it looked like my problem was caused by an extension not being loaded, but why not?


I checked the php.ini files and the extension_dir parameter was set correctly, and the required extensions were uncommented and should therefore have been loaded. However, when I navigated to the phpinfo page the Extension Directory was set to


and not


as specified in the php.ini file – so where was it getting that from?

After a little hunting around I remembered that I already had PHP installed on this PC – maybe it was picking that one up. The problem was that, even if this was the case, I needed/wanted to keep WAMP totally separate from everything else on my system, so how did I tell WAMP to use its own php.ini file? It turns out it was not too difficult after all.

Adding the following line to the httpd.conf file within the WAMP Apache configuration resolved the problem:

PHPIniDir "c:/wamp/bin/php/php5.2.9/"

So, now that I have that sorted I can install Joomla and start playing – I mean learning.
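If you want to confirm the directive took effect, a quick check from a shell can save a round trip to the phpinfo page. The helper below is just a sketch – the function name is my own, and the httpd.conf path in the usage example is assumed from a default WampServer layout, so adjust it to your install:

```shell
#!/bin/sh
# check_phpinidir: print the PHPIniDir directive from an Apache httpd.conf,
# or a warning if it is missing (in which case Apache may pick up a php.ini
# from another PHP install on the machine).
check_phpinidir() {
    if grep -i '^PHPIniDir' "$1"; then
        :   # directive found and printed by grep
    else
        echo "PHPIniDir not set in $1"
    fi
}

# Example (path assumed from a default WAMP install):
# check_phpinidir "c:/wamp/bin/apache/conf/httpd.conf"
```

If the directive is missing from the file you point it at, you will get the warning rather than the line – which is exactly the symptom I was seeing.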

Recording Internet Radio Stream with Ubuntu

A friend of mine recently started co-presenting a show on a local radio station but due to the timings I was unable to listen to it live. I thought about trying to record it off the radio but this was not going to be straightforward as I would not be in when the show started or finished – and this is the 21st century for heaven’s sake, why was I thinking about going back to cassettes and ‘pressing play and record at the same time’? Surely there had to be another way, preferably one that meant I could get the show in MP3 format so that I could listen in the car, at the gym or at work.

A podcast was what I needed but unfortunately the radio station in question did not produce one. However, they did provide a live stream – great. Now, what is the equivalent of pressing play and record at the same time on a computer?
There are probably ways of doing it in Windows using Media Player or iTunes (I don’t know because I haven’t checked) but where was the fun in that? I wanted to see if Ubuntu could do it – for free and maybe learn something in the process!
I had a quick Google around and sure enough there were a number of posts on recording internet streams (see Related Links below). The first link will take you to an Instructables page which will guide you through a 5-step process and forms the basis of my configuration. However, in its published form their method did not work on my setup (Ubuntu 8.10) and I needed to tweak the configuration of one of the applications just a little bit. The process also utilised KCron, which is a KDE application, and as I am running GNOME I needed/wanted an alternative. So while the bulk of the process is taken from the Instructables site, this post details the tweaks I needed to make to get it working for me, plus a bit more explanation of some of the components used.
The first thing to do is to install all the bits and pieces required so open a terminal window and enter
sudo apt-get install lame mplayer gnome-schedule

This will install mplayer (the application that will actually connect to the stream and output a .wav file), lame (an Open Source utility that will convert the .wav to an MP3) and gnome-schedule (which will schedule the starting and stopping of the recording).
The creation and configuration of the folders as well as the creation of the scripts that control mplayer are detailed on the Instructables site but I’ll just give a bit of extra detail on the options and switches applied to the commands:
The line in the streamrecord script that actually records the stream is:
mplayer -playlist "" -ao pcm:file=/tmp/mystream.wav -vc dummy -vo null;

So what does that all mean?
  1. -playlist simply tells mplayer what file to play – I inserted the path to the radio stream here.
  2. -ao sets the audio driver. Here I’m using the pcm driver for RAW PCM/WAVE file writer audio output. This is followed by the output path, i.e. where the temporary audio file will be saved.
  3. -vo sets the video driver which is not needed here so I just passed in null
  4. -vc sets the video codec to use which is set to dummy as we are not needing video here
The next line performs the conversion from .wav to .mp3:
lame -m s /tmp/mystream.wav -o "/home/dave/Music/PhonicFM/Show - $NOW.mp3";

The ‘m’ switch sets the channel mode – the ‘s’ selects stereo, which is good enough for our needs.
The ‘o’ switch flags the output as a non-original copy (it clears the ‘original’ bit in the MP3 header); the final argument is the output path.
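Putting the two commands together, the whole streamrecord script ends up as something like the sketch below. The stream URL and output folder are placeholders (the real URL is elided above), and wrapping the commands in functions is my own addition to keep the filename logic separate – the original script just runs the commands in sequence:

```shell
#!/bin/sh
# streamrecord (sketch): capture the stream to a temporary .wav, then
# convert it to MP3 and clean up. STREAM_URL and OUT_DIR are placeholders.
STREAM_URL="http://example.com/stream.pls"   # hypothetical stream URL
OUT_DIR="/home/dave/Music/PhonicFM"

# Build the output file name from today's date, e.g. "Show - 2009-05-02.mp3"
output_name() {
    echo "Show - $(date +%Y-%m-%d).mp3"
}

record() {
    # mplayer runs until the companion pkill script stops it
    mplayer -playlist "$STREAM_URL" -ao pcm:file=/tmp/mystream.wav -vc dummy -vo null
    lame -m s /tmp/mystream.wav -o "$OUT_DIR/$(output_name)"
    rm /tmp/mystream.wav
}
```

When mplayer is killed by the stop script, control falls through to the lame line, which is why stopping the recording also triggers the conversion.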
At this point it should have been possible to test the setup by running the scripts in a terminal session. However, when I ran the streamrecord script I got an error when it executed the mplayer command:
Couldn’t resolve name for AF_INET6:
So what was all that about? Well, believing that I would not be the first person to have experienced this problem I turned to Google and sure enough a solution presented itself (see the link at the end of this post). Basically I needed to tweak the mplayer config file to prevent it from trying to use the IPv6 protocol first and just use the plain old IPv4 version.
Adding prefer-ipv4=yes to the mplayer config file (on my system it was /etc/mplayer/mplayer.conf) sorted this problem for me.
Now I could test the configuration by entering ./scripts/streamrecord in a terminal window to start the recording, waiting a few minutes and then (in another terminal window) entering ./scripts/pkill to stop it.
While the recording is taking place a .wav file is created in /tmp and will grow as the recording continues. For the 2 hour show that I recorded this file grew to around 1.3GB so make sure you have enough room on the drive that you stream this file to. When you run the pkill command the mplayer process will stop and the next command in the streamrecord script will run and start the lame utility, which will convert the .wav file into an MP3, storing it in the location you specified. This conversion took about 10 minutes for my 1.3GB wav and resulted in a more manageable 110MB MP3 file. When the conversion completes the .wav file will be deleted and you are left with the radio show in MP3 format. Simple eh?
All that remained to do was to configure the Gnome Scheduler to start and stop the recording.
The Gnome Scheduler can be found in the System Tools menu (within the Applications menu) and you will need to create two jobs, one to start the recording and one to stop it.
The process is pretty straightforward and detailed nicely in a post on Make Tech Easier (see Related Links below).
I created a task called ‘Start Recording’ which ran the following command at 10 o’clock:

and one called ‘Stop Recording’ which ran the following command at Midday:

And that’s it, as long as your system is running and can access the Internet at the specified time your show will be recorded and will be waiting for you in MP3 format.
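For reference, gnome-schedule is essentially a graphical front end to cron, so the two tasks boil down to a pair of crontab lines like the ones printed below. The script paths and the every-day schedule are assumptions – adjust them to match your setup and the show's actual day:

```shell
#!/bin/sh
# Sketch of the 'Start Recording' and 'Stop Recording' tasks as plain cron
# entries (fields: minute hour day-of-month month day-of-week command).
# The script paths are hypothetical.
START_JOB="0 10 * * * /home/dave/scripts/streamrecord"   # start at 10:00
STOP_JOB="0 12 * * * /home/dave/scripts/pkill"           # stop at midday
printf '%s\n%s\n' "$START_JOB" "$STOP_JOB"
```

Adding these directly with `crontab -e` would achieve the same result as the two gnome-schedule tasks.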
Related Links:


Dirty Checking and the .NET GridView

Most web developers have come across this problem at one time or another: how do I stop someone navigating away from a page if they have not saved the changes they have made?

The answer is that you need to employ ‘Dirty Checking’: basically set a flag when the page data is changed and check it if the user tries to navigate away. Sounds simple (and in many instances it may be so) but I encountered a problem recently (today!) which took some thought to get around. Basically I had a Master page with the following Javascript in the head section:

var isDirty = false;

function setDirty(changeVal) {
    isDirty = changeVal;
}

window.onbeforeunload = checkExit;

function checkExit() {
    if (isDirty == true) {
        return confirmExit();
    }
}

function confirmExit() {
    // this value will be returned as part of the confirmation message
    return "You have not saved your changes to this record.";
}

Any page utilising the dirty checking had some code in the Page_Load event that looked something like this:

ddlCategory.Attributes.Add("onchange", "setDirty(true);");  // Dropdown List
txtWorkArea.Attributes.Add("onchange", "setDirty(true);"); // TextBox
chkAssessmentRequired.Attributes.Add("onclick", "setDirty(true);");  // Checkbox
buttonSave.Attributes.Add("onclick", "setDirty(false);");  // Button

So when the page is loaded the editable controls have an attribute added which will execute the setDirty function and set the dirty status accordingly. Note that the Save button is configured to clear the dirty status (i.e. set it to false).

So, what’s wrong with that? Well nothing, everything works as expected – but enter the .NET GridView control! The dirty checking was then added to a page that contained a .NET GridView control and the problems started. The GridView allows in-line editing and can be configured to commit changes to the underlying database when the ‘OK’ button within the edited row is clicked. However, as the GridView only held part of the information on the page, we wanted it all to be committed at the same time, i.e. update the GridView but not call the Update method of its DataSource until the whole page was being committed. I had to wire up the dirty checking to flag any changes made to the data it contained.

My first approach was to set the dirty status of the page when any row was actually updated, i.e. when the user clicked on the OK button of the Edit row. This could easily be caught in the RowUpdated event, seemed a logical choice and initially appeared to work as expected – until one of those damn tester people got hold of it. It was found that if the page was in a dirty state before any data in the GridView was edited, then clicking on the Edit link button would result in the ‘Unsaved Changes’ dialog being displayed. It was clear that the onbeforeunload event was being fired by the GridView going into Edit Mode and there was nothing we could do about that – we just had to find a way around it.

We decided that as well as flagging the dirty state we also had to flag and maintain the ‘edit’ state of the page. This way we could ignore postbacks that occurred when the GridView entered Edit Mode – simple….. The Javascript code in the Master Page was updated by adding a new ‘setEditing’ function and updating the condition within the checkExit function.

var isEditing = false;

function setEditing(nextValue) {
    isEditing = nextValue;
}

function checkExit() {
    if (isDirty == true && isEditing == false) {
        return confirmExit();
    }
}

So now we could flag a page as being in Edit Mode and bail out of checkExit if it was (even if the page was flagged as being dirty). But we were not out of the woods quite yet – we still needed to wire the new functions into the rest of the code. We needed to add the ‘onclick’ attribute to each Edit button in the GridView to flag the page as being in Edit Mode. The simplest way of doing this is from within the RowCreated event:

protected void GridView1_RowCreated(object sender, GridViewRowEventArgs e)
{
    LinkButton editButton = (LinkButton)e.Row.FindControl("LinkButton1");
    if (editButton != null)
    {
        editButton.Attributes.Add("onclick", "setEditing(true);");
    }
}

When the row has been updated we need to clear the Edit Mode flag and set the page’s dirty state to true. This can be done in the RowUpdated event, which is fired when the user clicks the OK button of a row in Edit Mode.

protected void GridView1_RowUpdated(object sender, GridViewUpdatedEventArgs e)
{
    ScriptManager.RegisterStartupScript(this, this.GetType(), "SetDirty", "setDirty(true);setEditing(false);", true);
}

So, how do I know that the Edit button will be called LinkButton1? Well, by default, when a CommandField is added to a GridView there is no way of knowing – so you have to convert the field into a TemplateField instead.

With this code in place clicking on the Edit button sets the page in Edit Mode and the ‘Unsaved Changes’ dialog is not displayed. If the user makes changes to the data fields and clicks OK then the flags are reset accordingly. Now we just need to handle the situation where the user clicks the Cancel button of a row in Edit Mode. Adding the following code to the RowCancelingEdit event sorts that out:

protected void riskAssessmentDetails_RowCancelingEdit(object sender, GridViewCancelEditEventArgs e)
{
    ScriptManager.RegisterStartupScript(this, this.GetType(), "SetEditing", "setEditing(false);", true);
}

So there you have it – one method of utilising dirty checking in an ASP.NET page containing the .NET GridView control. It may not be the only way, or the best way, but it solved the problem that we encountered. If you spot any mistakes or have any suggestions then feel free to leave a comment.

All Change……

If you are one of the 100 or so previous visitors to the site then you may notice that it has changed quite a lot, in fact the more observant of you will notice that I am now running a WordPress Blog Engine rather than a Joomla! based CMS.
I decided to change to WordPress for a number of reasons. I believe in trying to use the right tool for the job, and the site was growing into more of a blog; while Joomla! can handle this, it is pretty well accepted that WordPress is the No. 1 blog engine on the Net. That’s not to say that Joomla! was a load of rubbish, but the learning curve was much, much greater than with WordPress.

I also found that trying to find and install plugins was not as easy with Joomla! as it is with WordPress. The site will probably have quite a bit of code on it and trying to get Syntax Highlighting plugins for Joomla! was a nightmare. In contrast I had a WordPress one installed and working in minutes.

I’m not totally turning my back on Joomla! though as we are going to be using it within one of the projects we are running at work – so I’ll still need to climb the learning curve but at least I’ll be getting paid to do it which can’t be bad.

The only thing lacking (so far) is the ability to take the site offline (although I understand that there is a plugin for this), so if you are reading this around the same time I’m typing this post then be aware that I am still working with the imported data from the Joomla! site, so some of the posts will be missing until I update the links etc.