Tuples are Evil and Should be Treated as Such

While working on a particular area of a client’s code base (C#) I came across a method with a signature similar to the following:

public Tuple<bool, decimal, decimal, decimal, int, int> CheckState(int aParameter)

At first glance you may think that there is nothing wrong with this, but think about it – what does that bool actually represent? What about the second of the decimal values? Well, the developer had obviously thought about this and had added a comment above the signature – but it only specified 5 values, not the 6 that are in the Tuple’s definition (that’s because code comments lie).

But the problems with Tuples don’t stop with not being able to see what the method is returning – oh no. What happens when you are on the other side of the method – calling it and consuming the response?

var stateResult = CheckState(1);

bool isValid = stateResult.Item1;
decimal highValue = stateResult.Item2;
decimal lowValue = stateResult.Item3;
decimal alarmValue = stateResult.Item4;
int inPort = stateResult.Item5;
int outPort = stateResult.Item6;

What is not obvious from this is that Visual Studio’s IntelliSense is not going to help you out here – Item1, what is that again? OK, you may be able to see it’s a bool, but how much does that help you? What about the decimal values? Which is which?

Obviously the developer who put this together needed to return a handful of values but why oh why did he use a Tuple instead of creating a simple POCO/DTO?

public class StateResult
{
    public bool IsValid { get; set; }
    public decimal HighValue { get; set; }
    public decimal LowValue { get; set; }
    public decimal AlarmValue { get; set; }
    public int InPort { get; set; }
    public int OutPort { get; set; }
}

The method signature now becomes:

public StateResult CheckState(int aParameter)

While the usage would now be:

var stateResult = CheckState(1);
bool isValid = stateResult.IsValid;
decimal highValue = stateResult.HighValue;
decimal lowValue = stateResult.LowValue;
decimal alarmValue = stateResult.AlarmValue;
int inPort = stateResult.InPort;
int outPort = stateResult.OutPort;

For the sake of less than a dozen lines of code it has become more readable, almost self-describing, and easier to use and maintain.

So the next time you think about using a Tuple – remember, the next developer may know where you live and may own an axe 😉


Configuring Dropbox on an Ubuntu Server – Pt 1

If you’ve been reading any of my recent posts you will know that I have been moving from a shared host to a virtual private server and that I have also experienced issues with my backups. With the blog configured and running well, the migration was almost complete – but what about backing up my data? I’d already seen how easy it was to run into problems, so this time I wanted to be a little bit more in control and not just blindly rely on third-party plugins. I was keen to use Dropbox but was unsure about installing and configuring it on a headless Ubuntu Server. As it happens, it was not really that difficult to get it up and running.

The Dropbox Wiki has a very good article on configuring Dropbox on a text-based Linux system like mine, but there are a couple of steps that are a little light on detail and it does not really flow very well, so it’s easy to miss something and end up scratching your head for a while. I’ll not reproduce the entire article here but just highlight the areas I think need a little more explanation.

Using the ‘step-by-step’ setup process in the above article everything goes fine until step 7, at which point you are asked to navigate to a particular URL, from the server, to link the installation to a Dropbox account. Well, there’s a problem – we are running a headless server so we can’t really run a browser, can we? Well, actually, yes we can – just not a very pretty one!

The article suggests installing a text-based browser called lynx, which can be installed with the following command:

sudo apt-get install lynx

So once you have lynx installed you will need two ssh sessions running to complete the sync process: one to start the daemon, which prompts you with the appropriate URL, and one in which to run the lynx browser to navigate to it.

In one of the sessions start the daemon (the path below is the default install location, so adjust it if yours differs):

~/.dropbox-dist/dropboxd
and when the URL is displayed use the mouse to select it and copy it to the clipboard using CTRL+C. Now in the other session, start the browser:

lynx
and you should see the lynx start page.

It should come as no surprise that the browser is driven by the keyboard; to navigate to a URL we just need to press ‘g’, paste the value copied from the other session (or you could just type it in) and then press ‘Enter’. At this point you may get a few messages about accepting cookies etc. – I just pressed ‘A’ to accept all (although I needed to do this a couple of times!). The resulting webpage may not look like Dropbox – but in text mode it’s as close as we’ll get.

Using the TAB key you should be able to jump down to locate the Username and Password fields. Enter the appropriate values and then TAB down to the ‘Submit’ text/link and press ‘Enter’.

After successfully logging in you should now be able to TAB down to locate a prompt asking you for your password again to link the host server with the current Dropbox account. Do this and TAB down to the ‘Submit’ button and press ‘Enter’. If successful then the other ssh session should now display a welcome message.

In the lynx session, exit the browser by pressing ‘q’. You should now find a Dropbox folder in your /home directory which will contain (or still be syncing) all of your existing files (or the default Dropbox ones if you linked to a fresh account).

So that’s that then – well, not quite. We currently have an ssh session tied up running the daemon – if we close it down then the daemon will also close. So we need some way of running it as part of the normal startup routines, which can be achieved with some scripts in the appropriate locations. I’ll run through this part of the wiki article in part 2.
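Until then, if you just need the daemon to survive a reboot, one quick stopgap (my own shortcut, not what the wiki article does) is a cron @reboot entry for your user, added via crontab -e:

```
# assumes the default install path under your home directory
@reboot $HOME/.dropbox-dist/dropboxd
```

It’s crude – no stop/restart handling – but it gets the daemon running again after a reboot until the proper startup scripts are in place.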

Configuring vsftpd on Ubuntu Server 10.04 LTS

So my move from shared hosting to a virtual private server (VPS) was going well so far – I’d moved this blog across and wired up Exim4 to handle all my emailing needs. Then yesterday I saw an interesting article on Lifehacker about the MobilePress plugin for WordPress, which renders the blog in a format more suitable for mobile devices. Excellent, I thought – I’ll have that! That’s when I found out that the blog itself was not as ready as I thought it was – the fact was I could not install any plugins due to the lack of an FTP server on my VPS – so my next job was to resolve that!

When I tried to install the MobilePress plugin I was presented with the following page which is requesting FTP credentials – which I didn’t have.

But once again Open Source came to the rescue through the vsftpd project, which claims to be the fastest and most secure FTP server for Linux systems. A bold claim indeed, and surely this means massive amounts of configuration? Well, as it happens (and for my purposes at least) this is not the case. That said, all of the tutorials I found out on the web were missing one key step, which left me scratching my head for a little while until the penny dropped.

First of all I installed the package from the standard repository:

sudo apt-get install vsftpd

Next I needed to make a quick change to the main configuration file, which is located at /etc/vsftpd.conf, by making sure that the following lines are uncommented:

local_enable=YES
write_enable=YES
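If you prefer to script the change rather than edit by hand, the uncommenting can be done with sed. The sketch below runs against a scratch copy of the two lines so it’s safe to try; for the real thing you’d point the same sed at /etc/vsftpd.conf with sudo:

```shell
# Demo on a scratch file; substitute /etc/vsftpd.conf (with sudo) for real use
CONF=$(mktemp)
printf '#local_enable=YES\n#write_enable=YES\n' > "$CONF"
# Strip the leading '#' from each directive, leaving the value untouched
sed -i -e 's/^#\(local_enable=YES\)/\1/' \
       -e 's/^#\(write_enable=YES\)/\1/' "$CONF"
grep -c '=YES' "$CONF"   # → 2
```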

With the update made and the file saved I restarted the FTP server

sudo /etc/init.d/vsftpd restart

This is the point where all of the tutorials I found move on to securing the server further (it is apparently pretty secure already, so these steps are optional). But at this stage I should have been able to access my FTP server from an external client as well as configure the blog to use it for its plugin installation downloads – and I could do neither.

I checked the ftp user existed, changed its password and made sure it was not in /etc/ftpusers, which (ironically) lists users who cannot access via ftp. I checked that the appropriate ports were open and that the service was in fact running. Everything looked fine.

Then I wondered: what if the ftp user is a special, restricted user? What would happen if I created another one?

sudo adduser ftpuser

and after specifying a password (and accepting the default values for the other user-related fields) I added the new user to the ftp group:

sudo adduser ftpuser ftp

Tada! I could now access the FTP site from my local browser and, after entering the appropriate information into the WordPress configuration screen (using localhost for the hostname and the username/password I’d created), I was in business. I finally installed the MobilePress plugin (remember that?) and the blog now provides a more mobile-friendly experience.

Migrating Blog to Virtual Private Server with Exim4 email support

In my previous post I explained that I was migrating this blog from a shared server provided by 5quidhosting onto a virtual private server (VPS) provided by BHost. In theory this should have been quite straightforward but in practice, bearing in mind I’m a developer and not an infrastructure boffin, there was a lot more to it than just moving zip files around and running a few commands.

That said, that’s exactly where I started off – I accessed my 5quidhosting reseller control panel and backed up my home directory for this blog and the MySQL database associated with it. After creating a suitable folder within /var/www I uploaded (via scp) the home directory backup and extracted the public_html folder into the newly created location. Next I created a new database in MySQL, created a new user and then GRANTed the appropriate permissions to that user for that database. Finally, I uploaded the backup of the MySQL database (actually just a SQL script) and read it into the new database.

CREATE USER 'onthefence'@'localhost' IDENTIFIED BY 'secret_password';
GRANT ALL PRIVILEGES ON onthefence.* TO 'onthefence'@'localhost';
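For completeness, the database itself has to exist before the GRANT above is of any use – it was created with something like this (I’ve left the character set at the server default; adjust to taste):

```sql
-- Create the empty database that the backup script will be read into
CREATE DATABASE onthefence;
```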

Finally I opened the wp-config.php file and changed the DB_NAME, DB_USER and DB_PASSWORD values to the ones I used above.
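Those lines in wp-config.php end up looking like this (the values match the user created above):

```php
define('DB_NAME', 'onthefence');
define('DB_USER', 'onthefence');
define('DB_PASSWORD', 'secret_password');
```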


At this point I was able to access the blog using the server’s IP address and the sub-folder path. But that’s not really very nice, so a little bit of Apache tweaking was called for. I added this to the top of /etc/apache2/sites-enabled/000-default:

<VirtualHost *:80>
    ServerName onthefencedevelopment.com
    ServerAlias *.onthefencedevelopment.com
    DocumentRoot /var/www/blog
    <Directory /var/www/blog>
        Options FollowSymLinks
        AllowOverride None
    </Directory>
</VirtualHost>

Now I could load up the blog by navigating to the server directly (no path to the actual sub-folder). While the home page displayed fine, when I clicked on any of the posts I was taken to the live blog rather than the page on the VPS instance. Why? Because when WordPress saves some of its data it saves the fully qualified domain name too, i.e. the full path including the http://www.onthefencedevelopment.com part. So I’d gone as far as I could go without actually breaking the link to 5quidhosting. By resetting the nameservers I’d essentially go live with the VPS instance of the blog.
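Had I wanted the copy to respond under a different hostname before going live, the stored URLs live in the wp_options table and could have been updated directly. A sketch only – NEWHOST is a placeholder and wp_ is the default WordPress table prefix, so adjust both if yours differ (and note that URLs embedded in post content are stored separately):

```sql
-- Point the WordPress site and home URLs at a new host
UPDATE wp_options
   SET option_value = 'http://NEWHOST'
 WHERE option_name IN ('siteurl', 'home');
```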


Safe in the knowledge that I could always repoint the nameservers back to 5quidhosting, I bit the bullet and did it.


With the nameservers reset to my registrar’s defaults I was able to create two A records in my DNS settings. An A record is really what maps a domain, e.g. onthefencedevelopment.com, to an IP address. So why two records? Well, initially I just created one for ‘www.onthefencedevelopment.com’ and after a short wait (nowhere near the 72 hours quoted by many a sysadmin) this took effect – but the blog itself failed to load. The reason was that I’d configured the blog without the leading www. Notwithstanding the arguments about which is right, it’s probably prudent to have the two variants configured, i.e. http://www.onthefencedevelopment.com and just onthefencedevelopment.com, both pointing to the same IP address. A short wait later and bingo, the blog was up and running – finished, yes? Erm, actually no.


As mentioned in my previous post, 5quidhosting also provided email hosting. This was not only my inbox for the domain but also my outgoing email server. The new blog did not have any way of sending out email notifications and until it did I could not really consider this instance to be ready for service.


I’d already decided to switch my email hosting over to Google Apps and will blog about that separately but even with that sorted and my mailboxes accessible via the GMail interface I was still no closer to getting the server configured.


I’d already blogged about sending emails from the command line on an Ubuntu server, so surely I was laughing – my blog had come to its own rescue. Well, not quite. In the previous configuration I’d used a package called nullmailer which, paired with the mutt email client, worked well with my previous outgoing email server (provided by 5quidhosting). The problem is that the smtp servers provided by Google Apps require a level of encryption that nullmailer does not support. But all was not lost because, as usual in the open source world, there are normally numerous solutions to a problem. In this instance the alternative came in the form of Exim4, which provides the support that the Google Apps servers require and replaced nullmailer in the solution.

sudo apt-get install exim4 mutt

With the packages installed I followed the process defined on the Debian Wiki (Ubuntu is Debian based after all) and within about 10 minutes I was able to run the following command to get mutt to send me a test message:

echo "This is a test" | mutt -s "Test Message" nobody@nowhere.com

A final test was to get the blog to send me an email – which I did by logging out and requesting a password reset, which duly arrived a few minutes later.


Moving from Shared Hosting to a Virtual Private Server

Until recently I have been hosting this blog, and a few other sites and proof of concepts, using a reseller account provided by 5quidhosting costing, well £5. Now before I continue I should say that I have been very happy with the service they have provided me over the past two years or so and have no hesitation in recommending them for standard hosting services. The server performance was good and on the few times I needed to contact support their response was prompt and efficient. So, why have I moved this blog? Well, it’s all about access, or more accurately the lack of it.

I am of course talking about SSH access which was not provided on the 5quidhosting servers due to previous hack attempts. This is fair enough, most users will have no need for this level of access – indeed neither did I for two years.
In many of my previous blog posts I have been using Ubuntu Server to host my own source control server (Subversion), numerous Drupal test sites and a series of scripts monitoring everything from the stability of my broadband connection to what time my girlfriend’s son turned his computer off on a school night. All this was done using an old grey tower PC in the garage – which I of course accessed via SSH. Now this has been a good workhorse, but it’s an old PC and sooner or later it’s going to fail on me, so on the recommendation of a colleague at work I provisioned a virtual private server from BHost.
Despite the rather uninspiring website, BHost provide a very good service at a very affordable price. I opted for the Bronze configuration (20GB HDD and 1TB bandwidth) for less than £5. The control panel is really easy to use and I was able to reinstall the server with Ubuntu 10.04 64-bit (replacing CentOS) in just a few minutes.
I was also going to have to source an alternative email provider – 5quidhosting was providing that for me but BHost did not. Although I could have used the VPS as an email server (my work colleague has done so – but he’s an infrastructure guru so that’s only to be expected!) I decided to go down the Google Apps road. This was pretty simple to do and once I’d set up my MX records I was in business – more on this in a future post.
So now I had my VPS, what was I going to do with it?
Well, I made a list:
  1. Relocate this Blog
    • The very fact you are reading this means I succeeded here, but the above three words grew arms, legs and teeth (by which I mean ‘not as easy as I’d thought/hoped’)
  2. Relocate my proof of concept Drupal sites
  3. Relocate my SVN server from the garage and implement SSL access
    • I want to be able to access my code securely from home or from customer sites without any bandwidth issues or failed access due to someone unplugging the server to plug the pressure washer in..!!
  4. Install and configure CruiseControl
    • This is mainly for my Android development – I want to be able to perform continuous integration builds as well as publishing to the Android Market with minimum effort (or risk of forgetting one of the many steps along the way).
  5. Configure Dropbox daemon and configure for backups for all the above
    • Using plugins etc. that use the Dropbox API is OK but I’m getting a lot of failures at the moment, apparently due to the Web API
As ever, so that I don’t forget anything, I’ll be blogging my progress, so if you are interested in any of the above (or have some advice or suggestions) then please leave a comment.

When is a backup not a backup ….. ?

The answer is of course, when you can’t restore from it. This is the situation I found myself in last weekend after receiving an email from a friend saying that he tried to post a comment but only received a blank page for his efforts. Sure enough, when I tried I got the same result.

Logging into the WordPress admin panel I was greeted with a message saying that the Akismet anti-spam plugin had not been loaded due to an error. Sounds like the most likely suspect, so what did I need to do about it? Well, there was no update available, so it couldn’t be that. However, I had recently upgraded to WordPress 3.1, so maybe that was the culprit. So clicking on the Reinstall button might sort me out, right? That’s where the problems really started.
Now before you start thinking “you forgot to backup didn’t you!?” the answer is no, well not quite. I have configured the wpTimeMachine plugin to perform a weekly backup and send it to Dropbox, so I presumed I was ‘safe’.
Well, after clicking Reinstall and returning to the site all I saw was a php error – not good. Worse still was the fact that I could not access the WordPress dashboard either. In short the site was hosed – but I had last week’s backup so I was fine, right…?
A wpTimeMachine backup consists of five files: a zip file containing the contents of wp-content, a sql script containing all the data, the htaccess file, a script and a text file containing restoration instructions, which assume that you’ve lost the site entirely and are starting with an empty directory. The general gist is to get a vanilla install of WordPress running, extract your content into the new installation and restore your data.
Simple eh? Well no, not when the archive containing the content files, i.e. plugins, themes and uploads, is corrupted! Opening the archive displays a single folder (an upload from November last year) – but the file is 13.5MB! Attempting to extract the archive resulted in an error saying “End-of-central-directory signature not found” – this is not good!
Well, I was not trying to recover from a total loss – I still had my content and my database. So I decided to create my own backup along the same lines as wpTimeMachine and see how I got on. Creating a vanilla install of WordPress 3.1 was pretty straightforward, as was creating a local copy of my database (using the wpTimeMachine sql script). All that remained was to pull the contents of my wp-content down and copy that into my local instance. The bad news was that trying to run the local instance resulted in the same errors – but that was because the links were still pointing (from my local instance) to the live site.
I zipped up the local instance and ftp’d it up to the site and, with breath held, I unzipped it. With breath still held, I attempted to log in to the admin panel – SUCCESS!
Well, the site is back online (obviously) but there is still the problem of the corrupted backups – but at least I know about it now and can do something about it.
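One obvious ‘something’ is to verify each archive as soon as it’s created, rather than finding out it’s corrupt when you need it. A sketch of such a check, using Python’s stdlib zipfile module from the shell (chosen because unzip isn’t always installed; the demo builds a scratch archive to test against):

```shell
# Build a scratch archive, then verify it; a corrupted zip fails the -t check
workdir=$(mktemp -d)
echo "hello" > "$workdir/a.txt"
( cd "$workdir" && python3 -m zipfile -c backup.zip a.txt )
python3 -m zipfile -t "$workdir/backup.zip" && echo "archive OK"
```

Run the same `-t` check against each wpTimeMachine zip as it lands in Dropbox and a bad backup announces itself straight away.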

Android: Not Free as in Speech, but does it really matter?

So I was at work the other day and a tweet grabbed my attention. It said “Steve Jobs vindicated: Google Android is not Open” and linked to an article on The Register. After reading through the article I was reminded of my childhood when kids would say things like “yeah, well you eat snails!“. Steve Jobs vindicated!! Well I bet he’ll sleep better for all of that!

It’s not that I am an Apple basher, I have an iPod Nano after all, but how can a company with no Open Source culture to speak of start pointing the finger at another? It’s like being back in the playground!
The fact of the matter is that Android is not a totally Open Source platform, but it never has been. As the article points out, Google Maps and the Android Market Place are proprietary and I would not expect either to be made open source any time soon. But does that matter? Judging by the number of people buying Android handsets I’d say not at all! It has to be said that most people buying Android devices are oblivious to the fact that they are touted as open source with many of them not knowing what that means anyway – it simply does not matter!
I opted for an Android device well over a year ago now and can’t see myself buying an iPhone (or Windows Phone 7) in the near future and this has nothing to do with the ‘Freeness’ of the operating system. Bear in mind that the Windows Phone 7 platform was not released when I made my decision so I had a straight choice between Android and iOS.
As a developer I was interested in writing applications for the mobile platform, and Apple’s Developer Agreement was far too restrictive for my liking; with draconian decisions like iOS not supporting Flash® and then all the controversy surrounding MonoTouch, my gut told me to steer clear. That said, a big part of the decision was that I already knew how to program in Java so had a head start with Android, whereas iOS would require me to learn Objective-C. I was also able to download the Android SDK without needing to register with Google (although some people would say that they knew I was there anyway!) while Apple requires me to register with the site first (which will result in more Apple-based spam in my Inbox – I mentioned that I already had an iPod!). Then there is the cost of developing for the Apple App Store, albeit only $99/year, while targeting the Android Market Place is only $25. This does have its drawbacks, the recent Android Market Place ‘Hack’ for one and just plain crap applications for another, and highlights the Danger of Free, but life is all about mitigating risk; some people pay to do it, others apply common sense and will need to deal with the consequences if they get it wrong.
But of course, most of the people buying Android devices are not developers, just the run-of-the-mill public, so why do they do it? I think that it all comes down to choice. In Apple’s corner there is the iPhone, while in the Android corner there are dozens of phones from HTC, Samsung, LG, Motorola and Sony Ericsson. While the iPhone is a fantastic piece of technology (no denying it), choice is good; people like choice. That said, they are also mindful of the cost of ownership, so they tend to go for the devices that do all of the flashy things that the iPhone does (and of course the Flash® things it cannot do) without having to spend £200 plus take out a two-year contract.
So at the end of the day (I believe) whether Android is Open Source or not does not really make a jot of difference in the real world. Most users of Android devices don’t know what Open Source is, or just don’t care. So vindicated or not, Steve Jobs playground antics will not really have any impact on the general public and their apparent love of Android.

Upgrading my HTC Hero to Android 2.2 (CyanogenMod-6)

When I bought my HTC Hero I was both impressed and disappointed. Yes, it was the best phone I’d ever owned, but as it was running Android 1.5 it didn’t do some of the whizzy things that I’d read about; things like Turn-By-Turn Navigation, Speech to Text etc.

When the official upgrade to Android 2.1 was released I was pleasantly surprised to find that Google Navigation (along with Turn-By-Turn) was now available, but again disappointed that Speech to Text was not. Indeed, when I contacted HTC they confirmed that it would not be available for the Hero. Never mind, I thought, I’ll survive – and until recently I have.
One irritation was a result of purchasing the CoPilot SatNav application (I didn’t want to pay data charges when using Google Navigation abroad), which takes up quite a bit of internal storage and limited the number of other applications I could install. If I had Android 2.2 (Froyo) then I could move the application to the SD card and free up some internal space. There was also the issue of Google Reader’s latest upgrade, which included new widgets that were only available on Froyo. So the decision was made: I wanted to move to Froyo – either on the Hero, or I’d buy a new phone running Froyo and have done with it. As it turned out, a casual post on Identi.ca saved me £350 when fellow user ‘0x6d686b‘ directed me to an idiot’s guide to flashing the ROM on my Hero without bricking it.
The guide itself can be found here on the CyanogenMod wiki and had enough detail to convince me to have a go (which is no mean feat, I can tell you).
After backing up everything I wanted off the SD-Card etc, the first step of the flashing process was to root the phone which required downloading a file to the SD-Card and installing a free file manager from the market in order to run/install it – which went without a hitch.
Next was the installation of a Custom Recovery Image, and the guide offered two alternatives: Amon_Ra and ClockworkMod. I opted for ClockworkMod for a couple of reasons: first of all because there was an easy installation method, and also because Dan from Linux Outlaws gave it a good review recently. Flashing the radio went without any problems (whatever that achieves) and then it was on to flashing the ROM itself.
Because I’d downloaded the free version of ClockworkMod I didn’t have the option to download a new ROM from within the application, but I could download it manually and copy it to the SD-Card and install it from there, so that’s what I did. I did however download the wrong Google Apps package the first time around (tiny instead of medium) – this was not a major problem (it still installed) but the correct one had a later version of Maps (and maybe other apps too).
It was here that I hit a minor problem – when I tried to boot into the ClockworkMod Recovery I was greeted with an image of the phone with an exclamation mark through it; not a good sign. The phone did boot normally though, i.e. back into the original ROM, so all was not lost. I reached out to Identi.ca again and my Identi.ca buddy was still online and said that I should just open ClockworkMod and reflash it – which I did. This time the phone booted into the Recovery Manager and I was able to complete the process.
The final reboot took a few minutes but I was pleased to see the skateboarding robot that is the CyanogenMod logo so I knew I was on the right track. Once the boot was complete I was greeted with Android in the raw, i.e. without the HTC SenseUI, and started digging around.
So what’s it like? Was it worth the risk of potentially bricking my phone? Oh yes! The team at Cyanogen have done a great job and my phone has a new lease of life. It’s snappy, responsive and best of all now has Speech to Text available – take that HTC!!! I also have the older Market app installed, which I preferred to the latest one, which wastes far too much of the screen with a pretty but pointless banner (time will tell whether it gets updated or not). I’ve reinstalled all my applications, moved CoPilot to run from the SD card and so far all is well.
It’s not all good news though! There was a problem with the camera on the original ROM which caused the phone to reboot if I took a picture, navigated to the home screen and then back to the camera within a few minutes. Well, the problem is still there – sort of. While it’s apparent that this is not a problem introduced by the new ROM, the phone still has a bit of a fit if I try to reopen the camera after a few minutes. However, it does at least offer the option to Force Close rather than just reboot the phone (which takes about 3 minutes), so at least that’s an improvement. This is not a major issue for me though.
Also, the phone has rebooted itself (but just once) – I was testing CoPilot on my way home from work and without warning the phone just restarted. Now this could be a problem with the ROM or with CoPilot; I’m not sure, but it didn’t happen again – maybe the battery was loose and neither the ROM nor CoPilot were at fault.
Overall I’m very happy with the result and feel that my ‘ageing’ HTC Hero now has a new lease of life, and now that it will fit on the SD-Card maybe I can complete Angry Birds 😉

Accessing my Netgear Stora from Ubuntu

Like many tech-savvy people these days I’ve been thinking about buying a Network Attached Storage (NAS) device for some time, but being more into software than hardware I’ve never really understood enough about them to lay down my hard-earned cash. Well, recently my hand was forced by my external hard drive starting to act up and my girlfriend’s son filling his 70GB HDD with downloaded Flight Simulator extensions (and god knows what else!). After quizzing the infrastructure guy at work about his thoughts I ended up buying the Netgear Stora enclosure along with a Western Digital Caviar Black 1TB drive (another drive to follow next month).

One of the reasons I opted for the Netgear unit was that it stated it was compatible with Linux (in addition to Windows and Mac). As I run Ubuntu as my current desktop (and server) of choice while the rest of the systems in the house are XP/Vista, this seemed to fit the bill nicely. However, as with most products that claim to be Linux compatible, there was no software provided for use on Linux, nor was there any documentation. After trawling the forums I found that Netgear back this claim up by stating (something like) “you can access the device and the files via a web interface”, i.e. it’s compatible with any device with a web browser. Well, I’m sorry but that is not acceptable! How do I get my Ubuntu server in the garage to send it its backups when it doesn’t have a GUI, let alone a browser? I want to be able to mount the device as a drive just like the Windows systems do – surely that’s not too much to ask (especially since – apparently – the Stora is running a version of Linux itself).
Well, as it happens it is possible to mount and access the Stora as a drive and access it just like any other.
With the Stora connected and registered (which needs an internet connection by the way!), and while the Windows systems in the house started chewing up the disk space, I started the hunt for information.
As is normal with these things, the information is seldom in one place. A clue here and a command line there and slowly but surely I homed in on a solution.
First of all I found a post on the Netgear forums describing the process for connecting a Fedora system to a Stora. A good start – if Fedora can do it then surely Ubuntu could too. The problem was that the process used not only Fedora but KDE, and as I use Gnome this was not really useful to me. But, as always with forums, there’s normally hidden gold in the comments and this was no exception. One of the forum moderators posted a link to a post and explained that it would be simpler to configure fstab to mount the Stora instead – adding that he was more familiar with doing this on Ubuntu. The downside is that (apart from the link being down as I type) the configuration didn’t appear to work – not for Ubuntu 10.10 anyway. But all was not lost; looking further down the comments of the original post (to #5) I found a command that did work.
sudo mount -t cifs -o uid=YOUR_UID,gid=YOUR_GID,file_mode=0777 //IP_OF_STORA/FamilyLibrary /media/stora/FamilyLibrary
To find your UID and GID just run the id command in a terminal window (assuming you are logged in as the correct user!).
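If you only want the numeric values on their own, id’s -u and -g flags (standard coreutils options) print them directly:

```shell
# 'id' with no arguments prints uid, gid and group membership together;
# -u and -g print just the numbers needed for the mount options
id
id -u   # numeric UID
id -g   # numeric GID
```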
Also the /media/stora/FamilyLibrary folder needs to exist before executing this command or you can change the mount point as required.
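Creating the mount point is a one-liner; the -p flag also creates the intermediate /media/stora folder if it doesn’t already exist (path as used in the command above):

```shell
# Create the mount point, including any missing parent folders
sudo mkdir -p /media/stora/FamilyLibrary
```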
So that’s it then – well not quite. This will mount the FamilyLibrary and Ubuntu will display a suitable shortcut on my desktop (or at least, it does for me) but it’s not automatic, I’d have to run the command each time I log in – not a major issue but then again not a slick solution either. So could I translate that command line into something that I could enter into fstab? Turned out it was easier than I thought 🙂
The format for an fstab entry is:
device mountpoint filesystemtype options dump fsckorder
Just about all of the information I needed was in the mount command above.
device is the path to the stora (in particular, to the root filesystem we want to mount)
mountpoint is where we want to mount the device to the filesystem
filesystemtype is just what it says!
options is the comma-delimited list in the middle of our command
dump is used by a backup utility called dump to decide if a filesystem should be backed up or not
fsckorder is used by the filesystem check utility fsck to determine the order in which filesystems should be checked
So our fstab entry looks like this (all on one line):
//IP_OF_STORA/FamilyLibrary /media/stora/FamilyLibrary cifs username=USERNAME,password=PASSWORD 0 0
Add this (with the appropriate values) into fstab and run
sudo mount -a
from the command line and (all things being equal) the drive should mount, a shortcut should appear on your desktop and in the Places menu.
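If you prefer to check from the terminal rather than wait for the desktop shortcut, df will show the share once it’s mounted (a quick sketch; the mount point is the one used above):

```shell
# Show the filesystem backing the mount point - once the share is
# mounted this reports the Stora's size and usage, not the local disk's
df -h /media/stora/FamilyLibrary
```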
Job done, yes?
Well almost….
While this does work we have our username and password in plain text in a file which can be opened by anyone (maybe not updated, but read nonetheless). If only there was a way to tell fstab to look somewhere else for the username and password … well, as luck would have it, there is.
First of all we need to create a hidden file in our /home directory which will contain two lines (one for the username and one for the password).
Fire up your editor of choice (I like nano for this sort of thing) 
nano /home/dave/.storacredentials
and enter the following two lines (these are the standard option names a cifs credentials file uses):
username=USERNAME
password=PASSWORD
Save it (CTRL+o then CTRL+x if you are using nano).
Now open up fstab and replace the username &amp; password options with the following (pointing at the file we just created):
credentials=/home/dave/.storacredentials
Now run
sudo mount -a
again and the drive should be remounted with the new configuration. If everything goes to plan then you should not really notice very much.

Finally, there is little point in moving your credentials from one file that everyone can access to another – so to ‘secure’ the .storacredentials file we need to configure it so that only root can read it.
chmod 400 .storacredentials
sudo chown root.root .storacredentials
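A quick sanity check (assuming GNU coreutils’ stat): after the two commands above the file should report mode 400 and owner root.

```shell
# Print the octal permissions and the owner of the credentials file
stat -c '%a %U' /home/dave/.storacredentials
```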
I’ve added another fstab line for my MyLibrary area on the stora by changing FamilyLibrary for MyLibrary – works like a charm 🙂

Configuring Android Debug Bridge Server to start as root

In my previous post I mentioned that one of the problems I faced when trying to install and debug my first Android application was that the Android Debug Bridge (adb) was not running as root. Now shutting the service down and restarting it with sudo was all that was required, but that’s a bit of a faff; there had to be a way to configure adb to be started with root privileges – and there is.

Disclaimer: Before I continue I have to say that I claim no credit for this configuration, I just found it buried in a forum and am merely replicating it here for my own reference, just in case the post is deleted.


I should also point out that my configurations were done on a system running Ubuntu 10.04.


01/07/2012: I’ve just upgraded to Mint 13 and tweaked the script to take account of the movement of adb from the tools folder into platform-tools.

Step 1: Using your text editor of choice create /etc/init.d/adbd and enter the following:

#!/bin/sh
# For ADB daemon (Android Debug Bridge)
case "$1" in
start)
        # Replace the path below with your path to adb
        /opt/android-sdk-linux_x86/platform-tools/adb start-server
        ;;
stop)
        # Replace the path below with your path to adb
        /opt/android-sdk-linux_x86/platform-tools/adb kill-server
        ;;
*)
        echo "Usage: $0 start|stop" >&2
        exit 3
        ;;
esac

When I’m working with system files I normally use nano as my editor and fire it off with sudo so that I have the correct permissions to save the file when I’m done.

Step 2: Make the file executable with the following command:

sudo chmod a+x /etc/init.d/adbd

Step 3: Create a link between the boot process and the above file

sudo ln -s /etc/init.d/adbd /etc/rc2.d/S10adbd
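As an aside, Ubuntu’s update-rc.d tool will create the runlevel links for you, so this is an alternative to the manual ln -s above (I’ve not needed it myself, so treat it as a sketch):

```shell
# Create start/stop links in the default runlevels for /etc/init.d/adbd
sudo update-rc.d adbd defaults
```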

Step 4: Reboot your system

When the system restarts you can confirm that the service is running as root with the following command:

ps -afe | grep adb

On my system this returns two results:

root 1083   1  0 19:59 ?      00:00:00 adb fork-server server
dave 2417  1931  0 20:20 pts/0  00:00:00 grep --color=auto adb

as you can see the server (the first result) is running as root.


Android Development : Moving from Emulator to Device

So I wanted to roll my sleeves up and get going with Android and have come up with a suitable ‘pet project’ to work towards. Now that I’ve got my development environment sorted out, and with my pet project in mind, I decided to do something with the accelerometer. In the Android Development book that I bought there is a sample application to display the current G being experienced by the device as well as the maximum G experienced.

Now the keying in of the code was a useful exercise in itself but there was an obvious flaw in my selection of project – can you guess what it is? That’s right my laptop doesn’t have an accelerometer so even if the code builds (and it does) and runs in the emulator (and it did) the G readings are not going to change no matter how much I shake the laptop around! Doh!

So, the only way forward would be to generate an Android package file and install it on my HTC Hero. Now there were a number of steps I had to go through before I could find out how many Gs I can put on my phone.

Now you can’t just connect your phone, select ‘Build to Phone’ in the IDE and TaDa! There’s a little bit more to it than that – but thankfully not too much.

Assuming that your application builds and runs in the emulator you are almost ready to generate your package. If you intend to debug your application while it is running on a physical device (and why wouldn’t you) then you need to specify this intention in the AndroidManifest.xml file.

  • If you are using the Eclipse IDE then open the AndroidManifest.xml file – it won’t be displayed as a normal XML file but as a more user-friendly configuration form.
    • Locate the ‘debuggable’ property and set this to ‘true’ using the dropdown list.

  • If you are not using Eclipse then open the file in a text editor, locate the main ‘application’ element and add the following attribute to it: android:debuggable="true"

Obviously you will want to reset this back to false if you are about to deploy into production, e.g. to the Android Market.

We are now ready to export our package:

  • Right-Click on the project and select ‘Export Signed Application Package’ from the ‘Android Tools’ menu.
  • The next dialog asks you to select the project to build, but if you right-clicked on the right project then you can just click Next here.
  • Next you need to specify the keystore to sign your application with – don’t panic, there is a default one ready for you to use. This is normally located in a file called debug.keystore in a hidden folder called .android in your home folder.
    • So my keystore is in /home/dave/.android/debug.keystore
    • The default password is (surprisingly enough) ‘android’
    • Click Next and specify a Key Alias
      • Select the default alias of androiddebugkey
      • Enter the password of android
  • The final screen is really a summary of the current configuration as well as some useful information on the certificate expiry.
  • After checking that everything is ok, click Finish and your package will be generated in the location specified.

So the next step is getting this package onto the phone – how hard could that be……………..

The first step (before plugging the phone in) is to turn on USB Debugging on the phone.

  • Access the main menu and select Settings
  • Select Applications
  • Select Development
  • Tap USB Debugging to turn it on

You can now plug your phone into your system. If you are using Windows then you will need to install some additional USB drivers.

Now for the bit that took the time – hence I’m writing it down. You now need to use the Android Debug Bridge (aka adb) to communicate with the phone.

To see if your phone has been detected:

  • Open a Terminal Window
  • Navigate to the tools folder within your SDK location
    • In my case this is /home/dave/Applications/android-sdk-linux_x86/platform-tools (moved from the tools folder ‘recently’)
  • Type ./adb devices and press enter

‘No Permissions’ – what’s that about?

Well (after quite a lot of Googling) I found that this problem is caused by the adb server not running as root – it’s actually running with your normal permissions. Resolving this problem is as simple as stopping the service and restarting it again using sudo.
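For reference, the stop/start looks like this when run from the SDK’s platform-tools folder mentioned above (kill-server and start-server are standard adb subcommands):

```shell
# Stop the adb server that is running under your own account...
./adb kill-server
# ...then start it again with root privileges
sudo ./adb start-server
```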

Job done – I can now access the phone and hence install the signed package I generated earlier by running

./adb install [full path to signed package]

All things being equal you should now find your application in with all of your others where you can tap to run it.

So there we have it – an application installed from laptop to phone. But that matter of having to stop and restart the adb server niggles me a little and I’m going to look into sorting that out. If it needs to run with root permissions then why does it not start up that way? When I have a solution, I’ll post it here – otherwise I’ll forget myself 😉

Moving from Amazon S3 to Dropbox for my server backups

For the past few years I’ve been using Amazon S3 for my remote file storage needs, initially for a blog-type site I created in DotNetNuke for my girlfriend’s daughter to use on her Gap Year trip around the world. The hosting I had at the time was quite expensive in terms of disk space so the low cost of the Amazon service made it an easy choice.

Time passes ….. and I start to use Amazon for storing backups of my websites (this one included), initially manually but more recently I’ve been using plugins to do it for me and just send me an email when they are done. Even with this increase in storage I still only run up a $0.10 bill with Amazon every month – very cost effective I think you’ll agree.
Now I have had no problems with the Amazon service – none whatsoever, so why am I moving away – surely it’s not just to save $0.10 a month! Well no, even I’m not that tight fisted ;-). The problem is that I am charged in US dollars and my UK bank have great pleasure in charging me £1 to perform the conversion from $ to £ – and that annoys me, on principle! Now I’m not going to get on my soapbox about the UK banks but they have already had enough of my money recently and I don’t see why I should cough up another £12 a year for a computer to work out the exchange rate. Amazon confirm that they will only be charging in $ for the foreseeable future so I need to find an alternative solution for backing up my server data.
Enter Dropbox – 2GB of free space and cross platform capabilities to boot. The best part, I already use it to synchronize files between my numerous systems, be it my Laptop or Netbook (both running Linux), my Vista system or my Windows 7 work PC. Saving stuff to Dropbox at home and then having it sync to my work system (and vice versa) has been very useful. But how useful is it for automated processes, i.e. can I find plugins for the WordPress blogs and Drupal sites that I develop and maintain? Well, yes I can (with a little work on the Drupal side of things).
The WordPress side of things is a snap as there is a simple plugin called WP Time Machine that just does what I want. After installing and activating the plugin it is literally a matter of a few mouse clicks, a username and password and a folder name and you’re done. With everything configured you can click the ‘Generate wp Time Machine Archive’ button and the backup files will magically appear in your dropbox folder – marvellous! But what about scheduled backups I hear you ask – well this is handled via a cron script (details of which can be found here). After a few test runs I have the settings I wanted and duly setup a cron job in my hosting panel to run the script on a weekly basis. WordPress sorted, now what about Drupal?
Well first of all there doesn’t seem to be a backup module out there (at the moment) that includes Dropbox as a target service. The Backup and Migrate module for Drupal is probably the best known and widely used solution but it ‘only’ targets S3, FTP and Email. Not to be defeated I Googled around and found a short but informative blog article on the subject of adding Dropbox support to the Backup and Migrate module – so it was possible!
While the article author (Alon Pe’er) has done all of the main leg work in terms of reviewing suitable Dropbox API libraries and producing a patch for the Backup and Migrate module he stops short of explaining how to put it all together into a working solution. That’s not a criticism, just an observation! So in reference to the strapline for my blog – If I don’t write it down myself, I’ll forget how to do it.
The first thing to do is to download the three components I’ll be needing, namely:
  • the Backup and Migrate module
  • Alon’s Dropbox patch
  • the DropboxUploader.php library
Now, to apply the patch I extracted the Backup and Migrate module to my desktop. Opening the extracted folder I navigated to the includes folder and dropped a copy of Alon’s patch into it. How did I know to do that? Well opening the patch file the first line refers to a file called destinations.inc – and that’s the folder it’s stored in (there are probably other ways to determine this – but it worked for me).
I then popped open a terminal window and navigated to the same includes folder and entered the following command:
patch -p0 < backup_migrate-dropbox.patch
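Incidentally, patch has a --dry-run flag that reports whether the patch applies cleanly without changing anything – worth running first if you’re nervous:

```shell
# Report what would happen without modifying any files...
patch -p0 --dry-run < backup_migrate-dropbox.patch
# ...then apply the patch for real
patch -p0 < backup_migrate-dropbox.patch
```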
Finally I dropped the DropboxUploader.php file into the includes folder.
So that’s that – I have the module patched and the Dropbox library in the right place – all I need to do now is copy the backup_migrate folder up to my Drupal installation, activate and configure it.
Configuration is virtually the same as with the normal unpatched version of the Backup & Migrate module so I’ll not regurgitate it here. The only difference is that when configuring a Destination, Dropbox is now an option. Selecting it prompts you for a destination name, e.g. ‘My Dropbox Backup’, a path inside your Dropbox folder, e.g. backups/mydrupalsite and your Dropbox credentials.
And that’s it – with the destination configured it can be applied to a schedule or just run on demand.
Obviously when there is an upgrade to the Backup and Migrate module I’ll have to go through this process again otherwise I’ll lose the Dropbox functionality. There is also the possibility of the patch not working on future releases of the module so I’ll have to be mindful of that.
So with my Amazon S3 account now empty and everything now backing up into Dropbox I no longer have any of those annoying (albeit trivial in terms of value) bank charges – so I suppose they will have to find some other way to fund their bonuses and Christmas parties 😉
Well would you believe it, the day after I upload the patched version of Backup and Migrate to my Drupal sites an upgrade comes out! Well at least I can say that the patch etc works with the latest version (6.x-2.4) of the module!


Running Screwturn Wiki under Mono

So at work we had a need for a Wiki of some description and after evaluating a number of different systems we decided to go with ScrewTurn. This is written in C# and was initially installed on a Windows Server under IIS – and all was well. At the same time I was evaluating the Statusnet microblogging platform – which we then also decided to implement for internal communication (Yammer and Twitter were deemed as not being suitable).

Now, here lies a problem; Screwturn is written in C# and supported to run on Windows but statusnet is written in PHP and I had no end of problems getting all of the requirements (mainly php_curl) running under IIS. On the flip-side, installing statusnet on a Linux platform like Ubuntu is a breeze once the LAMP stack is installed – but what about Screwturn? Obviously I knew about Mono but didn’t have any real experience with it – would it support enough of the .NET Framework to run the Wiki? If so then we could implement both Screwturn and Statusnet on an Ubuntu server (actually a VM) in our DMZ – replacing the ageing tin box Windows server.

The upshot is that Screwturn WILL run on Ubuntu under Mono – with some very minor tweaks to a small number of files. You don’t need to rebuild the source, you don’t need to know any C# – assuming you have the LAMP stack and Mono installed already, all you need is a text editor and you are good to go. If you just want to see how I did it rather than read about how I worked it out then just scroll to the bottom of this post – I’ve summarised it there.

My starting point was to create a Virtual Machine running Ubuntu Server 10.04 LTS (remember this is going to be a production system and using the LTS instead of the recently released 10.10 made sense from this point of view). After installing the LAMP stack (click here if you need help with this) and Mono (click here for the step by step installation that I used for my investigations) I set about downloading Screwturn itself.

Bearing in mind I installed the server with no Desktop I used good old wget to fetch the latest package, locating the URL by hovering over the download link on the website.

My command line was;

wget -O screwturn.zip http://www.screwturn.eu/GetFile.aspx?File=/Releases/ScrewTurnWiki-

Obviously this will change as new versions are released but you get the idea.

After unzipping this package (for which I needed to install ‘unzip’ from the repo) into a new folder I created in the Home folder I copied the WebApplication folder into /var/www and renamed it to screwturn. Reading the Install.txt file (back where I extracted the zip file I downloaded earlier) I decided to leave the master password for the time being but did update the permissions on the public folder using chmod a+w /var/www/screwturn/public and then restarted Apache (just for good measure).
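Condensed into commands, the steps above look something like this (the WebApplication folder name comes from the zip I downloaded; yours may differ):

```shell
# Unpack the download into a scratch folder in Home
mkdir ~/screwturn-extract && cd ~/screwturn-extract
unzip ~/screwturn.zip
# Copy the web application into the web root under the name 'screwturn'
sudo cp -r WebApplication /var/www/screwturn
# Let the application write to its public folder
sudo chmod a+w /var/www/screwturn/public
# Restart Apache for good measure
sudo /etc/init.d/apache2 restart
```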

With fingers crossed I navigated from my laptop browser to the VM location of the new ‘site’ and after about 15-20 seconds (quite common for an ASP.NET site starting up) the Screwturn Home Page was displayed – job done yes?

Well, actually no. While clicking on some of the menu links, like ‘All Pages’, ‘Categories’ or ‘Navigation Paths’ took me to the appropriate pages, clicking on ‘Create a New Page’ or ‘Administration’ resulted in a Yellow Screen of Death.

As an ASP.NET developer I’m quite used to seeing these and duly edited the web.config file (as indicated in the YSoD) to turn Custom Errors off, restarted Apache (just to be sure) and tried again. After a short delay (ASP.NET startup remember) yet another YSoD was displayed – this time telling me that it could not find a file called MasterPageSA.master – progress of sorts!

Listing the files in the root of the application showed that the file was actually present so I wondered if it was the tilde ‘~’ character that was causing the problem – maybe Mono didn’t like it. I knocked up a quick test application with a master page and it worked fine so that ruled that out. Then I noticed that the filename of the actual file was MasterPageSA.Master (note the upper case M in the extension). “Surely Not”, I thought but renamed the file with a lower case m instead and clicked on refresh and again success of sorts – a few seconds later a different Yellow Screen of Death appeared – this time with what appeared to be a Javascript error.

Googling for this error I was led to the Screwturn Wiki forum where a guy appeared to be pointing the finger at Screwturn’s use of Webparts and Mono’s lack of support for this feature. Damn – was this a dead end? I ‘tweeted’ my frustration and within a few minutes I got a reply from the ScrewTurnWiki Twitter account saying that the application didn’t use Webparts but it did use an AJAX component called Anthem.NET which might do.

Ok, how hard can it be? Screwturn is written in C# and I develop in C# for a living. Surely I can sort this out. So I downloaded the source code (the joy of Open Source) and opened it in Visual Studio 2008 on my Windows system. After checking that everything built and ran ok I located all of the references to the Webpart namespace and commented them out! An extreme approach but I needed to see if the code made use of the namespace or if the references were left over from previous development. I did not, for one second, expect the code to build but was quietly pleased when it did! So whoever was on the Twitter account was right – Screwturn does not use Webparts. So what about this Anthem.NET component?

Well Anthem.NET provides AJAX functionality to the application and while I did not relish having to tear this out and replace it with MS AJAX I removed the reference to the library, fixed all of the compilation errors resulting from the loss of the reference and after a couple of hours I had the code building again – but would it run?

The answer was, surprisingly, yes. However the Yellow Screen of Death with the Javascript error was once again displayed when clicking on the ‘Create a New Page’ menu item. All that work and I had achieved nothing – or had I? Well I knew that Webparts were not causing the problem and neither was the Anthem.NET library – good news as I didn’t have to replace it with MS AJAX. So that meant that the problem appeared not to be caused by Mono not supporting aspects of the .NET Framework. But how was I going to move forward?

Well, the only thing I could think of was to flip back to Ubuntu and run the source in a development IDE like MonoDevelop – which is what I did (on my laptop, not the server!). It was in MonoDevelop where I found the last piece of the puzzle.

While running the application in MonoDevelop and clicking on the ‘Create a New Page’ menu item I saw this stack trace:

Googling the Parser error I came across a forum posting indicating that Mono was a little stricter when it came to parsing and that the angle brackets in Javascript were particularly troublesome. The poster overcame the problem by commenting the Javascript out – a trick once used to stop older browsers, which did not support Javascript, from trying to interpret the script. Looking at the code for the Editor.ascx control I could see that the two script blocks were commented out already – but not consistently.

The blocks looked like this:

<!--
Javascript in here
// -->

Notice how the closing comment tag has the // comment notation while the opening one doesn’t. Could this be the problem? Well after kicking a few permutations around I found that if I changed the tags around both of the script blocks as below then the ‘Create a New Page’ page loaded fine (after prompting me to login) – indeed so did the other pages that failed to load earlier – because the Editor control is present in each of them.

// <![CDATA[
Javascript in here
// ]]>

There was one more tweak though. With the ‘Create a New Page’ loaded, clicking on the ‘Visual’ tab resulted in an error being displayed within the page itself (not a Yellow Screen of Death).

The error message indicated that it could not find a file called IframeEditor.aspx – well knowing from above how strict Mono is about casing I assumed (correctly) that the file was probably present but cased differently – which proved to be the case. Locating the reference to the incorrectly cased file and changing it to IFrameEditor.aspx resolved the issue.

And that’s that really – A few tweaks and Screwturn Wiki is running perfectly well on Ubuntu under Mono. The changes have been fed back to the guys at Screwturn and hopefully they will see the benefit of incorporating them into the main build – assuming they do not break anything else when running under Windows/IIS that is.

So in summary the required changes are:

  • Rename MasterPageSA.Master to MasterPageSA.master (lower case ‘m’ on extension)
  • Replace the comment tags around the two Javascript blocks in Editor.ascx as below
    • <!-- needs to be replaced with // <![CDATA[
    • // --> needs to be replaced with // ]]>
  • Locate the reference to IframeEditor.aspx in the Editor.ascx markup and change it to IFrameEditor.aspx (upper case F)

Note: I am using the Wiki in File Storage mode and have not attempted to connect to a database MySQL or otherwise. The File Storage mode fits our needs at this time.

So Skype on Android, What’s it like?

So today I found out that Skype have released an app for Android 2.1+ devices and just had to give it a whirl. I have friends in France and Thailand that I’d like to keep in touch with and Skype is a very cost effective way of doing so. Ok I could use the full application on my laptop/netbook (yep – the Linux versions run just fine) but hard to believe as it may be, I don’t have them turned on all the time. Having Skype in my pocket would be fantastic, but would it work?

Well the answer is (as usual) it depends! I initially tried it on my work’s WiFi and the experience was not great. I called the Test number and it was like I had a bad line – the speech was broken and laggy. When I recorded my message to be played back it too came back slightly broken.
But not to worry, this was my work’s WiFi during my lunchbreak, god knows who was downloading what at that time. I took a wander around the town (need to get away from my desk for at least 30 mins a day) and tried the test call again. Unfortunately I can only get the EDGE network near my work and the experience was no better, so I’d have to wait until I got home to my own WiFi, where I can actually get a 3G/HSDPA signal (depending on where in the house I stand).
Well the news is a little better from home 🙂 but not much 🙁
Using my WiFi (not super fast but still 4.5 – 5 Mbps) I still experienced the choppy connection. However, on a 3G and HSDPA network the speech quality improved to a point where it would be easily usable, especially with the latter. Odd results there – I would have thought that I would be better off on my WiFi (that was the hope) but this does not appear to be the case.
There is always the possibility that my HTC Hero, which is not listed as being supported, is just not up to the job. I hear it runs fine on the Nexus One which is not supported either. It would be a shame as I really like the Hero; I’m not saying I’d get a new phone just to get Skype working, but I’m not ruling it out either. I mean, looking at the other features of the app it does look really good, but it’s not a lot of use if I can’t make calls with it, is it 😉
Well my Camera just caused my phone to reboot (old news) and now I can’t login to Skype at all, but it seems to fail a little bit too quickly, you know – like it’s not really trying. Maybe a glitch with the service (although I can login via the desktop).
[Update: 06/10/2010: This morning I seem to be able to login again, although the call quality is still the same 🙁   ]

Writing a plugin for Statusnet : Part 2

In a previous post I configured my development environment so that I could start work on my Statusnet plugin. Well, a little later than anticipated, I’ve managed to get enough time together to get a working plugin – although it’s not perfect by any means.

Initially I had wanted to use the PingFM API to send notices from our internal statusnet instance through PingFM to configured services such as Twitter, LinkedIn and Facebook. However, to get an API key you need to apply to PingFM directly and this is a lengthy process (just look at their Google Groups page and you’ll see people chasing up requests which are weeks/months old). While my approval was pending I opted to develop the plugin to take advantage of PingFM’s ‘Post By Email’ functionality. Basically each user has a unique email address which they can send posts to. These are then distributed to the services they have configured on their account. This, in my opinion, is not as slick an approach as using the API but it’s better than nothing.

However, shortly after getting the plugin working I received approval from PingFM and subsequently I updated the code to send posts through the API. This didn’t really require a lot of changes to the code so I’ll just concentrate on the API approach.

Cast your mind back to my previous post and you will recall that I downloaded a Helper Plugin which I saved into the /local folder beneath the statusnet installation. This gave me a basic template to work with, so after removing all of the functions I did not need I saved the template as PingFMPlugin.php in the /local folder.

The specification of the plugin states that only certain users can post to PingFM and moreover, only selected notices should be posted, not all of them. I decided to implement a !pingfm group: basically any notice posted to this group, i.e. containing the text !pingfm, would be processed accordingly.

The first thing to do was to determine whether the user was authorised to post to PingFM and to do this I opted for a simple text file containing colon delimited usernames and associated Application Keys (which each user can access by logging into PingFM and clicking the Application Keys link) with each line being terminated by a semi-colon (as below).
jbloggs:62228f3fb09b60cc73c34d6dd8149fe9-0987654321;
So now all I need to do is to scan this file (in our case less than half a dozen records) and compare the usernames with the currently logged in user, if there was a match then the user was authorised and the notice would be sent to PingFM via the API.

So our OnEndNoticeSave function now looks like this:

function onEndNoticeSave($notice)
{
        // This event will fire once the notice has been saved

        // Was the notice sent to the !pingfm group?
        $pos = strpos($notice->content, '!pingfm');
        if($pos !== false) {

            // If so, resolve the list of 'authorised' PingFM posters and match to the appropriate PingFM user
            // The mappings file holds one record per line:
            //      username:pingfm_application_key
            //      jbloggs:62228f3fb09b60cc73c34d6dd8149fe9-0987654321

            $userFile = file_get_contents('/var/www/statusnet/local/userMappings.txt');
            $userMappings = explode(";", $userFile);
            $user = common_current_user();

            // ... check for matching username
            foreach($userMappings as $mapping) {
                if(strpos($mapping, $user->id) !== false) {
                    // User is authorised to post to PingFM - process notice
                }
            }
        }
}


So far we have a $notice object which contains the original notice and a $mapping string which contains the username and api key of the user who has posted the notice into the PingFM group. What we need now is to package this up in a suitable form to send to the PingFM API and for this I’m using cURL and its curl_exec command in particular.

The PingFM API documentation states that the user.post method requires the developer api key, user application key, post method and the notice body itself.

This is how I’ve implemented this in the code:

if(strpos($mapping, $user->id) !== false) {
    // The mapping will be in the form username:pingfm_application_key
    //  split these out into an array
    $userDetails = explode(":", $mapping);
    // $userDetails[0]: Username
    // $userDetails[1]: PingFM User Application Key

    // Trim off the !pingfm group 'tag'
    $trimmedNotice = str_replace('!pingfm', '', $notice->content);

    $curl = curl_init('http://api.ping.fm/v1/user.post');
    $curl_post_data = array(
        "api_key" => "",          // your PingFM developer API key goes here
        "user_app_key" => $userDetails[1],
        "post_method" => "default",
        "body" => $trimmedNotice,
    );
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_POST, true);
    curl_setopt($curl, CURLOPT_POSTFIELDS, $curl_post_data);
    $curl_response = curl_exec($curl);
}

The above code can be downloaded here.

Note that I trim off the !pingfm text from the original notice as this really only has an internal meaning, i.e. we are using it as a flag or trigger to forward the notice.

So that’s it. With this in place, the usernames/application keys in the /var/www/statusnet/local/userMappings.txt file and the plugin enabled in config.php (see previous post) we are good to go. Authorised users can append !pingfm to their notices and, in addition to the internal stream, they will be sent out to their configured services, such as Twitter and FaceBook. All from the web portal/client they are currently used to.