Tuples are Evil and Should be Treated as Such

While working on a particular area of a client’s code (C#) I came across a method with a signature similar to the following:

public Tuple<bool, decimal, decimal, decimal, int, int> CheckState(int aParameter)

At first glance you may think that there is nothing wrong with this but think about it – what does that bool actually represent? What about the second of the decimal values? Well, the developer had obviously thought about this and had added a comment above the signature – but it only specified 5 values, not the 6 that are in the Tuple’s definition (that’s because code comments lie).

But the problems with Tuples don’t stop with not being able to see what the method is returning – oh no. What happens when you are on the other side of the method – calling it and consuming the response?

var stateResult = CheckState(1);

bool isValid = stateResult.Item1;
decimal highValue = stateResult.Item2;
decimal lowValue = stateResult.Item3;
decimal alarmValue = stateResult.Item4;
int inPort = stateResult.Item5;
int outPort = stateResult.Item6;

What is not obvious from this is that when you are using Visual Studio, IntelliSense is not going to help you out here – Item1, what is that again? OK, you may be able to see it’s a bool, but how much does that help you? What about the decimal values? Which is which?

Obviously the developer who put this together needed to return a handful of values but why oh why did he use a Tuple instead of creating a simple POCO/DTO?

public class StateResult
{
    public bool IsValid {get; set;}
    public decimal HighValue {get; set;}
    public decimal LowValue {get; set;}
    public decimal AlarmValue {get; set;}
    public int InPort {get; set;}
    public int OutPort {get; set;}
}

The method signature now becomes:

public StateResult CheckState(int aParameter)

While the usage would now be:

var stateResult = CheckState(1);
bool isValid = stateResult.IsValid;
decimal highValue = stateResult.HighValue;
decimal lowValue = stateResult.LowValue;
decimal alarmValue = stateResult.AlarmValue;
int inPort = stateResult.InPort;
int outPort = stateResult.OutPort;

For the sake of less than a dozen lines of code it has become more readable, almost self-describing, and easier to use and maintain.

So the next time you think about using a Tuple – remember, the next developer may know where you live and may own an axe 😉

 

Configuring Dropbox on an Ubuntu Server – Pt 1

If you’ve been reading any of my recent posts you will know that I have been moving from a shared server host to a virtual private server and that I have also experienced issues with my backups. Well, with the blog configured and running smoothly, the migration was almost complete – but what about backing up my data? I’d already seen how easy it was to run into problems so this time I wanted to be a little bit more in control and not just blindly rely on third-party plugins. I was keen to use Dropbox but was unsure about installing and configuring it on a headless Ubuntu Server. As it happens it was not really that difficult to get it up and running.

The Dropbox Wiki has a very good article on configuring Dropbox on a text-based Linux system like mine but there are a couple of steps that are a little light on detail and it does not really flow very well so it’s easy to miss something and end up scratching your head for a while. I’ll not reproduce the entire article here but just highlight the areas I think need a little bit more explanation.

Using the ‘step-by-step’ setup process in the above article everything goes fine until step 7 at which point you are asked to navigate to a particular URL, from the server, to link the installation to a Dropbox account. Well there’s a problem – we are running a headless server so we can’t really run a browser can we. Well actually yes we can – just not a very pretty one!

The article suggests the installation of a text-based browser called lynx which can be installed with the following command:

sudo apt-get install lynx

So once you have lynx installed you will need to have two ssh sessions running to complete the sync process, one to start the daemon which prompts you with the appropriate URL and one to run the lynx browser in to navigate to it.

In one of the sessions enter

~/.dropbox-dist/dropboxd

and when the URL is displayed use the mouse to select it and copy it to the clipboard using CTRL+C. Now in the other session, enter

lynx

and you should be presented with the lynx start screen.

It should come as no surprise that the browser is driven by the keyboard and to navigate to a URL we just need to press ‘g’ and paste the value copied from the other session (or you could just type it in) and then press ‘Enter’. At this point you may get a few messages about accepting cookies etc and I just pressed ‘A’ to accept all (although I needed to do this a couple of times!). The resulting webpage may not look like Dropbox – but in text mode it’s as close as we’ll get.
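As an aside, lynx will also take a URL directly on the command line, so you could probably skip the ‘g’ step entirely by passing the link as an argument – I’ve not tried it this way myself, and the host_id below is just a placeholder for whatever dropboxd actually prints:

lynx 'https://www.dropbox.com/cli_link?host_id=xxxxxxxxxxxxxxxx'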

Using the TAB key you should be able to jump down to locate the Username and Password fields. Enter the appropriate values and then TAB down to the ‘Submit’ text/link and press ‘Enter’.

After successfully logging in you should now be able to TAB down to locate a prompt asking you for your password again to link the host server with the current Dropbox account. Do this and TAB down to the ‘Submit’ button and press ‘Enter’. If successful then the other ssh session should now display a welcome message.

In the lynx session, exit the browser by pressing ‘q’. You should now find a Dropbox folder in your /home directory which will contain (or still be syncing) all of your existing files (or the default Dropbox ones if you linked to a fresh account).

So that’s that then – well not quite. We currently have an ssh session locked running the daemon – if we close it down then the daemon will also close. So we need some way of running it as part of the normal startup routines which can be achieved with some scripts in the appropriate locations. I’ll run through this part of the wiki article in part 2.
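In the meantime, if you just want your session back without rebooting, a quick (and admittedly crude) stopgap is to kill the daemon and relaunch it detached from the terminal – something along these lines should do it:

nohup ~/.dropbox-dist/dropboxd > /dev/null 2>&1 &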

Configuring vsftp on Ubuntu Server 10.04 LTS

So my move from shared hosting to virtual private server (VPS) was going well so far – I’d moved this blog across and wired up Exim4 to handle all my emailing needs. Then yesterday I saw an interesting article on Lifehacker about the MobilePress plugin for WordPress which rendered the blog in a format more suitable for mobile devices. Excellent, I thought – I’ll have that! That’s when I found out that the blog itself was not as ready as I thought it was – the fact was I could not install any plugins due to the lack of an FTP server on my VPS – so my next job was to resolve that!

When I tried to install the MobilePress plugin I was presented with a page requesting FTP credentials – which I didn’t have.

But once again Open Source came to the rescue through the vsftpd project which claims to be the fastest and most secure FTP server for Linux systems. A bold claim indeed, but surely this means massive amounts of configuration? Well, as it happens (and for my purposes at least) this is not the case. That said, all of the tutorials I found out on the web were missing one key step which left me scratching my head for a little while until the penny dropped.

First of all I installed the package from the standard repository:

sudo apt-get install vsftpd

Next I needed to make a quick change to the main configuration file, which is located at /etc/vsftpd.conf, by making sure that the following lines are uncommented:

local_enable=YES
write_enable=YES

With the update made and the file saved I restarted the FTP server

sudo /etc/init.d/vsftpd restart

This is the point where all of the tutorials I found move on to securing the server further (it is apparently pretty secure already so these steps are optional). But at this stage I should have been able to access my FTP server from an external client as well as configure the blog to use it for its plugin installation downloads – and I could do neither.

I checked that the ftp user existed, changed its password and made sure it was not in /etc/ftpusers, which (ironically) lists users who cannot access via ftp. I checked that the appropriate ports were open and that the service was in fact running. Everything looked fine.
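For the record, those checks were nothing fancy – roughly the following commands (adjust to suit your system):

grep '^ftp:' /etc/passwd        # does the ftp user exist?
cat /etc/ftpusers               # users who are *denied* ftp access
sudo netstat -tlnp | grep :21   # is anything listening on the FTP port?
ps aux | grep vsftpd            # is the daemon actually running?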

Then I wondered: what if the ftp user is a special, restricted user? What would happen if I created another user?

sudo adduser ftpuser

and after specifying a password (and accepting the default values for the other user-related fields) added them to the ftp group:

sudo adduser ftpuser ftp

Tada! I could now access the FTP site from my local browser and after entering the appropriate information into the WordPress configuration screen (using localhost for the hostname and the username/password I’d created) I was in business. I finally installed the MobilePress plugin (remember that?) and the blog now provides a more mobile-friendly experience.

Migrating Blog to Virtual Private Server with Exim4 email support

In my previous post I explained that I was migrating this blog from a shared server provided by 5quidhosting onto a virtual private server (VPS) provided by BHost. In theory this should have been quite straightforward but in practice, and bearing in mind I’m a developer and not an infrastructure boffin, there was a lot more to it than just moving zip files around and running a few commands.

That said, that’s exactly where I started off – I accessed my 5quidhosting reseller control panel and backed up my home directory for this blog and the MySQL database associated with it. After creating a suitable folder within /var/www I uploaded (via scp) the home directory backup and extracted the public_html folder into the newly created location. Next I created a new database in MySQL, created a new user and then GRANTed the appropriate permissions to that user for that database. Finally, I uploaded the backup of the MySQL database (actually just a SQL script) and read it into the new database.

CREATE DATABASE onthefence;
CREATE USER 'onthefence'@'localhost' IDENTIFIED BY 'secret_password';
GRANT ALL PRIVILEGES ON onthefence.* TO 'onthefence'@'localhost';
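Reading the backup script into the new database is then just a case of something like this (the filename being whatever your exported SQL script is called):

mysql -u onthefence -p onthefence < blog-backup.sql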

Finally I opened the wp-config.php file and changed the DB_NAME, DB_USER and DB_PASSWORD values to the ones I used above.

 

At this point I was able to access the blog using a url like this: http://78.129.251.119/blog. But that’s not really very nice so a little bit of Apache tweaking was called for. I added this to the top of /etc/apache2/sites-enabled/000-default

ServerName onthefencedevelopment.com
ServerAlias *.onthefencedevelopment.com
DocumentRoot /var/www/blog
<Directory /var/www/blog>
        Options FollowSymLinks
        AllowOverride None
</Directory>
Now I could load up the blog by navigating to http://78.129.251.119 (no path to the actual sub-folder). While the home page was displayed fine, when I clicked on any of the posts I was taken to the live blog rather than navigating to the page on the VPS instance. Why? Well, because when WordPress saves some of its data it saves the fully qualified domain name too, i.e. the full path including the http://www.onthefencedevelopment.com part. So I’d gone as far as I could go without actually breaking the link to 5quidhosting. By resetting the nameservers I’d essentially go live with the VPS instance of the blog.

 

Safe in the knowledge that I could always repoint the nameservers back to 5quidhosting, I bit the bullet and did it.

 

With the nameservers reset to my registrar’s defaults I was able to create two A records in my DNS settings. An A record is really what maps a domain, e.g. onthefencedevelopment.com, to an IP address. So why two records? Well, initially I just created one for ‘www.onthefencedevelopment.com’ and after a short wait (nowhere near the 72 hours quoted by many a sys admin) this took effect – but the blog itself failed to load. The reason was that I’d configured it without the leading www. Notwithstanding the arguments about which is right, it’s probably prudent to have the two variants configured, i.e. www.onthefencedevelopment.com and just onthefencedevelopment.com, both pointing to the same IP address. A short wait later and bingo – the blog was up and running. Finished, yes? Erm, actually no.

 

As mentioned in my previous post, 5quidhosting also provided email hosting. This was not only my Inbox for the domain but also my outgoing email server. The new blog did not have any way of sending out email notifications and until it did I could not really consider this instance to be ready for service.

 

I’d already decided to switch my email hosting over to Google Apps and will blog about that separately but even with that sorted and my mailboxes accessible via the GMail interface I was still no closer to getting the server configured.

 

I’d blogged about sending emails from the command line on an Ubuntu server already so surely I was laughing – my blog had come to its own rescue. Well, not quite. In the previous configuration I’d used a package called nullmailer which, paired with the mutt email client, worked well for me using my previous outgoing email server (provided by 5quidhosting). The problem is that the smtp servers provided by Google Apps required a level of encryption that nullmailer did not support. But all was not lost because, as usual in the open source world, there are normally numerous solutions to a problem. In this instance the alternative came in the form of Exim4. Exim4 provides the support that the Google Apps server required and replaced nullmailer in the solution.

sudo apt-get install exim4 mutt

With the packages installed I followed the process defined on the Debian Wiki (Ubuntu is Debian based after all) and within about 10 minutes I was able to run the following command to get mutt to send me a test message:

echo "This is a test" | mutt -s "Test Message" nobody@nowhere.com

A final test was to get the blog to send me an email – which I did by logging out and requesting a password reset, which duly arrived a few minutes later.

 

Moving from Shared Hosting to a Virtual Private Server

Until recently I have been hosting this blog, and a few other sites and proof of concepts, using a reseller account provided by 5quidhosting costing, well £5. Now before I continue I should say that I have been very happy with the service they have provided me over the past two years or so and have no hesitation in recommending them for standard hosting services. The server performance was good and on the few times I needed to contact support their response was prompt and efficient. So, why have I moved this blog? Well, it’s all about access, or more accurately the lack of it.

I am of course talking about SSH access which was not provided on the 5quidhosting servers due to previous hack attempts. This is fair enough, most users will have no need for this level of access – indeed neither did I for two years.
 
In many of my previous blog posts I have been using Ubuntu Server to host my own source control server (Subversion), numerous Drupal test sites and a series of scripts monitoring everything from the stability of my broadband connection to what time my girlfriend’s son turned his computer off on a school night. All this was done using an old grey tower PC in the garage – which I of course accessed via SSH. Now this has been a good workhorse but it’s an old PC and sooner or later it’s going to fail on me, so on the recommendation of a colleague at work I provisioned a virtual private server from BHost.
 
Despite the rather uninspiring website, BHost provide a very good service at a very affordable price. I opted for the Bronze configuration (20GB HDD and 1TB bandwidth) for less than £5. The control panel is really easy to use and I was able to reinstall the server with Ubuntu 10.04 64-bit (replacing CentOS) in just a few minutes.
 
I was also going to have to source an alternative email provider – 5quidhosting was providing that for me but BHost did not. Although I could have used the VPS as an email server (my work colleague has done so – but he’s an infrastructure guru so that’s only to be expected!) I decided to go down the Google Apps road. This was pretty simple to do and once I’d set up my MX records I was in business – more on this in a future post.
 
So now I had my VPS, what was I going to do with it?
 
Well, I made a list:
  1. Relocate this Blog

    • The very fact you are reading this means I succeeded here but the above three words grew arms, legs and teeth (by which I mean ‘not as easy as I’d thought/hoped’)
  2. Relocate my proof of concept Drupal sites
  3. Relocate my SVN server from the garage and implement SSL access

    • I want to be able to access my code securely from home or from customer sites without any bandwidth issues or failed access due to someone unplugging the server to plug the pressure washer in..!!
  4. Install and configure CruiseControl

    • This is mainly for my Android development – I want to be able to perform continuous integration builds as well as publishing to the Android Market with minimum effort (or risk of forgetting one of the many steps along the way).
  5. Configure the Dropbox daemon and use it for backups of all the above

    • Using plugins etc that use the Dropbox API is OK but I’m getting a lot of failures at the moment, apparently due to the Web API
As ever, so that I don’t forget anything, I’ll be blogging my progress so if you are interested in any of the above (or have some advice or suggestions) then please leave a comment.
 

When is a backup not a backup ….. ?

The answer is of course, when you can’t restore from it. This is the situation I found myself in last weekend after receiving an email from a friend saying that he tried to post a comment but only received a blank page for his efforts. Sure enough, when I tried I got the same result.

Logging into the WordPress admin panel I was greeted with a message saying that the Akismet anti-spam plugin had not been loaded due to an error. Sounds like the most likely suspect, so what do I need to do about it? Well there was not an update available so it can’t be that. However, I had recently upgraded to WordPress 3.1 so maybe that’s the culprit. So clicking on the Reinstall button might sort me out right …. that’s where the problems really started.
 
Now before you start thinking “you forgot to backup didn’t you!?” the answer is no, well not quite. I have configured the wpTimeMachine plugin to perform a weekly backup and send it to Dropbox so I presumed I was ‘safe’.
 
Well after clicking Reinstall and returning to the site all I saw was a php error – not good. Worse still was the fact that I could not access the WordPress dashboard either. In short the site was hosed – but I had last week’s backup so I was fine right…..?
 
A wpTimeMachine backup consists of five files, a zip file containing the contents of wp-content, a sql script containing all the data, the htaccess file, a script and a text file containing restoration instructions which assumes that you’ve lost the site entirely and are starting with an empty directory. The general gist is to get a vanilla install of WordPress running, extract your content into the new installation and restore your data.
 
Simple eh? Well no, not when the archive containing the content files, i.e. plugins, themes and uploads, is corrupted! Opening the archive displays a single folder (an upload from November last year) – but the file is 13.5MB! Attempting to extract the archive resulted in an error saying “End-of-central-directory signature not found” – this is not good!
 
Well I was not trying to recover from a total loss – I still had my content and my database. So I decided to create my own backup along the same lines as wpTimeMachine and see how I got on. Creating a vanilla install of WordPress 3.1 was pretty straightforward as was creating a local copy of my database (using the wpTimeMachine sql script). All that remained was to pull the contents of my wp-content down and copy that into my local instance. Bad news was that trying to run the local instance resulted in the same errors – but that was because the links were still pointing (from my local instance) to the live site.
 
I zipped up the local instance and ftp’d it up to the site and, with breath held, I unzipped it. With breath still held, I attempted to login to the admin panel – SUCCESS!
 
Well, the site is back online (obviously) but there is still the problem of the corrupted backups – but at least I know about it now and can do something about it.
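The first ‘something’ on my list is to actually test each archive once it lands in Dropbox rather than assuming it’s good – even a quick check like the following (the filename is just an example) would flag a corrupted zip straight away:

unzip -t wp-content-backup.zip && echo "archive OK"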

Android: Not Free as in Speech, but does it really matter?

So I was at work the other day and a tweet grabbed my attention. It said “Steve Jobs vindicated: Google Android is not Open” and linked to an article on The Register. After reading through the article I was reminded of my childhood when kids would say things like “yeah, well you eat snails!“. Steve Jobs vindicated!! Well I bet he’ll sleep better for all of that!

It’s not that I am an Apple basher, I have an iPod Nano after all, but how can a company with no Open Source culture to speak of start pointing the finger at another? It’s like being back in the playground!
 
The fact of the matter is that Android is not a totally Open Source platform, but it never has been. As the article points out, Google Maps and the Android Market Place are proprietary and I would not expect either to be made open source any time soon. But does that matter? Judging by the number of people buying Android handsets I’d say not at all! It has to be said that most people buying Android devices are oblivious to the fact that they are touted as open source with many of them not knowing what that means anyway – it simply does not matter!
 
I opted for an Android device well over a year ago now and can’t see myself buying an iPhone (or Windows Phone 7) in the near future and this has nothing to do with the ‘Freeness’ of the operating system. Bear in mind that the Windows Phone 7 platform was not released when I made my decision so I had a straight choice between Android and iOS.
 
As a developer I was interested in writing applications for the mobile platform and Apple’s Developer Agreement was far too restrictive for my liking and with draconian decisions like iOS not supporting Flash® and then all the controversy surrounding MonoTouch my gut told me to steer clear. That said, a big part of the decision was that I already know how to program in Java so had a head start with Android, whereas iOS requires me to learn Objective C. I was also able to download the Android SDK without needing to register with Google (although some people would say that they knew I was there anyway!) while Apple requires me to register with the site first (which will result in more Apple-based Spam in my Inbox – I mentioned that I already had an iPod!). Then there is the cost of developing for the Apple App Store, albeit only $99/year, while targeting the Android Market Place is only $25. This does have its drawbacks, the recent Android Market Place ‘Hack’ for one and just plain crap applications for another, and highlights the Danger of Free but life is all about mitigating risk, some people pay to do it, others apply common sense and will need to deal with the consequences if they get it wrong.
 
But of course, most of the people buying Android devices are not developers, just the run of the mill public, so why do they do it? I think that it all comes down to choice. In Apple’s corner there is the iPhone while in the Android corner there are dozens of phones from HTC, Samsung, LG, Motorola and Sony Ericsson. While the iPhone is a fantastic piece of technology (no denying it), choice is good, people like choice. That said, they are also mindful of the cost of ownership so they tend to go for the devices that do all of the flashy things that the iPhone does (and of course the Flash® things it cannot do) without having to spend £200 plus having to take out a two year contract.
 
So at the end of the day (I believe) whether Android is Open Source or not does not really make a jot of difference in the real world. Most users of Android devices don’t know what Open Source is, or just don’t care. So vindicated or not, Steve Jobs playground antics will not really have any impact on the general public and their apparent love of Android.

Upgrading my HTC Hero to Android 2.2 (CyanogenMod-6)

When I bought my HTC Hero I was both impressed and disappointed. Yes it was the best phone I’d ever owned but as it was running Android 1.5 it didn’t do some of the whizzy things that I’d read about; things like Turn-By-Turn Navigation, Speech to Text etc.

When the official upgrade to Android 2.1 was released I was pleasantly surprised to find that Google Navigation (along with Turn-By-Turn) was now available but again disappointed that Speech to Text was not. Indeed, contacting HTC they confirmed that it would not be available for the Hero. Nevermind I thought, I’ll survive – and until recently I have.
 
One irritation was a result of purchasing the CoPilot SatNav application (I didn’t want to pay data charges when using Google Navigation abroad) – it takes up quite a bit of internal storage, which limited the number of other applications I could install. If I had Android 2.2 (Froyo) then I could move the application to the SD Card and free up some internal space. There was also the issue of Google Reader’s latest upgrade, which included new widgets only available on Froyo. So the decision was made: I wanted to move to Froyo – either on the Hero or I’d buy a new phone running Froyo and have done with it. As it turned out, a casual post on Identi.ca saved me £350 when fellow user ‘0x6d686b‘ directed me to an idiot’s guide to flashing the ROM on my Hero without bricking it.
 
The guide itself can be found here on the CyanogenMod wiki and had enough detail to convince me to have a go (which is no mean feat I can tell you).
 
After backing up everything I wanted off the SD-Card etc, the first step of the flashing process was to root the phone which required downloading a file to the SD-Card and installing a free file manager from the market in order to run/install it – which went without a hitch.
 
Next was the installation of a Custom Recovery Image and the guide offered two alternatives, Amon_Ra and ClockworkMod. I opted for ClockworkMod for a couple of reasons, first of all because there was an easy method and also because Dan from Linux Outlaws gave it a good review recently. Flashing the radio went without any problems (whatever that achieves) and then it was on to flashing the ROM itself.
 
Because I’d downloaded the free version of ClockworkMod I didn’t have the option to download a new ROM from within the application, but I could download it manually and copy it to the SD-Card and install it from there, so that’s what I did. I did however download the wrong Google Apps package the first time around (tiny instead of medium) – this was not a major problem (it still installed) but the correct one had a later version of Maps (and maybe other apps too).
 
It was here that I hit a minor problem – when I tried to boot into the ClockworkMod Recovery I was greeted with an image of the phone with an exclamation mark through it, not a good sign. The phone did boot normally though, i.e. back into the original ROM, so all was not lost. I reached out to Identi.ca again and my Identi.ca buddy was still online and said that I should just open ClockworkMod and reflash it – which I did. This time the phone booted into the Recovery Manager and I was able to complete the process.
 
The final reboot took a few minutes but I was pleased to see the skateboarding robot that is the CyanogenMod logo so I knew I was on the right track. Once the boot was complete I was greeted with Android in the raw, i.e. without the HTC SenseUI, and started digging around.
 
So what’s it like? Was it worth the risk of potentially bricking my phone? Oh yes! The team at Cyanogen have done a great job and my phone has a new lease of life. It’s snappy, responsive and best of all now has Speech to Text available – take that HTC!!! I also have the older Market app installed which I preferred to the latest one which wastes far too much of the screen with a pretty but pointless banner (time will tell if it gets updated or not). I’ve reinstalled all my applications, moved CoPilot to run from the SD-Card and so far all is well.
 
It’s not all good news though! There was a problem with the camera running on the original ROM which caused the phone to reboot if I took a picture, navigated to the home screen and then back to the camera within a few minutes. Well the problem is still there – sort of. While it’s apparent that this is not a problem with the new ROM the phone still has a bit of a fit if I try to reopen the camera after a few minutes. However, it does at least offer the option to Force Close rather than just reboot the phone (which takes about 3 minutes) so at least that’s an improvement. This is not a major issue for me though.
 
Also, the phone has rebooted itself (but just once) – I was testing CoPilot on my way home from work and without warning the phone just restarted. Now this could be a problem with the ROM or CoPilot – I’m not sure – but it didn’t happen again; maybe the battery was loose and neither the ROM nor CoPilot was at fault.
 
Overall I’m very happy with the result and feel that my ‘ageing’ HTC Hero now has a new lease of life, and now that it will fit on the SD-Card maybe I can complete Angry Birds 😉
 

Accessing my Netgear Stora from Ubuntu

Like many tech-savvy people these days I’ve been thinking about buying a Network Attached Storage device (NAS) for some time but being more into software than hardware I’ve never really understood enough about them to lay down my hard-earned cash. Well recently my hand was forced by my external hard drive starting to act up and my girlfriend’s son filling his 70GB HDD with downloaded Flight Simulator extensions (and god knows what else!). After quizzing the infrastructure guy at work about his thoughts I ended up buying the Netgear Stora enclosure along with a Western Digital Caviar Black 1TB drive (another drive to follow next month).

One of the reasons I opted for the Netgear unit was that it stated that it was compatible with Linux (in addition to Windows and Mac). As I run Ubuntu as my current desktop (and server) of choice while the rest of the systems in the house are XP/Vista this seemed to fit the bill nicely. However, as with most products that claim to be Linux compatible, there was no software provided for use on Linux nor was there any documentation. After trawling the forums I found that Netgear back this claim up by stating (something like) “you can access the device and the files via a web interface”, i.e. it’s compatible with any device with a web browser. Well I’m sorry but that is not acceptable! How do I get my Ubuntu server in the garage to send its backups etc when it doesn’t have a GUI let alone a browser? I want to be able to mount the device as a drive just like the Windows systems do – surely that’s not too much to ask (especially since – apparently – the Stora is running a version of Linux itself).
 
Well, as it happens it is possible to mount and access the Stora as a drive and access it just like any other.
With the Stora connected and registered (which needs an internet connection by the way!), and while the Windows systems in the house started chewing up the disk space, I started the hunt for information.
 
As is normal with these things, the information is seldom in one place. A clue here and a command line there and slowly but surely I homed in on a solution.
 
First of all I found a post on the Netgear forums describing the process for connecting a Fedora system to a Stora. Good start, if Fedora can do it then surely Ubuntu could too. The problem here was that the process not only used Fedora but KDE and as I use Gnome this was not really useful to me – but as always with forums there’s normally hidden gold in the comments and this was no exception. One of the forum moderators posted a link to a post and explained that it would be simpler to configure fstab to mount the Stora instead – adding that he was more familiar with doing this on Ubuntu. The downside is that (apart from the link being down as I type) the configuration didn’t appear to work – not for Ubuntu 10.10 anyway. But all was not lost, looking further down the comments of the original post (to #5) I found a command that did work.
sudo mount -t cifs \
     -o uid=YOUR_UID,gid=YOUR_GID,file_mode=0777,dir_mode=0777,username=USERNAME,password=PASSWORD \
     //IP_OF_STORA/FamilyLibrary /media/stora/FamilyLibrary
[The command is split with backslashes here to fit the screen – it can of course all go on a single line.]
To find your UID and GID just run the id command in a terminal window (assuming you are logged in as the correct user!).
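On my system that gives something along these lines (the numbers and username will obviously differ on yours):
id
uid=1000(dave) gid=1000(dave) groups=1000(dave)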
 
Also the /media/stora/FamilyLibrary folder needs to exist before executing this command or you can change the mount point as required.
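Creating the mount point is simply:
sudo mkdir -p /media/stora/FamilyLibrary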
 
So that’s it then – well not quite. This will mount the FamilyLibrary and Ubuntu will display a suitable shortcut on my desktop (or at least, does for me) but it’s not automatic – I’d have to run the command each time I log in – not a major issue but then again not a slick solution either. So could I translate that command line into something that I could enter into fstab? Turned out it was easier than I thought 🙂
 
The format for an fstab entry is:
<device> <mountpoint> <filesystemtype> <options> <dump> <fsckorder>
Just about all of the information I needed was in the mount command above.
 
device is the path to the stora (in particular, to the root filesystem we want to mount)
mountpoint is where we want to mount the device to the filesystem
filesystemtype is just what it says!
options this is the comma delimited list in the middle of our command
dump is used by a backup utility called dump to decide if a filesystem should be backed up or not
fsckorder is used by the filesystem check utility fsck to determine the order in which filesystems should be checked
 
So our fstab entry looks like this (should be on one line but split here to fit the screen):
//IP_OF_STORA/FamilyLibrary /media/stora/FamilyLibrary cifs
     uid=YOUR_UID,gid=YOUR_GID,file_mode=0777,dir_mode=0777,
         username=USERNAME,password=PASSWORD 0 0
Add this (with the appropriate values) into fstab and run
sudo mount -a
from the command line and (all things being equal) the drive should mount, a shortcut should appear on your desktop and in the Places menu.
 
Job done, yes?
 
Well almost….
 
While this does work we have our username and password in plain text in a file which can be opened by anyone (maybe not updated, but read none the less). If only there was a way to tell fstab to look somewhere else for the username and password …… well as luck would have it there is.
 
First of all we need to create a hidden file in our /home directory which will contain two lines (one for the username and one for the password).
 
Fire up your editor of choice (I like nano for this sort of thing) 
nano /home/dave/.storacredentials
and enter the following two lines:
username=your_username
password=your_password
Save it (CTRL+o then CTRL+x if you are using nano).
 
Now open up fstab and replace the username & password options with the following:
credentials=/home/dave/.storacredentials
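For clarity, the full entry then ends up looking something like this (again split to fit the screen, but really all on one line):
//IP_OF_STORA/FamilyLibrary /media/stora/FamilyLibrary cifs
     uid=YOUR_UID,gid=YOUR_GID,file_mode=0777,dir_mode=0777,
         credentials=/home/dave/.storacredentials 0 0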
Run 
sudo mount -a

again and the drive should be remounted with the new configuration. If everything goes to plan then you should not really notice very much.

 
Finally, there is little point in moving your credentials from one file that everyone can access to another – so to ‘secure’ the .storacredentials file we need to configure it so that only root can read it.
chmod 400 .storacredentials
sudo chown root.root .storacredentials
I’ve added another fstab line for my MyLibrary area on the stora by changing FamilyLibrary for MyLibrary – works like a charm 🙂
 

Configuring Android Debug Bridge Server to start as root

In my previous post I mentioned that one of the problems I faced when trying to install and debug my first Android application was that the Android Debug Bridge (adb) was not running as root. Now shutting the service down and restarting it with sudo was all that was required, but that’s a bit of a faff – there had to be a way to configure adb to start with root privileges, and there is.

Disclaimer: Before I continue I have to say that I claim no credit for this configuration, I just found it buried in a forum and am merely replicating it here for my own reference and in case the post is deleted.

 

I should also point out that my configurations were done on a system running Ubuntu 10.04.

 

01/07/2012: I’ve just upgraded to Mint 13 and tweaked the script to take account of the movement of adb from the tools folder into platform-tools.

Step 1: Using your text editor of choice create

/etc/init.d/adbd

and enter the following:

#!/bin/sh
#
# For the ADB daemon (Android Debug Bridge)
#
case "$1" in
  start)
        # Replace the path below with your path to adb
        /opt/android-sdk-linux_x86/platform-tools/adb start-server
        ;;
  stop)
        # Replace the path below with your path to adb
        /opt/android-sdk-linux_x86/platform-tools/adb kill-server
        ;;
  *)
        echo "Usage: $0 start|stop" >&2
        exit 3
        ;;
esac

When I’m working with system files I normally use nano as my editor and fire it off with sudo so that I have the correct permissions to save the file when I’m done.

Step 2: Make the file executable with the following command:

sudo chmod a+x /etc/init.d/adbd

Step 3: Create a link between the boot process and the above file

sudo ln -s /etc/init.d/adbd /etc/rc2.d/S10adbd
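As an alternative to creating the symlink by hand, the more conventional Debian/Ubuntu approach would be to let update-rc.d create the links for you – I haven’t tested this with adbd specifically, but it should amount to the same thing:

sudo update-rc.d adbd defaults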

Step 4: Reboot your system

When the system restarts you can confirm that the service is running as root with the following command:

ps -afe | grep adb

On my system this returns two results:

root 1083   1  0 19:59 ?      00:00:00 adb fork-server server
dave 2417  1931  0 20:20 pts/0  00:00:00 grep --color=auto adb

As you can see, the server (the first result) is running as root.