Monday, 6 December 2010

Web accessibility – Code of practice

BSi have just released a new British Standard, BS 8878:2010 Web accessibility – Code of practice. First things first, this is not another set of web accessibility guidelines. This document is meant to complement existing guidelines such as WCAG and provide a framework for companies to implement accessible web products. It recognises that accessibility goes well beyond HTML tags and requires the right processes to be in place in the organisation, from planning for accessibility to delivering and testing it. To quote from the standard itself:

This British Standard sets a standard for the quality of the process of creating accessible web products, rather than a standard for the quality of accessibility of web products resulting from it.

I've only had time to quickly browse through it and I'll do a more thorough write-up when I can. First impressions are very good and it's no surprise considering that AbilityNet, who are one of the most knowledgeable organisations on the subject, made significant contributions to it. The document is clearly and concisely written, there's no jargon unless absolutely necessary and it includes an awful lot of useful information, such as how to build a business case for accessibility, what processes to put in place to deliver it, how to make justifiable decisions on accessibility, write an accessibility policy, etc. It is also very pragmatic and concentrates on the ultimate goal, delivering accessibility efficiently, rather than on specific processes and guidelines.

So if you're about to start a new web project, grab a copy of BS 8878 and ensure that what you deliver is accessible to all: there is no excuse!

Sunday, 7 November 2010

Reading Paradise... or DRM Hell?

New Toy

Yesterday, I bought myself a Sony PRS-650 Reader Touch Edition as a belated birthday present. I could have bought the new Kindle, as there are lots of adverts for it in the tube, it's cheaper and it has Wi-Fi. I went for the Sony PRS reader instead because I don't really need Wi-Fi and it has a touch screen, which means that it's not encumbered by a keyboard and is therefore a lot smaller for the same screen size (6" display). The Sony device does really fit in a pocket as it's about the size of a very thin paperback. If you want even smaller, you can get the Pocket edition, a.k.a. PRS-350.

So the first thing I did was go to Waterstone's as I know they stock the PRS-650. Unfortunately, between the two shops I visited, they had 5 demo devices, only one of which seemed to work, and no staff in sight to help. So I ended up going to the Sony Centre in Tottenham Court Road, where I was served by very helpful staff who were happy to answer any question and demo the device. What's more, if you buy the device from Sony direct, you can also have it in a very nice red colour that is much less boring than the black and grey offered by Waterstone's.

Back home, I installed calibre from the Ubuntu repositories and, lo and behold, my new toy was immediately recognised and supported out of the box! On a side note, the connection is a standard Micro-B USB connector, which means that the cable to connect it to the computer and charge it is the same as with my Nokia N900.

But getting the device to work is only the start: you then need to load it with books. The PRS-650 supports a variety of file formats and whatever it doesn't support, calibre should be able to convert to EPUB. So you basically need to find books to download. Here's a quick list of what I tried.

The Good

  • The first thing to do when you get your Sony reader is to register it on My Sony; you will then be able to download 100 free classic books in Sony's own format, including titles like Don Quixote, Gulliver's Travels, The Importance of Being Earnest or David Copperfield.
  • If you want more classics, Project Gutenberg is the place to go to: thousands of books in a variety of languages for which the copyright has expired.
  • If you are into science fiction, Baen Publishing offer most of their titles as non-DRM e-books that you can then load into calibre. They even offer some of them for free through their free library.
  • For technical books, both The Pragmatic Bookshelf and O'Reilly offer their titles in a variety of formats without DRM.

The Bad

  • Waterstone's, WHSmith, Penguin, rbooks (Random House) and kobo will only sell you DRM-encumbered books that require Adobe Digital Editions, which of course doesn't exist for Linux. Apparently it works well under WINE but Adobe won't offer the Windows download if you connect to their web site using Ubuntu. It also means that you are forced to use a particular piece of software, which may or may not be practical. Add to this that, apart from kobo, these web sites are not very forthcoming with that information, so you may only discover that you can't download your e-book after you've actually paid for it!
  • Amazon, quite predictably, will only sell you books for the Kindle but won't let you download the file: you have to use either a Kindle device or the Kindle software, which only works on Mac OS X or Windows and won't work with the Sony reader anyway.

The Ugly

  • Foyles and Books, etc. have a web site that seems to be able to show me e-books or fiction books but not fiction e-books so I was unable to find what I wanted and eventually gave up.
  • Blackwell's seem to have a very limited list of titles available as e-books so I gave up and also failed to find out any information on the e-book formats they offer and whether they were DRM-encumbered.

It looks like the publishing industry is following blindly in the footsteps of the music industry, leaving very few options for Linux users to buy e-books legally. And as usual it's not a question of hardware support, it's all about restricting what customers can do with the media they purchase. Surely, there must be a better solution than this mess?

Update 1

I originally thought that The Book Depository didn't offer e-books but in fact they do and, like a number of others, only sell DRM-encumbered books, require you to use Adobe Digital Editions and don't tell you until you've actually bought the book. So that's one more candidate for the bad list above.

Update 2

I mistakenly said above that The Book Depository didn't warn you about the requirement for Adobe Digital Editions. In fact they do, just not in a place where I was expecting it so I didn't notice it. As a result they won't refund you if you mistakenly buy an e-book that you can't download.

WHSmith also mention that you need Adobe Digital Editions. However, they do that at the very bottom of the book's page below adverts for other e-books and customer reviews, so not quite as prominently as you would expect. They won't refund you either if you make a mistake.

rBooks will refund you if you ask them politely.

Update 3

Waterstone's will refund you too if you ask politely.

Sunday, 17 October 2010

R.I.P. Benoit Mandelbrot

Benoit Mandelbrot passed away on Thursday at the age of 85. He last talked about his work at TED2010 earlier this year.

I remember being fascinated by images of the Mandelbrot set when I was a child and I did a series of articles last year on how to create them using GNU Octave so if you want to have a go yourself, feel free to grab the code and follow the examples.

Thursday, 9 September 2010

Of the Perception of Open Source Tools

Today, while discussing options for an identity federation solution, I had the following comment from one of my colleagues (I paraphrase as I don't remember the exact words):

The advantage with a Microsoft product compared to an open source product is that you get a good administration interface.

Note that no specific package had actually been mentioned so this was an obvious generalisation which may or may not be true. However, irrespective of the validity of the statement, why is it that some IT professionals have this view of open source software? Is there any grounding in this comment? And is there something that we, as the open source community, can do to change this mindset?

Sunday, 5 September 2010

Adding Support for Alien Database Import in Shotwell

When developing the import from F-Spot feature in Shotwell, I made sure I isolated the F-Spot specific code from the generic code so that the same feature could be easily implemented to import photographs from other photo management applications. So if you want to contribute an import from Picasa feature, here's a quick guide on how to do it.

Alien Database Framework

Everything you need in order to implement a new import feature is provided by the alien database framework. It is composed of 5 classes that you will need to know about and 5 interfaces that you will need to implement, as shown in the class diagram below:

Alien database framework class diagram

Here's a quick overview of the different components:

AlienDatabaseHandler
The class where everything starts from. This object is responsible for managing a set of drivers. At the moment, there is only one, the F-Spot driver but hopefully, by the time you finish reading this, there will be a new one.
AlienDatabaseDriverID
A light-weight struct that identifies a unique driver. This struct wraps a simple string. In the case of the F-Spot driver, that string is f-spot.
AlienDatabaseDriver
The interface that specifies what the driver implementation needs to provide. It includes a few simple methods that enable the driver support to be included in the Shotwell interface, as well as three more heavy-weight methods that actually perform the database handling. Those methods are called in two steps:
  1. get_discovered_databases is called first so that the driver can provide the UI with a list of databases that are automatically discovered, typically database files found in well-known locations such as ~/.config/f-spot/photos.db for F-Spot;
  2. open_database or open_database_from_file gets called when the user has selected what database to load: those are the methods that will do the heavy lifting.
DiscoveredAlienDatabase
A light-weight wrapper that identifies a discovered database and implements lazy loading of the real database.
AlienDatabaseID
A light-weight struct that behaves exactly the same as AlienDatabaseDriverID but uniquely identifies a given database, including its related driver.
AlienDatabase
The interface that specifies what the database implementation needs to provide. The method that does all the work is get_photos.
AlienDatabaseVersion
A light-weight class that implements a version number in the format x.y.z and is able to compare versions with each other. This is meant to make it possible to validate whether the version found is actually supported by the driver. This is also used heavily in the F-Spot implementation to provide support for different versions of the database.
AlienDatabasePhoto, AlienDatabaseTag and AlienDatabaseEvent
A set of interfaces that define data objects handled by the database: photos, tags and events.

Implementing a new driver

Now that you've decided to implement a new driver, let's go through it step by step.

Driver implementation

The first thing to do is to provide an implementation of the AlienDatabaseDriver interface. Let's get the UI related methods out of the way first.

get_id
This should return a hard-coded AlienDatabaseDriverID used to identify the driver.
get_display_name
This should return a display name, most likely the name of the application the driver is for. You may want to make it translatable if you know that the application doesn't have the same name in all languages.
get_menu_name
The name to be used for the menu identifier, which is referenced in the action (see below). This should be a simple hard-coded string.
get_action_entry
A method that returns a Gtk.ActionEntry that will be used to construct menu items. The return value should be hard-coded and be consistent with what the get_menu_name method returns. I know, it looks like there's redundant code in there but it seems that Vala has issues with building structs from non-hard-coded strings. This may be simplified in the future. Don't forget to set the label and the tooltip for the action and to make them translatable: see the F-Spot implementation for an example.

That's the UI out of the way. Now let's have a look at database discovery and load.

get_discovered_databases
This method will be called when the user selects the import menu item and the dialog box appears. It should look in all the well-known locations for a database and return a collection of DiscoveredAlienDatabase objects. Those objects are created from an AlienDatabaseID object that contains two pieces of information: the driver ID and a driver-specific string that identifies the database. That driver-specific string can be whatever you want. At its simplest, it can just be the path to the database file.
open_database and open_database_from_file
Those two methods are the ones that do the heavy lifting. They basically provide exactly the same function with a slightly different signature so you will probably want to factorise the code and make both of them call an internal private function that performs the bulk of the work. At this point, you need to open the database file and extract the version number out of it. If there is any error, it's the time to report it as this is called while the import dialogue is still displayed to the user and can provide early feedback. There are two error domains you can use for that: use DatabaseError for any generic database issue, such as problems opening the file or reading the tables; and use AlienDatabaseError to report a database that you can read but which has a version number that you don't support. If everything goes well, return an implementation of AlienDatabase.
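Putting the methods above together, the driver interface has roughly the following shape. This is a sketch for orientation only, reconstructed from the descriptions in this guide rather than copied from the Shotwell source: the exact signatures, error domains and container types may differ, so check the actual code in the Shotwell tree before relying on it.

```vala
// Sketch only: reconstructed from the method descriptions above,
// not copied verbatim from the Shotwell source.
public interface AlienDatabaseDriver : Object {
    // UI-related methods
    public abstract AlienDatabaseDriverID get_id();
    public abstract string get_display_name();
    public abstract string get_menu_name();
    public abstract Gtk.ActionEntry get_action_entry();

    // Database discovery and loading
    public abstract Gee.Collection<DiscoveredAlienDatabase> get_discovered_databases();
    public abstract AlienDatabase open_database(AlienDatabaseID db_id)
        throws DatabaseError, AlienDatabaseError;
    public abstract AlienDatabase open_database_from_file(File db_file)
        throws DatabaseError, AlienDatabaseError;
}
```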

Database implementation

Now that you've loaded the database, it's time to extract data out of it. For that purpose, you now need to provide an implementation of the AlienDatabase interface. Once again, there are a few UI related methods and some data related ones so let's start with the UI bits.

get_uri
This method returns the driver-specific URI for this database. It must be consistent with what is used in the AlienDatabaseID struct.
get_display_name
A string that is suitable for display in the UI and that identifies the database to the user.

And now for the bulk of the implementation:

get_version
Return the version of this database. By the time this method is called, the database should already have been opened so it should only fail if something really unexpected happens.
get_photos
This is the important method, where the content of the database is read and photo references are extracted. Try to be as lenient as possible in this method so that it doesn't throw an exception. If an unexpected piece of data is returned, try to recover from it rather than throw an exception. If a photo entry can't be read properly, just ignore it and continue with the next one. Throwing an exception will abort the whole import so make sure you only do it if you're absolutely sure that you can't do anything else.

Data objects implementation

The last three interfaces define light-weight data objects that should all be created by the get_photos method in the AlienDatabase implementation. I will only detail AlienDatabasePhoto as the other two are trivial and only return a name.

get_folder_path and get_filename
Both strings together provide the fully qualified path of the image file for the photo.
get_tags and get_event
Return the objects that contain the details of the tags and event for the photo. Note that because Shotwell only supports one event per photo, only a single instance can be returned, not a collection. Also note that because Shotwell does not currently support hierarchical tags, all tags just contain a name. If the database you import from supports hierarchical tags, you should decide whether you want to import all tags or only leaf tags.
get_rating
Return a five-star rating for the photo. If the database you import from doesn't support ratings, just return Rating.UNRATED. If your rating is stored as an integer, use the Rating.unserialize method.
get_title
A title for the photo, if the source database supports it. Otherwise, return null.
get_import_id
This method returns a value that identifies an import roll. It should be a value that is equivalent to a time stamp. If the source database doesn't support this, just return null.

Register your driver

The only thing left to do is to register your new driver with the handler. For this, you will need to modify the AlienDatabaseHandler constructor to add a call to register_driver with an instance of your driver. I know, this is not ideal but there is a ticket on the Yorba tracker to implement a real plugin mechanism. Once this is done, this last step should go away. And as an added bonus, if libpeas is used for this as expected, you should then be able to write your plugins in languages other than Vala!

Go and write great code!

That's it. If you want to enable import from your (old) favourite photo management application to Shotwell, just follow this guide and contribute a patch. Of course, the reality of things means that it will probably not be that simple: it all depends on the complexity of the source database. You may also want to be able to support several versions of that source database. For an example of how to do this, have a look at the F-Spot implementation.

Sound Issue in Ubuntu 10.10 Beta

This morning when logging in to my newly-upgraded-to-Ubuntu-10.10 laptop, sound was not working. It appears that the solution was very simple: my user was not authorised to use audio devices. I don't know why it was disabled as it had always worked fine before but it's very easy to solve so if you have the same problem check that first. To resolve it, go to System → Administration → Users and Groups, select your user, click on the Advanced Settings button, enter your password, click the User Privileges tab and make sure the Use audio devices box is checked. While you're at it, do the same for the other users on your system.

User Settings dialogue

If that doesn't work, there is a handy wiki page on debugging sound problems.
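For those who prefer a terminal, here's a rough command-line equivalent of the GUI steps above. It assumes, as is the case on a stock Ubuntu install, that the "Use audio devices" privilege corresponds to membership of the audio group:

```shell
# Assumption: the "Use audio devices" checkbox maps to the "audio"
# group on Ubuntu. List the groups the current user belongs to:
id -nG
# If "audio" is missing from the list, add yourself to the group,
# then log out and back in for the change to take effect:
# sudo adduser "$USER" audio
```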

Saturday, 4 September 2010

Memory Usage Graphs with ps and Gnuplot

When developing the import from F-Spot feature in Shotwell, a user who tested the patch found out that there was a bit of a memory leak. After finding the cause, I produced a patch to fix it but I also wanted to identify what the difference was between the development trunk and the patch. So here's how I did it.

Gathering Data

The first step was to gather relevant memory usage data. For this I needed a repeatable test that I could perform with both the trunk build and the patched build. As I had a test F-Spot database, that proved quite straightforward:

  1. Delete the Shotwell database,
  2. Build the trunk version,
  3. Import the test F-Spot database using the trunk build,
  4. Delete the Shotwell database again,
  5. Build the patched version,
  6. Import the test F-Spot database using the patched build.

With the test process sorted, I needed to gather memory data during steps 3 and 6. That's easily done using the ps command in a loop and sending the output to files. So, for the trunk build, I just started this command in a terminal before starting Shotwell and stopped it once finished:

$ while true; do
ps -C shotwell -o pid=,%mem=,vsz= >> mem-trunk.log
sleep 1
done

The one for the patched version is virtually the same:

$ while true; do
ps -C shotwell -o pid=,%mem=,vsz= >> mem-patch.log
sleep 1
done

Note that the equal sign (=) after each field specification tells ps not to output the column header. So at the end of this, you end up with two files that contain 3 columns of data each: the PID, the percentage of memory used and the total virtual memory used by the process, sampled at intervals of one second. In both cases, I let Shotwell run idle for a few seconds at the end of the import before closing it to ensure that everything had stabilised.
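As a quick sanity check before graphing, you can summarise a log directly from the shell. The snippet below generates a few sample lines in the same three-column format (the values are made up for the example) and extracts the peak VSZ with awk:

```shell
# Create a small log in the same format as the ps output above
# (PID, %MEM, VSZ); these values are fabricated for illustration.
printf '1234 1.0 200000\n1234 1.2 250000\n1234 1.1 240000\n' > mem-example.log

# Report the peak VSZ (third column) over the whole run:
awk '{ if ($3 > max) max = $3 } END { print max }' mem-example.log   # prints 250000
```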

Next, I checked how many lines of data I had in each file:

$ wc -l mem-*.log

And truncated the longer file to the length of the shorter one, just to make sure I had the same number of data points for both.
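The truncation itself is a one-liner with head. Here's a sketch using throwaway files with placeholder contents; in the steps above the real file names would be mem-trunk.log and mem-patch.log:

```shell
# Two example logs of different lengths (contents are placeholders):
printf 'a\nb\nc\nd\n' > long.log    # 4 samples
printf 'a\nb\nc\n' > short.log      # 3 samples

# Keep only as many lines from the longer file as the shorter one has:
head -n "$(wc -l < short.log)" long.log > long.trimmed.log
wc -l < long.trimmed.log            # now 3, matching short.log
```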

Creating the Graph

After that, I wanted to create a single graph that included four lines: VSZ and %MEM for both trunk and patch. And I wanted to output the result to a PNG file. Gnuplot can do all of this; you just need to know how to set its myriad of options. So here's the Gnuplot script in detail.

set term png small size 800,600
set output "mem-graph.png"

Gnuplot works with the concept of terminals. So the first line tells it to use the special terminal called png with a small font and a size of 800 pixels wide by 600 pixels high. The second line is fairly self-explanatory: output to the given file.

set ylabel "VSZ"
set y2label "%MEM"

The two sets of values I am interested in have very different ranges. VSZ is a number of bytes and will have values in the hundreds of thousands if not millions, while %MEM is a percentage so will have a value somewhere between 0 and 100. So to make sure that both types of graphs fit in the output, I will use the ability that Gnuplot has to use left and right Y axes with different ranges: VSZ will go on the default Y axis (left, called y), while %MEM will go on the other one (right, called y2). So I set the labels for both.

set ytics nomirror
set y2tics nomirror in

As the right Y axis is not used by default, I need to enable it and to set where the tics go. To do that, I first disable the mirror option on the left Y axis and enable tics on the right Y axis by telling Gnuplot that their position will be in.

set yrange [0:*]
set y2range [0:*]

The last piece of setup is to customise the range on both axes. By default, Gnuplot will adjust the range so that there is as little white space as possible above or below the graph. But in this case, I want both sets of graphs to start at zero so that I can have a better idea of total memory used.

The next bit is quite long so I will start by explaining the instruction for a single graph before bringing all four together.

plot "mem-trunk.log" using 3 with lines axes x1y1 title "Trunk VSZ"

In the line above, I tell Gnuplot to take its data from the third column in the file called mem-trunk.log. The with lines section specifies that I want a line graph. The axes x1y1 specifies that I want it to be drawn against the first X axis and the first Y axis (the default, but here for completeness). And the last bit specifies what I want the title for this graph to be. Then it's just a case of plotting all four graphs in a single plot command separated by commas. So here's the full script:

set term png small size 800,600
set output "mem-2334-graph.png"
set ylabel "VSZ"
set y2label "%MEM"
set ytics nomirror
set y2tics nomirror in
set yrange [0:*]
set y2range [0:*]
plot "mem-trunk.log" using 3 with lines axes x1y1 title "Trunk VSZ", \
     "mem-patch.log" using 3 with lines axes x1y1 title "Patch VSZ", \
     "mem-trunk.log" using 2 with lines axes x1y2 title "Trunk %MEM", \
     "mem-patch.log" using 2 with lines axes x1y2 title "Patch %MEM"

Make sure that there is absolutely no white space between the backslash characters and the end of lines in the plot command otherwise Gnuplot will complain. Save the script to a file called mem.gnuplot and run it:

$ gnuplot mem.gnuplot

And here is the output I got, which shows the improvement in memory usage between trunk and patch:

Memory usage graph
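One last tip: if Gnuplot complains about the continuation lines, a quick grep will find any backslash followed by stray blanks at the end of a line. The snippet below fabricates a script with exactly that mistake to show what the check catches:

```shell
# Create a script with a deliberately broken continuation: a space
# after the backslash at the end of the first line.
printf 'plot "a.log" using 3 with lines, \\ \n     "b.log" using 3 with lines\n' > mem.gnuplot

# Flag any line where a backslash is followed by blanks at end-of-line:
grep -n '\\[[:blank:]]\{1,\}$' mem.gnuplot
```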

Ubuntu 10.10 Beta First Impressions

Ubuntu released the first Maverick beta a couple of days ago. As I had some time on my hands today (including the time to re-install Lucid if it all went pear-shaped), I decided to upgrade my ThinkPad T42 so, as instructed, I typed this in a terminal:

update-manager -d

And here's how it went.

The Good

  • The upgrade took a few hours, was extremely smooth and just worked.
  • Everything that I've tried so far just works out of the box, no regressions (apart from a small glitch, see below).
  • I thought Ubuntu 10.04 was fast but 10.10 is even faster! Firefox and Evolution in particular feel snappier.
  • The new keyboard layout indicator is bigger and clearer.
  • Shotwell replaces F-Spot.

The Bad

The Ugly

  • The default background really doesn't look good so the first thing I did was change to a different background image.

All in all, an excellent upgrade!

Tuesday, 24 August 2010

Web Site "Optimisation"

Seen today on a web site that shall remain nameless:

This web site has been optimized for use with Microsoft® Internet Explorer 4.7 and above. Get your free browser update today.

I had to double-check my calendar to make sure I hadn't accidentally walked into a Tardis and been sent back to the last millennium.

Sunday, 15 August 2010

Contributing to Shotwell


I use open source software every day on all the computers I own. In fact, outside of work environments where Windows is still predominant, very little of the software I use is closed source these days. As a result, I've wanted to contribute back to the community that has developed such great software by fixing bugs and implementing new features. However, I've found it a lot more difficult than I expected. To make any meaningful contribution, the project I would contribute back to had to be a piece of software that I used regularly. When I look at the ecosystem of software I use regularly, they are either written in C, a language that I haven't programmed in for 20 years (assuming the stuff I did at university actually counts), or they are extremely complex, or both.

Then, a few months ago, I got wind that one of the planned changes for Ubuntu 10.10 (Maverick Meerkat) was to replace the default photo management software: F-Spot was out, Shotwell was in. Photo management has been a sore spot of mine for ages on Ubuntu. None of the software that was available until now actually met my needs so I ended up not using any photo management software and doing everything through the file manager. Looking into it, Shotwell seemed like the ideal opportunity for me to get a photo manager I liked. Shotwell uses Vala as its programming language, which I found intriguing and which I thought should be easier than C to get into.

Baby Steps

So I downloaded the source from the SVN repository, built the software and lo and behold, the build was actually easy and 10 minutes later I had a local version of Shotwell to play with. I then had a look at the source code and found it very easy to understand. So I thought: let's see if I can fix an easy bug!

For that experiment, I chose a bug that looked extremely easy: bug 1954 looked like the ideal candidate because it merely consisted of adding a line of information to an existing window in exactly the same way as the other lines already present in the window. And indeed, the fix was trivial. So I created a patch and uploaded it to the bug tracker.

What happened next was very important and is something that all community projects should be very attentive to: I got some feedback on my patch within a day, basically saying that the patch had now been committed to trunk with a minor modification. This is extremely important and is something the Yorba team, who produce Shotwell, are very good at doing: all contributors are welcome and any contribution, such as suggestions or patches, is acknowledged very quickly. From a contributor's point of view, it means that you know that you can really participate and help the core development team improve their software.

More Complex Bugs

Quite chuffed by the outcome of my first bug fix, I decided to use Shotwell extensively and try solving more complex bugs. I eventually hit a significant issue with my SLR camera, which I duly reported in the bug tracker and which I decided to attempt to solve. It took a bit of time but I eventually got there and as a side effect actually solved another bug: result!

A Whole New Feature

By now completely sold on Shotwell, I wanted to do my bit so that it would be well received by Ubuntu users. In my mind, the most important thing was to ensure a smooth transition from F-Spot to Shotwell and indeed the migration was one of the major requested features in Launchpad and very high on the Yorba list too.

Implementing such a major feature can feel a bit daunting at first but, at the end of the day, integrating and migrating from weird and wonderful databases is something I get involved in on a regular basis in my day job so bringing that experience to Shotwell sounded like the way to go. And to be honest, the F-Spot database is extremely simple compared to some of the horrors I've faced recently.

It took a bit of time and I learnt a lot about Vala and the GObject library in the process. Thanks to the support from Jim and Adam from Yorba I got there in the end and finished the bulk of it in the middle of GUADEC. A few more extra tweaks were required but it's now in fairly good shape, ready to face the hordes of new users that Ubuntu will bring it.

There you go, that was my first major contribution to an open source project and I'm now hooked so will endeavour to contribute more.

Saturday, 7 August 2010

Flashing the N900


The Nokia N900 is able to upgrade its operating system over the air, which is the best way to upgrade it, especially if you have a fast Internet connection. This is also very practical when you're not running Windows because the Nokia suite only exists on that platform. This is exactly what I had been doing since I bought my N900. Except that a few months ago, I declined to update when requested for a reason I have now forgotten, probably because I didn't have the time then. And since then, it has failed to propose any new upgrade. So the only solution was to flash it but I had no idea how to do that on Linux. Luckily, I found a very handy guide on the subject which includes a version for Linux. Unfortunately, this guide is light on details and not fully correct so here's the fix.


Note that flashing the device rather than upgrading it over the air will remove all software you previously installed, as well as some of your settings. It will not delete your data as it only flashes the root partition. Your address book for instance is safe. But I would still strongly advise that you back up your device before going any further. The N900 ships with a backup utility so please use it and copy the backup file to your computer. If you fail to do so and you lose all your data, you're on your own to recover it.

Flashing the Device

As the guide explains, download the Linux Flasher Tool and the latest available N900 firmware image. You will need to enter your device ID for the image and in both cases you will need to accept Nokia's license agreement.

If you are using Ubuntu, I suggest you download the .deb version of the flasher tool: maemo_flasher-3.5_2.5.2.2_i386.deb, then double-click on the downloaded file and the package installer will open. Install the package. The flasher tool will then be installed to /usr/bin, not in the current directory, as suggested by the article.

For the image, ignore all the red warnings on the page next to the eMMC image links and download the latest combined image for your region, in my case RX-51_2009SE_10.2010.19-1.203.1_PR_COMBINED_203_ARM.bin.

Go through steps 3 and 4 as explained in the guide: turn the phone off and connect it to your computer via the USB cable.

Now comes the glitch in the guide: the actual command to flash the image needs to specify where to find the image in question:

$ sudo flasher-3.5 -F /path/to/image.bin -f -R

When you run the tool, you will get quite a bit of output when it validates the image. It will eventually show the following message:

Suitable USB device not found, waiting.

And the tool will hang. Despite what the guide says, this message is not an error, even though it sounds a bit ominous: it's the flasher tool's way of telling you that you need to switch the device on so that it can find it. So switch the phone on. As soon as you do so, the flasher tool will recognise it and will start loading the image. It will provide on-screen feedback while it does so and the N900 will display a small progress bar at the bottom of the Nokia boot image. Once the tool has finished, the phone will take a lot longer than usual to start up. Don't try to hurry things along, just wait until it is fully booted. Then, and only then, disconnect the phone from the computer.

As mentioned above, you then need to re-install the software you had previously installed on the device. Luckily the OVI application store remembers what you previously paid for and downloaded so you should be able to just re-download software without having to pay for it again.

Sunday, 1 August 2010


I was at GUADEC this week so here is a summary of what I took out of it.

The Location

GUADEC was held in The Hague this year and hosted by De Haagse Hogeschool. The Hague is a nice medium sized town where, in typical Dutch fashion, you go everywhere by bicycle or tram. Public transport is very efficient, fast and always on time. In short, the only similarity with London is that the weather was cold and cloudy most of the time we were there. De Haagse Hogeschool is a great space for such a conference, with good rooms for the talks and a great space to mingle with people during breaks. However, the restaurants around the venue were clearly not used to having that many customers and I missed the afternoon keynote talks both on Thursday and Friday when it took longer than expected to have lunch. On the other hand, lunchtime was always spent with a bunch of very interesting people so no harm done there.

The Talks

As there were different streams running in parallel, I didn't go to all the talks. Also, I may have completely missed the main point of any talk so what follows is my take on things. It may be biased, incomplete or outright wrong due to me misunderstanding something or other.

GNOME, the Web and Freedom, by Luis Villa

Luis is a lawyer at Mozilla and a geek. His premise is that since the first GUADEC in 2000, a universal platform built on open source software has appeared on every computer on the planet, one on which Microsoft is actually struggling to keep up with the competition. It's not GNOME, it's the web. And it's not a fad, it's here to stay (despite what the artist formerly known as Prince might say).

Luis suggests that we should strive to make GNOME Shell the best shell for web applications. All GNOME applications should embrace the web and integrate with it. And when you write a new application, you should write it in HTML and JavaScript first.

I completely agree with the idea of integrating the web into the desktop. A photo management application for instance should be able to seamlessly integrate with services like flickr so that you don't have to leave your desktop to upload photos to flickr or to subscribe to someone's photo stream. In practice, this is what The Web Standards Project have been pushing for some time in the form of the semantic web.

I disagree with the idea of writing all applications in HTML and JavaScript because that would significantly reduce developers' freedom and flexibility in writing those applications. Besides, HTML and JavaScript are just tools: well suited to some applications, but not all of them.

Finally, there is one aspect which I believe Luis is very aware of: I want to know where my data is hosted and to be able to ensure that some of it is hosted exclusively on my own machine, not somewhere in the cloud. That sort of issue will need to be addressed if the desktop platform we use is to integrate more closely with the web.

Who Makes GNOME? by Dave Neary

This talk sparked a bit of controversy. Dave presented the results of the GNOME Census report. This report produced some statistics on the people who contribute to GNOME in the form of commits to the repositories. The idea was to understand better who contributes to GNOME and how much of it is contributed by paid developers.

Obviously, the fact that Red Hat accounts for 16.3% of the total and Canonical for just over 1% sparked controversy, with people saying that this was proof that Canonical didn't contribute. We're talking about statistics here, so as usual they don't tell the whole story and I feel it is very dangerous to draw any conclusion from them. Dave is very aware of this and took great care to include caveats in his presentation. That didn't prove enough to prevent the controversy though. So here is why I think these statistics should be taken with a pinch of salt:

  • The top 2 contributor categories are Volunteer and Unknown and between them represent more than 40% of the contributions. Those people may have become contributors for a variety of reasons and some of them may be from Canonical or Red Hat or any other company, or may have become individual contributors as a result of using Fedora or Ubuntu, we just don't know.
  • There are 11 companies between 1 and 3% (ranks 6 to 16). A small change in the number of contributions could change the list dramatically.
  • Counting commit events only tells part of the story and focuses exclusively on the development effort. From my experience, in order to deliver projects, packaging, testing and bug management are as important as development. And so are specifications, usability studies and artwork. As a community, in order to deliver the best operating system possible, we need all of the above. Ubuntu (and by extension Canonical) has been very successful in addressing the non-development aspects of delivering a great operating system based on great open source software. And I'd rather Canonical concentrated on working on the areas where the community is lacking resources and experience than add more resources to the bits that are already well covered.

State of the GNOME 3 Shell, by Owen Taylor

I have very few notes on this talk, it basically amounts to: some stuff works, some doesn't and it's still a work in progress.

Shell Yes! Deep Inside the GNOME Shell Design, by William John McCann and Jeremy Perry

Another talk where my notes are less than stellar. The idea is that the GNOME Shell developers are here to make GNOME great. And indeed, the demos look fantastic and are very encouraging. A lot of thought has gone into the interface and I think that, for the most part, it will make application management a lot smoother from a user's point of view. On the other hand, some bits and bobs look like the same old things re-hashed, in particular how notifications are handled. I'm not sure how Ubuntu will integrate that, as it seems to go in the opposite direction from what Ubuntu has been doing in this area. Time will tell.

GNOME State of the Union, by Fernando Herrera and Xan Lopez

What more to say than absolute genius? There's no point in me telling you what happened, you needed to be there.

Clutter State of the Union, by Emmanuele Bassi

Clutter has a new logo and a new web site. Very interesting is the introduction of ClutterState to provide declarative state changes and animations. The latest implementation also makes use of the GPU if available. Very, very cool technology and it's what makes MeeGo rock!

So you think you can release? by Emmanuele Bassi

This was a very interesting talk and it made a lot of excellent points in how to handle the release of library code. A lot is also applicable to end user applications and everything is equally applicable to open source and closed source code. The main ones I took out of it are:

  • Any new version of the library should not break apps written with older versions.
  • Use the 0.x cycle to refine and optimise the release mechanism.
  • Have reliable release cycles, such as time-based cycles (e.g. every 6 months), and stick to them even if it means dropping functionality from a given cycle.
  • Write performance test suites and run them.
  • Replying RTFM to questions implies that you've actually written the manual.
  • Write a coding style guide and require that patches follow it. I would add to that: keep the coding style short and don't diverge too much from what other projects do otherwise contributors will have no incentive to read it or abide by it.
  • When using the phrase patches welcome, make sure that this is really the case and that when someone contributes a patch you actually review and integrate it quickly.
  • When releasing version 1.0, commit to as small a subset of your API as possible and review it thoroughly: whatever is left in, you will have to support for some time to come.
  • Analyse what part of your API your users use most: is it the low level API or the convenience API? If an API turns out to be wrong, don't hesitate to deprecate it.
  • Don't develop on the master branch: it needs to build at all times.
  • Resist the urge of changing an implementation too often.
  • Plan for next minor release but also the next major one.

That's quite a list and I agree with all of it!

Multitouching Your Apps, by Carlos Garnacho

A very interesting talk by Carlos on how to integrate multi-touch capabilities into GNOME applications. There is a whole new API for this, which basically allows you to specify the device to use rather than assuming that a single keyboard and mouse are connected. The low-down is:

  • Each finger gets a device;
  • Devices emit individual events;
  • There is a need to gather and correlate events:
    • obtain an event packet,
    • calculate the distance and relative angle between 2 events,
    • use GtkDeviceGroup and multi-device-event signal.

If you want to play with it, you will require XOrg 7.5 and you can get the code through git:

$ git clone -b xi2-playground git://

GNOME 3 and Your Application, by Dan Winship and Colin Walters

A quick introduction on the changes brought by GNOME 3. My take on it:

  • No GNOME 2 APIs will be removed, so GNOME 2 applications should keep working with GNOME 3. However, you will need to use a set of new APIs to take full advantage of GNOME 3.
  • The .desktop file will be much more central to the application: it will act as a unique ID for it and provide more features. In particular, make sure to include the StartupNotify=true option.
  • Provide high resolution icons as icons will be displayed much larger.
  • There will be less chrome in application windows.
  • No more tray icons (well, they will be done differently).
  • Notification tray at the bottom.
  • No more libnotify.

Open Design Thinking Workshop, by Andrea Scheer, Sophia Klees, Andreas Weigt, Martin Steck and Clemens Buss

Thanks to Andrea Scheer, Sophia Klees, Andreas Weigt, Martin Steck and Clemens Buss for a very inspiring workshop! I failed miserably at my task in the warm-up LEGO exercise. We all had a lot of fun and I think we came up with interesting concepts. If I find the time, I'd love to try to implement some of what the team I was working in came up with.

Tracker's Place in the GNOME Platform, by Martyn Russell

Tracker has evolved a lot since version 0.6 (the version available in the Ubuntu Lucid repositories) and is a lot less resource intensive than it used to be. It provides applications with the ability to share all sorts of data and is standards based. It also has a brand new query interface in the form of tracker-sparql, which looks very powerful.

In practice, the Tracker database provides functionality very similar to that of Desktop Couch, which is installed by default in Ubuntu. Tracker does not provide the replication facility but offers more comprehensive search facilities. The other difference is that Tracker is designed to store non-application-specific data (such as metadata on an image, which should be available to any image management or manipulation program), while Desktop Couch stores application-specific data, such as preferences, as well as non-application-specific data. I don't think the two are incompatible though, and creating a Desktop Couch back-end for Tracker could be very interesting, providing a solution that is greater than the sum of its parts.

Identifying Software Projects and Translation Teams in Need, by Andre Klapper

Andre presented an interesting research he did based on code commits and bug activity to try to identify projects and translation teams that were in need of help in GNOME, the results of which can be found online. It still leaves a lot of questions on how to help those projects but identifying them is an essential first step.

Embracing the Web: Integrating Web Services within GNOME, by Rob Bradford

Rob presented libsocialweb, which is used in MeeGo to provide integration with web services such as Facebook and flickr. And there were demos aplenty: very entertaining! Oh, and GeoClue looks really cool too.

GNOME Color Manager: Exploring the User Experience and Integration Points for a 100% Colour Managed Desktop, by Richard Hughes

For professional graphic artists, it is essential to have proper colour management throughout the complete image workflow, from the source (camera or scanner) to the output (printer or PDF file). OS-X does that to perfection, Windows 7 does it but not very well, Linux doesn't do it at all. Richard proposes a set of tools and libraries to change this and make the GNOME (and possibly KDE) desktop fully colour managed. Great stuff!

Cairo: 2D in a 3D World, by Christopher Paul Wilson

Maybe it was burnout from 3 days of conference but I found this talk the least interesting of all. All I took out of it was that there are a lot of reasons why Cairo hasn't released a new version for a long time; that if Cairo is slow, it's because the drivers are broken or developers aren't using it properly; and that it's all going to get better very soon now that they're moving to a 6-month release cycle.

Anyway, moving on swiftly...

Growing Communities with Launchpad: Ubuntu and GNOME, by Danilo Segan

Danilo's talk was all about how to use Launchpad better to ensure that we can get related communities (such as Ubuntu and GNOME) to work together better. It concentrated particularly on bugs and translations. I think that, as a result of a question I asked, I volunteered myself to contribute code to Launchpad in order to simplify forwarding bugs upstream.

The Parties

Nearly as important as the talks were the parties: it was the opportunity to meet people I'd only ever heard of or talked to via email or IRC.

Canonical Party

The indefatigable Jorge Castro made sure that the party was a success. The music was a bit loud at times but considering it was held in a night-club, it was to be expected. It also proved that the best DJ in the world would find it hard to get a group of geeks to dance.

Collabora Party

The beach party organised by Collabora was great. Less loud music than the Canonical one and a nice barbecue. It would have been perfect if the weather hadn't been cold and windy. We also got to be initiated in the correct Dutch etiquette regarding the use of satay sauce at barbecues: just drown everything in the stuff.

GUADEC After-party at Revelation Space

This hacker space is just great and it was a nice way to wind down.

Shotwell Hacking

It was a pleasure to meet Adam, Jim, Rob and Lucas from Yorba while at GUADEC. We talked a lot about Shotwell and with Jim's help I finalised the code to migrate photos from F-Spot, which Jim committed to trunk. There are a couple of follow-up bugs that were reported by early testers but the bulk of the functionality made it in before the string freeze and the rest should be sorted in time for the release of Shotwell 0.7, which will then make it into Ubuntu 10.10.

Next Year

Next year, GUADEC and Akademy will join forces for a desktop summit in Berlin.

Friday, 23 July 2010

Converting OpenOffice Documents in Bulk

I had a request from a customer earlier this week: they wanted a copy of all the diagrams that are present in a specification I've been writing for them. All those diagrams are in separate OpenOffice Draw (.odg) files. They don't use OpenOffice but were happy to have PDF versions. The only problem is that there are 42 of them so it would take ages to convert them manually. A quick Google later and I found a way to convert documents from the command line.

So, to adapt it to Ubuntu, download the file mentioned in the post above and store it where your documents are. Then, start OpenOffice in headless mode:

$ ooffice -headless -accept="socket,port=8100;urp;" &

As the version of Python installed on Ubuntu already includes the UNO bindings, there is no need to use a special OpenOffice version of Python to do the job, the standard one will do. The script takes two arguments, the input file and the output file, and works out the formats based on the extensions. Doing the bulk convert is therefore extremely easy:

$ for f in *.odg; do
>   echo "$f"
>   python DocumentConverter.py "$f" "${f%.*}.pdf"
> done
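The `${f%.*}` expansion is what swaps the extension: it strips the shortest trailing match of `.*` from the variable's value. A quick sketch:

```shell
# ${f%.*} strips the shortest trailing match of ".*" (the extension),
# so appending .pdf yields the output file name:
f="diagram.odg"
echo "${f%.*}.pdf"    # prints: diagram.pdf
```

The same trick works for any extension swap, e.g. `${f%.*}.png`.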

Job done! It took 5 minutes for my laptop to convert all 42 files, during which time I made coffee rather than repetitively clicking on UI buttons, and I even had enough time left to blog about it. Oh, and I have a happy customer: that's the most important thing.

Friday, 9 July 2010

Negative Unread Messages

Here's what LinkedIn had to say about my message summary today:

 Invitations(0),  Unread Messages(-1),  See all messages »

So I have a negative number of unread messages... Does it mean I've got to unread one of them to bring the counter back to zero?

Saturday, 29 May 2010

Installing OpenERP on Ubuntu 10.04 LTS


It's that time of the year again. I need to prepare all the information required by my accountant to issue my company accounts. Luckily I have most of it saved in a handy directory on my laptop, with a backup copy on the server. However, it still takes a lot of time that I'd rather be spending doing something else. Accounts are not the only process that I'd like to improve: sales, expenses, invoicing, everything that is not part of my daily job is done in an ad-hoc way. This is exactly what ERP systems are designed to address. And as usual, there is an open source solution out there, one that is even fully supported on Ubuntu: OpenERP. Let's install it then!

Before we start, OpenERP is a client-server solution and as such there are two components to consider:

OpenERP Server
This is a central component that is deployed on a central server and is responsible for managing the company database. You should have a single instance of it for the whole company.
OpenERP Client
This is a desktop application that enables users to view and update the data held in the server. It should be installed on every computer that needs access to the ERP system.

In addition, OpenERP also offers another server component called OpenERP Web. This component is meant to be deployed on the same server as OpenERP Server and offers a web interface to the ERP system, meaning that you can access the system using a vanilla web browser, rather than the dedicated client application. This is great if you have a large number of users and don't want to deploy the dedicated client everywhere. I will ignore this component for today and only go through the installation of the server and dedicated client.

OpenERP Server

To install the server component, you need a computer that will act as a server. You could potentially install it on your desktop or laptop if you are sure that you will only ever be the single user of the application but I wouldn't recommend it. In my case, I decided to install it on my existing home server that runs Ubuntu Server 10.04 LTS.

Install PostgreSQL

OpenERP needs a database engine and is designed to run with PostgreSQL so we need to install this first. To do this, connect to the server and just install the relevant package, as detailed in the OpenERP manual:

$ sudo apt-get install postgresql
Create an openerp user in PostgreSQL

The server will need a dedicated user in the database so we need to create it. To do this, we first need to start a session under the identity of the postgres Linux user. We can then create the database user that we will name openerp. When prompted, enter a password and make sure you remember it. There is no need for that user to be an administrator so we answer n when asked that question. Then close the postgres session to come back to your standard Linux user.

$ sudo su - postgres
$ createuser --createdb --username postgres --no-createrole \
 --pwprompt openerp
Enter password for new role: 
Enter it again: 
Shall the new role be a superuser? (y/n) n
$ exit

Now that this new user is created, let's try to connect to the database engine using it:

$ psql -U openerp -W
psql: FATAL:  Ident authentication failed for user "openerp"

It doesn't work. This is because PostgreSQL uses IDENT-based authentication rather than password-based authentication. To solve this, edit the client authentication configuration file:

$ sudo vi /etc/postgresql/8.4/main/pg_hba.conf

Find the line that reads:

local all all ident

Replace the word ident with md5:

local all all md5

And restart the database server:

$ sudo /etc/init.d/postgresql-8.4 restart
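If you prefer to script this step, the same edit can be made non-interactively with sed. This is just a sketch: the pattern assumes the stock `local all all ident` line layout, and `-i.bak` keeps a backup of the original file.

```shell
# Swap the ident method for md5 on the "local all all" line
sudo sed -i.bak 's/^\(local[[:space:]]\{1,\}all[[:space:]]\{1,\}all[[:space:]]\{1,\}\)ident/\1md5/' \
    /etc/postgresql/8.4/main/pg_hba.conf
```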

Let's try again:

$ psql -U openerp -W
psql: FATAL:  database "openerp" does not exist

OK, that's a different error, due to the fact that PostgreSQL tries to connect to a database with the same name as the user if none is specified. So let's try connecting to the postgres database instead, which contains system information and is always present.

$ psql -d postgres -U openerp -W
psql (8.4.4)
Type "help" for help.

That works so we know that the user has been successfully created and there should be no problem connecting with that user identity.

Install OpenERP Server

OpenERP Server is part of the Ubuntu repositories so it's extremely easy to install:

$ sudo apt-get install openerp-server

Now is a good time for a coffee break. On my installation, the openerp-server package triggers the installation of no less than 108 packages in 48.9MB of archives that will require an additional 251MB of disk space. This may take some time. Most of the additional packages are Python packages, which is sensible because OpenERP is written in Python but the fact that it also requires things like xterm and bits of GTK makes me think that the dependency list could be pruned somewhat. At the end of the installation, a message appears saying that you should go and read the file called /usr/share/doc/openerp-server/README.Debian. Go and do this. It mainly explains the PostgreSQL installation that we just did but it also mentions a bit of useful information, namely that the OpenERP Server configuration file is /etc/openerp-server.conf. That's quite handy because we need to update it.

$ sudo vi /etc/openerp-server.conf

If you are installing the server and client on different machines, which I would recommend, you need to find the line that says:

interface = localhost

And replace the word localhost with the IP address of your server. If you don't know the IP address of the server, just run ifconfig with no parameters and look for the words inet addr: at the beginning of the second line of output: the IP address is the set of four numbers separated by dots that comes just after that. You then need to modify another two lines:

dbpassword = the password you chose for the openerp user
dbhost = localhost
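If you'd rather not eyeball ifconfig's output for the server address, a sed one-liner can extract it; this assumes the classic `inet addr:` output format of that era, and `eth0` is just an example interface name.

```shell
# Print just the IPv4 address of eth0
ifconfig eth0 | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p'
```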

In theory you shouldn't have to specify dbhost = localhost because leaving that entry blank should default to the value localhost but when I tried this, OpenERP Server could not connect to PostgreSQL. It's now time to restart the server process but before we do this, it's always good to be able to follow the logs in a second window just in case something goes wrong. The location of the log file is helpfully specified in the configuration file that we just edited. Open another terminal, connect to the server and run the following command:

$ tail -f /var/log/openerp-server.log

The last 10 lines of log will appear and every time a new entry is added to the log file, it will also appear in this window. In the first window, we can restart the server:

$ sudo /etc/init.d/openerp-server restart

If all goes well, no nasty error message should appear in the log window.

OpenERP Client

Now, to install the client software on a desktop, connect to the desktop, open a terminal and install the package:

$ sudo apt-get install openerp-client

That's it. Once finished, you will find an OpenERP Client entry in the Applications -> Internet menu. Click on it to open the client. If you want to fill in the feedback form, do so, otherwise click Cancel. You then need to create a new database. To do this, go to the File -> Databases -> New Database menu:

New Database menu

This will open a dialogue where you can enter the details of the new database.

New Database dialogue

You will need to click on the Change button at the top to specify the name or IP address of the server. The port should be the default, 8070. Of course, if you have a firewall between the client and the server, the firewall configuration will need to be updated to allow traffic to the server through this port. The default password for the super administrator is admin as specified in the dialogue. Note that the database name cannot contain spaces, dashes or any non-alphanumeric characters.

So that's OpenERP installed. It's now time to read the rest of the documentation and get a copy of Accounting for Dummies to understand what it's all about.

Backup and Restore a Subversion Repository

Sometimes, you just have to do serious maintenance to a server, like install a new instance of your operating system from scratch with a new partition layout. If this server happens to be your Subversion server, you need to back up your repository and restore it once you're done with the maintenance. Or maybe you just want to move your repository from one server to another. Here's how to do it. The commands below need to be performed on the SVN server by a user who has write access to the repository (in theory, any SVN administrator).

First, here's how to back up the repository to a compressed file:

$ svnadmin dump /path/to/repo | gzip > backup.gz

And how to restore it:

$ gunzip -c backup.gz | svnadmin load /path/to/repo

Those commands are meant for UNIX or Linux so you will have to adapt them if you are running Windows. It shouldn't be too difficult to do so, especially if you are using Cygwin.
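Whichever platform you are on, it is worth sanity-checking the dump file before relying on it. gzip can verify its own framing without extracting anything; note that this only checks the compressed file is intact, not the SVN content itself (`svnadmin verify` does that, after the load).

```shell
# Verify the compressed dump's integrity without extracting it
gzip -t backup.gz && echo "archive OK"
```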

Thursday, 29 April 2010

Shared Folders in Ubuntu with setgid and ACL


There is an often-requested feature on Linux (or UNIX): the ability to create shared directories similar to what is possible in Windows, that is, a directory in which every person who has been given access can read, write or modify files. However, because Linux file systems such as ext4 enforce file permissions that are stricter than Windows file systems such as FAT or NTFS, creating such a directory is not obvious. Of course, if you put your shared directory on a FAT or NTFS partition, it will automatically behave just like in Windows, but that requires a separate partition and doesn't allow you to enforce permissions on a per-group basis. So here's a quick guide on how to do this with Ubuntu. The same principles apply to other Linux distributions, so the instructions should be portable.

Use Cases

Let's go through a couple of classic use cases first, to identify exactly what we want to do.

Project Folder

In a company or university setting where users are assigned to project teams or departments, it can be useful to create shared folders where all members of the team can drop files that are useful for the whole team. They need to be able to create, update, delete files, all in the same folder. They also need to be able to read, update or delete files created by other members of the team. However, users external to the team should only have read access.

Web Development

For anybody doing web development on Linux, a classic problem arises with development or test web servers. The default web server process runs as the www-data user and the document directory is owned by the same user. It would be great if all web developers on the team were able to update the document directory on the server without requiring root access to do so.

Linux Default Behaviour

Linux has the concept of user groups. You can check what groups your user belongs to by typing the following on the command line:

$ groups
bruno adm dialout cdrom plugdev lpadmin admin sambashare

On a default Linux installation, groups are used to give different users access to specific features, such as the ability to administer the system or use the CD-ROM drive. But one of the core features of user groups is to support file permissions. Each file has separate sets of read, write and execute permissions for the user who owns the file, the group that owns the file, and others, that is, everybody else. Whenever a user attempts to read, write or execute a file, the system decides whether they can do so based on the following rules:

  • if the user is the owner of the file, user permissions apply,
  • otherwise, if the user is part of the group that owns the file, group permissions apply,
  • otherwise, others permissions apply.
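These three permission classes are easy to see in action. For instance, mode 640 gives the owner read/write, the group read only and others nothing (example.txt is just a scratch file for the demonstration):

```shell
touch example.txt
chmod 640 example.txt     # rw- owner, r-- group, --- others
stat -c %A example.txt    # prints: -rw-r-----
```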

So to configure a shared directory as defined above, we need to:

  • create a user group for the team,
  • assign all team member users to that user group,
  • create a directory and configure it so that all users in the group can:
    • add new files to the directory,
    • modify any existing file in the directory,
  • and of course, all this should work without users having to do anything special.

How To

Enable ACL

The first thing we need to do is enable ACL support on the partition where we will create the shared directory. ACLs extend the basic Linux permission implementation to enable more fine-grained control. As this requires the file system to store more permission meta-data against files, it needs to be configured accordingly. We can do this by adding the acl option to the relevant line in /etc/fstab, such as:

UUID=b8c490d0-0547-4e1f-b052-7130bacfd936 /home ext4 defaults,acl 0 2

The partition then needs to be re-mounted. If the partition to re-mount is /, /usr or /home, you will probably need to restart the machine. Otherwise, the following commands should re-mount the partition:

$ sudo umount partition
$ sudo mount partition

where partition is the mount point of the partition as defined in /etc/fstab, such as /var/www.

Create Group

We then need to create the group to which we will give shared access, let's call that group teamgroup:

$ sudo groupadd teamgroup

Try to give the group a meaningful name while keeping it short. If it's meant to be a team group, give it the name of the team, such as marketing. Note the following restrictions on Debian and Ubuntu for group names (taken from the man page):

It is usually recommended to only use groupnames that begin with a lower case letter or an underscore, followed by lower case letters, digits, underscores, or dashes. They can end with a dollar sign. In regular expression terms: [a-z_][a-z0-9_-]*[$]?

On Debian, the only constraints are that groupnames must neither start with a dash (-) nor contain a colon (:) or a whitespace (space: , end of line: \n, tabulation: \t, etc.).

Groupnames may only be up to 32 characters long.
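Before running groupadd, you can check a candidate name against the recommended pattern quoted above with grep (marketing is the example name from earlier):

```shell
# Prints "ok" if the name matches the recommended pattern
name="marketing"
echo "$name" | grep -Eq '^[a-z_][a-z0-9_-]*\$?$' && echo "ok"
```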

We then need to assign users to that group:

$ sudo usermod -a -G teamgroup teamuser

Where teamuser is the login name of the user to assign to the group. This assignment will take effect the next time the user logs in. Make sure that you do not forget the -a option, otherwise you will wipe out all existing group assignments for that user rather than just adding a new one.

Create the Folder

The next step is to create the shared folder. This is easy:

$ cd /path/to/parent
$ mkdir teamfolder

Where /path/to/parent is the path to the parent folder and teamfolder is the name of the folder you want to create. We then assign group ownership of the folder to the group previously created:

$ chgrp teamgroup teamfolder

And give write access to the group on that folder:

$ chmod g+w teamfolder

Let's check what this folder looks like:

$ ls -l
drwxrwxr-x 2 teamuser teamgroup 4096 2010-03-03 14:32 teamfolder

Now, let's try to create a new file in that directory:

$ touch teamfolder/test1
$ ls -l teamfolder
-rw-r--r--  1 teamuser teamuser 5129 2010-03-03 14:34 test1

That looks good and any other user who is part of teamgroup should be able to create files in this directory. However, group members will not be able to update files created by other members of the group for the following reasons:

  • the group that owns the file is the user's primary group, rather than teamgroup,
  • the file's permissions only allow the owner of the file to update it, not the group.

Set the setgid Bit

We'll solve the first problem by setting the setgid bit on the folder. Setting this permission means that all files created in the folder will inherit the group of the folder rather than the primary group of the user who creates the file.

$ chmod g+s teamfolder
$ ls -l
drwxrwsr-x 2 teamuser teamgroup 4096 2010-03-03 14:32 teamfolder

Note the s in the group permissions instead of the x that was there previously. So now let's try to create another test file.

$ touch teamfolder/test2
$ ls -l teamfolder
-rw-r--r--  1 teamuser teamuser  5129 2010-03-03 14:34 test1
-rw-r--r--  1 teamuser teamgroup 5129 2010-03-03 14:35 test2

So now whenever a file is created in the team directory, it inherits the team's group.
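If you want to convince yourself of the setgid behaviour without touching the real team folder, you can replay it in a throwaway directory; all the names below are disposable:

```shell
# Replay the setgid setup in a scratch directory
tmp=$(mktemp -d)
mkdir -m 775 "$tmp/teamfolder"
chmod g+s "$tmp/teamfolder"

# The octal mode now shows a leading 2: that is the setgid bit
stat -c %a "$tmp/teamfolder"   # 2775

# Files created inside inherit the directory's group (here the directory
# already belongs to our own group, so this just shows the mechanism)
touch "$tmp/teamfolder/test"
stat -c %G "$tmp/teamfolder/test"

rm -rf "$tmp"
```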

Set Default ACL

The second issue is related to umask, the default mask applied when creating files and directories. By default umask is set to the octal value 0022, as demonstrated if you run the following:

$ umask

This is a negative mask applied to the octal permission value of every file or directory created by the user. By default, a file is created with permissions rw-rw-rw-, equivalent to 0666 in octal, and a directory is created with permissions rwxrwxrwx, equivalent to 0777 in octal. umask is then subtracted from that default to give the effective permissions with which files and directories are created (strictly speaking, the mask's bits are cleared rather than arithmetically subtracted, but for the usual values the two views coincide). So for a file, 0666-0022 gives 0644, equivalent to rw-r--r--, and for a directory, 0777-0022 gives 0755, equivalent to rwxr-xr-x. This default is sensible for most situations but needs to be overridden for a team directory. The way to do this is to assign specific ACL entries to the team directory. The first thing to do is to install the acl package to obtain the necessary command line tools. Well, in fact, the first thing to do would be to enable acl on the relevant partition, but we already did that at the very beginning.
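The umask arithmetic is easy to verify in a throwaway directory; this sketch forces the default mask inside a subshell so your own shell's settings are untouched:

```shell
# Verify the default umask arithmetic without changing your shell's mask
(
  umask 0022
  tmp=$(mktemp -d)
  touch "$tmp/file"
  mkdir "$tmp/dir"
  stat -c %a "$tmp/file"   # 644 (0666 - 0022)
  stat -c %a "$tmp/dir"    # 755 (0777 - 0022)
  rm -rf "$tmp"
)
```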

$ sudo apt-get install acl

Now that the package is installed, we have access to the setfacl and getfacl commands. The first one sets ACLs, the second one reads them. In this particular case, we need to set default ACLs on the team folder so that those ACLs are applied to files created inside the directory rather than the directory itself. The syntax is a bit complicated: the -d option specifies that we want to impact the default ACLs, while the -m option specifies that we want to modify the ACLs and expects an ACL specification to follow.

$ setfacl -d -m u::rwx,g::rwx,o::r-x teamfolder
$ touch teamfolder/test3
$ ls -l teamfolder
-rw-r--r--  1 teamuser teamuser  5129 2010-03-03 14:34 test1
-rw-r--r--  1 teamuser teamgroup 5129 2010-03-03 14:35 test2
-rw-rw-r--  1 teamuser teamgroup 5129 2010-03-03 14:36 test3

There we go, it all works as expected: new files created in the team folder are created with the team's group and are group writeable. To finish off, let's have a look at how the folder's ACLs are stored:

$ getfacl teamfolder
# file: teamfolder
# owner: teamuser
# group: teamgroup
# flags: -s-
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:other::r-x

Granting and Revoking Access

Granting a user write access to the team folder is now extremely easy: just add that user to the team's group when they join the team:

$ sudo usermod -a -G teamgroup joiner

Where joiner is the user ID of the user joining the team. Revoking access is nearly as easy: you just need to remove the user from the team's group. This can be done in a single command:

$ sudo gpasswd -d leaver teamgroup

Where leaver is the user ID of the user leaving the team. Alternatively, you can edit the file /etc/group, find the group and remove the user ID from its line.


Restrict Delete and Rename to Owner

By default, any user who has write access to a directory can delete or rename the files it contains, whoever owns them. This means that any member of the team can delete or rename any file created by another member. This is generally fine, but if it is not, deleting and renaming can be restricted to the file's owner by setting the sticky bit on the directory:

$ chmod +t teamfolder
$ ls -l
drwxrwsr-t 2 teamuser teamgroup 4096 2010-03-03 14:32 teamfolder

This feature is used on the /tmp directory to ensure that all files created in that directory can only be deleted by their owners.
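You can see this without root in a scratch directory; on most Linux systems /tmp itself shows mode 1777, i.e. world-writable plus the sticky bit:

```shell
# /tmp usually carries the sticky bit: mode 1777
stat -c '%a %n' /tmp

# Reproduce it on a scratch directory: the leading 1 is the sticky bit
tmp=$(mktemp -d)
mkdir -m 775 "$tmp/teamfolder"
chmod +t "$tmp/teamfolder"
stat -c %a "$tmp/teamfolder"   # 1775
rm -rf "$tmp"
```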

Restrict Access for Others

Another variation that may be more useful is to completely deny access to users who are not part of the team. It may be that a particular team is working on some sensitive stuff and you don't want anybody outside the team to see it. To do this, we just revoke all permissions and ACLs for others on the team folder:

$ chmod o-rx teamfolder
$ setfacl -d -m o::--- teamfolder
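The effect of the chmod part can again be replayed in a scratch directory (the setfacl part needs the acl package and an acl-enabled filesystem, so it is omitted from this sketch):

```shell
# Replay the chmod part on a scratch directory
tmp=$(mktemp -d)
mkdir -m 775 "$tmp/teamfolder"
chmod g+s "$tmp/teamfolder"

# Revoke read and execute for others: mode drops from 2775 to 2770
chmod o-rx "$tmp/teamfolder"
stat -c %a "$tmp/teamfolder"   # 2770
rm -rf "$tmp"
```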


Saturday, 17 April 2010

Ubuntu Lucid Netbook Remix from Alternate CD

The Problem

I've had Ubuntu Netbook Remix running on my Asus EeePC 701 for some time. After upgrading my main laptop to the beta 2 of Ubuntu Lucid 10.04, I wanted to do the same to the EeePC so that I could test the new release, report bugs if I found any and generally benefit from the improvements in 10.04.

The first problem I faced was that the EeePC 701 I have only has 4GB of internal storage, which is just enough for Ubuntu but too little to enable me to do a straight upgrade from the previous version (Karmic 9.10). No problem, I thought, I can re-install from scratch as I really have nothing important on that machine.

I created a flash drive image from the UNR CD, as detailed on the Ubuntu web site, booted my EeePC from it and selected to install Ubuntu on the system. However, I then faced another problem related to the limitations of the 701 model: it has a 7-inch screen with a resolution of 800x480 pixels, which is quite small, and the Prepare Disk Space screen doesn't fit in that resolution, making it impossible to complete the installation. I duly reported the bug and wondered how I could work around the problem.

Installing from the Alternate CD

One of the great things about Linux is that you always have a lot of options. Ubuntu in particular also comes with an alternate installer, which is text based and designed to work on very constrained hardware. Exactly what I needed!

I downloaded the alternate CD, created a flash drive image from it, booted the EeePC from it and started installing Ubuntu. The installation from the alternate CD is extremely easy and follows exactly the same path as with the standard CD, with the difference that you don't have the flashy graphical user interface. At the end of the installation, I had a perfectly functional Ubuntu system using the standard desktop. However, what I wanted was the netbook desktop.

From Standard to Netbook Desktop

The Ubuntu netbook desktop looks very different from the standard one but in practice it is just made up of a small number of packages.

Adding the Netbook Packages

Adding the netbook packages is extremely easy. Open a terminal and run the following command:

$ sudo apt-get install go-home-applet maximus\
 netbook-launcher window-picker-applet

Adding Launcher and Maximus to Application Startup

Once this is done, you will need to ensure the netbook launcher and maximus applications are started automatically. To do this, go to System -> Startup Applications and add two entries with the following commands:

  • netbook-launcher
  • maximus

Make sure that both can be started by starting them manually, either in a terminal window or via ALT-F2. Then, disable the desktop to ensure you don't accidentally click on it when you want to start an application:

$ sudo gconftool-2 --type bool\
 --set /apps/nautilus/preferences/show_desktop false

Customising the Gnome Panels

You then need to customise the Gnome panels.

Right click on the top panel, where the Applications menu is and remove it. Do the same with the Firefox and Help icons. Then right-click again, in an empty area, and select "Add to Panel"; add the Go Home Applet, which will add a small Ubuntu icon. Right click on that icon again, select "Move" and move it all the way to the left.

Right click on an empty area of the top panel and select "Add to Panel"; add the Window Picker Applet and move it to the left so that it is flush left against to Go Home Applet.

Finally, right-click on the bottom panel and select "Delete this Panel".

Log out, log back in again and you should have a full netbook desktop that looks something like this:

Lucid Netbook Desktop

At first, the favourites folder will be empty, so you will have to add your favourite applications yourself.

Final Changes

One final change I made to the installation was to remove Evolution, which really doesn't work well on so small a screen, along with one other large package in order to reclaim hard disk space.

Other Options

An alternative option to install all the required packages is to install the netbook meta-package:

$ sudo apt-get install ubuntu-netbook

I didn't try this so I don't know how much it installs and configures for you. One of the nice side effects of doing it the way I did is that it keeps some standard configuration, such as the multiple workspaces that you can access via CTRL+ALT+arrow keys. I find that this works particularly well with the netbook interface.

Wednesday, 17 March 2010

SVG and HTML5 Support in IE9

The Register has an article today about the next version of Microsoft Internet Explorer, IE9. You can already download a preview and apparently it looks nice and has support for SVG and HTML5. At last! I hear you say, only 2 years after the rest of the pack. The reason why I find this very exciting is because of SVG rather than HTML5. This is the type of technology that could be very useful in all sorts of areas where Flash is used today, as demonstrated by Microsoft's business charts and map examples.

So what can you do with SVG that you can't do with Flash? In a nutshell, here is my short list:

  • You can style SVG using CSS, the same as HTML, so if you want to change the look and feel of your site, you can restyle text and graphics in one go;
  • It is a W3C standard, so anybody can provide an implementation and you are not tied to a particular company;
  • It is a dialect of XML, so anybody with a text editor can create SVG data;
  • As a dialect of XML, it can be manipulated by every tool that handles XML and HTML, including web-based languages like PHP, so it is extremely easy to generate on the fly and integrate into a web page;
  • You can use AJAX technologies with it the same way you do with HTML.

I'm sure there are others but that's all I can think of at 1:30am. Generally speaking, the huge advantage of SVG is that it can re-use the complete HTML tool chain, which offers a huge variety of tools both open source and commercial.
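As a toy illustration of the "dialect of XML" point, here is a sketch that generates a minimal SVG file from a shell script and inspects it with ordinary text tools; any server-side language could emit the same markup straight into a page:

```shell
# Generate a minimal, valid SVG document: a red circle with a label
cat > circle.svg <<'EOF'
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="red"/>
  <text x="50" y="55" text-anchor="middle">SVG</text>
</svg>
EOF

# Because it is plain XML, standard text tools can inspect or transform it
grep -c '<circle' circle.svg   # 1
```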

Tuesday, 16 March 2010

Fractals with Octave: The Whole Shebang

After all this time writing this simple series on how to create fractals with Octave, I finally got to the end of it and published the code on GitHub, so here is a nifty summary.


Each article has a link to the previous and following ones so you can just start with the first one and follow up to the last one.


The code is available on GitHub and can be downloaded from the project page, where you will also find some installation instructions.

If you're interested, please download the code and if you use it to create stunning images, I'd love to hear from you. If you think you can improve the code, don't hesitate to suggest changes or fork it.


To quote the project page, none of this was created in a vacuum. I used a number of offline and online references, so if you are interested in the subject of fractals, you could do a lot worse than checking them out.

Thursday, 4 March 2010

The Art of Database Analysis, Take 2

Thanks to Ed, I found this interesting post on graphs and decided to adapt it to my previous database graph. So I now have a version of that script that uses the pygraph library:

Database PyGraph

Database Graph

I like this one better, it looks a lot more organic.

Saturday, 27 February 2010

Fractals with Octave: Original Function Studied by Gaston Julia

The story so far

After another long pause, here is the sixth instalment of this series on fractals with Octave. In this article, we have a look at the complex series that started it all, the one that Gaston Julia was interested in.

Once Upon A Time: The Original Fractal Function

The classic Mandelbrot set we all know and love is derived from quite a simple series, described in the very first article of this series. Gaston Julia was interested in a slightly more complex series, as explained by Paul Bourke:

z_{n+1} = z_n^4 + z_n^3/(z_n - 1) + z_n^2/(z_n^3 + 4z_n^2 + 5) + c

The initial condition is:

z_0 = 0

The Mandelbrot Set

We can produce a Mandelbrot set of that equation with the mandelbrot function that we created in previous articles.

octave:1> Mj=mandelbrot(-1.4+1.05i,1.4-1.05i,320,64,
> @(z,c) z.^4.+z.^3./(z-1)+z.^2./(z.^3.+4*z.^2.+5).+c);
octave:2> imagesc(Mj)

And here is the result:

Original Julia Series, Mandelbrot Set

We can then zoom on the bottom right corner of that image.

octave:1> Mjz=mandelbrot(0.25-0.65i,0.45-0.8i,320,64,
> @(z,c) z.^4.+z.^3./(z-1)+z.^2./(z.^3.+4*z.^2.+5).+c);
octave:2> imagesc(Mjz)

Original Julia Series, Mandelbrot Set x10

And again.

octave:1> Mjzz=mandelbrot(0.378-0.725i,0.398-0.74i,320,64,
> @(z,c) z.^4.+z.^3./(z-1)+z.^2./(z.^3.+4*z.^2.+5).+c);
octave:2> imagesc(Mjzz)

Original Julia Series, Mandelbrot Set x100

And again!

octave:1> Mjzzz=mandelbrot(0.3863-0.7314i,0.3883-0.7329i,320,64,
> @(z,c) z.^4.+z.^3./(z-1)+z.^2./(z.^3.+4*z.^2.+5).+c);
octave:2> imagesc(Mjzzz)

Original Julia Series, Mandelbrot Set x1000

The Julia Set

Now, let's have a look at the Julia set for the same series and for c=0.3873-0.7314i, which is the point towards which we zoomed in the Mandelbrot set.

octave:1> Jj=julia(-1.2+1.6i,1.2-1.6i,240,64,0.3873-0.7314i,
> @(z,c) z.^4.+z.^3./(z-1)+z.^2./(z.^3.+4*z.^2.+5).+c);
octave:2> imagesc(Jj)

Original Julia Set, c=0.3873-0.7314i

Let's zoom into that picture.

octave:1> Jjz=julia(0.2+0.1i,0.4-0.05i,320,64,0.3873-0.7314i,
> @(z,c) z.^4.+z.^3./(z-1)+z.^2./(z.^3.+4*z.^2.+5).+c);
octave:2> imagesc(Jjz)

Original Julia Set, c=0.3873-0.7314i, x10

And again.

octave:1> Jjzz=julia(0.366+0.09i,0.386+0.075i,320,64,0.3873-0.7314i,
> @(z,c) z.^4.+z.^3./(z-1)+z.^2./(z.^3.+4*z.^2.+5).+c);
octave:2> imagesc(Jjzz)

Original Julia Set, c=0.3873-0.7314i, x100

And again!

octave:1> Jjzzz=julia(0.366+0.09i,0.386+0.075i,320,64,0.3873-0.7314i,
> @(z,c) z.^4.+z.^3./(z-1)+z.^2./(z.^3.+4*z.^2.+5).+c);
octave:2> imagesc(Jjzzz)

Original Julia Set, c=0.3873-0.7314i, x1000

The End

This is the last article in this series. I hope you've enjoyed it, even if it took me a very long time to finish it. Have a play with the Octave functions described throughout the articles, modify them, improve them, there's a lot of fun stuff to do and amazing pictures to create with fractals.