Showing posts with label ubuntu.

Sunday, 7 November 2010

Reading Paradise... or DRM Hell?

New Toy

Yesterday, I bought myself a Sony PRS-650 Reader Touch Edition as a belated birthday present. I could have bought the new Kindle, as there are lots of adverts for it on the Tube, it's cheaper and it has Wi-Fi. I went for the Sony PRS reader instead because I don't really need Wi-Fi and it has a touch screen, which means it's not encumbered by a keyboard and is therefore a lot smaller for the same screen size (6" display). The Sony device really does fit in a pocket, as it's about the size of a very thin paperback. If you want even smaller, you can get the Pocket edition, a.k.a. PRS-350.

So the first thing I did was go to Waterstone's as I know they stock the PRS-650. Unfortunately, between two shops I visited, they had 5 demo devices, only one of which seemed to work and no staff in sight to help. So I ended up going to the Sony Centre in Tottenham Court Road, where I was served by very helpful staff who were happy to answer any question and demo the device. What's more, if you buy the device from Sony direct, you can also have it in a very nice red colour that is much less boring than the black and grey offered by Waterstone's.

Back home, I installed calibre from the Ubuntu repositories and, lo and behold, my new toy was immediately recognised and supported out of the box! On a side note, the connection is a standard Micro-B USB connector, which means that the cable to connect it to the computer and charge it is the same as with my Nokia N900.

But getting the device to work is only the start: you then need to load it with books. The PRS-650 supports a variety of file formats and, whatever it doesn't support, calibre should be able to convert to EPUB. So you basically need to find books to download. Here's a quick list of what I tried.

The Good

  • The first thing to do when you get your Sony reader is to register it on My Sony; you will then be able to download 100 free classic books in Sony's own format, including titles like Don Quixote, Gulliver's Travels, The Importance of Being Earnest or David Copperfield.
  • If you want more classics, Project Gutenberg is the place to go to: thousands of books in a variety of languages for which the copyright has expired.
  • If you are into science fiction, Baen Publishing offer most of their titles as non-DRM e-books that you can then load into calibre. They even offer some of them for free through their free library.
  • For technical books, both The Pragmatic Bookshelf and O'Reilly offer their titles in a variety of formats without DRM.

The Bad

  • Waterstone's, WHSmith, Penguin, rbooks (Random House) and kobo will only sell you DRM-encumbered books that require Adobe Digital Editions, which of course doesn't exist for Linux. Apparently it works well under WINE, but Adobe won't offer the Windows download if you connect to their web site using Ubuntu. It also means that you are forced to use a particular piece of software, which may or may not be practical. Add to this that, apart from kobo, these web sites are not very forthcoming with that information, so you may end up buying an e-book that you can't download and only discover it after you've actually paid!
  • Amazon, quite predictably, will only sell you books for the Kindle and won't let you download the file: you have to use either a Kindle device or the Kindle software, which only runs on Mac OS X or Windows and won't work with the Sony reader anyway.

The Ugly

  • Foyles and Books, etc. have a web site that seems to be able to show me e-books or fiction books but not fiction e-books so I was unable to find what I wanted and eventually gave up.
  • Blackwell's seem to have a very limited list of titles available as e-books, and I failed to find any information on the e-book formats they offer and whether they are DRM-encumbered, so I gave up.

It looks like the publishing industry is following blindly in the footsteps of the music industry, leaving very few options for Linux users to buy e-books legally. And as usual it's not a question of hardware support, it's all about restricting what customers can do with the media they purchase. Surely, there must be a better solution than this mess?

Update 1

I originally thought that The Book Depository didn't offer e-books but in fact they do and, like a number of others, only sell DRM-encumbered books, require you to use Adobe Digital Editions and don't tell you until you've actually bought the book. So that's one more candidate for the bad list above.

Update 2

I mistakenly said above that The Book Depository didn't warn you about the requirement for Adobe Digital Editions. In fact they do, just not in a place where I was expecting it so I didn't notice it. As a result they won't refund you if you mistakenly buy an e-book that you can't download.

WHSmith also mention that you need Adobe Digital Editions. However, they do that at the very bottom of the book's page below adverts for other e-books and customer reviews, so not quite as prominently as you would expect. They won't refund you either if you make a mistake.

rBooks will refund you if you ask them politely.

Update 3

Waterstone's will refund you too if you ask politely.

Sunday, 5 September 2010

Sound Issue in Ubuntu 10.10 Beta

This morning, when logging in to my newly-upgraded-to-Ubuntu-10.10 laptop, sound was not working. The solution turned out to be very simple: my user was not authorised to use audio devices. I don't know why that privilege was disabled, as sound had always worked fine before, but it's very easy to fix, so if you have the same problem check this first. Go to System → Administration → Users and Groups, select your user, click on the Advanced Settings button, enter your password, click the User Privileges tab and make sure the Use audio devices box is ticked. While you're at it, do the same for the other users on your system.

User Settings dialogue

If that doesn't work, there is a handy wiki page on debugging sound problems.
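You can also do the same check from a terminal. As far as I can tell (so treat this as an assumption), the Use audio devices privilege corresponds to membership of the audio group:

```shell
# Check whether the current user is in the "audio" group, which is
# (assumption) the group behind the "Use audio devices" privilege.
user=${USER:-$(id -un)}
if id -nG "$user" | grep -qw audio; then
    echo "$user is already authorised to use audio devices"
else
    echo "$user is not in the audio group; add them with:"
    echo "  sudo adduser $user audio"
fi
```

As with the dialogue, the change only takes effect after logging out and back in.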

Saturday, 4 September 2010

Memory Usage Graphs with ps and Gnuplot

When developing the import from F-Spot feature in Shotwell, a user who tested the patch found out that there was a bit of a memory leak. After finding the cause, I produced a patch to fix it but I also wanted to identify what the difference was between the development trunk and the patch. So here's how I did it.

Gathering Data

The first step was to gather relevant memory usage data. For this I needed a repeatable test that I could perform with both the trunk build and the patched build. As I had a test F-Spot database, that proved quite straightforward:

  1. Delete the Shotwell database,
  2. Build the trunk version,
  3. Import the test F-Spot database using the trunk build,
  4. Delete the Shotwell database again,
  5. Build the patched version,
  6. Import the test F-Spot database using the patched build.

With the test process sorted, I needed to gather memory data during steps 3 and 6. That's easily done using the ps command in a loop and sending the output to files. So, for the trunk build, I just started this command in a terminal before starting Shotwell and stopped it once finished:

$ while true; do
ps -C shotwell -o pid=,%mem=,vsz= >> mem-trunk.log
sleep 1
done

The one for the patched version is virtually the same:

$ while true; do
ps -C shotwell -o pid=,%mem=,vsz= >> mem-patch.log
sleep 1
done

Note that the equals sign (=) after each field specification tells ps not to output the column header. So at the end of this, you end up with two files that each contain 3 columns of data: the PID, the percentage of memory used and the total virtual memory used by the process, at intervals of one second. In both cases, I let Shotwell run idle for a few seconds at the end of the import before closing it, to ensure that everything had stabilised.
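As a quick sanity check on the collected data, a one-liner can pull the peak VSZ out of a log (column 3, as set up above). The demo data here is made up so the sketch is runnable; point it at your real log instead:

```shell
# Demo data in the same shape as the ps output (pid, %mem, vsz);
# replace with your real mem-trunk.log.
printf '1234 1.0 5000\n1234 1.2 7000\n1234 1.1 6500\n' > mem-demo.log

# Print the highest VSZ value (third column) recorded in the log.
awk 'NR == 1 || $3 + 0 > max { max = $3 + 0 } END { print max }' mem-demo.log
```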

Next, I checked how many lines of data I had in each file:

$ wc -l mem-*.log

And truncated the longer file to the length of the shorter one, just to make sure I had the same number of data points for both.
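The truncation itself can be done with head. A sketch, assuming for the sake of the example that mem-trunk.log is the longer file (swap the names if not); demo logs are created so the snippet is runnable as-is:

```shell
# Demo logs standing in for the real ones (mem-trunk.log is longer).
printf '1\n2\n3\n4\n5\n' > mem-trunk.log
printf '1\n2\n3\n' > mem-patch.log

# Trim the longer log to the line count of the shorter one so both
# series have the same number of samples.
n=$(wc -l < mem-patch.log)
head -n "$n" mem-trunk.log > mem-trunk.log.tmp
mv mem-trunk.log.tmp mem-trunk.log

wc -l mem-trunk.log mem-patch.log   # both logs now have 3 lines
```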

Creating the Graph

After that, I wanted to create a single graph that included four lines: VSZ and %MEM for both trunk and patch. And I wanted to output the result to a PNG file. Gnuplot can do all of this; you just need to know how to set its myriad options. So here's the Gnuplot script in detail.

set term png small size 800,600
set output "mem-graph.png"

Gnuplot works with the concept of terminals. So the first line tells it to use the special terminal called png with a small font and a size of 800 pixels wide by 600 pixels high. The second line is fairly self-explanatory: output to the given file.

set ylabel "VSZ"
set y2label "%MEM"

The two sets of values I am interested in have very different ranges. VSZ is a number of bytes and will have values in the hundreds of thousands if not millions, while %MEM is a percentage so will have a value somewhere between 0 and 100. So to make sure that both types of graphs fit in the output, I will use the ability that Gnuplot has to use left and right Y axes with different ranges: VSZ will go on the default Y axis (left, called y), while %MEM will go on the other one (right, called y2). So I set the labels for both.

set ytics nomirror
set y2tics nomirror in

As the right Y axis is not used by default, I need to enable it and set where the tics go. To do that, I first disable the mirror option on the left Y axis, then enable tics on the right Y axis, telling Gnuplot to draw them pointing inwards (in).

set yrange [0:*]
set y2range [0:*]

The last piece of setup is to customise the range on both axes. By default, Gnuplot will adjust the range so that there is as little white space as possible above or below the graph. But in this case, I want both sets of graphs to start at zero so that I can have a better idea of total memory used.

The next bit is quite long so I will start by explaining the instruction for a single graph before bringing all four together.

plot "mem-trunk.log" using 3 with lines axes x1y1 title "Trunk VSZ"

In the line above, I tell Gnuplot to take its data from the third column in the file called mem-trunk.log. The with lines section specifies that I want a line graph. The axes x1y1 specifies that I want it to be drawn against the first X axis and the first Y axis (the default, but here for completeness). And the last bit specifies what I want the title for this graph to be. Then it's just a case of plotting all four graphs in a single plot command separated by commas. So here's the full script:

set term png small size 800,600
set output "mem-graph.png"
set ylabel "VSZ"
set y2label "%MEM"
set ytics nomirror
set y2tics nomirror in
set yrange [0:*]
set y2range [0:*]
plot "mem-trunk.log" using 3 with lines axes x1y1 title "Trunk VSZ", \
     "mem-patch.log" using 3 with lines axes x1y1 title "Patch VSZ", \
     "mem-trunk.log" using 2 with lines axes x1y2 title "Trunk %MEM", \
     "mem-patch.log" using 2 with lines axes x1y2 title "Patch %MEM"

Make sure that there is absolutely no white space between the backslash characters and the end of lines in the plot command otherwise Gnuplot will complain. Save the script to a file called mem.gnuplot and run it:

$ gnuplot mem.gnuplot

And here is the output I got, which shows the improvement in memory usage between trunk and patch:

Memory usage graph

Ubuntu 10.10 Beta First Impressions

Ubuntu released the first Maverick beta a couple of days ago. As I had some time on my hands today (including the time to re-install Lucid if it all went pear-shaped), I decided to upgrade my ThinkPad T42, so, as instructed, I typed this in a terminal:

update-manager -d

And here's how it went.

The Good

  • The upgrade took a few hours, was extremely smooth and just worked.
  • Everything that I've tried so far just works out of the box, no regressions (apart from a small glitch, see below).
  • I thought Ubuntu 10.04 was fast but 10.10 is even faster! Firefox and Evolution in particular feel snappier.
  • The new keyboard layout indicator is bigger and clearer.
  • Shotwell replaces F-Spot.

The Bad

The Ugly

  • The default background really doesn't look good so the first thing I did was change to a different background image.

All in all, an excellent upgrade!

Sunday, 15 August 2010

Contributing to Shotwell

Background

I use open source software every day on all the computers I own. In fact, outside of work environments where Windows is still predominant, very little of the software I use is closed source these days. As a result, I've wanted to contribute back to the community that has developed such great software by fixing bugs and implementing new features. However, I've found it a lot more difficult than I expected. To make any meaningful contribution, the project I would contribute to had to be a piece of software that I used regularly. But when I look at the software I use regularly, it is either written in C, a language that I haven't programmed in for 20 years (assuming the stuff I did at university actually counts), or extremely complex, or both.

Then, a few months ago, I got wind that one of the planned changes for Ubuntu 10.10 (Maverick Meerkat) was to replace the default photo management software: F-Spot was out, Shotwell was in. Photo management has been a sore spot of mine for ages on Ubuntu. None of the software available until now actually met my needs, so I ended up not using any photo management software and doing everything through the file manager. Looking into it, Shotwell seemed like the ideal opportunity for me to get a photo manager I liked. Shotwell is written in Vala, a language I found intriguing and which I thought should be easier than C to get into.

Baby Steps

So I downloaded the source from the SVN repository, built the software and lo and behold, the build was actually easy and 10 minutes later I had a local version of Shotwell to play with. I then had a look at the source code and found it very easy to understand. So I thought: let's see if I can fix an easy bug!

For that experiment, I chose a bug that looked extremely easy: bug 1954 looked like the ideal candidate because it merely consisted of adding a line of information to an existing window, in exactly the same way as the other lines already present in that window. And indeed, the fix was trivial. So I created a patch and uploaded it to the bug tracker.

What happened next was very important and is something that all community projects should pay attention to: I got feedback on my patch within a day, basically saying that it had been committed to trunk with a minor modification. This is something the Yorba team, who produce Shotwell, are very good at: all contributors are welcome and any contribution, such as a suggestion or a patch, is acknowledged very quickly. From a contributor's point of view, it means that you know you can really participate and help the core development team improve their software.

More Complex Bugs

Quite chuffed by the outcome of my first bug fix, I decided to use Shotwell extensively and try solving more complex bugs. I eventually hit a significant issue with my SLR camera, which I duly reported in the bug tracker and which I decided to attempt to solve. It took a bit of time but I eventually got there and as a side effect actually solved another bug: result!

A Whole New Feature

By now completely sold on Shotwell, I wanted to do my bit so that it would be well received by Ubuntu users. In my mind, the most important thing was to ensure a smooth transition from F-Spot to Shotwell and indeed the migration was one of the major requested features in Launchpad and very high on the Yorba list too.

Implementing such a major feature can feel a bit daunting at first but, at the end of the day, integrating and migrating from weird and wonderful databases is something I get involved in on a regular basis in my day job so bringing that experience to Shotwell sounded like the way to go. And to be honest, the F-Spot database is extremely simple compared to some of the horrors I've faced recently.

It took a bit of time and I learnt a lot about Vala and the GObject library in the process. Thanks to the support of Jim and Adam from Yorba, I got there in the end and finished the bulk of it in the middle of GUADEC. A few extra tweaks were required but it's now in fairly good shape, ready to face the hordes of new users that Ubuntu will bring it.

There you go, that was my first major contribution to an open source project and I'm now hooked so will endeavour to contribute more.

Saturday, 7 August 2010

Flashing the N900

Introduction

The Nokia N900 is able to upgrade its operating system over the air, which is the best way to upgrade it, especially if you have a fast Internet connection. This is also very practical when you're not running Windows, because the Nokia suite only exists on that platform. This is exactly what I had been doing since I bought my N900, except that a few months ago I declined an update for a reason I have now forgotten, probably because I didn't have the time. Since then, it has failed to propose any new upgrade. The only solution was to flash it, but I had no idea how to do that on Linux. Luckily, I found a very handy guide on the subject which includes a version for Linux. Unfortunately, this guide is light on details and not fully correct, so here's the fix.

Warning

Note that flashing the device rather than upgrading it over the air will remove all software you previously installed, as well as some of your settings. It will not delete your data, as it only flashes the root partition; your address book, for instance, is safe. But I would still strongly advise that you back up your device before going any further. The N900 ships with a backup utility, so please use it and copy the backup file to your computer. If you fail to do so and you lose all your data, you're on your own to recover it.

Flashing the Device

As the guide explains, download the Linux Flasher Tool and the latest available N900 firmware image. You will need to enter your device ID for the image and in both cases you will need to accept Nokia's license agreement.

If you are using Ubuntu, I suggest you download the .deb version of the flasher tool: maemo_flasher-3.5_2.5.2.2_i386.deb, then double-click on the downloaded file and the package installer will open. Install the package. The flasher tool will then be installed to /usr/bin, not in the current directory, as suggested by the article.

For the image, ignore all the red warnings on the page next to the eMMC image links and download the latest combined image for your region, in my case RX-51_2009SE_10.2010.19-1.203.1_PR_COMBINED_203_ARM.bin.

Go through steps 3 and 4 as explained in the guide: turn the phone off and connect it to your computer via the USB cable.

Now comes the glitch in the guide: the actual command to flash the image needs to specify where to find the image in question:

$ sudo flasher-3.5 -F /path/to/image.bin -f -R

When you run the tool, you will get quite a bit of output when it validates the image. It will eventually show the following message:

Suitable USB device not found, waiting.

And the tool will hang. Despite what the guide says, this message is not an error; ominous as it sounds, it's the flasher tool's way of telling you to switch the device on so that it can find it. So switch the phone on. As soon as you do, the flasher tool will recognise it and start loading the image. It will provide on-screen feedback while it does so and the N900 will display a small progress bar at the bottom of the Nokia boot image. Once the tool has finished, the phone will take a lot longer than usual to start up. Don't try to hurry things along, just wait until it is fully booted. Then, and only then, disconnect the phone from the computer.

As mentioned above, you then need to re-install the software you had previously installed on the device. Luckily the OVI application store remembers what you previously paid for and downloaded so you should be able to just re-download software without having to pay for it again.

Sunday, 1 August 2010

GUADEC 2010

I was at GUADEC this week so here is a summary of what I took out of it.

The Location

GUADEC was held in The Hague this year, hosted by De Haagse Hogeschool. The Hague is a nice medium-sized town where, in typical Dutch fashion, you go everywhere by bicycle or tram. Public transport is very efficient, fast and always on time. In short, the only similarity with London is that the weather was cold and cloudy most of the time we were there. De Haagse Hogeschool is a great space for such a conference, with good rooms for the talks and a great space to mingle with people during breaks. However, the restaurants around the venue were clearly not used to having that many customers and I missed the afternoon keynote talks on both Thursday and Friday when lunch took longer than expected. On the other hand, lunchtime was always spent with a bunch of very interesting people, so no harm done there.

The Talks

As there were different streams running in parallel, I didn't go to all the talks. Also, I may have completely missed the main point of any talk so what follows is my take on things. It may be biased, incomplete or outright wrong due to me misunderstanding something or other.

GNOME, the Web and Freedom, by Luis Villa

Luis is a lawyer at Mozilla and a geek. His premise is that since the first GUADEC in 2000, a universal platform built on open source software has appeared on every computer on the planet, and it's a platform on which Microsoft is actually struggling to keep up with the competition. It's not GNOME, it's the web. And it's not a fad, it's here to stay (despite what the artist formerly known as Prince might say).

Luis suggests that we should strive to make GNOME Shell the best shell for web applications. All GNOME applications should embrace the web and integrate with it. And when you write a new application, you should write it in HTML and JavaScript first.

I completely agree with the idea of integrating the web into the desktop. A photo management application for instance should be able to seamlessly integrate with services like flickr so that you don't have to leave your desktop to upload photos to flickr or to subscribe to someone's photo stream. In practice, this is what The Web Standards Project have been pushing for some time in the form of the semantic web.

I disagree with the idea of writing all applications in HTML and JavaScript, because that would significantly reduce developers' freedom and flexibility in writing those applications. Also, HTML and JavaScript are just tools: well suited to some applications, but not to all of them.

Finally, there is one aspect which I believe Luis is very aware of: I want to know where my data is hosted and be able to make sure that some of my data is exclusively hosted on my own machine, not somewhere in the cloud. That sort of issue will need to be addressed if the desktop platform we use is to integrate more closely with the web.

Who Makes GNOME? by Dave Neary

This talk sparked a bit of controversy. Dave presented the results of the GNOME Census report. This report produced some statistics on the people who contribute to GNOME in the form of commits to the repositories. The idea was to understand better who contributes to GNOME and how much of it is contributed by paid developers.

Obviously, the fact that Red Hat accounts for 16.3% of the total while Canonical accounts for just over 1% sparked controversy, with people saying that this was proof that Canonical didn't contribute. We're talking about statistics here, so as usual they don't tell the whole story, and I feel it is very dangerous to draw any conclusion from them. Dave is very aware of this and took great care to include caveats in his presentation. That didn't prove enough to prevent the controversy, though. So here's my take on which aspects of the statistics show that they should be taken with a pinch of salt:

  • The top 2 contributor categories are Volunteer and Unknown and between them represent more than 40% of the contributions. Those people may have become contributors for a variety of reasons and some of them may be from Canonical or Red Hat or any other company, or may have become individual contributors as a result of using Fedora or Ubuntu, we just don't know.
  • There are 11 companies between 1 and 3% (ranks 6 to 16). A small change in the number of contributions could change the list dramatically.
  • Counting commit events only tells part of the story and focuses exclusively on the development effort. From my experience, in order to deliver projects, packaging, testing and bug management are as important as development. And so are specifications, usability studies and artwork. As a community, in order to deliver the best operating system possible, we need all of the above. Ubuntu (and by extension Canonical) has been very successful in addressing the non-development aspects of delivering a great operating system based on great open source software. And I'd rather Canonical concentrated on working on the areas where the community is lacking resources and experience than add more resources to the bits that are already well covered.

State of the GNOME 3 Shell, by Owen Taylor

I have very few notes on this talk, it basically amounts to: some stuff works, some doesn't and it's still a work in progress.

Shell Yes! Deep Inside the GNOME Shell Design, by William John McCann and Jeremy Perry

Another talk where my notes are of less than stellar quality. The idea is that the GNOME Shell developers are here to make GNOME great. And indeed, the demos look fantastic and are very encouraging. A lot of thought has gone into the interface and I think that, for the most part, it will make application management a lot smoother from a user's point of view. On the other hand, some bits and bobs look like the same old things re-hashed, in particular how notifications are handled. I'm not sure how Ubuntu will integrate that, as they seem to go the opposite way from what Ubuntu has been doing in this area. Time will tell.

GNOME State of the Union, by Fernando Herrera and Xan Lopez

What more to say than absolute genius? There's no point in me telling you what happened, you needed to be there.

Clutter State of the Union, by Emmanuele Bassi

Clutter has a new logo and a new web site. Very interesting is the introduction of ClutterState to provide declarative state changes and animations. The latest implementation also makes use of the GPU if available. Very, very cool technology and it's what makes MeeGo rock!

So you think you can release? by Emmanuele Bassi

This was a very interesting talk and it made a lot of excellent points in how to handle the release of library code. A lot is also applicable to end user applications and everything is equally applicable to open source and closed source code. The main ones I took out of it are:

  • Any new version of the library should not break apps written with older versions.
  • Use the 0.x cycle to refine and optimise the release mechanism.
  • Have reliable release cycles, such as time-based cycles (e.g. every 6 months), and stick to them even if it means dropping functionality from a given cycle.
  • Write performance test suites and run them.
  • Replying RTFM to questions implies that you've actually written the manual.
  • Write a coding style guide and require that patches follow it. I would add to that: keep the coding style short and don't diverge too much from what other projects do otherwise contributors will have no incentive to read it or abide by it.
  • When using the phrase patches welcome, make sure that this is really the case and that when someone contributes a patch you actually review and integrate it quickly.
  • When releasing version 1.0, commit to the smallest possible subset of your API and review it thoroughly: whatever you leave in, you will have to support for some time to come.
  • Analyse what part of your API your users use most: is it the low level API or the convenience API? If an API turns out to be wrong, don't hesitate to deprecate it.
  • Don't develop on the master branch: it needs to build at all times.
  • Resist the urge of changing an implementation too often.
  • Plan for the next minor release, but also for the next major one.

That's quite a list and I agree with all of it!

Multitouching Your Apps, by Carlos Garnacho

A very interesting talk by Carlos on how to integrate multi-touch capabilities into GNOME applications. There is a whole new API for this that basically allows you to specify the device to use, rather than assume that there is a single keyboard and mouse connected. The low-down is:

  • Each finger gets a device;
  • Devices emit individual events;
  • There is a need to gather and correlate events:
    • obtain an event packet,
    • calculate the distance and relative angle between 2 events,
    • use GtkDeviceGroup and multi-device-event signal.

If you want to play with it, you will require XOrg 7.5 and you can get the code through git:

$ git clone -b xi2-playground git://git.gnome.org/gtk+

GNOME 3 and Your Application, by Dan Winship and Colin Walters

A quick introduction on the changes brought by GNOME 3. My take on it:

  • No GNOME 2 API will be removed, so GNOME 2 applications should keep working with GNOME 3. However, you will need to use a set of new APIs to take full advantage of GNOME 3.
  • The .desktop file will be much more central to the application: it will act as a unique ID for it and will provide more features. In particular, make sure to include the StartupNotify=true option.
  • Provide high resolution icons as icons will be displayed much larger.
  • There will be less chrome in application windows.
  • No more tray icons (well, they will be done differently).
  • Notification tray at the bottom.
  • No more libnotify.
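As an illustration of the .desktop points above, a minimal launcher file with the StartupNotify flag might look like this (all names and values are made up for the example):

```ini
[Desktop Entry]
Type=Application
Name=My Photo Manager
Exec=my-photo-manager %U
Icon=my-photo-manager
StartupNotify=true
```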

Open Design Thinking Workshop, by inventedhere.de

Thanks to Andrea Scheer, Sophia Klees, Andreas Weigt, Martin Steck and Clemens Buss for a very inspiring workshop! I failed miserably at my task in the warm-up LEGO exercise. We all had a lot of fun and I think we came up with interesting concepts. If I find the time, I'd love to try to implement some of what the team I was working in came up with.

Tracker's Place in the GNOME Platform, by Martyn Russell

Tracker has evolved a lot since version 0.6, which is the one available in the Ubuntu Lucid repositories, and is a lot less resource intensive than it used to be. It provides applications with the ability to share all sorts of data and is standards based. It also has a brand new query interface in the form of tracker-sparql, which looks very powerful.

In practice the tracker database provides functionality which is very similar to what Desktop Couch provides, which is what is installed by default in Ubuntu. Tracker does not provide the replication facility but it provides more comprehensive search facilities. The other difference is that Tracker is designed to store non-application specific data (such as meta-data on an image, which should be available to any image management or manipulation program), while Desktop Couch includes application specific data such as preferences, as well as non-application specific data. I don't think the two are incompatible though and creating a Desktop Couch back-end for Tracker could be very interesting and provide a solution that is greater than the sum of its parts.

Identifying Software Projects and Translation Teams in Need, by Andre Klapper

Andre presented interesting research he did, based on code commits and bug activity, to identify projects and translation teams in GNOME that are in need of help; the results can be found online. It still leaves a lot of questions on how to help those projects, but identifying them is an essential first step.

Embracing the Web: Integrating Web Services within GNOME, by Rob Bradford

Rob presented libsocialweb, which is used in MeeGo to provide integration to web services such as Facebook and flickr. And there were demos aplenty: very entertaining! Oh and GeoClue looks really cool too.

GNOME Color Manager: Exploring the User Experience and Integration Points for a 100% Colour Managed Desktop, by Richard Hughes

For professional graphic artists, it is essential to have proper colour management throughout the complete image workflow, from the source (camera or scanner) to the output (printer or PDF file). OS-X does that to perfection, Windows 7 does it but not very well, Linux doesn't do it at all. Richard proposes a set of tools and libraries to change this and make the GNOME (and possibly KDE) desktop fully colour managed. Great stuff!

Cairo: 2D in a 3D World, by Christopher Paul Wilson

Maybe it was burnout from 3 days of conference, but I found this talk the least interesting of all. All I took from it was that there are a lot of reasons why Cairo hasn't released a new version in a long time; that if Cairo is slow, it's because the drivers are broken or developers are not using it properly; and that it's all going to get better very soon now that they are moving to a 6-month release cycle.

Anyway, moving on swiftly...

Growing Communities with Launchpad: Ubuntu and GNOME, by Danilo Segan

Danilo's talk was all about how to use Launchpad better to ensure that we can get related communities (such as Ubuntu and GNOME) to work together better. It concentrated particularly on bugs and translations. I think that, as a result of a question I asked, I volunteered myself to contribute code to Launchpad in order to simplify forwarding bugs upstream.

The Parties

Nearly as important as the talks were the parties: it was the opportunity to meet people I'd only ever heard of or talked to via email or IRC.

Canonical Party

The indefatigable Jorge Castro made sure that the party was a success. The music was a bit loud at times but considering it was held in a night-club, it was to be expected. It also proved that the best DJ in the world would find it hard to get a group of geeks to dance.

Collabora Party

The beach party organised by Collabora was great: less loud music than at the Canonical one and a nice barbecue. It would have been perfect if the weather hadn't been cold and windy. We were also initiated into the correct Dutch etiquette regarding the use of satay sauce at barbecues: just drown everything in the stuff.

GUADEC After-party at Revelation Space

This hacker space is just great and it was a nice way to wind down.

Shotwell Hacking

It was a pleasure to meet Adam, Jim, Rob and Lucas from Yorba while at GUADEC. We talked a lot about Shotwell and with Jim's help I finalised the code to migrate photos from F-Spot, which Jim committed in trunk. There are a couple of follow-up bugs that were reported by early testers but the bulk of the functionality made it in before the string freeze and the rest should be sorted in time for the release of Shotwell 0.7, which will then make it into Ubuntu 10.10.

Next Year

Next year, GUADEC and Akademy will join forces for a desktop summit in Berlin.

Friday, 23 July 2010

Converting OpenOffice Documents in Bulk

I had a request from a customer earlier this week: they wanted a copy of all the diagrams that are present in a specification I've been writing for them. All those diagrams are in separate OpenOffice Draw (.odg) files. They don't use OpenOffice but were happy to have PDF versions. The only problem is that there are 42 of them so it would take ages to convert them manually. A quick Google later and I found a way to convert documents from the command line.

So, to adapt it to Ubuntu, download the DocumentConverter.py file mentioned in the post above and store it where your documents are. Then, start OpenOffice in headless mode:

$ ooffice -headless -accept="socket,port=8100;urp;" &

As the version of Python installed on Ubuntu already includes the UNO bindings, there is no need to use a special OpenOffice version of Python to do the job, the standard one will do. The script takes two arguments, the input file and the output file, and works out the formats based on the extensions. Doing the bulk convert is therefore extremely easy:

$ for f in *.odg; do
> echo "$f"
> python DocumentConverter.py "$f" "${f%.*}.pdf"
> done

Job done! It took 5 minutes for my laptop to convert all 42 files, during which time I made coffee rather than repetitively clicking on UI buttons, and I even had enough time left to blog about it. Oh, and I have a happy customer: that's the most important part.

Saturday, 29 May 2010

Installing OpenERP on Ubuntu 10.04 LTS

Background

It's that time of the year again. I need to prepare all the information required by my accountant to issue my company accounts. Luckily I have most of it saved in a handy directory on my laptop, with a backup copy on the server. However, it still takes a lot of time that I'd rather be spending doing something else. Accounts are not the only process that I'd like to improve: sales, expenses, invoicing, everything that is not part of my daily job is done in an ad-hoc way. This is exactly what ERP systems are designed to address. And as usual, there is an open source solution out there, one that is even fully supported on Ubuntu: OpenERP. Let's install it then!

Before we start, OpenERP is a client-server solution and as such there are two components to consider:

OpenERP Server
This is a central component that is deployed on a central server and is responsible for managing the company database. You should have a single instance of it for the whole company.
OpenERP Client
This is a desktop application that enables users to view and update the data held in the server. It should be installed on every computer that needs access to the ERP system.

In addition, OpenERP also offers another server component called OpenERP Web. This component is meant to be deployed on the same server as OpenERP Server and offers a web interface to the ERP system, meaning that you can access the system using a vanilla web browser, rather than the dedicated client application. This is great if you have a large number of users and don't want to deploy the dedicated client everywhere. I will ignore this component for today and only go through the installation of the server and dedicated client.

OpenERP Server

To install the server component, you need a computer that will act as a server. You could potentially install it on your desktop or laptop if you are sure that you will only ever be the single user of the application but I wouldn't recommend it. In my case, I decided to install it on my existing home server that runs Ubuntu Server 10.04 LTS.

Install PostgreSQL

OpenERP needs a database engine and is designed to run with PostgreSQL so we need to install this first. To do this, connect to the server and just install the relevant package, as detailed in the OpenERP manual:

$ sudo apt-get install postgresql
Create an openerp user in PostgreSQL

The server will need a dedicated user in the database so we need to create it. To do this, we first need to start a session under the identity of the postgres Linux user. We can then create the database user that we will name openerp. When prompted, enter a password and make sure you remember it. There is no need for that user to be an administrator so we answer n when asked that question. Then close the postgres session to come back to your standard Linux user.

$ sudo su - postgres
$ createuser --createdb --username postgres --no-createrole \
 --pwprompt openerp
Enter password for new role: 
Enter it again: 
Shall the new role be a superuser? (y/n) n
$ exit

Now that this new user is created, let's try to connect to the database engine using it:

$ psql -U openerp -W
psql: FATAL:  Ident authentication failed for user "openerp"

It doesn't work. This is because PostgreSQL uses IDENT-based authentication rather than password-based authentication. To solve this, edit the client authentication configuration file:

$ sudo vi /etc/postgresql/8.4/main/pg_hba.conf

Find the line that reads:

local all all ident

Replace the word ident with md5:

local all all md5

And restart the database server:

$ sudo /etc/init.d/postgresql-8.4 restart

Let's try again:

$ psql -U openerp -W
psql: FATAL:  database "openerp" does not exist

OK, that's a different error, which is due to the fact that PostgreSQL tries to connect to a database that has the same name as the user if none is specified. So let's try to connect to the postgres database, which contains system information and is always present.

$ psql -d postgres -U openerp -W
psql (8.4.4)
Type "help" for help.
postgres=>

That works so we know that the user has been successfully created and there should be no problem connecting with that user identity.

Install OpenERP Server

OpenERP Server is part of the Ubuntu repositories so it's extremely easy to install:

$ sudo apt-get install openerp-server

Now is a good time for a coffee break. On my installation, the openerp-server package triggers the installation of no less than 108 packages in 48.9MB of archives that will require an additional 251MB of disk space. This may take some time. Most of the additional packages are Python packages, which is sensible because OpenERP is written in Python but the fact that it also requires things like xterm and bits of GTK makes me think that the dependency list could be pruned somewhat. At the end of the installation, a message appears saying that you should go and read the file called /usr/share/doc/openerp-server/README.Debian. Go and do this. It mainly explains the PostgreSQL installation that we just did but it also mentions a bit of useful information, namely that the OpenERP Server configuration file is /etc/openerp-server.conf. That's quite handy because we need to update it.

$ sudo vi /etc/openerp-server.conf

If you are installing the server and client on different machines, which I would recommend, you need to find the line that says:

interface = localhost

And replace the word localhost with the IP address of your server. If you don't know the IP address of the server, just run ifconfig with no parameters and look for the words inet addr: at the beginning of the second line of output: the IP address is the set of four numbers separated by dots that comes just after that. You then need to modify another two lines:

dbpassword = the password you chose for the openerp user
dbhost = localhost
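
As an aside, the ifconfig lookup described above is easy to script. Here is a sketch with a helper name of my own invention, assuming the classic "inet addr:" output format that ifconfig produced at the time:

```shell
# Hypothetical helper (the name is mine): read ifconfig-style output on
# stdin and print any IPv4 address that follows "inet addr:".
extract_ip() {
  sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p'
}

# On the server you would run:  ifconfig eth0 | extract_ip
# Demonstrated here on a captured line of output:
echo "          inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0" | extract_ip
# prints 192.168.1.10
```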

In theory, you shouldn't have to specify dbhost = localhost because leaving that entry blank should default to localhost, but when I tried this, OpenERP Server could not connect to PostgreSQL. It's now time to restart the server process but, before we do, it's always good to be able to follow the logs in a second window in case something goes wrong. The location of the log file is helpfully specified in the configuration file that we just edited. Open another terminal, connect to the server and run the following command:

$ tail -f /var/log/openerp-server.log

The last 10 lines of log will appear and every time a new entry is added to the log file, it will also appear in this window. In the first window, we can restart the server:

$ sudo /etc/init.d/openerp-server restart

If all goes well, no nasty error message should appear in the log window.

OpenERP Client

Now, to install the client software on a desktop, connect to the desktop, open a terminal and install the package:

$ sudo apt-get install openerp-client

That's it. Once finished, you will find an OpenERP Client entry in the Applications -> Internet menu. Click on it to open the client. If you want to fill in the feedback form, do so, otherwise click Cancel. You then need to create a new database. To do this, go to the File -> Databases -> New Database menu:

New Database menu

This will open a dialogue where you can enter the details of the new database.

New Database dialogue

You will need to click on the Change button at the top to specify the name or IP address of the server. The port should be the default, 8070. Of course, if you have a firewall between the client and the server, the firewall configuration will need to be updated to allow traffic to the server through this port. The default password for the super administrator is admin as specified in the dialogue. Note that the database name cannot contain spaces, dashes or any non-alphanumeric characters.
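
If you ever script database creation, that naming rule is worth checking up front. Here's a small sketch, with a function name of my own choosing, that enforces letters, digits and underscores only:

```shell
# Hypothetical helper: accept only database names made of ASCII letters,
# digits and underscores, mirroring the New Database dialogue's rule.
valid_dbname() {
  case $1 in
    ''|*[!A-Za-z0-9_]*) return 1 ;;  # empty, or contains a forbidden character
    *) return 0 ;;
  esac
}

valid_dbname my_company && echo "my_company: ok"
valid_dbname "my-company" || echo "my-company: rejected"  # dash not allowed
```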

So that's OpenERP installed. It's now time to read the rest of the documentation and get a copy of Accounting for Dummies to understand what it's all about.

Thursday, 29 April 2010

Shared Folders in Ubuntu with setgid and ACL

Introduction

There is an often-requested feature on Linux (or UNIX): the ability to create shared directories similar to what is possible in Windows, that is, a directory in which every person who has been given access can read, write or modify files. However, because Linux file systems such as ext4 enforce file permissions that are stricter than any of the Windows file systems such as FAT or NTFS, creating such a directory is not obvious. Of course, if you put your shared directory on a FAT or NTFS partition, it will automatically behave just like in Windows, but that requires a separate partition and doesn't allow you to enforce permissions on a per-group basis. So here's a quick guide on how to do this with Ubuntu. The same principles apply to other Linux distributions, so the approach should be portable.

Use Cases

Let's go through a couple of classic use cases first, to identify exactly what we want to do.

Project Folder

In a company or university setting where users are assigned to project teams or departments, it can be useful to create shared folders where all members of the team can drop files that are useful for the whole team. They need to be able to create, update, delete files, all in the same folder. They also need to be able to read, update or delete files created by other members of the team. However, users external to the team should only have read access.

Web Development

For anybody doing web development on Linux, a classic problem is when you have to deal with development or test web servers. The default web server process runs with the www-data user and the document directory is owned by the same user. It would be great if all web developers on the team were able to update the document directory on the server while not requiring root access to do so.

Linux Default Behaviour

Linux has the concept of user groups. You can check what groups your user belongs to by typing the following on the command line:

$ groups
bruno adm dialout cdrom plugdev lpadmin admin sambashare

On a default Linux installation, groups are used to give different users access to specific features, such as the ability to administer the system or use the CD-ROM drive. But one of the core features of user groups is to support file permissions. Each file has separate sets of read, write and execute permissions for the user who owns the file, for the group that owns the file, and for others, that is, everybody else. Whenever a user attempts to read, write or execute a file, the system decides whether they can do it based on the following rules:

  • if the user is the owner of the file, user permissions apply,
  • otherwise, if the user is part of the group that owns the file, group permissions apply,
  • otherwise, others permissions apply.
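
These rules amount to picking exactly one octal digit out of the file's mode: the user digit, the group digit or the others digit, in that order of precedence. As a sketch (the helper name is my own), extracting each digit looks like this:

```shell
# Hypothetical helper: given an octal mode and a class, print the three-bit
# permission value that class gets (4=read, 2=write, 1=execute).
effective_bits() {
  case $2 in
    user)  echo $((($1 >> 6) & 7)) ;;
    group) echo $((($1 >> 3) & 7)) ;;
    other) echo $(( $1        & 7)) ;;
  esac
}

effective_bits 0754 user    # prints 7 (rwx): the owner gets everything
effective_bits 0754 group   # prints 5 (r-x): group members can read and execute
effective_bits 0754 other   # prints 4 (r--): everybody else can only read
```

Note that only one set ever applies: the owner gets the user bits even if the group or others bits happen to be wider.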

So to configure a shared directory as defined above, we need to:

  • create a user group for the team,
  • assign all team member users to that user group,
  • create a directory and configure it so that all users in the group can:
    • add new files to the directory,
    • modify any existing file in the directory,
  • and of course, all this should work without users having to do anything special.

How To

Enable ACL

The first thing we need to do is to enable ACL support on the partition where we will create the shared directory. ACLs extend the basic Linux permission implementation to enable more fine-grained control. As this requires the file system to store more permission meta-data against files, it needs to be configured accordingly. We can do this by adding the acl option to the relevant line in /etc/fstab, such as:

UUID=b8c490d0-0547-4e1f-b052-7130bacfd936 /home ext4 defaults,acl 0 2

The partition then needs to be re-mounted. If the partition to re-mount is /, /usr or /home, you will probably need to restart the machine. Otherwise, the following commands should re-mount the partition:

$ sudo umount partition
$ sudo mount partition

where partition is the mount point of the partition as defined in /etc/fstab, such as /var/www.

Create Group

We then need to create the group to which we will give shared access, let's call that group teamgroup:

$ sudo groupadd teamgroup

Try to give the group a meaningful name while keeping it short. If it's meant to be a team group, give it the name of the team, such as marketing. Note the following restrictions on Debian and Ubuntu for group names (taken from the man page):

It is usually recommended to only use groupnames that begin with a lower case letter or an underscore, followed by lower case letters, digits, underscores, or dashes. They can end with a dollar sign. In regular expression terms: [a-z_][a-z0-9_-]*[$]?

On Debian, the only constraints are that groupnames must neither start with a dash (-) nor contain a colon (:) or a whitespace (space: , end of line: \n, tabulation: \t, etc.).

Groupnames may only be up to 32 characters long.
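
The recommended pattern quoted above is easy to check from a script. A sketch, with a helper name of my own choosing:

```shell
# Hypothetical helper: check a proposed group name against the recommended
# pattern [a-z_][a-z0-9_-]*[$]? and the 32-character limit.
valid_groupname() {
  [ ${#1} -le 32 ] && printf '%s\n' "$1" | grep -Eq '^[a-z_][a-z0-9_-]*\$?$'
}

valid_groupname marketing && echo "marketing: ok"
valid_groupname 9sales || echo "9sales: rejected (must not start with a digit)"
```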

We then need to assign users to that group:

$ sudo usermod -a -G teamgroup teamuser

Where teamuser is the login name of the user to assign to the group. This assignment will take effect the next time the user logs in. Make sure that you do not forget the -a option, otherwise you will wipe out all existing group assignments for that user rather than just adding a new one.

Create the Folder

The next step is to create the shared folder. This is easy:

$ cd /path/to/parent
$ mkdir teamfolder

Where /path/to/parent is the path to the parent folder and teamfolder is the name of the folder you want to create. We then assign group ownership of the folder to the group previously created:

$ chgrp teamgroup teamfolder

And give write access to the group on that folder:

$ chmod g+w teamfolder

Let's check what this folder looks like:

$ ls -l
drwxrwxr-x 2 teamuser teamgroup 4096 2010-03-03 14:32 teamfolder

Now, let's try to create a new file in that directory:

$ touch teamfolder/test1
$ ls -l teamfolder
-rw-r--r--  1 teamuser teamuser 5129 2010-03-03 14:34 test1

That looks good and any other user who is part of teamgroup should be able to create files in this directory. However, group members will not be able to update files created by other members of the group for the following reasons:

  • the group that owns the file is the user's primary group, rather than teamgroup,
  • the file's permissions only allow the owner of the file to update it, not the group.
Set the setgid Bit

We'll solve the first problem by setting the setgid bit on the folder. Setting this permission means that all files created in the folder will inherit the group of the folder rather than the primary group of the user who creates the file.

$ chmod g+s teamfolder
$ ls -l
drwxrwsr-x 2 teamuser teamgroup 4096 2010-03-03 14:32 teamfolder

Note the s in the group permissions instead of the x that was there previously. So now let's try to create another test file.

$ touch teamfolder/test2
$ ls -l teamfolder
-rw-r--r--  1 teamuser teamuser  5129 2010-03-03 14:34 test1
-rw-r--r--  1 teamuser teamgroup 5129 2010-03-03 14:35 test2

So now whenever a file is created in the team directory, it inherits the team's group.

Set Default ACL

The second issue is related to umask, the default mask applied when creating files and directories. By default umask is set to the octal value 0022, as demonstrated if you run the following:

$ umask
0022

This is a negative mask that is applied to the octal permission value of every file or directory created by the user. By default, a file is created with permissions rw-rw-rw-, equivalent to 0666 in octal, and a directory is created with permissions rwxrwxrwx, equivalent to 0777 in octal. The bits set in umask are then cleared from that default (which, for the usual values, amounts to subtracting it) to give the effective permissions with which files and directories are created. So for a file, 0666-0022 gives 0644, equivalent to rw-r--r--, and for a directory, 0777-0022 gives 0755, equivalent to rwxr-xr-x. This default is sensible for most situations but needs to be overridden for a team directory. The way to do this is to assign specific ACL entries to the team directory. The first thing to do is to install the acl package to obtain the necessary command line tools. Well, in fact, the first thing to do would be to enable acl on the relevant partition, but we already did that at the very beginning.

$ sudo apt-get install acl
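
While that installs, the umask arithmetic described above is easy to verify in the shell (the AND-with-complement below is the precise form of the "subtraction"):

```shell
# Verify the default-permission arithmetic: the bits set in umask are
# cleared from the creation default (AND NOT, which for these values is
# the same as subtraction).
umask_value=0022
printf 'file: %04o\n' $(( 0666 & ~umask_value ))   # file: 0644 -> rw-r--r--
printf 'dir:  %04o\n' $(( 0777 & ~umask_value ))   # dir:  0755 -> rwxr-xr-x
```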

Now that the package is installed, we have access to the setfacl and getfacl commands. The first one sets ACLs, the second one reads them. In this particular case, we need to set default ACLs on the team folder so that those ACLs are applied to files created inside the directory rather than the directory itself. The syntax is a bit complicated: the -d option specifies that we want to impact the default ACLs, while the -m option specifies that we want to modify the ACLs and expects an ACL specification to follow.

$ setfacl -d -m u::rwx,g::rwx,o::r-x teamfolder
$ touch teamfolder/test3
$ ls -l teamfolder
-rw-r--r--  1 teamuser teamuser  5129 2010-03-03 14:34 test1
-rw-r--r--  1 teamuser teamgroup 5129 2010-03-03 14:35 test2
-rw-rw-r--  1 teamuser teamgroup 5129 2010-03-03 14:36 test3

There we go, it all works as expected: new files created in the team folder are created with the team's group and are group writeable. To finish off, let's have a look at how the folder's ACLs are stored:

$ getfacl teamfolder
# file: teamfolder
# owner: teamuser
# group: teamgroup
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:other::r-x
Granting and Revoking Access

Granting a user write access to the team folder is now extremely easy: just add that user to the team's group when they join the team:

$ sudo usermod -a -G teamgroup joiner

Where joiner is the user ID of the user joining the team. Revoking access is nearly as easy: remove the user from the team's group, either with sudo deluser leaver teamgroup (where leaver is the departing user's ID) or by editing /etc/group and removing the user ID from the group's line.

Variations

Restrict Delete and Rename to Owner

By default, any user who has write access to a file can delete or rename it. This means that any member of the team can delete or rename any file created by another member. This is generally OK but if it is not, it can also be restricted by setting the sticky bit on the directory:

$ chmod +t teamfolder
$ ls -l
drwxrwsr-t 2 teamuser teamgroup 4096 2010-03-03 14:32 teamfolder

This feature is used on the /tmp directory to ensure that all files created in that directory can only be deleted by their owners.

Restrict Access for Others

Another variation that may be more useful is to completely deny access to users who are not part of the team. It may be that a particular team is working on something sensitive and you don't want anybody outside the team to see it. To do this, we just revoke all permissions and ACLs for others on the team folder:

$ chmod o-rx teamfolder
$ setfacl -d -m o::--- teamfolder

References

Saturday, 17 April 2010

Ubuntu Lucid Netbook Remix from Alternate CD

The Problem

I've had Ubuntu Netbook Remix running on my Asus EeePC 701 for some time. After upgrading my main laptop to the beta 2 of Ubuntu Lucid 10.04, I wanted to do the same to the EeePC so that I could test the new release, report bugs if I found any and generally benefit from the improvements in 10.04.

The first problem I faced was that the EeePC 701 I have only has 4GB of internal storage, which is just enough for Ubuntu but too little to enable me to do a straight upgrade from the previous version (Karmic 9.10). No problem, I thought, I can re-install from scratch as I really have nothing important on that machine.

I created a flash drive image from the UNR CD, as detailed on the Ubuntu web site, booted my EeePC from it and selected to install Ubuntu on the system. However, I then faced another problem related to the limitations of the 701 model: this model has a 7-inch screen with a resolution of 800x480 pixels, which is quite small and the Prepare Disk Space screen doesn't fit in that resolution, making it impossible to install. I duly reported the bug and wondered how I could work around that problem.

Installing from the Alternate CD

One of the great things with Linux is that you always have a lot of options. Ubuntu in particular also comes with an alternate installer, which is text based and designed to work on very constrained hardware. Exactly what I needed!

I downloaded the alternate CD, created a flash drive image from it, booted the EeePC from it and started installing Ubuntu. The installation from the alternate CD is extremely easy and follows exactly the same path as with the standard CD, with the difference that you don't have the flashy graphical user interface. At the end of the installation, I had a perfectly functional Ubuntu system using the standard desktop. However, what I wanted was the netbook desktop.

From Standard to Netbook Desktop

The Ubuntu netbook desktop looks very different from the standard one but in practice it is just made up of a small number of packages.

Adding the Netbook Packages

Adding the netbook packages is extremely easy. Open a terminal and run the following command:

$ sudo apt-get install go-home-applet maximus\
 netbook-launcher window-picker-applet
Adding Launcher and Maximus to Application Startup

Once this is done, you will need to ensure the netbook launcher and maximus applications are started automatically. To do this, go to System -> Startup Applications and add two entries with the following commands:

  • netbook-launcher
  • maximus

Make sure that both can be started by starting them manually, either in a terminal window or via ALT-F2. Then, disable the desktop to ensure you don't accidentally click on it when you want to start an application:

$ sudo gconftool-2 --type bool\
 --set /apps/nautilus/preferences/show_desktop false
Customising the Gnome Panels

You then need to customise the Gnome panels.

Right click on the top panel, where the Applications menu is, and remove that menu. Do the same with the Firefox and Help icons. Then right-click again, in an empty area, and select "Add to Panel"; add the Go Home Applet, which will add a small Ubuntu icon. Right click on that icon, select "Move" and move it all the way to the left.

Right click on an empty area of the top panel and select "Add to Panel"; add the Window Picker Applet and move it to the left so that it is flush against the Go Home Applet.

Finally, right-click on the bottom panel and select "Delete this Panel".

Log out, log back in again and you should have a full netbook desktop that looks something like this:

Lucid Netbook Desktop


At first, the favourites folder will be empty, so you will have to add your favourite applications yourself.

Final Changes

One final change I made to the installation is that I removed OpenOffice.org and Evolution. The former because it occupies a lot of hard disk space, the latter because it really doesn't work well on so small a screen.

Other Options

An alternative option to install all the required packages is to install the netbook meta-package:

$ sudo apt-get install ubuntu-netbook

I didn't try this so I don't know how much it installs and configures for you. One of the nice side effects of doing it the way I did is that it keeps some standard configuration, such as the multiple workspaces that you can access via CTRL+ALT+arrow keys. I find that this works particularly well with the netbook interface.

Friday, 1 May 2009

ExifTool on Ubuntu

A while ago, I blogged about installing ExifTool on Ubuntu. There's actually a much simpler way to do this. ExifTool is part of the standard packages directly available from the Ubuntu repositories so it can be installed in one line using apt-get, no need for that make malarkey I mentioned last time:

$ sudo apt-get install libimage-exiftool-perl

Et voilà, ExifTool installed and ready to go!

Thursday, 23 April 2009

Making Skype work with PulseAudio in Ubuntu Intrepid and Jaunty

Following my sound issues with Skype on Ubuntu, I did a bit more research and eventually found an excellent how-to article on PulseAudio in the Ubuntu forums. Appendix C explains how to get Skype to work properly and indeed it is a doddle, meaning there is no need to kill PulseAudio when using Skype! I've tested it in Intrepid and Jaunty and it works a treat. To quote the article, here's how to do it:

Open Skype's Options, then go to Sound Devices. You need to set "Sound Out" and "Ringing" to the "pulse" device, and set "Sound In" to the hardware definition of your microphone. For example, my laptop's microphone is defined as "plughw:I82801DBICH4,0".

You will have to experiment with the different options for "Sound In" until you find the correct one: choose an option, click on "Make a test call" until you find the option that works. On my machine, here's what the option window looks like:

Skype Options, Sound Devices Tab

Thursday, 2 April 2009

Adding a Static Address to the DHCP Server

I've got a Lexmark X9575 all-in-one printer. Up to now, it was connected to the Mac via USB. I wanted to reconfigure it to connect it to the network instead so that I can use it with other computers. But I wanted to do so in such a way that it would get its network configuration via DHCP while keeping a fixed address and be resolvable via DNS.

The first thing I did was to remove the printer setup from the Mac (no, you can't change the configuration, you have to un-install and re-install, thank you Lexmark!). Then I connected the printer to the network switch with a standard RJ45 cable. It requested a DHCP lease from the server immediately and the DNS entries got updated. That was a good start. However, the IP address was part of the dynamic range and therefore unpredictable, and the name of the printer was hard-coded to the very unintuitive ET0020002C5B7A. I wanted something more memorable, like lexmark-printer.

To force a fixed IP address is quite simple, as explained in this DHCP mini-howto. I just added the following to the /etc/dhcp3/dhcpd.conf file:

host lexmark-printer {
 hardware ethernet 00:20:00:2c:5b:7a;
 fixed-address 192.168.1.250;
}

Restarted the DHCP server:

$ sudo /etc/init.d/dhcp3-server restart

And it didn't work, for two reasons:

  • The DHCP server had already assigned a lease, so it wasn't going to renew it;
  • The printer already had an IP address configured, so it wasn't going to request a new one either.

To solve the first problem, I needed to force the expiry of the lease on the DHCP server. The ISC DHCP server stores leases between restarts in a file called /var/lib/dhcp3/dhcpd.leases. So revoking the lease can be done by updating that file, removing the entry for the given lease and restarting the DHCP server.
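
A lease entry is a block of the form lease &lt;ip&gt; { ... }, so removing it can be scripted. Here is a sketch on a sample file (on the real server, stop the DHCP server first and edit /var/lib/dhcp3/dhcpd.leases in place):

```shell
# Build a sample lease file with two entries, then delete the block for
# 192.168.1.250 (everything from its "lease" line to the closing brace).
cat > /tmp/sample.leases <<'EOF'
lease 192.168.1.250 {
  hardware ethernet 00:20:00:2c:5b:7a;
  binding state active;
}
lease 192.168.1.17 {
  hardware ethernet 00:11:22:33:44:55;
  binding state active;
}
EOF

sed -i '/^lease 192\.168\.1\.250 {/,/^}/d' /tmp/sample.leases
grep -c '^lease' /tmp/sample.leases   # prints 1: only the other lease remains
```

The second MAC address and IP above are made up for the example; only the printer's details come from my actual configuration.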

The second problem was of the same ilk: go to the printer's TCP/IP configuration, set DHCP to no, save the settings, go back into them, set DHCP back to yes and that forced the printer to request a new lease.

That gave me a static address for the printer while still keeping it managed by DHCP. However, in such a case, the DHCP server doesn't update the DNS database. This has to be done manually, by adding the following to the forward lookup file:

lexmark-printer.home.   IN A    192.168.1.250

And this in the reverse lookup file:

250.1.168.192.in-addr.arpa. IN PTR  lexmark-printer.home.

Then of course, a restart of the DNS server is required:

$ sudo /etc/init.d/bind9 restart

Tuesday, 31 March 2009

Skype Sound Issues

Skype is not available in the Ubuntu repositories but you can get hold of it (and a few other things) through medibuntu. However, it looks like that version of Skype has a problem with sound on Ubuntu 8.10 Intrepid Ibex, as detailed in this bug report. The workaround suggested in the discussion, which consists in killing pulseaudio before launching Skype, works for me, so that's good news. Even better news, it looks like the problem is fully sorted in Ubuntu 9.04, which is currently in beta. So, until then, I will use the workaround.

One lesson to learn from this, though, is that if you ever have a problem with a piece of software on Linux, a good practice is to start that software from the command line: the output in the terminal window is invaluable for developers and maintainers to understand the problem and is a good place to start in order to find a solution to it. Quite often, just putting the error message in Google will return a number of articles and bug reports on that exact problem.
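The pattern is simply to run the program from a terminal and keep the output; a sketch, with `someapp` standing in for whatever application you are debugging (here a shell function that simulates an application printing an error):

```shell
# Run the program and capture both stdout and stderr to a log you can
# search or paste into a bug report. "someapp" is a stand-in function
# simulating an application that writes an error to stderr.
someapp() { echo 'ALSA lib pcm.c: unable to open slave' >&2; }
someapp 2>&1 | tee someapp.log
# Check whether the log contains the error we care about
grep -c 'unable to open' someapp.log
```

The `2>&1` merges stderr into stdout so `tee` captures everything while still showing it on screen.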

Upgrading the Server from Gutsy to Hardy

My silent server that provides DNS, DHCP, Subversion and other services to my home network hadn't been upgraded since it was first installed and had been running Ubuntu 7.10 (aka Gutsy Gibbon) quite happily all this time. But with 7.10 reaching end of life in a few weeks, I felt it was time to upgrade and that today was the day to do so.

The first port of call is the upgrade notes, and in particular the Hardy note on upgrading from 7.10 to 8.04. Make sure you read the "Before You Start" section at the beginning of that note first. Taking this into account, here is the sequence of what I did for the upgrade:

Refresh the package index

It's always good to do that once in a while and especially before an upgrade.

$ sudo apt-get update

Update all packages

Before an upgrade, it's essential to ensure that you are on the latest version of packages for your current release.

$ sudo apt-get upgrade

You will likely need to reboot after that, especially if the upgrade includes a new kernel. If in doubt, reboot anyway.

$ sudo init 6

Install update-manager-core

That's the bit that will perform the upgrade so you need to make sure it's there. If in doubt, install it, apt-get will tell you if you already have the latest version.

$ sudo apt-get install update-manager-core

Upgrade!

That's the biggie that will take a long time and may ask you some questions in the process. If you get any questions, make sure you read them carefully. Defaults tend to be sensible so it shouldn't wreck your system, but that doesn't excuse you from being sensible and paying attention yourself.

$ sudo do-release-upgrade

A few things to note on the upgrade process:

  • I was doing my upgrade through SSH. If things go wrong, you can lose the connection to your server and it can all end in tears, so the upgrade process warns you about this and starts a second SSH daemon on a different port (9004 in my case, but it will tell you). I had no problem upgrading over SSH but be careful nonetheless.
  • If you have a DHCP server configured, as I do, it will probably notify you that /etc/dhcp3/dhcpd.conf has been modified on your server and ask you whether you want to replace it with the new one it just downloaded or keep the old one. You need to keep the old one if you want your settings to be preserved. To be on the safe side, make a copy of it just in case.
  • Because of a well documented bug in Debian, upon which Ubuntu is based, the upgrade process will re-generate SSL and SSH keys, in particular the RSA host keys used by SSH. That will affect us later and I'll explain what to do about it.
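For the dhcpd.conf copy mentioned above, a dated backup is cheap insurance; a sketch, using a demo file in place of the real /etc/dhcp3/dhcpd.conf (which needs root to copy):

```shell
# Keep a dated backup of a config file before answering the upgrade's
# keep/replace prompt. The demo file stands in for /etc/dhcp3/dhcpd.conf;
# on the real system, run the copy with sudo.
backup_conf() { cp "$1" "$1.$(date +%Y%m%d).bak"; }
echo 'ddns-updates on;' > dhcpd.conf.demo
backup_conf dhcpd.conf.demo
ls dhcpd.conf.demo.*.bak
```

If the upgrade then overwrites the file, you can diff your backup against the new version and merge your settings back in.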

Once the upgrade is finished, the script will ask you if you want to reboot immediately. Unless you have a good reason to reboot manually, you can let the upgrade process do that for you.

Updating the SSH keys on the client machine

If you attempt to reconnect to your server via ssh straight after the upgrade, you will be greeted by a worrying message and you won't be able to go further:

Helsinki:~ brunogirin$ ssh bruno@szczecin
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
dc:11:1a:78:f4:34:c3:a2:ab:9d:52:1e:98:6d:7f:36.
Please contact your system administrator.
Add correct host key in /Users/brunogirin/.ssh/known_hosts to get rid of this message.
Offending key in /Users/brunogirin/.ssh/known_hosts:2
RSA host key for szczecin has changed and you have requested strict checking.
Host key verification failed.

This is normal and is due to the fact that the upgrade process has re-generated the SSH RSA keys on the server. Those keys are stored on all client machines that have previously connected to that server so that they can verify the identity of the server. The error message gives us a hint on how to resolve the problem. In the example above, taken from my OS-X box, it tells me that the offending key is on line 2 of the file /Users/brunogirin/.ssh/known_hosts. So open that file in an editor and remove the offending line, then try to connect again. As the client no longer has a key for the server, ssh will ask for confirmation before adding the new one to that file and letting you connect:
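If you'd rather not open an editor, the offending line can be deleted by its number; the sketch below uses a demo file standing in for ~/.ssh/known_hosts (line 2, matching the example above). Alternatively, `ssh-keygen -R <hostname>` will remove all keys for a given host for you.

```shell
# Build a demo known_hosts file (the key material is obviously fake),
# then delete line 2, the stale key reported in the warning.
# Point sed at ~/.ssh/known_hosts to do this for real.
printf '%s\n' \
  'helsinki ssh-rsa AAAA...one' \
  'szczecin ssh-rsa AAAA...two' \
  'gateway ssh-rsa AAAA...three' > known_hosts.demo
sed -i '2d' known_hosts.demo
cat known_hosts.demo
```

After the deletion, the next connection attempt prompts you to accept the server's new key, as shown below.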

Helsinki:~ brunogirin$ ssh bruno@szczecin.home
The authenticity of host 'szczecin.home (192.168.1.253)' can't be established.
RSA key fingerprint is dc:11:1a:78:f4:34:c3:a2:ab:9d:52:1e:98:6d:7f:36.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'szczecin.home' (RSA) to the list of known hosts.

Note that if you have several keys for the same host, for instance if you've connected through its name and IP address in the past, it may give you another warning, as shown on my Ubuntu laptop:

Warning: the RSA host key for 'szczecin' differs from the key
 for the IP address '192.168.1.253'
Offending key for IP in /home/bruno/.ssh/known_hosts:1
Are you sure you want to connect (yes/no)?

Once again, it tells you which is the offending key so you can remove it and attempt to connect via the IP address to renew the key. Note that this only works as explained above if SSH on the client is configured so that the StrictHostKeyChecking option is set to ask. If it is set to no, it will never check and will happily connect. If it is set to yes, you will have to update the keys manually. See man ssh_config for the full details.
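For reference, this is where the option lives on the client; keeping the default behaviour explicit in ~/.ssh/config (or system-wide in /etc/ssh/ssh_config) looks like this:

```
# ~/.ssh/config: prompt before accepting new or changed host keys
# (this is the usual default; shown here to make it explicit)
Host *
    StrictHostKeyChecking ask
```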

There you go: apart from the SSH malarkey at the end, it was rather straightforward and very quick too! In fact, it took me more time to write this post than to do the upgrade.

Bootnote

Now that I have this server on 8.04, I could upgrade immediately to 8.10 but I'll leave that for another day. In fact, considering that 8.04 is an LTS release, I may leave my server on that version until the next LTS release, due in April 2010.

Thursday, 18 December 2008

Wandering Ibex

After 6 weeks with Ubuntu 8.10, aka Intrepid Ibex, the change that has most affected the way I use my laptop is the new network manager. Connecting to a wireless network is easy and just works. It is generally fast to connect, definitely faster than with 8.04 aka Hardy Heron. It will immediately recognise networks it knows about and connect automatically. In particular, it is much better than Windows at connecting to a Wi-Fi network that does not advertise its SSID and recognising it later as a known network.

But the biggest benefit is the support for 3G modems out of the box, in particular the Huawei models available in the UK. When I plugged in my 3 USB modem for the first time, it recognised the device, asked me what country and what operator I was on and that was all the configuration I had to go through: no software to install, instantly on. It also integrates seamlessly into the network manager so there's no flaky third party software to use every time I want to connect. I just have to plug the modem in, select it in the network manager drop down and hey presto, in a few seconds I am online, whether in a pub, on the train or anywhere I've got network coverage from 3 (which is sometimes a bit patchy, I have to admit). OK, there's one thing it doesn't do, which the Windows version does: it doesn't tell me how much of my monthly quota I've consumed. However, I very rarely download large files via the modem so I've never reached the limit: large files is what the (fast) home broadband is for. It would be more important for someone who does use a 3G modem more heavily than I do. Maybe that's a feature to ask for in the next version?

Thursday, 30 October 2008

Intrepid Upgrade

Canonical released Ubuntu 8.10, code-named Intrepid Ibex, today. So obviously, I felt like I had to upgrade my Hardy Heron (8.04) T42 tonight, especially considering that Canonical have advertised this release as focused on the desktop and mobile computing, so it should be ideal for a laptop. As usual with recent Ubuntu releases, once the 1466 files my upgrade needed were downloaded, the rest of the process was a breeze and I am now writing this on a newly upgraded machine. Well done to Canonical for making it easy and idiot proof. So here are my first impressions, in no particular order:

  • The Gnome theme is slightly more streamlined and looks good.
  • I was expecting OpenOffice.org to be updated to v3.0 but that wasn't the case and Intrepid keeps v2.4. Hopefully, an upgrade to 3.0 will be available later.
  • There is a new Create a USB startup disk option in the System, Administration menu. I'll have to try that out!
  • There's a Recent Documents sub-menu in the Places menu. This will definitely come in useful.
  • The Shutdown button that used to open a dialog box has been replaced with a drop down menu that includes a Guest session option: it makes it easy to lend your machine for a few hours to a friend without being worried about your files or having to set up a special account.

Everything else looks like Hardy but I'm sure I'll discover new changes as I use the machine more.

Sunday, 10 February 2008

Subversion on Ubuntu

The Cunning Plan

One of the planned functions for my new silent server was to offer a Subversion repository for code and documents that I could access via WebDAV. By doing this, I would be able to save important documents on the server and benefit from version control. Version control is essential for computer code but can also be very useful for other types of documents by allowing you to have multiple versions, revert to an old version, etc. So without further ado, let's get into the nitty gritty of getting it to work on Ubuntu 7.10.

Installing the software packages

We need to install Subversion, Apache and the Subversion libraries for Apache. As this is all part of the standard Ubuntu distribution, it is extremely easy.

$ sudo apt-get install subversion apache2 libapache2-svn

And that's it, you have a working Subversion installation! It doesn't do very much yet so we need to create repositories for documents.

Subversion Repositories

Subversion is extremely flexible in the way it deals with files and directories. There are a number of standard repository layouts that are well explained in the book. Then there's the question of whether you'd rather put everything in the same repository or split it. This is also well explained in the book. My rule of thumb is to only create multiple repositories if you need to keep things completely separate, such as having one repository per customer, or if you need different settings. In my case, I need to store code and documents, the code being accessed via an IDE that completely supports Subversion, the documents being accessed as a network folder. The two usage scenarios require different settings in Apache, so this is a typical case where several repositories are a good idea. However, there is no need to split the code or document repositories further. You can create your repositories wherever you want in the file system; I chose to create them in a specific top level directory.

$ sudo mkdir /svn
$ sudo svnadmin create /svn/docs
$ sudo svnadmin create /svn/dev
$ sudo chown -R www-data:www-data /svn/*

The svnadmin command created a complete structure:

$ ls /svn/dev
conf  dav  db  format  hooks  locks  README.txt

The last command changes ownership recursively of both repositories so that the Apache instance can read and write to them. This is essential to get WebDAV to work.

Configuring Apache

Next, we need to configure Apache so that it can provide access to both repositories over WebDAV.

$ cd /etc/apache2/mods-available
$ sudo vi dav_svn.conf

In this file, I defined two Apache locations, one for each repository, with slightly different options. As this is a home installation, I don't need authentication. If you want to add authentication on top, check the How-To Geek article. The extra two options on the documents location are meant to enable network clients that don't support version control to store files. See Appendix C of the Subversion book for more details. Note that if you wanted to use several development repositories, such as one for each of your customers, you could replace the SVNPath option with SVNParentPath and point it to the parent directory.

<Location /dev>
  DAV svn
  SVNPath /svn/dev
</Location>

<Location /docs>
  DAV svn
  SVNPath /svn/docs
  SVNAutoversioning on
  ModMimeUsePathInfo on
</Location>
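Should you later want the authentication mentioned above, a minimal sketch using Apache basic auth could look like this (the password file path is an example; you would create it with `htpasswd -c` and a user name of your choice):

```apache
<Location /dev>
  DAV svn
  SVNPath /svn/dev
  # Require a valid user from the password file before allowing access
  AuthType Basic
  AuthName "Subversion repository"
  AuthUserFile /etc/apache2/dav_svn.passwd
  Require valid-user
</Location>
```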

The last thing to do is to restart Apache:

$ sudo /etc/init.d/apache2 restart

Subversion Clients

Now that the Subversion server is working, it's time to connect to it using client software. For development, I use Eclipse, for which there is the Subclipse plugin. It all works as expected. For the documents repository, Apple OS-X has built-in support for WebDAV remote folders. Go to the Finder, select the menu Go > Connect to Server and type the folder's URL in the dialogue box that appears, in my case http://szczecin.home/docs. It's also possible to browse both repositories using a web browser, which is a good way to provide read-only access.

Conclusion

That was a very short introduction to Subversion on Ubuntu. There's a lot more to it than this, much of it covered in the Subversion book. In particular, you can add authentication and SSL to the repositories once they are available through WebDAV. There are also a lot of options as far as Subversion clients are concerned and you can find free client software for every operating system you can think of.

Saturday, 3 November 2007

DHCP and Dynamic DNS on Ubuntu Server

The cunning plan

I have broadband Internet at home and to connect to the outside world I use a Wi-Fi ADSL router. This router also acts as a DHCP and DNS server. The DHCP function is what allows any machine I connect to my home network to obtain an IP address dynamically. The DNS function is what resolves names into IP addresses; for example, the DNS will tell you that www.blogger.com is really called blogger.l.google.com and its address is 72.14.221.191. This is all well and good: when I switch on one of my computers and let it connect to the network, it gets its IP address from the router, which also tells it to use the same router for name service queries. The router itself knows to delegate requests to my ISP's DNS, so the new computer on the network has full access to the Internet. If I connect a second computer to the network, the same happens and both can access the Internet at the same time. Great! However, they don't know anything about each other. If I connect the two machines called nuuk and helsinki to my network, the DNS server is unable to tell either one the address of the other. This is because the DNS in the router is fairly basic and doesn't know to update its database when a new machine gets allocated an address by the DHCP service. Basically, we want a DHCP server that, when a new machine comes online, can tell the DNS server: oy, I've got a new machine on the network, here's its name and the address I just allocated to it.

So what's a geek to do? Set up his own DHCP and DNS server, obviously! And make them talk to each other. Luckily, I have an old workstation that I haven't used for many years and that I was planning to throw away: it's a bit dated but it should be exactly what I need for this. And my recent experience with Ubuntu suggests that the recently released Ubuntu 7.10 Gutsy Gibbon Server Edition is the right tool for the job. So let's get started.

The hardware

I said I had this dated workstation lying around. It's an 8 year old piece of kit that was originally built to run Red Hat Linux. It wasn't very good at it because Linux was not quite ready for the desktop back then, but it was a wicked piece of kit at the time:

  • CPU: dual 666 MHz Pentium III (Coppermine)
  • Memory: 256 MB
  • Storage: 10 GB SCSI hard disk (I got a SCSI controller rather than the cheaper IDE because I wanted to connect my SCSI negative scanner to it)

There's plenty of horsepower for what we want to do, more than enough storage and the memory should be fine if the operating system we install on it is lightweight enough. This is where the Server Edition of Ubuntu comes into play: it's meant for server hardware and doesn't have the nice but memory-hungry desktop front-end; it's all command line driven. No glitz, just useful stuff. This means that even the latest version of Ubuntu Server should be able to run comfortably within 256 MB and there should be no need to fall back on an older version of the operating system.

Preparation

Before we start, there is a bit of planning to do. As the new server will be the authority in terms of assigning IP addresses on the network, it needs to have its own address fixed. We also need to decide on a range of addresses to allocate through DHCP. As the router is currently using the address 192.168.1.254, it makes sense to leave it as it is. Here is the network configuration I am aiming for:

  • ADSL router: 192.168.1.254
  • New DHCP and DNS server: 192.168.1.253, and I'll call it szczecin
  • DHCP address range: from 192.168.1.100 to 192.168.1.200
  • Domain name: home (no need to have an official domain name and in fact it's probably better if it's not something that could be a valid Internet domain)
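One thing worth double-checking in such a plan is that the fixed addresses fall outside the dynamic range, so the DHCP server can never hand them out; a throwaway check (purely illustrative, not part of the setup):

```shell
# Report whether an address's last octet falls inside the dynamic
# range 192.168.1.100-192.168.1.200 chosen in the plan above
in_dhcp_range() {
    last=${1##*.}
    if [ "$last" -ge 100 ] && [ "$last" -le 200 ]; then
        echo yes
    else
        echo no
    fi
}
in_dhcp_range 192.168.1.253   # the new server: should print "no"
in_dhcp_range 192.168.1.254   # the router: should print "no"
```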

One thing I did before installing anything, and that you may want to do as well, is boot the new server with the desktop Ubuntu live CD just to check that all the hardware is supported. The server edition is not a live CD, so it doesn't give you the opportunity to check that before going ahead.

Installing Ubuntu

Let's plug everything together first: a monitor and a keyboard (no need for a mouse as it's all command line), plus the power and VGA cables. And we might as well connect it to the network immediately, so we'll need a network cable as well.

Start the machine, put the CD in the drive and follow the instructions. As usual with Ubuntu, it's quite easy. There are only a few things to be careful about:

  • When it asks how to partition the disk, choose the automatic option using the whole disk.
  • When it gets to network setup, cancel the DHCP client setup and configure the network manually. Give it the IP address chosen above and a name. When it asks for a DNS server, put its own address.
  • Don't forget to select DNS in the list of additional services you want to install. I would suggest you also install the SSH server so that you can connect to the machine remotely.

At the end of the installation, the machine pops the CD out and asks you to confirm a restart. No nice funky Ubuntu logo when it restarts, it's all text and you are faced with a command line login prompt. If you want to keep working from the console you can, or you can just connect from any other machine connected to the same network using SSH, provided you installed the SSH server obviously. So let's login to our new server.

Configuring a simple static DNS

The first step is to configure a simple static DNS service that is able to resolve names for the router and the new server. Ubuntu 7.10 comes with BIND 9.4.1 as a DNS server and I have used O'Reilly's DNS and BIND book as my reference. The copy I have is the 3rd edition rather than the 5th but it is more than adequate for my purpose.

The very first task is to make sure we have the necessary basics in /etc/hosts:

127.0.0.1       localhost
192.168.1.253   szczecin.home szczecin
192.168.1.254   gateway.home  gateway

Then we need to reproduce that in the BIND configuration. The first task is to find where the BIND configuration files are. On Ubuntu, you will find them in /etc/bind with a modular default set of files:

$ ls /etc/bind
db.0
db.127
db.255
db.empty
db.local
db.root
named.conf
named.conf.local
named.conf.options
rndc.key
zones.rfc1918

The main file, named.conf, is constructed in such a way that for most simple installations, you should only have to change named.conf.local, which is exactly what we are going to do. But first, we need to create our database files. Let's start with the forward lookup file, which we will name db.home:

home.           IN SOA  szczecin.home. admin.email.address. (
                                1          ; serial
                                10800      ; refresh (3 hours)
                                3600       ; retry (1 hour)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
home.           IN NS   szczecin.home.

localhost.home. IN A    127.0.0.1

szczecin.home.  IN A    192.168.1.253
gateway.home.   IN A    192.168.1.254

The first entry specifies the Start Of Authority, identifying that our server is the best source of information for this zone. The admin.email.address. bit can be any admin email address you want to advertise, with the @ sign replaced by a dot. It doesn't have to be a valid address if you don't want it to be. The second entry identifies this machine as the name server for this zone. If you had multiple servers, you'd need one line per server. The following lines reflect what we have in /etc/hosts.
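As a small illustration of the RNAME convention, here is a throwaway helper (purely hypothetical, not part of the setup) that rewrites an admin email address into that form:

```shell
# Convert an admin email address to the SOA RNAME form used above:
# the @ becomes a dot and a trailing dot is appended
soa_rname() {
    printf '%s.\n' "$1" | sed 's/@/./'
}
soa_rname hostmaster@home   # prints "hostmaster.home."
```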

After that, we can work on the reverse lookup file, which will be named db.192.168.1:

1.168.192.in-addr.arpa.     IN SOA  szczecin.home. admin.email.address. (
                                1          ; serial
                                10800      ; refresh (3 hours)
                                3600       ; retry (1 hour)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
1.168.192.in-addr.arpa.     IN NS   szczecin.home.

253.1.168.192.in-addr.arpa. IN PTR  szczecin.home.
254.1.168.192.in-addr.arpa. IN PTR  gateway.home.

This file just defines the opposite mapping. The first two entries follow the same format as in the other file. Note how the IP addresses are back to front. The last two entries use the PTR record type rather than the A record type. We now need to declare those two database files in the BIND configuration. To do this, we just add the following to the end of the named.conf.local file:

zone "home" in {
        type master;
        file "/etc/bind/db.home";
};

zone "1.168.192.in-addr.arpa" in {
        type master;
        file "/etc/bind/db.192.168.1";
};

Note the semi-colons all over the place in the file: BIND doesn't like it if you forget them. We just need to restart the DNS for it to pick up the configuration:

$ sudo /etc/init.d/bind9 restart

If it doesn't start properly, the best way to identify what's wrong is to have a look at the system log files. On Ubuntu, the DNS messages will be in /var/log/daemon.log. One last thing to do before we test our setup is to update the local resolver configuration by modifying /etc/resolv.conf. Remove all lines that start with nameserver so that the local resolver automatically sends requests to the local BIND instance irrespective of the IP address of the machine. We should end up with a file that contains a single line:

domain home

And we can now use nslookup to verify that it's all working:

$ nslookup szczecin
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   szczecin.home
Address: 192.168.1.253

$ nslookup szczecin.home
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   szczecin.home
Address: 192.168.1.253

$ nslookup gateway
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   gateway.home
Address: 192.168.1.254

$ nslookup localhost
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   localhost.home
Address: 127.0.0.1

Installing the DHCP server

My reference for installing the DHCP server was an excellent article aimed at the Debian distribution by Adam Trickett. As Ubuntu is based on Debian, I didn't have much to change. In its default state, an Ubuntu installation doesn't include a DHCP server so we need to add it:

$ sudo apt-get install dhcp3-server

This will install the ISC DHCP server. It will ask you to insert the Ubuntu CD in the drive, so just do so. It will then attempt to start the new DHCP server and fail, saying that the configuration is incorrect, which is to be expected. Configuration files for this server can be found in /etc/dhcp3 and the one we are interested in is dhcpd.conf. Move the default version out of the way by renaming it and we will start anew with an empty file. Here is what I have on my system, based on Adam's article. The important lines are the ones that tell the DHCP server to update the DNS and which key file to use.

# Basic stuff to name the server and switch on updating
server-identifier       192.168.1.253;
ddns-updates            on;
ddns-update-style       interim;
ddns-domainname         "home.";
ddns-rev-domainname     "in-addr.arpa.";
# Ignore Windows FQDN updates
ignore                  client-updates;

# Include the key so that DHCP can authenticate itself to BIND9
include                 "/etc/bind/rndc.key";

# This is the communication zone
zone home. {
        primary 127.0.0.1;
        key rndc-key;
}

# Normal DHCP stuff
option domain-name              "home.";
option domain-name-servers      192.168.1.253;
option ip-forwarding            off;

default-lease-time              600;
max-lease-time                  7200;

# Tell the server it is authoritative on that subnet (essential)
authoritative;

subnet 192.168.1.0 netmask 255.255.255.0 {
        range                           192.168.1.100 192.168.1.200;
        option broadcast-address        192.168.1.255;
        option routers                  192.168.1.254;
        allow                           unknown-clients;

        zone 1.168.192.in-addr.arpa. {
                primary 192.168.1.253;
                key "rndc-key";
        }

        zone localdomain. {
                primary 192.168.1.253;
                key "rndc-key";
        }
}

Now that we've done that, we need to modify the DNS configuration so that it can accept the changes. But first, a quick note on the /etc/bind/rndc.key file. This file is automatically created when you install DNS on Ubuntu and should be fine for your installation. It is a key file that authenticates the DHCP server to the DNS server so that only the DHCP server is allowed to send updates: you don't want random people to be able to update your DNS database. So we need to modify the DNS configuration to accept the updates, which means going back to /etc/bind. And while we are at it, let's have a look at this key file:

key "rndc-key" {
        algorithm hmac-md5;
        secret "some base 64 encoded secret key";
};

Note the name of the key on the first line: it may vary from one distribution to another, so make a note of it. Then we need to add the control information to the DNS configuration. Rather than update named.conf, I decided to add the relevant code to the end of named.conf.options: it feels like the right place to put it but I suspect it doesn't really matter. So add this to the end of the file:

// allow localhost to perform updates
controls {
        inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; };
};

Then we need to modify the zone definitions in the named.conf.local file. Here is what it looks like with changes highlighted:

zone "home" in {
        type master;
        file "/etc/bind/db.home";
        allow-update { key "rndc-key"; };
        notify yes;
};

zone "1.168.192.in-addr.arpa" in {
        type master;
        file "/etc/bind/db.192.168.1";
        allow-update { key "rndc-key"; };
        notify yes;
};

include "/etc/bind/rndc.key";

We're nearly there. The changes we just made mean two things: the DNS server needs to be able to update the content of the /etc/bind directory and the DHCP server needs to be able to read the key file /etc/bind/rndc.key. By default on Ubuntu, this won't work as the permissions on those files are fairly stringent, so let's change them. As the BIND configuration directory belongs to root:bind, we just need to give write access to the group for BIND to be able to write to it. To give the DHCP server access to the key file, the right thing to do would be to add the dhcpd user to the bind group, but we can also make the file readable to everybody. Yes, this is less secure, but it will be fine for a home installation.

$ sudo chmod g+w /etc/bind

$ sudo chmod +r /etc/bind/rndc.key

Now we just need to start DHCP and restart DNS.

$ sudo /etc/init.d/dhcp3-server start

$ sudo /etc/init.d/bind9 restart

Error messages will be in the same place as before if the services don't start properly. If all starts as expected, it's now time to go to the administration interface of the router and disable DHCP. We should not need it anymore.

Booting the clients

The proof of the pudding is in the eating, and the proof of installing a server is in starting a number of clients to use the service. The first machine I tried with was my Ubuntu laptop called nuuk. To see what happens, tail the log file (/var/log/daemon.log, as before). You should see something like the following appear:

Nov  3 15:14:02 szczecin dhcpd: DHCPDISCOVER from 00:12:f0:1e:f4:79 via eth0
Nov  3 15:14:03 szczecin dhcpd: DHCPOFFER on 192.168.1.102 to 00:12:f0:1e:f4:79
(nuuk) via eth0
Nov  3 15:14:03 szczecin named[4771]: client 127.0.0.1#32773: updating zone
'home/IN': adding an RR at 'nuuk.home' A
Nov  3 15:14:03 szczecin named[4771]: client 127.0.0.1#32773: updating zone
'home/IN': adding an RR at 'nuuk.home' TXT
Nov  3 15:14:03 szczecin dhcpd: Added new forward map from nuuk.home. to
192.168.1.102
Nov  3 15:14:03 szczecin named[4771]: client 192.168.1.253#32773: updating zone
'1.168.192.in-addr.arpa/IN': deleting rrset at '102.1.168.192.in-addr.arpa' PTR
Nov  3 15:14:03 szczecin named[4771]: client 192.168.1.253#32773: updating zone
'1.168.192.in-addr.arpa/IN': adding an RR at '102.1.168.192.in-addr.arpa' PTR
Nov  3 15:14:03 szczecin dhcpd: added reverse map from 102.1.168.192.in-addr.arpa.
to nuuk.home.
Nov  3 15:14:03 szczecin dhcpd: Wrote 4 leases to leases file.
Nov  3 15:14:03 szczecin dhcpd: DHCPREQUEST for 192.168.1.102 (192.168.1.253) from
00:12:f0:1e:f4:79 (nuuk) via eth0
Nov  3 15:14:03 szczecin dhcpd: DHCPACK on 192.168.1.102 to 00:12:f0:1e:f4:79 (nuuk)
via eth0
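As an extra check, once the lease has been granted, the new records can be queried directly from any machine on the network. The names and addresses below are the ones from my setup, so substitute your own; 192.168.1.253 is the address of the new DNS server, judging from the log above.

```shell
host nuuk.home 192.168.1.253      # forward lookup against the new server
host 192.168.1.102 192.168.1.253  # reverse lookup should give nuuk.home
```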

If you don't see the message about updating the forward and reverse maps, it may be that your client machine is not configured to send its name to the DHCP server. For this to work on Ubuntu, you should have the following line in /etc/dhcp3/dhclient.conf:

send host-name "<hostname>";
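where <hostname> is the literal name of the machine: this version of dhclient doesn't do any substitution here, so on nuuk, for instance, the line would read:

```
send host-name "nuuk";
```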

Next on the list is my Apple PowerMac G5, helsinki. A simple restart and it works like a charm: I can ping helsinki from nuuk or szczecin and the other way round, and all machines can access the Internet. Great! While we're talking about computer names, if you don't know how to change a machine's name under OS X, here's how.

The next test is with my work's Windows XP laptop. All goes well: the Windows box gets an IP address and I can ping it from the other machines. However, I can't ping from the Windows box to the others unless I use the fully qualified domain name: that is, I can ping nuuk.home but not nuuk. It looks like Windows ignores the domain information sent by DHCP when resolving unqualified names. This is not a major problem as everything else works fine. I vaguely remember a network setting on Windows that you have to change to fix this; I'll post an update if I manage to find it again.

The final test is to start my Solaris Express laptop, mariehamn. It gets its IP address fine but doesn't seem to send its name to the server, so it can't be pinged by name. Everything else works though: it can ping all of the other machines, get to the Internet, etc. I suspect I need to find the equivalent of the /etc/dhcp3/dhclient.conf file on Solaris and change it so that it sends its name. It's probably hidden somewhere in the Network Auto-Magic configuration.
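I haven't dug into it yet, but from what I can find, the Solaris equivalent is /etc/default/dhcpagent together with a hostname file for the interface. The sketch below is from documentation rather than a tested setup, so double-check the details; the interface name bge0 is just an example.

```
# /etc/default/dhcpagent: ask the DHCP server to register our name
REQUEST_HOSTNAME=yes

# /etc/hostname.bge0: the name to request for this interface
inet mariehamn
```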

Conclusion

That was a bit convoluted, but the excellent resources that are the O'Reilly DNS and BIND book and Adam's article made it significantly easier. Real system administrators would say it was a doddle, made way too easy by Ubuntu. I've learnt useful stuff along the way and I now have a good use for this old workstation. Speaking of which, it hasn't really been breaking a sweat so far: the CPU has been close to 100% idle all the time, it has 114MB of RAM free out of 256, and the hard disk has seen virtually no activity. In other words, my 8-year-old box is over-specced for this and I could get it to do a lot more server tasks. Should I mention that the server version of some other recent operating systems, which shall remain nameless, would not even start on such a machine? No, I'll leave that debate for another day.

For those who are wondering what scheme I use to name my computers, it all comes from a previous job and all machines are named after cities of the world, with a Scandinavian and Baltic theme so far: Helsinki, Szczecin, Nuuk and Mariehamn.