Sunday, 30 January 2011

D-Bus Experiments in Vala

Most modern Linux desktop distributions now include D-Bus, which enables different applications in the same user session to communicate with each other or with system services. So I thought I'd experiment with D-Bus using Vala, starting with the published example.

Example 1: Ping Loop

I started with a simple ping client and server based on the example above, the main difference being that I added a loop in the client. The server code takes a message and an integer, prints out the message and the received integer, then adds one to the integer before returning the result:

[DBus (name = "org.example.Demo")]
public class DemoServer : Object {

    public int ping (string msg, int i) {
        stdout.printf ("%s [%d]\n", msg, i);
        return i+1;
    }
}

void on_bus_acquired (DBusConnection conn) {
    try {
        conn.register_object ("/org/example/demo",
            new DemoServer ());
    } catch (IOError e) {
        stderr.printf ("Could not register service\n");
    }
}

void main () {
    Bus.own_name (
        BusType.SESSION, "org.example.Demo",
        BusNameOwnerFlags.NONE,
        on_bus_acquired,
        () => {},
        () => stderr.printf ("Could not acquire name\n"));

    new MainLoop ().run ();
}

server.vala

And the client simply queries the server in a loop every second and prints out the input and output values:

[DBus (name = "org.example.Demo")]
interface DemoClient : Object {
    public abstract int ping (string msg, int i)
        throws IOError;
}

void main () {
    int i = 1;
    int j;
    while(true) {
        try {
            DemoClient client = Bus.get_proxy_sync (
                BusType.SESSION, "org.example.Demo",
                "/org/example/demo");

            j = client.ping ("ping", i);
            stdout.printf ("%d => %d\n", i, j);
            i = j;

        } catch (IOError e) {
            stderr.printf ("%s\n", e.message);
        }
        Thread.usleep(1000000);
    }
}

client.vala

Compiling both programs requires the gio-2.0 package:

$ valac --pkg gio-2.0 server.vala
$ valac --pkg gio-2.0 client.vala

Start the server followed by the client in two separate terminal windows and you should see them exchange data.

Example 2: Graceful Termination

The problem with the code above is that if the server process terminates while the client is still running, an exception occurs and the client doesn't recover. It would be nice if the client terminated gracefully when the server stops. The Vala GDBus library can detect when a service comes up or goes down using watches. When you set up a watch, you provide callback methods for the watch to call. Those callbacks are dispatched by the main loop, so the MainLoop must keep running in the main thread and the core client loop has to run in its own thread, which complicates the client code a bit. In the code below, the client code has been encapsulated into a Demo class and the client loop thread is controlled through a simple boolean attribute.

[DBus (name = "org.example.Demo")]
interface DemoClient : Object {
    public abstract int ping (string msg, int i)
        throws IOError;
}

public class Demo : Object {
    private bool server_up = true;
    
    private DemoClient client;
    
    private uint watch;
    
    private MainLoop main_loop;
    
    public Demo() {
        try {
            watch = Bus.watch_name(
                BusType.SESSION,
                "org.example.Demo",
                BusNameWatcherFlags.AUTO_START,
                () => {},
                on_name_vanished
            );
            client = Bus.get_proxy_sync (
                BusType.SESSION,
                "org.example.Demo",
                "/org/example/demo");
            server_up = true;
        } catch (IOError e) {
            stderr.printf ("%s\n", e.message);
            server_up = false;
        }
    }
    
    public void start() {
        main_loop = new MainLoop();
        Thread.create(() => {
            run();
            return null;
        }, false);
        main_loop.run();
    }
    
    public void run() {
        int i = 1;
        int j;
        while(server_up) {
            try {
                j = client.ping ("ping", i);
                stdout.printf ("%d => %d\n", i, j);
                i = j;
            } catch (IOError e) {
                stderr.printf ("%s\n", e.message);
            }
            if (server_up)
                Thread.usleep(1000000);
        }
        main_loop.quit();
    }
    
    void on_name_vanished(DBusConnection conn, string name) {
        stdout.printf ("%s vanished, closing down.\n", name);
        server_up = false;
    }
}

void main () {
    Demo demo = new Demo();
    demo.start();
}

client.vala

The server code is unchanged.

With this version, once the server and client are started, stopping the server with CTRL-C will notify the client, which then stops gracefully. You can even have multiple clients; they will all stop in the same way. The advantage of doing this is that the client has a chance to clean up any resources it holds before stopping.

Example 3: Peers and Service Migration

All of the above is great and we can now design client programs that share a common service. But when dealing with software that is started by the user, the client/server model is not always the best option: you need to ensure that the server is started before any client and only stopped after the last client has stopped. And what happens if the server fails? What would be really nice is to have peers: no difference between client and server, just a single executable that can be run multiple times. The first process to start acts as a server for the others; when it stops, responsibility for the service is taken over by one of the remaining processes, if any; and the last process to terminate closes the service. D-Bus makes this easy to implement for the following reasons:

  • Only one process can own a service;
  • All service calls go via D-Bus so a client process will not notice if the server process actually changes, as long as the service is still available;
  • The client and server processes can be the same process or different processes, it makes no difference to the client.

Therefore, the implementation of the peer program is simple:

  • Start the server, followed by the client, and ignore any failure in starting the server: if another process has already started it, the service will be available to the client;
  • When the peer is notified of the loss of service, try to start the server but ignore any failure: if another process was notified first, it will have started the server and there will be no interruption in service for the client thread.

We only need a single Vala file to implement the peer:

[DBus (name = "org.example.Demo")]
interface DemoClient : Object {
    public abstract int ping (string msg, int i)
        throws IOError;
}

[DBus (name = "org.example.Demo")]
public class DemoServer : Object {

    public int ping (string msg, int i) {
        stdout.printf ("%s [%d]\n", msg, i);
        return i+1;
    }
}

public class Demo : Object {
    private DemoClient client;
    
    private uint watch;
    
    public Demo() {
        // nothing to initialise
    }
    
    public void start() {
        start_server();
        start_client();
        new MainLoop().run();
    }
    
    public void start_client() {
        try {
            watch = Bus.watch_name(
                BusType.SESSION,
                "org.example.Demo",
                BusNameWatcherFlags.AUTO_START,
                on_name_appeared,
                on_name_vanished
            );
            client = Bus.get_proxy_sync (
                BusType.SESSION,
                "org.example.Demo",
                "/org/example/demo");
            Thread.create(() => {
                run_client();
                return null;
            }, false);
        } catch (IOError e) {
            stderr.printf ("Could not create proxy\n");
        }
    }
    
    public void run_client() {
        int i = 1;
        int j;
        while(true) {
            try {
                j = client.ping ("ping", i);
                stdout.printf ("%d => %d\n", i, j);
                i = j;
            } catch (IOError e) {
                stderr.printf ("Could not send message\n");
            }
            Thread.usleep(1000000);
        }
    }
    
    public void start_server() {
        Bus.own_name (
            BusType.SESSION,
            "org.example.Demo",
            BusNameOwnerFlags.NONE,
            on_bus_acquired,
            () => stderr.printf ("Name acquired\n"),
            () => stderr.printf ("Could not acquire name\n"));
    }
    
    void on_name_appeared(DBusConnection conn, string name) {
        stdout.printf ("%s appeared.\n", name);
    }
    
    void on_name_vanished(DBusConnection conn, string name) {
        stdout.printf (
            "%s vanished, attempting to start server.\n",
            name);
        start_server();
    }

    void on_bus_acquired (DBusConnection conn) {
        try {
            conn.register_object (
                "/org/example/demo",
                new DemoServer ());
        } catch (IOError e) {
            stderr.printf ("Could not register service\n");
        }
    }
}

void main () {
    Demo demo = new Demo();
    demo.start();
}

peer.vala

Start several peers in different terminal windows, at least 3 or 4. You will notice that the first process starts a server thread that all client threads connect to. When the process that provides the service terminates, the next process in the chain takes over the responsibility for the service. And finally, when the last process terminates, the service is closed.

Note that if you want to check D-Bus activity while running the programs in this introduction, you can either install the D-Feet tool or run dbus-monitor from the command line.

Saturday, 4 September 2010

Memory Usage Graphs with ps and Gnuplot

When developing the import from F-Spot feature in Shotwell, a user who tested the patch found out that there was a bit of a memory leak. After finding the cause, I produced a patch to fix it but I also wanted to identify what the difference was between the development trunk and the patch. So here's how I did it.

Gathering Data

The first step was to gather relevant memory usage data. For this I needed a repeatable test that I could perform with both the trunk build and the patched build. As I had a test F-Spot database, that proved quite straightforward:

  1. Delete the Shotwell database,
  2. Build the trunk version,
  3. Import the test F-Spot database using the trunk build,
  4. Delete the Shotwell database again,
  5. Build the patched version,
  6. Import the test F-Spot database using the patched build.

With the test process sorted, I needed to gather memory data during steps 3 and 6. That's easily done using the ps command in a loop and sending the output to files. So, for the trunk build, I just started this command in a terminal before starting Shotwell and stopped it once finished:

$ while true; do
ps -C shotwell -o pid=,%mem=,vsz= >> mem-trunk.log
sleep 1
done

The one for the patched version is virtually the same:

$ while true; do
ps -C shotwell -o pid=,%mem=,vsz= >> mem-patch.log
sleep 1
done

Note that the equal sign (=) after each field specification tells ps not to output the column header. So at the end of this, you end up with two files that each contain 3 columns of data: the PID, the percentage of memory used and the total virtual memory size of the process, sampled at one-second intervals. In both cases, I let Shotwell idle for a few seconds at the end of the import before closing it, to ensure that everything had stabilised.
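To get a quick feel for the numbers before plotting, a short awk pass over a log gives the peak values. This is just a sketch: it assumes the three-column format produced by the ps loop above, and uses a generated sample file in place of a real log.

```shell
# Sample data standing in for a real ps log (pid, %mem, vsz).
printf '1234 1.5 100000\n1234 2.5 120000\n1234 2.0 110000\n' > mem-sample.log

# Track the running maxima of columns 2 (%MEM) and 3 (VSZ).
awk '{ if ($2 > mem) mem = $2; if ($3 > vsz) vsz = $3 }
     END { printf "peak %%MEM: %s, peak VSZ: %s\n", mem, vsz }' mem-sample.log
# prints: peak %MEM: 2.5, peak VSZ: 120000
```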

Next, I checked how many lines of data I had in each file:

$ wc -l mem-*.log

And truncated the longer file to the length of the shorter one, just to make sure I had the same number of data points for both.
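The truncation itself is a head one-liner. The sketch below uses generated sample files; on the real logs, substitute mem-trunk.log and mem-patch.log for whichever file is longer:

```shell
# Sample logs of different lengths standing in for the real ones.
printf '1\n2\n3\n4\n5\n' > longer.log
printf '1\n2\n3\n' > shorter.log

# Truncate the longer file to the line count of the shorter one.
head -n "$(wc -l < shorter.log)" longer.log > longer.tmp
mv longer.tmp longer.log
wc -l < longer.log    # prints 3
```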

Creating the Graph

After that, I wanted to create a single graph with four lines: VSZ and %MEM for both trunk and patch. And I wanted to output the result to a PNG file. Gnuplot can do all of this; you just need to know how to set its myriad options. So here's the Gnuplot script in detail.

set term png small size 800,600
set output "mem-graph.png"

Gnuplot works with the concept of terminals. So the first line tells it to use the special terminal called png with a small font and a size of 800 pixels wide by 600 pixels high. The second line is fairly self-explanatory: output to the given file.

set ylabel "VSZ"
set y2label "%MEM"

The two sets of values I am interested in have very different ranges. VSZ is a number of bytes and will have values in the hundreds of thousands if not millions, while %MEM is a percentage so will have a value somewhere between 0 and 100. So to make sure that both types of graphs fit in the output, I will use the ability that Gnuplot has to use left and right Y axes with different ranges: VSZ will go on the default Y axis (left, called y), while %MEM will go on the other one (right, called y2). So I set the labels for both.

set ytics nomirror
set y2tics nomirror in

As the right Y axis is not used by default, I need to enable it and to set where the tics go. To do that, I first disable the mirror option on the left Y axis and enable tics on the right Y axis by telling Gnuplot that their position will be in.

set yrange [0:*]
set y2range [0:*]

The last piece of setup is to customise the range on both axes. By default, Gnuplot will adjust the range so that there is as little white space as possible above or below the graph. But in this case, I want both sets of graphs to start at zero so that I can have a better idea of total memory used.

The next bit is quite long so I will start by explaining the instruction for a single graph before bringing all four together.

plot "mem-trunk.log" using 3 with lines axes x1y1 title "Trunk VSZ"

In the line above, I tell Gnuplot to take its data from the third column in the file called mem-trunk.log. The with lines section specifies that I want a line graph. The axes x1y1 specifies that I want it to be drawn against the first X axis and the first Y axis (the default, but here for completeness). And the last bit specifies what I want the title for this graph to be. Then it's just a case of plotting all four graphs in a single plot command separated by commas. So here's the full script:

set term png small size 800,600
set output "mem-graph.png"
set ylabel "VSZ"
set y2label "%MEM"
set ytics nomirror
set y2tics nomirror in
set yrange [0:*]
set y2range [0:*]
plot "mem-trunk.log" using 3 with lines axes x1y1 title "Trunk VSZ", \
     "mem-patch.log" using 3 with lines axes x1y1 title "Patch VSZ", \
     "mem-trunk.log" using 2 with lines axes x1y2 title "Trunk %MEM", \
     "mem-patch.log" using 2 with lines axes x1y2 title "Patch %MEM"

Make sure that there is absolutely no white space between the backslash characters and the end of lines in the plot command otherwise Gnuplot will complain. Save the script to a file called mem.gnuplot and run it:

$ gnuplot mem.gnuplot

And here is the output I got, which shows the improvement in memory usage between trunk and patch:

Memory usage graph

Saturday, 7 August 2010

Flashing the N900

Introduction

The Nokia N900 can upgrade its operating system over the air, which is the best way to upgrade it, especially if you have a fast Internet connection. It is also very practical when you're not running Windows, because the Nokia suite only exists on that platform. This is exactly what I had been doing since I bought my N900. Except that a few months ago I declined an update, for a reason I have now forgotten, probably lack of time, and since then the device has failed to propose any new upgrade. So the only solution was to flash it, but I had no idea how to do that on Linux. Luckily, I found a very handy guide on the subject which includes a version for Linux. Unfortunately, that guide is light on details and not fully correct, so here's the fix.

Warning

Note that flashing the device rather than upgrading it over the air will remove all the software you previously installed, as well as some of your settings. It will not delete your data, as it only flashes the root partition; your address book, for instance, is safe. But I would still strongly advise that you back up your device before going any further. The N900 ships with a backup utility, so please use it and copy the backup file to your computer. If you fail to do so and you lose all your data, you're on your own to recover it.

Flashing the Device

As the guide explains, download the Linux Flasher Tool and the latest available N900 firmware image. You will need to enter your device ID for the image and in both cases you will need to accept Nokia's license agreement.

If you are using Ubuntu, I suggest you download the .deb version of the flasher tool: maemo_flasher-3.5_2.5.2.2_i386.deb, then double-click on the downloaded file and the package installer will open. Install the package. The flasher tool will then be installed to /usr/bin, not in the current directory, as suggested by the article.

For the image, ignore all the red warnings on the page next to the eMMC image links and download the latest combined image for your region, in my case RX-51_2009SE_10.2010.19-1.203.1_PR_COMBINED_203_ARM.bin.

Go through steps 3 and 4 as explained in the guide: turn the phone off and connect it to your computer via the USB cable.

Now comes the glitch in the guide: the actual command to flash the image needs to specify where to find the image in question:

$ sudo flasher-3.5 -F /path/to/image.bin -f -R

When you run the tool, you will get quite a bit of output when it validates the image. It will eventually show the following message:

Suitable USB device not found, waiting.

And the tool will hang. Contrary to what the guide says, this message is not an error: even though it sounds a bit ominous, it is the flasher tool's way of telling you that you need to switch the device on so that it can find it. So switch the phone on. As soon as you do so, the flasher tool will recognise it and start loading the image. It will provide on-screen feedback while it does so, and the N900 will display a small progress bar at the bottom of the Nokia boot image. Once the tool has finished, the phone will take a lot longer than usual to start up. Don't try to hurry things along, just wait until it is fully booted. Then, and only then, disconnect the phone from the computer.

As mentioned above, you then need to re-install the software you had previously installed on the device. Luckily the OVI application store remembers what you previously paid for and downloaded so you should be able to just re-download software without having to pay for it again.

Friday, 23 July 2010

Converting OpenOffice Documents in Bulk

I had a request from a customer earlier this week: they wanted a copy of all the diagrams that are present in a specification I've been writing for them. All those diagrams are in separate OpenOffice Draw (.odg) files. They don't use OpenOffice but were happy to have PDF versions. The only problem is that there are 42 of them so it would take ages to convert them manually. A quick Google later and I found a way to convert documents from the command line.

So, to adapt it to Ubuntu, download the DocumentConverter.py file mentioned in the post above and store it where your documents are. Then, start OpenOffice in headless mode:

$ ooffice -headless -accept="socket,port=8100;urp;" &

As the version of Python installed on Ubuntu already includes the UNO bindings, there is no need to use a special OpenOffice version of Python to do the job, the standard one will do. The script takes two arguments, the input file and the output file, and works out the formats based on the extensions. Doing the bulk convert is therefore extremely easy:

$ ls *.odg | while read f; do
> echo $f
> python DocumentConverter.py $f ${f%.*}.pdf
> done
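The ${f%.*} expansion used in the loop strips the shortest trailing match of .*, i.e. the file extension, leaving the base name to which .pdf is appended. The filename below is made up for illustration:

```shell
f="architecture-diagram.odg"
echo "${f%.*}.pdf"    # prints architecture-diagram.pdf
```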

Job done! It took my laptop 5 minutes to convert all 42 files, during which time I made coffee rather than repetitively clicking on UI buttons, and I even had enough time left to blog about it. Oh, and I have a happy customer: that's the most important part.

Saturday, 29 May 2010

Installing OpenERP on Ubuntu 10.04 LTS

Background

It's that time of the year again. I need to prepare all the information required by my accountant to issue my company accounts. Luckily I have most of it saved in a handy directory on my laptop, with a backup copy on the server. However, it still takes a lot of time that I'd rather be spending doing something else. Accounts are not the only process that I'd like to improve: sales, expenses, invoicing, everything that is not part of my daily job is done in an ad-hoc way. This is exactly what ERP systems are designed to address. And as usual, there is an open source solution out there, one that is even fully supported on Ubuntu: OpenERP. Let's install it then!

Before we start, OpenERP is a client-server solution and as such there are two components to consider:

OpenERP Server
This is the central component of the system. It is deployed on a server and is responsible for managing the company database. You should have a single instance of it for the whole company.
OpenERP Client
This is a desktop application that enables users to view and update the data held in the server. It should be installed on every computer that needs access to the ERP system.

In addition, OpenERP also offers another server component called OpenERP Web. This component is meant to be deployed on the same server as OpenERP Server and offers a web interface to the ERP system, meaning that you can access the system using a vanilla web browser, rather than the dedicated client application. This is great if you have a large number of users and don't want to deploy the dedicated client everywhere. I will ignore this component for today and only go through the installation of the server and dedicated client.

OpenERP Server

To install the server component, you need a computer that will act as a server. You could potentially install it on your desktop or laptop if you are sure that you will only ever be the single user of the application but I wouldn't recommend it. In my case, I decided to install it on my existing home server that runs Ubuntu Server 10.04 LTS.

Install PostgreSQL

OpenERP needs a database engine and is designed to run with PostgreSQL so we need to install this first. To do this, connect to the server and just install the relevant package, as detailed in the OpenERP manual:

$ sudo apt-get install postgresql
Create an openerp user in PostgreSQL

The server will need a dedicated user in the database so we need to create it. To do this, we first need to start a session under the identity of the postgres Linux user. We can then create the database user that we will name openerp. When prompted, enter a password and make sure you remember it. There is no need for that user to be an administrator so we answer n when asked that question. Then close the postgres session to come back to your standard Linux user.

$ sudo su - postgres
$ createuser --createdb --username postgres --no-createrole \
 --pwprompt openerp
Enter password for new role: 
Enter it again: 
Shall the new role be a superuser? (y/n) n
$ exit

Now that this new user is created, let's try to connect to the database engine using it:

$ psql -U openerp -W
psql: FATAL:  Ident authentication failed for user "openerp"

It doesn't work. This is because PostgreSQL uses IDENT-based authentication rather than password-based authentication. To solve this, edit the client authentication configuration file:

$ sudo vi /etc/postgresql/8.4/main/pg_hba.conf

Find the line that reads:

local all all ident

Replace the word ident with md5:

local all all md5
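If you prefer scripting the change to editing the file by hand, a sed substitution can do it. The sketch below works on a generated sample line, because the exact whitespace in pg_hba.conf varies; on the real system you would run the same sed expression with sudo against /etc/postgresql/8.4/main/pg_hba.conf and check the result before restarting.

```shell
# Sample line as it appears in pg_hba.conf (actual spacing may differ).
printf 'local   all         all                               ident\n' > pg_hba.sample

# Replace the trailing "ident" method with "md5" on the local/all/all line.
sed -i 's/^\(local[[:space:]]\{1,\}all[[:space:]]\{1,\}all[[:space:]]\{1,\}\)ident[[:space:]]*$/\1md5/' pg_hba.sample
cat pg_hba.sample
```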

And restart the database server:

$ sudo /etc/init.d/postgresql-8.4 restart

Let's try again:

$ psql -U openerp -W
psql: FATAL:  database "openerp" does not exist

OK, that's a different error, due to the fact that PostgreSQL tries to connect to a database with the same name as the user if none is specified. So let's try to connect to the postgres database, which contains system information and is always present.

$ psql -d postgres -U openerp -W
psql (8.4.4)
Type "help" for help.
postgres=>

That works so we know that the user has been successfully created and there should be no problem connecting with that user identity.

Install OpenERP Server

OpenERP Server is part of the Ubuntu repositories so it's extremely easy to install:

$ sudo apt-get install openerp-server

Now is a good time for a coffee break. On my installation, the openerp-server package triggers the installation of no less than 108 packages in 48.9MB of archives that will require an additional 251MB of disk space. This may take some time. Most of the additional packages are Python packages, which is sensible because OpenERP is written in Python but the fact that it also requires things like xterm and bits of GTK makes me think that the dependency list could be pruned somewhat. At the end of the installation, a message appears saying that you should go and read the file called /usr/share/doc/openerp-server/README.Debian. Go and do this. It mainly explains the PostgreSQL installation that we just did but it also mentions a bit of useful information, namely that the OpenERP Server configuration file is /etc/openerp-server.conf. That's quite handy because we need to update it.

$ sudo vi /etc/openerp-server.conf

If you are installing the server and client on different machines, which I would recommend, you need to find the line that says:

interface = localhost

And replace the word localhost with the IP address of your server. If you don't know it, just run ifconfig with no parameters and look for inet addr: at the beginning of the second line of output: the IP address is the set of four numbers separated by dots just after that. You then need to modify another two lines:

dbpassword = the password you chose for the openerp user
dbhost = localhost

In theory you shouldn't have to specify dbhost = localhost because leaving that entry blank should default to the value localhost but when I tried this, OpenERP Server could not connect to PostgreSQL. It's now time to restart the server process but before we do this, it's always good to be able to follow the logs in a second window just in case something goes wrong. The location of the log file is helpfully specified in the configuration file that we just edited. Open another terminal, connect to the server and run the following command:

$ tail -f /var/log/openerp-server.log

The last 10 lines of log will appear and every time a new entry is added to the log file, it will also appear in this window. In the first window, we can restart the server:

$ sudo /etc/init.d/openerp-server restart

If all goes well, no nasty error message should appear in the log window.

OpenERP Client

Now, to install the client software on a desktop, connect to the desktop, open a terminal and install the package:

$ sudo apt-get install openerp-client

That's it. Once finished, you will find an OpenERP Client entry in the Applications -> Internet menu. Click on it to open the client. If you want to fill in the feedback form, do so, otherwise click Cancel. You then need to create a new database. To do this, go to the File -> Databases -> New Database menu:

New Database menu

This will open a dialogue where you can enter the details of the new database.

New Database dialogue

You will need to click on the Change button at the top to specify the name or IP address of the server. The port should be the default, 8070. Of course, if you have a firewall between the client and the server, the firewall configuration will need to be updated to allow traffic to the server through this port. The default password for the super administrator is admin as specified in the dialogue. Note that the database name cannot contain spaces, dashes or any non-alphanumeric characters.

So that's OpenERP installed. It's now time to read the rest of the documentation and get a copy of Accounting for Dummies to understand what it's all about.

Backup and Restore a Subversion Repository

Sometimes, you just have to do serious maintenance on a server, like install a new instance of your operating system from scratch with a new partition layout. If this server happens to be your Subversion server, you need to back up your repository and restore it once you're done with the maintenance. Or maybe you just want to move your repository from one server to another. Here's how to do it. The commands below need to be performed on the SVN server by a user who has write access to the repository (in theory any SVN administrator).

First, here's how to back up the repository to a compressed file:

$ svnadmin dump /path/to/repo | gzip > backup.gz

And how to restore it:

$ gunzip -c backup.gz | svnadmin load /path/to/repo
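Before deleting the original repository, it's worth checking that the dump file is intact. gzip can verify the archive without extracting it; this is a quick sanity check, not a substitute for a test restore, and the dump content below is a stand-in for a real one:

```shell
# Stand-in for a real repository dump.
echo "SVN-fs-dump-format-version: 2" | gzip > backup.gz

# -t verifies archive integrity; exit status 0 means the file is intact.
gzip -t backup.gz && echo "backup.gz OK"
```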

Those commands are meant for UNIX or Linux so you will have to adapt them if you are running Windows. It shouldn't be too difficult to do so, especially if you are using Cygwin.

Thursday, 29 April 2010

Shared Folders in Ubuntu with setgid and ACL

Introduction

There is an often requested feature on Linux (or UNIX): the ability to create shared directories similar to what is possible in Windows, that is, a directory in which every person who has been given access can read, write or modify files. However, because Linux file systems such as ext4 enforce file permissions that are stricter than Windows file systems such as FAT or NTFS, creating such a directory is not obvious. Of course, if you put your shared directory on a FAT or NTFS partition, it will automatically behave just like in Windows, but that requires a separate partition and doesn't allow you to enforce permissions on a per-group basis. So here's a quick guide on how to do this with Ubuntu. The same principles apply to other Linux distributions, so this should be portable.

Use Cases

Let's go through a couple of classic use cases first, to identify exactly what we want to do.

Project Folder

In a company or university setting where users are assigned to project teams or departments, it can be useful to create shared folders where all members of the team can drop files that are useful for the whole team. They need to be able to create, update, delete files, all in the same folder. They also need to be able to read, update or delete files created by other members of the team. However, users external to the team should only have read access.

Web Development

For anybody doing web development on Linux, a classic problem arises with development or test web servers: the web server process runs as the www-data user and the document directory is owned by that user. It would be great if all web developers on the team could update the document directory on the server without requiring root access.

Linux Default Behaviour

Linux has the concept of user groups. You can check what groups your user belongs to by typing the following on the command line:

$ groups
bruno adm dialout cdrom plugdev lpadmin admin sambashare

On a default Linux installation, groups are used to give different users access to specific features, such as the ability to administer the system or use the CD-ROM drive. But one of the core features of user groups is to support file permissions. Each file has separate sets of read, write and execute permissions for the user who owns the file, the group that owns the file, and others, that is, everybody else. Whenever a user attempts to read, write or execute a file, the system decides whether they can do it based on the following rules:

  • if the user is the owner of the file, user permissions apply,
  • otherwise, if the user is part of the group that owns the file, group permissions apply,
  • otherwise, others permissions apply.
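These three permission sets are easy to inspect with stat; here's a quick illustration on a scratch file (the file and the mode 640 are arbitrary choices for the demonstration):

```shell
# Create a scratch file, give it explicit permissions and inspect the
# three permission sets: user (rw-), group (r--), others (---).
tmp=$(mktemp)
chmod 640 "$tmp"
stat -c 'owner=%U group=%G mode=%A' "$tmp"
rm -f "$tmp"
```

The mode string -rw-r----- reads left to right as user, group and others permissions, matching the rules above.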

So to configure a shared directory as defined above, we need to:

  • create a user group for the team,
  • assign all team member users to that user group,
  • create a directory and configure it so that all users in the group can:
    • add new files to the directory,
    • modify any existing file in the directory,
  • and of course, all this should work without users having to do anything special.

How To

Enable ACL

The first thing we need to do is to enable ACL support on the partition where we will create the shared directory. ACLs (Access Control Lists) extend the basic Linux permission model to enable more fine-grained control. As this requires the file system to store additional permission meta-data against files, it needs to be configured accordingly. We can do this by adding the acl option to the relevant line in /etc/fstab, such as:

UUID=b8c490d0-0547-4e1f-b052-7130bacfd936 /home ext4 defaults,acl 0 2

The partition then needs to be re-mounted. If the partition to re-mount is /, /usr or /home, you will probably need to restart the machine. Otherwise, the following commands should re-mount the partition:

$ sudo umount partition
$ sudo mount partition

where partition is the mount point of the partition as defined in /etc/fstab, such as /var/www.

Create Group

We then need to create the group to which we will give shared access, let's call that group teamgroup:

$ sudo groupadd teamgroup

Try to give the group a meaningful name while keeping it short. If it's meant to be a team group, give it the name of the team, such as marketing. Note the following restrictions on Debian and Ubuntu for group names (taken from the man page):

It is usually recommended to only use groupnames that begin with a lower case letter or an underscore, followed by lower case letters, digits, underscores, or dashes. They can end with a dollar sign. In regular expression terms: [a-z_][a-z0-9_-]*[$]?

On Debian, the only constraints are that groupnames must neither start with a dash (-) nor contain a colon (:) or a whitespace (space: , end of line: \n, tabulation: \t, etc.).

Groupnames may only be up to 32 characters long.

We then need to assign users to that group:

$ sudo usermod -a -G teamgroup teamuser

Where teamuser is the login name of the user to assign to the group. The assignment takes effect the next time the user logs in. Make sure you do not forget the -a option, otherwise you will wipe out all existing group assignments for that user rather than just adding a new one.
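A quick way to verify group membership from the user's next session is id (the output naturally depends on your own account):

```shell
# List all groups the current session belongs to, by name.
id -nG
```

If teamgroup doesn't appear even after the usermod, the user probably hasn't logged out and back in yet.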

Create the Folder

The next step is to create the shared folder. This is easy:

$ cd /path/to/parent
$ mkdir teamfolder

Where /path/to/parent is the path to the parent folder and teamfolder is the name of the folder you want to create. We then assign group ownership of the folder to the group previously created:

$ chgrp teamgroup teamfolder

And give write access to the group on that folder:

$ chmod g+w teamfolder

Let's check what this folder looks like:

$ ls -l
drwxrwxr-x 2 teamuser teamgroup 4096 2010-03-03 14:32 teamfolder

Now, let's try to create a new file in that directory:

$ touch teamfolder/test1
$ ls -l teamfolder
-rw-r--r--  1 teamuser teamuser 0 2010-03-03 14:34 test1

That looks good and any other user who is part of teamgroup should be able to create files in this directory. However, group members will not be able to update files created by other members of the group for the following reasons:

  • the group that owns the file is the user's primary group, rather than teamgroup,
  • the file's permissions only allow the owner of the file to update it, not the group.

Set the setgid Bit

We'll solve the first problem by setting the setgid bit on the folder. Setting this permission means that all files created in the folder will inherit the group of the folder rather than the primary group of the user who creates the file.

$ chmod g+s teamfolder
$ ls -l
drwxrwsr-x 2 teamuser teamgroup 4096 2010-03-03 14:32 teamfolder

Note the s in the group permissions instead of the x that was there previously. So now let's try to create another test file.

$ touch teamfolder/test2
$ ls -l teamfolder
-rw-r--r--  1 teamuser teamuser  0 2010-03-03 14:34 test1
-rw-r--r--  1 teamuser teamgroup 0 2010-03-03 14:35 test2

So now whenever a file is created in the team directory, it inherits the team's group.
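If you want to experiment without touching a real team directory, the mode change at least can be reproduced on a scratch directory (seeing the group inheritance itself requires a second group, which this sketch doesn't assume):

```shell
# 2 in the leading octal digit is the setgid bit; 775 is rwxrwxr-x.
d=$(mktemp -d)
chmod 2775 "$d"
stat -c '%A' "$d"    # the group execute slot shows s instead of x
rm -rf "$d"
```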

Set Default ACL

The second issue is related to umask, the default mask applied when creating files and directories. By default umask is set to the octal value 0022, as demonstrated if you run the following:

$ umask
0022
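The effect of that mask on newly created files and directories can be demonstrated in a scratch directory:

```shell
# Under umask 022, new files come out as 644 and new directories as 755.
umask 022
d=$(mktemp -d)
touch "$d/file"
mkdir "$d/dir"
stat -c '%a %n' "$d/file" "$d/dir"
rm -rf "$d"
```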

This is a negative mask applied to the permissions of every file or directory the user creates. By default, a file is created with permissions rw-rw-rw-, equivalent to 0666 in octal, and a directory with permissions rwxrwxrwx, equivalent to 0777. The umask is then subtracted from that default (strictly speaking, the kernel clears the umask bits with a bitwise operation, but for a simple mask like 0022 that amounts to the same thing) to give the effective permissions with which files and directories are created. So a file gets 0666-0022 = 0644, equivalent to rw-r--r--, and a directory gets 0777-0022 = 0755, equivalent to rwxr-xr-x. This default is sensible for most situations but needs to be overridden for a team directory. The way to do this is to assign specific ACL entries to the team directory. The first thing to do is to install the acl package to obtain the necessary command line tools. (Well, in fact, the first thing would be to enable acl on the relevant partition, but we already did that at the very beginning.)

$ sudo apt-get install acl

Now that the package is installed, we have access to the setfacl and getfacl commands. The first one sets ACLs, the second one reads them. In this particular case, we need to set default ACLs on the team folder so that those ACLs are applied to files created inside the directory rather than the directory itself. The syntax is a bit complicated: the -d option specifies that we want to impact the default ACLs, while the -m option specifies that we want to modify the ACLs and expects an ACL specification to follow.

$ setfacl -d -m u::rwx,g::rwx,o::r-x teamfolder
$ touch teamfolder/test3
$ ls -l teamfolder
-rw-r--r--  1 teamuser teamuser  0 2010-03-03 14:34 test1
-rw-r--r--  1 teamuser teamgroup 0 2010-03-03 14:35 test2
-rw-rw-r--  1 teamuser teamgroup 0 2010-03-03 14:36 test3

There we go, it all works as expected: new files created in the team folder are created with the team's group and are group writeable. To finish off, let's have a look at how the folder's ACLs are stored:

$ getfacl teamfolder
# file: teamfolder
# owner: teamuser
# group: teamgroup
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:other::r-x

Granting and Revoking Access

Granting a user write access to the team folder is now extremely easy: just add that user to the team's group when he joins the team:

$ sudo usermod -a -G teamgroup joiner

Where joiner is the user ID of the user joining the team. Revoking access is nearly as easy: remove the user from the team's group. On most systems this can be done with gpasswd:

$ sudo gpasswd -d leaver teamgroup

Where leaver is the user ID of the user leaving the team. Alternatively, edit the file /etc/group, find the group and remove the user ID from it.

Variations

Restrict Delete and Rename to Owner

By default, any user who has write access to a directory can delete or rename the files it contains. This means that any member of the team can delete or rename any file created by another member. This is generally OK but, if it is not, deletion and renaming can be restricted to the file's owner by setting the sticky bit on the directory:

$ chmod +t teamfolder
$ ls -l
drwxrwsr-t 2 teamuser teamgroup 4096 2010-03-03 14:32 teamfolder

This feature is used on the /tmp directory to ensure that all files created in that directory can only be deleted by their owners.
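The sticky bit shows up as a trailing t in the mode, exactly as ls -ld /tmp would show; it can be reproduced on a scratch directory:

```shell
# 1 in the leading octal digit is the sticky bit; 777 is rwxrwxrwx.
d=$(mktemp -d)
chmod 1777 "$d"
stat -c '%A' "$d"    # drwxrwxrwt, the same mode as /tmp
rm -rf "$d"
```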

Restrict Access for Others

Another variation that may be more useful is to completely deny access to users who are not part of the team. It may be that a particular team is working on some sensitive material and you don't want anybody outside the team to see it. To do this, we just revoke all permissions and default ACLs for others on the team folder:

$ chmod o-rx teamfolder
$ setfacl -d -m o::--- teamfolder

Tuesday, 19 January 2010

Improving your Linux Skills

I came across a few interesting web sites that are excellent resources if you want to improve your Linux skills:

Thursday, 2 April 2009

Adding a Static Address to the DHCP Server

I've got a Lexmark X9575 all-in-one printer. Up to now, it was connected to the Mac via USB. I wanted to reconfigure it to connect it to the network instead so that I can use it with other computers. But I wanted to do so in such a way that it would get its network configuration via DHCP while keeping a fixed address and be resolvable via DNS.

The first thing I did was to remove the printer setup from the Mac (no, you can't change the configuration, you have to un-install and re-install, thank you Lexmark!). Then I connected the printer to the network switch with a standard RJ45 cable. It requested a DHCP lease from the server immediately and the DNS entries got updated. That was a good start. However, the IP address was part of the dynamic range and therefore unpredictable, and the name of the printer was hard-coded to the very non-intuitive ET0020002C5B7A. I wanted something more memorable like lexmark-printer.

To force a fixed IP address is quite simple, as explained in this DHCP mini-howto. I just added the following to the /etc/dhcp3/dhcpd.conf file:

host lexmark-printer {
 hardware ethernet 00:20:00:2c:5b:7a;
 fixed-address 192.168.1.250;
}

Restarted the DHCP server:

$ sudo /etc/init.d/dhcp3-server restart

And it didn't work, for two reasons:

  • The DHCP server already had a lease assigned so wasn't going to renew it;
  • The printer already had an IP address configured so wasn't going to request a new one either.

To solve the first problem, I needed to force the expiry of the lease on the DHCP server. The ISC DHCP server stores leases between restarts in a file called /var/lib/dhcp3/dhcpd.leases. So revoking the lease can be done by updating that file, removing the entry for the given lease and restarting the DHCP server.

The second problem was of the same ilk: go to the printer's TCP/IP configuration, set DHCP to no, save the settings, go back into them, set DHCP back to yes and that forced the printer to request a new lease.

That gave me a static address for the printer while still keeping it managed by DHCP. However, in such a case, the DHCP server doesn't update the DNS database. This has to be done manually, by adding the following in the forward lookup file:

lexmark-printer.home.   IN A    192.168.1.250

And this in the reverse lookup file:

250.1.168.192.in-addr.arpa. IN PTR  lexmark-printer.home.

Then of course, a restart of the DNS server is required:

$ sudo /etc/init.d/bind9 restart
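One detail that is easy to forget when editing BIND zone files by hand: the serial number in the zone's SOA record must be incremented on every change, otherwise secondary servers (and some caches) will not pick up the update. The names and values below are illustrative, not taken from my actual configuration:

```
home.   IN SOA  ns.home. admin.home. (
            2009040201 ; serial: bump on every edit (e.g. YYYYMMDDnn)
            3600       ; refresh
            900        ; retry
            604800     ; expire
            86400 )    ; negative caching TTL
```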

Tuesday, 31 March 2009

Upgrading the Server from Gutsy to Hardy

My silent server that provides DNS, DHCP, Subversion and other services to my home network hadn't been upgraded since it was first installed and had been running Ubuntu 7.10 (aka Gutsy Gibbon) quite happily all this time. But with 7.10 reaching end of life in a few weeks, I felt it was time to upgrade and that today was the day to do so.

The first port of call is the upgrade notes, in particular the Hardy note on upgrading from 7.10 to 8.04. Make sure you read the "Before You Start" section of that note before doing anything else. Taking all this into account, here is the sequence of what I did for the upgrade:

Refresh the package index

It's always good to do that once in a while and especially before an upgrade.

$ sudo apt-get update

Update all packages

Before an upgrade, it's essential to ensure that you are on the latest version of packages for your current release.

$ sudo apt-get upgrade

You will likely need to reboot after that, especially if the upgrade includes a new kernel. If in doubt, reboot anyway.

$ sudo init 6

Install update-manager-core

That's the bit that will perform the upgrade so you need to make sure it's there. If in doubt, install it, apt-get will tell you if you already have the latest version.

$ sudo apt-get install update-manager-core

Upgrade!

That's the biggie that will take a long time and may ask you some questions in the process. If you get any questions, make sure you read them carefully. Defaults tend to be sensible so it shouldn't wreck your system, but that doesn't excuse you from paying attention.

$ sudo do-release-upgrade

A few things to note on the upgrade process:

  • I was doing my upgrade through SSH. If things go wrong, you can potentially lose connection with your server and it can all end in tears so the upgrade process warns you about this and starts a second SSH daemon on a different port (9004 in my case but it will tell you). I had no problem installing over ssh but be careful nonetheless.
  • If you have a DHCP server configured, as I do, it will probably notify you that the file /etc/dhcp3/dhcpd.conf file has been modified on your server and ask you whether you want to replace it with the new one it just downloaded or keep the old one. You need to keep the old one if you want your settings to be preserved. To be on the safe side, make a copy of it just in case.
  • Because of a well documented bug in Debian, upon which Ubuntu is based, the upgrade process will re-generate SSL keys, in particular the RSA host keys used by SSH. That will affect us later and I'll explain what to do about it.

Once the upgrade is finished, the script will ask you if you want to reboot immediately. Unless you have a good reason to reboot manually, you can let the upgrade process do that for you.

Updating the SSH keys on the client machine

If you attempt to reconnect to your server via ssh straight after the upgrade, you will be greeted by a worrying message and you won't be able to go further:

Helsinki:~ brunogirin$ ssh bruno@szczecin
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
dc:11:1a:78:f4:34:c3:a2:ab:9d:52:1e:98:6d:7f:36.
Please contact your system administrator.
Add correct host key in /Users/brunogirin/.ssh/known_hosts to get rid of this message.
Offending key in /Users/brunogirin/.ssh/known_hosts:2
RSA host key for szczecin has changed and you have requested strict checking.
Host key verification failed.

This is normal: the upgrade process has re-generated the SSH RSA host keys on the server. Those keys are stored on every client machine that has previously connected to the server so that it can verify the server's identity. To resolve the problem, the error message gives us a hint. In the example above, taken from my OS X box, it tells me that the offending key is on line 2 of the file /Users/brunogirin/.ssh/known_hosts. So open that file in an editor, remove the offending line and try to connect again (recent versions of OpenSSH can also do this in one step with ssh-keygen -R szczecin). As the client no longer has the key, it will ask for confirmation before adding the new one to that file and let you connect:

Helsinki:~ brunogirin$ ssh bruno@szczecin.home
The authenticity of host 'szczecin.home (192.168.1.253)' can't be established.
RSA key fingerprint is dc:11:1a:78:f4:34:c3:a2:ab:9d:52:1e:98:6d:7f:36.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'szczecin.home' (RSA) to the list of known hosts.

Note that if you have several keys for the same host, for instance if you've connected through its name and IP address in the past, it may give you another warning, as shown on my Ubuntu laptop:

Warning: the RSA host key for 'szczecin' differs from the key
 for the IP address '192.168.1.253'
Offending key for IP in /home/bruno/.ssh/known_hosts:1
Are you sure you want to connect (yes/no)?

Once again, it tells you which is the offending key so you can remove it and attempt to connect via the IP address to renew the key. Note that this only works as explained above if SSH on the client is configured so that the StrictHostKeyChecking option is set to ask. If it is set to no, it will never check and will happily connect. If it is set to yes, you will have to update the keys manually. See man ssh_config for the full details.
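For reference, this client-side option lives in ~/.ssh/config (per user) or /etc/ssh/ssh_config (system wide); a minimal fragment pinning the default behaviour would look like this:

```
# ~/.ssh/config
Host *
    StrictHostKeyChecking ask
```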

There you go: apart from the SSH malarkey at the end, it was rather straightforward and very quick too! In fact, it took me more time to write this post than to do the upgrade.

Bootnote

Now that I have this server on 8.04, I could upgrade immediately to 8.10 but I'll leave that for another day. In fact, considering that 8.04 is an LTS release, I may well leave my server on that version until the next LTS release.

Sunday, 15 February 2009

Using the LaTeX letter document class

LaTeX offers a variety of built-in document classes. One of them is letter, which is designed for writing official letters, such as the one I just wrote to the (inept) agency that manages my flat. However, that standard document class is not very well liked: it's the ugly duckling of the LaTeX document classes. But when you just have to get things done, better the devil you know than the one you don't. So here's a quick summary of the easy tweaks I applied to bend the default document class to my needs.

Structure

The letter document class requires a specific structure. It is designed so that you can write several letters to several recipients while using the same return address. The basic structure of a letter document looks something like this:

Basic structure of a letter document
\documentclass{letter}

\address{The return address}
\signature{Your signature}

\begin{document}
\begin{letter}{The recipient's address}

\opening{The opening, such as: Dear Sir,}

The body of the letter

\closing{The closing, such as: Best regards,}

\end{letter}
\end{document}

The letter environment that is enclosed in the document environment looks superfluous at first. The idea is that you can have several letter environments when you want to produce more than a single letter: this can be very useful for mailings.

Top space

The document class introduces a lot of white space at the top. This is good for a multi-page letter but looks really awkward on a single page letter. So, I did what the TeX FAQ suggests and added the following in the preamble, just after the \documentclass command:

Preamble to remove the initial top space

\makeatletter
\let\@texttop\relax
\makeatother

\begin{document}

Adding a reference

As my managing agent specified a reference in the letter they sent me, I wanted to include that reference in my letter, just after their address but with some small vertical space in between. There is no obvious way to do that so I just added it to the end of the recipient's address with a space of length \parskip:

Adding a reference
\begin{letter}{The recipient's address\\[\parskip]
Your ref: The reference}

Adding some vertical space before the closing sentence

The closing sentence is printed very close to the last paragraph in the letter. I wanted a bit more space so added the following just before the \closing command:

Adding space before closing
\vspace{3\parskip}
\closing{The closing, such as: Best regards,}

Adding space between the closing and the signature

When writing a letter, I like leaving enough space between the closing sentence and the printed signature so that I can add a hand written signature once it's printed. The standard space is too small for this. However, considering both lines are printed by the \closing command, there is no obvious way to include any space in between. However, looking at the letter.cls code, it appears that the standard template adds 6 times the \medskipamount length in between the two lines so the simple way to change that space is to change the relevant length just before the \closing command:

Adding space between the closing and signature
\vspace{3\parskip}
\addtolength{\medskipamount}{2\medskipamount}
\closing{The closing, such as: Best regards,}

This will multiply the \medskipamount length by 3 and, as a result, multiply the space between closing and signature by 3 as well. This works great for a single letter but will not have the intended result when you have multiple letters in the same document. The way to avoid this is to save the original length and restore it afterwards:

Adding space between the closing and signature
\newlength{\oldmedskipamount}% declare the register once, e.g. in the preamble
\vspace{3\parskip}
\setlength{\oldmedskipamount}{\medskipamount}
\addtolength{\medskipamount}{2\medskipamount}
\closing{The closing, such as: Best regards,}
\setlength{\medskipamount}{\oldmedskipamount}

With those few tricks, I got a letter that was close enough to what I wanted. I could probably have spent a lot more time tweaking it or trying alternative document classes but this particular job didn't warrant me spending too much time on it.

Tuesday, 13 January 2009

Tuning a Wi-Fi Router with Linux

Nowadays, with Wi-Fi broadband routers the de facto standard in the home comes a new problem for people who live in cities: interference between neighbouring wireless networks, which can lead to slow or even dropped connections. A few years ago this was not a problem: few people had Wi-Fi routers at home, and if you had one it would work great out of the box. Now everybody's got a Wi-Fi router at home, whether a traditional router or a phone that masquerades as one. Wi-Fi was designed to avoid interference by operating on a number of frequencies, and every router lets you choose which frequency to use by selecting a channel. In the UK, you will typically have the choice of a channel between 1 and 13. Just go to the wireless settings area of your router's administration page and you should be able to change the channel, as shown in this example from a Netgear router:

[Screenshot: Netgear wireless settings screen, set to Channel 4]

That's simple: change the value, ensure it's applied and, depending on the manufacturer, reboot the router. But what is a good value that will ensure a good connection? Well, it depends on your environment. As you want to avoid interference, this value should be as far as possible from the channels of other routers in range of your equipment. But how do you tell what channels other routers in your area use? The Linux wireless tools come to the rescue, in particular the one called iwlist. If you have a Wi-Fi laptop running Linux, it will have this utility installed as standard. The basic command we want is:

$ iwlist [interface] scan[ning]

To do a full scan, we need to run it as root so we'll prepend sudo to it. It is not necessary to specify a network interface but you might as well do so to avoid scanning non-wireless adapters. On my laptop, the wireless interface is eth1 so here is what I obtain by running iwlist against it:

$ sudo iwlist eth1 scan
eth1      Scan completed :
          Cell 01 - Address: 02:1D:68:4B:6D:F6
                    ESSID:"BTOpenzone"
                    Protocol:IEEE 802.11bg
                    Mode:Master
                    Frequency:2.412 GHz (Channel 1)
                    Encryption key:off
                    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 6 Mb/s; 9 Mb/s
                              11 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s
                              48 Mb/s; 54 Mb/s
                    Quality=27/100  Signal level=-83 dBm  
                    Extra: Last beacon: 240ms ago
          Cell 02 - Address: 00:1D:68:4B:6D:F5
                    ESSID:"BTHomeHub-954D"
                    Protocol:IEEE 802.11bg
                    Mode:Master
                    Frequency:2.412 GHz (Channel 1)
                    Encryption key:on
                    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 6 Mb/s; 9 Mb/s
                              11 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s
                              48 Mb/s; 54 Mb/s
                    Quality=29/100  Signal level=-82 dBm  
                    Extra: Last beacon: 248ms ago

... and so on

Each Cell section provides the details of a wireless hub in range. For each of them, the line we are interested in is the one that starts with the word Frequency. So, if we use grep to filter the data, we get:

$ sudo iwlist eth1 scan | grep Frequency
Frequency:2.412 GHz (Channel 1)
Frequency:2.412 GHz (Channel 1)
Frequency:2.412 GHz (Channel 1)
Frequency:2.427 GHz (Channel 4)
Frequency:2.442 GHz (Channel 7)
Frequency:2.442 GHz (Channel 7)
Frequency:2.462 GHz (Channel 11)
Frequency:2.442 GHz (Channel 7)

This shows that I am in range of 8 wireless hubs. The one I am using is the fourth one, set to use channel 4. But that's because I changed it yesterday. It used to be configured with its default setting, using channel 11, which was clashing with the one before last. In fact, running the command at different times, it appears that all routers in my area use channels 1, 7 or 11. With a possible set of channels between 1 and 13, there are 5 unused channels between 1 and 7, 3 between 7 and 11 and 2 above 11. So the best choice is halfway between 1 and 7: channel 4. And since I reconfigured the router to use that channel, speed has improved significantly and dropped connections have been a thing of the past.
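That channel tally can be automated with standard text tools. The scan output below is canned sample data so the sketch is self-contained; in practice you would pipe sudo iwlist eth1 scan into the same filter:

```shell
# Count how many access points sit on each channel, busiest first.
scan='Frequency:2.412 GHz (Channel 1)
Frequency:2.412 GHz (Channel 1)
Frequency:2.412 GHz (Channel 1)
Frequency:2.427 GHz (Channel 4)
Frequency:2.442 GHz (Channel 7)
Frequency:2.442 GHz (Channel 7)
Frequency:2.462 GHz (Channel 11)
Frequency:2.442 GHz (Channel 7)'
printf '%s\n' "$scan" | grep Frequency | sort | uniq -c | sort -rn
```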

Now, why router manufacturers don't design their products to scan neighbouring wireless networks at start-up and choose a frequency that doesn't clash with other hubs, I don't know. By all means, leave power users the ability to specify the channel explicitly, but a little bit of automation would go a long way in making Wi-Fi easier for the average home user.

Update

After all this, I had to change the router's channel again earlier, as more access points were switched on during the evening. At one point, a full scan showed a grand total of 24 wireless networks! So I changed my router to channel 13 and it all seems fine so far. Methinks I'll have to do something about improving the range of that router. While I was at it, I also upgraded the firmware so we'll see if it makes a difference.

Saturday, 26 July 2008

Parameterised Testing with JUnit 4

Anybody who's ever written software more complex than the typical Hello, World! example knows that software without bugs doesn't exist and that it needs to be tested. Most people who have done testing on programs written in the Java language have come across JUnit. Now, JUnit is a very small package. In fact, when you look at the number of classes provided, you'd think it doesn't do anything useful. But don't be fooled: in this small amount of code lies extreme power and flexibility. Even more so since JUnit 4, which makes full use of the power of annotations, introduced in Java 5.0.

Typically, when using JUnit 3, you'd create one test case class per class you want to test and one test method per method you want to test. The problem is, as soon as your method takes a parameter, it is likely that you will want to test your method with a variety of values for this parameter: normal and invalid values as well as border conditions. In the old days, you had two ways to do it:

  • Test the same method with several different parameters in a single test: it works but all your tests with different values are bundled into a single test when it comes to output. And if you have 100 error conditions and it fails on the second one, you don't know whether any of the remaining 98 work.
  • Create one method per parameter variation you want to try: you can then report each test condition individually but it can be difficult to tell when they are related to the same method. And it requires a stupid amount of extra code.

There must be a better way to do this. And JUnit 4 has the answer: parameterised tests. So how does it work? Let's take a real life example. I've been playing with the new script package in Java 6.0 to create a basic Forth language interpreter in Java. So far, I can do the four basic arithmetic operations and print stuff off the stack. That sort of code is the ideal candidate for parameterised testing as I'd want to test my eval method with a number of different scripts without having to write one single method for each test script. So, the JUnit test class I started from looked something like this:

<some more imports go here...>

import static org.junit.Assert.*;
import org.junit.Test;

public class JForthScriptEngineTest {

 @Before
 public void setUp() throws Exception {
  jforthEngineFactory = new JForthScriptEngineFactory();
  jforthEngine = jforthEngineFactory.getScriptEngine();
 }

 @After
 public void tearDown() throws Exception {
  jforthEngine = null;
  jforthEngineFactory = null;
 }

 @Test
 public void testEvalReaderScriptContext() throws ScriptException {
  // context
  String script = "2 3 + .";
  String expectedResult = "5 ";
  ScriptContext context = new SimpleScriptContext();
  StringWriter out = new StringWriter();
  context.setWriter(out);
  // script
  StringReader in = new StringReader(script);
  Object r = jforthEngine.eval(in, context);
  assertEquals("Unexpected script output", expectedResult,
   out.toString());
 }

 @Test
 public void testGetFactory() {
  ScriptEngine engine = new JForthScriptEngine();
  ScriptEngineFactory factory = engine.getFactory();
  assertEquals(JForthScriptEngineFactory.class, factory.getClass());
 }

 private ScriptEngine jforthEngine;
 
 private ScriptEngineFactory jforthEngineFactory;
}

My testEvalReaderScriptContext method was exactly the sort of things that should be parameterised. The solution was to extract that method from this test class and create a new parameterised test class:

<some more imports go here...>

import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ParameterizedJForthScriptEngineTest {
 @Parameters
 public static Collection<String[]> data() {
  return Arrays.asList(new String[][] {
    { "2 3 + .", "5 " },
    { "2 3 * .", "6 " }
  }
  );
 }
 
 public ParameterizedJForthScriptEngineTest(
   String script, String expectedResult) {
  this.script = script;
  this.expectedResult = expectedResult;
 }

 @Before
 public void setUp() throws Exception {
  jforthEngineFactory = new JForthScriptEngineFactory();
  jforthEngine = jforthEngineFactory.getScriptEngine();
 }

 @After
 public void tearDown() throws Exception {
  jforthEngine = null;
  jforthEngineFactory = null;
 }

 @Test
 public void testEvalReaderScriptContext() throws ScriptException {
  // context
  ScriptContext context = new SimpleScriptContext();
  StringWriter out = new StringWriter();
  context.setWriter(out);
  // script
  StringReader in = new StringReader(script);
  Object r = jforthEngine.eval(in, context);
  assertEquals("Unexpected script output", expectedResult,
   out.toString());
 }

 private String script;
 
 private String expectedResult;

 private ScriptEngine jforthEngine;
 
 private ScriptEngineFactory jforthEngineFactory;
}

The annotation before the class declaration tells it to run with a custom runner, in this case, the Parameterized one. Then the @Parameters annotation tells the runner what method will return a collection of parameters that can then be passed to the constructor in sequence. In my case, each entry in the collection is an array with two String values which is what the constructor takes. If I want to add more test conditions, I can just add more values in that collection. Of course, you could also write the data() method so that it reads from a set of files rather than hard code the test conditions and then you will reach testing nirvana: complete separation between tested code, testing code and test data!

So there you have it again: size doesn't matter, it's what you put in your code that does. JUnit is a very small package that packs enormous power. Use it early, use it often. It even makes testing fun! Did I just say that?

Update

I added the ability to produce compiled scripts in my Forth engine today, in addition to straight evaluation. I added a new test method in ParameterizedJForthScriptEngineTest and all parameterised tests are now run through both methods without my having to do anything else. Better still, I can immediately see which test data work in one case but not the other.

Saturday, 3 November 2007

DHCP and Dynamic DNS on Ubuntu Server

The cunning plan

I have broadband Internet at home and to connect to the outside world I use a Wi-Fi ADSL router. This router also acts as a DHCP and DNS server. The DHCP function is what allows any machine I connect to my home network to obtain an IP address dynamically. The DNS function is what resolves names into IP addresses: for example, the DNS will tell you that www.blogger.com is really called blogger.l.google.com and its address is 72.14.221.191. This is all well and good: when I switch on one of my computers and let it connect to the network, it will get its IP address from the router, which will also tell it to use the same router for name service queries. The router itself knows to delegate requests to my ISP's DNS, so the new computer on the network has full access to the Internet. If I connect a second computer to the network, the same happens and both can access the Internet at the same time. Great! However, they don't know anything about each other. If I connect the two machines called nuuk and helsinki to my network, the DNS server is unable to tell either what the address of the other one is. This is because the DNS in the router is fairly basic and doesn't know to update its database when a new machine gets allocated an address by the DHCP service. Basically, we'd want a DHCP server that, when a new machine comes online, can communicate with the DNS server and tell it: "oy, I've got a new machine on the network, here's its name and the address I just allocated to it".

So what's a geek to do? Set up his own DHCP and DNS server obviously! And make them talk together. Luckily, I have an old workstation that I haven't used for many years and that I was planning to throw away: it's a bit dated but it should be exactly what I need for this. Then my recent experience with Ubuntu suggests that the recently released Ubuntu 7.10 Gutsy Gibbon Server Edition might be exactly what I need for the job. So let's get started.

The hardware

I said I had this dated workstation lying around. It's an 8-year-old piece of kit that was originally built to run Red Hat Linux. It wasn't very good at that because Linux wasn't quite ready for the desktop back then, but it was a wicked piece of kit for its day:

CPU
Dual 666 MHz Pentium III (Coppermine)
Memory
256 MB
Storage
10 GB SCSI hard disk: I got a SCSI controller rather than the cheaper IDE because I wanted to connect my SCSI negative scanner to it

There's plenty of horsepower for what we want to do, more than enough storage, and the memory should be fine if the operating system we install on it is lightweight enough. This is where the Server Edition of Ubuntu comes into play: it's meant for server hardware and doesn't ship the nice but memory-hungry desktop front-end; it's all command line driven. No glitz, just useful stuff. This means that even the latest version of Ubuntu Server should run comfortably within 256 MB and there should be no need to fall back on an older version of the operating system.

Preparation

Before we start, there is a bit of planning to do. As the new server will be the authority in terms of assigning IP addresses on the network, it needs to have its own address fixed. We also need to decide on a range of addresses to allocate through DHCP. As the router is currently using the address 192.168.1.254, it makes sense to leave it as it is. Here is the network configuration I am aiming for:

ADSL router
192.168.1.254
New DHCP and DNS server
192.168.1.253 and I'll call it szczecin
DHCP address range
From 192.168.1.100 to 192.168.1.200
Domain name
home: no need to have an official domain name and in fact it's probably better if it's not something that could be a valid Internet domain
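
As a quick sanity check on the plan, the fixed addresses must sit outside the dynamic range, otherwise the DHCP server could hand out an address that is already taken. A throwaway shell snippet (purely illustrative, using the last octets from the table above) confirms it:

```shell
# The statically assigned last octets (.253 for the server, .254 for
# the router) must fall outside the DHCP range .100 to .200.
range_lo=100
range_hi=200
for host in 253 254; do
  if [ "$host" -ge "$range_lo" ] && [ "$host" -le "$range_hi" ]; then
    echo "clash: .$host is inside the DHCP range"
  else
    echo "ok: .$host is outside the DHCP range"
  fi
done
```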

One thing I did before installing anything, and that you may want to do as well, is boot the new server with the desktop Ubuntu Live CD just to check that all the hardware is supported. The server edition is not a live CD, so it doesn't give you the opportunity to check that before going ahead.

Installing Ubuntu

Let's plug everything together first: a monitor, a keyboard (no need for a mouse as it's all command line), and the power and VGA cables. We might as well connect it to the network immediately too, so we'll need a network cable as well.

Start the machine, put the CD in the drive and follow the instructions. As usual with Ubuntu, it's quite easy. There are only a few things to be careful about:

  • When it asks how to partition the disk, choose the automatic option using the whole disk.
  • When it gets to network setup, cancel the DHCP client setup and configure the network manually. Give it the IP address chosen above and a name. When it asks for a DNS server, put its own address.
  • Don't forget to select DNS in the list of additional services you want to install. I would suggest you also install the SSH server so that you can connect to the machine remotely.

At the end of the installation, the machine pops the CD out and asks you to confirm a restart. No nice funky Ubuntu logo when it restarts, it's all text and you are faced with a command line login prompt. If you want to keep working from the console you can, or you can just connect from any other machine connected to the same network using SSH, provided you installed the SSH server obviously. So let's login to our new server.

Configuring a simple static DNS

The first step is to configure a simple static DNS service that is able to resolve names for the router and the new server. Ubuntu 7.10 comes with BIND 9.4.1 as a DNS server and I have used O'Reilly's DNS and BIND book as my reference. The copy I have is the 3rd edition rather than the 5th but it is more than adequate for my purpose.

The very first task is to make sure we have the necessary basics in /etc/hosts:

127.0.0.1       localhost
192.168.1.253   szczecin.home szczecin
192.168.1.254   gateway.home  gateway

Then we need to reproduce that in the BIND configuration. The first task is to find where the BIND configuration files are. On Ubuntu, you will find them in /etc/bind with a modular default set of files:

$ ls /etc/bind
db.0
db.127
db.255
db.empty
db.local
db.root
named.conf
named.conf.local
named.conf.options
rndc.key
zones.rfc1918

The main file, named.conf, is constructed in such a way that for most simple installations you should only have to change named.conf.local, which is exactly what we are going to do. But first, we need to create our database files. Let's start with the forward lookup file, which we will name db.home:

home.           IN SOA  szczecin.home. admin.email.address. (
                                1          ; serial
                                10800      ; refresh (3 hours)
                                3600       ; retry (1 hour)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
home.           IN NS   szczecin.home.

localhost.home. IN A    127.0.0.1

szczecin.home.  IN A    192.168.1.253
gateway.home.   IN A    192.168.1.254

The first entry specifies the Start Of Authority, identifying that our server is the best source of information for this zone. The admin.email.address. bit can be any admin email address you want to advertise, with the @ sign replaced by a dot. It doesn't have to be a valid address if you don't want it to be. The second entry identifies this machine as the name server for this zone. If you had multiple servers, you'd need one line per server. The following lines reflect what we have in /etc/hosts.
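
A word of warning before moving on: whenever you edit a zone file by hand, the serial number in the SOA record should increase, otherwise a reload (or any secondary server) may not pick up the change. I have simply used 1 above, but a common convention is a date-based serial of the form YYYYMMDDnn; a hypothetical shell sketch:

```shell
# Build a YYYYMMDDnn-style zone serial: today's date plus a two-digit
# revision counter, to be bumped by hand for further edits the same day.
serial="$(date +%Y%m%d)01"
echo "$serial"
```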

After that, we can work on the reverse lookup file, which will be named db.192.168.1:

1.168.192.in-addr.arpa.     IN SOA  szczecin.home. admin.email.address. (
                                1          ; serial
                                10800      ; refresh (3 hours)
                                3600       ; retry (1 hour)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
1.168.192.in-addr.arpa.     IN NS   szczecin.home.

253.1.168.192.in-addr.arpa. IN PTR  szczecin.home.
254.1.168.192.in-addr.arpa. IN PTR  gateway.home.

This file just defines the opposite mapping. The first two entries follow the same format as in the other file. Note how the IP addresses are written back to front. The last two entries use the PTR record type rather than the A record type. We now need to declare those two database files in the BIND configuration. To do this, we just add the following to the end of the named.conf.local file:

zone "home" in {
        type master;
        file "/etc/bind/db.home";
};

zone "1.168.192.in-addr.arpa" in {
        type master;
        file "/etc/bind/db.192.168.1";
};
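
If the back-to-front notation makes your head spin, note that the mapping from an address to the owner name of its PTR record is purely mechanical: reverse the four octets and append in-addr.arpa. A small illustrative shell helper (not needed for the setup itself) shows the transformation:

```shell
# Turn a dotted-quad IPv4 address into the owner name of its PTR
# record, e.g. 192.168.1.253 becomes 253.1.168.192.in-addr.arpa.
reverse_ptr() {
  echo "$1" | awk -F. '{ printf "%s.%s.%s.%s.in-addr.arpa.\n", $4, $3, $2, $1 }'
}

reverse_ptr 192.168.1.253
reverse_ptr 192.168.1.254
```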

Note the semi-colons all over the place in the file: BIND doesn't like it if you forget them. We just need to restart the DNS server for it to pick up the new configuration:

$ sudo /etc/init.d/bind9 restart

If it doesn't start properly, the best way to identify what's wrong is to have a look at the system log files. On Ubuntu, the DNS messages will be in /var/log/daemon.log. One last thing to do before we test our setup is to update the local resolver configuration by modifying /etc/resolv.conf. Remove all lines that start with nameserver: with none present, the local resolver automatically sends requests to the local BIND instance, irrespective of the IP address of the machine. We should end up with a file that contains a single line:

domain home

And we can now use nslookup to verify that it's all working:

$ nslookup szczecin
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   szczecin.home
Address: 192.168.1.253

$ nslookup szczecin.home
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   szczecin.home
Address: 192.168.1.253

$ nslookup gateway
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   gateway.home
Address: 192.168.1.254

$ nslookup localhost
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   localhost.home
Address: 127.0.0.1

Installing the DHCP server

My reference for installing the DHCP server was an excellent article aimed at the Debian distribution by Adam Trickett. As Ubuntu is based on Debian, I didn't have much to change. In its default state, an Ubuntu installation doesn't include a DHCP server so we need to add it:

$ sudo apt-get install dhcp3-server

This will install the ISC DHCP server. It will ask you to insert the Ubuntu CD in the drive, so do that. It will then attempt to start the new DHCP server and fail, complaining that the configuration is incorrect, which is to be expected. Configuration files for this server can be found in /etc/dhcp3 and the one we are interested in is dhcpd.conf. Move the default version out of the way by renaming it and start anew with an empty file. Here is what I have on my system, based on Adam's article; the important lines are the ddns- ones, which tell the DHCP server to update the DNS, and the include, which tells it what key file to use.

# Basic stuff to name the server and switch on updating
server-identifier       192.168.1.253;
ddns-updates            on;
ddns-update-style       interim;
ddns-domainname         "home.";
ddns-rev-domainname     "in-addr.arpa.";
# Ignore Windows FQDN updates
ignore                  client-updates;

# Include the key so that DHCP can authenticate itself to BIND9
include                 "/etc/bind/rndc.key";

# This is the communication zone
zone home. {
        primary 127.0.0.1;
        key rndc-key;
}

# Normal DHCP stuff
option domain-name              "home.";
option domain-name-servers      192.168.1.253;
option ip-forwarding            off;

default-lease-time              600;
max-lease-time                  7200;

# Tell the server it is authoritative on that subnet (essential)
authoritative;

subnet 192.168.1.0 netmask 255.255.255.0 {
        range                           192.168.1.100 192.168.1.200;
        option broadcast-address        192.168.1.255;
        option routers                  192.168.1.254;
        allow                           unknown-clients;

        zone 1.168.192.in-addr.arpa. {
                primary 192.168.1.253;
                key "rndc-key";
        }

        zone localdomain. {
                primary 192.168.1.253;
                key "rndc-key";
        }
}
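
As an aside, dhcpd also supports fixed address reservations keyed on a client's MAC address, handy if you later decide a machine should always get the same address without configuring it statically. A hypothetical reservation for nuuk (the MAC address is the one that appears in the logs later on; the fixed address should sit outside the dynamic range) would look like this, placed inside the subnet block:

```
        # Hypothetical reservation: always give nuuk this address
        host nuuk {
                hardware ethernet 00:12:f0:1e:f4:79;
                fixed-address 192.168.1.50;
        }
```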

Now that we've done that, we need to modify the DNS configuration so that it accepts the updates, which means heading back to /etc/bind. But first, a quick note on the /etc/bind/rndc.key file. This file is automatically created when you install BIND on Ubuntu and should be fine for your installation. It is a key file that authenticates the DHCP server to the DNS server, so that only the DHCP server is allowed to send updates: you don't want random people to be able to update your DNS database. While we are at it, let's have a look at this key file:

key "rndc-key" {
        algorithm hmac-md5;
        secret "some base 64 encoded secret key";
};

Note the name of the key on the first line: it may vary from one distribution to another, so make a note of it. Then we need to add the control information to the DNS configuration. Rather than update named.conf, I decided to add the relevant code to the end of named.conf.options: it feels like the right place to put it, but I suspect it doesn't really matter. So add this to the end of the file:

// allow localhost to perform updates
controls {
        inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; };
};

Then we need to modify the zone definitions in the named.conf.local file. Here is what it looks like now; the additions are the allow-update and notify lines in each zone, plus the include at the end:

zone "home" in {
        type master;
        file "/etc/bind/db.home";
        allow-update { key "rndc-key"; };
        notify yes;
};

zone "1.168.192.in-addr.arpa" in {
        type master;
        file "/etc/bind/db.192.168.1";
        allow-update { key "rndc-key"; };
        notify yes;
};

include "/etc/bind/rndc.key";

We're nearly there. The changes we just made mean two things: the DNS server needs to be able to update the content of the /etc/bind directory, and the DHCP server needs to be able to read the key file /etc/bind/rndc.key. By default on Ubuntu, this won't work as the permissions on those files are fairly stringent, so let's change them. As the BIND configuration directory belongs to root:bind, we just need to give write access to the group for BIND to be able to write to it. To give the DHCP server access to the key file, the right thing to do would be to add the dhcpd user to the bind group, but we can also make the file readable by everybody. Yes, this is less secure, but it will be fine for a home installation.

$ sudo chmod g+w /etc/bind

$ sudo chmod +r /etc/bind/rndc.key

Now we just need to start DHCP and restart DNS.

$ sudo /etc/init.d/dhcp3-server start

$ sudo /etc/init.d/bind9 restart

Error messages will be in the same place as before if the services don't start properly. If all starts as expected, it's now time to go to the administration interface of the router and disable its DHCP server. We should not need it anymore.

Booting the clients

The proof of the pudding is in the eating and the proof of installing a server is in starting a number of clients that use the service. The first machine I tried was my Ubuntu laptop called nuuk. To see what happens, tail /var/log/daemon.log on the server. You should see something like the following appear:

Nov  3 15:14:02 szczecin dhcpd: DHCPDISCOVER from 00:12:f0:1e:f4:79 via eth0
Nov  3 15:14:03 szczecin dhcpd: DHCPOFFER on 192.168.1.102 to 00:12:f0:1e:f4:79
(nuuk) via eth0
Nov  3 15:14:03 szczecin named[4771]: client 127.0.0.1#32773: updating zone
'home/IN': adding an RR at 'nuuk.home' A
Nov  3 15:14:03 szczecin named[4771]: client 127.0.0.1#32773: updating zone
'home/IN': adding an RR at 'nuuk.home' TXT
Nov  3 15:14:03 szczecin dhcpd: Added new forward map from nuuk.home. to
192.168.1.102
Nov  3 15:14:03 szczecin named[4771]: client 192.168.1.253#32773: updating zone
'1.168.192.in-addr.arpa/IN': deleting rrset at '102.1.168.192.in-addr.arpa' PTR
Nov  3 15:14:03 szczecin named[4771]: client 192.168.1.253#32773: updating zone
'1.168.192.in-addr.arpa/IN': adding an RR at '102.1.168.192.in-addr.arpa' PTR
Nov  3 15:14:03 szczecin dhcpd: added reverse map from 102.1.168.192.in-addr.arpa.
to nuuk.home.
Nov  3 15:14:03 szczecin dhcpd: Wrote 4 leases to leases file.
Nov  3 15:14:03 szczecin dhcpd: DHCPREQUEST for 192.168.1.102 (192.168.1.253) from
00:12:f0:1e:f4:79 (nuuk) via eth0
Nov  3 15:14:03 szczecin dhcpd: DHCPACK on 192.168.1.102 to 00:12:f0:1e:f4:79 (nuuk)
via eth0

If you don't see the message about updating the forward and reverse maps, it may be that your client machine is not configured to send its name to the DHCP server. For this to work on Ubuntu, you should have the following line in /etc/dhcp3/dhclient.conf:

send host-name "<hostname>";
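
The <hostname> placeholder stands for the client machine's own name. If you'd rather not hard-code it, the directive can be generated from the machine's actual name; a small illustrative sketch:

```shell
# Print the dhclient.conf directive with this machine's hostname
# (as reported by uname -n) in place of the <hostname> placeholder.
printf 'send host-name "%s";\n' "$(uname -n)"
```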

Next on the list is my Apple PowerMac G5 called helsinki. A simple restart and it works like a charm: I can ping helsinki from nuuk or szczecin and the other way round, all machines can access the Internet, great! While we're talking about computer names, if you don't know how to change a machine's name under OS X, here's how.

The next test is with my work Windows XP laptop. All goes well: the Windows box gets an IP address and I can ping it from the other machines. However, I can't ping from the Windows box to the other ones unless I use the fully qualified domain name: that is, I can ping nuuk.home but not nuuk. It looks like Windows ignores the domain information sent by DHCP. This is not a major problem as everything else works fine. I vaguely remember there being a network setting on Windows that you have to change; I'll see if I can remember where it was.

The final test is to start my Solaris Express laptop called mariehamn. It gets its IP address fine but doesn't seem to send its name to the server, so it can't be pinged by name. Everything else works though: it can ping all of the other machines, get to the Internet, etc. I suspect I need to find the equivalent of the /etc/dhcp3/dhclient.conf file on Solaris and change it so that it sends its name. It's probably hidden somewhere in the Network Auto-Magic configuration.

Conclusion

That was a bit convoluted, but the excellent resources that are the O'Reilly DNS and BIND book and Adam's article made it significantly easier. Real system administrators would say it was a doodle and made way too easy by Ubuntu. I've learnt useful stuff on the way and I now have a good use for this old workstation. Speaking of which, it hasn't really been breaking a sweat so far: it's been close to 100% idle all the time, it has 114 MB of RAM free out of 256 and the hard disk has seen virtually no activity. In other words, my 8-year-old box is over-spec'ed for this and I could get it to do a lot more server tasks. Should I mention that the server version of some other recent operating systems that shall remain nameless would not even start on such a machine? No, I'll leave that debate for another day.

For those who are wondering what scheme I use to name my computers, it all comes from a previous job and all machines are named after cities of the world, with a Scandinavian and Baltic theme so far: Helsinki, Szczecin, Nuuk and Mariehamn.