Monday, 29 December 2008

Small Plastic Bag Madness

Following a push to eradicate the plastic bag from the United Kingdom (and London in particular), supermarkets have stopped giving out standard plastic carrier bags unless you ask for them. And even then, a lot of shops will now give you a small plastic bag in preference to a normal-sized one. On the face of it, this all sounds like a good idea as it consumes less plastic, but it only looks at the very beginning of the life of a plastic bag and assumes I will throw it away once I get home. Well, no, I don't throw my plastic bags away: I reuse them. There are tons of different ways I can reuse a standard-size supermarket plastic bag: as a liner for my kitchen bin, for instance. But what can I do with the small ones they now give away? Not much! So they pile up at home. It's even got to the point where I've started wondering whether I need to buy plastic bin liners for the kitchen bin, as I'm now running out of standard-size plastic bags!

So what could be the solution to this conundrum? Most people have two reasons to go to the supermarket: either to do a weekly food shop, in which case they will need large bags and are even quite likely these days to have their own bags, so no problem there; or they will just pop in to buy a few items that they realised they needed, in which case they may need a small bag. In that last case, it seems to me that the small plastic bag is not the right answer. A small (recycled) paper bag, similar to the ones you get in Pret or most other sandwich shops, would be perfect! When I get home, I can then pop the small paper bag in my green recycling box and be done with it: much nicer and easier than dealing with a small plastic bag that is difficult to recycle or reuse.

Stop this small plastic bag madness!

Canon and Linux, part 2

Five days ago, I sent a complaint email to Canon about their lack of support for Linux (or any operating system other than Microsoft Windows or Apple OS-X) when it comes to upgrading the firmware of my camera. I received this standard email reply today:

Dear Customer,

Thank you for your recent enquiry regarding your Canon product.

Your query has been sent to the appropriate group and is currently under investigation.

Yours sincerely,

Canon Support Centre

Should our answer not fully resolve your problem, please feel free to either re-submit a new query by clicking here, or alternatively call our support helpdesk at 08705 143 723 Monday to Friday from 9:00AM to 5:30PM stating the 7 digit reference number in the subject of this email.

We aim to answer queries as soon as possible. Responses on average take 2 to 3 business days (or the next working day if submitting on a weekend or public holiday).

If I read between the lines, it looks like the Christmas period was out of hours so the average of 2 to 3 business days probably starts now.

Wednesday, 24 December 2008

Canon and Linux

I've now been the proud owner of a Canon EOS 5D for a few years. I've never upgraded the firmware on it but tonight I thought I would investigate how complicated it was. I easily found the corresponding web page on the Canon web site. It seems easy enough: you get a .fir file that you put in the root directory of a newly formatted memory card and you then follow the upgrade procedure on the camera. All easy and simple. The only snag is that you get two versions of the firmware package: a self-extracting .exe file for Windows or a .dmg package file for Mac OS-X. Both those options are proprietary so they don't work on Linux. Therefore, there is no way I can upgrade my firmware using my laptop. I could use the Mac desktop at home but I am not home right now. Furthermore, it's not even a software problem, it's just a packaging problem: there is nothing in the .exe or the .dmg apart from the automated unpacking. It would be so easy for Canon to offer a simple .zip file that can be extracted on any operating system. By all means, provide the proprietary formats for people who will benefit from them, but also provide a generic format that can be used anywhere. So here is the complaint I registered on the Canon UK support site:

You offer firmware download on the Japanese site (http://web.canon.jp/imaging/eos5d/eos5d_firmware-e.html). Unfortunately, you only provide proprietary Windows and Mac packages. Using a Linux laptop, I cannot use those formats. If you were to offer a simple ZIP file for all other operating systems, it would enable all your users to update their firmware. As such, I feel treated like a second class citizen by Canon and it is very disappointing. I am happy to take complete responsibility for updating the firmware from a Linux machine but I would like to be given the option. Is there any way I can download a version of the latest firmware that can be extracted on Linux? If yes I would appreciate if you could provide me with the details of such a download.

We shall see if anything comes out of it. They promised to come back to me within 2 to 3 days. Then I went and filed a second complaint because their web site crashes when I try to log into it as a registered Canon customer but that's a whole different story.

Friday, 19 December 2008

Fractals with Octave: Classic Mandelbrot and Julia

After reading this article in FSM, I decided to have a go at doing fractal images with Octave. In the process, I discovered quite a bit about fractals so here is the result of those experiments. First I'll build some classic Mandelbrot and Julia fractal images.

Pre-Requisites

All the code below was created on Ubuntu using Octave. If you don't use Ubuntu, check with your distribution or follow the instructions on the Octave web site. If you have Ubuntu, you just need to install the Octave package. To do this, open a terminal and type the following at the command line:

$ sudo apt-get install octave

Alternatively, considering that Octave is meant to be compatible with MATLAB, it should be possible to adapt everything in this series to MATLAB.

Fractal Basics

Fractal is a term that encompasses a variety of mathematical constructions that all have one property in common: they are inherently recursive. This means that you can only ever produce an approximation of a fractal to a given level of precision, which is why their study lends itself so well to computational approximation methods, exactly what a computer is designed for.

Mandelbrot and Julia sets are generated as a result of studying the convergence properties of series in the complex plane. The series that is most commonly used is:

z_{n+1} = z_n^2 + c

This series converges or diverges for large values of n, depending on the values of z_0 and c. All well and good, but how does that help us produce cool fractal images with Octave? We're getting there.
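To make the divergence test concrete before we get to the Octave code, here is a minimal sketch in Python (my own illustration, not part of the original article; the function name `escape_time` is mine) of iterating the series for a single point c and recording when it escapes:

```python
# Escape-time iteration for one point c of the series z_{n+1} = z_n^2 + c.
# Returns the first n for which |z_n| > 2 (the series diverges), or 0 if
# the series is still bounded after niter iterations (treated as convergent).
def escape_time(c, z0=0, niter=64):
    z = z0
    for n in range(1, niter + 1):
        z = z * z + c
        if abs(z) > 2:
            return n
    return 0

print(escape_time(1))   # → 3 (0, 1, 2, 5: escapes after three steps)
print(escape_time(-1))  # → 0 (cycles -1, 0, -1, 0, ... and stays bounded)
```

The whole-image algorithm described later simply runs this test for every pixel at once using matrix operations.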

The Julia and Mandelbrot sets

The Julia sets (from Gaston Julia who first studied such series) show the behaviour of complex series when you vary the value of z0 in the complex plane, for a given value of c. But the question is then: what are interesting values of c?

Conversely, the Mandelbrot sets (from Benoît Mandelbrot who first used a computer to study such series) show the behaviour of the same series when you vary the value of c in the complex plane, for a given value of z0, the most classic example using z0=0.

Constructing an Image

So, if we map the pixels of our image to the complex plane, we can associate a value of c with each pixel and calculate values of z_n for z_0=0 in order to obtain a Mandelbrot image.

But what values of n are we interested in and what do we do with the value we calculate? As said above, we are interested in whether the series diverges or not so we could colour a pixel in white if it diverges and in black if it converges.

How do we know whether the series converges or diverges then? Some people much smarter than I am have proven that for a given polynomial series, there exists a value r such that if |z_n| > r for some n, then the series diverges. For the series given above, r = 2.

So now, we can identify for each point whether the series diverges or converges, so we can colour the corresponding pixel white or black. But most Mandelbrot images you see in the wild are not black and white, they are rather colourful: so how do they get all their funky colours? Each colour is associated with how quickly the series diverges. To obtain that effect, we will associate the value 0 with points that converge and a number greater than 0 with points that diverge. That number will be the lowest value of n for which |z_n| > r. Each value will then be mapped to a colour in a colour map to produce the final image.
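The whole-image version of this idea can be sketched in Python with NumPy (again my own translation, not from the article; the function name `escape_counts` is hypothetical), using the same iterate-and-mask trick the Octave code in the next section relies on:

```python
import numpy as np

# Whole-image escape-time computation, following the article's conventions:
# divergence radius r = 2, and value 0 for points still bounded after niter.
def escape_counts(cmin, cmax, hpx, vpx, niter=64):
    re = np.linspace(cmin.real, cmax.real, hpx)
    im = np.linspace(cmin.imag, cmax.imag, vpx)
    c = re[np.newaxis, :] + 1j * im[:, np.newaxis]  # vpx-by-hpx grid of c values
    z = np.zeros_like(c)
    m = np.zeros(c.shape, dtype=int)
    for _ in range(niter):
        mask = np.abs(z) < 2           # points that have not diverged yet
        m[mask] += 1                   # count one more iteration for them
        z[mask] = z[mask] ** 2 + c[mask]
    m[mask] = 0                        # still bounded at the end: mark convergent
    return m

counts = escape_counts(-2.1 + 1.05j, 0.7 - 1.05j, 64, 48)
print(counts.shape)  # → (48, 64)
```

Each entry of the returned matrix is the escape count for one pixel, ready to be fed to a colour map, which is exactly what the Octave functions below compute.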

The Code

After all this background, it's time to do some coding. But first, we need to ensure that Octave knows where to find the functions we will define. The way Octave works, it looks for function files in its path and expects each function to be stored in a file of the same name with the .m suffix. By default, the Octave path doesn't include any directory for which you have write access so what I did was to create an octave directory in my home directory as well as a .octaverc file that adds ~/octave to the path:

.octaverc

addpath("~/octave");

From now on, every .m file in the ~/octave directory will be found by Octave and recognised as a function file. You will need to restart Octave for this to take effect.

Now that the preamble is out of the way, we can start coding in earnest, so let's fire up a trusty file editor and create a file called mandelbrot.m in the ~/octave directory. In this file, we are going to define a mandelbrot function that takes four arguments: cmin, cmax, hpx and niter, corresponding respectively to the minimum and maximum values of c, the number of horizontal pixels in the image and the number of iterations:

function M=mandelbrot(cmin,cmax,hpx,niter)

The aim of this function will be to populate the matrix M with the colour values of the pixels. The first thing we do is calculate the vertical number of pixels so that the aspect ratio of the final picture is the same as the one given implicitly by cmin and cmax:

vpx=round(hpx*abs(imag(cmax-cmin)/real(cmax-cmin)));

We then initialise z, c and M. z and M are easy as they just need to be initialised to zeroes. However, note that the vertical dimension comes first because Octave indexes matrices by row first. c is a bit more complicated as it needs to be initialised to a matrix that is vpx by hpx in size, with real values spread along the horizontal axis and imaginary values spread along the vertical axis. The linspace function produces a vector of values evenly spread between the minimum and maximum across the given number of points. The meshgrid function then produces two real matrices out of those two vectors, which are then merged together into a complex matrix.

z=zeros(vpx,hpx);
[cRe,cIm]=meshgrid(linspace(real(cmin),real(cmax),hpx),
                   linspace(imag(cmin),imag(cmax),vpx));
c=cRe+i*cIm;
M=zeros(vpx,hpx);

The final part of the function is the main loop. Here we exploit Octave's full potential by using matrix calculations and a matrix mask. The mask is calculated so that it contains all the points that have not yet diverged, and the corresponding entries in the result matrix are incremented. Points that diverge are not incremented and are excluded from the next calculation. Finally, we set the remaining points, that is, the ones that haven't diverged after niter iterations, to 0.

for s=1:niter
  mask=abs(z)<2;
  M(mask)=M(mask)+1;
  z(mask)=z(mask).^2+c(mask);
endfor
M(mask)=0;

Putting it all together, here is the final file:

mandelbrot.m

function M=mandelbrot(cmin,cmax,hpx,niter)
  vpx=round(hpx*abs(imag(cmax-cmin)/real(cmax-cmin)));
  z=zeros(vpx,hpx);
  [cRe,cIm]=meshgrid(linspace(real(cmin),real(cmax),hpx),
                     linspace(imag(cmin),imag(cmax),vpx));
  c=cRe+i*cIm;
  M=zeros(vpx,hpx);
  for s=1:niter
    mask=abs(z)<2;
    M(mask)=M(mask)+1;
    z(mask)=z(mask).^2+c(mask);
  endfor
  M(mask)=0;
endfunction

So let's run it to see what comes out of it:

octave-3.0.1:1> Mc=mandelbrot(-2.1+1.05i,0.7-1.05i,640,64);
octave-3.0.1:2> imagesc(Mc)

Octave should then fire up Gnuplot and show a nice classic Mandelbrot set:

Classic Mandelbrot Set

Following this success, let's tackle the Julia set. The code should be very similar, with the exception that c should be a parameter of the function and z is the variable that needs initialisation using linspace and meshgrid. Furthermore, c doesn't need to be masked in the loop because it is a scalar value, not a matrix. So without further ado, here's the code for the ~/octave/julia.m file:

julia.m

function M=julia(zmin,zmax,hpx,niter,c)
  vpx=round(hpx*abs(imag(zmax-zmin)/real(zmax-zmin)));
  [zRe,zIm]=meshgrid(linspace(real(zmin),real(zmax),hpx),
                     linspace(imag(zmin),imag(zmax),vpx));
  z=zRe+i*zIm;
  M=zeros(vpx,hpx);
  for s=1:niter
    mask=abs(z)<2;
    M(mask)=M(mask)+1;
    z(mask)=z(mask).^2+c;
  endfor
  M(mask)=0;
endfunction

Let's choose a value for c that corresponds to a point at the limit of convergence in the Mandelbrot set, such as -0.75+0.2i, and call the julia function:

octave-3.0.1:3> Jc1=julia(-1.6+1.2i,1.6-1.2i,640,64,-0.75+0.2i);
octave-3.0.1:4> imagesc(Jc1)

Octave should then refresh the Gnuplot window and show the result:

Classic Julia Set for c=-0.75+0.2i

Classic Julia Set for c=-0.75+0.2i

Factorising Code

Considering both functions are so similar, it should be possible to factorise code between them and have a core Mandelbrot/Julia function called by two simpler functions. The major difference is in the initialisation before the loop, so can we factorise the loop? The only difference in the loop between the two functions is that in the mandelbrot function, c is masked because it is a matrix, whereas in the julia function it is not masked because it is a scalar. The solution is to transform c into a matrix of identical values in the julia function. We now have three files: mjcore.m, which contains the core code, and the refactored mandelbrot.m and julia.m. Here is the full listing:

mjcore.m

function M=mjcore(z,c,niter)
  M=zeros(size(z));
  for s=1:niter
    mask=abs(z)<2;
    M(mask)=M(mask)+1;
    z(mask)=z(mask).^2+c(mask);
  endfor
  M(mask)=0;
endfunction

mandelbrot.m

function M=mandelbrot(cmin,cmax,hpx,niter)
  vpx=round(hpx*abs(imag(cmax-cmin)/real(cmax-cmin)));
  z=zeros(vpx,hpx);
  [cRe,cIm]=meshgrid(linspace(real(cmin),real(cmax),hpx),
                     linspace(imag(cmin),imag(cmax),vpx));
  c=cRe+i*cIm;
  M=mjcore(z,c,niter);
endfunction

julia.m

function M=julia(zmin,zmax,hpx,niter,c)
  vpx=round(hpx*abs(imag(zmax-zmin)/real(zmax-zmin)));
  [zRe,zIm]=meshgrid(linspace(real(zmin),real(zmax),hpx),
                     linspace(imag(zmin),imag(zmax),vpx));
  z=zRe+i*zIm;
  cc=zeros(vpx,hpx)+c;
  M=mjcore(z,cc,niter);
endfunction

Next

In the next article of this series, More on the Classic Sets, we wander around the Mandelbrot and Julia sets and see what happens to the Mandelbrot set when we choose a value for z0 that is different from 0. I also demonstrate how to save images to disk and talk about colour maps.

Thursday, 18 December 2008

Wandering Ibex

After 6 weeks with Ubuntu 8.10, aka Intrepid Ibex, the change that has most affected the way I use my laptop is the new network manager. Connecting to a wireless network is easy and just works. It is generally fast to connect, definitely faster than with 8.04 aka Hardy Heron. It will immediately recognise networks it knows about and connect automatically. In particular, it is much better than Windows at connecting to a Wi-Fi network that does not advertise its SSID and recognising it later as a known network.

But the biggest benefit is the support for 3G modems out of the box, in particular the Huawei models available in the UK. When I plugged in my 3 USB modem for the first time, it recognised the device, asked me what country and what operator I was on, and that was all the configuration I had to go through: no software to install, instantly on. It also integrates seamlessly into the network manager so there's no flaky third party software to use every time I want to connect. I just have to plug the modem in, select it in the network manager drop down and hey presto, in a few seconds I am online, whether in a pub, on the train or anywhere I've got network coverage from 3 (which is sometimes a bit patchy, I have to admit). OK, there's one thing it doesn't do, which the Windows version does: it doesn't tell me how much of my monthly quota I've consumed. However, I very rarely download large files via the modem so I've never reached the limit: large files are what the (fast) home broadband is for. It would be more important for someone who uses a 3G modem more heavily than I do. Maybe that's a feature to ask for in the next version?

Friday, 31 October 2008

New Template

I was getting bored of this weblog's template. After all, I've been using it since I created it so it was time for a change. I decided to go with Blogger's standard TicTac Blue as it was designed by one of my favourite web designers, Dan Cederholm of SimpleBits fame. Of course, I'll probably end up customising it.

Vegetarian Pie

Seen on the Pies of the Day board at Eat today:

Vegetarian: Fisherman's Pie

Maybe Eat are run by French people?

Thursday, 30 October 2008

Intrepid Upgrade

Canonical released Ubuntu 8.10, code named Intrepid Ibex today. So obviously, I felt like I had to upgrade my Hardy Heron (8.04) T42 tonight, especially considering that Canonical have advertised this release as focused on the desktop and mobile computing so should be ideal for a laptop. As usual with recent Ubuntu releases, once the 1466 files my upgrade needed were downloaded, the rest of the process was a breeze and I am now writing this on a newly upgraded machine. Well done to Canonical for making it easy and idiot proof. So here are my first impressions, in no particular order:

  • The Gnome theme is slightly more streamlined and looks good.
  • I was expecting OpenOffice.org to be updated to v3.0 but that wasn't the case and Intrepid keeps v2.4. Hopefully, an upgrade to 3.0 will be available later.
  • There is a new Create a USB startup disk option in the System, Administration menu. I'll have to try that out!
  • There's a Recent Documents sub-menu in the Places menu. This will definitely come in useful.
  • The Shutdown button that used to open a dialog box has been replaced with a drop down menu that includes a Guest session option: this makes it easy to lend your machine for a few hours to a friend without worrying about your files or having to set up a special account.

Everything else looks like Hardy but I'm sure I'll discover new changes as I use the machine more.

Wednesday, 10 September 2008

No Big Bang Today

Thanks to The Register for reminding us that Today is not Hadron Collider Day and cutting through the hype and FUD generated by the rest of the press today. Yes, today CERN reached a significant milestone with the Large Hadron Collider by running the first beam of protons full circle, but no collisions are planned for the next few months yet. That's because to have a collision, you need two beams of protons: one going clockwise, the other going anti-clockwise. And even then, the world will not end because what the LHC will produce happens all the time in nature. The difference is that the LHC will do it under controlled conditions. If you are not convinced, you can always check that the world hasn't been destroyed by visiting this handy status page.

Thursday, 7 August 2008

Change Control

One of the biggest risks to any IT project is change. The more changes you get during a project, the more difficult it is to deliver it on time and on budget. Letting change happen willy-nilly is a bit like going to a bar and asking the barman for a white wine. Then, when he's about to pour it, you change your mind and ask for a red wine. And when he's about to pour the red wine, you change your mind again and ask for a beer: any sensible barman will then stop and ask you to make your mind up before going any further, potentially charging you if you changed your mind after the drink was poured. It's also a sure way to get served very slowly.

IT projects are the same: if you keep changing your mind, you will never deliver and if you change your mind too late in the process, it's going to cost you a lot. This is why a good Change Control Process is essential.

Now, the project I currently work on has had every single aspect of it changed over the past three weeks. I cannot name a single thing that is identical to what it was three weeks ago. We even changed the Change Control Process!

In fact, I'm being unfair: there is one thing that hasn't changed: we are still working out of the same floor in the same office. But that's because we're only meant to move offices in October.

Sunday, 27 July 2008

Update on Montignac

So it's been nearly two months now since I started trying to get rid of my beer belly. The last time I blogged about it, I was rather frustrated by a lack of progress.

But progress there is now! I just got on the scale and I am now 83.1 kg, that's 5.3 kg less than when I started. I can now tighten my belt further and indeed some of my trousers are now feeling a wee bit too large. 5.3 kg lost in 2 months is not amazing but there were a few parties during that time for which I made exceptions and I didn't really have to go out of my way. In fact, that's what's very good with the method: it's easy to follow and sustainable.

Saturday, 26 July 2008

Parameterised Testing with JUnit 4

Anybody who's ever written software more complex than the typical Hello, World! examples knows that software without bugs doesn't exist and that it needs to be tested. Most people who have ever done testing on programs written in the Java language have come across JUnit. Now, JUnit is a very small package. In fact, when you look at the number of classes provided you'd think it doesn't do anything useful. But don't be fooled, in this small amount of code lies extreme power and flexibility. Even more so since JUnit 4 that makes full use of the power of annotations, introduced in Java 5.0.

Typically, when using JUnit 3, you'd create one test case class per class you want to test and one test method per method you want to test. The problem is, as soon as your method takes a parameter, it is likely that you will want to test your method with a variety of values for this parameter: normal and invalid values as well as border conditions. In the old days, you had two ways to do it:

  • Test the same method with several different parameters in a single test: it works but all your tests with different values are bundled into a single test when it comes to output. And if you have 100 error conditions and it fails on the second one, you don't know whether any of the remaining 98 work.
  • Create one method per parameter variation you want to try: you can then report each test condition individually but it can be difficult to tell when they are related to the same method. And it requires a stupid amount of extra code.

There must be a better way to do this. And JUnit 4 has the answer: parameterised tests. So how does it work? Let's take a real life example. I've been playing with the new script package in Java 6.0 to create a basic Forth language interpreter in Java. So far, I can do the four basic arithmetic operations and print stuff off the stack. That sort of code is the ideal candidate for parameterised testing as I'd want to test my eval method with a number of different scripts without having to write one single method for each test script. So, the JUnit test class I started from looked something like this:

<some more imports go here...>

import static org.junit.Assert.*;
import org.junit.Test;

public class JForthScriptEngineTest {

 @Before
 public void setUp() throws Exception {
  jforthEngineFactory = new JForthScriptEngineFactory();
  jforthEngine = jforthEngineFactory.getScriptEngine();
 }

 @After
 public void tearDown() throws Exception {
  jforthEngine = null;
  jforthEngineFactory = null;
 }

 @Test
 public void testEvalReaderScriptContext() throws ScriptException {
  // context
  String script = "2 3 + .";
  String expectedResult = "5 ";
  ScriptContext context = new SimpleScriptContext();
  StringWriter out = new StringWriter();
  context.setWriter(out);
  // script
  StringReader in = new StringReader(script);
  Object r = jforthEngine.eval(in, context);
  assertEquals("Unexpected script output", expectedResult,
   out.toString());
 }

 @Test
 public void testGetFactory() {
  ScriptEngine engine = new JForthScriptEngine();
  ScriptEngineFactory factory = engine.getFactory();
  assertEquals(JForthScriptEngineFactory.class, factory.getClass());
 }

 private ScriptEngine jforthEngine;
 
 private ScriptEngineFactory jforthEngineFactory;
}

My testEvalReaderScriptContext method was exactly the sort of thing that should be parameterised. The solution was to extract that method from this test class and create a new parameterised test class:

<some more imports go here...>

import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ParameterizedJForthScriptEngineTest {
 @Parameters
 public static Collection<String[]> data() {
  return Arrays.asList(new String[][] {
    { "2 3 + .", "5 " },
    { "2 3 * .", "6 " }
  }
  );
 }
 
 public ParameterizedJForthScriptEngineTest(
   String script, String expectedResult) {
  this.script = script;
  this.expectedResult = expectedResult;
 }

 @Before
 public void setUp() throws Exception {
  jforthEngineFactory = new JForthScriptEngineFactory();
  jforthEngine = jforthEngineFactory.getScriptEngine();
 }

 @After
 public void tearDown() throws Exception {
  jforthEngine = null;
  jforthEngineFactory = null;
 }

 @Test
 public void testEvalReaderScriptContext() throws ScriptException {
  // context
  ScriptContext context = new SimpleScriptContext();
  StringWriter out = new StringWriter();
  context.setWriter(out);
  // script
  StringReader in = new StringReader(script);
  Object r = jforthEngine.eval(in, context);
  assertEquals("Unexpected script output", expectedResult,
   out.toString());
 }

 private String script;
 
 private String expectedResult;

 private ScriptEngine jforthEngine;
 
 private ScriptEngineFactory jforthEngineFactory;
}

The annotation before the class declaration tells JUnit to run the class with a custom runner, in this case the Parameterized one. Then the @Parameters annotation tells the runner which method will return a collection of parameters that are then passed to the constructor in sequence. In my case, each entry in the collection is an array of two String values, which is what the constructor takes. If I want to add more test conditions, I can just add more values to that collection. Of course, you could also write the data() method so that it reads from a set of files rather than hard-coding the test conditions, and then you reach testing nirvana: complete separation between tested code, testing code and test data!

So there you have it again: size doesn't matter, it's what you put in your code that does. JUnit is a very small package that packs enormous power. Use it early, use it often. It even makes testing fun! Did I just say that?

Update

I added the ability to produce compiled scripts in my Forth engine today, in addition to straight evaluation. I added a new test method in ParameterizedJForthScriptEngineTest and all parameterised tests are now run through both methods without having to do anything. What's better, I can immediately see what test data work in one case but not the other.

Thursday, 17 July 2008

Getting One Back on Traffic Wardens

I was in France last week and heard this story on the radio. Apparently, a woman who was appearing in court for a large number of parking offences walked out free and escaped a fine because the law doesn't actually say that you have to prominently display the parking slip given by the meter. It therefore couldn't be proved that she hadn't paid the parking fee.

Monday, 7 July 2008

Peekaboo Nightmare, PIE to the Rescue

I've been working on the web site of a charity during my spare time for the past few months. Last night, I finally got round to uploading a prototype of a few of the revamped pages. Today I got an email from them saying in essence that they liked the prototype but there were a few quirks. The penny dropped immediately: I had been developing the prototype on my Mac and testing with Firefox, they were looking at it with IE on Windows. So I fired up IE on my work laptop and, lo and behold: my prototype was complete rubbish!

Having assessed the extent of the damage, I wrote back saying I'd work on it but if they downloaded Firefox they could see what it was meant to look like.

So when I got home, I started work on making it presentable in IE. It all took quite a lot of effort and swearing but I got there eventually. So here is what it took:

  • The first problem was with clearing floats: IE doesn't like it when the clearing element is empty. Luckily, Position is Everything came to the rescue with this handy article on clearing.
  • Then it appears that IE on Windows has another spec violation that causes all boxes to expand and enclose all content, regardless of any stated dimensions that may be smaller, which is incidentally one of the features the above article on clearing relies on. So, if I wanted to hide my clearing elements, not only did I need to set their height to zero but their font-size too. Maybe the best way will be to properly implement the self-clearing method explained in that same article.
  • And of course, with all those floats all over the place, I had to come across the peekaboo bug! Luckily, a few strategically placed width properties sorted it but only after a healthy bit of swearing.

Moral of the story: if you ever come across IE bugs and want to keep sane while resolving them, head for PIE before attempting any modification of your code. I used to know this, I just got reminded tonight.

Monday, 23 June 2008

Pixel Lapse

I recently came across this small application called pixel-lapse. The basic principle sounds interesting: record a webcam image one pixel at a time. From the photo gallery on the web site, it looks like you get the most interesting results when part of the image is static and part of it moves. So I decided to give it a go and here's the first shot:

Pixel Lapse image

Working at the computer

Interesting result indeed! Shame about the watermark though. I'd happily pay a few quid for a version without the watermark but it doesn't seem to be an option on the web site.

Thursday, 19 June 2008

Web Sharing and PHP on Mac OS-X Leopard

Mac OS-X comes with a version of the Apache web server that is configured to allow users of the system to publish their own web pages directly from the Sites directory in their home folder. This is of limited use for the average user but is just great for web developers who can test their work directly, using a real web server. However, there is a glitch: if you upgrade from OS-X Tiger (10.4) to Leopard (10.5), existing users will suddenly get an HTTP error 403 Forbidden when navigating to their web pages. This is because in Leopard, Apache's security is tightened by default.

Apple provide an article that describes how to re-enable access for those users. However, their version will still deny access to sub-directories. So I slightly adapted the shortname.conf file to make it more flexible. Here is my version:

<Directory "/Users/shortname/Sites/*">
Options Indexes MultiViews
AllowOverride FileInfo
Order allow,deny
Allow from all
</Directory>

The star (*) at the end of the directory name on the first line ensures that the rule applies not only to the ~/Sites directory but also all sub-directories. The FileInfo value for the AllowOverride option on the third line tells Apache to allow settings override in a .htaccess file in that directory or any sub-directory thus allowing much finer grained control.

After getting this to work, I found that although PHP5 is installed, it is not enabled in Leopard's Apache installation. Enabling it is very easy and very well explained at Foundation PHP.

There you go: a full-blown web server with PHP support is just what you need to locally test drive your beautiful web creations, and you don't even have to install any extra software.

Friday, 13 June 2008

Montignac, 12 days on

At the beginning of last week, I decided to follow the Montignac method to try and lose my beer belly. I lost just over 1kg in the first week but realised that, as I was buying my lunch from shops surrounding the office, I had no way to know the exact list of ingredients that went into my food and therefore no way to check that I was really following the method. So I decided to sort out my own packed lunch to make sure I knew exactly what I was eating. This meant doing more comprehensive and careful food shopping at the weekend, which was a bit of a pain but hopefully it would be worth it. As a result, I lost another 200 grams on the first day but regained them over the next few days. I finish the week today at the same weight as I started it: 87.3kg. That's not very promising and quite frustrating. I'll keep trying for a few more weeks just in case my body needs a bit more time to adapt. After that, if it stays the same, I'll just go buy an ice cream and dump my Montignac books in the (recycling) bin.

Monday, 2 June 2008

Getting Rid of the Beer Belly

Since I moved to the UK, I've been steadily acquiring a beer belly. Even without my mum telling me so (which she does every time she sees me), I've wanted to lose it for some time. Now the problem is that I've never been keen on weight loss diets because most of them are just unsustainable in the long term as they require you to give up entire food groups. But lo and behold, I was recently introduced to the Montignac method and I like his approach. I mean, someone who includes unpasteurised cheese, foie gras, dark chocolate and wine in a weight control diet (note: weight control, not crash weight loss) has to be a genius!

The whole concept behind this diet is the Glycemic Index of food ingredients. So yes, it is related to the Atkins and GI diets but is done in a way that seems more sensible to me. And, as Michel Montignac is French, he bases everything on eating good food and enjoying it, which makes it that much easier to follow. Having said that, it will probably not be all plain sailing: I'm not sure how difficult it will be to cut down on stuff like beer, potatoes and bread. On the other hand, it will get me to cook more and experiment with food, which can only be a good thing.

To follow progress, I'll try to weigh myself every day. The starting position today was 88.4kg, for a BMI of 27.6, which is squarely in the 25–30 Overweight bracket.
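For reference, BMI is simply weight in kilograms divided by the square of height in metres. A quick Python sketch (the 1.79m height is back-calculated from the figures above, not something I stated):

```python
def bmi(weight_kg, height_m):
    # Body Mass Index: weight in kilograms over height in metres squared.
    return weight_kg / height_m ** 2

# The 1.79m height is inferred from the 88.4kg / 27.6 BMI figures above:
print(round(bmi(88.4, 1.79), 1))  # 27.6
```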

Stay tuned to see if I can get myself into shape!

Obviously, this post is completely unrelated to the previous one.

In The Beginning

I received this story by email from a friend and it made me laugh out loud so I thought I had to share it. Enjoy!

In the beginning God covered the earth with broccoli, cauliflower and spinach, with green, yellow and red vegetables of all kinds so Man and Woman would live long and healthy lives.

Then using God's bountiful gifts, Satan created Dairy Ice Cream and Magnums. And Satan said: You want hot fudge with that? And Man said: Yes! And Woman said: I'll have one too with chocolate chips. And lo they gained 10 pounds.

And God created the healthy yoghurt that woman might keep the figure that man found so fair.

And Satan brought forth white flour from the wheat and sugar from the cane and combined them. And Woman went from size 12 to size 14.

So God said: Try my fresh green salad. And Satan presented Blue Cheese dressing and garlic croutons on the side. And Man and Woman unfastened their belts following the repast.

God then said I have sent you healthy vegetables and olive oil in which to cook them.

And Satan brought forth deep fried coconut king prawns, butter-dipped lobster chunks and chicken fried steak, so big it needed its own platter, and Man's cholesterol went through the roof.

Then God brought forth the potato: naturally low in fat and brimming with potassium and good nutrition.

Then Satan peeled off the healthy skin and sliced the starchy centre into chips and deep-fried them in animal fats adding copious quantities of salt. And Man put on more pounds.

God then brought forth running shoes so that his Children might lose those extra pounds.

And Satan came forth with a cable TV with remote control so Man would not have to toil changing the channels. And Man and Woman laughed and cried before the flickering light and started wearing stretch jogging suits.

Then God gave lean beef so that Man might consume fewer calories and still satisfy his appetite.

And Satan created McDonald's and the 99p double cheeseburger. Then Satan said You want fries with that? and Man replied: Yes, and super size 'em. And Satan said: It is good. And Man and Woman went into cardiac arrest.

God sighed... and created quadruple by-pass surgery.

And then... Satan chuckled and created the NHS.

For those who don't live in the UK, the NHS is the National Health Service, the subject of many a joke and disaster story.

Wednesday, 12 March 2008

Making your Business Profitable: Rule #1

Okay, so I'm not a business guru, I'm a geek. But setting up my own limited company to operate as a contractor has taught me a few tricks about business and how to make it profitable. So, going back to basics, how do you make a business profitable? You ensure that there is more money coming in than going out. The money going out part is easy: it's all the bills you have to pay and it's a bit like your personal finances: you've got to make sure you don't overspend. The tricky part is the money coming in because it's different from what you're used to as an employee: the amount that comes in can vary widely and you never know when it will come in. So that's the area to concentrate on. As a result, here is, in my (not very experienced) opinion, the most important rule of running a business:

Make it as easy as possible for the money to come in.

At that point, you may be thinking about marketing, sales and all that sort of thing. But before that, think about the plain boring basics. There are two things you need to get right from the start:

  • Tell your customers how much they owe you,
  • Make it easy for your customers to pay you.

Well, yes, the vast majority of your customers will gladly pay you if you tell them how much they owe you and make it easy for them to pay. So how does that work in real life?

Tell your customers how much they owe you

If your business is retail, this should be easy: just tell the customer when they buy something. In other businesses, like services, this is usually done by invoicing your customer. An invoice can be quite complex, especially when you start taking VAT into account. Some companies spend millions of pounds on invoicing systems. The reason is that the quicker you can invoice a customer, the quicker they will pay you. So spend a bit of time getting it right so that it's easy to do. In my case, I made sure I had a number of document templates for this and that they did most of the work for me. Nowadays, I can just open one of those templates, fill in the number of days worked and the rate per day, and it does the rest for me: calculate the VAT and all total amounts. The result is that it takes me about 5 minutes to produce an invoice. Granted, my business model is very simple but it still applies whatever your model is.
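As an illustration only (the day rate and the 17.5% VAT rate then in force in the UK are assumptions, not figures from my actual templates), the arithmetic behind such a template boils down to:

```python
# Illustrative sketch of the invoice arithmetic; the day rate and the
# 17.5% VAT rate are assumptions, not my real figures.
VAT_RATE = 0.175

def invoice_totals(days_worked, day_rate):
    net = days_worked * day_rate
    vat = net * VAT_RATE
    return net, vat, net + vat

net, vat, total = invoice_totals(20, 400.0)
print(f"Net: {net:.2f}  VAT: {vat:.2f}  Total: {total:.2f}")
# Net: 8000.00  VAT: 1400.00  Total: 9400.00
```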

Make it easy for your customers to pay you

Think about how your customers may want to pay you and make it easy for them to do so. Rather than theorise on what you could do, I'll give you a simple example.

Tonight, a colleague and I went out for dinner at a Chinese restaurant round the corner. We had a very good meal and were served by nice and helpful staff. We couldn't ask for more. Then came the time to pay the bill. The total amount for the two of us was £28.25 so we added a tip to take the total to £32. Because we file our expenses differently, we wanted to pay half each and get two receipts for £16. I wanted to pay by credit card while my colleague wanted to pay cash. At that point, it all went downhill. For whatever reason, what we wanted to do was impossible. After a good 10 minutes talking to the head waiter, he managed to split the bill provided we both paid cash. From the explanation we got, I understood that the main problem in paying the way we wanted was the till system, which just couldn't deal with it and didn't have a manual override.

The end result of that experience is that the problems in paying completely ruined the good experience we had had of the place, meaning that we will probably not go back there, even if everything else was great.

Invoicing your customers efficiently and making it easy for them to pay you may sound like boring mundane details but if you don't get them right, it doesn't matter how good the rest of your business is, money won't be coming in.

Wednesday, 5 March 2008

Leap Years and Microsoft

Following Leap Day last Friday and the confusion this seemed to generate, it looks like the problem is even worse than originally thought, especially where Microsoft products are concerned. And reading the comments on that Register article, it looks like they are not the only ones.

Via The Register.

Sunday, 2 March 2008

Asus Eee PC, Nokia E65 and WPA Wireless Networks

My home wireless network uses WPA for security. When I received my Asus Eee PC, it could not connect to my wireless network complaining about the shared key being too long, which I found odd because all other devices connected fine. Then when I got my Nokia E65, it couldn't connect either but wouldn't tell me why.

Then, over the weekend and prompted by a friend, I decided to fiddle with the Eee PC and get it to connect to the wireless network. So, in an attempt to humour the machine, I changed the pass phrase on my network to something shorter. And lo and behold, the Eee connected! It could then ping all the machines on the network except the router it was actually connected to. As a result, it couldn't route any traffic outside the network so couldn't get on the internet. Checking the routing tables, everything looked fine. It could resolve any name into an IP address so it had nothing to do with the DNS. I thought there could be something dodgy with DHCP so I re-configured the interface manually... and it worked! Going backwards, I set it back to DHCP and... it worked fine, even though I didn't change anything compared to the first attempt. That's IT for you: sometimes, doing the same thing twice fails the first time and works the second, for no apparent reason.

Now all chuffed by my success with the Eee, I decided to try again with the Nokia E65. I first spent a good 20 minutes trying to find how to set up access points and being defeated by the completely non-intuitive menu system of the S60 operating system that runs on those phones. I then turned to the user manual (yes, I know, RTFM) which only marginally helped because said manual is riddled with mistakes and the relevant menu option mentioned did not exist. So I spent another 10 minutes finding out where the manual was wrong. I eventually found what I wanted and set up my access point. Of course, in keeping with the experience with the Eee, it failed to connect first time. The main difference was that the E65 just closed the browser without explanation rather than tell me what went wrong. But trying a second time it worked fine.

So there you go: shortening the WPA secret key means that all my wireless enabled devices can now connect to my home network (apart from the Nintendo DS but that's because it doesn't support WPA at all). Some might say that a shorter key means my network is less secure. Yes and no: considering the network doesn't advertise its SSID in the first place, that's two pieces of information you need to guess. And then, because I knew I was shortening the pass phrase, I took more time to think of something that would be more difficult to guess.

PS: Nokia, could you please ditch the poor excuse for an operating system called S60 and give us something intuitive and user friendly instead? Now that you've acquired Trolltech, could we have nice Qtopia based phones please?

Tear It Down

It's illegal and it doesn't make sense: tear it down!

Friday, 29 February 2008

Leap Year Maths

Today is Leap Day, the 29th of February. Reading this article on The Register earlier and in particular the comments, it looks like the logic behind calculating whether a year is a Leap Year is still fuzzy for some. This is an essential calculation to get right in any software that deals with dates (that is, most of them), so here are all the gory details.
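For reference, the Gregorian rule (a year divisible by 4 is a leap year, except century years, which must also be divisible by 400) can be sketched in a few lines of Python:

```python
def is_leap_year(year):
    # Divisible by 4, except century years, which must also be divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2000), is_leap_year(1900), is_leap_year(2008))
# True False True
```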

Tuesday, 19 February 2008

The Dangers of Alcohol

I am in Chesterfield tonight, for work. I just had dinner with colleagues and came back to my hotel, picked up the key from reception and started walking back towards my room. On the way there, I met a couple who had had a few too many drinks. They were so pickled with booze that they first needed a few minutes to identify which room was theirs. Then they realised that inserting the key into the lock was way too challenging so they asked me for help. I obliged and the lady managed to splutter a drunken thank you. I have the feeling that tomorrow morning will be a difficult time for them both.

Monday, 11 February 2008

Vodafone Handset Warranty: The Plot Thickens

I posted a rant about Vodafone a few months ago and followed it up yesterday with an update. Someone called Edward Peter, who apparently works for Vodafone UK, commented on the update in a way that confuses matters further.

Let's recap: I had an 18 month Vodafone contract that came with a free handset, namely a Nokia 6234. Said handset started misbehaving 15 months into the contract. When I contacted Vodafone support at the time, I was told that because the warranty of the handset was 12 months and I wasn't due an upgrade for another 3 months, they couldn't replace the handset. This incident convinced me to leave Vodafone, which I did when my contract expired. I then got a phone call from a customer relationship manager who, when I explained why I was leaving, said that the support people had not advised me correctly and that all handsets had a 2 year warranty. The comment left on yesterday's entry states: If you get a handset directly from Vodafone its warranty matches the length of contract you're on; 12 months for 12 months, 18 for 18 and so on. So this leaves us with 3 options:

  • The warranty is 12 months, irrespective of the duration of the contract, which means you are toast if you have an 18- or 24-month contract and your phone packs up after 12 months;
  • The warranty is 24 months, irrespective of the duration of the contract, which means you're covered because, whatever the length of your contract, you are due an upgrade before or at the same time as the warranty expires, that is until Vodafone start offering 36-month contracts;
  • The warranty matches the length of your contract, which means you're covered as you are eligible for an upgrade as soon as the warranty expires.

The last option would be the most sensible one. The first one is a trap for customers and one of the reasons why I left Vodafone, even though it seems not to have been Vodafone's policy, just a misconception among support staff.

So if someone knows the answer, please tell me and tell the support guys at Vodafone: giving correct advice may help them retain customers. In the meantime, make sure you check that the length of the warranty for your handset matches the length of your contract, whoever you take your contract with.

Sunday, 10 February 2008

Vodafone Support

Stop press: Vodafone are not evil, it's just that some of their support staff don't have a clue. Now that's a surprise!

Following my rant about Vodafone contracts and phone warranties, I did not contact the Office of Fair Trading as I said I would. However, a few weeks ago, as my Vodafone contract was about to expire, I took up a contract with 3 and made sure the phone I was getting had a warranty that extended beyond the duration of the contract. I then requested my PAC code from Vodafone to move the number over to the new contract. Of course, that triggered someone at Vodafone to call me and ask why I was leaving. I explained that they had lost me as a customer three months ago after the phone warranty incident. It appears that what I had been told originally was incorrect: all Vodafone handsets have had a 2 year warranty for quite some time so I should have been able to get my handset replaced without a problem. I was then assured that the support manager would be made aware that some of the staff were wrongly advising customers so that this would not happen again. Ah well, too little too late Vodafone.

Funnily enough, getting the PAC code and transferring the number went without a glitch after that: perfect service both from Vodafone and 3.

For tales of PAC code woes, check Coofer Cat.

Subversion on Ubuntu

The Cunning Plan

One of the planned features for my new silent server was to offer a Subversion repository for code and documents that I could access via WebDAV. By doing this, I would be able to save important documents on the server and benefit from version control. Version control is essential for computer code but can also be very useful for other types of documents by allowing you to keep multiple versions, revert to an old version, etc. So without further ado, let's get into the nitty gritty of getting it to work on Ubuntu 7.10.

Installing the software packages

We need to install Subversion, Apache and the Subversion libraries for Apache. As this is all part of the standard Ubuntu distribution, it is extremely easy.

sudo apt-get install subversion apache2 libapache2-svn

And that's it, you have a working Subversion installation! It doesn't do very much yet so we need to create repositories for documents.

Subversion Repositories

Subversion is extremely flexible in the way it deals with files and directories. There are a number of standard repository layouts that are well explained in the book. Then there's the question of whether you'd rather put everything in the same repository or split it. This is also well explained in the book. My rule of thumb is to only create multiple repositories if you need to keep things completely separate, such as having one repository per customer, or if you need different settings. In my case, I need to store code and documents, the code being accessed via an IDE that completely supports Subversion, the documents being accessed as a network folder. Both usage scenarios require different settings in Apache so this is a typical case where several repositories are a good idea. However, there is no need to split the code or the document repositories further. You can create your repositories wherever you want in the file system; I chose to create them in a specific top level directory.

$ sudo mkdir /svn
$ sudo svnadmin create /svn/docs
$ sudo svnadmin create /svn/dev
$ sudo chown -R www-data:www-data /svn/*

The svnadmin command created a complete structure:

$ ls /svn/dev
conf  dav  db  format  hooks  locks  README.txt

The last command recursively changes the ownership of both repositories so that the Apache instance can read and write to them. This is essential for WebDAV to work.

Configuring Apache

Next, we need to configure Apache so that it can provide access to both repositories over WebDAV.

$ cd /etc/apache2/mods-available
$ sudo vi dav_svn.conf

In this file, I defined two Apache locations, one for each repository, with slightly different options. As this is a home installation, I don't need authentication. If you want to add authentication on top, check the How-To Geek article. The extra two options on the documents location are meant to enable network clients that don't support version control to store files. See Appendix C of the Subversion book for more details. Note that if you wanted to use several development repositories, such as one for each of your customers, you could replace the SVNPath option with SVNParentPath and point it to the parent directory.

<Location /dev>
  DAV svn
  SVNPath /svn/dev
</Location>

<Location /docs>
  DAV svn
  SVNPath /svn/docs
  SVNAutoversioning on
  ModMimeUsePathInfo on
</Location>

The last thing to do is to restart Apache:

$ sudo /etc/init.d/apache2 restart

Subversion Clients

Now that the Subversion server is working, it's time to connect to it using client software. For development, I use Eclipse, for which there is the Subclipse plugin. It all works as expected. For the documents repository, Apple OS-X has built-in support for WebDAV remote folders. In the Finder, select the menu Go > Connect to Server and type the folder's URL in the dialogue box that appears, in my case http://szczecin.home/docs. It's also possible to browse both repositories using a web browser, which is a good way to provide read-only access.

Conclusion

That was a very short introduction to Subversion on Ubuntu. There's a lot more to it than this, much of which is covered in the Subversion book. In particular, you can add authentication and SSL to the repositories once they are available through WebDAV. There are also a lot of options as far as Subversion clients are concerned and you can find free client software for every operating system you can think of.

Saturday, 26 January 2008

Playing with KML

The Idea

When I came back from holidays, I thought about creating a map of the journey in a way that I could share with friends. So I decided to try to do that using KML, the description language used by Google Earth. Google has tutorials and reference documentation about KML that got me started. In practice, what I wanted to do was very simple: lines showing the route and location pins showing the places I had visited on the way.

Document Structure

KML is a fairly straightforward XML dialect and the basic structure is very simple:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.2">
  <Document>
    Content goes here
  </Document>
</kml>

Route

For the route, I wanted lines that would roughly follow the route I had taken. To do this in KML is simple: add a Placemark tag containing a LineString tag that itself contains the coordinates of the different points on the line. So a simple straight line from London to Hamburg looks something like this:

<Placemark>
  <name>Outward flight</name>
  <LineString>
    <altitudeMode>clampToGround</altitudeMode>
    <coordinates> -0.1261270000000025,51.50896699999998
    9.994621999999991,53.55686600000001
    </coordinates>
  </LineString>
</Placemark>

The name tag is not essential but this is what will show in the side bar in Google Earth so it's better to have one. The clampToGround value in the altitudeMode tag tells Google Earth that each point on the line is on the ground so you don't have to specify the altitude in the coordinates. In the coordinates tag is a space separated list of coordinates. In this case, each point is specified by a longitude and a latitude separated by a comma, the altitude being implied. Longitude is specified with positive values East of the Greenwich meridian and negative values West of it. Latitude is specified with positive values North of the equator and negative values South of it.
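To illustrate the format, here is a small Python sketch (not part of my actual workflow) that builds the contents of a coordinates tag from a list of (longitude, latitude) pairs:

```python
# Sketch: build the contents of a KML <coordinates> tag from a list of
# (longitude, latitude) points. Longitude is positive east of Greenwich,
# latitude positive north of the equator; altitude is omitted because
# clampToGround is used.
def kml_coordinates(points):
    return " ".join(f"{lon},{lat}" for lon, lat in points)

route = [(-0.126127, 51.508967), (9.994622, 53.556866)]
print(kml_coordinates(route))
# -0.126127,51.508967 9.994622,53.556866
```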

That's good but another thing I wanted to do was specify different colours for the different types of transport I used during my holidays. KML has the ability to define styles that you can then apply to Placemark tags. This is done by adding a number of Style tags at the beginning of the document. You then have to specify the style using a styleUrl tag. Applying this to the simple line above, we get:

<Style id="planeJourney">
  <LineStyle>
    <color>ff00ff00</color>
    <width>4</width>
  </LineStyle>
</Style>


<Placemark>
  <name>Outward flight</name>
  <styleUrl>#planeJourney</styleUrl>
  <LineString>
    <altitudeMode>clampToGround</altitudeMode>
    <coordinates> -0.1261270000000025,51.50896699999998
    9.994621999999991,53.55686600000001
    </coordinates>
  </LineString>
</Placemark>

Note that the value for the color tag in the LineStyle is the concatenation of 4 hexadecimal bytes. The first byte is the Alpha value, that is, how opaque the colour is; in our case the value ff specifies completely opaque. The next three bytes represent the Blue, Green and Red values, in that order: KML reverses the usual RGB ordering, so ff00ff00 is a simple green.
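If you generate these colour values programmatically, a small helper like this (illustrative Python, hypothetical function name) avoids getting the byte order wrong:

```python
def kml_colour(red, green, blue, alpha=255):
    # KML colours are aabbggrr: alpha first, then blue, green, red,
    # each as a two-digit hex byte, the reverse of the usual RGB order.
    return f"{alpha:02x}{blue:02x}{green:02x}{red:02x}"

print(kml_colour(0, 255, 0))  # fully opaque green: ff00ff00
```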

Places

For places, I wanted a simple marker that showed more information when you clicked on it. This is also very simple in KML: a Placemark tag containing name, description and Point tags. The name needs to be a simple string but the description can contain a full blown HTML snippet inside a CDATA node so you can include images, links and all sorts of things. Note that the box that pops up when you select the place mark is quite small so don't overdo it. Here is an example for London:

<Placemark>
  <name>London</name>
  <description><![CDATA[<p><img src="some URL" /></p>
<p><a href="some URL">More photos...</a></p>]]></description>
  <Point>
    <coordinates>-0.1261270000000025,51.50896699999998,0</coordinates>
  </Point>
</Placemark>

Note that the coordinates here include the altitude as well as the longitude and latitude. You could also apply a style to the location place marks, in the same way as was done for the lines.

KML or KMZ

Once you've created your file, you need to save it with a .kml extension. You can then open it in Google Earth. When you're happy with it, you can also zip it and rename it with a .kmz extension: Google Earth will be able to load it as easily but the file will be smaller. Both files can also be used with Google Maps and can be shared online. So here is my complete holiday map built with KML.
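If you'd rather script the conversion than zip by hand, something like this Python sketch works (a KMZ is just a ZIP archive whose main file is, by convention, named doc.kml):

```python
import zipfile

# Illustrative sketch: compress a .kml file into a .kmz. A KMZ is a ZIP
# archive; by convention the main file inside it is named doc.kml.
def kml_to_kmz(kml_path, kmz_path):
    with zipfile.ZipFile(kmz_path, "w", zipfile.ZIP_DEFLATED) as kmz:
        kmz.write(kml_path, arcname="doc.kml")

# Usage (assuming holiday.kml exists):
# kml_to_kmz("holiday.kml", "holiday.kmz")
```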

Tips and Tricks

Getting the exact coordinates of a particular place can be cumbersome. To make it easy, just find the place in Google Earth, create a temporary place mark if there is none you can use, copy it with the Edit > Copy > Copy menu option and paste it in your text editor: you'll get the KML that defines the placemark, with exact coordinates.

The clampToGround option in the altitudeMode tag specifies that the points you define in the coordinates are at ground level. The line between two points will be straight, irrespective of what lies between said points. So if you have a mountain range in between, you will see your line disappear through the mountains. To correct this, you should insert intermediary points where the highest points are located. This is why on my map the flight between Izmir and Paris has intermediary points so that the line can go past the Alps without being broken.
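Generating those intermediary points by hand is tedious; a simple linear interpolation sketch (illustrative Python, approximate city coordinates) can produce evenly spaced points along the line:

```python
# Sketch: insert evenly-spaced intermediate points between two coordinates
# so a clampToGround line can follow the terrain instead of cutting through it.
def interpolate(start, end, steps):
    (lon1, lat1), (lon2, lat2) = start, end
    return [
        (lon1 + (lon2 - lon1) * i / steps, lat1 + (lat2 - lat1) * i / steps)
        for i in range(steps + 1)
    ]

# Points between Izmir and Paris (approximate coordinates, for illustration):
for lon, lat in interpolate((27.14, 38.42), (2.35, 48.86), 4):
    print(f"{lon},{lat}")
```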

If you want to do more complex stuff, be careful that Google Maps only supports a subset of KML. Of course, the whole shebang is supported by Google Earth.

Conclusion

KML is a nice and simple XML dialect to describe geographical data and share it online. It certainly beats writing postcards to show your friends and family where you've been and it doesn't get lost in the post.

Silent Server

A few months ago I set up a home server using an old box. Unfortunately, that old box died shortly afterwards. Furthermore, it was quite noisy, as it had originally been spec'ed as a high end workstation. So I went in search of a replacement, with a view to having a server that would be as silent and energy efficient as possible.

In this quest, I came across VIA, a Taiwanese company that specialises in low power x86 compatible processors and motherboards. You can get most of their hardware in the UK from mini-itx.com. But I'm not good at building a box from scratch so I really needed something already assembled. I found that at Tranquil PC, a small company based in Manchester. Here is the configuration I ordered from them:

  • An entry T2e chassis with DVD-R drive, colour black.
  • A VIA EN15000 motherboard. I chose this one as it is the only one they offer that comes with a Gigabit Ethernet port and the new 1.5GHz VIA C7 processor, which is one of the most power-efficient.
  • 1GB RAM. Experience tells me that this is more than I need but having extra RAM should enable the machine to take on more tasks in the future.
  • A 100GB 2.5" HDD. I could have gone for a larger 3.5" HDD but I don't currently need the extra space and laptop drives are significantly more energy efficient and silent than desktop ones.

I received my T2e a week or so later. Unfortunately, it had been damaged in transit and the DVD drive was not working properly anymore. The support people at Tranquil PC were very nice and very efficient and arranged for the machine to be collected and sent back to them. It came back a week later in full working order.

I replicated the original install that I had done on the old server. Having done it once, it went very smoothly, everything working first time. The obvious difference from the start was how little noise the T2e makes. In fact, the only audible noise came from the DVD drive spinning the installation CD. Otherwise, it's as if the machine was switched off. Impressive! And it looks really cool with the blue glow coming out of the front panel. As I have a plug-in energy meter, I decided to check how much power this machine drew. So, once the installation was finished and the machine was up and running, I restarted everything with the meter in between the wall socket and the PC's plug. Results:

  • Max power consumption when starting up: 30 Watts.
  • Standard power consumption once in operation: 25 Watts.

In other words, even without ACPI enabled, this machine consumes about the same as a small standard light bulb. Once I've enabled ACPI and tweaked it somewhat, I should be able to make it consume even less.

This proves that a server doesn't have to be a big power hungry and noisy box, it can be a small machine that is so silent you forget it's switched on. There are currently few suppliers for that sort of hardware but my guess is that it will become more common. In the meantime, head to Tranquil PC to find one of those.

Sunday, 6 January 2008

More Music, Less DRM

According to Wired, Sony BMG have decided to start offering their music online without DRM. This makes them the last of the major labels to do so.

This is good news for everybody as it means we will start seeing more music sold online without DRM. Then again, music is not limited to major labels, there are lots of Indie labels out there that produce great music. And there are more and more online outfits that enable artists to distribute their music. Check out Magnatune, Amie Street or Jamendo for a few examples.