Wednesday 31 August 2011

Boost: a retrospective (part 4) - the Curious

In part 2 and part 3 I talked about the best and the worst (in my opinion, naturally) of Boost. Here are some interesting things which don't fall into either of those categories.

Boost Units

Does it make you uneasy to use the same type - say float or double - to represent a whole bunch of things which are fundamentally different, like length, time, volume? Or related but measured differently, like millimeters, feet and miles? It has always made me vaguely uncomfortable, and of course it has led to some spectacular disasters (not mine!). But doing something about it would be a lot of work. Defining, say, a millimeter class would be easy, but handling all the legitimate operations involving more than one unit would just bury you.

Enter Boost Units, which has a completely generic understanding of all these things. All of the meta-arithmetic, like knowing that distance divided by time gives speed, is done at compile time using some very heavyweight template metaprogramming. But you don't need to know about that. You just declare d, t and v as furlongs, fortnights and furlongs_per_fortnight respectively, and dividing d by t gives you v. Simple. Define t2 in seconds and assign it to t, and the seconds will automagically be converted to fortnights (slightly more than a million to one - so one microfortnight is conveniently close to a second, a fact used in one obscure corner of DEC's VMS operating system).
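Here's a minimal sketch of what that looks like in practice, using the SI units that ship with the library rather than furlongs and fortnights (I won't vouch for the exact text it prints; the point is the compile-time dimension checking):

  #include <boost/units/systems/si.hpp>
  #include <boost/units/io.hpp>
  #include <iostream>

  int main()
  {
      using namespace boost::units;
      namespace si = boost::units::si;

      quantity<si::length>   d(100.0 * si::meters);   // a distance
      quantity<si::time>     t(9.58 * si::seconds);   // a time
      quantity<si::velocity> v = d / t;               // dimensions are checked at compile time

      // quantity<si::length> oops = d / t;           // this would be a compile error

      std::cout << v << std::endl;                    // prints the value together with its units
      return 0;
  }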

I put this in the "curious" category, rather than the "good", only because I've never had a chance to use it myself, being a systems kind of a person rather than say a mechanical engineer. But if I ever get round to rewriting my robotics code in C++, I will certainly use it.

Shared Pointer

Memory leaks are the bane of C programming, along with buffer overflow. They can be largely avoided in C++ by using auto_ptr to represent ownership of a structure. But this breaks down if there is not a single owner, for example if an object needs to be passed on to another function and then forgotten. It's just about guaranteed that a program that works this way will have leaks, even if they only occur in obscure error conditions.

Reference counts are a partial solution, but they just replace one problem with another, since now everyone has to be disciplined about adjusting them. And of course they're intrusive - the object has to have a reference count, and know to delete itself when the count drops to zero.

boost::shared_ptr tries to provide a solution to this, by keeping a behind-the-scenes reference count object. On the face of it, it looks perfect. If you are dealing with all-new code, and you keep solid discipline about never using a regular C-style pointer to the objects, maybe it even is perfect. I've used it for managing buffer pools.

I put this in the "curious" category because of what happens if you have to deal with a less structured environment. You can extract the raw pointer easily enough, to pass to a function that expects it. As long as that function never expects to take ownership, that's fine. Above all it must never delete the object, obviously. But there's a more subtle problem. If you have code which uses a mixture of raw pointers and shared_ptr's, there's a risk of creating a second shared_ptr from a raw pointer. And that is catastrophic, because now there are two reference counts, and whichever one goes to zero first will delete the object, leaving the other with a dangling reference and, microseconds or days later, a mysterious segfault. Guess how I know.
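Boiled down to its essentials, the trap looks like this - a deliberately broken sketch, not something to copy:

  #include <boost/shared_ptr.hpp>

  struct buffer { /* ... */ };

  void trouble()
  {
      buffer* raw = new buffer;

      boost::shared_ptr<buffer> p1(raw);   // first reference count takes ownership
      boost::shared_ptr<buffer> p2(raw);   // second, independent count: the latent bug

      // When p1 and p2 go out of scope, each count drops to zero and each
      // deletes the same buffer - a double delete, and sooner or later a segfault.
  }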

Proponents of the class would obviously argue that this is something you should simply never do, that you should have the discipline to avoid. But if you had perfect discipline, you wouldn't need the class in the first place - you could just remember at all times who controls the object, and be sure they delete it if they need to. So really all it has done is replace one way to shoot yourself in the foot with another.

Really the only solution to this is to keep the reference count in the object. Boost provides a class called intrusive_ptr which supports this, but I find the approach kind of backwards. I preferred to write my own base class for the referenced object. More on that in another post.

Sentries

The "sentry" is a programming paradigm for making sure that you undo everything you do, extending the "resource acquisition is initialisation" paradigm. The "do" part is done in the constructor of a sentry object, the "undo" part in its destructor. This ensures that the "undo" will always happen, even in the face of exceptions, return or break statements and so on. The classic example is locking, and indeed boost::thread provides a mutex::scoped_lock class which does exactly this.

But there are many other use cases, and the details of the do/undo operation vary quite a bit. For example, it's common in C to have a function that sets an attribute value, returning the previous value. The undo operation is to call the same function, with the saved value.

It's easy to write a sentry class for some particular case, like the mutex lock. It's not hard to write a generic sentry for a particular kind of do/undo - and indeed I have written a bunch of these.
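For example, here is a minimal sketch of a sentry for the set-and-restore pattern just described. The class and the names are mine, invented for the example - this isn't anything from Boost:

  // The do/undo is a C-style setter that returns the previous value.
  template <typename T, typename SetFn>
  class value_sentry
  {
  public:
      value_sentry(SetFn set, T new_value)
          : set_(set), saved_(set(new_value)) {}   // "do": set, remembering the old value
      ~value_sentry() { set_(saved_); }            // "undo": restore on any exit path
  private:
      SetFn set_;
      T saved_;
  };

  // Usage, assuming some C function "int set_verbosity(int)" that returns the old level:
  //   {
  //       value_sentry<int, int (*)(int)> s(&set_verbosity, 3);
  //       ...                    // runs at verbosity 3
  //   }                          // old level restored here, even if an exception is thrown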

But it seems to me that what would be ideal would be a generic sentry template class, that would figure out from the template arguments what kind of do/undo it is dealing with. This is beyond my own template metaprogramming skills, or at least beyond the learning investment I'm willing to make. But it does seem odd that it isn't part of Boost.

Lambda

There are often times when it would be convenient to have a small, anonymous function - for example, the ordering function passed to a sort operation. Java and Python both provide ways to do this; in computer science such a thing is called a "lambda function". The new version of the language, C++0x, also supports this.

But until that's available, C++ requires you to explicitly define a function, generally nowhere near the place where it's used. This just makes code harder to read and maintain.

boost::lambda is an ingenious attempt at solving the problem, pushing template metaprogramming to its utmost limits. The basic idea is to define a placeholder for a parameter. Then, simply using the placeholder implicitly declares a lambda function. Conventionally, the placeholders are "_1", "_2", etc. Simply writing "_1*2" generates a function that returns twice its argument - regardless of the type of the argument you supply later, as long as it supports multiplication of course. For trivial functions like this, Lambda works very nicely. (Although boost::bind also uses this placeholder syntax and, inexplicably, the two trip over each other. There's a workaround, which involves #defining an alternative placeholder syntax for Lambda, but it's odd that Boost let this slip by.)
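To give the flavour, here is a small self-contained sketch that prints each element of a vector doubled, without ever declaring a named function:

  #include <boost/lambda/lambda.hpp>
  #include <algorithm>
  #include <iostream>
  #include <vector>

  int main()
  {
      using namespace boost::lambda;

      std::vector<int> v;
      v.push_back(3); v.push_back(1); v.push_back(2);

      // "std::cout << _1 * 2" builds an anonymous function object on the spot;
      // for_each then calls it with each element as its first (and only) argument.
      std::for_each(v.begin(), v.end(), std::cout << _1 * 2 << ' ');
      std::cout << std::endl;

      return 0;
  }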

Unfortunately, C++ doesn't provide a clean syntactic way to do a lot of things that ought to be very natural, like calling overloaded functions. So, although the authors have put a huge effort into trying to make language features work, in the end Lambda is more of a curiosity than a general purpose facility. I've used it to construct arbitrary combinations of filter functions based on user-supplied criteria, for which it did the job nicely and much more simply than any alternative I could think of. But you need to find the right application.

Tuesday 30 August 2011

Worst Ever Dining Experiences #4: South Kensington, London

Before I bought my flat in London, we often used to stay at a boutique hotel in South Kensington called Number 16. It was a converted row of houses, all terribly English. Eventually it priced itself out of what we thought was reasonable, considering the small if pretty rooms, and we swapped our allegiance for the Royal Garden. There are leafy squares to the south of Old Brompton Road, and plenty of restaurants and useful shops within walking distance. South Kensington station is close by, and the South Ken Museums are a 10 minute walk away. All in all, a nice area.

We'd just arrived from somewhere. We were tired, and didn't want to hike across London. Somewhere, we saw something favourable about an Italian place, in one of the side streets by the tube station.

It seemed fine, typical of London neighbourhood Italian restaurants. I can't remember what we ate, probably something involving veal or pasta or both. What I do remember is the wine we ordered, a bottle of Chianti Classico - a reliable standby with Italian food. When the bottle came, it was a Chianti but not a Classico. This is more than just a matter of a name - the "Classico" suffix represents a 50% or more increase in value and in quality. But it tasted fine, and we weren't in a mood to make a fuss, so we drank it with our meal.

When the bill came, I noticed a line that said "Chianti Classico". I mentioned to the waiter that this wasn't what we'd had. The bottle was still there on the table for him to see. His reaction was a surprise, to say the least. He started screaming at us, accusing us of goodness knows what. I suppose he thought we were trying to get a cheap meal. We weren't of course, but we don't like paying for things we didn't get.

This went on for a while, and no doubt I yelled back at him, until eventually the owner came by. By this time I was certainly in no mood to pay for the "Classico" we hadn't had, and I told him so. He came straight out and accused me of trying not to pay. Eventually, getting fed up with the whole scene, I suggested that he call the police.

Suddenly everything changed. He became as nice as anything. "I give you dinner for nothing," he said, "Next time you in London, you come here, I give you wonderful meal, best wine." Clearly, a visit from the police was not at all his idea of how the evening should end. He no doubt had a kitchen full of illegal immigrants, and probably quite a few health code violations. Restaurants are an ideal way to launder illegal money, too (as are nail parlours, but don't ask how I know that). Who knows what else he was afraid of.

So everything was amicable, and amid profuse apologies we left. As you can imagine, we were quite bemused on our short walk back to the hotel.

The place was still there for quite a while afterwards, though it has gone now. Evidently the police took a while to catch up with it. Needless to say, we never did claim our free meal.


Thursday 25 August 2011

Boost: a retrospective (part 3) - the Bad and the Ugly

In part 2 I talked about my favorite elements of the Boost libraries. Boost is wonderful, but even so there are things that are not so good. These, the ones which (in my opinion) are best avoided, form the subject of this post.

Serialization

I wrote a while ago about my frustration with this library. It seemed the perfect solution to a data pickling need I had, until I discovered that it can't cope with polymorphism. It claims to, but if you try it crashes randomly, deep inside incomprehensible nested function calls. There may have been a solution, but life is just too short to figure it out. The reason for all this is that its authors decided to invent their very own subclassing scheme, completely orthogonal to the one that C++ uses. They may have had their reasons, but it's a complex subject and clearly they missed something.

Asio

If you've ever needed to do low-level socket I/O, you've probably been tempted to write an object wrapper around the C function calls and data structures. You may even have taken a look at Boost to see if they have already done this. In which case, you'll find that they have. I've certainly been down this path, and discovered Boost Asio at the end of it.

You will next discover that Asio is extremely complex, with all kinds of interacting classes that you have to be aware of and create. I spent a day or so trying to get my head around it, finally getting to the point where I felt safe putting fingers to keyboard. Then I discovered that despite all that complexity, it couldn't do what I needed. This was nothing fancy: just listen on a port, and create a thread to handle each TCP session as it arrives. It turns out Asio has a race condition - by design - which can result in missed connections. Some searching showed that there's a workaround, but it's complex, requires even more delving into Asio's internals, and isn't without its own problems anyway.
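Just to show how modest the requirement was, here it is in bare POSIX calls - a bare-bones sketch with all error handling omitted, and not the code of my own classes:

  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>
  #include <unistd.h>
  #include <pthread.h>
  #include <stdint.h>

  static void* handle_session(void* arg)
  {
      int fd = (int)(intptr_t)arg;
      // ... talk to the client on fd ...
      close(fd);
      return 0;
  }

  void serve(unsigned short port)
  {
      int listener = socket(AF_INET, SOCK_STREAM, 0);

      sockaddr_in addr = sockaddr_in();
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(port);
      bind(listener, (sockaddr*)&addr, sizeof(addr));
      listen(listener, 5);

      for (;;)
      {
          int fd = accept(listener, 0, 0);        // one TCP session arrives...
          if (fd < 0)
              continue;
          pthread_t tid;
          pthread_create(&tid, 0, handle_session, (void*)(intptr_t)fd);
          pthread_detach(tid);                    // ...one detached thread handles it
      }
  }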

I had a long meeting to attend, so I figured I'd print the documentation and peruse it during the meeting. Over 800 pages later, my meeting had finished anyway, but the printer still hadn't. At this point, I decided that anything which takes 800 pages to describe - for such a relatively simple function; this isn't Mathematica, after all (1465 pages) - just can't be worth the learning curve.

I wrote my own family of socket classes. It actually took me less time to write and debug than it did to print the Asio documentation, never mind read it! I've been very happily using them ever since. Probably, you will do the same, but if you'd like to use mine, you're welcome. You can find them here.

The Build System

Everyone knows Make. It's convoluted, nearly incomprehensible, and a syntactic nightmare, but everyone has used it and can bodge their way out of a tight corner if they need to.

But why use something everyone knows, when you can invent something unique of your own? Sadly, this is the path that Boost took. They have their own unique build system called Bjam. I'm sure it's very elegant compared to Make - it would take a huge effort not to be - but it's still very complex, and poorly documented too. In fairness, it does (mostly) "just work" if you need to build Boost from sources. But if for whatever reason you do need to get under the covers, woe betide you.

I discovered this when I needed to cross-build Boost for our embedded processor. This is always tricky because of the config stage, where the build system looks to see what capabilities the system has, where things are located and so on. For a cross-build, of course, you can't auto-discover this just by poking around at the system you're running on. That part went OK, though. However, editing the build files to pick up the right cross-compiler, cross-linker and so on was just impossible. I found quite a bit about it on the web, but never quite enough to make it work.

Fortunately, our hardware ran a complete Linux system and with a little fiddling we could just build it native on our box. But if you can't do this - and most embedded systems can't - then you can forget using Boost. Which is a shame.

Tuesday 23 August 2011

Boost: a retrospective (part 1)

My love affair with Boost started with my first, self-appointed programming task at Anagran, the fan controller for our box. I wanted a table of functions, corresponding to each of the temperature sensors. Some of these were parameterless, corresponding to unique items, while others were indexed by interface card number. I wanted to be able to put a "partly cooked" function object in the table, with the interface number frozen but other parameters to be supplied through the ultimate call. This is called a "partial function application" or "partial closure" in computer science.

STL provides C++ with some glimmerings of functional programming, with "mem_fun", "bind1st" and so on. It seemed like it ought to be possible to write something appropriate, but making it usefully generalized also seemed like a lot of work. Surely someone must have done this already!

Searching for it led me to Boost, "one of the most highly regarded and expertly designed C++ library projects in the world" as they modestly say at the top of the front page. It is however true. It's a huge collection of highly-generalized classes and functions for doing an amazingly large number of extremely useful things. It's an open-source project whose authors, while not anonymous, keep a very low profile. I can only assume they love a challenge (and have a lot of spare time), because they do some extremely tricky things, under the covers. But for the user, they're mostly very straightforward to use.

So over the last five years, I've discovered more and more that can be done with Boost. Although I've called this a "retrospective", I'm not planning to stop using it.

Boost makes extensive use of "template metaprogramming", which is a kind of compile-time computing. When C++ templates were invented, the idea was to allow simple compile-time parameterization of classes and functions, for example so you could write a "minimum" function to return the lowest of its arguments regardless of whether they were int, float, double or some user-defined class. As the concept evolved, it became possible to make very complex choices at compile time. In fact, you can write just about any program to produce its output directly from the compiler, without ever even running it, if you try hard enough. It's hard to get your head around, but fortunately you don't need to.
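The canonical party trick gives the flavour - a factorial computed entirely by the compiler:

  template <unsigned N>
  struct factorial { static const unsigned value = N * factorial<N - 1>::value; };

  template <>
  struct factorial<0> { static const unsigned value = 1; };

  // factorial<5>::value is 120, known at compile time - it can even be used
  // as an array bound or as another template argument.
  int lookup_table[factorial<5>::value];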

Function and Bind

These were the first Boost packages I discovered. Function defines a general, templatized function class. So you can define a variable as "function<int(foo*)>" and assign to it any suitable function. In particular, assign a member function of the foo class and all the right things will happen.

The Function class is useful, but it is the Bind class that really transforms things. You can take any function, bind some or all of the parameters to specific values, and leave the others (if any) to be supplied by a subsequent call to the bound object. This is exactly what I was looking for in my fan controller. For example, suppose you have a function "int foo::get_temperature(double)". Then you can write:

  function<int(double)> fn =
    bind(&foo::get_temperature, my_foo, _1);

to store a function which will apply its argument to the "my_foo" instance of foo, which you use for example as:

  printf("temperature at %f is %d\n", v, fn(v));

(Of course you shouldn't be using printf, you should be using boost::format, but that comes later). The "_1" is a placeholder, whose meaning is "take the first parameter of the final call, and put it here". Bind takes care of types, making sure that the actual parameter is (in this case) a double, or something that can be converted to it. If you want to, you can even apply bind to previously bound functions - though you might want to ask yourself why you're doing it.
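For the curious, the boost::format version of that printf line goes something like this (a minimal, self-contained sketch):

  #include <boost/format.hpp>
  #include <iostream>

  int main()
  {
      double v = 21.5;
      int temperature = 68;

      // printf-style directives, but type-safe and usable with anything streamable
      std::cout << boost::format("temperature at %f is %d\n") % v % temperature;
      return 0;
  }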

Function and bind are absolutely perfect, for example, for callback functions that need to keep hold of some context. In C you do it using void* arguments, which is unsafe and generally wretched. This can be avoided in C++ by defining a special-purpose class, but that requires the caller to know about it, which ties everybody's shoelaces together more than is healthy.
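For concreteness, here is roughly what that looks like with bind. The register_callback function and the fan_controller class are made up for the illustration - the point is that the context travels inside the bound object, with no void* in sight:

  #include <boost/bind.hpp>
  #include <boost/function.hpp>

  struct fan_controller
  {
      void on_temperature(int slot, double celsius);   // illustrative member function
  };

  // A hypothetical registration API that just stores the callback somewhere.
  void register_callback(const boost::function<void(double)>& cb);

  void setup(fan_controller& ctrl)
  {
      // Freeze the controller instance and the slot number now; the temperature
      // reading is supplied later, by whoever eventually invokes the callback.
      register_callback(boost::bind(&fan_controller::on_temperature, &ctrl, 3, _1));
  }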

The only problem with function/bind - which is true of any code that makes heavy use of templates - is that compiler errors become incredibly verbose and just about useless. A single mistake, such as getting a parameter type wrong, results in pages of messages, none of which gives you the slightest clue as to what you actually did wrong. The first time you compile a new chunk of code that makes extensive use of bind, you will typically get thousands of lines of errors, corresponding to just a handful of typos and the like. The trick is to find the message line that gives you the actual source line - which is buried in there somewhere - then just go stare at the line until you figure out for yourself what you did wrong. The rest of the messages can be summarized as "you did something wrong on this line".

Part 2: The Good (things I just wouldn't live without)
Part 3: The Bad and the Ugly

VirtualBox - virtually complete: part 1

My new Linux machine is almost complete now. It has been running for a couple of months. Since I lost my laptop along with the company I worked for, and haven't seen a reason to buy another one yet, the Linux machine has become my main computing platform for just about everything.

However there are some things which can only be done on Windows. One is iTunes - I only use it as a backup for my iPhone, but for that it is indispensable. Another is the package that updates the navigation data for the plane. Then there is expensive software I bought for Windows and don't plan to buy again - Mathematica, CorelDraw and so on. Not to mention my HP printer/scanner which has no Linux driver. So I've had two machines sitting next to each other, with a KVM switch to go back and forth when needed.

Of course the solution to this is obvious - virtual machines. I've been idly looking at VirtualBox, one of the open-source VM solutions, for a while. It took something odd to make me dig a bit deeper - the discovery that our electricity supplier (PG&E) has a web page where you can see your hourly consumption. It's fascinating to study, and one thing it showed is that my two desktop computers were accounting for about 60% of the house's background electricity usage. So, turn one of them off, instant 30% energy saving. (If only I could do something equally miraculous for the pool, which accounts for over half of our electricity).

I started by creating a second Linux system. That seemed less scary, though as it turned out it was harder than Windows. It's simple enough - tell VBox to create a virtual machine, then boot it with a Ubuntu CD in the drive and it pretty much goes all by itself. My plan for this VM was to use it as a web server for stuff I want to host at home, rather than through my web provider. It all went well, until I tried to set up a "shared folder".

In its pure form, the VM concept means that the VM runs in complete isolation in a bubble inside the real operating system. This is fine until you do actually want to share stuff. You can do it over the net, using FTP or whatever, but even that isn't "out of the box". By default, VBox creates VMs using internal NAT addresses, so there is no way to access the VM from anywhere else, including the host. That can be fixed with a couple of clicks, selecting "Bridged Adapter" in the "Attached to" drop-down, instead of "NAT". But still, it's a clunky way to do things.

So you can also create a "shared folder". This is just a regular directory on the host system, but it looks like a remote filesystem to the guest. It's easy enough to set up, but I just could not get it to work. I successfully mounted it just once. After that, attempts to mount it always failed with "no such device" (or some such). Even deleting it and creating a new one didn't work.

Finally I discovered that, because I'd ticked the "auto-mount" box, it was being automatically mounted in /media. Well, duh, I suppose.

With that done, my Linux VM was usable. Fortunately so, because it turned out that on my main (host) system Firefox had auto-updated itself to V6 - and the Flash player doesn't work any more. No more stock charts from Google, no more news video clips from the BBC. This seems to be a problem that a handful of people have run into, with no solution. So for now, I just use an older version of Firefox running inside the Linux VM whenever I need Flash.

The next step was to install a Windows VM. But this is already too long, so that will be for another time.


Saturday 13 August 2011

IAS - the PDP-11 Interactive Applications System


(This originally appeared on my web page).

DEC's approach to operating systems for the PDP-11 was anything but disciplined. New ones got invented every time some engineer or marketing person blinked. In the early days, there was a real-time kernel called RSX-11A, designed for memory-resident applications in what we now call embedded processors. Features got added to this rapidly - code bloat is nothing new. By the time it got to RSX-11D it had a complete disk-based file system, a program development environment, and support for every peripheral in the Small Computer Handbook (and there were plenty of them - peripherals on the PDP-11 obeyed the same strategic imperatives as operating systems - see above). At this time, a bright young engineer called Dave Cutler decided that enough was enough, and set out to create a small system that would do the same, which he called RSX-11M.

Meanwhile, the PDP-11 also had a timesharing system very loosely based on TOPS-10, called RSTS/E. A senior engineering manager, newly installed in Geneva, decided that it would be a smart move to develop a system that could do both real-time and timesharing, based on RSX-11D. It was specifically targeted at the planned PDP-11/70, which was a kind of super-11. Since he had newly moved to Europe (his name was David Stone, by the way) he gave the project to the European Software Engineering group that he had just invented in Reading, England. This was about the time that I joined DEC, and after a few misadventures I found myself assigned to the project.

The system was to be called IAS, which if I remember rightly stood for "Interactive Applications System". It added to the RSX-11D kernel a clever timesharing scheduler, a bunch of security features, and a new command language. These were the days of MCR, a command language which makes even the Unix shell look lucid. (To delete a file you typed "PIP file/D" for example). The then-boss of software decided we needed a Digital Command Language, which of course later became a feature of VMS, but IAS was the guinea-pig. In fact, all DCL commands were translated into the corresponding MCR and then fired off to the appropriate utility. The command interpreter that did this was thrown together in great haste, and remains to this day the nastiest piece of software I have ever encountered.

I had tremendous fun on IAS. Like V1 of any system, it lacked just about every feature anyone wanted, and they all had to be added for V2. It says something for the team that in fact they mostly got there in V3, and they mostly worked. The team by the way consisted of about six people - that's probably about the same number that Microsoft has doing quality control on the stupid paperclip in Office 97. My first job was to write the driver for the latest new disk, the RK06. It had about the same capacity as a couple of floppies; four of them would fit in a six-foot cabinet. I was duly sent on the course for writing device drivers, but on the first morning I finished reading the manual and by the end I had coded the driver. It did various nifty things that nobody had done before on the 11, like overlapped seeks, and ended up becoming the basis for all future IAS and RSX-11D drivers although I remained unhappy with a lot of it.

My next job was to write a new terminal driver. Despite what I said about the command interpreter, the old terminal driver was pretty special too. Support for new features and new hardware had been thrown in over several years, and it was impossible to figure out how it worked. One story that sticks in my mind: I changed it to suppress nulls on formatted output (because of a misfeature in the command interpreter). Thereafter, if you typed rapidly, it would drop the first character of every other line. I never figured out why, I just removed the fix.

The terminal driver was one of the most enjoyable bits of work I ever did. It was all table driven, and in fact was object-oriented 15 years before it became fashionable. Thus adding a new device just meant writing some standard routines and plugging them into the tables. It seems obvious now, but it was pretty revolutionary at the time! I cooperated with the guy who was writing the driver for the new VMS system, so when I invented "read with prompt" (mainly to make the output on hardcopy terminals look prettier) this found its way into VMS, and ten years later was used in a way that I certainly never thought of to double the performance of All-in-1.

All of this found its way into V2. But by then, Cutler had decided that RSX-11M was going to take over the world. Since he was at the heart of things in Maynard, and we were 3000 miles away in Reading, it was pretty easy for him to get the sales and support people to listen to him. IAS did get some very loyal customers, including Boeing and the US Navy, who stuck with it long after Digital had tried to kill it.

In fact, as the cost of developing and maintaining software soared, Digital did try to rein in the PDP-11 situation. As a result, we had to combine IAS and RSX-11D into a "unified product strategy". (I seem to have spent a lot of time over the years taking products that were never meant to be the same thing, and making it look as though they were).

IAS had many features that RSX-11M didn't, such as a proper timesharing scheduler. This led to RSX-11M+, which was RSX-11M with a bunch of features intended to match IAS. This was really a stretch for 11M, in complete conflict with the "size is the goal" philosophy which Cutler had made into a rubber stamp that appeared on all of the early 11M design documents. Nevertheless, M+ had the visibility in Maynard and IAS didn't, and got the development funding. This meant that several new features first appeared in M+ and then had to be retrofitted into IAS.

One of these was "PLAS" (I forget what it was supposed to stand for), which gave programs access to the memory management features and was a kind of do-it-yourself virtual memory. It fell to me to implement the linker support for this. Now the linker was Cutler's first ever piece of software at DEC. It was very clever; doing memory layout for a machine as constrained as this could never be easy. In addition, it supported PSECTS which were modeled on what the IBM/360 did for memory management, and of course found their way into VMS as well. Thus a memory section could be shared between overlays, or not, and could be marked for code or for data, and could be overlaid to support Fortran COMMON or not, and so on. There were about seven different PSECT attributes, and so over a hundred different ways that memory allocation could be handled. And overlays - who remembers those? The linker had a write-only language called ODL (Overlay Description Language), which allowed you to set up hugely complicated overlay structures. As the PDP-11 address space (64 kbytes!) become more and more of a constraint, ever-fancier overlay techniques were invented, and since the linker had to handle them all its own overlay structure was the most complex of the lot.

But by this time the writing was on the wall for the PDP-11 as a general-purpose machine. The VAX and VMS had been a huge success and all serious investment went on them, rightly so. Personally I moved on after we released V3, in about 1979, but IAS retained an engineering group for a few more years. I think DEC continued to support it, for the benefit of the handful of big customers (like the US Navy) who were still using it, up until 1988 or so.

The PDP-11 spawned several great operating systems, the most famous nowadays being Unix. But IAS had something the others never did, a unique ability to support both timesharing and real-time applications at the same time. A big 11/70 - which is to say a megabyte of memory and a few tens of megabytes of disk - could give decent timesharing support to 20 or 30 users, and there were people who ran it at the limit of 64 users and seemed happy with it. Try telling that to the youth of today!

Tuesday 9 August 2011

Python and Tkinter: wonderful

Having time on my hands at the moment, and no work commitments - that's another story though - I decided to start taking a new look at the robotics stuff I was playing with a year or so ago.

I'd written nearly all of the code to make a six-legged robot - a hexapod - walk with various different gaits and postures - the so-called inverse kinematics. It was in straight C, since I intended it to run on a little embedded CPU which had no support for C++, nor floating point for that matter. And I'd built a development environment, including visualisation for the leg movements, using Visual Studio.

Things have moved on since then, though. For one thing I've pretty much switched to Linux for my computing environment. For another, the Roboard has really become the obvious onboard computer - it is now available with Linux, and it offers a full-function x86 including floating point, in a tiny size that will fit in my fairly small hexapod. And since it supports full GCC, I can write the code in C++. The C code is just so cluttered - I can't for the life of me imagine why anyone would prefer to code in C. It's full of irrelevant details that make it hard to read and even harder to get it to work. So, a rewrite is called for.

The only problem is the GUI that I'd painfully created using the Visual Studio tools. Painful because there are 6 legs, and each has numerous parameters and state variables. I'd created the dialog box from hell. Every tiny change meant nudging numerous components around to get it to look right. What a pain. But it was done, for now anyway.

That was when I thought about Tkinter, which I've never used before. I've become a huge fan of Python in the last couple of years, using it for anything where performance is not a big deal. I also wrote a very powerful Python-based scripting system for my now-defunct employer, using Boost Python. So using Python and Tkinter for the GUI was kind of an obvious thing to do.

Somewhere in the mists of history I acquired Python and Tkinter Programming, which I think is the definitive book on the topic. I skimmed that, and with frequent help from Google - especially this site - started putting my new GUI together.

What a pleasure! Tkinter automatically takes care of making a reasonable layout, given some general guidance through the pack and grid functions. You no longer have to think about the minutiae of positioning, or spend ages getting boxes to line up with each other. I just couldn't help putting together a bit of infrastructure for collections of config variables, so they are now super-easy - just a list of names and default values and Python and Tkinter take care of everything.

In total it has probably taken me about 6 hours to get everything together - but that included learning Tkinter from scratch and writing quite a bit of infrastructure. And now I have everything I need to control my inverse kinematics, and have an animated visualisation of what it's doing.

I'll never do GUIs any other way now. Tkinter is wonderful!

Saturday 6 August 2011

Favourite restaurants #3: Pizza Cresci, Cannes

My sister used to buy the Daily Sketch, a now long-forgotten English newspaper, on her way to work every day. This was a long time ago - she married and moved out when I was 11. When she came home I would seize it and read the cartoon on the back page, Peanuts. Among the many incomprehensible cultural references, to a child growing up in England in the 1950s, was the occasional mention of "pizza pie". Pizza was pretty much unknown in England back then - probably there were Italian restaurants in London that served it, but those were hardly the kind of places we could afford to go to. It would be a good few years before I'd find out what it meant.

Now, you can probably get pizza in every country in the world. Really it's amazing how quickly it has spread. Of course it was already commonplace in the US back then, which was why Charlie Brown took it for granted. I've eaten pizza in just about every country I've visited; there are times when you just need a break from the local food no matter how much you like it - as in Japan - and certainly if you don't, as in Korea.

Pizza's introduction to England was courtesy of Pizza Express, a London chain (originally) that made them before your very eyes, and made a very tasty pizza too. They even published a pizza cookbook, which worked surprisingly well considering that a domestic oven doesn't get anywhere near hot enough. Though my own introduction to pizza was at a local restaurant when I worked in Reading, Mama Mia - long since closed I'm afraid.

When we lived in France, we would make the pilgrimage every summer right across the south to the beach town of Hossegor - site of another favourite restaurant. It was a long drive - 8 or 9 hours, especially before the autoroute was finished and you had to dice with death on the three-lane stretch between Salon and Arles. By the time we got home we were exhausted and hungry. We would pile out of the car, leaving it packed to the gills with bags and often cases of wine that we'd stopped off for at Buzet, and cram into Isabelle's tiny Abarth to drive down to Cannes to eat.

Tradition had it that we always went to the same place, Pizza Cresci on the waterfront. Just the location is the stuff of dreams - right across the street from the harbour, packed with millionaires' yachts. Oh, and right next to the Municipal Police, hence easily recognised by the illegally-parked police cars, as you can see in the picture at the top. You might expect that in such a touristy location, the food would be mediocre. You couldn't be more wrong!

Pizza Cresci has, quite simply, the very best pizza I've ever tasted, anywhere in the world. I've been to the original pizza restaurant in Naples, and to some of the most famous ones in the US. They've all been good, but none has been quite as good as Cresci. My special favourite is their Pepperoni. They use a thin crust, crisp around the edges but deliciously soaked in melted cheese and oil in the centre. With a sprinkling of hot oil... just sinfully moist and delicious. Isabelle's favourite is something quite unique, an aubergine (eggplant) pizza, very thin slices of aubergine, a little cheese, and the same yummy thin base.

Of course we went there at other times too - if we were tired and just couldn't be bothered with eating at home, it was so easy. And it's huge (by French standards anyway), so even when it's packed at the height of the tourist season, you never have to wait long. But since moving to California, it's a wee bit less convenient and we hadn't been there for a long time. Then this spring, we visited Sorrento and Naples, then spent the weekend in Nice. Fresh from Napoli, the self-appointed capital of pizza, we decided to have lunch there. It was as wonderful as in our memories! The pepperoni pizza was delicious, the aubergine too (so I'm told), and as always with a view of the Cannes waterfront.

Forget all the famous many-starred restaurants in Cannes, head straight for Cresci. It's the place to eat!

Worst Ever Dining Experiences #3: The Fat Duck, England

The Fat Duck is to food as Damien Hirst is to art. Neither is either, by any reasonable definition. Rather, they're an attempt to see how far you can lead people down the path of the ridiculous, if you constantly reassure them how sophisticated they are. Not that different from the story of the Emperor's New Clothes. In both cases they have been highly successful: the Fat Duck managed to fool the Michelin inspectors into giving it three stars, which is truly astounding.

The original concept of a restaurant was that it was a place you could go to get a decent meal. Surprisingly, it was originally not a generic term, but the name of a specific establishment in Paris, the word coming from the French verb restaurer, to restore or replenish. Since 1765, when it was first coined, the meaning has evolved somewhat. If you want a good meal you naturally think of going to a restaurant. But at the same time some of the top restaurants of the world have evolved beyond simply giving you a good meal, to giving you a unique eating experience. You don't go to Troisgros or the French Laundry just because you're hungry. (Or maybe some people do - there was a distant friend of the family, wealthy, who lived in Roanne and who supposedly ate at Troisgros every night. Why not - he could afford it and it was better than eating the French equivalent of beans-on-toast at home).

Though there are still top restaurants whose focus remains just a good nosh - the Savoy Grill in London, for example, which serves excellent but basically unsophisticated food, that Desperate Dan would feel at home with.

There's no question that the Fat Duck provides a unique culinary experience. It's just that it has strayed so far from any notion of food that you can't really call it "eating", except in the raw physiological sense that you do put something in your mouth, chew it and swallow it. Though I'm not sure "eating" applies to things that aren't food, except maybe in the sense "the dog ate my homework".

Our visit there was before it had acquired the fame it has today, about ten years ago. No Michelin inspector had yet been bamboozled into giving it three stars. We were with another couple, making four of us. There were several amuse-gueules - tiny teaspoon-sized concoctions, each more bizarre than the previous one. I remember some kind of purple jelly thing. None of them had much in the way of flavour, I suppose the idea is to look extraordinary - which they did.

But it is the main course that I'll never forget. It was supposed to be the greatest item on the menu - after all, if you're going there, go for the best. Raw pigeon. Yep, pigeon breast, raw. Marinated in something that had changed it a bit, but fundamentally, a raw piece of pigeon. Isabelle took one tiny mouthful and left the rest, in disgust. Foolishly, I persevered with it. It was edible, though a bit chewy, and with little taste. I really can't imagine, with hindsight, why I ate it. It wasn't enjoyable, it wasn't interesting, and as it turned out it was exceedingly unwise.

I don't remember the details of the rest of the meal, except that there were even more weird amuse-gueules. Eventually we left and drove for twenty minutes back to our hotel. I just about had time to run from the lobby to the nearest toilet, where I was violently ill. There's a reason why raw pigeon isn't a common element in the human diet, and I'd just discovered it. Fortunately, I'd recovered by the next morning - after a thoroughly unpleasant night - and was able to take our flight home.

I guess the inspector who awarded the three stars must have chosen something else, or maybe has developed an immunity to fowl-borne gastric infections. Though the Fat Duck did have an extended bout of poisoning its customers a couple of years back, for reasons that have never been very clear. Amazingly, that has done nothing to harm its reputation. The local Indian restaurant would have been shut down (we're talking hundreds of customers here, not just one or two, over a period of months). But when your reputation is about shocking people rather than feeding them, maybe it doesn't matter.