Tuesday, 21 September 2021

Moving my Plane and its Pilot to France

Sierra in France, about to leave Toussus-le-Noble

Sierra

When we moved to California in 2001, I started learning to fly. It was something I’d thought about for years, and in the US it was easy and relatively inexpensive. I got my private pilot license in early 2002.


I bought my own plane in October 2002. I spent a lot of time thinking about what I should get. I decided on a Cessna TR182. The 182 is the Swiss-army-knife of planes, with decent speed, a good payload and range, easy to fly and easy to maintain. The TR182 version adds retractable gear, making it a bit faster, and a turbo, allowing it to fly easily at high altitudes. I found a decent 1980 example at a local dealership. It came with the registration N5296S, and quickly gained the name Sierra, after the last letter.


My friend Bill, who got me started as a pilot, wondered how long I would keep it - apparently the average length of plane ownership is about 5 years. But 20 years later, we’re still together. Sierra was in decent shape when I bought her, for a 20-year-old plane. The avionics were OK but I soon upgraded them to what was then state of the art - a Garmin GNS530 radio/navigation unit, and a GTX330 transponder that could receive information about traffic in the vicinity. Later I had the interior redone, in cream leather which even now looks very nice.


Sierra and I did a lot of flying together, especially in the early days. Isabelle and I often went somewhere for the weekend or just for the day. On one occasion I flew her to Denver and then to Leadville, the highest public airport in the country. Flying over the Rockies makes the value of a turbo very obvious. On the way there I made one of my few excursions into the flight levels, at 20,000 feet. (In the US these start at 18,000 feet, though in Europe they are much lower.)


By the mid-2010s, the paintwork was getting tired. A long life parked in the open at Palo Alto, next to the seawater bay, had produced some nasty corrosion. It was time to undertake the repaint that I had been thinking about for a long time. I went for the original orange and brown colours, and a design fairly similar to the original too, though modified to allow for a full-size twelve-inch registration. The colours are “so 1980s”, but any other colours would be tied to a time too. Back when I first started thinking about the repaint, burgundy-and-grey were very popular (along with strange squiggly lines). By now that would look just as dated as the original colours, and even less suitable.


Soon after the repaint, it was time for an engine overhaul too. With that done, I had a 40-year-old almost-new plane.


The Move


In 2019 we decided to move back to France. It was something we’d been planning, in an abstract way, for several years. Finally all the circumstances came together. One obvious question was, what to do with Sierra? I certainly wanted to carry on flying back in France. Flying is a bit of a drug, and very hard to give up.


I’d flown in France and the UK several times before, though not for a long while. From my own experience, and from following various Internet forums, I knew that it was a very different experience from flying in the US. Everything is more constrained and rule-bound, and a lot more expensive. Instrument flying, especially, is a lot more limited - there are fewer approaches and you have to pay to use them. I really wasn’t sure what kind of flying I’d end up doing.


Essentially I had three choices:


1. Sell Sierra, join a flying club, and fly club planes. The problem with this is that flying clubs in France generally have only a handful of aircraft, sometimes just one. Taking a plane for an extended trip is out of the question, so I’d be limited to local bimbling around.


2. Sell Sierra and buy a plane in Europe. This seemed like a good idea until I asked on European pilot forums about it. Everybody said it was a bad idea. Maintaining a French-registered plane is much more complicated than an N-registered one. Also, I’d want to buy something substantially newer than Sierra, now over 40 years old, so it would mean investing a lot more money.


3. Take Sierra with me. This isn’t without its own complications, as you’ll see later. But it meant I could keep the plane I know so well. And in the worst case, I could always sell her in Europe.


There was a variant of number 3. I could leave Sierra in the US, get some experience in Europe, and decide what to do later. But planes need to be flown. I looked into putting her on the rental line at a club. The club head’s prognosis was uncompromising. “Your plane is old, it’s a turbo, and it’s retractable. The question isn’t whether someone will damage it, it’s just how long it will take. Don’t do it.” That left arranging with friends to fly her from time to time, but that could quickly get complicated too.


There are two ways to move a small plane like Sierra across the Atlantic. The obvious one is to fly it, starting at Goose Bay in north-eastern Canada, then via Greenland and Iceland to Wick in northern Scotland. For Sierra, this can be done without even installing extra fuel tanks. But it’s a risky flight - if the engine stops, you’re in the near-freezing north Atlantic, with minimal chance of survival even wearing a full waterproof immersion suit. I wasn’t keen on this, and my wife was even less keen. There are professional ferry pilots, who do this for a living. I got in touch with a company, who quoted me nearly $40,000. That is getting on for half the value of the aircraft, and not realistic.


That left the other solution, which is to unbolt the wings and put everything in a shipping container. Then, when the container arrives in France, the wings are reattached, and the plane is good to go. At least, that’s the theory. I found someone nearby, at Hayward airport just across the bay, who does this for a living. They quoted me $9000 for the dismantling, packing and shipping. There would obviously be some costs at the French end too, but it would be a lot cheaper than ferrying.


The surprising thing about their quote was that it included absolutely nothing about what would happen when the plane arrived in France. Their plan was to ship it to the container port at Fos-sur-Mer, near Marseille, and that was it. I’d assumed they would be in touch with a network of shops at least in major countries who could finish the job, but evidently not.


Fortunately - so it seemed at the time - through the Internet, I had made contact with someone at my destination airport, Cannes-Mandelieu (LFMD), who seemed perfect. She was an FAA qualified mechanic and inspector (IA) as well as being both a French and FAA instructor. She agreed to take charge of the reassembly, and would also be ideal for getting the French pilot qualifications I would eventually need.


Arriving in France


Sierra at Hayward, waiting to be shipped

On Monday 15th March 2021, I dropped Sierra off at Hayward airport, meeting for the first time Ed, who ran the shop there. His office was full of exhibits and artifacts going back a very long way - he told me he’d been doing this for over 40 years. I was joined there by my aerobatics instructor friend Rich, who had agreed to drive me back to Palo Alto. Ed regaled us with tales of the various aircraft he’d shipped over the years.


The rest of our move was a long and incredibly stressful tale of bureaucracy, packing and dealing with the movers. Despite my worries and all the sleepless nights, everything ended up going smoothly. Two days later, on the 17th, we set off for San Francisco airport, with our cat Missy, seven huge suitcases and as many items of cabin baggage, for the one-way trip to France.


As far as Sierra was concerned, there was nothing to be done until she showed up in France. The original estimate for that was six weeks - two weeks for the dismantling and packing, and four weeks for the container to reach France. That turned out to be extremely optimistic. Nearly three months passed before I even got a confirmed date for the shipment. The shop attributed this to Covid and the difficulty of getting hold of containers. For sure, all shipping worldwide was a big mess - there were articles about it in the media.


I wasn’t bothered by the delay. We had far more than enough to occupy us with the household move, once that container showed up, at the end of May. It would be the end of July before we had (mostly) finished unpacking and organizing things at our new home in Nice.


In early June, Ed sent me the shipping information, with a planned arrival at Fos-sur-Mer on 20th June. Now it was time to get serious about the French end of things. But it turned out not to be so simple.


The mechanic who was supposed to take care of reassembly became increasingly hard to get hold of. She didn’t return emails, and when I called her she would be busy, promising to call me back - which she never did. She did ask me, reasonably enough, for some information about how Sierra had been packed into the container. But communication with Ed was even harder. He also failed to return emails or answer the phone. The one time I did manage to get hold of him, I had to squeeze every word out of him. The only information I could get was “you’ll need a fork-lift with extra-length forks”. That seemed reasonable enough, and I passed this information on to the mechanic.


Ed had also managed to tell me the contact for the shipping agent in France. Fortunately they were extremely helpful and responsive. Finally the container arrived at Fos. It would take a few days to clear customs and then be delivered by road to Mandelieu.


That was when my mechanic dropped her bombshell. “We’ll be able to do the reassembly at Mandelieu,” she said, “but it’s your responsibility to get the plane out of the container.” WHAT? I have absolutely no idea how to do that, and I’m certainly not going to teach myself fork-lift driving skills in the process.


Enter my friend Laura. I’d met her on the internet very early on when trying to figure out the move. She’d shipped her own Carbon Cub from the US to France a year earlier, and gave me lots of useful advice. I sent her a despairing message late at night, when I got the message from the mechanic. She responded instantly, giving me the contact information for Michael, who had taken care of her plane.


At 9 the following morning I called him, naturally speaking French. After a couple of sentences he said to me, “You speak English, don’t you?”, and I realized he’s American. “Of course I’ll take care of your plane,” he said. We had a deal. I can’t even begin to describe my relief. To store the plane in the container while I searched for another solution would have cost something like €100 per day.


Naturally he wanted to know a bit about how the plane had been packed. I tried to get in touch with Ed, at Hayward, but got no response at all. Michael tried too, but to no avail. Finally he sent me a message saying, “Sorry, was in a car accident, but I’m fine now.” I hope he is, but that was the last we ever heard from him. Michael just had to improvise when it came to extracting Sierra from her container.


I had to call the importing agent again. Michael’s shop is at Toussus-le-Noble, one of the few GA airports in the Paris area. It would cost me over €1000 more to move the container from Fos to Toussus, but there was no choice. One little twist was that the container made the journey by rail - making Sierra one of the very few aircraft to have travelled by train. I was told it would take at least a week, because of all the congestion of container traffic.


Reassembly


Then on Friday, I got a message from the importers. “Please pay your bill immediately so the shipment can take place. Is the mechanic ready to receive the container on Monday morning?”


Well no, he wasn’t. I’d told him it would take at least a week. I called him. “Sure, Monday is fine,” he said. On Monday morning he texted me a picture of a huge container truck. “On my way to work, got stuck behind this!” he said. And indeed it was my container. Later that day he sent me more pictures. It was wonderful to see Sierra again, even if she was still a long way from being flyable. He extracted her from the container with no difficulty and sent the truck back on its way, avoiding any storage charges.


Sierra's container arriving at Toussus-le-Noble


It turned out that the packing had not been done very well. The fuselage was resting on the bare wood of a pallet, with no packing at all, and the gear half-retracted. Surprisingly, this had caused little damage, just a few cosmetic scratches on the belly. As the reassembly progressed other damage showed up. When they removed the wings, they’d cut through a couple of cables and hadn’t noted the position of the critical camber-adjustment cams. They’d also damaged some bits and pieces of the landing gear. Nothing that couldn’t be fixed, but irritating.


I agreed with Michael that he would do an annual at the same time, since it would soon be due, and left everything in his hands.


A couple of weeks later I was able to meet him. He was visiting Mandelieu and invited me to lunch and to meet the people he knew there, which turned out to be extremely useful. He’d flown down from Paris in his Aerostar, a sleek, fast piston twin. It’s one of the few small planes where you have to worry about the 250 knot speed limit below 10,000 feet. Afterwards I visited the FBO where Sierra was to be parked and maintained, meeting the people there.


Paperwork


There’s a saying in aviation that no aircraft can fly until the weight of the associated paperwork exceeds the weight of the aircraft. This wasn’t quite true for Sierra, but there was a lot to be taken care of.


The first hurdle was customs. Normally an import like this would have to pay VAT - 20% in France, quite a lot of money. But because it was part of our belongings returning to live in France, it ought to be exempt. Laura had been a big help with this, and among the vast quantities of baggage we had brought with us was every document I could find which would support it, including 20 years’ worth of receipts for parking and taxes at Palo Alto. It wasn’t enough, though. The importers wanted the original purchase receipt from 2002. Getting hold of this, or at least a copy of it, was quite a challenge, but fortunately it worked. The shipment cleared customs without a hitch.


The next problem was legal ownership. It comes as a surprise to most people that you can operate a plane in France with a US (N) registration - like N5296S. If we’d managed to import our Toyota FJ, for example, we would have had to re-register it in France within a few months - which is the main reason we didn’t. Probably over half the privately owned aircraft in France, and Europe generally, are on the US register, and flown by pilots using their FAA licenses. The ongoing bureaucracy associated with European registration is much more demanding than in the US. Also it is pretty much impossible for a private pilot to obtain an instrument rating in Europe, so if you want to fly IFR the obvious route is an N-registered plane and an FAA IR. The European authorities hate this, and have taken steps to control it, but it remains true.


Still, without US citizenship, I cannot legally own an N-reg plane outside the US. Fortunately this is a widespread problem with a well-known solution. There are companies that specialize in providing US ownership via trustees, keeping everything legal. Following recommendations on the web, I got this set up without too much difficulty. In the US I’d owned Sierra through a Delaware Corporation, and the cost was similar. (That also created some worries over the duty-free import, but it turned out not to be a problem).


Finally I had to get European insurance. This worked out to be a bit more expensive than in the US, but not a problem. In the US, it’s insurance companies who really decide what you can own and fly. If as a new PPL with 100 hours you go out and buy a sophisticated retractable, you simply won’t be able to insure it. In Europe this seems to be less of a problem. For example when I added Michael to the insurance, so he could do post-maintenance test flights, all they wanted to know was his total time. Nothing about time in model, or retractable time, which would have been a major issue in the US. When I added an instructor to my US insurance, they wanted not only time in model but time in model of the same year, which is ridiculous.


Flying in France


Although I can legally fly my plane in France without any formalities, it seemed like a good idea to get some experience operating in France and also with using French on the radio. This isn’t required for ATC, who will work in English, but it’s needed if you ever go to uncontrolled fields, and it seems like a good idea to be able to do it even with ATC. Also, from May 2022, a French license will be needed for residents even when flying an N-reg aircraft with an FAA license. The good news is that they have invented a simplified procedure for getting one, for experienced pilots, but still it has to be done.


We planned to stay at our “beach house” near Biarritz, while we waited for our possessions to show up. I’d flown a long while ago with the Aéroclub Basque at Biarritz airport, so I contacted them, but they took a long time to respond and when they finally did they weren’t very encouraging. One time, years ago, I’d stopped by the club on the airport at Dax, and when I got in touch with them they were much more helpful.


Their first requirement was for a French medical. I made an appointment with an aviation doctor nearby. In the US the system is pretty much self-policed. If you can walk unaided into the doctor’s surgery you will probably get a medical, as long as you haven’t declared anything the FAA doesn’t like - which is pretty much everything. On the other hand, if you omit so much as a dental hygienist appointment from your declared list of medical treatment, you can be banned for life from holding an FAA license. This makes the self-policing work quite well.


The French doctor actually examined me, which was quite a shock. I did a hearing test, a vision test, an ECG, a respiratory capacity test, and various other things. Luckily it all went well, and half an hour later I left his surgery clutching a French medical certificate.


My first flight at Dax was in a Robin DR400, the universal French aeroclub plane, which had been modified to fit a 100hp Rotax engine. The nice thing about the plane is that everything seems brand new. The not-so-good thing is the 100hp engine, which makes climbing a delicate matter and limits it to a top speed of around 85 KIAS. Still, I’m not in a hurry to get anywhere. We flew to Biarritz, not far away, and did a missed approach, since otherwise we would have had to pay the €40 landing fee. That’s another big difference between the US and Europe - in the US only truly huge airports, like SFO and JFK, have landing fees. At an airport equivalent to Biarritz - say San Luis Obispo - you pay nothing. Then we returned to Dax via the beautiful Atlantic coast.


Dax is a strange airport. I don’t know its history, but now its main role is as the training base for all official French helicopter pilots - army, police and so on. There is a fleet of about 20 identical red-and-white H120s based there, and during the working day there is a constant background noise of helicopters. It’s controlled by the military, and civil flying is permitted only for the aeroclub and the small handful of based planes. You can’t just go and land there.


The other odd thing about Dax is the runway. The actual paved surface is 800 metres (2600 feet) - about the same as Palo Alto. But trees just off one end limit its useful length to 494 metres, about 1600 feet and definitely the shortest runway I’ve ever used at a designated airport. Landing on 25, the usual runway, you spend more time looking straight down at the treetops 100 feet below you than you do looking at the runway. The airport is closed at night, and you can understand why.


I’ve done several more flights there, getting to know French airspace and regulations, and practicing French on the radio. I speak French fluently, almost bilingually, but still the first few radio interactions were just as panic-inducing as the first few flights in the US. Now I can generally get by OK, but if ATC goes off-script I can still be left completely lost. There’s always the option to switch to English, but that seems like a bit of a defeat. Occasionally it happens that ATC hear my accent and reply in English anyway, which is kind of annoying.


An oddity of French aero clubs is that the instructors work for nothing - bénévole in French. This makes no sense to someone used to the US system, where you pay $50-100 per hour, but that’s the way it is. Most of my flying has been with Pierre-Alexandre, a trained airline pilot, who but for Covid would now be flying for Ryanair or Wizz. He’s staying current and filling in time by instructing at Dax, but in order to eat and pay the rent, he works in a sandwich shop in the mornings!


I also managed one flight with an instructor in a 172 out of Cannes. We did a tour of all the named VFR reporting points around the airport, which was extremely useful. I’d hoped to fly some more, but between holidays, aircraft availability and other hiccups, I didn’t.


Named reporting points are another non-US difference. In the US, there are reporting points around airports but they’re informal. At Palo Alto you can report Lake Elizabeth, Cement Plant, Cooley’s Landing and numerous others. The only way to get to know them is to fly locally. If you fly to an unfamiliar airport and they tell you “direct Joe’s Tire and Muffler” you have to say “unfamiliar” and hope they’ll come up with something less cryptic. In France every airport has a “VAC”, a combined approach chart and airport information sheet. For bigger airports this will include several named reporting points which are used when arriving and departing. Dax for example has S, SE, N, N2, NE, BG and more. Luckily SDVFR, the French VFR navigation app, knows about these, because some of them are pretty obscure if you’re trying to identify them by ground reference.


First Flight

Our route from Toussus to Mandelieu

Getting Sierra ready to fly again took a long time. There was, as expected, a constant string of little things that needed fixing, and then there was August - when the whole of France, including Michael, disappears on holiday. Finally, at the beginning of September, we agreed that I would pick Sierra up on Monday 6th. This was subject of course to weather - I certainly wasn’t going to fly a recently-reassembled plane in IFR, and with zero IFR experience in Europe. The previous week I’d signed a contract to park and maintain Sierra at Mandelieu - necessary since otherwise I would have nowhere to park when I got there.


Luckily the weather was good. A pilot and CFI friend of mine had agreed to come along as moral support, and practical support if necessary. We took an Air France flight to Orly, and a taxi for the half-hour ride through the leafy southern suburbs of Paris to Toussus. Finally, I saw Sierra again, just one week short of six months since I left her at Hayward. She looked perfect.


Michael gave us a tour of his hangar. His speciality is rebuilding damaged Piper PA46 Malibus, of which there were several cadavers around the edges of the hangar. Two of them had been damaged at the same airport, the very challenging “altiport” at Courchevel. We went to lunch at the on-field restaurant, where I finally met Laura - she’d agreed to join us there. It was all very enjoyable, swapping flying stories. She had worked in the Bay Area for a time, and flown out of Palo Alto, so we knew a lot of the same people there.


We did a short post-maintenance test flight together. Everything seemed fine, though we forgot to test the autopilot, which turned out to be a mistake. I surprised Michael by rolling into a 60 degree bank - quite forgetting that not everybody has aerobatic experience. Toussus is a tricky airport. It’s right under the 1500 foot floor of the Paris airspace, which is absolutely closed to VFR. It’s also hemmed in on three sides by the surface region of the same airspace, so it’s a bit like flying in a blind tunnel. There is one route out, and the same route back in again.


I’d been worrying over the preparation of the flight for weeks. French airspace is extremely complex. There is military airspace everywhere. Some of it is permanently closed, but most isn’t. All my French pilot friends told me, don’t worry about it. Plan a straight line, you’ll nearly always be allowed in, maybe with an altitude change or a slight detour. Michael had quite literally flown in a straight line from Cannes to Paris after our meeting there. But what if they don’t let you in? What do you do then? Finally, with the help of the excellent SDVFR app which understands all the subtleties of flying in France, I’d worked out a route which would let me avoid all military airspace, including all the stuff that would probably be inactive. It also avoided flying over any seriously inhospitable terrain, like the Massif Central. The actual route was Toussus - Rambouillet - Pithiviers - Nevers - Moulins - Montelimar - a few kinks around stuff en route - Cannes. I had to choose the right altitude - 7500 feet, no higher or lower.


Finally at 4pm, two hours later than I had intended, we took off. Despite all my fears the flight was completely uneventful. I soon discovered that the autopilot didn’t work - mysterious, since Michael had assured me it was fine in ground testing. It was quite enjoyable to hand-fly a long flight for a change. The first hour was over the rather dull agricultural plain south of Paris, under a perfect blue sky. As we got further south we started to see a few clouds, though nothing you couldn’t fly around. We also started to see terrain - we were within gliding distance of the Rhone valley, but underneath us were the rolling foothills of the Massif Central.


The town of Montelimar is famous for two things: nougat, as immortalized in the Beatles song Savoy Truffle, and its VOR (radio navigation beacon). Whenever you fly from London to Nice, the pilot always comes on the radio about 30 minutes out and says “we are just approaching Montelimar”. Why this little insignificant place rather than, say, Lyon or Avignon, you ask yourself. The answer is that the VOR there is where the flight will turn left towards Nice.


We did the same, but then we realised that despite the perfect weather forecast, there were actually some clouds. We dropped down to 7000 feet, and then by stages to 5000 - still comfortably above the terrain, though not what we’d planned. Our route took us over the vines of the Cotes du Rhone, and later over Cotes de Provence, and just north of the highest mountain in the area, Le Mont Ventoux at 6600 feet.


Clouds over Le Mont Ventoux

We’d been talking to someone for the whole flight. France makes a distinction between control and “info”, which is a VFR-only service. Sometimes they’ll hand you off to someone, sometimes they just say “squawk VFR” and leave you to figure out who comes next, though they will tell you if you ask. Generally working in English was fine, though it took me several attempts to get Marseille Info to understand who I was. That seems a good argument for using French.


Finally we reached the first reporting point for Mandelieu, WL, and were able to call Cannes Tower. They gave us a straightforward arrival - thank goodness for my one recent flight there. We taxied to my brand new parking spot, after 2h45 of flying.





And that was it. We left for the Atlantic coast again a couple of days later, leaving Sierra in the hands of Jet Azur to finish up the few remaining squawks. Now I have to figure out where we want to fly to, when we get back.


First Landing at Mandelieu

Wednesday, 23 June 2021

Kotlin for a Python and C++ Programmer

A while ago I got interested in Kotlin as a possible type-safe alternative to Python for our system. A lot of the non-performance-critical things are done in Python. As a lot of people have discovered, Python works well for small programs. But small programs have a tendency to get bigger, and to take on a life of their own. Maintaining large Python programs is hard, and refactoring them, for example to change the way objects relate to one another, is just about impossible. You're sure to miss some corner case which will show up as a runtime error much later.

Our CLI was the obvious candidate for experimenting with Kotlin. This is several thousand lines of Python and predictably, it has become hard to maintain. My first efforts with Kotlin were not very successful. It is based on Java and inherits some mis-features and design baggage from there, which stopped me from doing what I wanted. Also, the build environment is a nightmare.

Then recently I took another look at this project, and saw a way around my previous problem. As a result, I have since written a complete Kotlin implementation of the CLI. It's a nice piece of code and much easier to maintain and work with than its Python equivalent.

Here's a summary of the good and bad points of Kotlin, based on my experience.

  • Really, really good: the Kotlin language. It's a delight to use with lots of features that lead to compact, uncluttered code, yet totally type-safe. More on that later.
  • Very good: the IDE (called Idea). Intuitive and easy to get used to, and makes writing code just so easy. The only problem is that it occasionally crashes, taking the system with it.
  • OK but not great: libraries. Like Python, Java supposedly has libraries for everything. But finding them is hard, and figuring out how to use them from Kotlin is even harder.
  • Awful beyond belief: the build system, a dog's breakfast of several different tools (Gradle, Maven, Ant, who knows what else). As long as you stay within the IDE, life is mostly good. But at some point you generally need to build a stand-alone app. There is no documentation for how to do this, and what you can find online is confusing, contradictory and rarely works. Probably if you come from a Java background all this seems normal.

Idea: the IDE


Kotlin is joined at the hip to its IDE, which comes from the company that invented the language (JetBrains). The same company also produces PyCharm for Python, and the two are very similar. It's everything you could ask of an IDE. The instant typing-time error checking has spoiled me completely, and now I expect Emacs to do the same when writing C++ - except of course that it doesn't.

The biggest problem, running it on Ubuntu, is that every so often it freezes and takes the entire GUI with it. If you can log in from another system, "killall -9 java" kills it and allows the system to keep going. Otherwise, you just have to reboot.

It does a good job of hiding the complexities of the build system, as long as you want to stay entirely in the IDE. But the problem with anything "automagic" is what happens when it goes wrong, or doesn't do what you need. The build system is a nightmare (see later) and the IDE offers no help at all in dealing with it if you want to create a stand-alone app.

It sometimes gets confused about which library a symbol comes from, and flags errors that aren't there. It still lets you run the compiler, so it's only a nuisance. It also isn't very helpful when you add a library that comes from a new place. You have to hand-edit an obscure Gradle file, and then restart Idea before it understands what you have done.

The Language


Once I got my head around it, Kotlin turned out to be the nicest language I have ever used. It particularly lends itself to functional-style programming, but there's nothing to stop you using it like C or Fortran. The few irritating things are a result of its Java legacy - fundamentally, Kotlin is just syntactic sugar over the top of Java. Some of the really nice things:
  • unobtrusive strong typing: every expression's type is known at compile time, yet you rarely have to be explicit about types. The compiler does an excellent job of figuring out types from context, like auto in C++ but much better.
  • ?. and ?: operators: between them these make dealing with nullable values very clean and simple. ?. lets you write in one line what would take a string of nested if statements in C++ or Python. 
  • lambda functions: all languages now support lambda (anonymous) functions, but in both C++ and Python they're an afterthought, and it shows. In Kotlin they are an integral part of the way the language is meant to be used, making them a very clean and natural way to express things.
  • the "scope functions": a collection of highly generic functions that make it easy to do functional programming. For example, the 'let()' function allows you to execute some procedural style code using the result of a functional call chain. 'also()' makes it very easy to write chainable functions.
  • generic sequence handling functions: 'map()' does the obvious job of applying a function to every element of a sequence or collection. There are plenty more that simplify all kinds of common requirements, for example to trim null elements from a list after some processing.
  • string interpolation: the string "foo = $foo" replaces the last part with the value of foo, converted as appropriate to a string. That started with Perl and is now available in Python 3. Kotlin takes it further though, allowing complex expressions and figuring out the string syntax, e.g. "foo = ${x.getFoo("bah")+1}".
  • extension functions: it's easy to define new functions as if they were member functions of a class. They can only access public members of the class, but to the user they work exactly as if they were part of the base class definition. For example I wrote a function String.makePlural() which figures out the plural of a noun. This has always struck me as an obvious improvement but it has never even been considered for C++ (nor Python as far as I know). There's a short sketch pulling several of these features together just after this list.
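
To make that concrete, here's a quick sketch combining several of these features. The makePlural() here is a toy version, just to show the shape - my real one handles far more cases:

    // An extension function: callable as if it were a member of String.
    fun String.makePlural(): String = when {
        endsWith("s") || endsWith("x") -> this + "es"
        endsWith("y") -> dropLast(1) + "ies"
        else -> this + "s"
    }

    fun main() {
        val words: List<String?> = listOf("plane", "box", "registry", null)

        // map() over a collection; ?. and ?: handle the null element cleanly.
        val plurals = words.map { it?.makePlural() ?: "(missing)" }

        // String interpolation, including a full expression inside ${...}.
        println("plurals = $plurals, count = ${plurals.size}")

        // also() returns its receiver, so it chains; let() runs a block
        // on the final result of the chain.
        words.filterNotNull()
            .also { println("non-null words: $it") }
            .map { it.makePlural() }
            .let { println("first plural is ${it.first()}") }
    }
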
There's nothing really bad about the language. The "generic" support is fairly feeble compared to C++ templates. In C++ a template parameter type behaves exactly as any other type, for example you can instantiate it. And no validity checking is applied until you instantiate the template, which is very flexible.

Kotlin's generics are built on the Java equivalent. You have to specify exactly what the template can and can't do in the function definition, meaning you can't for example write a numeric function using the normal arithmetic operators. (It doesn't help that there is no common supertype for integers and floats that supports arithmetic). Once you accept this, it's not too hard to work around it. But it's a shame.
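
As an example of the workaround (my own toy code): since there's no type bound that gives you arithmetic, you pass the operations in explicitly, typically as lambdas.

    // You can't write "xs.fold(...) { a, b -> a + b }" for a generic T -
    // nothing tells the compiler that T has a '+'. So pass it in.
    fun <T> sum(xs: List<T>, zero: T, plus: (T, T) -> T): T =
        xs.fold(zero) { acc, x -> plus(acc, x) }

    fun main() {
        println(sum(listOf(1, 2, 3), 0) { a, b -> a + b })      // 6
        println(sum(listOf(1.5, 2.5), 0.0) { a, b -> a + b })   // 4.0
    }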

Libraries


The great thing about Python is that you can find libraries to do just about anything. The same is supposedly true of Java, too. Since Kotlin can easily call Java, that should mean you can find libraries to do just about anything in Kotlin too. But you can't.

I first ran into this when trying to use a Rest API from Kotlin. Python has an excellent library for this, called Requests. After a bit of googling, I found that someone had ported it to Kotlin, calling it khttp. Then I spent a couple of hours trying to figure out how to get Kotlin to actually use it. That ought not to be hard, but you have to tell the build system where to find a library, i.e. a URL. And none of the documentation, for any library, tells you this. Or sometimes it does, but it's wrong.

I did finally get khttp working, and it was good. But when I returned to my project a few months later it had simply disappeared. It was a single-person project, and the maintainer had got bored with it and moved on to something else. There were bits and pieces about it on the web, and maybe you could get it from here and patch it from there, but it didn't look like a good path.

So I googled around some more, and found another library. It's called Fuel, and it does allow you to make Rest requests. But it is obscure and barely documented. For example, when Rest returns an error, there is no straightforward way to access the details. You can do it in a very clumsy way. But even then, it uses the type system in such an obscure way that there is no way to write a common function that will work across multiple request types (Get, Put, Post and so on). You have to repeat the same ten lines of ugly impenetrable code.

One of the much-vaunted features of Kotlin is coroutine support, allowing you to run lightweight threads that maintain their own stack and state. This looked useful to handle parallel Rest requests, which I needed. Even though it is documented as part of the language, it isn't really. It's part of a library, and has to be explicitly imported. But from where? Everything you can find on the web says you need to "import kotlinx.coroutines". But that doesn't work. Eventually I did figure it out, but I never did convince the IDE. That showed an error right up until I decided I didn't need coroutines anyway.
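
For anyone trying the same thing, here's roughly the shape of what works. The missing piece is a Gradle dependency on kotlinx-coroutines-core (the coordinates are the standard ones; the version number is just an example), after which the import really is kotlinx.coroutines:

    // In build.gradle.kts:
    //   dependencies {
    //       implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.6.4")
    //   }
    import kotlinx.coroutines.async
    import kotlinx.coroutines.awaitAll
    import kotlinx.coroutines.runBlocking

    // Stand-in for a real Rest call - imagine an HTTP GET here.
    suspend fun fetch(url: String): String = "response from $url"

    fun main() = runBlocking {
        val urls = listOf("http://a.example", "http://b.example")
        // Start all the requests in parallel, then wait for the lot.
        val results = urls.map { async { fetch(it) } }.awaitAll()
        results.forEach(::println)
    }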

Another example: a CLI needs the equivalent of GNU readline, so commands can be recalled and edited. The good news is that someone has ported the functionality to Java, in a library called JLine. In fact, they've done it three times - there is JLine, JLine2, and JLine3. They're all different in undocumented ways. But anyway there's hardly any documentation. To find out how to show history (equivalent of the shell 'history' command) I ended up reading the code.
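
In case it helps anyone else, here's a minimal JLine3 sketch of what I mean, pieced together from reading the code - so treat the details as approximate:

    import org.jline.reader.EndOfFileException
    import org.jline.reader.LineReaderBuilder
    import org.jline.terminal.TerminalBuilder

    fun main() {
        val terminal = TerminalBuilder.builder().build()
        val reader = LineReaderBuilder.builder().terminal(terminal).build()
        while (true) {
            val line = try {
                reader.readLine("> ")   // recall and editing come for free
            } catch (e: EndOfFileException) {   // Ctrl-D
                break
            }
            if (line == "history")
                // The part I had to read the code to find.
                reader.history.forEach { println("%4d  %s".format(it.index(), it.line())) }
            else
                println("you said: $line")
        }
    }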

The experience with other libraries has been the same:
  • the documentation is between non-existent and very poor
  • figuring out where to get the library from is near impossible
  • even though there is probably a library for what you want, finding it is a challenge

The Build System


When you start writing Kotlin, you work entirely in the IDE and you don't even have to think about the build system. When you want to run or debug your program, you click on the menu, it takes a second or two to build, and it runs. Life is good.

But if you're writing a program it's probably because you want it to do something. And for that you most likely need to be able to run it without the IDE - from the command line, file explorer or similar. In the Java world, that means creating a .jar file. And that is where the fun begins.

You might reasonably suppose that the IDE would have a button somewhere, "turn this into a Jar file". But it doesn't, nothing like it. So you google it, thinking the menu item you need must just be buried somewhere. But no. What you find is incredibly complicated suggestions about editing files that don't even exist in your environment.

When you do finally manage to persuade something to create a Jar file, and you try to run it, you get a message about not having a manifest for something or other. If you're an experienced Java hand, this may mean something to you. All I know is that the IDE has agreed to build something, but missed out something vital.

Eventually, somehow, I managed to create a menu item that successfully built a runnable Jar file. Problem is, I have no idea how. When I created a trivial "hello world" program just for this exercise, I could never get it to work again. And then somewhere along the way I did something wrong, and the menu item disappeared, never to return.

Ironically, I did once find an article by someone from JetBrains saying "of course when you write a program, you want to be able to run it without the IDE. Here's what you need to do." The instructions were simple, and they worked. The trouble is, no matter what search I do, I have never managed to find the article again.

Java programs were originally built using something called Ant. That was too complicated, so it was overlaid with another tool called Maven. Then that was too complicated too, so it was overlaid with something called Gradle. That came with its own language, but Kotlin invented a variant where the build requirements are described in a Kotlin mini-program.
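
For completeness, here's a sketch of the kind of minimal build.gradle.kts that can produce a stand-alone command-line app - including the manifest entry whose absence causes the error described above. The versions and the main class name are examples, not gospel:

    plugins {
        kotlin("jvm") version "1.7.10"   // whatever Kotlin version you use
        application
    }

    repositories {
        mavenCentral()   // where dependencies get fetched from
    }

    dependencies {
        implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.6.4")
    }

    application {
        mainClass.set("MainKt")   // the compiled class containing main()
    }

    // The "manifest" that a bare jar complains about, plus bundling the
    // dependencies so the jar runs stand-alone (a "fat jar").
    tasks.jar {
        manifest { attributes["Main-Class"] = "MainKt" }
        duplicatesStrategy = DuplicatesStrategy.EXCLUDE
        from(configurations.runtimeClasspath.get().map { if (it.isDirectory) it else zipTree(it) })
    }

With something like that in place, './gradlew build' leaves a jar under build/libs that runs with 'java -jar'. In theory.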

So far so good, but all of these tools are mind-numbingly complicated and poorly documented for the casual user. Such documentation as you can find assumes complete familiarity with the world of Java. Just because Gradle sits on top of Maven doesn't mean you can ignore Maven. You still sometimes have to go and edit Maven files, which use XML. I've always viewed XML as an object language that only computers should deal with, just like Postscript or PDF. But the Java world is in love with it.

This all really starts to matter if you want to build your Kotlin program as part of some larger project - for example, our entire application. That is built with Make, and heaven only knows Make is a nightmare. But it's a familiar nightmare.

The IDE creates a "magic" file called gradlew. It isn't mentioned in any instructions, nor is what to do with it. But a friend told me that './gradlew build' will build a stand-alone jar file from the command line - and sometimes it does. Luckily that worked for my "real" program, though when I tried it with a toy hello-world program it didn't.

Summary


Kotlin is a great language, and a pleasure to use. Sadly, the nightmare build system, and the lack of any help from the IDE in dealing with it, mean it is not really ready for prime time as part of a serious application or project.

Monday, 9 November 2020

My Little List of Useful Principles

This first appeared on my web page, but I thought it deserved to be repeated here on my blog. 

The David Stone Principle

“Never ask a question that has an answer you may not like.” Also expressed as “It is easier to obtain forgiveness than permission”. In other words, don’t ask if it is OK to do something, because the chances are there will be someone who will have some reason why you shouldn’t, and having asked the question (and got an answer you don’t like), you have placed yourself under an obligation to do something about the answer. Whereas if you just got on and did it, you could deal with any objections afterwards. This has two advantages. First, you’ve already done what you intended, and it is pretty unlikely that you will be made to undo it. Second, people are less likely to object after the fact anyway.

Harper’s Theory of Socks

Everybody who has ever packed a suitcase knows that no matter how full the suitcase, no matter how difficult it is to close, there is always some crevice where you can squeeze in one more pair of socks. Those familiar with the Principle of Mathematical Induction will immediately see that it follows that you can put an infinite number of pairs of socks in a single suitcase.

If this is obviously fallacious, it is less obvious why. But in any case it is a useful riposte to the executive or marketing person who wants to add just this one tiny extra piece of work to a project.

Law of Ambushes

I heard this one from Tony Lauck, but he claims to have got it from someone else. Think of an old-fashioned Western, with the good guys riding up towards the pass. They know the bad guys are up there somewhere, and they’re looking every step of the way, scanning the hilltops, watching for any movement, peering around twists and turns in the trail. Suddenly there’s a dramatic chord and the bad guys appear from nowhere, guns blazing. Of course the good guys triumph, except the one you already figured was only there to get shot, but the point is, ambushes happen and take you by surprise even though you expect them, even though you’re waiting for them every second. And they always come from where you weren’t expecting and weren’t watching.

The Lauck Principle of Protocol Design

This one is a little technical, but it is so fundamentally important to the small number of people who can benefit from it, that I include it anyway. Communication protocols (such as TCP) work by exchanging information that allows the two, or more, involved parties to influence each others’ operation. When designing a protocol, you have to decide what information to put in the messages. It is tempting to design messages of the form “Please do such and such” or “I just did so and so”. The problem here is that the interpretation of such messages generally ends up depending on the receiver having an internal model of its partner’s state. And it is very, very easy for this internal model to end up being subtly wrong or mis-synchronised (see the Law of Ambushes). The only way to build even moderately complex protocols that work is for the messages to contain only information about the internal state of the protocol machine. For example, not “please send me another message”, but “I have received all messages up to and including number 11, and I have space for one more message”. There are legitimate exceptions to this rule, for example where one protocol machine has to be kept very simple and the other is necessarily very complex, but they are rare and exceptional. As soon as both machines are even moderately complex, this principle must be followed slavishly.
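
Here's a sketch of the difference, in Kotlin rather than protocol notation (my illustration, not Tony's):

    // The tempting, fragile style: messages about actions and requests.
    // To interpret them, the receiver must model the sender's state.
    sealed interface FragileMsg
    object PleaseSendAnother : FragileMsg
    object IJustSentOne : FragileMsg

    // The robust style: a message describes only the sender's own state,
    // e.g. "received everything up to 11, space for one more".
    data class AckWindow(val highestReceived: Int, val spaceFor: Int)

    // The peer needs no model of what its partner thinks happened: it
    // just (re)transmits whatever the latest state report permits.
    fun nextToSend(ack: AckWindow): IntRange =
        (ack.highestReceived + 1)..(ack.highestReceived + ack.spaceFor)

    fun main() {
        println(nextToSend(AckWindow(highestReceived = 11, spaceFor = 1)))  // 12..12
    }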

The Lauck Principle of Building Things That Work

If you don’t understand what happens in every last corner case, every last combination of improbable states and improbable events, then it doesn’t work. Period. Yes, you may say, but it is too complex to understand all of these things right now. We will figure them out later as we build it. In this case, you are doomed. Not only does it not work, but it will never work.

The Jac Simensen Principle of Successful Management

Get the right people doing the things they’re good at, and then let them get on with it. It sounds simple, but it is rarely done thoroughly in practice. It's applicable to all levels of management but especially at more senior levels where there’s a lot of diversity in the tasks to be undertaken.

The Principle of Running Successful Meetings

Write the minutes beforehand. If you don’t know what outcome you’re trying to achieve, you stand little chance of getting there.

Harper’s Principle of Multiprocessor Systems

Building multiprocessor systems that scale while correctly synchronising the use of shared resources is very tricky. Whence the principle: with careful design and attention to detail, an n-processor system can be made to perform nearly as well as a single-processor system. (Not nearly n times better - nearly as good in total performance as you were getting from a single processor.) You have to be very good – and have the right problem with the right decomposability – to do better than this.

Harper’s Principle of Scaling

As CPU performance increases by a factor of n, user-perceived software performance increases by about the square root of n. (The rest is used up by software bloat, fancier user interface and graphics, etc).

The Delmasso Exclamation Mark Principle

The higher you go in the structure of an organisation, the more exclamation marks are implicitly attached to everything you say or write. So when a junior person says something, people evaluate the statement on its merits. When the VP says it (even in organisations and cultures that aren’t great respecters of hierarchy and status, like software engineering), everyone takes it much more seriously. It means that as you move up the organisation, you have to be increasingly careful about what you say, and especially you have to be increasingly moderate (which doesn’t always come naturally!).

The Dog-House Principle

A dog-house is only big enough for one dog. So if you don’t want to be in the dog-house, make sure somebody else is. I first heard this applied to family situations (specifically to someone’s relationship with his mother-in-law) but it seems more generally applicable.

Mick's Principle of Centrally Managed Economies

There are three reasons why centrally managed economies don’t work. The first is obvious, the second less so, and the third not obvious at all. This principle was formulated by a friend of mine during the dying days of the Soviet Union. Its applicability to centrally managed economies is obvious, but it should be borne in mind whenever an organization’s success model involves the slightest degree of central planning.

The first problem is that they assume a wise central authority that, given the correct facts, can figure out the right course of action for the next Five Year Plan. It is fairly obvious that such wisdom is unlikely to be found in practice.

The second problem is that even if such a collection of wisdom did exist, it would only succeed if given the correct input. In the case of the Soviet Union, this means the state of production in thousands of factories, mines and so on, as well as the needs in thousands of towns and villages. But all of this input will be distorted at every point.

The lowliest shopfloor supervisor will want to make things look better than they are, while the village mayor will make things look worse so as to get more for his village. And at every step up the chain of management, the information will be distorted to suit someone’s personal or organizational agenda. By the time the Central Planning Committee gets the information about what is supposedly going on, it has been distorted to the point where it is valueless.

The third problem is the least obvious. Suppose that by some miracle an infinitely wise central committee could be found, and that by another miracle it could obtain accurate information. Its carefully formulated Five Year Plan must now be translated into reality through the same organizational chain that amassed the information, down to the same shopfloor supervisor and collective farm manager. At every step the instructions are subject to creative interpretation and being just plain ignored. The Central Tractor Committee, knowing the impossibility of getting parts to make 20,000 tractors, adds an “in principle” to the plan. The farm manager, knowing that his people will never get enough food supplies to live well through the winter, grows an extra hundred tons of corn and stocks it. And so on.

Acknowledgements

Tony Lauck led the Distributed Systems Architecture group at DEC, and was my manager for several years. As a manager he was pretty challenging at times, but as a mentor he was extraordinary. He had (still has, I guess) the most incredible grasp of what you have to do to get complicated systems to work, or perhaps more accurately what you have to avoid doing. At first encounter, spending a whole day arguing over some fraction of the design of a protocol seemed like pedantry in the extreme. It was only later that you came to realise that this is the only way to build complex systems that work, and work under all conditions. With the dissolution of DEC, the “Lauck School of Protocol Design” has become distributed throughout the industry, to the great benefit of all. A whole book could be written about it, citing examples both positive and negative – were it not for the fact that Tony is still very much alive, BGP for example would have him spinning in his grave.

Jac Simensen was my boss (or thereabouts) at DEC for several years. It would be an exaggeration to say he taught me everything I know about management, but he was the first senior manager I saw in action from close-up, and one of the very best managers I’ve ever worked for. He certainly gave me an excellent grounding when I quite unexpectedly found myself managing a group of nearly 100 people, by a long way the biggest group I’d ever led at the time.

Friday, 11 September 2020

Bread



Today's Loaf

At the start of the shelter-in-place order for the Bay Area I decided to try my hand at making bread. Me, and tens of millions of others. I got started thanks to a friend who gave me a bag of Italian Doppio Zero flour, and thanks also to a small pack of yeast I happened to have. Both ingredients had completely disappeared from supermarket shelves. I found a recipe on the web - which turned out to be seriously flawed. Still, my first effort was pleasant to eat, and encouraged me to keep trying.

Six months have now passed. I've made bread twice every week since then, on Friday and Sunday mornings, which amounts to about 50 loaves. I think that now I've got the hang of it. There are really only two ingredients in bread, flour and water, plus of course yeast. Yet there are amazing variations in what you get with only small changes in the ingredients.

But I'm getting ahead of myself. My two-pound bag of Doppio Zero was quickly exhausted. We had some all-purpose flour, but bread should be made with proper bread flour, which has a higher protein content than normal flour. The protein is what turns into gluten, which is what gives bread its structure and texture. Normally you can buy it in the supermarket, but not in March 2020.

Looking online, I discovered a high-end flour producer (Azure) who claimed to have ten-pound bags of bread flour available. I ordered one, and hoped it would arrive quickly. But it didn't. When I chased them, they assured me it was on its way, but delayed due to the problems arising from the pandemic. That seemed fair enough, but it didn't help me.

I looked some more, and discovered that I could get a fifty-pound sack of flour from King Arthur, the top name in flour in the US. It seemed crazy to buy that much, but it didn't cost all that much and it would solve my problem. I placed the order, intending to cancel the Azure order when the new order shipped.

You can guess what happened. Literally within minutes of the King Arthur confirmation, Azure sent me a shipping notice. The two showed up within a day of each other.

The First Attempt - just wheat flour, and horribly over-hydrated

I had a few packets of supermarket yeast, but given we couldn't know when bread ingredients would reappear on the shelves, I needed more. After a similar sequence of events to the one with the flour, I ended up with two packs of yeast as well, a total of three pounds - enough for about 160 loaves. On the bright side, it keeps for a long time. Incidentally, the Fleischmann's stuff doesn't make very good bread.

Tricks


A bunch of tricks I've learned along the way...

One thing you quickly discover with bread is the importance of the "hydration", which is to say the amount of water. Too little gives you a very dense bread, while too much delivers decent bread but the dough is a sticky mess that won't hold any kind of shape. I've found 71% works very well, for example 340 ml of water with 480g of flour. This may seem over-precise, but when on occasion I've got sloppy and used an extra 10ml (2%) of water, the dough is really different.
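
For the arithmetic-minded: hydration is just the weight of water divided by the weight of flour, and 340 ml of water weighs 340 g, so 340/480 ≈ 71%. The sloppy extra 10 ml takes it to 350/480 ≈ 73% - a two-point change that is, surprisingly, enough to feel in the dough.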

Early on I tried adding hazelnut flour to the normal wheat flour. I add 30g of it to 450g of bread flour. That gives a delicious nuttiness to the taste, and also contributes to the crispness of the crust. I tried walnut flour too. That gives a different taste and less of a crust, but it's interesting too.

At first I tried to knead the bread by hand. It's very satisfying, but it takes a long time and makes your wrists ache. Now I put the flour (and a tiny amount of salt) in the mixer, add the yeast starter, then slowly trickle in the remaining water while the mixer runs. I leave it for ten minutes, occasionally stopping the mixer to scrape the dough off the mixing hook. After that a quick, one minute hand knead finishes everything off and gets the dough to the right texture.

Personally I like bread to have a crisp, crunchy crust. It's tricky to get that to come out right. It all has to do with the way the starches react in the early stages of baking. Industrial bread ovens have a mechanism for injecting copious amounts of steam at the right time. The idea is that in the early stages, the surface is kept moist by steam condensing on the relatively cool dough. This promotes the right reactions in the starch, leading eventually to the Maillard reaction which turns starch and sugar into delicious light brown caramel.

Since I don't have an industrial oven, I have to improvise. I put a shallow pie dish of water in the oven when I turn it on. By the time it is at its operating temperature of 500°F (250°C), this is boiling nicely, creating a very humid atmosphere in the oven. Then, when I put the bread in, I empty half the water onto the floor of the oven. This fills it with steam (and generally makes a bit of a mess on the floor too). I leave the pan in the oven for the first five minutes of baking time. When I open the oven to remove it, a hot blast of scalding steam emerges - showing that it has done its job.

I bake hazelnut bread for a total of 29 minutes: 5 with the water pan in and the rest without. This results in a perfect, crunchy crust, just beginning to turn deep brown in the darkest places along the top, yet moistly soft inside. Walnut bread does better with a couple of minutes less. Really the goal is to take it out just before it burns.

It has been a challenge to get bread to be the right shape, which for me means roughly circular and 3-4" (8-10 cm) across. If you stretch the dough to the shape you want, it has an annoying tendency to have "memory" and go back to its original shape in its first couple of minutes in the oven. What I finally found works is to flatten the dough, as part of the final "knocking back" which removes over-large bubbles. I work on the flattened, pizza-like dough to get it the right length, then fold it over and roll it like a giant sausage roll to get the circular shape.

Even so, a loaf sometimes "explodes" - it develops a big split along one side. This doesn't affect the flavour but it's not very pretty. Cutting slits across the top, half an inch or so apart and quite deep, helps a lot. The other important thing is to make sure the dough joins together properly. Generally I sprinkle flour around when working with dough. That coats the surface and makes it stick less, but it also stops it sticking to itself when you roll it up. A sprinkling of water (not much!) helps, as does massaging the join together.

I generally split off some of the dough to make a couple of rolls. About 80g of dough gives a little roll, perfect for breakfast, with a disproportionate amount of deliciously crunchy crust.

At first I had problems with bread sticking to the baking tray. A piece of parchment paper covering the tray solves that problem. Surprisingly, considering that the ignition temperature of paper is famously "Fahrenheit 451", it chars a little at 500°F but doesn't burn.

Recipe


I use the following ingredients to make a "one pound" loaf:
  • 450g of King Arthur bread flour
  • 30g of ground hazelnut flour
  • a pinch of salt (about 3g - the amount is fairly critical and a matter of personal taste)
  • 8g of yeast
  • 5g of sugar
  • 340ml of water
The water and flour can be adjusted as long as they are in the same proportion.
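
If you want a bigger or smaller loaf, everything scales proportionally. A throwaway sketch, using the quantities above (grams throughout, ml for the water):

    # Scale the recipe while keeping every ratio (hydration included) fixed.
    recipe = {"bread flour": 450, "hazelnut flour": 30, "salt": 3,
              "yeast": 8, "sugar": 5, "water": 340}

    def scale(factor):
        return {name: round(qty * factor, 1) for name, qty in recipe.items()}

    print(scale(1.5))   # a loaf half as big again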

I mix up the yeast and sugar along with 30ml of water and 20g of flour and leave them somewhere warm (around 40°C, 100°F) for 15-30 minutes. That gets the yeast going well. This is mixed in with the remaining solid ingredients and the remaining water prior to kneading.

Since getting up at 4am isn't really my thing, I make the dough the evening before. Once it is kneaded, I leave it to rise for a couple of hours, then put it in the fridge overnight. I generally get up briefly around 6am, and use that to get the dough back out and let it warm back up to room temperature by the time I do the final stages starting some time between 8 and 9. A couple of times I have started too late for that, and left it out overnight. It doesn't seem to make much difference to the final result.

Flour


I was very surprised, a couple of weeks ago, to realise that I was near the end of my fifty pound sack of King Arthur flour. When it ran out I switched to the ten pound bag of Azure. This turned out to give a completely different bread! The Azure flour is grey rather than white. The bread is denser, tastes different, and has a less crunchy crust. Obviously this is all a matter of personal taste, but both of us greatly prefer the King Arthur flour. Now that flour is easy to obtain again, I have bought another ten pounds of King Arthur. That seems to give even better results than the original sack, though I have no idea why.

Sourdough


My friendly English baking neighbour once gave me a sourdough starter. This is supposed to have all kinds of mystical, magical properties. You have to feed it - to the point that if you go away for a few days, you have to arrange with the cat sitter to feed the sourdough as well. There's something very primal about it all, which I think is its attraction.

It also totally failed to work. Luckily some conventional yeast, added just before going to bed, did work. I was feeling a bit bad about what I'd say to my neighbour. Then she reported the exact same experience.

So much for sourdough.

Sunday, 30 August 2020

Some Network History - Open Systems Interconnection (OSI)

The standards for Open Systems Interconnection (OSI) were a big part of my job from 1980 until 1991. This is a very personal view of what happened, and why it all went wrong.

Background

It's hard to remember now that computers were not always networked together. When you buy a $10 Raspberry Pi, or a $50K server, it's connected to the Internet as soon as you turn it on. Not only can you find cute kitten pictures, but it will load new software and do all sorts of behind-the-scenes things you probably aren't even aware of.

It wasn't always so. In the 1970s, "computer" meant a giant mainframe, typically with a whole building, or a floor of one, to itself. They cost a fortune, and they were self-contained - they didn't need to communicate with anything else. The nearest thing to networking was "Remote Job Entry" (RJE) - typically a card reader and a lineprinter, with a controller, connected over a high-speed data line. High speed as in 9600 bits/sec, or about a ten-thousandth of typical WiFi bandwidth. It would take a long time to load even a single kitten picture at that speed. These were used in places that needed access to the computer, but couldn't justify the cost of one - branch offices, remote buildings on a campus and so on.

Each of the mainframe companies - IBM and the "BUNCH" (Burroughs, Univac, NCR, Control Data and Honeywell) - did RJE their own way. There were no standards or industry agreements, even though they were all doing exactly the same thing. Communication was over a "leased circuit" - a dedicated, and horribly expensive, telephone line directly between the two places. There was nothing that could be called a "network".

The company I worked for, DEC, was the pioneer for smaller computers - minicomputers. These were inexpensive enough that you could have several, which typically needed to share data - for example to run the machines in a factory. For this, DEC had defined its own network architecture, called DECnet, the first commercial peer-to-peer network. It allowed DEC's VAXes and PDP-11s to communicate with each other, to share files, access applications and various other things.

They also needed to access data held on the mainframe. For this, we wrote software that pretended to be an RJE terminal. To get data, we would send a pretend card deck that ran a job to print the file, then intercept the "lineprinter" output. A similar ruse would send data in the other direction. At one point I was responsible for all these strange "emulation" products. There was one for the IBM 2780 terminal, and one for each of the other mainframe manufacturers. They were a nightmare to maintain, because none of these RJE protocols was documented. They had been worked out by reverse engineering the messages over the data link. So we were constantly running into special cases that the original code didn't know about.

X.25 - The First "Open" Networking

The first inkling of something better came along in the mid-70s. The world's phone companies - at that time still nationalised "PTT"s - had got together through CCITT, their standards body, and come up with something called X.25. This allowed computers to connect just like on the telephone or telex networks. No prior arrangement was needed, you just sent a message which was the equivalent of dialing a phone call, and then you could send and receive data.

My first networking job at DEC, in 1979, was to implement X.25 for the PDP-11 and the VAX. Just a few countries had networks - the UK, France, Germany, and the US, which had two incompatible ones. Although there was a "standard", it had so many options and variations that every network was different and needed its own variant of the software. It was also expensive to use, with a charge for every single byte of data. Getting a connection was a challenge, since the whole concept was such a novelty for the behemoth monopoly PTT organisations.

Apart from the technical difficulties of X.25, there was a much more fundamental problem. As one industry wit put it at the time, "Now I've taught my computers to talk to each other, I find they have nothing to say." There was no standard way to, say, exchange files, or log in to a remote computer. Manufacturers could write their own, but that defeated the object of the "open" network in the first place.

There were a couple of efforts to improve this situation. In the US the Arpanet had been funded by the government in 1969, to connect research and government laboratories. It was this that ultimately led to the Internet, but that was a long way off in 1980. There was a similar effort in the UK, led by the universities, to develop standard protocols for common tasks. Each one was published with a different colour cover, so they were called the "Colour Book Protocols".

OSI is Invented

Having a different standard in every country wasn't a great idea either. International standards for all kinds of things have been produced by ISO, the International Organization for Standardization, since its creation in 1947 - everything from railway equipment to film standards (the ISO film speed, for example). Their work included computers. ISO 646, the international version of ASCII, was the first standard for character codes. It was the obvious place to put together standards that would be accepted worldwide.

The effort needed a name, and "Open Systems Interconnection" (OSI) was selected. 

By then, the concept of protocol "layers" was well established. X.25 had three layers: the physical layer that dealt with how bits were sent across the wire; layer 2 (data link) that got data reliably across a single connection; and layer 3 (network) that took it through the network via what are now called routers. The first task of the ISO effort was to come up with a formal model of protocol layering. This is probably the only piece of the effort that anyone has still heard of: the "seven layer model", drafted in 1979 and later published as ISO 7498.
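
For reference, here are the seven layers from bottom to top, with my own one-line glosses (a quick sketch, not the standard's wording):

    # The ISO 7498 layers, bottom to top, each with a rough job description.
    OSI_LAYERS = [
        (1, "Physical",     "bits on the wire"),
        (2, "Data Link",    "reliable transfer across a single link"),
        (3, "Network",      "getting packets through the network"),
        (4, "Transport",    "end-to-end delivery, fixing network errors"),
        (5, "Session",      "dialogue control (rarely more than pass-through)"),
        (6, "Presentation", "data representation and encoding"),
        (7, "Application",  "the protocols users actually care about"),
    ]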

The first four layers of the model - as described above, plus the "transport" layer 4 - were already well accepted and not controversial, though the details of their implementation certainly were. The last three layers were however more or less invented out of nothing and weren't aligned at all with the way application protocols were built, then or now.

The "session layer" (layer 5) was conceptually imported from IBM's SNA architecture, though all the details were completely different. It was extremely complicated, reflecting things like the need to control half-duplex (one direction at a time) modems. There wasn't a single application protocol that used it to do anything except simple pass through.

The presentation layer's overall goals were never very clear. What it turned into was a universal notation for data metadata and encoding, called ASN.1. It was useful, in that it allowed message formats and such to be expressed in terms of datatypes rather than byte layouts. But it was vastly overcomplicated for what it did.
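
The underlying idea, sketched here in Python rather than real ASN.1 syntax, with a made-up message type: describe the message as typed fields, and let standard encoding rules worry about the bytes.

    from dataclasses import dataclass

    @dataclass
    class LoginRequest:        # a hypothetical message type, for illustration
        username: str
        timeout_seconds: int

    # The pre-ASN.1 alternative was a byte layout spec: "octets 0-15,
    # username padded with spaces; octets 16-17, timeout, big-endian".
    # ASN.1 gave you the typed view, plus standard encoding rules (BER)
    # to turn it into bytes on the wire.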

The OSI Transport Protocol

My own involvement with OSI started in 1980. Definition of the OSI transport protocol was taking place in an obscure Geneva-based group called ECMA. DEC wanted to be involved, and sent me along. My first meeting was at the Hotel La Pérouse in Nice. The work was already well advanced. To call it a dogs' breakfast would be a big disservice to both dogs and breakfasts. There were groups who thought the transport protocol should rely entirely on the network for reliability, and others who thought it should be able to recover from a limited class of errors. Other arcane distinctions, including the need for alignment with CCITT - the telcos' standards club - meant it had no fewer than four separate "classes", which in reality were distinct protocols having no more in common than a few parts of the encoding.

My task was to add a fifth. All of the work so far was intended to work in conjunction with X.25, which provided a "reliable" network service. If you sent a packet it would be delivered or, exceptionally, the network could tell you that it had been unable to deliver something. It would never (in theory anyway) just drop a packet without telling you, nor misorder them. DECnet, as well as the emerging Arpanet, made a different assumption. They kept the network layer as simple as possible, and relied on the transport layer to detect anything that went wrong, and fix it. That meant a more complex transport protocol. This incidentally is how the Internet works, with TCP as the transport protocol.
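
The core mechanism shared by NSP, TCP and (later) TP4 can be sketched in a few lines. This is a stop-and-wait toy with hypothetical send and recv_ack helpers, not any real protocol - the real ones add windows, flow control and much else:

    import time

    def send_reliably(packets, send, recv_ack, timeout=1.0):
        # The transport layer's job over an unreliable network: keep
        # resending each numbered packet until it is acknowledged.
        for seq, data in enumerate(packets):
            while True:
                send(seq, data)                        # may be lost en route
                ack = recv_ack(time.time() + timeout)  # seq number, or None
                if ack == seq:
                    break                              # delivered; next packet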

I spent the next 18 months designing the "Class 4 Transport Protocol" (the others were numbered from 0 to 3, don't ask), TP4 for short. It worked in just the same way as DECnet's equivalent protocol, NSP, and as TCP, but the encoding had to be kept compatible, as far as possible, with the other classes, even though the operation was completely different. Practically speaking, a complete implementation of the OSI transport protocol required five completely separate protocol implementations.

I got a lot of guidance and help within DEC, but at ECMA and later ISO I was on my own. Nobody else cared about TP4, nor understood it. That suited me perfectly. It was published in 1981 as ECMA-72.

Maybe because I was really the only one doing any technical work in the group, when the current chair was moved on to another project by his company, I was asked to take that on. It was quite an honour - I was only 28, in the world of standards which (as in politics) tends to be dominated by people towards the end of their careers. That also meant that I got to attend ISO meetings, representing ECMA, the beginning of a long involvement. 

ISO adopted the ECMA proposal for the transport protocol, all five incompatible classes of it, without any technical changes. It was later published as ISO 8073.

Around this time I took up DEC's offer to move to the US for a while, to lead a team building software to connect to IBM systems using their SNA architecture. At least, that was what I was told. In reality, they already had someone for the job, and I was just backup. That gave me plenty of time to work with the network architecture team there, the people responsible for the design of DECnet. The team was really smart and had a big influence on my career, at DEC and subsequently.

ISO meetings were held all around the world, hosted by the various national standards bodies (like BSI, ANSI and AFNOR) and their industry members like IBM and DEC. In those early days I went to meetings in Paris, London, California, Washington DC, Tokyo and others. 

The day before the California meeting, in Newport Beach, we had a very hush-hush meeting at DEC. It was the only time I was in the same room as the CEO and founder, Ken Olsen, along with our genius CTO, Gordon Bell, and our head of standards. The occasion was a meeting with the CEO of ICL, the British computer company which was still important then, and a high-powered team on his side. ICL was convinced that IBM was trying to take over computer networking and impose SNA on the world. That would be a disaster for us, since SNA was very firmly oriented to the mainframe world and not designed for peer-to-peer computing at all. Ken was readily convinced that salvation lay in the creation of international standards that IBM would be obliged to follow, which is to say OSI.

This completely transformed my role in things. Until then, my standards work had been an interesting diversion, the kind of thing that large companies do pro bono for the good of the industry. I thoroughly enjoyed it but nobody at DEC really cared much. Suddenly, it was a key element of the company's strategy, with me and a handful of others at its heart.

In 1983 something extraordinary happened. We were invited by China to have our meeting there, the first international technical meeting that China ever hosted. That meeting, in Tianjin, deserves its own article.

The OSI Network Layer

Shortly after the Tianjin meeting there was a shake-up in the way the various working committees were structured, which left the chair of the network layer group (SC6/WG2) open. This was by far the most complex area of OSI. The meetings were routinely attended by nearly 100 people. It was also extremely controversial, and from DEC's point of view the most important area. I was astounded when I was asked if I'd be willing to chair it. I later learned about some of the negotiations behind this from Gary Robinson, for many years DEC's head of standards and an extremely wily political operator. (He was responsible for the tricky compromises that allowed Ethernet and other LAN standards to go ahead despite enormous fundamental disagreement - Token Ring and Token Bus were still very much alive.) In essence, the other possible candidates, all much more qualified and experienced than me, had too many enemies. I hadn't yet made any, so I became chair of what was officially ISO/IEC JTC1/SC6/WG2, the OSI network layer group, and went on to acquire plenty of my own.

The problem with the network layer was a complete schism between the circuit view of things and the packet view. The telcos had built X.25, at great expense, and saw that as the model for the network. The user of the network established a "connection", and packets were delivered tidily and in order across the connection. The packet view, which included DEC, was that the network could only be trusted to deliver packets, and then not reliably, and should make no effort to do any more. It could safely be left to the transport layer to fix up the resulting errors.

In OSI-speak, these were respectively the "connection-oriented network service", or CONS, and the "connectionless network service", or CLNS. By the time I arrived there had already been years of debate and architectural hypothesising about how to somehow combine these two views. This had generated one of the most incomprehensible "standard" documents of all time, the "Internal Organisation of the Network Layer" (IONL, ISO 8648). The dust was just about beginning to settle on the only way forward, which was to allow the two to progress in parallel. There was no compromise possible.

The telcos hated this, because it pushed their precious X.25 networks down into a subsidiary role underneath a universal packet protocol, making all of their expensively engineered reliability features unnecessary. From our (DEC) point of view, this was far better than the complex engineering required to somehow stitch together an "internet" from a sequence of connections. Building a network router is hard enough. There's no need, and no point, in making it even harder.

So by the time I was in charge of things, we had two parallel efforts. The CLNS side was led almost entirely by DEC, with excellent support from others in the US. As a result we were able to make rapid progress. We came up with a relatively simple protocol with none of the options, variants and other horrors that bedevilled OSI. It was standardized as ISO 8473, the Connectionless Network Protocol (CLNP).

As chair, I had a duty to be non-partisan. On the other hand, I had no duty to actively help the CONS camp. Between the complexity of X.25, the additional complexity of trying to use it as an internet protocol, and internal divisions within the camp, they had little chance of success. After years of work they never did come up with anything that could be built.

That said, this schism did enormous damage to OSI, and was a major factor in its ultimate demise. To us at DEC it was obvious that CONS was a doomed sideshow, but to an observer it just showed a complete inability to make decisions or come up with something that could be built.

DECnet-OSI

That really highlights the basic flaw of the OSI process. Creating complex technology in a committee just doesn't work. It's hard enough to get a network architecture right without having to embody delicate political compromises in every aspect of the design. Successful standards like TCP, IP and HTTP/HTML were designed by a single person or a small group under strong leadership. Where possible, we did the same thing at DEC. For example the routing protocol for OSI, universally called "IS-IS", was developed by a small team at DEC, and it still works. With modifications to support IP as well as OSI, it is still used by many of the world's large telcos. We managed to get it through the OSI process with hardly any changes.

At DEC we had whole-heartedly adopted OSI as the future of networking. DECnet, our very successful networking system, was rebranded DECnet-OSI and was to be completely restructured to use the OSI protocols. We even persuaded James Martin, a well-known author of IBM-oriented textbooks, to write a book about it. That probably deserves its own article too. As it turned out, DECnet-OSI never really happened. That was more to do with internal engineering execution problems than with OSI itself, since we carefully picked only the bits that could be made to work.

The OSI Transaction Processing Protocol (or not)

In 1987 I got involved in another part of OSI. IBM had never really tried to influence the OSI lower layers, or to make them like SNA. But suddenly they came up with the idea of imposing SNA on the upper layers. SNA had a very complex upper layer structure, mostly oriented around traditional mainframe networking like remote job entry. But they had finally woken up to peer-to-peer networking and added something called LU6.2 to support it. Their idea was to make LU6.2 an integral part of OSI, so that all applications of OSI would in effect be SNA applications. It was a good idea from their point of view, and was very strongly supported by senior management there.

We knew this was coming because of the way ISO works. It started as a "club" of the national standards bodies, and to a large degree still is. This means that proposals can't be submitted directly to ISO, they have to pass through a national standards body - or at least, they did at the time, things have changed a bit since then.

The question was, what to do about it? IBM were heavily constrained by the existing standards and projects. If they had come along with this five years earlier, it would have been much harder to stop, but now they had to find an empty spot where they could introduce it. This they did, under the guise of "transaction processing". So at the 1987 meeting in Tokyo, there was a "New Work Item" for transaction processing, as another application layer standard. To this were attached all of the IBM contributions, which is to say LU6.2 warmed over.

I got a call about a month before the meeting from DEC's CTO, saying, "John, we need you to go and stop this." In the standards process it is almost impossible to stop anything. Once a piece of work is under way, it will continue. Actually terminating a project or committee is virtually impossible. Typically committees continue to meet for years after they no longer serve any useful purpose. So if you want to stop something, you have to either divert it into something harmless, or ensure that it makes no progress.

An experienced chair knows that there are some people who, while working with the very best of intentions, will just about guarantee that nothing ever emerges. It's just the way they're made. I have had the good fortune to know several. You may ask, why "good" fortune? The answer is that if you don't want something to work out, you arrange for them to be put in charge of it. I couldn't possibly say whether something like this may have influenced the failure of the CONS work to deliver.

For IBM's LU6.2 proposal, though, this would not work. They had put some technically strong people from their network engineering centre in La Gaude, France in charge of it. In truth I had little idea what I would do until I got to the meeting. It turned out that there were three camps:

  • IBM and others who liked the idea of LU6.2 being part of OSI
  • Those who thought that making it part of the standard would act against IBM's interests, by making it easier to compete with them. While these people were "enemies of IBM" and in some sense on the same side as me, as far as this meeting was concerned, they were my opponents. For example, France's Bull was in this camp.
  • Those who didn't want it. This turned out to be just me, and ICL.
So I was hardly in a position of strength. In addition, I hadn't been able to make any official contribution to the meeting ahead of time. On the other hand, the people IBM had sent knew little about OSI and the way the upper layers had evolved. They seemed to believe they could do as they had, for example, with Token Ring (and as DEC and Xerox had with Ethernet as well) - just show up with a spec and get it approved as a standard. But things had already gone way too far for that. There were already too many bits and pieces of protocols and services defined.

This was their Achilles' Heel. In the end it was remarkably easy to divert the activity to a study of the requirements for transaction processing (and it turned out there weren't any), and how they could best be met with existing OSI work. Only then would extensions be studied. This was instant death to the idea of just sticking an OSI rubber stamp on LU6.2.

That all makes it sound very easy, though. I was on my own against a large group of people who all wanted me to fail. It was one of the toughest things I've ever done. Luckily there were a lot of DEC people and other friends in other parts of the meeting, so the evenings and the weekend were very enjoyable as usual.

There was one person at the meeting who genuinely frightened me. He was incredibly rude and aggressive during the formal meeting, to the point where it became very personal. It was a ten minute walk from the meeting place, just opposite the Tokyo Tower, to our usual hotel, the Shiba Park. I spent those ten minutes looking over my shoulder to be sure he wasn't following me.

That had an interesting consequence. The head of the US delegation was from IBM, and very much of the old school. He was close to retirement and, like most standards people of that era, very much a gentleman. A few weeks later, I was invited, along with DEC's head of standards, to a meeting at IBM's office in New York City. There the IBM guy apologised profusely, and very professionally, on behalf of both IBM and the United States - even though the person in question didn't work for IBM.

I don't exactly remember what happened after that meeting, but I think IBM just quietly dropped the idea and it faded away.

OSI Management

DECnet had powerful remote management capabilities, essential in a networked environment. We knew that if OSI was to be useful, it had to have the same. There was a management activity but for years it had been very academic and gone nowhere. There were some smart people in the UK who wanted management to work too, and between us we came up with everything required: a protocol, and a formal way to specify the metadata. In the end it never got implemented, because OSI was already struggling by the time it was ready. But it was a nice piece of work. It also got me to several interesting places I otherwise would have no reason to go to.

Why Did OSI Fail?


My final OSI meeting was in 1991, in San Diego. By then I had moved to a new job in the company and was no longer involved with the DECnet architecture. In any case the writing was on the wall: the OSI concept would happen, but it would happen through the Internet protocol suite under development in the IETF. DEC officially made the change shortly afterwards.

Why was OSI such a total failure? It was the work of hundreds of network experts, many of whom really were the top people in their fields. Yet hardly a single trace of it remains. On the other hand the concept of universal computer interconnection has been a huge success, way beyond the dreams of the OSI founders. All they hoped for was the possibility of open communication; they didn't expect it to be a constant feature of the way we use computers. The only thing is, it is all done using the protocols developed by the IETF and loosely called TCP/IP.

OSI was way too complex, with too many options and choices. It was a nightmare to implement, made worse because this was before open source caught on. Some companies tried to make a living selling complete OSI protocol stacks, but that was never really a success. At DEC we had a full OSI implementation several years before DECnet-OSI, but hardly anyone bought it - only a few academic and research users.

I think the main reason was that there was no compelling use case. That seems hard to believe now, but in 1990 it was a chicken and egg situation - until the connectivity was available, there was no use for it. My old boss at DEC said the main reason TCP/IP took over was that Sun was shipping it as part of their BSD-based software, and it was just there, free and available. Because of that, people started to find uses for it. That also happened to coincide with the invention of the World Wide Web in 1990. It was only a minuscule shadow of what it has become, but was a reason to be connected.

By 1995 it was obvious that the future of networking lay with the IETF and TCP/IP. In Europe there were still efforts to keep OSI alive, but without manufacturer support they went nowhere. Around 1997 I was paid to write a study of why the IETF had been so much more successful than ISO. The simple answer is that while IETF is a committee, or actually a collection of numerous committees, each individual standard is produced by at most two or three people. It is then discussed and may get modified, but it is not "design by committee". That is less true now than it was in 1995 - all organisations tend to become sclerotic with age. But back then its motto was "rough consensus and working code". It got stuff done.

Conclusion


From a personal point of view, OSI was one of the most interesting things I've ever done. It taught me a great deal about how to lead in situations where you have absolutely no official authority. It took me on many, many journeys to fascinating places around the world. It also provided my introduction to the woman who would later be my life partner, though that isn't part of this story.

It can be endlessly debated whether OSI was a complete waste of time and effort, or whether it postponed open networking long enough for IBM's SNA to lose its predominant role, making room for TCP/IP. We will never know.