Thursday, 12 September 2019

Flying the G1000 - a Six-Pack Pilot's Initiation

Getting Started

My plane is not flying at the moment, for reasons I won't go into. Instead I have a good deal with a local flying club, allowing me to fly any of their aircraft subject to a checkout. They have several recent fixed-gear Cessna 182s, which make a reasonable substitute for my retractable, turbo 1980 TR182.

Most of them are new enough to have a Garmin G1000 panel instead of the traditional six-pack of mechanically driven "steam gauges" like my own plane. They do also have some older steam-gauge 182s, and I've flown quite a bit in one of them, but it seemed a good opportunity to learn some new technology.

I started by looking at the manual, readily available online. It runs to over 500 pages, making it pretty much impossible just to sit and read from end to end. It struck me how similar it is to the familiar old Garmin GNS530, which I fitted to my plane when I bought it 17 years ago. I have over 1000 hours of flying with it now, so I hoped the transition to the G1000 would be straightforward. It also includes a very sophisticated autopilot, the GFC700, which has all the features you'd find in an airliner (well, except Cat III autoland) - vertical navigation, coupled approaches and so on.

G1000 in a Diamond DA40, on the way back to Palo Alto
A G1000 installation has two screens. The one in front of the pilot is called the Primary Flight Display (PFD) and replaces a normal six-pack of mechanical instruments. The one to the right is called the Multi Function Display (MFD) and contains all sorts of other things, including the moving map, flight plan and engine instruments. They are both covered in knobs and buttons, about 40 of them, which mostly do the same thing on each panel - but not always.

Flying the Simulator

Fortunately the club also has a G1000 flight simulator, made by Precision Flight Controls. It has a panel like a G1000-equipped 172, plus an X-plane based simulator with an instructor station that lets you set up weather conditions, create failures, position the aircraft and various other things. I quickly got myself set up on this, with the help of a friendly instructor.

The first challenge was flying the simulator. It seems hyper-sensitive, unlike a real 182 which is extremely stable. It took me an hour or so to be able to "fly" it smoothly and achieve decent approaches, with the G1000 serving just as a simple glass panel replicating the traditional attitude indicator and HSI. It's very hard to land, and although that's not really necessary for learning the G1000 it does seem like something you should be able to do. Every landing is "good" in the sense that you can always walk away from it, and even use the simulator again, but at first several of them turned into simulated crashes requiring a reset at the instructor station. Once I could get the plane stopped on the runway I decided that was good enough. It's hard to get any real feeling for how high you are above the runway, something which comes surprisingly easily when flying a real aircraft. On my one and only flight in a wartime B-25 bomber I landed it smoothly even though the sight-picture is very different from anything else I've flown.

My first couple of sessions with the simulator brought several moments of severe frustration of the "How the ***** do you do that?" variety. A big advantage of an old-fashioned panel, where each instrument stands by itself, is that you have a pretty good idea which buttons to try pressing even if you're not sure. For example, my panel has a GTX-345 transponder which includes a bunch of timers. Even if you have no idea how to get to the flight timer, there aren't too many things to try. With the G1000, the function could be anywhere in dozens of nested menus and soft-buttons. The flight timer is a case in point. It's there, but buried in one of the 'aux' pages - and certainly not accessed through the 'timer' soft-button, which would be much too easy.

Another example is the minimum fuel setting. It's nice to be able to set this, so you can get a warning if you reach it. In a 182 I never plan to land with less than 20 gallons. That's pretty conservative, enough for 90 minutes of flying, but with tanks that hold 88 usable gallons it's easy to do, and reduces the chances of becoming the subject of a feature article in the NTSB Reporter. There's a whole sub-menu for dealing with fuel management, allowing you to enter the actual amount of fuel in the tanks, and so it's obviously on that page. Wrong. It's under a sub-sub-menu of the Map setup page. There is some kind of logic to that, because all it does is to show a ring on the map where you will reach fuel minimums. But it certainly isn't intuitive.

Operating the simulator by myself was interesting. You could just position yourself at the start of the runway, then take off and fly just like the real thing. But it would waste a lot of time climbing and getting to the start of the approach. One nice thing about the sim is that you can position the airplane anywhere you want, for example just outside the initial approach fix. But this takes some acrobatics. If you just take a stationary airplane and position it at altitude, it instantly enters a power-off dive. That's recoverable but not really necessary.

In the end the routine I developed was to take off normally and set the autopilot to climb on a fixed heading. The next step is to leap over to the instructor station and position the aircraft at the altitude and location where you want it. But this disconnects the autopilot, so now you have to leap back to the pilot station and re-engage it, being careful to set up the same altitude as the one the sim thinks it's at. After a while it becomes routine, but there are lots of ways to mess it up. For example, before setting position, it's a good idea to think about terrain. Once I didn't, and on leaping into the pilot seat I was disconcerted to see the scenery a lot closer than it should have been, shortly before ploughing into the trees. I had set the position of the airplane in the hills, without first setting the altitude to something that would put me above them.

Another good thing about the sim is that you can set the weather conditions. As an instrument pilot you practice under the "hood" - actually a pair of glasses adapted so you can only see down to the panel. That does a reasonable job of simulating seeing nothing at all, as when you are in a cloud, but there is no way to simulate very poor visibility. An ILS or LPV approach can typically be flown with less than one mile, and without a sim it's pretty much impossible to know what that feels like unless you get lucky (or maybe unlucky) with the weather. Old-fashioned non-precision approaches are particularly hard. I tried the VOR 13 into Salinas, with minimums of 500 feet and one mile. You reach MDA and see... nothing at all, ploughing on through the murk, until just before the MAP you can faintly make out some runway lights. It's a great exercise but I wouldn't be very happy to do it for real. An ILS - or LPV to a similarly equipped runway - seems a lot easier, even to lower minimums. I flew the RNAV 25 to Livermore, with minimums of 200 and a half. As you reach decision height the approach lights are right there, allowing a further descent to 100 feet - no peering through the murk hoping to see something.

Time to Go Flying

After several hours on the simulator and numerous approaches, it was time to go fly for real. We flew three approaches, entirely using the autopilot down to minimums. The G1000 was a pleasure to use, and certainly a lot less stressful than hand flying. It does all seem a bit like a video game though.

With that flight over, I was signed off to go fly by myself. We took the same airplane down to San Luis Obispo for a fish taco lunch at Cayucos and an apple-buying excursion at Gopher Glen, surely the finest apple farm in the country. It was a perfect VFR day with very modest winds aloft, an excellent opportunity to give the G1000 a workout with the reassurance that if ever things started to get tricky, it would be easy to take over and hand fly. The goal, from a flying point of view, was to get comfortable with the G1000, so I let the autopilot do all the flying. Well, almost all - of course I had to do the takeoffs and landings. That led to the uncomfortable discovery that the wheels on the fixed-gear 182 are an inch or two lower than on my retractable. It's not much, but it's the difference between a perfectly smooth landing and one that raises my wife's eyebrows.

Vertical Navigation (VNAV)

One thing I really wanted to try was the autopilot's VNAV (vertical navigation) feature. The idea is simple enough. Instead of telling it the climb or descent rate you want (VS mode) or the airspeed (FLC mode), you just tell it the point at which you want to reach a certain altitude, and it figures the rest out for itself. If you are flying an instrument approach or standard arrival (STAR), the altitudes are built in and are displayed on the flight plan page beside each waypoint. For VFR flight, you can create a waypoint on the final leg to the airport, create a "track offset" a given distance before it, and set an altitude for that. For example, on my way to Palo Alto I set a waypoint 5 miles before the field with an altitude of 1500 feet. The G1000 figures out where it needs to start the descent to meet that, for a specified (but not normally changed) descent profile, e.g. 500 ft/min. As long as you press the right buttons at the right time, it will fly the descent all by itself, leaving you only to monitor the throttle.
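The arithmetic behind the top-of-descent point is simple enough to sketch. This is a hypothetical helper, not Garmin's actual logic - just the time-and-distance calculation for a fixed vertical-speed profile:

```python
def top_of_descent_nm(alt_now_ft, alt_target_ft, groundspeed_kt, descent_fpm=500):
    """Distance (in nm) before the target point at which to start down,
    assuming a constant descent rate and groundspeed."""
    alt_to_lose = alt_now_ft - alt_target_ft
    minutes = alt_to_lose / descent_fpm        # time the descent will take
    return groundspeed_kt * minutes / 60.0     # distance covered in that time

# Cruising at 6500 ft, wanting 1500 ft at the offset waypoint, 120 kt over
# the ground at 500 ft/min: the descent takes 10 minutes, i.e. 20 nm.
```

The real system continuously recomputes this as groundspeed changes, which is why the "time to top of descent" readout counts down unevenly.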

Sounds good, except that the documentation for how to use the feature is terrifying. It runs for several pages in the manual, mainly telling you all the things that will make it refuse to do what you want and other things that can go wrong. For example, the altitude associated with a waypoint can be shown in four different ways: in blue or in white, and in a large or a small font. They all mean different things and woe betide you if you can't remember which means what. But after reading it a couple of times and trying it in the sim, I realised that normal operation is pretty simple.

The flight plan panel shows, among many other things, the time to the "top of descent" that has been calculated. As you fly along this gradually goes down, until eventually it gets to one minute. At that point the autopilot status line on the PFD changes. There are two things you have to do, assuming you're currently flying in altitude hold: first press the VNAV button, and then set the desired altitude to something less than the first target. If you're planning to land, it makes sense to set it to minimums for the approach. If you don't reduce the desired altitude, it doesn't descend. But assuming you do, one minute ticks by and then the nose drops, the annunciator changes to say VPTH, and down you go. If there are multiple step-downs (rare these days), it will level off between them but pick up the next one and keep on flying down them until the glideslope activates.

Flying Approaches

There is one more button to press before you land. Once VNAV is active and you have been cleared for the approach, you press the APR button which sets the system up to capture the glideslope. (And don't do what I did once, fortunately in the sim, and press the AP button instead - which disables the autopilot. To my pleasant surprise, pressing it again simply re-enabled the autopilot and carried on where it had left off).

If you're used to a traditional HSI or CDI display, finding the glideslope on the G1000 is far from intuitive. Instead of a horizontal bar in the middle of the HSI, it appears as a magenta diamond to the left of the altitude tape. It took me a while to find it at first, though it's simple enough once you know. For a traditional ILS, it is active as soon as the physical glide slope signal is received. For a GPS approach (LNAV or LPV) it's a bit less obvious. It shows up at the first fix outside the FAF. It's the same for the 530W, and I remember a very frustrating moment flying the GPS into Palo Alto, first wondering why it wasn't there, and then wondering why it had suddenly shown up as I flew through the fix in question, ACHOZ.

The G1000 does a very nice job of flying the aircraft all the way down the approach to Decision Height - as long as you press the right buttons at the right time. Compared to hand-flying an ILS or LPV, it's very relaxing! You can set the DH or MDA, but once again it isn't obvious where. It's called "baro mins" (presumably because it works from barometric rather than radar altitude), and it's found on... the timer inset, unlike the flight timer. If you do manage to figure out how to set it, a serious-sounding voice calls out "minimums!" at just the right time, so it's pretty useful.

I've been lucky enough to have one chance to try the G1000 in actual IMC, and since the flight was around the Bay Area there was plenty of vectoring, course and altitude changes and everything else that ATC can do to make a flight more interesting. Everything worked perfectly. We flew the ILS into Santa Rosa, in perfect VMC, with the pleasure of watching it keep the runway on the nose down to DH. On the way back we were in actual as we were vectored onto the GPS into Palo Alto, including an initial VNAV section. I let it fly all the way down to DH, 460 feet (not currently permitted in IMC due to the Google construction, but we were already in VMC).

Odds and Ends

Since the G1000 knows all of indicated and true airspeed, as well as ground speed and current heading and track, it can figure out what the wind must be doing. It's very nice to see that displayed in a tiny inset on the PFD, giving an instant readout of headwind rather than trying to calculate it by mental arithmetic.
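The wind computation is just vector arithmetic: the ground vector (groundspeed along track) is the air vector (true airspeed along heading) plus the wind vector, so the wind falls out as a difference. A minimal sketch - the function name and conventions are my own, not anything from the G1000:

```python
import math

def wind_from_vectors(tas_kt, heading_deg, gs_kt, track_deg):
    """Estimate wind speed (kt) and the direction it blows FROM (degrees),
    given true airspeed/heading and groundspeed/track."""
    # Decompose each vector into (north, east) components.
    hn = tas_kt * math.cos(math.radians(heading_deg))
    he = tas_kt * math.sin(math.radians(heading_deg))
    gn = gs_kt * math.cos(math.radians(track_deg))
    ge = gs_kt * math.sin(math.radians(track_deg))
    # Ground = air + wind, so the wind's "blowing to" vector is ground - air.
    wn, we = gn - hn, ge - he
    speed = math.hypot(wn, we)
    # The direction the wind blows FROM is the reciprocal of "blowing to".
    direction = (math.degrees(math.atan2(we, wn)) + 180.0) % 360.0
    return speed, direction
```

Heading north at 100 kt TAS while only making 80 kt over the ground, for example, resolves to a 20 kt wind from dead ahead.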

The PFD includes a Flight Director (FD), which is the airplane telling you how it thinks you ought to be flying it. The idea is simple: your current attitude is shown by a pair of yellow lines, which you should try and line up with the magenta lines of the FD. Airline pilots swear by them, and so does my friend who happened to get one in his plane. For myself, I don't really see the point. I'm happy for the plane to fly itself, the yellow and magenta lines always snuggled up together, but if I'm hand flying then I don't really need it. It reminds me of the annoying indicators on stick-shift cars telling you that it thinks you should change gear. At least you can turn the FD off, though it's easy enough to ignore it.

The MFD normally shows a large map, showing you where you are relative to the scenery, in much better detail than the 530. It's a nice feature although you can always look out of the window, in VMC anyway. It is good to see where other aircraft are (thanks to ADS-B) relative to the scenery - it is surprisingly hard to see them even when in theory you know where they are. The map also includes terrain warnings - if it's red, don't go there. It's good while you're in the air though a bit dazzling when you're taxiing, since naturally everything is red then. You can turn it off though it's a good idea to remember to turn it back on again, especially at night or in IMC.


I've enjoyed learning and flying the G1000, and I'll miss it when I go back to my own 1980 panel, especially the very capable GFC700 autopilot.

When I started I thought, how different can it be from the good old Garmin 530? In many ways it's very similar, but remembering which buttons to push when is very different and significantly harder because there are just so many of them.

For someone who flies regularly and can stay current with where everything is and which button to push when, it is really an excellent system. I would worry about it though for the typical PPL flying an hour or two per month - it would be just too easy to need some feature and blank completely on how to get to it.

Wednesday, 3 July 2019

The Numerologist's Birthday Cake

Everybody loves to blow out candles on their birthday cake. The problem is, once you get past about 10, the cake becomes a serious fire hazard if you stick with tradition and have one candle for each year. Once you become a fully qualified adult, you'd have to be a committed pyromaniac to try. So people make compromises, just one candle, or one per decade for those "important" birthdays.

That's a bit boring though. Being interested in number theory, I came up with another idea: make the number of candles equal to the number of non-distinct prime factors of your age.

"What...???" I hear you thinking. But trust me, it's quite simple really.

Every number can be expressed as a product of prime numbers, if it isn't one itself. To recap: a prime number is one that can't be divided by anything except itself and 1, like 2, 3, 5, 31, 8191, ... and many others in between and beyond. There's a simple and elegant proof that they go on for ever - there is no "largest prime number". Unfortunately there is no room for it in the margin here.

So for numbers which are not prime - called composite numbers - you can multiply prime numbers together to make them. Here are the first few:

     4 = 2 * 2
     6 = 2 * 3
     8 = 2 * 2 * 2
     9 = 3 * 3
    10 = 2 * 5
    12 = 2 * 2 * 3
    14 = 2 * 7
    15 = 3 * 5
    16 = 2 * 2 * 2 * 2
    18 = 2 * 3 * 3

and so on. It's easy to show that for any given number, there is exactly one way to do this. Some more examples:

    99 = 3 * 3 * 11
   255 = 3 * 5 * 17
   256 = 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2

The important thing, for my birthday cake, is how many numbers you have to multiply together, whether they are the same or not. So for 4, it's 2, for 16, it's 4, for 18, it's 3 and so on. For 256 it would be 8, but it's not very likely to happen.
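This candle count is what number theorists write as Ω(n), the number of prime factors counted with multiplicity. A short sketch using trial division (the treatment of ages 0 and 1, which have no prime factorisation, is my own convention):

```python
def candles(age):
    """Number of prime factors of `age`, counted with multiplicity,
    so 12 = 2 * 2 * 3 gets three candles. Ages below 2 get a single
    token candle (my convention; they have no prime factorisation)."""
    if age < 2:
        return 1
    count, n, p = 0, age, 2
    while p * p <= n:
        while n % p == 0:   # strip out each factor of p
            count += 1
            n //= p
        p += 1
    if n > 1:               # whatever remains is itself prime
        count += 1
    return count
```

Running it over a lifetime of birthdays confirms the examples above: two candles at 4, four at 16, three at 18.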

The nice thing about this is that the number of candles varies with each year, but it never gets too big. What's the largest number of candles you can ever need? Well, 64 is 2*2*2*2*2*2, i.e. 6 candles. The smallest number with any given quantity of candles will always be a power of 2 (i.e. a number where all the factors are 2). The next 6 candle birthday is 96, which there is some chance of making. The next one would be 144, which is pretty improbable.

The first birthday with more than 6 candles would be 128, which is 64*2 and hence needs 7 candles. I don't think it's worth saving up for that seventh candle, really.

Birthdays which correspond to a prime number, like 53, only get one candle. Once you get past 3, there will never be two consecutive one-candle birthdays, since even numbers are always 2 times something. The exception is 2 itself, which is the only even prime number.

Then there are "perfect" birthdays. A perfect number is something rather rare, one where every number that divides it (whether prime or not, including 1 but excluding the number itself) adds up to the number itself. The smallest is 6:

    6 = 1 * 2 * 3 = 1 + 2 + 3

The next one is 28:

   28 = 1 + 2 + 4 + 7 + 14

After that they get big very quickly: the next two are 496 and 8128. So there are only two perfect birthdays that any of us are likely to see. (All known perfect numbers are even, incidentally. There is a simple formula for finding them. It is one of the great unknowns of number theory whether there are any odd perfect numbers. If so, they are huge: no smaller than 10^1500. It seems very unlikely, but nobody has managed to prove it either way).
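Checking perfection by brute force is easy at birthday scale. A small sketch (the function name is my own): it sums each divisor up to the square root together with its paired cofactor.

```python
def is_perfect(n):
    """True if the divisors of n (including 1, excluding n) sum to n."""
    if n < 2:
        return False
    total = 1                      # 1 divides everything
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:        # count the paired divisor once
                total += n // d
        d += 1
    return total == n
```

It confirms 6, 28, 496 and 8128, and nothing else below ten thousand.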

I thought of this because it will soon be my sister's 81st birthday - the first odd birthday with 4 candles (81=3*3*3*3). The next one would be 135 (3*3*3*5), which probably isn't worth waiting for.

Monday, 11 February 2019

Airport Codes

Every experienced traveller is familiar with those three-letter codes that identify airports: SFO, LHR, CDG, JFK. Mostly their meaning is fairly obvious, though there are some - ORD, YYZ, GEG - that are obscure. Less well known is the parallel system of four-letter codes used by Air Traffic Control and by pilots for navigation. So while the passengers, and the cabin crew, think they are on a flight from SFO to LHR or from CDG (Paris Charles de Gaulle) to NRT (Tokyo Narita), on the other side of the cockpit door they are going from KSFO to EGLL and from LFPG to RJAA.

Three Letter Codes

The three-letter codes are assigned by IATA, the airlines' trade association. There is no system, an airport can request any code it wants and, if it is free, it will be granted. Usually an airport chooses something mnemonic: SFO, BOS(ton), AMS(terdam), FRA(nkfurt). Sometimes the mnemonic is for the name of the airport rather than the city it serves: JFK, HND (Tokyo Haneda), ARN (Stockholm Arlanda), ORY (Paris Orly).

Airports sometimes change names, and sometimes new airports are built which replace an older one. Sometimes the codes follow, sometimes they don't. When New York's Idlewild (IDL) was renamed John F Kennedy following his assassination, it became JFK. But when Chicago's Orchard Field (ORD) was renamed O'Hare, it kept its old code. LED made perfect sense for Leningrad's airport, but is less obvious now the city is called St Petersburg. The same applies to PEK(ing) for Beijing and CAN(ton) for Guangzhou. Hong Kong got a brand new airport, replacing the cramped and challenging Kai Tak, but kept the same code, HKG.

That still leaves quite a few mysteries. Sometimes the code reflects local knowledge: Shanghai is PVG for its immediate locality, Pudong. Cincinnati, Ohio is CVG because the airport is actually across the state line in Covington, Kentucky.

Canada chose to have all its airport codes begin with Y. The story goes that originally they used two letter codes, for example VR for Vancouver. When the world standardised on three-letter codes, they just added Y to the front, making YVR. Since then, though, they seem to be allocated at random. The best known is YYZ, for Toronto.

In the US, the FAA has a say too. It assigns three-letter codes for many small airports with no airline service, which frequently conflict with IATA codes. One of the most flagrant is HND, a reliever airport for Las Vegas in Henderson, Nevada. But IATA's HND is Tokyo Haneda, the fifth busiest airport in the world. Presumably anticipating commercial service, Henderson now has an IATA code too, HSH.

The FAA reserves the initial letter N for US Navy facilities - for example Google's home airport, Moffett Field in Mountain View, California, which is NUQ. In consequence, civil airports have to jump through hoops to avoid the letter: EWR for Newark, New Jersey, ORF for Norfolk, Virginia, BNA for Nashville, Tennessee. In the past it also reserved K and W, which were used for radio stations, so Key West, Florida is EYW. But that seems now to have been relaxed, allowing for example WVI for Watsonville, California.

Like Canada, the US originally used two-letter codes. Generally these just got an X added to them, e.g. LAX, PDX (Portland, Oregon). San Francisco was SF, but added an O to become the more mnemonic SFO. The letters Q and X often show up as filler letters, for example TXL for Berlin Tegel.

Four Letter Codes

So much for the three letter codes. ICAO, the International Civil Aviation Organization, assigns four letter codes. In the US, this is very simple. The country is assigned the initial letter K, which it simply applies as a prefix to the FAA code. So San Francisco becomes KSFO, New York JFK becomes KJFK. Canada did the same thing, with initial letter C, so Toronto is CYYZ, Vancouver is CYVR.

There aren't enough letters for each country to have its own, so almost all countries have a two letter identifier. The first letter identifies the region, e.g. E for northern Europe, and the second the country, so EG is the United Kingdom. Within that, every country does what it wants. Many countries assign the third letter to some kind of internal region, often on a basis which is hard to figure out. EGL is the region around Heathrow, whose full code is EGLL. White Waltham airfield a few miles to the west is EGLM. This does lead to oddities. The code EGGW is assigned... to Luton, north of London. Gatwick, south of London, is EGKK. I'm sure this once made sense to someone.

France has the prefix LF. L is southern Europe, and is full - all 26 letters are in use, so when Kosovo became a country it had to steal its prefix from the North Atlantic region, becoming BK alongside BG (Greenland) and BI (Iceland). LFB is the Bordeaux region, so Toulouse is LFBO, which makes no sense at all.

A handful of other countries have single letter codes. U was the Soviet Union. Some codes are assigned to former USSR regions, e.g. UK for Ukraine. Z is mainly China, though ZK and ZM are North Korea and Mongolia respectively.

Australia somehow managed to wangle Y for itself, though it has fewer airports than many two-letter countries. It could have taken the same path as the US and Canada, so Melbourne for example would be YMEL. But somewhere a bureaucrat decided this would be too simple, so they have the same regional approach as the UK. Still, YM is the Melbourne region, but evidently the same bureaucrat's compulsive nature would not allow YMEL. Melbourne's main airport is YMML, and Sydney's main airport is YSSY.

The South Pacific region is N, allowing New Zealand to be NZ - surely not a coincidence. This also gives rise to the only airport in the world whose ICAO code is the same as its name: NIUE.

Japan does things differently. Its best known airport is Tokyo Narita, which becomes RJAA. Tokyo Haneda is RJTT, presumably T for Tokyo. The old Osaka airport is RJOO, though the new one, generally known (like LAX) by its IATA code, KIX, is RJBB.

It is one of life's great mysteries why the US was assigned K, rather than U or A which went to Western South Pacific, e.g. AY for Papua New Guinea. Strictly, this only applies to the "lower 48". Alaska is PA, conveniently allowing Anchorage (IATA ANC) to be PANC, while Hawaii is PH, allowing Honolulu (IATA HNL) to be PHNL.

Further Reading

There's a wonderful site that lists and explains nearly all the three-letter IATA codes. For the ICAO codes, the Wikipedia page is the best resource. There are also quite a lot of articles out there covering various interesting subsets of the three-letter codes, for which Google is your friend.

Thursday, 3 January 2019

Dr Larry Roberts, RIP - a personal retrospective

I learned yesterday that Dr Larry Roberts passed away on December 26th, at what is, these days, the relatively young age of 81. I had the good fortune to work closely with Larry at his company Anagran, and since then also.

To me he was always just Larry, not Dr Roberts or Dr Larry. We were colleagues (though he was my boss) and we worked closely together. It was a privilege to know him this well. He was quite a humble man close up, though this wasn't at all the common perception of him. It fell to me to take his very visionary technical concepts, and turn them into something an engineer could go off and build. Let's just say there was sometimes quite a gulf between the two.

Larry created Anagran to pursue his conviction that flow-based routing was a superior technology, that could change the way the whole Internet worked. That was fitting, because if any one person could be said to have created the Internet, it was Larry. Other people make the claim, on the basis of having invented some core technology. But Larry was the person who convinced the US Government, which is to say the defense research agency (DARPA), to fund it. That made it accessible to every university and many private companies, long before most people had even heard of it - my then-employer was connected to the Arpanet, as it was called then, in the early 1980s.

A brief explanation of flow routing is as follows. Conventional (packet based) routers look at every data packet, at the addresses and other information in it, to decide how to treat it and where to send it. It takes incredible ingenuity to do this fast enough to keep up with high-speed data transmission. Larry's idea was to do this only once for each network connection (e.g. each web page), thereby amortizing the work over, these days, hundreds of packets. It's not simple though, because for each packet you have to find the connection - the "flow" of flow routing - that it relates to. This too requires considerable ingenuity. By 2000 or so, the engineering tradeoffs were such that flow routing was demonstrably cheaper to build. However the established vendors, especially Cisco, had invested huge amounts and large teams in the technology of packet routers, and weren't about to make the switch.
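The core idea can be sketched in a few lines: make the expensive routing decision once per flow, cache it keyed by the packet's 5-tuple, and reuse it for every later packet of the same connection. All the names here are invented for illustration - a real flow router does this in hardware, with flow timeouts, eviction and much cleverer lookup structures:

```python
# Toy flow-routing sketch: the flow table maps a connection's 5-tuple
# to a forwarding decision made only once, on the flow's first packet.
flow_table = {}

def full_routing_lookup(dst_ip):
    # Stand-in for an expensive longest-prefix-match routing decision.
    return "eth1" if dst_ip.startswith("10.") else "eth0"

def route_packet(src_ip, dst_ip, src_port, dst_port, proto):
    key = (src_ip, dst_ip, src_port, dst_port, proto)
    decision = flow_table.get(key)
    if decision is None:
        # First packet of this flow: do the full lookup and cache it.
        decision = full_routing_lookup(dst_ip)
        flow_table[key] = decision
    return decision
```

Every subsequent packet of the flow costs only a hash lookup, which is where the amortization comes from.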

In 1998 Larry created Caspian Networks to pursue this idea, attracting huge amounts of funding - over $400M during the life of the company. They did build a product, but the technology was barely ready for it. The result was large and expensive, and sold only to a handful of customers.

Larry realised this was the wrong approach. In 2004 he created Anagran, to apply the flow routing concept to a much smaller device. Thanks to a brilliant CTO from his past who he enticed to join him again, this was achieved. The Anagran FR-1000 was about a quarter the size, power consumption and cost of its traditional equivalent, the Cisco 7600. Technically, it was a huge success.

I left Cisco to join Anagran as the head of engineering in 2006. It took us another year to get to a shippable product, and then we learned the sad truth. Networks were so critical to companies' business that they weren't about to take the risk of switching to an unknown vendor just to save a relatively tiny amount in their overall IT budget.

Larry was not just a visionary. Part of his concept for Anagran was a truly innovative way to manage traffic, based on completely new mathematics and algorithms. He had implemented this himself, as a proof of concept, in what must surely be the biggest spreadsheet ever created. It used every single one of the 32768 rows supported by Excel. If a single cell was changed, it took about 10 minutes to recalculate the sheet. The concept, once you understood it, was simple enough, but turning it into something that would deal with a real world traffic mix and that could be implemented in our hardware, was a big job. It occupied most of my time for over a year, and even today we are constantly improving it. The result is described in US Patent 8509074. It was through working on this together that we really got to know each other.

This turned out to be key to the survival of Anagran. We repurposed the hardware we had built, to use this algorithm to control users' traffic in a service provider network, and successfully sold it as such. The company's eventual demise was a result of being a hardware company: hardware has to be "refreshed", which is to say reinvented from scratch, every few years. And our revenue was not enough to sustain that. Software, on the other hand, just carries on, constantly under change but never needing to be started over. Cisco IOS probably still has lines in it from when it was first coded in 1983.

Larry was a genius and a visionary, but nobody can be everything at once. Some people found him overwhelming, and he could be brutally abrupt with people who didn't know what they were talking about. He was also a huge optimist when it came to Anagran's business prospects, which led to strained relationships with the investors.

Anagran finally closed down in 2011. I'm very pleased to say that Larry's brilliant flow management invention survives, since the company's assets - especially the patents - were purchased by the company I founded in 2013, Saisei Networks, and his work is very much still in use. We continued to work with Larry in his post-Anagran ventures and I saw him often.

We'll miss you, Larry, even - maybe especially - the times when your ability to see ahead of everyone else, and incomprehension that they couldn't see it too, made life challenging.

Rest in Peace.

Tuesday, 1 January 2019

Return to Anza Borrego

An Englishman can't really complain about the Bay Area weather, but it does get chilly and miserable around the end of the year. So we made a last minute decision to escape to the desert for a few days. It was our fifth trip to Anza Borrego, and our second with our Toyota FJ - you can read about the first, exactly three years ago, here. Since then we made one quick trip to see the wildflowers a couple of years ago. We rented a gigantic GMC Yukon XL, the only 4WD that Enterprise in San Diego could find for us. We nicknamed him Obelix, after the super-strong character in Asterix.

It's a long drive and it was dinner time when we arrived at our rented condo. The condo (really a house, joined by one wall to its neighbor) was very pleasant - nice furniture, a fantastic view, and a very comfortable, enormous bed, to which we retired early. Just as well, because we were awoken at 7am by the Grumpy Old Man next door, complaining that we were blocking his garage. We weren't, and as far as I could tell he didn't go out all day anyway, but Grumpy Gotta Grump. It was the perfect opportunity to make an early start, out at sunrise. But we went back to bed anyway, and it was after 11 by the time we started.

Day 1: Badlands, Truckhaven Trail

We wanted to revisit the badlands. It's an extraordinary place, visible from above at Font's Point. Driving through them is a completely different experience, only possible with a serious 4WD vehicle.

One frustration with the park is that there is no perfect map. The best paper map shows most trails, but not all of them - it would be too cluttered. The USGS 1:24,000 topo maps are amazingly detailed, showing trails and how they relate to other features. What's more, they're free to download to an excellent iPad app, which gives you GPS location and many other features. The only problem is that they are updated very infrequently for rural areas - maybe once every 50 years, if that. They show "Jeep Trails" which have long since been banished from wilderness areas or just disappeared, and they don't show trails which have been created lately - as in, within the living memory of most of the population. There are several important trails in Anza Borrego that fall into this category.

In this case we chose a trail which does appear on the topo map, though not on the paper map. It starts at the end of the dead-end road headed due north from where Yaqui Pass Road meets Borrego Springs Road and turns left. At first it seems like the driveway for a few houses and lots, but then it sets out confusingly eastbound, with several unmarked side trails. Eventually it joins Rainbow Wash, where you can turn left to the bottom of Font's Point, or right as we did. You need to turn left at the Cut Across Trail, which means keeping your eyes open because this is another that is not on the topo map. Most of it is just a sandy trail crossing several washes. At the end it enters the badlands, winding in and out of the landscape of low hills made of something between dried mud and sandstone. It had rained just before our arrival and there were green shoots everywhere - except in the badlands, which are truly lunar with not a plant in sight. The soil must be very alkaline.

Badlands in the setting sun
Winding through the badlands brings you eventually to Una Palma. At least, there used to be one palm - now its trunk lies on the ground. Five Palms, further along, has only four. We didn't count at 17 Palms.

The trail exits via Arroyo Salado onto the main road (S-22). There's a more interesting route, though, along the old Truckhaven Trail, which climbs out of the arroyo to the north-east. This road was built in the 1920s, the first road access to Borrego Springs. "Doc" Beaty led the effort by local ranchers, using mule-drawn scrapers to his own design.

I drove it on my last trip and found it mostly easy, climbing from one arroyo to another. There is one difficult stretch, bulldozed up the side of an arroyo to bypass a landslide further down. Even that, though steep and rocky, was easy enough if taken slowly and carefully. What a difference this time! The steep climb is very eroded and rocky. It requires very great care, constantly steering around and over big rocks. There is a second climb, part of the original 1920s road, which was just a steep dirt road before. Now it too is deeply rutted and full of big rocks. By chance I found the dashcam video of my 2015 trip, which shows the difference very clearly. Still, FJ made both climbs without a care in the world, using low gear but with no need for lockers.

Dinner on our second night was at Carlee's, the best bar in town (maybe the only one too), steak and ribs accompanied by margaritas and beer, followed by a few games of pool.

Day 2: Canyon Sin Nombre, Diablo Dropoff, Fish Creek

Today's goal was to drive through Canyon Sin Nombre (that's its name, No Name Canyon) then across to the Diablo Dropoff, a very steep one-way trail into Fish Creek Wash. We did this back in 2015, one of the classic Anza Borrego journeys. Sin Nombre is like a large-scale version of the badlands, with tall canyon walls made of similar crumbly almost-sandstone. There are lots of side canyons that you can hike into and explore.

The link to the dropoff is Arroyo Seco del Diablo, another long, twisty and spectacular canyon, and an easy drive. At least, it was last time. About half way through we came upon a stopped truck, whose crew of two were puzzling over how to traverse a large and very recent rockfall. There was no way either of our vehicles could climb over it. There was a possible bypass, which involved climbing onto and over a pile of soft sand about six feet tall. There were no tire tracks either over the rockfall, or over the sand pile, meaning we were the first people to try it.

We spent some time discussing possible tracks. Between us we were fully equipped, with shovels, jacks, traction boards and a winch. Still neither of us wanted to get stuck, and above all neither of us wanted to roll off the side of the sand pile.

Our new companion, Ryan, went first but didn't get far. He hadn't engaged lockers, and the wheels just spun in the deep sand as he tried to climb it. Worse, he slid alarmingly sideways. He backed down again, and we discussed some more, using the time to shovel the worst of the soft sand out of the way.

While he aired down to try again, I made my attempt. I'd already aired down to my usual 25 psi - not real airing down, like to 18 psi, but enough to make life easier for the tires over sharp rocks and such.  I engaged low gear, turned on all the locking, made a running start at the hill... and bingo, there I was on top. I paused briefly but the car was at an awkward angle, way short of its rollover angle but still very uncomfortable. There was another tippy moment dropping off the hill and then... I was through!

Ryan followed shortly, after locking everything he could. Then we were off to the Diablo Dropoff. This is a pretty steep angle in a shallow canyon, in itself not too serious. But the trail has been very badly damaged by people trying to go up, their wheels spinning and making deep holes in the sandy surface. The challenge is to negotiate these without losing lateral control, which is to say sliding sideways to a bad conclusion. From within the vehicle it's not too bad, though the occasional slight sideslip as a wheel goes into a hole certainly gets your attention. It looks a lot worse from outside.

There's a second drop further down, a bit easier in my opinion, and a bit of moderate rock crawling at the bottom. And then you're in Fish Creek, which is an easy sandy wash. We drove upstream as far as Sandstone Canyon, which is like a smaller version of Titus Canyon in Death Valley, winding through the narrow gap between high sandstone walls. We got about half way in before encountering the rockfall which has blocked it for years. There are tire tracks over the rocks and deeper into the canyon, but neither we nor our new companions were ready to try that.

We'd been so absorbed by all these events that we hadn't eaten lunch, and now it was 4pm and the sun was setting fast. We found a place in the main canyon where we could catch the very last of the sun while we feasted on cheese and crackers. From there it's a long drive out to the hard road, taking nearly an hour, with continuous magnificent scenery.

Our final stop was the Iron Door, the dive bar in Ocotillo Wells which is a great place for a post-trail beer. And nothing else. The very first time we went there, my partner asked for tea. "We got beer" was the response. "OK, I'll have a beer" - a wise reaction.

Day 3

Our main goal for today was a repeat run up Rockhouse Road. We did this during our wildflower visit with Obelix, who for his size did a surprisingly good job on the narrow, twisty upper part of the trail.

Inspiration Point and the Dump Trail

But first, I wanted to visit Inspiration Point. This is another viewpoint over the badlands, a little north of Font's Point, with its own trail from the main road. The paper map shows the trail continuing westwards towards the main road again, though none of this is depicted on the topo map. And indeed there's a short but steep dropoff which goes straight into a very narrow, twisty track between the low hills of the western badlands. There were plenty of tire tracks, which is always encouraging, especially when you don't have a good map to help you at ambiguous junctions, of which there were plenty.

Just after one of them, we came to an impassable rockfall in the bottom of the narrow canyon. Even if we could have climbed over or around it, the trail disappeared on the other side, replaced by a deep sand drift. We backed up to the last junction, and spotted some tracks that climbed out of the shallow wash. We followed these as they twisted around, the original canyon always in sight to the left, sometimes very close, sometimes further off. The other tracks gradually faded away until finally we were following the traces of just one vehicle, which had probably passed in the last 24 hours. We hoped he knew what he was doing.

Eventually his tracks did a long, shallow S-turn down into the floor of the canyon. From there it was a straightforward drive along what at this point has the picturesque name of the Dump Trail. The reason eventually becomes clear, at a crossroads on the corner of the county dump. The paper map shows the trail simply ending there, which seems improbable - and very annoying if true. By now there were lots of tracks again, so there must be some way out.

Eventually, after a few exploratory wanderings, we followed the dump's fence south and then west, ending up on its access road. From there it was a short drive to the main road.

Rockhouse Road

Rockhouse Road provides access to the eastern end of the cutely named Alcoholic Pass, leading over the ridge from Coyote Valley. We'd thought about hiking up it - we did it once in the opposite direction, on our very first visit, stopping at the ridge. But there was a strong, cold wind. We went further up the trail than we did with Obelix, onto the narrow part that eventually leads to Hidden Spring. It was very rocky and in poor condition, so we decided to stop and have our lunch. It was so windy that we ate inside FJ, something we normally never do. While we were eating we were passed by two FJs racing along the trail. I guess they made it to the end - we saw them again later on the main road.
Looking down from Rockhouse Road

The view from our lunch spot was spectacular, from several hundred feet above the valley floor and Clark Dry Lake. This time there was no carpet of wild flowers, but the ocotillos were just starting to bloom, with their bright red flowers contrasting with their deep green leaf-covered stems.

Font's Point and Vista del Malpais

There were still a couple of hours before sunset when we reached the main road. We've always visited Font's Point, the classic overview of the badlands, so that's where we went. It's an easy drive up a very wide sandy wash - I've done it a couple of times in 2WD rental cars. You just have to be careful to stay in the tire tracks and avoid any deep sand - though I understand rental cars routinely get stuck. Once we saw one that had barely made it off the highway before burying itself up to the hubs in sand.

Badlands, from Vista del Malpais
As we were coming back down the wash, I noticed a Jeep zip off into a side turning, Short Wash. I've seen it on the map but never before managed to figure out where it was - the topo map doesn't show it. It's always interesting to drive a new trail, but this one had something else: a side trail to a place called Vista del Malpais (Badlands View). That seemed interesting, so we turned right. None of this is shown on the topo map, so finding the side trails was a challenge. We found the turnoff using clues from the bends shown on the paper map. A narrow, twisty trail led through the badlands, ending before a final short hike to the ridge. The view was breathtaking, much closer than at Font's Point. We soaked up the view, then turned back onto Short Wash.

We were a little surprised, maybe a quarter mile later, to see a sign for Vista del Malpais up another side track. We followed it, along a bigger trail that ended in a small parking lot on the ridge. The real Vista del Malpais was very impressive too, but we were happy to have found our very own one.

After that it was back to the house. Dinner that night was at La Casa del Zorro, Borrego Springs' only "fancy" restaurant, conveniently only a mile from our house. We've eaten there before and it was decent, but this time we were not so impressed. In future we'll probably stick to Carlee's and the other everyday places in the town. And then, next morning, up early for the long drive up I-5 back home.

Tuesday, 30 October 2018

Kotlin Part 2 - a real world example for Kotlin

In Part 1 I described my pleasure at finding what seemed to be, on the face of it, an alternative to Python for larger programs where compile-time type safety is essential - and then the difficulties I ran into when I actually tried to use it. But in the end, I got a working program which could access our system's Rest API using the khttp package. It was time to move on and start building the pieces needed for a Kotlin replacement for our Python CLI.

Our system generates in real time the metadata for its Rest API, retrievable via another Rest call. This describes each object class, and each attribute of each class. The attributes of a class include its name, its datatype, and various properties such as whether it can be modified. The result of a Rest GET call is a Json string containing a tuple of (name, value) for each requested attribute. The value is always passed as a Json string. For display purposes that is all we need. But sometimes we would like to convert it to its native value, for example so we can perform comparisons or calculate an average across a sequence of historical values.

In Python, this is easy - a good consequence of the completely dynamic type structure. We keep an object for each datatype, which knows how to convert a string to a native value, and vice versa. When the conversion function is called, it returns a Python object of the correct type. As long as we are careful never to mix values for different attributes (which we have no use case for), everything works fine. If we did happen to, say, try to add a string to a date, we would get an exception at runtime, which we can catch.

In C++ it's harder, because of course there is complete type checking. But in our backend code - which is busily transforming tens of thousands of flows and millions of packets per second into Rest-accessible analytics - the same kind of generic value handling is necessary.

The key is a C++ pure virtual base type called generic_variable. We can ask an attribute to retrieve from a C++ object (e.g. the representation of a user or an application) its current value, which it returns as a pointer to a generic variable. Later we can, for example, compare it with the value for another object, or perform arithmetic on it.

The owner of a generic variable knows nothing about the specific type of its content. But he does know that he can take two generic variables generated by the same attribute, and ask them to compare with each other, add to each other and so on. They can also be asked to produce their value as a string, or as a floating point number.

What happens if you try to perform an inappropriate operation, like adding two enums, or asking for the float value of a string? You simply get some sensible, if useless, default.
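The post doesn't show the base class itself, but a hedged sketch of what a generic_variable base might look like - pure virtuals for the string round-trip, plus harmless defaults for the cross-type operations - would be:

```cpp
#include <string>

// Hypothetical sketch of the generic_variable base class (not shown
// in the post). The string round-trip is pure virtual; the comparison
// and arithmetic hooks default to a sensible, if useless, result.
class generic_variable {
public:
    virtual ~generic_variable() { }
    virtual std::string str() const = 0;
    virtual void set(const std::string &s) = 0;
    virtual generic_variable *clone() const = 0;
    virtual bool less(const generic_variable *) const { return false; }
    virtual bool add(const generic_variable *) { return false; }
    virtual double as_float() const { return 0.0; }
};

// A trivial concrete type, just to show the contract.
class string_variable : public generic_variable {
    std::string my_value;
public:
    std::string str() const override { return my_value; }
    void set(const std::string &s) override { my_value = s; }
    generic_variable *clone() const override { return new string_variable(*this); }
};
```

Asking a string_variable for its float value, or adding two of them, simply falls through to the base-class defaults.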

This is very easy to do in C++. The code looks something like this:

template<class C> class typed_generic_variable : public generic_variable {
    typedef typed_generic_variable<C> my_type;
    C my_value = C();
public:
    typed_generic_variable(const C &v) : my_value(v) { }
    string str() const override { return lexical_cast<string>(my_value); }
    void set(const string &s) override { my_value = lexical_cast<C>(s); }
    my_type *clone() const override { return new my_type(my_value); }
    bool less(const generic_variable *other) const override
    {
        const my_type *other_typed = dynamic_cast<const my_type*>(other);
        return other_typed ? my_value < other_typed->my_value : false;
    }
    bool add(const generic_variable *other) override
    {
        const my_type *other_typed = dynamic_cast<const my_type*>(other);
        if (other_typed) {
            my_value += other_typed->my_value;
            return true;
        }
        return false;
    }
    // and so on...
};

The point here is that in this declaration, we can use the template parameter type C exactly as though it was the name of a class. We can use it to create a new object, we can use it in arithmetic expressions, we can invoke static class functions ("companion objects" in Kotlin). When the compiler deals with the declaration of a class like this, it doesn't worry about the semantics. It only considers that when you instantiate an object of the class. In the above case, if I try to create a typed_generic_variable<foo> where the foo class does not define a += operator, then the compiler will complain.

Two very helpful C++ features here are dynamic_cast and lexical_cast. The former allows us to ask a generic variable whether it is in fact the same derived type as ourself, and to treat it as such if it is. The latter, originally introduced by Boost, makes it easy to convert to and from a string without worrying about the details.

I'll admit this looks quite complicated, but actually it's very simple to code and to figure out what is going on. The language doesn't require me to do anything special to make the type-specific class work. The code is no different than if I had explicitly coded variants for int, float, string and so on - except that I only had to write it once.

(In our actual implementation we make extensive use of template metaprogramming (MPL), so in fact if I do try to create such a variable, the add function will simply be defined as a no-op. But that's more detail than we need for the Kotlin comparison.)

The goal in the Kotlin re-implementation was to use the same concept. I kind of assumed that its generic type feature, which uses the underlying Java machinery, would take care of things - but I was sadly disappointed. This post is already too long, though, so more in Part 3.

Kotlin, Part 1 - oh well, nice try guys

It amazes me that new programming languages continue to appear, if anything faster than ever. In the last few years there have been Scala, D, R, and recently I came across Kotlin. At first sight, it looked like a good type-safe alternative to Python. It is one of several "better Java than Java" languages, like Scala, optimised for economy of expression. It runs on the system's JVM, meaning that you can ship a Kotlin program with a very high probability that it will run just about anywhere.

To save you reading this whole blog, here's an executive summary:

  • Kotlin is a very neat toy programming language, great for teaching and such
  • Its apparent simplicity fades very quickly when you try to do any real-world programming
  • Many things which are simple and intuitive to do in Python or C++ require very convoluted coding in Kotlin
  • In particular, Kotlin "generics" - Java-speak for what C++ calls templates - are completely useless for any real-world programming
  • Overall, Kotlin is always just frustratingly short of usable for any actual problem
  • That said, I guess it's fine for GUI programming, since it is now the default language for Android development

Most of my code is written in either C++ or Python. There's no substitute for C++ when you need ultimate performance coupled with high reliability. Being strongly typed, you can pretty much turn the code upside down and shake it (formally known as "refactoring") and if it compiles, there's a good chance it will work.

Python is fantastic for writing short programs, and very convenient as they get larger. All our product's middleware that does things like managing the history database, and our CLI, are written in Python. It's easy to write, and as easy as can be hoped to understand. But refactoring is a nightmare. If function F used to take a P as an argument, but now it wants a Q, there is no way to be sure you've caught all the call sites and changed them. One day, in some obscure corner case, F will get called with a P, and the program will die. This means you absolutely cannot use it for anything where reliability is vital, like network software. It's OK if a failure just means a quiet curse from a human user, or if there is some automatic restart.

So for a long time, I have really wanted to see a language with the ease of use and breadth of library support that Python has, coupled with compile time type safety. When I read the overview of Kotlin, I thought YES! - this is it.

I downloaded both Kotlin and the Intellij IDE, to which it seems to be joined at the hip, and wrote a toy program - bigger than Hello World, but less than a page of code. The IDE did its job perfectly, Kotlin's clever constructs (like the "Elvis operator", ?:) were easy to understand and just right as a solution. I was very happy.

Our CLI and its associated infrastructure have really got too big for Python, so they were the obvious candidate for transformation to Kotlin. Basically the CLI is a translator from our Rest API to something a bit more human-friendly, so the first thing needed is a Rest-friendly HTTP library. Two minutes with Google found khttp, a Kotlin redo of the Python Requests package - which is exactly what we use. Perfect.

Well, except it doesn't form part of the standard Kotlin distribution. I downloaded the source and built it, with no problems. But there seems to be absolutely no way to make a private build like this known to the Kotlin compiler or to Intellij. I searched my whole computer for existing Java libraries, hoping I could copy it to the same place. Nothing I did worked.

The khttp website also included some mysterious invocations that can be given to Maven. Now, if Java programming is your day job, well, first you have my every sympathy. But second, you're probably familiar with Maven. It's an XML-based (yuck!) redo of Make that is at the heart of all Java development. (Well, it used to be; now apparently the up-and-coming thing is Gradle - why would you have only one obscure, incomprehensible build system when you can have two?)

So, all you have to do is plug this handful of lines into your Maven files, and everything will work!

Except... Intellij doesn't actually use Maven. I (once again) searched my whole computer for the Maven files I needed to modify, and they weren't there. After a lot of Googling, I finally found how to get it to export Maven files. Then I edited them according to the instructions, and ran Maven from the command line using these new files. And - amazingly - it worked. By some magic it downloaded hundreds of megabytes of libraries, then built my Kotlin program - which ran and did what I wanted. And if I ran it again, it found all the hundreds of megabytes already there, and just ran the compiler. When I ran my little program, it fired off Rest requests and turned the Json results into Kotlin data structures. Perfect, exactly what I wanted.

But as I said, Intellij doesn't actually use Maven. Goodness knows what it does use, under the covers. So now I had to create a brand new Maven-based project, using my existing source file and my precious Maven config. And now, with Maven having put all the libraries where the compiler expected to find them, Intellij's own build system would build my program. In theory there is a place where you can tell Intellij where to find packages on the web, which ought to have been perfect. But in practice, when you get to the right page, it shows an empty list of places and has no way to add to them. I guess there's probably an undocumented configuration file you can edit.

That's a good point to break off. In Part 2, I'll talk about my experience trying to build a real-world application using Kotlin.