Sunday, August 03, 2014

Prints

Two long-buried caches of photographs came to light last year. One was a stack of cellulose nitrate negatives made on the Scott Antarctic expedition almost a hundred years ago. Over time, they became stuck together into a moldy brick, but it was possible to tease the negatives apart and see what they revealed. You can view the images at the web site of the New Zealand Antarctic Heritage Trust. The results show ragged edges and mold spots but, even beyond their historical importance, the photographs are evocative and in some cases very beautiful.

The other cache contained images not quite so old and of less general interest but of personal importance. My mother moved from the house she had occupied for decades into a smaller apartment and while preparing to move she found the proverbial shoe box of old pictures in a closet. Some of the images are from my youth, some from hers, and some even from her parents'. One of the photographs, from 1931, shows my paternal great-grandparents. I never met my paternal grandparents, let alone great-grandparents, so this photograph touches something almost primordial for me. And some of the photographs in the box were even older.

Due to the miracle of photography, we are able to see over a hundred years into the past. Of course this is not news; all of us have seen 19th century photographs by the pioneers of the medium. By the turn of the 20th century photography was so common that huge numbers of images, from the historical to the mundane, had been created. And sometimes we are lucky enough to chance upon forgotten images that open a window into a past that would otherwise fade from view.

But such windows are becoming rare. A hundred years from now, there will be far fewer photo caches to find. Although the transition to digital photography has made photos almost unimaginably commonplace—one estimate puts the number of shutter activations at a trillion images worldwide per year—very few of those images become artifacts that can be left in a shoe box.

We live in what has been named a Digital Dark Age. Because digital technology evolves so fast, we are rapidly losing the ability to understand yesterday's media. As file formats change, software becomes obsolete, and hardware becomes outmoded, old digital files become unreadable and unrecoverable.

There are many examples of lost information, but here is an illustrative story of disaster narrowly averted. Early development of the Unix operating system, which became the software foundation for the Internet, was done in the late 1960s and early 1970s on Digital Equipment Corporation computers. Backups were made on a magnetic medium called a DECtape. By the mid 1970s, DECtape was obsolete and by the 1980s there were no remaining DECtape drives that could read the old backups. The scientists in the original Unix lab had kept a box of old backups under the raised floor of the computer room, but the tapes had effectively become unreadable because the device to read them no longer existed in the lab or anywhere else as far as anyone knew. And even if it did, no computer that could run the device was still powered on. Fortunately, around 1990 Paul Vixie and Keith Bostic, working for a different company, stumbled across an old junked DECtape drive and managed to get it up and running again by resurrecting an old computer to connect it to. They contacted the Unix research group and offered one last chance to recover the data on the backup tapes before the computer and DECtape drive were finally decommissioned. Time and resources were limited, but some of the key archival pieces of early Unix development were recovered through this combination of charity and a great deal of luck. This story has a happy ending, but not all digital archives survive. Far from it.

The problem is that as technology advances, data needs to be curated. Files need to have their formats converted and the files themselves transferred to new media. A backup disk in a box somewhere might be unreadable a few years from now. Its format may be obsolete, the software to read it might not run on current hardware, or the media might have physically decayed. NASA lost a large part of the data collected by the Viking Mars missions because the iron oxide fell off the tapes storing the data.

Backups are important but they too are temporary, subject to the same problems as the data they attempt to protect. Backup software can become obsolete and media can fail. The same affliction that damaged the Viking tapes also wiped out my personal backup archive; I lost the only copy of my computer work from the 1970s. (It's worth noting my negatives and prints from the period survived.)

It's not just tapes that go bad. Consider CDs and DVDs, media often used for backup. The disks, especially the writable kind used for backups, are very fragile, much more so than the mass-produced read-only kind used to store music and movies. Within a few years, especially in humid environments, the metal film can separate from the backing medium. Even if the backup medium survives, the formats used to store the backups might become obsolete. The software that reads the backups might not run on the next computer one buys. CDs are already becoming relics; many computers today do not even come with a CD or DVD drive. What were once the gold standard for backup are already looking old-fashioned just a few years on. They will be antiquated and obscure a century from now.

To summarize, digital information requires maintenance. It's not sufficient to make backups; the backups also need to be maintained, upgraded, transferred, and curated. Without conscientious care, the data of today will be lost forever in a few years. Even with care, it's possible through software or hardware changes to lose access forever. That shoebox of old backup CDs will be unreadable soon.

Which brings us back to those old photo caches. They held negatives and prints, physical objects that stored images. They needed no attention, no curating, no updating. They sat untended and forgotten for decades, but through all that time faithfully held their information, waiting for a future discoverer. As a result, we can all see what the Scott Antarctic expedition saw, and I can see what my great-grandparents looked like.

It is a sad irony that modern technology makes it unlikely that future generations will see the images made today.

Ask yourself whether your great-grandchildren will be able to see your photographs. If the images exist only as digital image files, the answer is almost certainly, "No". If, however, there are physical prints, the odds improve. Those digital images need to be made real to endure. Without a print, a digital photograph has no future.

We live in a Digital Dark Age, but as individuals we can shine a little light. If you are one of the uncounted photographers who enjoy digital photography, keep in mind the fragility of data. When you have a digital image you care about, for whatever reason, artistic or sentimental, please make a print and put that print away. It will sit quietly in the dark, holding fast, never forgetting, ready to reveal itself to a grateful future generation.

Friday, January 24, 2014

Self-referential functions and the design of options

I've been trying on and off to find a nice way to deal with setting options in a Go package I am writing. Options on a type, that is. The package is intricate and there will probably end up being dozens of options. There are many ways to do this kind of thing, but I wanted one that felt nice to use, didn't require too much API (or at least not too much for the user to absorb), and could grow as needed without bloat.

I've tried most of the obvious ways: option structs, lots of methods, variant constructors, and more, and found them all unsatisfactory. After a bunch of trial versions over the past year or so, and a lot of conversations with other Gophers making suggestions, I've finally found one I like. You might like it too. Or you might not, but either way it does show an interesting use of self-referential functions.

I hope I have your attention now.

Let's start with a simple version. We'll refine it to get to the final version.

First, we define an option type. It is a function that takes one argument, the Foo we are operating on.

type option func(*Foo)

The idea is that an option is implemented as a function we call to set the state of that option. That may seem odd, but there's a method in the madness.
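
For concreteness, assume Foo is a struct with, among other things, an integer verbosity field; that field is all the examples below rely on. A minimal sketch:

type Foo struct {
  verbosity int
  // ... many more fields and settings in the real package
}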

Given the option type, we next define an Option method on *Foo that applies the options it's passed by calling them as functions. That method is defined in the same package, say pkg, in which Foo is defined.

This is Go, so we can make the method variadic and set lots of options in a given call:

// Option sets the options specified.
func (f *Foo) Option(opts ...option) {
  for _, opt := range opts {
     opt(f)
  }
}

Now to provide an option, we define in pkg a function with the appropriate name and signature. Let's say we want to control verbosity by setting an integer value stored in a field of a Foo. We provide the verbosity option by writing a function with the obvious name and having it return an option, which means a closure; inside that closure we set the field:

// Verbosity sets Foo's verbosity level to v.
func Verbosity(v int) option {
  return func(f *Foo) {
     f.verbosity = v
  }
}

Why return a closure instead of just doing the setting? Because we don't want the user to have to write the closure and we want the Option method to be nice to use. (Plus there's more to come....)

In the client of the package, we can set this option on a Foo object by writing:

foo.Option(pkg.Verbosity(3))
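
Because Option is variadic, several options can be set in one call. With a second option constructor written the same way (say, a hypothetical Color), the client could write:

foo.Option(pkg.Verbosity(3), pkg.Color("red"))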

That's easy and probably good enough for most purposes, but for the package I'm writing, I want to be able to use the option mechanism to set temporary values, which means it would be nice if the Option method could return the previous state. That's easy: have both the underlying option functions and the Option method return the previous state as an empty interface value. That value flows through the code:

type option func(*Foo) interface{}

// Verbosity sets Foo's verbosity level to v.
func Verbosity(v int) option {
  return func(f *Foo) interface{} {
     previous := f.verbosity
     f.verbosity = v
     return previous
  }
}

// Option sets the options specified.
// It returns the previous value of the last argument.
func (f *Foo) Option(opts ...option) (previous interface{}) {
  for _, opt := range opts {
     previous = opt(f)
  }
  return previous
}

The client can use this the same as before, but if the client also wants to restore a previous value, all that's needed is to save the return value from the first call, and then restore it.

prevVerbosity := foo.Option(pkg.Verbosity(3))
foo.DoSomeDebugging()
foo.Option(pkg.Verbosity(prevVerbosity.(int)))

The type assertion in the restoring call to Option is clumsy. We can do better if we push a little harder on our design.

First, redefine an option to be a function that sets a value and returns another option to restore the previous value.

type option func(f *Foo) option

This self-referential function definition is reminiscent of a state machine. Here we're using it a little differently: it's a function that returns its inverse.

Then change the return type (and meaning) of the Option method of *Foo from interface{} to option:

// Option sets the options specified.
// It returns an option to restore the last arg's previous value.
func (f *Foo) Option(opts ...option) (previous option) {
  for _, opt := range opts {
     previous = opt(f)
  }
  return previous
}

The final piece is the implementation of the actual option functions. Their inner closure must now return an option, not an interface value, and that means it must return a closure to undo itself. But that's easy: it can just recur to prepare the closure to undo the original! It looks like this:

// Verbosity sets Foo's verbosity level to v.
func Verbosity(v int) option {
  return func(f *Foo) option {
     previous := f.verbosity
     f.verbosity = v
     return Verbosity(previous)
  }
}

Note the last line of the inner closure changed from
     return previous
to
     return Verbosity(previous)
Instead of just returning the old value, it now calls the surrounding function (Verbosity) to create the undo closure, and returns that closure.

Now from the client's view this is all very nice:

prevVerbosity := foo.Option(pkg.Verbosity(3))
foo.DoSomeDebugging()
foo.Option(prevVerbosity)

And finally we take it up one more level, using Go's defer mechanism to tidy it all up in the client:

func DoSomethingVerbosely(foo *Foo, verbosity int) {
  // Could combine the next two lines,
  // with some loss of readability.
  prev := foo.Option(pkg.Verbosity(verbosity))
  defer foo.Option(prev)
  // ... do some stuff with foo under high verbosity.
}

It's worth noting that since the "verbosity" returned is now a closure, not a verbosity value, the actual previous value is hidden. If you want that value you need a little more magic, but there's enough magic for now.

The implementation of all this may seem like overkill but it's actually just a few lines for each option, and has great generality. Most important, it's really nice to use from the point of view of the package's client. I'm finally happy with the design. I'm also happy at the way this uses Go's closures to achieve its goals with grace.

Wednesday, May 01, 2013

Eisenbahnnet: Bohr's train trip and the story of spin


The other day I was talking with a friend (yes, I have friends) about the way communication of ideas has changed. The Internet is the obvious advance, but what used to happen when an important new idea needed to be disseminated? As an example of how things used to be, I told him the story of Bohr's famous train trip.

In 1925, two students at the University of Leiden, Sem Goudsmit and George Uhlenbeck, realized that the fourth electron quantum number could be explained if the electron had spin. This was a radical idea (a point particle spinning?), and coming from students it was doubly suspect. Physicists throughout Europe were excited yet skeptical. When Bohr was planning a trip from Copenhagen to Leiden for a conference, it seemed an excellent opportunity to talk to the students and help decide whether they were right.

In his book, Inward Bound, Abraham Pais narrates the story as told to him by Bohr twenty years later:

Bohr's train to Leiden made a stop in Hamburg, where he was met by Pauli and Stern who had come to the station to ask him what he thought about spin. Bohr must have said that it was very very interesting (his favorite way of expressing that something was wrong), but he could not see how an electron moving in the electric field of the nucleus could experience the magnetic field necessary for producing fine structure. (As Uhlenbeck said later: 'I must say in retrospect that Sem and I in our euphoria had not really appreciated [this] basic difficulty.') On his arrival in Leiden, Bohr was met at the train by Ehrenfest and Einstein who asked him what he thought about spin. Bohr must have said that it was very very interesting but what about the magnetic field? Ehrenfest replied that Einstein had resolved that. The electron in its rest frame sees a rotating electric field; hence by elementary relativity it also sees a magnetic field. The net result is an effective spin-orbit coupling. Bohr was at once convinced. When told of the factor of two he expressed confidence that this problem would find a natural resolution. He urged Sem and George to write a more detailed note on their work. They did; Bohr added an approving comment.
After Leiden Bohr traveled to Goettingen. There he was met at the station by Heisenberg and Jordan who asked what he thought about spin. Bohr replied that it was a great advance and explained about the spin-orbit coupling. Heisenberg remarked that he had heard this remark before but that he could not remember who made it and when. ... On his way home the train stopped at Berlin where Bohr was met at the station by Pauli, who had made the trip from Hamburg for the sole purpose of asking Bohr what he now thought about spin. Bohr said it was a great advance, to which Pauli replied: 'eine Neue Kopenhagener Irrlehre' (a new Copenhagen heresy). After his return home Bohr wrote to Ehrenfest that he had become 'a prophet of the electron magnet gospel.'

Sneakernet indeed, or perhaps Eisenbahnnet. The idea of the great physicist carrying precious nuggets of wisdom across Europe is romantic and poignant. It also shows how Bohr and his brilliant colleagues did the peer review in real time, in the course of a couple of train trips. Bohr, Pauli, Stern, Ehrenfest, Einstein, Heisenberg, Jordan, and Pauli again: What a peer review it was!

It is one of the greatest oversights of the Nobel committee that Goudsmit and Uhlenbeck were never awarded the prize. Their colleagues certainly understood the earth-shaking merit of their insight.

Saturday, September 22, 2012

Thank you Apple


Some days, things just don't work out. Or don't work.

Earlier

I wanted to upgrade (their term, not mine) my iMac from Snow Leopard (10.6) to Lion (10.7). I even had the little USB stick version of the installer, to make it easy. But after spending some time attempting the installation, the Lion installer "app" failed, complaining about SMART errors on the disk.

Disk Utility indeed reported there were SMART errors, and that the disk hardware needed to be replaced. An ugly start.

The good news is that in some places, including where I live, Apple will do a house call for service, so I didn't have to haul the computer to an Apple store on public transit.

Thank you Apple.

I called them, scheduled the service for a few days later, and as instructed by Apple (I hardly needed prompting) prepped a backup using Time Machine.

The day before the repairman was to come to give me a new disk, I made sure the system was fully backed up, for security reasons started a complete erasure of the bad disk (using Disk Utility in target mode from another machine, about which more later), and went to bed.

The day

When I got up, I checked that the disk had been erased and headed off to work. As I left the apartment, the ceiling lights in the entryway flickered and then went out: a new bulb was needed. On the way out of the building, I asked the doorman for a replacement bulb. He offered just to replace it for us. We have a good doorman.

Once at work, things were normal until my cell phone rang about 2pm. It was the Apple repairman, Twinkletoes (some names and details have been changed), calling to tell me he'd be at my place within the hour. Actually, he wasn't an Apple employee, but a contractor working for Unisys, a name I hadn't heard in a long time. (Twinkletoes was a name I hadn't heard for a while either, but that's another story.) At least here, Apple uses Unisys contractors to do their house calls.

So I headed home, arriving before Twinkletoes. At the front door, the doorman stopped me. He reported that the problem with the lights was not the bulb, but the wiring. He'd called in an electrician, who had found a problem in the breaker box and fixed it. Everything was good now.

When I got up to the apartment, I found chaos: the cleaners were mid-job, with carpets rolled up, vacuum cleaners running, and general craziness. Not conducive to work. So I went back down to the lobby with my laptop and sat on the couch, surfing on the free WiFi from the café next door, and waited for Twinkletoes.

Half an hour later, he arrived and we returned to the apartment. The cleaners were still there but the chaos level had dropped and it wasn't too hard to work around them. I saw what the inside of an iMac looks like as Twinkletoes swapped out the drive. By the time he was done, the cleaners had left and things had settled down.

The innards of my 27" iMac


I had assumed that the replacement drive would come with an installed operating system, but I assumed wrong. (When you assume, you make an ass of u and me.) I had a Snow Leopard installation DVD, but I was worried: it had failed to work for me a few days earlier when I wanted to boot from it to run fsck on the broken drive. Twinkletoes noticed it had a scratch. I needed another way to boot the machine.

It had surprised me when Lion came out that the installation was done by an "app", not as a bootable image. This is an unnecessary complication for those of us that need to maintain machines. Earlier, when updating a different machine, I had learned how painful this could be when the installation app destroyed the boot sector and I needed to reinstall Snow Leopard from DVD, and then upgrade that to a version of the system recent enough to run the Lion installer app. As will become apparent, had Lion come as a bootable image things might have gone more smoothly.

Thank you Apple.

[Note added in post: Several people have told me there's a bootable image inside the installer. I forgot to mention that I knew that, and there wasn't. For some reason, the version on the USB stick I have looks different from the downloaded one I checked out a day or two later, and even Twinkletoes couldn't figure out how to unpack it. Weird.]

Twinkletoes had an OS image he was willing to let me copy, but I needed to make a bootable drive from it. I had no sufficiently large USB stick—you need a 4GB one you can wipe. However I did have a free, big enough CompactFlash card and a USB reader, so that should do, right? Twinkletoes was unsure but believed it would.

Using my laptop, I used Disk Utility to create a bootable image on the CF card from Twinkletoes's disk image. We were ready.

Plug in the machine, push down the Option key, power on.

Nothing.

Turn on the light.

Nothing.

No power.

The cleaners must have tripped a breaker.

I went to the breaker box and found that all the breakers looked OK. We now had a mystery, because the cleaners had had lights on and were using electric appliances—I saw a vacuum cleaner running—but now there was no power. Was the power off to the building? No: the lights still worked in the kitchen and the oven clock was lit. I called the doorman and asked him to get the electrician back as soon as possible and then, with a little portable lamp, went looking around the apartment for a working socket. I found one, again in the kitchen. The iMac was going to travel after all, if not as far as downtown.

The machine was moved, plugged in, option-key-downed, and powered on. I selected the CF card to boot from, waited 15 minutes for the installation to come up, only to have the boot fail. CF cards don't work after all, although the diagnosis of failure is a bit tardy and uninformative.

Thank you Apple.

Next idea. My old laptop has FireWire so we could bring the disk up using target mode and then run the installer on the laptop to install Lion on the iMac.

We did the target mode dance and connected to the newly installed drive, then ran Disk Utility on the laptop to format the drive. Things were starting to look better.

Next, we put the Lion installer stick into the laptop, which was running a recent version of Snow Leopard.

Failure again. This time the problem is that the laptop, all of about four years old, is too old to run Lion. It's got a Core Duo, not a Core 2 Duo, and Lion won't run on that hardware. Even though Lion doesn't need to run, only the Lion installer needs to run, the system refuses to help. My other laptop is new enough to run the installer, but it doesn't have FireWire so it can't do target mode.

Thank you Apple. Your aggressive push to retire old technology hurts sometimes, you know? Actually, more than sometimes, but let's stay on topic.

Twinkletoes has to leave—he's been on the job for several hours now—but graciously lends me a USB boot drive he has, asking me to return it by post when I'm done. I thank him profusely and send him away before he is drawn in any deeper.

Using his boot drive, I was able to bring up the iMac and use the Lion installer stick to get the system to a clean install state. Finally, a computer, although of course all my personal data is over on the backup.

When a new OS X installation comes up, it presents the option of "migrating" data from an existing system, including from a Time Machine backup. So I went for that option and connected the external drive with the Time Machine backup on it.

The Migration Assistant presented a list of disks to migrate from. A list of one: the main drive in the machine. It didn't give me the option of using the Time Machine backup.

Thank you Apple. You told me to save my machine this way but then I can't use this backup to recover.

I called Apple on my cell phone (there's still no power in the room with the land line's wireless base station) and explained the situation. The sympathetic but ultimately unhelpful person on the phone said it should work (of course!) and that I should run Software Update and get everything up to the latest version. He reported that there were problems with the Migration Assistant in early versions of the Lion OS, and my copy of the installer was pretty early.

I started the upgrade process, which would take a couple of hours, and took my laptop back down to the lobby for some free WiFi to kill time. But it's now evening, the café is closed, and there is no WiFi. Naturally.

Back to the apartment, grab a book, return to the lobby to wait for the electrician.

An hour or so later, the electrician arrived and we returned to the apartment to see what was wrong. It was easy to diagnose. He had made a mistake in the fix, in fact a mistake related to what was causing the original problem. The breaker box has a silly design that makes it too easy to break a connection when working in the box, and that's what had happened. So it was easy to fix and easy to verify that it was fixed, but also easy to understand why it had happened. No excuses, but problem solved and power was now restored.

The computer was still upgrading but nearly done, so a few minutes later I got to try migrating again. Same result, naturally, and another call to Apple and this time little more than an apology. The unsatisfactory solution: do a clean installation and manually restore what's important from the Time Machine backup.

Thank you Apple.

It was fairly straightforward, if slow, to restore my personal files from the home directory on the backup, but the situation for installed software was dire. Restoring an installed program, either using the ludicrous Time Machine UI or copying the files by hand, is insufficient in most cases to bring back the program because you also need manifests and keys and receipts and whatnot. As a result, things such as iWork (Keynote etc.) and Aperture wouldn't run. I could copy every piece of data I could find but the apps refused to let me run them. Despite many attempts digging far too deep into the system, I could not get the right pieces back from the Time Machine backup. Worse, the failure modes were appalling: crashes, strange display states, inexplicable non-workiness. A frustrating mess, but structured perfectly to belong on this day.

For peculiar reasons I didn't have the installation disks for everything handy, so these (expensive!) programs were just gone, even though I had backed up everything as instructed.

Thank you Apple.

I did have some installation disks, so for instance I was able to restore Lightroom and Photoshop, but then of course I needed to wait for huge updates to download even though the data needed was already sitting on the backup drive.

Back on the phone for the other stuff. Because I could prove that I had paid for the software, Apple agreed to send me fresh installation disks for everything of theirs but Aperture, but that would take time. In fact, it took almost a month for the iWork DVD to arrive, which is unacceptably long. I even needed to call twice to remind them before the disks were shipped.

The Aperture story was more complicated. After a marathon debugging session I managed to get it to start but then it needed the install key to let me do anything. I didn't have the disk, so I didn't know the key. Now, Aperture is from part of the company called Pro Tools or something like that, and they have a different way of working. I needed to contact them separately to get Aperture back. It's important to understand I hadn't lost my digital images. They were backed up multiple times, including in the network, on the Time Machine backup, and also on an external drive using the separate "vault" mechanism that is one of the best features of Aperture.

I reached the Aperture people on the phone and after a condensed version of the story convinced them I needed an install key (serial number) to run the version of Aperture I'd copied from the Time Machine backup. I was berated by the person on the phone: Time Machine is not suitable for backing up Aperture databases. (What? Your own company's backup solution doesn't know how to back up? Thank you Apple.) After a couple more rounds of abuse, I convinced the person on the phone that a) I was backing up my database as I should, using an Aperture vault and b) it wasn't the database that was the problem, but the program. I was again told that wasn't a suitable way to back up (again, What?), at which point I surrendered and just begged for an installation key, which was provided, and I could again run Aperture. This was the only time in the story where the people I was interacting with were not at least sympathetic to my situation. I guess Pro is a synonym for unfriendly.

Thank you Apple.

There's much more to the story. It took weeks to get everything working again properly. The complete failure of Time Machine to back up my computer's state properly was shocking to me. After this fiasco, I learned about the Lion Recovery App, which everyone who uses Macs should know about, but was not introduced until well after Lion rolled out with its preposterous not-bootable installation setup. The amount of data I already had on my backup disk but that needed to be copied from the net again was laughable. And there were total mysteries, like GMail hanging forever for the first day or so, a problem that may be unrelated or may just be the way life was this day.

But, well after midnight, worn out, beat up, tired, but with electricity restored and a machine that had a little life in it again, I powered down, took the machine back to my office and started to get ready for bed. Rest was needed and I had had enough of technology for one day.

One more thing

Oh yes, one more thing. There's always one more thing in our technological world.

I walked into the bathroom for my evening ablutions only to have the toilet seat come off completely in my hand.

Just because you started it all, even for this,

Thank you Apple.

Monday, June 25, 2012

Less is exponentially more


Here is the text of the talk I gave at the Go SF meeting in June, 2012.

This is a personal talk. I do not speak for anyone else on the Go team here, although I want to acknowledge right up front that the team is what made and continues to make Go happen. I'd also like to thank the Go SF organizers for giving me the opportunity to talk to you.

I was asked a few weeks ago, "What was the biggest surprise you encountered rolling out Go?" I knew the answer instantly: Although we expected C++ programmers to see Go as an alternative, instead most Go programmers come from languages like Python and Ruby. Very few come from C++.

We—Ken, Robert and myself—were C++ programmers when we designed a new language to solve the problems that we thought needed to be solved for the kind of software we wrote. It seems almost paradoxical that other C++ programmers don't seem to care.

I'd like to talk today about what prompted us to create Go, and why the result should not have surprised us like this. I promise this will be more about Go than about C++, and that if you don't know C++ you'll be able to follow along.

The answer can be summarized like this: Do you think less is more, or less is less?

Here is a metaphor, in the form of a true story. Bell Labs centers were originally assigned three-digit numbers: 111 for Physics Research, 127 for Computing Sciences Research, and so on. In the early 1980s a memo came around announcing that as our understanding of research had grown, it had become necessary to add another digit so we could better characterize our work. So our center became 1127. Ron Hardin joked, half-seriously, that if we really understood our world better, we could drop a digit and go down from 127 to just 27. Of course management didn't get the joke, nor were they expected to, but I think there's wisdom in it. Less can be more. The better you understand, the pithier you can be.

Keep that idea in mind.

Back around September 2007, I was doing some minor but central work on an enormous Google C++ program, one you've all interacted with, and my compilations were taking about 45 minutes on our huge distributed compile cluster. An announcement came around that there was going to be a talk presented by a couple of Google employees serving on the C++ standards committee. They were going to tell us what was coming in C++0x, as it was called at the time. (It's now known as C++11).

In the span of an hour at that talk we heard about something like 35 new features that were being planned. In fact there were many more, but only 35 were described in the talk. Some of the features were minor, of course, but the ones in the talk were at least significant enough to call out. Some were very subtle and hard to understand, like rvalue references, while others were especially C++-like, such as variadic templates, and some were just crazy, like user-defined literals.

At this point I asked myself a question: Did the C++ committee really believe that what was wrong with C++ was that it didn't have enough features? Surely, in a variant of Ron Hardin's joke, it would be a greater achievement to simplify the language rather than to add to it. Of course, that's ridiculous, but keep the idea in mind.

Just a few months before that C++ talk I had given a talk myself, which you can see on YouTube, about a toy concurrent language I had built way back in the 1980s. That language was called Newsqueak and of course it is a precursor to Go.

I gave that talk because there were ideas in Newsqueak that I missed in my work at Google and I had been thinking about them again.  I was convinced they would make it easier to write server code and Google could really benefit from that.

I actually tried and failed to find a way to bring the ideas to C++. It was too difficult to couple the concurrent operations with C++'s control structures, and in turn that made it too hard to see the real advantages. Plus C++ just made it all seem too cumbersome, although I admit I was never truly facile in the language. So I abandoned the idea.

But the C++0x talk got me thinking again.  One thing that really bothered me—and I think Ken and Robert as well—was the new C++ memory model with atomic types. It just felt wrong to put such a microscopically-defined set of details into an already over-burdened type system. It also seemed short-sighted, since it's likely that hardware will change significantly in the next decade and it would be unwise to couple the language too tightly to today's hardware.

We returned to our offices after the talk. I started another compilation, turned my chair around to face Robert, and started asking pointed questions. Before the compilation was done, we'd roped Ken in and had decided to do something. We did not want to be writing in C++ forever, and we—me especially—wanted to have concurrency at my fingertips when writing Google code. We also wanted to address the problem of "programming in the large" head on, about which more later.

We wrote on the white board a bunch of stuff that we wanted, desiderata if you will. We thought big, ignoring detailed syntax and semantics and focusing on the big picture.

I still have a fascinating mail thread from that week. Here are a couple of excerpts:

Robert: Starting point: C, fix some obvious flaws, remove crud, add a few missing features.

Rob: name: 'go'.  you can invent reasons for this name but it has nice properties. it's short, easy to type. tools: goc, gol, goa.  if there's an interactive debugger/interpreter it could just be called 'go'.  the suffix is .go.

Robert: Empty interfaces: interface {}. These are implemented by all interfaces, and thus this could take the place of void*.

We didn't figure it all out right away. For instance, it took us over a year to figure out arrays and slices. But a significant amount of the flavor of the language emerged in that first couple of days.

Notice that Robert said C was the starting point, not C++. I'm not certain but I believe he meant C proper, especially because Ken was there. But it's also true that, in the end, we didn't really start from C. We built from scratch, borrowing only minor things like operators and brace brackets and a few common keywords. (And of course we also borrowed ideas from other languages we knew.) In any case, I see now that we reacted to C++ by going back down to basics, breaking it all down and starting over. We weren't trying to design a better C++, or even a better C. It was to be a better language overall for the kind of software we cared about.

In the end of course it came out quite different from either C or C++. More different even than many realize. I made a list of significant simplifications in Go over C and C++:

  • regular syntax (don't need a symbol table to parse)
  • garbage collection (only)
  • no header files
  • explicit dependencies
  • no circular dependencies
  • constants are just numbers
  • int and int32 are distinct types
  • letter case sets visibility
  • methods for any type (no classes)
  • no subtype inheritance (no subclasses)
  • package-level initialization and well-defined order of initialization
  • files compiled together in a package
  • package-level globals presented in any order
  • no arithmetic conversions (constants help)
  • interfaces are implicit (no "implements" declaration)
  • embedding (no promotion to superclass)
  • methods are declared as functions (no special location)
  • methods are just functions
  • interfaces are just methods (no data)
  • methods match by name only (not by type)
  • no constructors or destructors
  • postincrement and postdecrement are statements, not expressions
  • no preincrement or predecrement
  • assignment is not an expression
  • evaluation order defined in assignment, function call (no "sequence point")
  • no pointer arithmetic
  • memory is always zeroed
  • legal to take address of local variable
  • no "this" in methods
  • segmented stacks
  • no const or other type annotations
  • no templates
  • no exceptions
  • builtin string, slice, map
  • array bounds checking

And yet, with that long list of simplifications and missing pieces, Go is, I believe, more expressive than C or C++. Less can be more.

But you can't take out everything. You need building blocks such as an idea about how types behave, and syntax that works well in practice, and some ineffable thing that makes libraries interoperate well.

We also added some things that were not in C or C++, like slices and maps, composite literals, expressions at the top level of the file (which is a huge thing that mostly goes unremarked), reflection, garbage collection, and so on. Concurrency, too, naturally.

One thing that is conspicuously absent is of course a type hierarchy. Allow me to be rude about that for a minute.

Early in the rollout of Go I was told by someone that he could not imagine working in a language without generic types. As I have reported elsewhere, I found that an odd remark.

To be fair he was probably saying in his own way that he really liked what the STL does for him in C++. For the purpose of argument, though, let's take his claim at face value.

What it says is that he finds writing containers like lists of ints and maps of strings an unbearable burden. I find that an odd claim. I spend very little of my programming time struggling with those issues, even in languages without generic types.

But more important, what it says is that types are the way to lift that burden. Types. Not polymorphic functions or language primitives or helpers of other kinds, but types.

That's the detail that sticks with me.

Programmers who come to Go from C++ and Java miss the idea of programming with types, particularly inheritance and subclassing and all that. Perhaps I'm a philistine about types but I've never found that model particularly expressive.

My late friend Alain Fournier once told me that he considered the lowest form of academic work to be taxonomy. And you know what? Type hierarchies are just taxonomy. You need to decide what piece goes in what box, every type's parent, whether A inherits from B or B from A.  Is a sortable array an array that sorts or a sorter represented by an array? If you believe that types address all design issues you must make that decision.

I believe that's a preposterous way to think about programming. What matters isn't the ancestor relations between things but what they can do for you.

That, of course, is where interfaces come into Go. But they're part of a bigger picture, the true Go philosophy.

If C++ and Java are about type hierarchies and the taxonomy of types, Go is about composition.

Doug McIlroy, the eventual inventor of Unix pipes, wrote in 1964 (!):
We should have some ways of coupling programs like garden hose--screw in another segment when it becomes necessary to massage data in another way. This is the way of IO also.
That is the way of Go also. Go takes that idea and pushes it very far. It is a language of composition and coupling.

The obvious example is the way interfaces give us the composition of components. It doesn't matter what that thing is, if it implements method M I can just drop it in here.

Another important example is how concurrency gives us the composition of independently executing computations.

And there's even an unusual (and very simple) form of type composition: embedding.
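
To make the first and last of those concrete, here is a small illustrative sketch (not code from the talk): a type satisfies an interface simply by having the right method, and embedding composes one type inside another without declaring any relationship between them.

type Writer interface {
  Write(p []byte) (n int, err error)
}

// Discard satisfies Writer implicitly; there is no "implements" declaration.
type Discard struct{}

func (Discard) Write(p []byte) (int, error) { return len(p), nil }

// Logger embeds a Writer. It picks up Write by composition, so a Logger
// can be dropped in anywhere a Writer is expected.
type Logger struct {
  Writer
  Prefix string
}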

These compositional techniques are what give Go its flavor, which is profoundly different from the flavor of C++ or Java programs.

===========

There's an unrelated aspect of Go's design I'd like to touch upon: Go was designed to help write big programs, written and maintained by big teams.

There's this idea about "programming in the large" and somehow C++ and Java own that domain. I believe that's just a historical accident, or perhaps an industrial accident. But the widely held belief is that it has something to do with object-oriented design.

I don't buy that at all. Big software needs methodology to be sure, but not nearly as much as it needs strong dependency management and clean interface abstraction and superb documentation tools, none of which is served well by C++ (although Java does noticeably better).

We don't know yet, because not enough software has been written in Go, but I'm confident Go will turn out to be a superb language for programming in the large. Time will tell.

===========

Now, to come back to the surprising question that opened my talk:

Why does Go, a language designed from the ground up for what C++ is used for, not attract more C++ programmers?

Jokes aside, I think it's because Go and C++ are profoundly different philosophically.

C++ is about having it all there at your fingertips. I found this quote on a C++11 FAQ:
The range of abstractions that C++ can express elegantly, flexibly, and at zero costs compared to hand-crafted specialized code has greatly increased.
That way of thinking just isn't the way Go operates. Zero cost isn't a goal, at least not zero CPU cost. Go's claim is that minimizing programmer effort is a more important consideration.

Go isn't all-encompassing. You don't get everything built in. You don't have precise control of every nuance of execution. For instance, you don't have RAII. Instead you get a garbage collector. You don't even get a memory-freeing function.

What you're given is a set of powerful but easy to understand, easy to use building blocks from which you can assemble—compose—a solution to your problem. It might not end up quite as fast or as sophisticated or as ideologically motivated as the solution you'd write in some of those other languages, but it'll almost certainly be easier to write, easier to read, easier to understand, easier to maintain, and maybe safer.

To put it another way, oversimplifying of course:

Python and Ruby programmers come to Go because they don't have to surrender much expressiveness, but gain performance and get to play with concurrency.

C++ programmers don't come to Go because they have fought hard to gain exquisite control of their programming domain, and don't want to surrender any of it. To them, software isn't just about getting the job done, it's about doing it a certain way.

The issue, then, is that Go's success would contradict their world view.

And we should have realized that from the beginning. People who are excited about C++11's new features are not going to care about a language that has so much less.  Even if, in the end, it offers so much more.

Thank you.

Tuesday, April 03, 2012

The byte order fallacy

Whenever I see code that asks what the native byte order is, it's almost certain the code is either wrong or misguided. And if the native byte order really does matter to the execution of the program, it's almost certain to be dealing with some external software that is either wrong or misguided. If your code contains #ifdef BIG_ENDIAN or the equivalent, you need to unlearn what you think you know about byte order.

The byte order of the computer doesn't matter much at all except to compiler writers and the like, who fuss over allocation of bytes of memory mapped to register pieces. Chances are you're not a compiler writer, so the computer's byte order shouldn't matter to you one bit.

Notice the phrase "computer's byte order". What does matter is the byte order of a peripheral or encoded data stream, but--and this is the key point--the byte order of the computer doing the processing is irrelevant to the processing of the data itself. If the data stream encodes values with byte order B, then the algorithm to decode the value on a computer with byte order C should be about B, not about the relationship between B and C.

Let's say your data stream has a little-endian-encoded 32-bit integer. Here's how to extract it (assuming unsigned bytes):
i = (data[0]<<0) | (data[1]<<8) | (data[2]<<16) | (data[3]<<24);
If it's big-endian, here's how to extract it:
i = (data[3]<<0) | (data[2]<<8) | (data[1]<<16) | (data[0]<<24);
Both these snippets work on any machine, independent of the machine's byte order, independent of alignment issues, independent of just about anything. They are totally portable, given unsigned bytes and 32-bit integers.
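
The same idea carries over to other languages. As a sketch (not part of the original examples), the little-endian case in Go looks essentially the same, and the standard encoding/binary package already provides it; neither form ever asks what the machine's byte order is:

// Decode a little-endian 32-bit integer from the first four bytes of data.
func le32(data []byte) uint32 {
  return uint32(data[0]) | uint32(data[1])<<8 | uint32(data[2])<<16 | uint32(data[3])<<24
}

// Equivalently, using the standard library: binary.LittleEndian.Uint32(data)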

What you might have expected to see for the little-endian case was something like
i = *((int*)data);
#ifdef BIG_ENDIAN
/* swap the bytes */
i = ((i&0xFF)<<24) | (((i>>8)&0xFF)<<16) | (((i>>16)&0xFF)<<8) | (((i>>24)&0xFF)<<0);
#endif
or something similar. I've seen code like that many times. Why not do it that way? Well, for starters:
  1. It's more code.
  2. It assumes integers are addressable at any byte offset; on some machines that's not true.
  3. It depends on integers being 32 bits long, or requires more #ifdefs to pick a 32-bit integer type.
  4. It may be a little faster on little-endian machines, but not much, and it's slower on big-endian machines.
  5. If you're using a little-endian machine when you write this, there's no way to test the big-endian code.
  6. It swaps the bytes, a sure sign of trouble (see below).

By contrast, my version of the code:
  1. Is shorter.
  2. Does not depend on alignment issues.
  3. Computes a 32-bit integer value regardless of the local size of integers.
  4. Is equally fast regardless of local endianness, and fast enough (especially on modern processors) anyway.
  5. Runs the same code on all computers: I can state with confidence that if it works on a little-endian machine it will work on a big-endian machine.
  6. Never "byte swaps".
In other words, it's simpler, cleaner, and utterly portable. There is no reason to ask about local byte order when about to interpret an externally provided byte stream.

I've seen programs that end up swapping bytes two, three, even four times as layers of software grapple over byte order. In fact, byte-swapping is the surest indicator the programmer doesn't understand how byte order works.

Why do people make the byte order mistake so often? I think it's because they've seen a lot of bad code that has convinced them byte order matters. "Here comes an encoded byte stream; time for an #ifdef." In fact, C may be part of the problem: in C it's easy to make byte order look like an issue. If instead you try to write byte-order-dependent code in a type-safe language, you'll find it's very hard. In a sense, byte order only bites you when you cheat.

There's plenty of software that demonstrates the byte order fallacy is really a fallacy. The entire Plan 9 system ran, without architecture-dependent #ifdefs of any kind, on dozens of computers of different makes, models, and byte orders. I promise you, your computer's byte order doesn't matter even at the level of the operating system.

And there's plenty of software that demonstrates how easily you can get it wrong. Here's one example. I don't know if it's still true, but some time back Adobe Photoshop screwed up byte order. Back then, Macs were big-endian and PCs, of course, were little-endian. If you wrote a Photoshop file on the Mac and read it back in, it worked. If you wrote it on a PC and tried to read it on a Mac, though, it wouldn't work unless back on the PC you checked a button that said you wanted the file to be readable on a Mac. (Why wouldn't you? Seriously, why wouldn't you?) Ironically, when you read a Mac-written file on a PC, it always worked, which demonstrates that someone at Adobe figured out something about byte order. But there would have been no problems transferring files between machines, and no need for a check box, if the people at Adobe wrote proper code to encode and decode their files, code that could have been identical between the platforms. I guarantee that to get this wrong took far more code than it would have taken to get it right. [Note added in 2013: I'm told by folks at Adobe that the option was for TIFF files and only needed for third-party plugins. That doesn't explain why it was PC-only or necessary at all. Adobe might not be the right culprit but the issue was real.]

Just last week I was reviewing some test code that was checking byte order, and after some discussion it turned out that there was a byte-order-dependency bug in the code being tested. As is often the case, the existence of byte-order-checking was evidence of the presence of a bug. Once the bug was fixed, the test no longer cared about byte order.

And neither should you, because byte order doesn't matter.

Saturday, December 31, 2011

Esmerelda's Imagination

An actress acquaintance of mine—let's call her Esmerelda—once said, "I can't imagine being anything except an actress." To which the retort was given, "You can't be much of an actress then, can you?"

I was reminded of this exchange when someone said to me about Go, "I can't imagine programming in a language that doesn't have generics." My retort, unspoken this time, was, "You can't be much of a programmer, then, can you?"

This is not an essay about generics (which are a fine thing and may arrive in Go one day, or may not) but about imagination, or at least what passes for imagination among computer programmers: complaint. A friend observed that the definitive modern pastime is to complain on line. For the complainers, it's fun, for the recipients of the complaint it can be dispiriting. As a recipient, I am pushing back—by complaining, of course.

Not so long ago, a programmer was someone who programmed, but that seems to be the last thing programmers do nowadays. Today, the definition of a programmer is someone who complains unless the problem being solved has already been solved and whose solution can be expressed in a single line of code. (From the point of view of a language designer, this reduces to a corollary of language success: every program must be reducible to a single line of code or your language sucks. The lessons of APL have been lost.)

A different, more liberal definition might be that a programmer is someone who approaches every problem exactly the same way and complains about the tools if the approach is unsuccessful.

For the programmer population, the modern pastime demands that if one is required to program, or at least to think while programming, one blogs/tweets/rants instead. I have seen people write thousands of words of on-line vituperation that problem X requires a few extra keystrokes than it might otherwise, missing the irony that had they spent those words on programming, they could have solved the problem many times over with the saved keystrokes. But, of course, that would be programming.

Two years ago Go went public. This year, Dart was announced. Both came from Google but from different teams with different goals; they have little in common. Yet I was struck by a property of the criticisms of Dart in the first few days: by doing a global substitution of "Go" for "Dart", many of the early complaints about Go would have fit right into the stream of Dart invective. It was unnecessary to try Go or Dart before commenting publicly on them; in fact, it was important not to (for one thing, trying them would require programming). The criticisms were loud and vociferous but irrelevant because they weren't about the languages at all. They were just a standard reaction to something new, empty of meaning, the result of a modern programmer's need to complain about everything different. Complaints are infinitely recyclable. ("I can't imagine programming in a language without XXX.") After all, they have a low quality standard: they need not be checked by a compiler.

A while after Go launched, the criticisms changed tenor somewhat. Some people had actually tried it, but there were still many complainers, including the one quoted above. The problem now was that imagination had failed: Go is a language for writing Go programs, not Java programs or Haskell programs or any other language's programs. You need to think a different way to write good Go programs. But that takes time and effort, more than most will invest. So the usual story is to translate one program from another language into Go and see how it turns out. But translation misses idiom. A first attempt to write, for example, some Java construct in Go will likely fail, while a different Go-specific approach might succeed and illuminate. After 10 years of Java programming and 10 minutes of Go programming, any comparison of the languages' capabilities is unlikely to generate insight, yet here come the results, because that's a modern programmer's job.

It's not all bad, of course. Two years on, Go has lots of people who've spent the time to learn how it's meant to be used, and for many willing to invest such time the results have been worthwhile. It takes time and imagination and programming to learn how to use any language well, but it can be time well spent. The growing Go community has generated lots of great software and has given me hope, hope that there may still be actual programmers out there.

However, I still see far too much ill-informed commentary about Go on the web, so for my own protection I will start 2012 with a resolution:

I resolve to recognize that a complaint reveals more about the complainer than the complained-about. Authority is won not by rants but by experience and insight, which require practice and imagination. And maybe some programming.