The 8086 "AAA" instruction seemed like a good idea at the time. In the 1970s there was still a case to be made for operating on binary-coded decimal values, with two digits per byte. What's the advantage of BCD? Large values can be easily displayed without multi-byte division or multiplication. "ASCII Adjust After Addition," or AAA, was committed to the x86 hardware and 30+ years later it's still there, emulated in microcode, in every i7 processor.
The C library function memcpy seemed like a good idea at the time. memmove was robust, properly handling the case where the source and destination overlapped. That robustness came at the expense of a few extra instructions, enough of a concern to justify a second, "optimized" memory copying routine (a.k.a. memcpy). Since then we've had to live with both functions, though there has yet to be an example of an application whose impressive performance can be credited to the absence of overlap-detection code in memcpy.
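If you've never peeked inside a C library, the difference looks roughly like this. This is a naive sketch with names of my own (real implementations copy whole words at a time and are far cleverer); it's only here to show where memmove's extra instructions go.

```c
#include <stddef.h>
#include <stdint.h>

/* Naive memcpy: assumes the regions don't overlap, so it can
   always copy front to back. */
void *my_memcpy(void *dst, const void *src, size_t n)
{
    uint8_t *d = dst;
    const uint8_t *s = src;
    while (n--)
        *d++ = *s++;
    return dst;
}

/* Naive memmove: the "extra instructions" are the overlap check.
   If the destination starts inside the source, copy back to front
   so bytes aren't clobbered before they're read. */
void *my_memmove(void *dst, const void *src, size_t n)
{
    uint8_t *d = dst;
    const uint8_t *s = src;
    if (d > s && d < s + n) {
        d += n;
        s += n;
        while (n--)
            *--d = *--s;
    } else {
        while (n--)
            *d++ = *s++;
    }
    return dst;
}
```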
libpng seemed like a good idea at the time. The theory was to have an easy, platform-independent way of reading and writing PNG files. The result does work, and it is platform independent, but it's possibly the only image decoding library where I can read through the documentation and still not know how to load an image. I always Google "simple libpng example" and cut and paste the 20+ line function that turns up.
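For the record, here's roughly what that pasted function looks like, sketched with the standard libpng read calls. The helper name is mine, there's no pixel-format normalization, and error handling stops at the obligatory setjmp.

```c
#include <stdio.h>
#include <stdlib.h>
#include <png.h>

/* Load a PNG into rows of raw pixel data. Returns an array of row
   pointers, or NULL on failure; width, height, and rowbytes are
   filled in for the caller. */
png_bytep *load_png(const char *filename, png_uint_32 *width,
                    png_uint_32 *height, size_t *rowbytes)
{
    FILE *fp = fopen(filename, "rb");
    if (!fp)
        return NULL;

    png_structp png = png_create_read_struct(PNG_LIBPNG_VER_STRING,
                                             NULL, NULL, NULL);
    png_infop info = png_create_info_struct(png);
    if (!png || !info || setjmp(png_jmpbuf(png))) {
        png_destroy_read_struct(&png, &info, NULL);
        fclose(fp);
        return NULL;
    }

    png_init_io(png, fp);
    png_read_info(png, info);

    *width    = png_get_image_width(png, info);
    *height   = png_get_image_height(png, info);
    *rowbytes = png_get_rowbytes(png, info);

    /* One allocation per row, which is what png_read_image expects. */
    png_bytep *rows = malloc(*height * sizeof(png_bytep));
    for (png_uint_32 y = 0; y < *height; y++)
        rows[y] = malloc(*rowbytes);

    png_read_image(png, rows);

    png_destroy_read_struct(&png, &info, NULL);
    fclose(fp);
    return rows;
}
```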
The UNIX ls utility seemed like a good idea at the time. It's the poster child for the UNIX way: a small tool that does exactly one thing well. Here that thing is to display a list of filenames. But deciding exactly what filenames to display and in what format led to the addition of over 35 command-line switches. Now the man page for the BSD version of ls bears the shame of this footnote: "To maintain backward compatibility, the relationships between the many options are quite complex."
None of these examples are what caused modern computers to be incomprehensible. None of them are what caused SDKs to ship with 200-page overview documents to give some clue where to start with the other thousands of pages of API description.
But all the little bits of complexity, all those cases where indecision caused one option that probably wasn't even needed in the first place to be replaced by two options, all those bad choices that were never remedied for fear of someone somewhere having to change a line of code...they slowly accreted until it all got out of control, and we got comfortable with systems that were impossible to understand.
We did this. We who claim to value simplicity are the guilty party. See, all those little design decisions actually matter, and there were places where we could have stopped and said "no, don't do this." And even if we were lazy and didn't do the right thing when changes were easy, before there were thousands of users, we still could have gone back and fixed things later. But we didn't.
(If you liked this, you might enjoy Living in the Era of Infinite Computing Power.)
May 18, 2012