programming in the
twenty-first century

It's not about technology for its own sake. It's about being able to implement your ideas.

Living Inside Your Own Black Box

Every so often I run across a lament that programmers no longer understand the systems they work on, that programming has turned into searches through massive quantities of documentation, that large applications are built by stacking together loosely defined libraries. Most recently it was Mike Taylor's "Whatever happened to programming?", and it's worth the time to read.

To me, it's not that the act of programming has gotten more difficult. I'd even say that programming has gotten much easier. Most of the Apple Pascal assignments I had in high school would be a fraction of the bulk if written in Ruby or Python. Arrays don't have fixed lengths. Strings are easy. Dictionaries have subsumed other data structures. Generic sort routines are painless to use. Functions can be tested interactively. Times are good!
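As a toy illustration of those claims (my own example, not one of the assignments): counting and ranking words takes a few lines, leaning on growable lists, easy string handling, a dictionary, and a generic sort.

words = "the quick brown fox jumps over the lazy dog".split()
counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1
# Most frequent words first.
print(sorted(counts.items(), key=lambda kv: -kv[1]))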

That's not to say all problems can be solved effortlessly. Far from it. But it's a tight feedback loop: think, experiment, write some code, reconsider, repeat. This works as long as you can live inside an isolated world, where the basic facilities of your programming language are the tools you have to work with. But at some point that doesn't work, and you have to deal with outside realities.

Here's the simplest example I can think of: Write a program to draw a line on the screen. Any line, any color, doesn't matter. No ASCII art.

In Python the first question is "What UI toolkit?" There are bindings for SDL, Cocoa, wxWindows, and others. Selecting one of those still doesn't mean that you can simply call a function and see your line. SDL requires some up front effort to learn how to create a window and choose the right resolution and color depth and so on. And then you still can't draw a line unless you use OpenGL or get an add-on package like SDL_gfx. If you decide to take the Cocoa route, then you need to understand its whole messaging / windowing / drawing model, and you also need to understand how Python interfaces with it. Maybe there's a beautifully simple package out there that lets you draw lines, and then the question becomes "Can I access that library from the language I'm using?" An even more basic question: "Is the library written using a paradigm that's a good match for my language?" (Think of a library based on subclassing mutable objects and try to use it from Haskell.)
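For a taste of that up-front effort, here's a minimal sketch using Pygame, one Python binding that layers its own drawing routines over SDL. Even with the line-drawing function handed to you, the window ceremony comes first, and the result vanishes without an event loop:

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))   # window, resolution, color depth
pygame.draw.line(screen, (255, 255, 255), (0, 0), (639, 479))
pygame.display.flip()                          # nothing appears until you flip

# The window closes immediately unless you keep pumping events.
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
pygame.quit()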

There's a clear separation between programming languages and the capabilities of modern operating systems. Any popular OS is obviously designed for creating windows and drawing and getting user input, but those are not fundamental features of modern languages. At one time regular expressions weren't standard in programming languages either, but now they're built into Perl and Ruby, and they ship as a module in the official Python distribution.
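In Python that's a one-liner with the bundled re module:

import re
print(re.findall(r"\d+", "640x480 at 60Hz"))   # ['640', '480', '60']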

A handful of language designers have tried to make GUI programming as easy as traditional programming. The Tk library for Tcl, which is still the foundation for Python's out-of-the-box IDE, allows basic UI creation with simple, declarative statements. REBOL is a more recent incarnation of the same idea, that sample code involving windows and user input and graphics should be a handful of lines, not multiple pages of wxWindows fussing. I wish more people were working on such things.
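That flavor survives in Python's bundled Tk binding. A sketch, assuming the Python 3 module name (it was Tkinter on Python 2): a window, a canvas, and the line, in half a dozen statements.

import tkinter

root = tkinter.Tk()
canvas = tkinter.Canvas(root, width=640, height=480)
canvas.pack()
canvas.create_line(0, 0, 639, 479)
root.mainloop()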

A completely different approach is to go back to the isolationist view of only using the natural capabilities of a programming language, but in a more extreme way. I can draw a line in Python with this tuple:

("line",0,0,639,479)

or I can do the same thing in Erlang with two fewer characters:

{line,0,0,639,479}

I know it works, because I can see it right there. The line starts at coordinates 0,0 and ends at 639,479. It works on any computer with any video card, including systems I haven't used yet, like the iPad. I can use the same technique to play sounds and build elaborate UIs.

That the results are entirely in my head is of no matter.

It may sound like I'm being facetious, but I'm not. In most applications, interactions between code and the outside world can be narrowed down to a couple of critical moments. Even in something as complex as a game, you really just need a few bytes representing user input at the start of a frame, then much later you have a list of things to draw and a list of sounds to start, and those get handed off to a thin, external driver of sorts, the small part of the application that does the messy hardware interfacing.

The rest of the code can live in isolation, doing arbitrarily complex tasks like laying out web pages and mixing guitar tracks. It takes some practice to build applications this way, without scattering calls to external libraries throughout the rest of the code, but there are big wins to be had. Fewer dependencies on platform specifics. Fewer worries about getting overly reliant on library X. And most importantly, it's a way to declutter and get back to basics, to focus on writing the important code, and to delve into those thousands of pages of API documentation as little as possible.
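To make that shape concrete, here's a hedged sketch with hypothetical names: the pure part of a frame turns input and state into a list of draw tuples, and a small render function at the edge is the only code that knows about a real library (Pygame, in this sketch).

def update_frame(inputs, state):
    # Pure: data in, data out. No library calls anywhere in here.
    x = (state["x"] + inputs.get("dx", 0)) % 640
    return {"x": x}, [("clear",), ("line", x, 0, 639, 479)]

def render(draw_list, screen):
    # The thin driver: the only code that touches the hardware-facing library.
    import pygame
    for cmd in draw_list:
        if cmd[0] == "clear":
            screen.fill((0, 0, 0))
        elif cmd[0] == "line":
            _, x0, y0, x1, y1 = cmd
            pygame.draw.line(screen, (255, 255, 255), (x0, y0), (x1, y1))
    pygame.display.flip()

Swap Pygame for Cocoa or anything else and only render changes; everything above it stays pure data.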

March 23, 2010

