programming in the
twenty-first century

It's not about technology for its own sake. It's about being able to implement your ideas.

Documenting the Undocumentable

Not too long ago, any substantial commercial software came in a hefty box filled with hundreds or thousands of printed pages of introductory and reference material, often in multiple volumes. Over time the paper manuals became less comprehensive, leaving only key pieces of documentation in printed form and relegating the reference material to online help systems. In many cases the concept of a manual was dropped completely. If you can't figure something out, you can always Google for it, watch YouTube videos, or buy a book.

If you're expecting a lament for good, old-fashioned paper manuals, then this isn't it. I'm torn between the demise of the manual being a good thing, because almost no one read manuals in the first place, and the move to digital formats hiding how undocumentable many modern software packages have become.

Look at Photoshop CS6. The "Help and Tutorials" PDF is 750 pages, with much of that being links to external videos, documents, and tutorials. Clearly that's still not enough information, because there's a huge market for Photoshop books and classes. The first one I found at Amazon, Adobe Photoshop CS6 Bible, is 1100 pages.

The most fascinating part of all of this is what's become the tip of the documentation iceberg: the Quick Start guide.

This may be the only non-clinical documentation that ships with an application. It's likely the only thing a user will read before clicking around and learning through discovery or Google. So what do you put in the Quick Start guide? Simple tutorials? Different tutorials for different audiences? Explanations of the most common options?

Here's what I'd like to see: What the developers of the software were thinking when they designed it.

I don't mean coding methodologies; I mean the assumptions that were made about how the program should be used. For example, some image editors add a new layer each time you create a vector-based element like a rectangle. That means lots of layers, and that's okay. The philosophy is that bitmaps and editable vector graphics are kept completely separate. Other apps put everything into the same layer unless you explicitly create a new one. The philosophy is that layers are an organizational tool for the user.
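Both philosophies are easy to state in code. Here's a minimal sketch (Python, not any real editor's API; the class and method names are made up for illustration) of the two layer policies:

```python
# Hypothetical sketch contrasting two layer philosophies for vector
# shapes in an image editor. Not modeled on any actual application.

class Document:
    def __init__(self, layer_per_shape):
        # layer_per_shape=True: every vector element gets its own layer,
        # keeping bitmaps and editable vectors strictly separate.
        # layer_per_shape=False: layers are an organizational tool for
        # the user; shapes land on the current layer unless one is added.
        self.layer_per_shape = layer_per_shape
        self.layers = [[]]  # start with one empty layer

    def add_rectangle(self, rect):
        if self.layer_per_shape:
            self.layers.append([rect])    # new layer for each shape
        else:
            self.layers[-1].append(rect)  # reuse the current layer

doc_a = Document(layer_per_shape=True)
doc_b = Document(layer_per_shape=False)
for r in ["r1", "r2", "r3"]:
    doc_a.add_rectangle(r)
    doc_b.add_rectangle(r)

print(len(doc_a.layers))  # 4: the base layer plus one per rectangle
print(len(doc_b.layers))  # 1: everything on the same layer
```

Neither policy is wrong; each only looks wrong if you arrive expecting the other one.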

Every application has philosophies like this that provide a deeper understanding once you know about them, but seem random otherwise. Why does the iMovie project size remain the same after removing twenty seconds of video? Because the philosophy is that video edits are non-destructive, so you never lose the source footage. Why is it so much work to change the fonts in a paper written in Word? Because you shouldn't be setting fonts directly; you should be using paragraph styles to signify your intent and then making visual adjustments later.
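The paragraph-style philosophy is a layer of indirection: paragraphs reference a named style rather than carrying fonts directly, so one edit to the style restyles every matching paragraph. A hypothetical sketch (Python dictionaries standing in for a word processor's object model, which is an assumption on my part) of that indirection:

```python
# Hypothetical sketch of style indirection: paragraphs point at a named
# style, so redefining the style retroactively changes every paragraph
# that uses it. Not Word's actual object model.

styles = {"Heading": {"font": "Arial"}, "Body": {"font": "Georgia"}}

paragraphs = [
    {"text": "Introduction", "style": "Heading"},
    {"text": "Lorem ipsum...", "style": "Body"},
    {"text": "Methods", "style": "Heading"},
]

def font_of(paragraph):
    # Fonts are looked up through the style, never stored on the paragraph.
    return styles[paragraph["style"]]["font"]

# One edit to the style definition changes every heading at once.
styles["Heading"]["font"] = "Helvetica"

print([font_of(p) for p in paragraphs])
# ['Helvetica', 'Georgia', 'Helvetica']
```

If you set fonts directly instead, you've bypassed the indirection, and that's exactly when the program starts fighting you.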

I want to see these philosophies documented right up front, so I don't have to guess and extrapolate about what I perceive as weird behavior. I'm thinking "What? Where are all these layers coming from?" but the developers wouldn't even blink, because that's normal to them.

And I'd know that, if they had taken the time to tell me.

(If you liked this, you might enjoy A Short Story About Verbosity.)

December 29, 2012


I'm James Hague, a recovering programmer who has been designing video games since the 1980s. Programming Without Being Obsessed With Programming and Organizational Skills Beat Algorithmic Wizardry are good starting points. For the older stuff, try the 2012 Retrospective.
