The Wrong Kind of Paranoia

Have you ever considered how many programming language features exist only to prevent developers from doing something? And it's not only about keeping you out of other people's code. Often the person you're guarding against is yourself.

For example, modules let you prevent people from calling functions that haven't been explicitly exported. In C there's static, which hides a function from other separately compiled files.

const prevents modifying a variable. For pointers there's a second level of const-ness, making the pointed-to data read-only. C++ goes even further, as C++ tends to, allowing a class method to be marked const, meaning that it doesn't change any instance variables.

Many object-oriented languages let you group methods into private and public sections, so you can't access private methods externally. At least Java, C++, and Object Pascal add protected, which muddies the water. In C# you can seal classes so they can't be inherited. I'm trying real hard not to bring up friend classes, so I won't.

Here's the question: how much does all this pedantic hiding, annotating, and making sure you don't double-cross yourself by using a "for internal use only" method actually improve your software? I realize I'm treading in dangerous territory here, so take a few deep breaths first.

I like const, and I automatically precede local variables with it, but the compiler doesn't need me to do that. It can tell that a local integer is only assigned to once, and the generated code will be exactly the same. You could argue that the qualifier prevents accidental changes, but if that has ever happened to me in real code, it was rare enough that I can't recall it.

Internal class methods are similar. If they're not in the tutorial, examples, or reference, you don't even know they exist. If you use the header file for documentation, and internal methods are grouped together beneath the terse comment "internal methods," then why are you calling them? Even if they're secured with the private incantation, nothing is stopping you from editing the file, deleting that word, and going for it. And if this is your own code that you're doing this with, then this scenario is teetering on the brink of madness.

What all of these fine-grained controls have done is to put the focus on software engineering in the small: the satisfaction of building so many tiny, faux-secure fortresses by getting publics and protecteds in the right places and adding immutability keywords before every parameter and local variable. But you've still got a sea of modules and classes, and is anything actually simpler or more reliable because some methods are behind the private firewall?

I'm going to give a couple of examples of building for isolation and reliability at the system level, but don't overgeneralize these.

Suppose you're building code to control an X-ray machine. You don't want the UI and all of that mixed together with the scary code that irradiates the patient. You want the control code on the device itself, and a small channel of communication for sending commands and getting back the results. The UI system only knows about that channel, and can't accidentally compromise the state of the hardware.

There's an architecture that's been used in video games for a long time now, where rendering and other engine-level functions are decoupled from the game logic, and the two communicate via a local socket. This is especially nice if the engine is in C++ and you're using a different language for the game proper. I've done this with Erlang, which worked out well, at least under OS X.

Both of these have a boldness to them, where an entire part of the system is isolated from the rest, and the resulting design is easier to understand and simpler overall. That's more important than trying to protect each tiny piece from yourself.