programming in the
twenty-first century

It's not about technology for its own sake. It's about being able to implement your ideas.

Eleven Years of Erlang

I've written about how I started using Erlang. A better question is why, after eleven years, I'm still using it.

For the record, I do use other languages. I enjoy writing Python code, and I've taught other people how to use Python. This website is statically generated by a Perl program that I had fun writing. And I dabble in various languages of the month which have cropped up. (Another website I used to maintain was generated by a script that I kept reimplementing. It started out written in Perl, but transitioned through at least REBOL, J, and Erlang before I was through.)

One of the two big reasons I've stuck with Erlang is its simplicity. The functional core of Erlang can be, and has been, described in a couple of short chapters. Knowledge of four data types--numbers, atoms, lists, tuples--is enough for most programming problems. Binaries and funs can be tackled later. This simplicity is good, because the difficult part of Erlang, and of any mostly-functional language, is learning to write code without destructive updates. The language itself shouldn't pour complexity on top of that.
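As a sketch of how far those four types go--this is a hypothetical example of mine, not from the original post--here is a module that classifies a term using nothing but guards and pattern matching:

```erlang
%% Hypothetical module illustrating Erlang's four core data types:
%% numbers, atoms, lists, tuples. Pattern matching stands in for
%% destructive updates: a "modified" value is a new value, and the
%% original is left untouched.
-module(basics).
-export([describe/1]).

describe(N) when is_number(N) -> {number, N};
describe(A) when is_atom(A)   -> {atom, A};
describe(L) when is_list(L)   -> {list, length(L)};
describe(T) when is_tuple(T)  -> {tuple, tuple_size(T)}.
```

In the shell, `basics:describe(hello)` returns `{atom, hello}` and `basics:describe({a, b})` returns `{tuple, 2}`; "updating" a value means binding the result of a function to a fresh variable rather than mutating anything in place.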

There are many possibilities for extending Erlang with new data types, with an alternative to records being high on the list. Should strings be split off from lists into a distinct entity? What about arrays of floats, so there's no need to box each value? How about a "machine integer" type that's represented without tagging and that doesn't get automatically promoted to an arbitrarily sized "big number" when needed?

All of those additional types are optimizations. Lists work just fine as strings, but even the most naive implementation of strings as unicode arrays would take half the memory of the equivalent lists, and that's a powerful enticement. When Knuth warned of premature optimization, I like to think he wasn't talking so much about obfuscating code in the process of micro-optimizing for speed as pointing out that code is made faster by specializing it. Specialization reduces your options, and you end up with a solution that's more focused and at the same time more brittle. You don't want to do that until you really need to.
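The "half the memory" figure works out like this on a 32-bit system--my own back-of-the-envelope arithmetic, not from the post:

```erlang
%% Back-of-the-envelope for a 32-bit system (4-byte words).
%% A string as a list costs one cons cell per character: a head
%% word plus a tail pointer, so 2 words = 8 bytes per character.
%% A naive UTF-32 array spends 4 bytes per character.
ListBytes  = fun(Len) -> Len * 2 * 4 end,
ArrayBytes = fun(Len) -> Len * 4 end,
{ListBytes(100), ArrayBytes(100)}.
%% {800, 400} -- the array version uses half the memory
```

On a 64-bit system the gap only widens, since each cons cell doubles to 16 bytes while a UTF-32 code point stays at 4.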

It may be an overreaction to my years of optimization-focused programming, but I like the philosophy of making the Erlang system fast without just caving in and providing C-style abilities. I know how to write low-level C. And now I know how to write good high-level functional code. If I had been presented with a menu of optimization-oriented data types in Erlang, that might never have happened. I'd be writing C in the guise of Erlang.

The second reason I'm still using Erlang is because I understand it. I don't mean I know how to code in it, I mean I get it all the way down. I know more or less what transformations are applied by the compiler and the BEAM loader. I know how the BEAM virtual machine works. And unlike most languages, Erlang holds together as a full system. You could decide to ditch all existing C compilers and CPUs and start over completely, and Erlang could serve as a foundation for this new world of computing. The ECOMP project (warning: PowerPoint) proved that an FPGA running the Erlang VM directly gives impressive results.

Let me zoom in on one specific detail of the Erlang runtime. If you take an arbitrary piece of data in a language of the Lua or Python family, at the lowest level it ends up wrapped inside a C struct. There's a type field, maybe a reference count, and because it's a heap-allocated block of memory there's other hidden overhead that comes along with any dynamic allocation (such as the size of the block). Lua is unabashedly reliant on malloc-like heap management for just about everything.

Erlang memory handling is much more basic. There's a block of memory per process, and it grows from bottom to top until full. Most data objects aren't wrapped in structs. A tuple, for example, is one header cell holding the length, followed by one cell per element. The system identifies it as a tuple by tagging the pointer to it. You know the memory used for an N-element tuple is always 1 + N cells, period. Were I trying to optimize data representation by hand, with the caveat that type info needs to be included, it would be tough to do significantly better.
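You can check the 1 + N claim from the shell with `erts_debug:flat_size/1`, an internal debugging function that reports how many heap words a term occupies (it's not part of the documented API, and results can vary across OTP releases):

```erlang
%% erts_debug:flat_size/1 counts heap words. Atoms are immediates
%% stored directly in the tuple's cells, so a 3-tuple is one header
%% word plus three element words:
1> erts_debug:flat_size({a, b, c}).
4
2> erts_debug:flat_size({a, b, c, d}).
5
```

Exactly the header-plus-elements layout described above, with no per-object bookkeeping struct in sight.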

I'm sure some people are correctly pointing out that this is how most Lisp and Scheme systems have worked since those languages were developed. There's nothing preventing an imperative language from using the same methods (and indeed this is sometimes the case).

Erlang takes this further by having a separate block of memory for each process, so when the block gets full only that particular block needs to be garbage collected. If it's a 64K block, it takes microseconds to collect, as compared to potentially traversing a heap containing the hundreds of megabytes of data in the full running system. Disallowing destructive updates allows some nice optimizations in the garbage collector, because pointers are guaranteed to reference older objects (this is sometimes called a "unidirectional heap"). Together these are much simpler than building a real-time garbage collector that can survive under the pressure of giant heaps.
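The per-process heaps are easy to observe--a hypothetical shell session of mine, using the documented `process_info/2`; the specific number is the default minimum heap size in current OTP releases and may differ on yours:

```erlang
%% Spawn a process that just waits; it gets its own private heap.
1> Pid = spawn(fun() -> receive stop -> ok end end).
2> process_info(Pid, heap_size).
{heap_size,233}
```

A heap of a couple hundred words is what makes the "microseconds to collect" arithmetic work: the collector only ever walks one process's small block, never the whole system.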

Would I use Erlang for everything? Of course not. Erlang is clearly a bad match for some types of programming. It would be silly to force-fit Erlang into the iPhone, for example, with Apple promoting Objective-C as the one true way. But it's the best mix of power and simplicity that I've come across.

permalink March 10, 2010


I'm James Hague, a recovering programmer who has been designing video games since the 1980s. Programming Without Being Obsessed With Programming and Organizational Skills Beat Algorithmic Wizardry are good starting points. For the older stuff, try the 2012 Retrospective.
