programming in the
twenty-first century

It's not about technology for its own sake. It's about being able to implement your ideas.

Optimizing for Fan Noise

The first money I ever earned, outside of getting an allowance, was writing assembly language games for an 8-bit home computer magazine called ANALOG Computing. Those games ended up as pages of printed listings of lines like this:

1050 DATA 4CBC08A6A4BC7D09A20986B7B980
0995E895D4B99E099DC91C9DB51CA90095C0C8
CA10E8A20086A88E7D1D8E7E,608

A typical game could be 75 to 125+ of those lines (and those "three" lines above count as one; it's word-wrapped for a 40-column display). On the printed page they were a wall of hex digits. And people typed them in by hand--I typed them in by hand--in what can only be described as a painstaking process. Just try reading that data to yourself and typing it into a text editor. Go ahead: 4C-BC-08-A6...

Typos were easy to make. That's the purpose of the "608" at the end of the line. It's a checksum verified by a separate "correctness checker" utility.
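
I don't remember the exact formula the checker used, so treat this as a rough sketch of the idea rather than the real algorithm. In Python it might look like the following, where the sum-of-bytes rule and the sample values are made up for illustration:

   # Sketch of a line-by-line "correctness checker" for typed-in listings.
   # The checksum rule here (a plain sum of the decoded bytes) is a stand-in,
   # not ANALOG Computing's actual algorithm.

   def verify_line(hex_digits, printed_checksum):
       """Return True if the typed hex data matches the printed checksum."""
       data = bytes.fromhex(hex_digits)   # every two hex digits become one byte
       return sum(data) == printed_checksum

   # Hypothetical line: the digits "A90000" printed with a checksum of 169.
   if not verify_line("A90000", 169):
       print("Checksum mismatch: recheck your typing on this line")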

There was a strong incentive for the authors of these games to optimize their code. Not for speed, but to minimize the number of characters that people who bought the magazine had to type. Warning, 6502 code ahead! This:

   LDA #0
   TAY

was two fewer printed digits than this:

   LDA #0
   LDY #0

TAY is a one-byte instruction, while LDY #0 takes two, so that substitution saved a byte: two hex digits nobody had to type. Across a 4K or 6K game, those savings mattered. Two characters here, four characters there, and maybe the total line count could be reduced by four lines, six lines, ten lines. This had nothing to do with actual code performance. Even on a sub-2 MHz processor those scattered few cycles were noise. But finding your place in the current line, saying "A6," then typing "A" and "6" took time. Measurable time. Time that was worth optimizing.

Most of the discussions I see about optimization are less concrete. It's always "speed" and "memory," but in the way someone with a big house and a good job says "I need more money." Optimization only matters if you're optimizing something where you can feel the difference, and you can't feel even thousands of bytes or nanoseconds. Optimizing for program understandability...I'll buy that, but it's more of an internal thing. There's one concern that really does matter these days, and it's not abstract in the least: power consumption.

It's more than just battery life. If a running program means I get an hour less work done before looking for a place to plug in, that's not horrible. The experience is the same, just shorter. But power consumption equals heat, and that's what really matters to me: when the CPU load in my MacBook cranks up, the machine gets hot, the fan spins up like a jet on the runway, and that defeats the purpose of having a nice little notebook I can bring places. I can't edit music tracks over a roaring fan, and it's not something I'd want next to me on the plane or one table over at the coffee shop. Of course it doesn't whine like that most of the time, only when something pushes the system hard.

What matters in 2010 is optimizing for fan noise.

If you're not buying this, take a look at Apple's stats on the power consumption and thermal output of iMacs (which, remember, are systems where the CPU and fan sit right there on your desk in the same enclosure as the monitor). There's a big difference in power consumption, and correspondingly in heat generated, between a CPU at idle and one at max load. That means the programs you're running are directly responsible both for how long a battery charge lasts and for how loudly the fan spins.

Obvious? Perhaps, but it's something that didn't come up with most popular 8-bit and 16-bit processors, because those chips never idled. They ran flat-out all the time, even if only in a busy loop waiting for interrupts to hit. Across the iMac line, the difference between idle and max load tends to grow as the processor's clock speed goes up. The worst case is the early 2009 24-inch iMac: 387.3 BTU/h at idle, 710.3 BTU/h at max load, for a difference of 323 BTU/h. (For comparison, that difference is larger than the entire maximum thermal output of the 20-inch iMac CPU: 298.5 BTU/h.)
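
To put that idle-versus-flat-out distinction in terms of code (my illustration, nothing from Apple's spec sheets): a loop that spins waiting for something keeps a core at max load the whole time, while one that blocks hands the wait to the operating system and lets the processor drop into its low-power state. A minimal Python sketch:

   import threading
   import time

   data_ready = threading.Event()

   # The old 8-bit way: run flat-out, polling until something happens.
   # On a modern laptop this keeps a core at max load and the fan busy.
   def spin_until_ready():
       while not data_ready.is_set():
           pass   # burns cycles, and watts, doing nothing

   # The idle-friendly way: block until woken, so the processor can spend
   # the wait in its low-power state.
   def sleep_until_ready():
       data_ready.wait()

   threading.Thread(target=sleep_until_ready, daemon=True).start()
   time.sleep(0.1)
   data_ready.set()   # wake the waiter; no spinning required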

The utmost in processing speed, once the goal in itself, now has a price attached. Manufacturers cite impressive benchmark numbers, but there's an implicit assumption that you don't really want to hit those numbers in the everyday use of a mobile computer. Get all those cores going all the time, including the vector floating-point units, and your reward is forty minutes of use on a full battery charge with the fan whooshing the whole time. And if you optimize your code purely for speed, you're getting what you asked for.

Realistically, is there anything you can do? Yes, but it means breaking free from the mindset that all of a computer's power is there for the taking. Doubling the speed of a program by moving from one core to four is a win if you're looking at raw benchmark numbers, but an overall loss in terms of computation per watt. Ideas that sounded good back when CPU cycles were a free resource, such as anticipating a time-consuming task the user might request and starting it in the background, are now questionable features. Ditto for persistent, unnecessary animations.
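
As one invented example of that shift: instead of kicking off an expensive computation at startup on the chance the user wants it later, wait until they actually ask, then cache the answer. Nothing below comes from a real application; it's a small Python sketch of the on-demand habit, and expensive_report is a placeholder name:

   import functools

   def expensive_report():
       # Stand-in for a time-consuming task the user *might* request.
       return sum(i * i for i in range(10_000_000))

   # Speculative version: run it unconditionally at launch "to be ready,"
   # spinning up cores (and the fan) whether or not anyone ever asks:
   #
   #     report = expensive_report()
   #
   # On-demand version: do nothing until the user asks, then remember the
   # answer so it isn't recomputed.
   @functools.lru_cache(maxsize=1)
   def report_on_request():
       return expensive_report()

   # The CPU only spins up here, at the moment of an explicit request.
   print(report_on_request())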

Nanoseconds are abstract. The sound waves generated by poorly designed applications are not.

February 10, 2010
