It's not about technology for its own sake. It's about being able to implement your ideas.
Basic math used to be slow. To loop roughly 10,000 times on an 8-bit processor, it was faster to iterate 256 times in an inner loop, then wrap that in an outer loop executing 40 times. Because the inner counter fit in a single byte, that avoided multi-instruction 16-bit addition and comparison each time through.
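Here's a rough sketch of that trick in C, purely for illustration (the loop body is a made-up stand-in, and a modern compiler makes the whole exercise unnecessary):

```c
#include <stdio.h>

/* Sketch of the 8-bit nested-loop trick described above. The inner counter
   fits in one byte, so an 8-bit CPU needed no 16-bit arithmetic for it.
   40 outer passes * 256 inner passes = 10,240 iterations, close enough
   to 10,000 for the kind of work being done. */
int main(void) {
    unsigned long work = 0;            /* stands in for the real loop body */
    unsigned char outer, inner;

    for (outer = 0; outer < 40; outer++) {
        inner = 0;
        do {
            work++;                    /* hypothetical unit of work */
            inner++;
        } while (inner != 0);          /* unsigned char wraps 255 -> 0 after 256 passes */
    }

    printf("iterations: %lu\n", work); /* prints 10240 */
    return 0;
}
```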
Multiplication and division used to be slow. There were no CPU instructions for those operations. If one of the factors was a constant, the multiply could be broken down into a series of adds and bit shifts (to multiply N by 44: (N << 5) + (N << 3) + (N << 2), because 44 = 32 + 8 + 4), but the general case was much worse.
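A quick sketch of that decomposition, again just for illustration:

```c
#include <assert.h>

/* A constant multiply decomposed into shifts and adds:
   44 = 32 + 8 + 4, so n * 44 == (n << 5) + (n << 3) + (n << 2). */
static unsigned mul44(unsigned n) {
    return (n << 5) + (n << 3) + (n << 2);
}

int main(void) {
    assert(mul44(7) == 7 * 44);      /* 224 + 56 + 28 = 308 */
    assert(mul44(123) == 123 * 44);
    return 0;
}
```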
Floating point used to be slow. Before FPUs, floating point math was done in software at great expense. Early hardware was better, but hardly impressive. On the original 8087 math coprocessor, simple floating point addition took a minimum of 90 cycles, division over 200, and there were instructions that took over a thousand cycles to complete.
Graphics used to be slow. For the longest time, programmers who had trouble getting 320x200 displays to update at any kind of reasonable rate scoffed at the possibility of games running at the astounding resolution of 640x480.
All of these concerns have been solved to comical degrees. A modern CPU can add multiple 64-bit values at once, in a single cycle. Ditto for floating point operations, including multiplication. All the work of rendering sprites and polygons in software has been offloaded to separate, highly parallel processors that run alongside the multiple cores of the main CPU.
Somewhere in the late 1990s, when the then-popular Pentium II reached clock speeds in the 300-400MHz range, processing power became effectively infinite. Sure, there were notable exceptions, like video compression, high-end 3D games, and editing extremely high-resolution images, but I was comfortably developing in interpreted Erlang and running complex Perl scripts without worrying about performance.
Compared to when I was building a graphically intensive game on an early 66MHz Power Macintosh, compared to when I was writing commercial telecommunications software on a 20MHz Sun workstation, compared to developing on a wee 8-bit Atari home computer, that late 1990s Pentium II was a miracle.
Since then, all advances in processing power have been icing. Sure, some of that has been eaten up by cameras spitting out twelve megapixels of image data instead of two, by Windows 7 having more overhead than Windows 98, and by greatly increased monitor resolutions. And there are always algorithmically complex problems that never run fast enough; a hardware review site showing that chipset X is 8.17% faster than chipset Y in a particular benchmark isn't going to change that.
Are you taking advantage of living in the era of infinite computing power? Have you set aside fixations with low-level performance? Have you put your own productivity ahead of vague concerns with optimization? Are you programming in whatever manner lets you focus on the quality and usefulness of the end product?
To be honest, that sounds a bit Seth Godin-esque, feel-good enough to be labeled as inspirational yet promptly forgotten. But there have been and will be hit iOS / Android / web applications from people without any knowledge of traditional software engineering, from people using toolkits that could easily be labeled as technically inefficient, from people who don't even realize they're reliant on the massive computing power that's now part of almost every available platform.
(If you liked this, you might enjoy How Much Processing Power Does it Take to be Fast?.)
June 26, 2011