programming in the
twenty-first century

It's not about technology for its own sake. It's about being able to implement your ideas.

Self-Imposed Complexity

Bad data visualizations are often more computationally expensive--and harder to implement--than clear versions. A 3D line graph is harder to read than the standard 2D variety, yet the code to create one involves the additional concepts of filled polygons, shading, viewing angle, and line depth. An exploded 3D pie chart brings nothing over an unexploded version, and both still miss out on the simplicity of a flat pie chart (and there's a strong case to be made for using the even simpler bar chart instead).

Even with a basic bar chart there are often embellishments that detract from the purpose of the chart while increasing both the interface needed to create them and the code needed to draw them: bars with gradients or images, drop shadows, unnecessary borders. Edward Tufte has dubbed these chartjunk. A bar chart with all of the useless fluff removed looks like something that, resolution aside, could have been drawn on a computer from thirty years ago, and that's a curious thing.
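As a sketch of how little a fluff-free bar chart actually requires (the data and dimensions below are invented for illustration), plain rectangles are enough: no gradients, shadows, or borders anywhere.

```python
# A chartjunk-free bar chart is just flat rectangles. This emits one
# as a tiny SVG string. Data and chart dimensions are made up.

def bar_chart_svg(labels, values, width=320, height=200, pad=10):
    bar_w = (width - pad * 2) / len(values)
    top = max(values)
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}" height="{height}">']
    for i, (label, v) in enumerate(zip(labels, values)):
        h = (height - pad * 2) * v / top        # scale bar to the tallest value
        x = pad + i * bar_w
        y = height - pad - h
        # One flat rectangle per bar, with a small gap between bars.
        parts.append(f'<rect x="{x:.1f}" y="{y:.1f}" '
                     f'width="{bar_w - 4:.1f}" height="{h:.1f}" fill="#444"/>')
    parts.append('</svg>')
    return '\n'.join(parts)

svg = bar_chart_svg(["a", "b", "c", "d"], [3, 7, 5, 2])
```

Resolution aside, the output really could have come from a computer of thirty years ago.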

But what I really wanted to talk about is vector graphics.

Hopeful graphic designers have been saying for years that vector images should replace bitmaps for UI elements. No more redrawing and re-exporting to support a new screen size. No more smooth curves breaking into jagged pixels when zoomed in. It's an enticing proposition, and if it had been adopted years ago, then the shift to ultra-high resolution displays would have been seamless--no developer intervention required.

Except for one thing: realistic vector icons are more complicated than they appear. If you look at an Illustrator tutorial for creating a translucent, faux 3D globe, something that might represent "the network" or "the internet," it's not just a couple of Bézier curves and filled regions. There are drop shadows with soft edges and blur filters and glows and reflections and tricky gradients. That's the problem with scalable vectors for everything. It takes a huge amount of processing to draw and composite all of these layers of detailed description, and meanwhile the 64x64 bitmap version was already drawn by the GPU, and there's enough frame time left to draw thousands more.
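The cost asymmetry can be sketched with a toy model. Every number below is invented for illustration, and real renderers are far more sophisticated, but the shape of the arithmetic holds: layered effects multiply per-pixel work, while a prerendered bitmap is a single copy.

```python
# Back-of-the-envelope cost model, not a renderer; all figures here
# are assumptions chosen only to show the ratio.
W = H = 64                      # icon size in pixels

# Blitting a prerendered 64x64 bitmap: roughly one operation per pixel.
blit_ops = W * H

# Drawing the "live" vector version: rasterize several effect layers,
# then run one blur pass that reads a 9x9 neighborhood per output pixel.
layers = 6                      # shapes, shadow, glow, reflection, gradients
blur_kernel = 9
vector_ops = layers * W * H + W * H * blur_kernel ** 2

ratio = vector_ops / blit_ops   # dozens of times more work per icon
```

Even this crude accounting leaves out compositing, antialiasing, and path tessellation, all of which fall on the vector side of the ledger.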

That was the view three or more years ago, when user-interface accoutrements were thick with gloss and chrome and textures that you wanted to run your finger over to feel the bumps. But now, looking at the comparatively primitive, yet aesthetically pleasing icons of iOS 7 and Windows 8, the idea that they could be live vector descriptions isn't so outlandish. And maybe what's kept us from getting there sooner is that it was hard to have self-imposed restraint amid a whirlwind of so much new technology. It was hard to say, look, we're going to have a clean visual language that, resolution aside, could have worked on a computer from thirty years ago.

December 8, 2013




I'm James Hague, a recovering programmer who has been designing video games since the 1980s. Programming Without Being Obsessed With Programming and Organizational Skills Beat Algorithmic Wizardry are good starting points. For the older stuff, try the 2012 Retrospective.
