Against UI standardization

A user interface is a translation layer that sits between the user’s mental model of a problem and the developer’s mental model of that same problem. UI design is, therefore, trying to solve the same problem as programming language design: how do we represent all of the details relevant to the problem domain in an unambiguous way while avoiding the need for awkward manipulations on the part of the user? Language design makes certain assumptions about the type, breadth, and depth of the user’s knowledge on certain subjects: notably, that the user is willing to read some documentation and expend some effort learning the representation in addition to the effort put into solving their problem. So, we can say that UI design is about creating languages whose full expressive power is revealed gradually — languages with a shallow initial learning curve.

UI design, as practiced, is not in line with this language-centric perspective, nor with the philosophies or practices of the much-cited mid-century thinkers who designed physical devices. A large part of the divergence stems from the standardization of widgets and UI patterns.

Google, Apple, NeXT, and many others have published design style guides, while every UI toolkit ships semi-consistent native styles (which are often difficult to change). Even on the web, where the document-centric structure of embedded-markup display makes a wild experimental UI almost exactly as difficult to build as a conventional one, convention reigns. Why?

One answer is marketing. The ‘look and feel’ of a UI is considered an important part of branding. Of course, if you are not a first-party developer, remaining consistent with the look and feel of your host OS or UI toolkit is not a good branding decision — you are increasing the mindshare of someone else’s brand!

Another answer is poor tooling: UI toolkits often lack good support for theming or for creating genuinely new kinds of widgets, and often impose system-wide restrictions, rooted in the needs of the built-in widgets, that limit one’s ability to create behaviors the original developers didn’t consider. (Consider the difficulty of displaying large quantities of editable text on a 3D surface in OpenGL, or of making two widgets overlap or cross window boundaries in any mainstream UI toolkit.) However, this doesn’t explain why web UIs, which have no built-in widgets and must resort to CSS hacks to implement any structure other than minimally formatted wrapped text, are so conventional in appearance.

Perhaps developers fear that users will be confused by a novel-looking UI? Of course, users are also confused by conventionally structured UIs, which (by their nature) cannot match the natural way a non-technical user thinks about their problem.

Ultimately, I think the main culprit is a lack of imagination, and a lack of willingness to think deeply about the appropriate way to structure a UI for a very particular task.

Much as the developers of UI toolkits have failed to imagine that people might want to use their tools in unanticipated ways, designers have failed to imagine that users neither know nor care about the stock widgets, templates, and patterns. Users have already had to learn the applications they use today; they will learn new applications faster when those applications more closely match their workflows and conceptual models.

The user is the domain expert: they know their own business better than we do. The job of a developer or a designer is to cater to them and recommend possibilities, not to dictate their environment.
