Throughout the last 50 years, we have seen numerous interfaces come to light, some good, some bad, and some downright ridiculous. Where is the right balance between the magical, the familiar, and the powerful? What do we sacrifice when we make something more human, or what do we gain when we make something more powerful? Is it possible to find a true balance?
Starting with Metaphor
In the early days of Graphical User Interfaces (GUIs), metaphors were heavily employed to create an interface that people could wrap their heads around. Most of these metaphors are still with us today, like the desktop, files, the trash can, and so on.
In an essay entitled Working with Interface Metaphors, Thomas D. Erickson (then at Apple) explains the need for metaphor as a tool for understanding in everyday life. He says that “many people think of metaphor as a flowery sort of language found chiefly in poetry and bad novels. This is not correct. Metaphor is an integral part of our language and thought. It appears not only in poetry and novels, but in our everyday speech about common matters. Mostly we don’t notice; metaphor is such a constant part of our speech and thought that it is invisible.” He goes on to say that “we speak of argument as though it is war. Arguments have sides that can be defended and attacked. If a position is indefensible, one can retreat from it. Arguments can have weak points — they can even be destroyed; arguments can be right on target; arguments can be shot down. […] The metaphorical way in which we talk and think about argument is the rule, not the exception.”
Perhaps the first commercial example of this comes from 1983 on the Commodore 64, a year before the Macintosh. When we look back at interfaces that tried to turn the computer into a space, we quickly see that many of the early examples fell flat in defining their purpose, and in some cases sacrificed too much power for ease of use.
Introducing a Little Magic
By the mid-80s it was well understood that bringing metaphor into technology offered a way to make computers more accessible and understandable. But according to Alan Kay (best known for his work at Xerox PARC and later at Apple), metaphors were starting to go too far.
In a 1989 essay entitled User Interfaces: A Personal View, he offers a departure from strictly using metaphor:
“One of the most compelling snares is the use of the term metaphor to describe a correspondence between what the users see on the screen and how they should think about what they are manipulating. My main complaint is that metaphor is a poor metaphor for what needs to be done. At [Xerox] PARC we coined the phrase user illusion to describe what we were about when designing user interface. […] For example, the screen as “paper to be marked on” is a metaphor that suggests pencils, brushes, and typewriting. Fine, as far as it goes. But it is the magic — understandable magic — that really counts. Should we transfer the paper metaphor so perfectly that the screen is as hard as paper to erase and change? Clearly not. If it is to be like magical paper, then it is the magical part that is all important and that must be most strongly attended to in the user interface design.”
In fact, the notion of using strict metaphor in technology wasn't just tiring; it was also limiting. He plainly advocates for something more in his critique of the desktop metaphor:
“I don’t want a screen that is much like my physical desk. It just gets messy — yet I hate to clean it up while in the middle of a project. And I’m usually working on lots of different projects at the same time. At [Xerox] PARC they used to accuse me of filling up my desk until it was uselessly tangled and then abandoning it for another one! One solution to this is “project views” as originally implemented by Dan Ingalls in Smalltalk-76. Again, this is more of a user illusion than a metaphor. Each area holds all the tools and materials for a particular project and is automatically suspended when you leave it. A bit like multiple desks — but it is the magic that is truly important. Here the magic is that every change to the system that is made while in a project area — no matter how deeply made to the system (even changing the meaning of integer “+”!) — is logged locally to the project area. Yet each of the project areas are parallel processes that can communicate with one another. This is a user illusion that is easy to understand yet doesn’t map well to the physical world.”
The desktop metaphor was an understandable way to make computers easier to understand, but what was the point in limiting yourself to one desktop? Why not take the familiarity of the desktop and augment its abilities?
Segmenting information into different spaces is something that started with the Amiga in 1985, which allowed users to separate their active work into multiple logical groupings. The concept would later appear in Xerox Rooms, then in macOS, Windows, BeOS, and Linux.
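Kay's "project views" can be made concrete with a toy model: each area keeps its own tools, materials, and a local log of the changes made while inside it, and leaving an area suspends it rather than discarding it. The following is a minimal, hypothetical Python sketch of that idea; the `Workspace` and `Desktop` names and their methods are my own invention for illustration, not anything from Smalltalk-76.

```python
# Toy model of "project views": each area holds its own state and a
# local change log; switching away suspends it, switching back resumes it.
class Workspace:
    def __init__(self, name):
        self.name = name
        self.items = {}        # the "tools and materials" for this project
        self.change_log = []   # every change made here is logged locally

    def put(self, key, value):
        self.items[key] = value
        self.change_log.append((key, value))

class Desktop:
    def __init__(self):
        self.spaces = {}
        self.active = None

    def switch(self, name):
        # Leaving the current space "suspends" it automatically; its
        # state and log stay intact until we return.
        self.active = self.spaces.setdefault(name, Workspace(name))
        return self.active

desk = Desktop()
essay = desk.switch("essay")
essay.put("draft", "interface metaphor notes")
code = desk.switch("smalltalk")   # the essay space is suspended, not lost
code.put("patch", "redefine integer +")
print(desk.spaces["essay"].change_log)
```

The magic Kay describes is exactly what the physical desktop cannot do: every space keeps its own history, yet all of them coexist and can be returned to at any time.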
It was Alan's view that a metaphor was only good to the extent that it provided meaning to the user; that is, there is no use in upholding real-world limitations on items in the digital realm that have the capacity to be magical.
He would certainly have some things to say about interfaces that cropped up in the mid-90s.
Back to Metaphor
By the time we reached 1995, the industry seemed to have run back into heavy use of metaphor.
Perhaps the most literal of the metaphor-based interfaces of the mid-90s was the Packard Bell Navigator. It was a shell that ran over Windows 3.1 and Windows 95 and shipped with most Packard Bell computers in 1995. It allowed users to navigate to different rooms such as the Living Room, the Workspace, the Info Room, or the Software Room, all set in a pre-rendered hillside home. Because the space was pre-rendered, there was little emphasis on customization within it.
Magic Cap was an operating system developed for "personal intelligent communicators" by General Magic, headed by a number of Macintosh veterans and other soon-to-be-influential figures like Andy Rubin, who would go on to co-found Android. Something about Magic Cap was special. Just listen to Andy, Bill, or Megan talk about it. I do believe in the importance of making a device truly familiar and known, and I do believe that quirk and charm belong in our technology. If General Magic's software had been accompanied by adequate hardware, I think it would have been an amazing platform to introduce the world to the next wave of communications.
Microsoft Bob is quite well known, and its demise is complicated. Here, the interface didn't make new things possible; it just simplified the normal uses of a computer. It did so in a very friendly way, but by 1995 people had gone through almost a decade of conditioning telling them what to expect from a GUI, which in most scenarios was friendly enough.
It is important to understand that none of these ideas are derived from hard science, and all are subject to the ebb and flow of every other technological trend. It is apparent, however, that the people behind these ideas believe passionately in their approaches, and, ultimately, they are all working towards the same goal: to make computing more human. When the interface falls away, we are able to discover more, make more connections, and ultimately become better versions of ourselves. In our aversion to these failed interfaces, many of our tools today are fractured and unapproachable; their utilitarian demeanor doesn't invite us to play and explore. And so, there has been a shift to bring the best of both worlds together in a logical, yet human way.
Returning to Reality
Ever since the failed attempts of the mid-90s to make software more human and familiar, the community has been cautious in proposing interfaces that lean heavily on such techniques. The announcement of the iPhone, and subsequently the iPad, marked one of the first times that we were to see these interfaces succeed on such a big scale. It is said that Apple integrated skeuomorphic elements into iOS to harken back to that familiarity, especially when introducing a device that was so foreign and new. For people to not only understand it but also accept it, it had to convey a sense of familiarity. It echoed Alan Kay's thoughts on user illusion rather than metaphor, allowing new abilities to shine through while keeping the familiar around visually.
Perhaps the most influential thinker in this space now is Bret Victor. Bret has put out numerous talks and articles on the benefits of creating the ultimate interface not as something on a screen, but in an environment that extends your abilities.
In his keynote entitled The Humane Representation of Thought, he outlines the ideal computing space:
“A book wants to be a space that you walk around in; something that feels a little bit more like a museum gallery than a book today, so you read this book by walking around in it, engaging visually, spatially, tangibly, using all of those capabilities that we’ve evolved for understanding spaces and environments. So, you want to learn linear algebra, for example, you download the linear algebra textbook which is this entire space; maybe on each floor there is a particular chapter, and you kind of make your way through the book by making your way through the space, interacting with the things, concepts are represented in a tangible, physical form, and you can use your spatial forms of perception to understand the gist of the material.”
He is careful to note that what he is talking about isn’t VR or MR… “it’s just R.” He ultimately wants these affordances to be present in the real, physical world. His motivation for such a space stems from his belief that once the interface works with us, we are able to cut through the noise and understand more, seeing connections that were originally too obscured to find. There is so much out there to learn and experience, but so often we cannot even realize its existence because we’re looking through the wrong lens.
“The example I like to give is back in the days of Roman numerals, basic multiplication was considered this incredibly technical concept that only official mathematicians could handle,” he continues. “But then once Arabic numerals came around, you could actually do arithmetic on paper, and we found that 7-year-olds can understand multiplication. It’s not that multiplication itself was difficult. It was just that the representation of numbers — the interface — was wrong,” said Bret Victor in an interview with John Pavlus.
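Victor's point can be made concrete in code: to multiply two Roman numerals, the natural move is to change representation first, do the arithmetic in positional form, and convert back. The following is a small illustrative Python sketch (the function names are mine, chosen for clarity):

```python
# Multiplying Roman numerals is easiest done by switching representations:
# convert to positional (Arabic) numbers, multiply, then convert back.
# Subtractive pairs (CM, IX, IV, ...) are listed before their components
# so both conversion directions handle them correctly.
VALUES = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
          (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
          (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def from_roman(s):
    total, i = 0, 0
    for value, symbol in VALUES:
        while s.startswith(symbol, i):
            total += value
            i += len(symbol)
    return total

def to_roman(n):
    out = []
    for value, symbol in VALUES:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

def roman_multiply(a, b):
    # The "hard" operation becomes trivial once the representation changes.
    return to_roman(from_roman(a) * from_roman(b))

print(roman_multiply("XII", "VII"))  # XII x VII = 84 -> LXXXIV
```

All the difficulty lives in the two conversion functions; the multiplication itself is one operator. That is the sense in which the representation, not the concept, was the obstacle.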
Bret’s talks and proposals do seem to advocate for skeuomorphism as a tool, in the sense that it presents familiarity and, done correctly, lets us intuitively understand the software. He also advocates for experimentation and tight feedback loops, which let us pick up on relationships and connections.
It is my hope that through this post you were able to discover something new that changes your view on how we could interact with computers. We don’t currently compute spatially, but perhaps that’s because we haven’t yet arrived at the correct solution. When we have an interface that disappears, one that fosters connections, and one that persists with our identity, perhaps we will know that it has arrived. With hundreds of failed experiments behind us, I think we’re on the verge of an incredibly personal and powerful interface. Once it comes, we can get back to what we do best: being human.
This is just the first in a series attempting to uncover the history of computer interfaces. Watch out for further additions in the coming weeks!
Hacker Noon is how hackers start their afternoons. We’re a part of the @AMI family. We are now accepting submissions and happy to discuss advertising & sponsorship opportunities.
Until next time, don’t take the realities of the world for granted!