The Decline of Usability: Revisited
In which we once more delve into the world of user interface design.
Autumn 2023
Three years ago, I wrote a rant about the problems of our current UI paradigm. The complaints I voiced were hardly new or unique, nor was the text what I'd consider my best writing. It was, honestly, mostly a way to blow off steam. It seems I struck a nerve, though, because it's proven to be one of the most popular texts I've published here. For some time, I've thought about writing a follow-up, and a recent resurgence in the text's popularity prompted me to finally do so.
I didn't (and still don't) have any delusions that my ramblings would somehow change anything. And indeed, three years on, nothing has changed - at least not for the better. The most depressing part is perhaps that the debate around these issues hasn't changed one iota, either. The same non-arguments crop up every time the subject is raised:
"Well, gramps, maybe things weren't super duper great in the past, either?" "Where's the research, dude?" "It's progress, man. Progress! You can't stop progress!" "Uhhhhm, actually, compadre, we can do so much stuff with computers nowadays! That's usability, broseph!"
If I sound salty, it's because I am. Deal with it.
What were we talking about?
Usability, as defined by Wikipedia, is "the capacity of a system to provide a condition for its users to perform the tasks safely, effectively, and efficiently while enjoying the experience." Its relation to software is further specified: "In software engineering, usability is the degree to which a software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use."
Let's go on with Wikipedia:
"The primary notion of usability is that an object designed with a generalized users' psychology and physiology in mind is, for example:
- More efficient to use — takes less time to accomplish a particular task.
- Easier to learn — operation can be learned by observing the object.
- More satisfying to use."
In short, usability is the ease with which a predetermined task can be accomplished. Consequently, "It looks fresh" isn't usability; it's aesthetics. Likewise, the lack of a specific program feature isn't a usability problem; usability is about whether the features a program does have can be used as easily, efficiently and safely as possible. Looking for the power switch on a hand-cranked drill is silly, but we might rightly complain if we had to operate power tools with our pinky fingers. Similarly, a word processor without a mail merge function is perhaps intended for other types of word processing - such as writing novels. Hence, being usable in many different situations isn't automatically the same as having a high level of usability.
Conceptual consistency
In my original text, some arguments deal with static GUI design (such as low contrast or excessive use of screen real estate), but many do not. Some are about consistency: consistency over time, consistency across applications and consistency across platforms.
Pure, static GUI design is a subset of usability: poor design choices, such as low contrast and illegible fonts, will lead to worse usability. It is, however, not always clear where to draw the line between what we call GUI (toolkit) design, UI design, usability and UX. One affects the other, roughly in the order listed, from the ground up.
All three forms of consistency (across time, application and platform) were, until roughly the release of Windows 8, honoured by most major vendors. When I talk about consistency, it's not to be understood as the exact same look, widget for widget and icon for icon - it means adhering to basic standard principles of operation. One such example is the row of "File, Edit, View, Help" dropdown menus, recognizable across different operating systems, programs and UI toolkits. Sure, Java Swing looked a bit different from Win32, but they were still based around the same basic notions and concepts as all the other toolkits on the market.
This is not to say that consistency always trumps everything else: sometimes, real improvement of usability can be obtained through a complete interface overhaul. Windows 95 is a good example of that.
Show us the research, dude!
In discussions like these, there's usually at least one person who shows up to demand data or research, but curiously never presents anything to back up their own claims about modern UI superiority. But, by all means. The concepts I champion have been around for decades. Many of them have been studied in detail, some of them even build on ideas as old as - or older than - computers themselves.
One such concept is that of affordances, meaning how the look and shape of certain objects communicate information about how the object can be operated - a push button that protrudes from the surrounding surface, for example. Some affordances come more or less naturally, such as the taper and curve of a knife blade indicating the location of the sharpened edge.
Affordances can be constructed in computer interfaces using skeuomorphism, for example emulating a protruding button by drawing light and dark borders that suggest a 3D bevel around its edges.
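As a toy illustration (mine, not taken from any particular style guide), the classic trick is simply a light border along the top and left edges and a dark border along the bottom and right, implying a light source at the upper left - something like this minimal Tk sketch:

```python
import tkinter as tk

# Minimal sketch of a "3D bevel" affordance: light edges on the top/left
# and dark edges on the bottom/right make a flat grey rectangle read as a
# raised, pressable button (implied light source at the upper left).
root = tk.Tk()
canvas = tk.Canvas(root, width=200, height=80, bg="#c0c0c0", highlightthickness=0)
canvas.pack()

x0, y0, x1, y1 = 40, 20, 160, 60                                      # button outline
canvas.create_rectangle(x0, y0, x1, y1, fill="#c0c0c0", outline="")
canvas.create_line(x0, y1, x0, y0, x1, y0, fill="#ffffff", width=2)   # highlight: left + top
canvas.create_line(x0, y1, x1, y1, x1, y0, fill="#404040", width=2)   # shadow: bottom + right
canvas.create_text((x0 + x1) // 2, (y0 + y1) // 2, text="OK")

root.mainloop()
```

Classic toolkits drew this for you - Tk widgets still accept relief="raised", for instance - and "flat design" is, in essence, the decision to stop doing it.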
These are fundamental concepts in all types of industrial design and have been for a long time. Another example is why the Mac, Atari and Amiga all put the menu bar at the top of the screen: it's an oft-used target and should be easy to move the pointer to. This is an application of Fitts's law.
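For the record, the law itself is just a short formula. In its common Shannon formulation, the time to acquire a target is MT = a + b · log2(D/W + 1), where D is the distance to the target, W is the target's width along the axis of motion, and a and b are empirically fitted constants. A menu bar pinned to the top edge of the screen has, in effect, an enormous W in the vertical direction: the pointer stops at the screen edge no matter how far you overshoot, so you can simply throw the mouse upwards and still land on the menu.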
Industry Standards
The basic construction of this menu bar, starting with "File, Edit", was invented at Apple and introduced with the Lisa in 1983. It was then picked up in similar fashion by nearly all desktops that followed: Windows, Mac, GEM, Amiga, OS/2 - the list goes on - until it converged almost completely.
Another widespread source of influence was IBM Common User Access from 1987, which among other things introduced the kind of keyboard shortcuts we're still familiar with, and the ellipsis ("...") to indicate menu choices that opened a dialog window.
CDE - the Common Desktop Environment - was an effort of several major Unix vendors to standardize a graphical environment across platforms. This was adopted by at least Sun, Hewlett Packard, IBM, DEC (including in OpenVMS), Fujitsu, SCO and (for a short time) Silicon Graphics.
In short, anyone claiming that there weren't efforts in creating and maintaining an industry standard regarding UI design is either deeply ignorant or blatantly dishonest.
Further principles
A lot of my complaints can be described using the Thirteen Principles of Display Design, from the book "An Introduction to Human Factors Engineering" by Christopher D. Wickens, Sallie E. Gordon and Yili Liu. I'll be using Wikipedia's summarized principles here:
#2: Avoid absolute judgment limits. Do not ask the user to determine the level of a variable based on a single sensory variable (e.g., color, size, loudness).
#5: Similarity causes confusion: Use distinguishable elements.
#6: Principle of pictorial realism: A display should look like the variable that it represents.
Both judgment limits and similarity apply to window focus indicators: a clearly visible title bar and border, in a distinctly different colour from the surrounding windows, is a dissimilarity that makes the active window stand out and thus easy to identify.
All three principles apply to icons. They used to be colourful little works of art, using both shape and colour to make them discernible. Even during the monochrome days, most of them were carefully drawn, pixel by pixel, to represent something clearly distinguishable and identifiable.
Today's icons are often extremely stylized to the point of being meaningless, at least without knowing what their predecessors once looked like. Coupled with designs that often make a point of using monochrome icons and low contrast colours, they blur together into an indistinguishable mass of similar-looking geometric primitives.
Consider the below icons from Microsoft Outlook, for example. What is the "Archive" icon even supposed to depict? The lower part of a printer, with a sheet of paper sticking out? "Sweep" is most likely a broom - but would you be able to determine that without the text? In the lower toolbar, what's probably a flag might also just be a sketch by Mondrian. I honestly have no idea what the rightmost icon is supposed to resemble.
#8: Minimizing information access cost or interaction cost. (...) A display design should minimize this cost by allowing frequently accessed sources to be located at the nearest possible position.
#9: Proximity compatibility principle. Divided attention between two information sources may be necessary for the completion of one task. These sources must be mentally integrated and are defined to have close mental proximity. Information access costs should be low.
The modern design paradigm is all about running applications in full screen mode, with large UI elements and ample white space. It's true that information density must be balanced. Interfaces that are too cramped will become illegible and distracting - but interfaces that rely on full screen hegemony don't work as well when a user needs to see two programs at once on one screen.
MDIs (multiple document interfaces) do exist in modern apps, but they're often less flexible than in traditional programs, where free-floating sub-windows could be tiled, stacked, resized and placed according to the user's desire. Users of the old IRC client mIRC will perhaps remember its in-app window management facilities, sorely lacking from modern counterparts like Teams, Slack and Discord.
The hamburger menu also comes to mind. Compared to traditional menu bars, it works against Fitts's law, impedes discoverability and often increases the number of clicks needed to navigate.
Hidden scroll bars not only hide information from the user - they also completely disregard Fitts's law: you cannot reliably aim your mouse at an invisible object.
#13: Principle of consistency. Old habits from other displays will easily transfer to support the processing of new displays if they are designed consistently.
I guess throwing out 30-40 years of design and interaction tenets basically overnight can be described as slightly counter to this principle.
Counterexample: Considering mIRC
The mIRC interface was in no way perfect, and yet it was so advanced we're apparently no longer able to recreate it:
- A look and feel that corresponds exactly to a large number of other Windows programs at the time.
- Unambiguous demarcation of window focus.
- Clear visual separation of content and program functionality.
- A menu bar with categorized commands for fast discoverability.
- Beveled separators indicating related functions.
- Icons discernible through both shape and colour.
- Always visible scroll bars, clearly indicating the current position in the chat backlog.
- Complete user freedom of window - and thus information - positioning.
How about showing me some research?
The above principles can be used to critique many other trends in contemporary UI crafting. The following examples have been reiterated ad nauseam, but let's do another round:
- Lack of affordances, e.g. buttons and other clickable elements that don't clearly and distinctly communicate their function.
- Ambiguous state, e.g. highly stylized slide switches instead of checkboxes.
- Flat design and low contrast in general. 3D bevels help with many different concepts, including discoverability (what's clickable), distinction (raised above or sunk into the UI) and Fitts's law (clear demarcation of borders makes the size of a target easy to identify).
I'm deeply interested in seeing "data and research" from proponents of modern UI concepts. What kind of research was behind Microsoft's various changes in Windows 8, for example? I'm unsure, since they immediately backtracked on plenty of them in versions 8.1 and 10 - including reinstating the Start menu.
Putting UI elements in window title bars is often rationalized as "saving screen real estate". How, then, are Microsoft's gargantuan "ribbon"-style toolbars rationalized - and what type of research and data prompted their introduction? There are many more examples, but I'm specifically picking Microsoft here because if there's one company with enough cash to fund some science, it's got to be them.
Auto-hiding scroll bars, shrinking the draggable area of window titles and cramming the bulk of a program's functionality inside a cramped hamburger menu are, as discussed above, blatantly breaking a number of well known UI design principles. Surely - surely - that's because of the incredible amount of meticulous research underpinning those decisions. And surely - surely - considering how prevalent this paradigm is today, any UI designer worth their salt can recite the very good reasons for and many benefits of these concepts in their sleep?
Consistency over time
Plenty of programs change something about the UI in some way in almost every new version - and new versions are released very often these days. Firefox is a prime example of this, shuffling things around or changing the way they look and behave in nearly every update. A lot of Firefox users then get very angry and either 1) find ways to patch away the new changes, or 2) simply power through and get used to them, relearning workflows and reconditioning muscle memory. Then a new version comes along, and the cycle repeats.
Is there any reflection here? Do software vendors research how this affects usability? Do they actually learn something from these constant redesigns - as in, are new patterns and best practices formed and adhered to - or is it just, in fact, tweaking things randomly for the sake of tweaking? Are things genuinely getting gradually better or are they just getting gradually different? I'm genuinely curious about the processes and methods behind our new, supposedly superior UI paradigm, because I can't seem to discern any.
A perfect example of this Sudden Redesign Syndrome occurred very recently. Yes, it's Slack again, going about their usual crazy antics. This time it's not inconsistency over time or across platforms. No, I was recently blessed with two completely different UI designs in the very same application, on the very same device. One of the two "slacks" I'm a member of recently got a UI update - and the other didn't. The effect is that when I switch between the slacks, inside the same program instance, the whole UI changes. This has now been the case for several months, on both iOS and Linux, which leads me to believe it's intentional.
Yes, this is how two different "slacks" look in the same instance of the same program.
Slack is not some backwater cottage industry. It's a big company with thousands of employees and millions of users - many of whom are paying good money for their software. It would be interesting indeed to see the research, data and rationale behind this particular decision. It would also be interesting to know what the new design offers in usability that the old one didn't, and if these improvements (if, indeed, there are any) are significant enough to force every single user to re-learn the application interface once more.
I'm not advocating for complete rigidity in all programs forever, but there's great value in consistency over time. As an end user, the constant redesigns I'm now subjected to make me feel more like a lab rat than someone entrusted to use a tool. I'm running around in a maze built by a bunch of developers and designers, hunting for a piece of cheese that constantly teleports to a new location. All the while I'm thinking: if modern application design is so great, why does everyone feel the need to change it all the time?
Yes, it's usability
The above examples and counterexamples are all about usability, as in the ease with which a computer environment lets a user accomplish a specific, predefined task. Identifying basic UI components, pointing at and clicking on them, and being able to quickly locate program features are all crucial and fundamental activities when using software.
Most (though not all) programs with the modern design approach seem to focus on one or a few major functions and hide everything else. I don't know where this idea originates, but a quote from web usability bigwig Jakob Nielsen comes to mind: "There is no such thing as a training class or a manual for a website. People have to be able to grasp the functioning of the site immediately after scanning the home page for a few seconds."
Many contemporary application designers seem to have this quote as their sole tenet, and forget that it was uttered in the year 2000, about shopping sites, when the expression "home page" was still used unironically.
This approach may still be of value in phone apps with similar purposes, such as immediately letting the user get started with "creating" funny AI selfie edits while bombarding them with ads and siphoning off their location data. However, mobile apps designed for leisurely entertainment translate badly into complex desktop applications built for power user productivity. The old desktop design paradigm may not have been perfect, but it did at the very least offer basic, transferable patterns for finding and operating advanced features.
This translation of mobile paradigms to the desktop ends after the first few clicks needed to complete basic tasks. After that, designers/developers (and hence users) no longer have a clearly defined set of rules to adhere and adapt to. Instead, we're treated to various new inventions that differ not only between platforms and applications, but also change constantly and suddenly between versions of the same program.
Nonstandard and Poor
Consider Gnome's human interface guidelines. Their basic principles aren't all bad, but once a program grows more complex, they break down fast. The end result for developers, designers and users alike is conceptual poverty. By that I mean that developers using only Gnome are likely to start losing valuable concepts when thinking about UI - and thus program - design.
Take Blender, for example. The below screenshot was kindly provided by a friend who is a professional graphics artist. Yes, it looks complex, but that's because modern graphics creation is a highly complex process. Blender has a massive feature set and a plethora of parameters that can (and must) be tweaked to create the kind of stunning 3D scenes we've come to expect today.
I honestly can't see how a program like Blender could possibly be created using Gnome's guidelines - or indeed toolkit: certain time-tested UI elements aren't even allowed in Gnome anymore, such as menu bars and hierarchical pull down menus. "Progressive disclosure" and the prevailing interpretation of "navigation structures" means completely replacing certain parts of the interface with others - instead of letting the user decide what's relevant for them to see at any given moment. "Frequently used actions should be close at hand" - but in a program like Blender, frequently used actions vary profoundly with what kind of project is being worked on and what stage that project is in. I find it unlikely that a developer can make such judgement calls better than a user spending tens of thousands of hours in the program during the span of a career. Then again, "Focus on one situation, one type of experience." is rather telling. Using software professionally isn't about having a chic, boutique experience - it's about getting the job done as quickly and efficiently as possible. Sometimes, that means working with irreducible complexity.
This applies to a multitude of other professional software titles used in actually productive work, whether it's photo editing, CAD, software development or corporate management. There have been some efforts to "modernize" the UI of, for example, Excel - but in contrast to Teams, the olden ways are still prevalent in Microsoft's spreadsheet offering. I dare say it's impossible to replace its pull down menus, floating settings windows and other time-tested concepts, because the program is too complex and too powerful to fit into any dumbed-down, modern paradigm. Incidentally, Outlook is perhaps now at a point where it combines the worst of both worlds.
Getting old
I have personally, in some capacity, used Amiga Workbench, Atari GEM/TOS, MacOS Classic (6.x, 7.x, 8.x, 9.x), MacOS X (various versions), Windows (3.1, 95, 98, NT4, 2000, XP, Vista, 7, 8 and 10), SGI's IndigoMagic, Sun's OpenLook, BeOS, CDE, OS/2 Warp, NeXTStep, RiscOS, Gnome (1, 2, 3), KDE (various versions), Plan 9 and probably a handful more. I've used computers for 35 years and worked as a software developer for a quarter century. I've used a wide variety of software packages for photo editing, image creation, 3D graphics, spreadsheets, word processing, text editing, composing music, sound editing, desktop publishing, online communication and software development - to name a few. In short, I think it's safe to say that I have some experience with user interfaces and experimenting my way around systems and programs.
With the exception of Plan 9 and RiscOS, all of those systems, and a majority of the applications running on them, were instantly recognizable and usable for me up until (roughly) the release of Gnome 3 and Windows 8. Of course they each had their own quirks and idiosyncrasies, but the mental model I had built when using one system was easily translatable to all the other ones.
I could swiftly accomplish basic tasks in programs on the various platforms, including management of the programs themselves, such as determining which window was focused, what was a button and not, how to find advanced features, how to learn keyboard shortcuts, etc. Fundamental functions had fixed homes (Save and Open under File, for example) and the way to access them was sufficiently similar. This was efficient (learn one concept, apply it everywhere), easy to use (observe one system/program, operate all of them) and thus satisfying: my skills are transferable!
Today, I struggle with a lot of applications in very basic ways. In some incarnation of Outlook for iOS, for example, I couldn't figure out how to compose a new mail without scrutinizing every single element on the screen meticulously. The similar basic task of creating a new ticket in Jira once had me taking a long, hard survey of the entire screen before I figured out where the relevant button was - and that it was, in fact, a button. These are core functions of both applications, and someone with my background struggling to find them isn't exactly a testament to an overall improvement in usability.
Being able to quickly discern window focus isn't a mere aesthetic preference. When I was running Windows 10 on a multi-screen setup, I often came back to my computer from lunch or a meeting and started typing - but no text appeared where I expected it to. I then had to hunt across screens to locate the window which did in fact have focus. This ventures way past "ease of task accomplishment" and well into "safety" territory: accidentally spreading sensitive information to the wrong audience, for example.
Yes, these are both anecdotal accounts - but the solutions to both of these problems have been known and implemented for a very long time. These existing solutions were then removed on mere whims, and replacements have yet to materialize. This is not progress - it is, at best, a regression to the early days of experimental GUI prototypes at Xerox PARC.
Finally
I understand it's tempting to dismiss my views as those of some old codger unable to get with the times. In some ways, I freely admit that's an accurate assessment - but is that really an argument for the current UI paradigm?
Surely - surely - the point of all the alleged usability research being carried out today isn't to make experienced power users feel downright stupid. Surely - surely - the goal of usability shouldn't be to rob such users of time-tested, well-researched, efficient, effective, safe and satisfying ways to do things.
And surely - surely - if the modern UI paradigm is in fact well-established, well-researched and efficient, UIs wouldn't change so damn much all the time.