The Decline of Usability
In which we delve into the world of user interface design.
There was a time (roughly between 1994 and 2012) when a reasonably computer-literate user could sit down in front of almost any operating system and quickly get to grips with the GUI, no matter what their home base was. Windows, MacOS, CDE, OpenStep, OS/2 and even outliers like Amiga, Atari and BeOS all had more in common than what set them apart. All windows had a title bar for dragging, for easy identification and for displaying the current input focus. Clearly labeled drop-down menus following a set standard (File, Edit, View, Help etc.) made it easy for a newcomer to an application to browse program features and learn keyboard shortcuts. Buttons, input fields and other widgets were clearly identifiable through various visual cues, such as 3D bevels.
A few rogue applications didn't play by the rules, but the vast majority of software did, at least in the fundamental places that really mattered.
Today, it seems we're on another track completely. Despite being endlessly fawned over by an army of professionals, Usability, or as it used to be called, "User Friendliness", is steadily declining. During the last ten years or so, adhering to basic standard concepts seems to have fallen out of fashion. On comparatively new platforms such as smartphones, this is inevitable: the input mechanisms and interactions with the display are so different from desktop computers that new paradigms are warranted.
Worryingly, these paradigms have begun spreading to the desktop, where keyboards for fast typing and pixel-precise mice effectively render them pointless. Coupled with the flat design trend, UI elements keep growing bigger, yet somehow also harder to locate and to tell apart from non-interactive decorations and content.
Overall, designers of desktop applications seem to have forgotten that a desktop computer is capable of displaying several applications and windows at the same time, and that many users are accustomed to working this way. Instead, we're increasingly treated to small-screen, single-app paradigms copied from smartphones. That's a turn for the worse in its own right, but perhaps more troubling and annoying are the recurring departures from tried-and-true UI conventions so ingrained in many users that they're practically muscle memory by now.
Examples to prove a point
Consider these title bars of a few popular Windows 10 applications:
The image above is composed of screenshots taken on the same computer within the span of a few minutes. No settings were changed between shots. Immediately, a plethora of UI problems becomes apparent.
Can you even tell how many windows there are? The answer is six - although, stacked like this, the top three could pass for a single application and the bottom two for another.
All of these title bars denote active windows. The top one, Outlook, looks exactly the same when inactive, as does Slack. Except for the command prompt (cmd.exe), the change between active and inactive on the remaining windows is so subtle that when aligned next to each other, it's virtually impossible to determine which one has the input focus.
Almost all of the title bars contain some kind of UI widget. Some have little tool icons, some have tabs, some have drop-down menus, some have combinations thereof. There is no set behavior and, more importantly, the clickable area for traditional operations (move, focus, raise) on each title bar is now of variable width. If you're accustomed to a title bar being for handling the window and nothing else, it's very easy to misclick and activate an application feature you didn't intend to. Oh, and the little Visual Studio Code logo on the second title bar from the top? Nope, not an icon. Purely decorative.
What's perhaps most jarring is that four of the six applications are made by Microsoft themselves, setting the standard for this kind of erratic design. We can already see the effects: the takeover of window title bars seems to get worse with time. Consider the latest version of Slack (click for a larger image):
Since Windows 2 (not 2000 - I really do mean Windows 2), users have been able to resize windows by dragging their top border and corners. Not so in Slack anymore. The red lines in the image mark the remaining hotspots available for resizing; the blue lines mark the remaining hotspots available for moving the window. The rest is taken up by a non-standard concoction of widgets that most users will soon bypass with keyboard shortcuts anyway, and that could easily have been housed in a more traditional UI.
Instead, some usability whiz kid at some point decided to completely nullify the single most fundamental way of managing a window - in an application that mostly runs on platforms where stacking, floating window management is not only the norm but pretty much the only available option.
(Update 2020-04-22: It seems that Slack today pushed an update of their desktop version that dims the title bar slightly when the window is inactive.)
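Under the hood, both problems - the variable-width drag area and Slack's vanishing resize borders - come down to the same mechanism: a custom-drawn, "frameless" window answers the window system's hit-test queries itself, and every pixel it claims for widgets is a pixel lost to moving and resizing. Here's a minimal Win32 sketch of the idea (this is not Slack's actual code, and the geometry values are purely illustrative):

```cpp
#include <windows.h>
#include <windowsx.h>  // GET_X_LPARAM / GET_Y_LPARAM

// Illustrative geometry for a custom-drawn ("frameless") window.
const int BORDER    = 6;   // resize border thickness, in pixels
const int CAPTION_H = 32;  // height of the custom title bar
const int WIDGETS_W = 400; // title bar width claimed by widgets

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_NCHITTEST) {
        // The window system asks: "what is at this screen position?"
        POINT pt = { GET_X_LPARAM(lp), GET_Y_LPARAM(lp) };
        ScreenToClient(hwnd, &pt);
        RECT rc;
        GetClientRect(hwnd, &rc);

        // Top border and corners: the classic resize hotspots.
        if (pt.y < BORDER)
            return pt.x < BORDER            ? HTTOPLEFT
                 : pt.x > rc.right - BORDER ? HTTOPRIGHT
                                            : HTTOP;

        // Title bar strip: only the part NOT claimed by widgets
        // reports HTCAPTION and can be used for dragging the window.
        if (pt.y < CAPTION_H)
            return pt.x < rc.right - WIDGETS_W ? HTCAPTION : HTCLIENT;

        return HTCLIENT;  // everything else is ordinary client area
    }
    if (msg == WM_DESTROY) {
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcA(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nShow)
{
    WNDCLASSA wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = "HitTestDemo";
    RegisterClassA(&wc);

    // WS_POPUP: no standard frame, so the hit test above is all there is.
    HWND hwnd = CreateWindowA("HitTestDemo", "Demo", WS_POPUP | WS_VISIBLE,
                              100, 100, 800, 600, NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, nShow);

    MSG msg;
    while (GetMessageA(&msg, NULL, 0, 0))
        DispatchMessageA(&msg);
    return 0;
}
```

Nothing stops an application from shrinking the HTCAPTION region to a sliver, or from skipping the border codes entirely - which is exactly what we're seeing.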
Microsoft and Slack aren't the only culprits. Google, for example, have gotten increasingly into A/B testing of late, and their Chrome browser now features this type of tooltip when hovering over tabs:
Usually, a browser tab displays a small, floating tooltip after being hovered for a moment. This massive monstrosity pops up without delay and covers a large area of the underlying UI. The usefulness of browser tab tooltips is debatable in itself, but this one is without doubt both pointless and distracting.
Google aren't the only ones capable of producing distracting UIs, though. The newly released Firefox version 75 features what has become known as "the megabar":
This new take on the URL bar pops up when you least expect it, is very hard to get rid of and, as a bonus, covers the tried, tested and highly usable bookmarks toolbar below it. Just like widgets in title bars, this breaks the behavior of a UI concept in such a major way it's hard to know where to begin: text input fields are ubiquitous, ancient, and their basic concept has been the same since at least the early 1980s.
Another blow against recognizability and usability is harder to capture in a screenshot: auto-hiding scroll bars. On a smartphone, they're a great invention, because they free up real estate on a small display, and you've usually got your thumb resting close to the screen, ready to do a test scroll in your full-screen app to see if more content is available.
On the desktop, however, a scroll bar is very useful for determining your current position in the content without having to break away from what you're presently doing and reach for the mouse. It's also useful for doing the same in windows that are currently not in focus. For example, in a tailing log file reader or command prompt with a debug stream, you might be interested in knowing if you're looking at the latest output or not. With auto-hiding scroll bars, this becomes much harder and you have to resort to other, often less apparent or more cumbersome methods.
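To be fair, at least some desktop toolkits let an application opt out of the auto-hiding behavior. A minimal Qt 5 sketch (the widget choice is just for illustration) that keeps a scroll bar permanently visible:

```cpp
#include <QApplication>
#include <QTextEdit>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QTextEdit log;  // stand-in for any scrollable view, e.g. a log tail
    // Opt out of auto-hiding: the bar stays put, so your position in
    // the content can be read at a glance, even in an unfocused window.
    log.setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOn);
    log.show();

    return app.exec();
}
```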
In lieu of screenshots of hidden scroll bars, let's look at how Qt 5 renders them by default:
Which part of this is the bar and which part is the tray? By now I know that the slightly brighter part is the bar, yet I frequently misclick, because the flat design makes them so hard to tell apart. Worse still is the infinitesimal contrast, so low that on older, cheaper laptop screens it's downright impossible to tell bar from tray. New users probably don't know that, with the right tools, Qt 5 can be configured to sport a more traditional look, so the default look should be geared towards them. Those intent on customizing the appearance of their personal desktop will usually find a way to do so anyway.
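The lack of contrast, at least, is something an application (or a determined user) can fix. A minimal Qt 5 sketch using a style sheet - the colors are illustrative, the point being a clearly darker bar on a clearly lighter tray:

```cpp
#include <QApplication>
#include <QTextEdit>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Illustrative colors: a dark bar on a light tray, so the two can
    // be told apart even on a washed-out laptop panel.
    app.setStyleSheet(
        "QScrollBar:vertical         { background: #d4d4d4; width: 16px; }"
        "QScrollBar::handle:vertical { background: #5a5a5a; min-height: 24px; }");

    QTextEdit view;
    view.show();
    return app.exec();
}
```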
Missing Menu Bars
Another apparently unfashionable UI standard is the menu bar. It used to be a lowest common denominator between platforms and, where it still survives, it works basically the same on Windows, Mac and Unix-likes. For the most part, it even keeps the traditional "File, Edit, View" approach to things. The Gnome designers, however, have decided that such menus are a bad feature that should probably never have been used in the first place. To rectify more than three decades of such folly, they have created... something I'm not sure what to call.
One of the tricks up their sleeve is the hamburger menu. On smartphones, it's a great feature, but on the desktop it's unnecessary: if there's anything we have on today's wide-screen displays, it's horizontal space. In Gnome, it seems to be a catch-all for UI operations that didn't end up somewhere else. Like in Evince:
Open, Save, Print, Close. All of them reasonable tasks, except there's no adherence to standards. In Gnome-MPV, the hamburger menu looks like this:
No Open or Close here, you silly user! What did you expect? Some kind of coherent thought? If you want to open a file, just click the little icon to the left featuring a plus sign:
There's also a button with the Gnome-MPV icon on it. One might assume this button would contain features specific to Gnome-MPV, such as the ones found in the hamburger menu, but no. Instead it looks like this, containing options for preferences and quitting:
In Evince, the button featuring an Evince icon produces this menu:
Bummer! In Evince, you clearly have to look somewhere else for in-app preferences and a quit option: things are wildly inconsistent between applications, creating confusion and frustration for users. I also can't find a way to navigate these menus with the keyboard once they're open - unlike normal drop-down menus and, for that matter, unlike other, similar hamburger menus.
There are so many more examples in just these two Gnome applications that it borders on parody. For example, both are designed for Gnome's new paradigm of incorporating toolbars into the window title bar, institutionalizing the crimes of the Windows applications mentioned above. The difference, of course, is that if you're running another window manager, it just looks silly - for example leaving you with two close gadgets. It also means that to keep a reasonably large area free for moving the window (at least that's better than Slack!), widgets that could previously have been fitted into a toolbar below the title bar now need to be opened separately, such as this search box in Evince (click for a larger image):
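To make the two-close-gadgets problem concrete, here is roughly how the paradigm works at the toolkit level - a minimal sketch assuming gtkmm 3, not the actual Evince code. The application builds its own "title bar", close gadget included, and asks GTK to substitute it for the real one; a window manager that still draws server-side decorations doesn't know this happened and cheerfully adds a second close gadget:

```cpp
#include <gtkmm.h>

int main(int argc, char *argv[])
{
    auto app = Gtk::Application::create(argc, argv, "org.example.headerbar");

    Gtk::Window window;
    window.set_default_size(400, 200);

    // The application-drawn "title bar": title, widgets and a
    // client-side close gadget all live here.
    Gtk::HeaderBar header;
    header.set_title("Header bar demo");
    header.set_show_close_button(true);

    // Replace the window manager's title bar with our own widget. A WM
    // that still draws its own decorations will frame this with a
    // second close gadget of its own.
    window.set_titlebar(header);
    header.show();

    return app->run(window);
}
```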
These are just a few examples of crimes against the basic concepts of desktop UI design. There are plenty more, and they're present on all platforms. They also seem to be growing increasingly common: the days of coherence and industry standards appear to be over. I hope that with this little rant, I can plant a seed to bring them back. If not, it was at least good to blow off some steam.
I'm going to end by discussing some counterarguments I've come across when debating modern UI design on various online forums:
Technology is progressing! You can't stop change! Just deal with it!
These and similar truisms and platitudes are commonly trotted out when no real argument is available. It's people like you and me who decide to change UI design, not an unstoppable force of nature. Changing things doesn't necessarily mean improving them, and it's improvement we should strive for; otherwise change is pointless.
You're living in the past!
Considering the apparent anachronisms in the above screenshots, I can't argue with the fact that I am. However, that doesn't automatically mean I'm wrong. It also doesn't mean I think all modern desktop environments should look like Windows 95 or CDE. There are other roads forward and other ways to improve the look and feel of UIs without breaking fundamental concepts.
Electron apps can't follow a single platform's standard!
Multi-platform applications will of course never blend into the host environment as fully as native ones do. But that makes it all the more important that they at least adhere to the paradigms that translate between all the common target platforms of today. Drop-down menus, clean title bars and a clear indication of window focus aren't hard to implement, even if they don't look exactly like their surroundings. In fact, a multi-platform framework should make it easy for developers to implement these concepts and hard, if not impossible, to work around them.
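Qt, to pick one long-standing multi-platform framework, shows this is perfectly doable - a minimal sketch (assuming Qt 5.6 or later) of a classic drop-down menu that renders natively on Windows, Mac and Unix-likes alike:

```cpp
#include <QApplication>
#include <QMainWindow>
#include <QMenuBar>
#include <QMenu>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QMainWindow window;

    // A traditional File menu with standard, discoverable shortcuts.
    QMenu *file = window.menuBar()->addMenu("&File");
    file->addAction("&Open...", []    { /* open a document */ },
                    QKeySequence::Open);
    file->addAction("&Quit",    [&app] { app.quit(); },
                    QKeySequence::Quit);

    window.show();
    return app.exec();
}
```

A dozen lines, and newcomers get browsable program features and discoverable keyboard shortcuts on every platform.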
We shouldn't complain about free software! It's free!
Yes. Yes we should. Don't get me wrong - I have a massive amount of respect and admiration for the time, skill and effort people put into FOSS projects. They are improving my personal quality of life significantly and for that I'm very grateful.
It's true that Gnome and KDE are FOSS, which is a thing of wonder. But they are also large enough to, just like Microsoft and Google, have a significant impact not only on normal end users but on aspiring designers and programmers as well. We should be able to share our views and discuss what that impact might result in.
In short: Anyone setting an example should also be held to a standard.
Putting things in the title bar saves screen real estate!
This is true to some extent, but screen real estate in general isn't much of a problem anymore. Had this been done in the days of 640x480 VGA, it might have been a viable argument. Today, anyone who works enough with computers to worry about a few extra pixels can buy a screen the size of a small TV with a 2560x1440 resolution for around US$200. Even the cheapest of laptops come with at least a 1366x768 resolution - at just over a million pixels, on par with the famed megapixel displays of yesteryear's professional workstations, coveted precisely for their generous amount of screen real estate.
If anything, the problem with screen real estate comes from the current trend of UI design with so much white space between elements that what used to be a simple requester is now a full-screen application, as evident in this example (click for larger image):
For those spending all their workdays coding on a 13" laptop, my tip is to stop worrying about screen real estate and start worrying about your back, neck, hands and shoulders a bit more. Trust me, RSI is a serious thing.
Designing UIs is hard and application software can't please everyone all the time!
This is true and, as a software developer of more than 20 years, I have a huge amount of respect for the complexity of UI design. I also happen to know that such complexity is not a valid excuse for willingly and knowingly breaking UI concepts that have been proven to work for, in some cases, more than four decades. In fact, many of the examples above introduce more complexity for the user to cope with: the intricacies of each application and window decoration must be learned separately, and time and energy are spent repeatedly parsing the differences.
What about Apple?
I can't comment on the current state of MacOS, since the time I've spent actually using a Mac during the last eight years or so probably totals a few hours. Apple used to be good at this, and I hear they still do a decent job of keeping things sane, even post-Jobs.
You're old and angry!
You bet! Now get off my lawn, punk.