Every piece of software has its limits. There’s only so much an application or a game or a website can do. You may wish you could double-jump in Halo. You can’t. You may wish you could use a lasso tool to select icons on your desktop. You can’t.
When a user wants an application to do something it can’t, it’s the job of the user interface to tell the user “No.” But there are right and wrong ways for software to say “No.” And when software says it the wrong way, users stop using the software.
When I’m working out an interface (in my own work it’s usually for games), I find it helpful to break down the ways of saying “no” into four categories. Here they are in order from irritating to ideal.
Rejection

When an interface responds with rejection, it reacts to user input with an error message. The user has done something wrong. You tell them so and ask them to click (or tap) “OK.” This is easily the worst way to respond to user input, yet we encounter these interfaces every day.
One of the most horrific examples of this behavior is found in Windows XP. Windows teaches the user that they can drag and drop a document icon onto anything. Whatever they drop it on may respond or not, but the act of dropping is always valid. So drag a document onto an application and the app will try to open it. Drag a file into a folder and the file will move there. You know the drill.
Taskbar items in Windows XP violently break this rule. If you see “Internet Explorer” in your taskbar and you try to drop a file on it—a web shortcut or HTML document—Windows responds with rejection. Instead of passing the document to the application, instead of graying-out the taskbar so it doesn’t look like a valid drop target, instead of hiding the taskbar so that dropping is impossible, Windows sends up a modal dialog box. “You can’t do that!” Windows pronounces. To which you reply, “But I just did!” and both of you feel hurt and insulted.
This is the worst way to respond to prohibited input. Rejection should be avoided wherever possible—and avoiding it is virtually always possible. You’d think that kicking the user in the knees for simply doing what’s obvious would have fallen out of favor in UI design by now. Yet rejection is still commonplace.
The WordPress iPhone App rejected me this morning. I tapped “Edit” while the app was contacting the server. I got an alert saying, “You can’t do that right now.” I thought, “Then why did you dangle that button in my face? Why didn’t you gray it out or something? Why did you let me do something dumb so that now we’re wasting time having this conversation?”
Rejection stinks. Don’t do this to your users. Especially since there are much better options.
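The WordPress complaint above boils down to one rule: derive a control’s enabled state from the same application state that would otherwise trigger the alert. Here is a minimal sketch of that idea; the names (`AppState`, `editButtonEnabled`, and so on) are hypothetical, not anything from the WordPress app:

```typescript
// Hypothetical sketch: compute whether a control is usable from app
// state, instead of accepting the tap and then rejecting it.

type AppState = { syncing: boolean };

// Rejection: the tap is delivered, then the user gets scolded.
function onEditTapRejecting(state: AppState): string {
  if (state.syncing) return "alert: You can't do that right now.";
  return "open editor";
}

// Prevention: the button's enabled flag comes from the same state,
// so the doomed tap can never be delivered in the first place.
function editButtonEnabled(state: AppState): boolean {
  return !state.syncing;
}
```

The point is that the `if` test exists either way; the only question is whether it runs before the user acts (prevention) or after (rejection).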
Prevention

Prevention is the proverbial “graying out” of those items that can’t be used right now. It can also take other forms, like changing the appearance of the mouse cursor or showing a red X when a dragged item has no target. Prevention does the same job as rejection, only better. It’s a softer “no” that advises the user not to attempt an action but doesn’t punish the user if she does.
Windows Vista fixed XP’s bad taskbar behavior by switching to a prevention. A red “don’t do it!” sign appears next to an icon when you drag it over the taskbar. (The problem that remains in Vista is that the “don’t do it!” sign tends to get clipped off the bottom of the screen. Oops.)
An interesting example of prevention is the iPhone’s tendency to “push back” when you drag a page too far off the screen. In the Weather app, for instance, if you drag the leftmost panel to the right, blackness appears to the left of the panel—nothing’s there! Apple could have chosen to stop you cold with a rock-hard prevention or a warning message (“ERROR: There are no further weather panels in that direction. Tap OK to continue.”). Instead the iPhone gently pushes back. This sort of “soft prevention” is one of the first things users notice about the iPhone and it’s part of what makes the device feel so friendly.
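That “soft prevention” can be sketched as a damping function: inside the legal range the content tracks the finger 1:1, and past the edge each extra pixel of drag moves the content less and less, approaching a hard ceiling. This is only a toy curve with an arbitrary constant, not Apple’s actual implementation:

```typescript
// Toy rubber-band: 1:1 tracking in the legal range, asymptotically
// damped movement past the edge. The 0.55 constant is an arbitrary
// "stiffness" choice for this sketch.
function rubberBand(dragDistance: number, limit: number): number {
  if (dragDistance <= limit) return dragDistance; // legal range: track exactly
  const overshoot = dragDistance - limit;
  // Damped term grows toward `limit` but never reaches it, so the
  // content can never be dragged more than 2 * limit from the origin.
  return limit + limit * (1 - 1 / ((overshoot * 0.55) / limit + 1));
}
```

On release, the content animates from the damped position back to the nearest legal one, which is the visible “push back.”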
Evasion

If rejection allows the user to do something dumb and prevention warns the user against doing something dumb, evasion makes it impossible for the user to do something dumb. That’s because evasion takes the tools that are necessary to do something dumb out of the user’s hands.
Imagine, for example, a Trash Can icon on an operating system desktop. Suppose the user tries to drag a read-only, non-deletable file onto the Trash Can—a prohibited operation. With rejection we wait for the user to drop the item, then shout at her about it. With prevention we gray-out the Trash Can (or shut tight the lid), and if she insists on dropping the file we just bounce its icon back where it started.
But with evasion, we not only close the Trash Can lid, we remove the Trash Can from the desktop as soon as the user starts dragging a read-only file. Now the user simply cannot make the mistake of dropping the file in the trash. It’s impossible. That user interface option is removed for the moment.
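One way to think about this evasion in code: when a drag begins, the set of visible drop targets is rebuilt from what’s being dragged, so invalid targets simply aren’t there. A minimal sketch, with invented types (`FileItem`, `Target`):

```typescript
// Sketch of evasion: the drop targets that exist are a function of
// the thing being dragged. An illegal target never appears at all.

type FileItem = { name: string; readOnly: boolean };
type Target = "Folder" | "TrashCan";

function visibleDropTargets(dragged: FileItem): Target[] {
  const targets: Target[] = ["Folder"];
  // The Trash Can only exists, for the duration of this drag,
  // if deleting the dragged file is legal.
  if (!dragged.readOnly) targets.push("TrashCan");
  return targets;
}
```

Contrast with prevention, where the Trash Can would stay on screen but be flagged unusable; here there is nothing to flag.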
The advantage of evasion over prevention is that it helps the user feel smart. The user interface never really says “No.” Rather, it reshapes the environment temporarily so that the user can’t even begin to ask. In most situations this makes the user feel better because now, whatever the user does, the action will be accepted. Everything the user does is correct.
Half-Life 2 used evasion to answer the age-old problem of how to stop the player from shooting his friends. In prior first-person shooters, if you had a gun and a friend was in the room, you could aim the gun at the friend and pop him in the ear. Some games would allow you to kill your friends but would mark the mission as failed (rejection). Others would simply prevent damage to the friends you shot—the game covered up your action by pretending you hadn’t done anything naughty. Not very realistic or believable.
What Half-Life 2 did was to make your gun unable to fire when it was aimed at a friend. This is an evasion—making a prohibited action impossible. You could click the fire button all you wanted but your gun was not going off. The game showed that firing was disabled by pointing the gun downward. It was as if your character knew better than to point his gun at people, and he automatically dipped it when it might endanger someone. This is a beautiful evasion: consistent with the world, expressive to the player, and effective in denying bad behavior.
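Stripped to its logic, the trick is that the fire action is gated on the current aim target, and the weapon’s pose doubles as feedback for that gate. This is a toy model of the idea, not Valve’s code:

```typescript
// Toy model of the Half-Life 2 evasion: firing is impossible while
// aiming at a friendly, and the lowered gun communicates why.

type Actor = { friendly: boolean };

function canFire(aimTarget: Actor | null): boolean {
  // Aiming at nothing, or at a hostile, permits firing.
  return !(aimTarget && aimTarget.friendly);
}

// The pose is derived from the same gate, so the player sees the
// "no" before ever pressing the fire button.
function weaponPose(aimTarget: Actor | null): "raised" | "lowered" {
  return canFire(aimTarget) ? "raised" : "lowered";
}
```

Because the pose and the gate share one predicate, the visual feedback can never disagree with the rule it expresses.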
Misdirection

The ultimate way to say “no” is to make sure the user never even thinks to ask the question. Of all these strategies, misdirection is the most difficult to use. It digs deepest into the whole philosophy of your interface. But it gives the best user experience because what we said about evasion is even more true for misdirection. If evasion takes away the user’s ability to ask a dumb question, misdirection prevents the user from thinking to ask the question in the first place.
Have you ever tried opening the trunk of a car in Grand Theft Auto IV? No? Why not? Because you’ve never wanted to, that’s why. There’s nothing you could do with a trunk if you managed to get it open. Sure, maybe you could store things there. If you could open the back of a van you could probably store a motorcycle. But that’s just not the way the game works. The whole game is structured around a different model of storage and a different model of interacting with cars. So you never tried opening a trunk and the game never told you “no.” That’s misdirection.
You know the silver “grill” at the bottom of the iPhone Springboard interface? The one where you put your four favorite apps? Have you ever tried dragging that grill to another part of the iPhone screen? After all, you can drag the Windows taskbar or the Mac OS X Dock to the four sides of the screen. Why not the iPhone grill? Maybe because it doesn’t look movable? Maybe because the bottom of the screen seems a pretty sensible place for it?
That’s misdirection: forming the interface in such a way that illegal actions simply never come to mind.
The Total Affirmation Interface
The best way to think about how to say “no” to the user is to think about it in reverse. An ideal interface always says “yes” to any user input. The user can do no wrong.
The way to achieve this goal is to put into the user’s hands only those UI elements that are always happy no matter how you use them. In their visual representation, in their physical response, in their interaction with each other, every element is always in a legal state and always responds either in the way the user expects or, at worst, in a way that the user can quickly get used to.
I’ll hold up the iPhone Springboard interface once more as a model of this philosophy. It’s a “grill” with four icons plus a series of pages—at least two. Each page contains at most sixteen icons in a 4×4 grid. The icons are uniform in size, grid-aligned (unless easing between positions), and always left-justified and non-sparse (like English text without spaces). When dragged, an icon can move freely but can never be dropped onto another icon. This rule is enforced through an evasion: the icons always make space for each other, like polite Englishmen in a long queue. If you have the audacity to drag an icon off the edge of the screen, Springboard takes you to a new page. But what about the top of the screen? As you drag there, the icon’s reserved place in the queue is held open by the other icons, so you know at all times that if you release the drag, the icon will slide back into a legal place.
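The “polite queue” amounts to a reflow: dragging an icon over another slot doesn’t replace what’s there; the whole list reorders so every icon always occupies a legal, non-sparse, left-justified slot. A minimal sketch of that reorder, with an invented `reflow` helper:

```typescript
// Sketch of the Springboard queue: moving an icon from one slot to
// another shifts the icons in between, leaving no holes and no
// overlaps at any moment.
function reflow<T>(icons: T[], from: number, to: number): T[] {
  const next = icons.slice();          // don't mutate the caller's list
  const [moved] = next.splice(from, 1); // lift the dragged icon out
  next.splice(to, 0, moved);            // the others shift to make room
  return next;                          // still dense, still ordered
}
```

Run on every drag-over event, this keeps the interface in a state where releasing the drag is always legal—there is no invalid drop for the user to attempt.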
You can do no wrong, so Springboard never tells you “no.”