Object-Oriented GUIs Are the Future
Do you like futuristic prototypes and imaginings of human-computer interfaces such as those sampled on the wonderful site HUDs and GUIs? Be sure to pull down the Categories menu there, and check out the clips from films such as Minority Report. Another excellent site is Sci-fi Interfaces.
Two tantalizing aspects of design illustrated in those clips are embedded and ubiquitous computer hardware and software: computers whose functions manifest not in objects specialized to be "computers," often not even in objects specialized to be "tools," but in objects that already exist for everyday purposes. A common example is a car whose functions are enhanced with computers. More futuristic examples include a door that guesses when a person wants it to open and close, as shown in Star Trek.... Oh, right, that's been in my grocery store for decades. But in the future they will be foolproof. More futuristic examples include pieces of paper that enhance, store, and retrieve what I draw. Embedded and ubiquitous computing do not just push the computer software application into the background of the users' awareness; they push the computer itself into the woodwork.
We need not wait for embedded and ubiquitous computers before we can get a substantial portion of their benefits by making human-computer interfaces object oriented. "Object oriented" in this sense has nothing to do with whether object-oriented programming is used. Instead, it means that the user interface as perceived by the user is oriented to the users' domain objects rather than to the computer software applications.
Look carefully at those clips on the HUDs and GUIs site, and notice that even when users are interacting with an overt "computer," rarely are they focusing on "applications" (i.e., "programs"). Watch those clips again, and notice which "things" the users are seeing and manipulating. Those things most often are objects in the users' domains of work or play, rather than the tools (computer applications) that they use to deal with those domain objects. This is most apparent in direct-manipulation gestural interfaces. The users are grabbing, rotating, and flinging people, diagrams, buildings, and spacecraft. Sure, when they fling a "person" from one computer screen to another, they are not flinging the actual person, but the person is what they think of themselves as flinging; they are not thinking of flinging that "window" to another screen. They do not update the last known location of the evildoer by opening a spreadsheet, finding that evildoer's name in the first column, looking across that row to find the Location column, then typing to change that row-column intersection's cell from one building's name to another. Instead they simply drag whatever representation of the evildoer they happen to have in front of them (list row, window, picture, icon, ...) onto whatever representation of the new building is handy (satellite image, schematic, list row, ...). All representations of both the evildoer and the building instantly change to reflect the new association of those objects (evildoer to building). The users focus on the objects of their domain: the objects in their mental models while they are doing their tasks to meet their goals in their situation of use.
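To make that concrete, here is a rough sketch in TypeScript of the kind of structure that makes it possible: one shared domain object, with many representations observing it. The names here (Person, Building, setLocation, the view classes) are hypothetical and not any particular product's API; the point is only that the drop gesture changes the association on the object itself, and the object then tells every one of its representations to refresh.

```typescript
// Hypothetical sketch: one shared domain object, many synchronized representations.
// A drop gesture changes the association on the model, not on any one view.

interface View {
  refresh(): void; // re-render from the current model state
}

class DomainObject {
  private views = new Set<View>();
  constructor(public readonly name: string) {}

  attach(view: View): void {
    this.views.add(view);
  }

  protected notify(): void {
    this.views.forEach(v => v.refresh());
  }
}

class Building extends DomainObject {}

class Person extends DomainObject {
  private location?: Building;

  // Called by ANY representation (list row, icon, picture) that receives a drop.
  setLocation(building: Building): void {
    this.location = building;
    this.notify(); // every representation of this person updates at once
  }

  describe(): string {
    return `${this.name} @ ${this.location?.name ?? "unknown"}`;
  }
}

// Two different representations of the same person.
class ListRowView implements View {
  constructor(private person: Person) { person.attach(this); }
  refresh(): void { console.log(`[list] ${this.person.describe()}`); }
}

class MapIconView implements View {
  constructor(private person: Person) { person.attach(this); }
  refresh(): void { console.log(`[map]  ${this.person.describe()}`); }
}

// Usage: dragging the evildoer onto a building updates both views at once.
const evildoer = new Person("Evildoer");
new ListRowView(evildoer);
new MapIconView(evildoer);
evildoer.setLocation(new Building("Warehouse 13"));
```

The essential design choice is that the representations hold no state of their own about the association; they only render whatever the domain object currently says.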
Similarly, users simply flip views of the same manifestation of an object, from graphical to alphanumeric to pictorial to audio, rather than searching again for that same object in each of several different applications, each devoted to only one such type of view. The heroine is looking at a live satellite image of the building, then flips that exact same portion of the screen to show a schematic view of the building, as if putting on x-ray glasses. She does not fire up the separate "Schematic Viewer" application and, once there, hunt down that building. She does not even drag that building's image from the "Satellite Viewer" application to the "Schematic Viewer" application. She merely flips views as if putting on a different pair of virtual eyeglasses, the whole time keeping her vision and therefore her attention fixed on that building in front of her eyes—the domain object of her attention.
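Structurally, "flipping views" amounts to something like the following hedged sketch (again TypeScript; the BuildingData fields and renderers are made up for illustration). The pane keeps hold of the same domain object and merely swaps which renderer it uses, so the user never re-finds the object in another application.

```typescript
// Hypothetical sketch: flipping views of one object in place.
// The screen region and the domain object stay fixed; only the "lens" changes.

interface BuildingData {
  name: string;
  satelliteImageUrl: string;
  schematicSvg: string;
}

type ViewKind = "satellite" | "schematic";

// One renderer per view kind, all drawing the SAME object.
const renderers: Record<ViewKind, (b: BuildingData) => string> = {
  satellite: b => `<img src="${b.satelliteImageUrl}" alt="${b.name}">`,
  schematic: b => `<figure>${b.schematicSvg}<figcaption>${b.name}</figcaption></figure>`,
};

class ObjectPane {
  private kind: ViewKind = "satellite";
  constructor(private readonly building: BuildingData) {}

  // "Putting on x-ray glasses": same pane, same object, different lens.
  flipTo(kind: ViewKind): string {
    this.kind = kind;
    return renderers[this.kind](this.building);
  }
}

// Usage: the user never leaves the pane or hunts the building down again.
const pane = new ObjectPane({
  name: "Target HQ",
  satelliteImageUrl: "https://example.com/sat/hq.png",
  schematicSvg: "<svg><!-- floor plan --></svg>",
});
console.log(pane.flipTo("schematic"));
```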
In short, all those futuristic scenarios have the users' mental models of their task domain objects thrown onto the computer displays in front of them. In the futuristic world of embedded and ubiquitous computers, "computers" have no natural place in the users' mental models of their task domain, so the computers have no place as recognizable, concrete, discrete, distracting entities in the users' physical environment. Likewise, the computer software applications have no natural place in the users' mental models of their task domain, so the computer applications have no place as recognizable, concrete, discrete, distracting entities on the displays.
Back here in the present, most users are stuck not only with physically discrete, intrusive, distracting, attention-demanding, time-consuming computers; they are also stuck with multiple, similarly disadvantageous, discrete computer software applications. Most graphical user interfaces (GUIs) are application-oriented. But users need not be so burdened, even today! We have the technology to drastically reduce, if not eliminate, the notion of the computer application from the user's experience, so that users can deal with only a single application and, within it, focus purely on the objects from their mental model of the task domain. One easy way to implement object-oriented GUIs was to use NASA's Mission Control Technologies (MCT) open source software platform (the original desktop version, not the completely different web-browser-based application that was renamed to capitalize on the legacy of the desktop version). You could simply build plugins to provide additional types of objects, views of objects, and actions on objects, and users would see all of those appear within the same MCT application. It was general purpose, not restricted to space mission operations, and it was open source! Here is a three-minute video demonstrating the MCT user interface, and here is a ten-minute video explaining the underlying platform.
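To illustrate the plugin idea without pretending to document MCT itself (the interfaces below are invented for this sketch and are not the real MCT API), here is roughly what such a contract could look like: plugins contribute object types, views of objects, and actions on objects, and the one platform application composes whatever has been registered.

```typescript
// Illustrative only: NOT the real MCT API. A sketch of the kind of plugin
// contract an object-oriented platform could expose.

interface DomainObjectType {
  id: string;                       // e.g. "telemetry-channel" (hypothetical)
  displayName: string;
}

interface ViewContribution {
  appliesTo: string;                // object type id this view can render
  name: string;                     // e.g. "Plot", "Table"
  render(object: unknown): string;  // returns markup for the platform to show
}

interface ActionContribution {
  appliesTo: string;
  name: string;                     // e.g. "Export", "Annotate"
  perform(object: unknown): void;
}

interface Plugin {
  objectTypes?: DomainObjectType[];
  views?: ViewContribution[];
  actions?: ActionContribution[];
}

class Platform {
  private types: DomainObjectType[] = [];
  private views: ViewContribution[] = [];
  private actions: ActionContribution[] = [];

  register(plugin: Plugin): void {
    this.types.push(...(plugin.objectTypes ?? []));
    this.views.push(...(plugin.views ?? []));
    this.actions.push(...(plugin.actions ?? []));
  }

  // The user sees every contributed view of an object in the same application.
  viewsFor(typeId: string): ViewContribution[] {
    return this.views.filter(v => v.appliesTo === typeId);
  }

  actionsFor(typeId: string): ActionContribution[] {
    return this.actions.filter(a => a.appliesTo === typeId);
  }
}

// Usage: a plugin adds a new object type and view; they simply show up.
const platform = new Platform();
platform.register({
  objectTypes: [{ id: "evildoer", displayName: "Evildoer" }],
  views: [{ appliesTo: "evildoer", name: "Dossier", render: () => "<div>dossier goes here</div>" }],
});
console.log(platform.viewsFor("evildoer").map(v => v.name)); // ["Dossier"]
```

The user-facing consequence is the one described above: a view or action contributed by any plugin appears on the relevant objects, inside the same application the user was already in.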
One design process for figuring out what the users' objects, attributes, and actions should be is called The Bridge. It is described in the book chapter "Bridging User Needs to OO GUI Prototype via Task Object Design" (Tom Dayton, Al McFarland, & Joseph Kramer, 1998; in L. Wood, Ed., User Interface Design: Bridging the Gap From Requirements to Design, pp. 15-56; Boca Raton, FL: CRC Press; ISBN 0-8493-3125-0). The full text is available for free here.