Why I no longer read my computer screen
From Ben's Writing
A research paper for Psychology 3720 - Learning [Fall 05]
Habit is second nature, or rather, ten times nature. – William James
I say that habit’s but a long practice, friend, and this becomes men’s [sic] nature in the end. – Aristotle
Information displayed on computer screens is intended to be informative, yet there is reason to believe that after repeated exposure to this information it is possible to ignore a great deal of it. We explore a few naive arguments that try, and ultimately fail, to explain this phenomenon, then propose a framework, based on automatic associative memory, that resolves their shortcomings. By framing habit formation in this light we can see not only how habits are formed, but how they might be generalized.
I have noticed that as I grow more familiar with computers, I tend to read less of the information they present to me. By this I mean that I rarely think twice about which button to use when saving my work in a word processor, or read many of the hundreds of informational messages my computer displays. It is not that I am unaware of what the messages say, or which button I pressed; rather, I no longer need to actively consider the information they contain. The strange part of this phenomenon is that information displayed on computer screens is intended to be informative, yet from my experience, there is reason to believe that after repeated exposure to this information it is possible to ignore a great deal of it.
What this paper concerns itself with is exploring the reasons why we might not read information from a computer’s graphical user interface.
Memorization argument
Some would argue that we no longer need to read a computer screen because we have memorized it. That is, our responses, as users, are based on previous exposure to a certain set of stimuli; that somehow our reactions are a product of the window’s dimensions, colour and composition. The weakness of this line of argument lies in the fact that our established familiarity with an application can be maintained even after multiple changes to the application’s graphical user interface (GUI). For instance, if the background colour of our word processor were changed, it seems ridiculous to assume that we would cease to understand how to write a paper. Granted, we may be a little confused by the hot pinkness of the background, but this disorientation would only be temporary, and we would quickly return to our previous performance levels. The same idea applies if the font we were writing in were changed: it seems reasonable to assume that we would still be able to read what was written, only that it would look different. Yet we must be remembering something, otherwise we would never be able to undertake these tasks at all. Let us consider, then, the idea that specific shapes and configurations trigger specific motor responses; that for a given configuration of window attributes we present a very specialized action in response.
Enhanced memorization argument
This is not to be confused with the previous argument, because here we would have access to all possible permutations of the window’s attributes and all possible reactions to them, whether calculated on the spot or pre-calculated and stored. As a result, we would not fall prey to a simple font change, spatial rearrangement or colour touch-up. The problem with this argument is that the amount of information needed is intractable: not only could it not be calculated fast enough to be useful, but even if it were calculated there would be no room to store it. To illustrate this, consider a simple application with one toolbar, a text editing region and a basic menu. For the purposes of this discussion we will assume a palette of 256 colours available to colour the backgrounds of these three elements. A simple back-of-the-envelope calculation gives us 256 × 256 × 256 = 16777216 possible colour combinations. Now, if we were to vary the position of the window (based on its top left-hand corner) on a 640 × 480 screen, that would give us 640 × 480 × 16777216 = 5153960755200 possible position and colour choices. All this for a simple application! Just think of how this would explode if you were to make this calculation for your favorite word processor. It should go without saying (but I will say it) that this is simply not a feasible solution to the problem.
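The arithmetic above can be checked in a few lines of Python; the figures (three colourable regions, a 256-colour palette, a 640 × 480 screen) are taken directly from the text.

```python
# Back-of-the-envelope count of stimulus configurations for the
# hypothetical simple application described in the text.

PALETTE = 256            # colours available per element
REGIONS = 3              # toolbar, text editing region, menu
SCREEN_W, SCREEN_H = 640, 480  # possible top-left window positions

colour_combinations = PALETTE ** REGIONS
positions = SCREEN_W * SCREEN_H
total = positions * colour_combinations

print(colour_combinations)  # 16777216
print(total)                # 5153960755200
```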
Now, if we continue in this manner, systematically eliminating each way it might be done, then we will eventually conclude that it simply cannot be done, which makes no sense. Intuitively, our ability to navigate GUIs without interpreting them directly has to have something to do with a window’s attributes; it simply cannot have anything to do with them specifically.
Automatic associative memory
The problem with the above proposals is that they miss several key features of the experience of using an application, and thus give us a very superficial account of the experience’s structure and an oversimplified model of its resolution. To solve these shortcomings we need to change our reference point and consider the entire context in which a window, an image and, ultimately, textual information is presented. The framework we will use to further our discussion is automatic associative memory. The premise of this model is that when a cue and an intended action are highly associated, the intended action will be retrieved automatically when the cue is sufficiently processed. The result of this reinforcement is that responses become faster and more reliable while requiring fewer cognitive resources (Berg, 2003). Before we get into the details of our argument, it should be noted that the automatic associative model of memory has been argued to be, in general, incorrect (Berg, 2003). The failure centers on the model’s prediction that when an associated cue is presented, the intended action will always follow automatically. It has been shown that if people reinforce the association between cue and intended action while engaged in other activities, the benefit of the practice is lost (Ouellette & Wood, 1998). The reason given is that the intention behind the action of interest is likely lost when one is engaged with other activities. Thus, they conclude, the reinforcement is no longer effective, because the “effects of practice on behavior are goal-dependent” (Ouellette & Wood, 1998, 67).
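As a rough illustration, the goal-dependence of reinforcement can be sketched in a few lines of Python. This is my own sketch, not a model taken from the cited papers; the automaticity threshold and all names are assumptions made purely for illustration.

```python
# Toy model: a cue-action association that strengthens only when the
# relevant goal is active, echoing the claim that the "effects of
# practice on behavior are goal-dependent".

AUTOMATIC_THRESHOLD = 5  # illustrative assumption, not an empirical value

class AssociativeMemory:
    def __init__(self):
        self.strength = {}  # (cue, action) -> reinforcement count

    def practice(self, cue, action, goal_active=True):
        # Practice while the goal is not active yields no reinforcement.
        if goal_active:
            key = (cue, action)
            self.strength[key] = self.strength.get(key, 0) + 1

    def retrieve(self, cue):
        # Return the action automatically once it is strongly associated.
        for (c, action), s in self.strength.items():
            if c == cue and s >= AUTOMATIC_THRESHOLD:
                return action
        return None  # falls back to deliberate processing

mem = AssociativeMemory()
for _ in range(10):  # distracted practice: goal not active
    mem.practice("save-icon", "click-save", goal_active=False)
print(mem.retrieve("save-icon"))  # None: distracted practice did not stick

for _ in range(5):   # goal-directed practice
    mem.practice("save-icon", "click-save", goal_active=True)
print(mem.retrieve("save-icon"))  # click-save
```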
We can easily resolve this for the purposes of our discussion, as our motivation is not to find some general overarching theory of habitual responses; rather, we are concerned only with explaining those that arise in computer use, and specifically those that relate to GUI use. This restriction is important because GUI use is exclusively goal-directed (Cooper & Reimann, 2003): a user presses the save button in a word processor specifically because they want to save their work. As such, we can use this model to discuss the phenomenon we are exploring.
Habits and intentions
It has long been established that habits are controlled by antecedent stimuli rather than by goal expectancy (Yin, Knowlton, & Balleine, 2004). It is now also known that the automaticity of a response is “highly dependent on goals that are active at the time of action,” and further, that “[e]nvironmental cues ... make it easier to remember the habitual intention, and once the related goal is reinstated, the action can be initiated effortlessly” (Berg, 2003, 67).
It can now be argued that, as a result of repeatedly hunting for the save button in a word processor, we are in an improved position to re-instantiate the save operation irrespective of the descriptive content of the save icon. Where before the icon helped us by explaining the function of the button, it now serves as a mere landmark for the save operation. Let’s look at a small example of how this can explain what is going on in our use of a word processor.
According to the above, we can eventually learn to automatically save our work in a word processor at little to no cognitive expense. To do this we must first be aware of which word processor we are running (assuming we are aware of others). The knowledge of which word processor we are using defines the context, or "environment", we are in. Secondly, we must be aware of how to save our work. This can be done in numerous ways, but for simplicity we will assume that a toolbar button is being used. Finally, we must intend to save our work.
Now that we have defined our goal, and the means by which we can carry it out, it is easy to see how all of this functions. If the above model is correct, then by repeatedly pressing the save button in order to save our work we slowly eliminate the need to think about clicking the save button in order to save our work. We can say this because, as we have just shown, this type of repeated action is equivalent to the kind required to form a habit.
This idea can be taken even further because of the loose definition of the term "related goal." There is no explicitness about which goal is required, only that it be related; thus, we can imagine how two major, unrelated goals, like writing a paper and balancing your checkbook, can have similar sub-goals and, as a result, habits in common.
To see this structure more clearly, imagine a simple branched hierarchy, with the most general goals near the top, above all applications, and the most specialized at the bottom, under the individual applications: like a tree turned upside down.
Now it is easy to see how the equivalence of the sub-goals helps generalize across contexts without a loss in automaticity. For instance, imagine a suite of office applications. (MS Office, Work, OpenOffice, etc.; any will do.) The word processor is used for editing documents that are generally composed of text. One feature commonly found in word processors is the ability to remove text from one location (cut) and place it in another (paste). This action can be carried out in all the other applications as well, and not only with text but with images and data cells too. They can all be moved around using exactly the same mechanism; they differ only in implementation. Thus, we can transfer the automaticity of cut-and-paste, trained in our word processor, to a spreadsheet application.
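A minimal sketch of why such habits transfer, assuming (purely for illustration) that the habit table is indexed by the shared sub-goal rather than by the application; all names here are hypothetical.

```python
# If a habit is keyed by sub-goal alone, any application that presents
# the same sub-goal reuses the same automatic response.

habits = {}  # sub-goal -> automatic action

def reinforce(subgoal, action):
    habits[subgoal] = action

def act(application, subgoal):
    action = habits.get(subgoal)
    if action is None:
        return f"{application}: deliberate search for {subgoal}"
    return f"{application}: {action}"

reinforce("move content", "cut then paste")  # trained in the word processor
print(act("word processor", "move content"))  # word processor: cut then paste
print(act("spreadsheet", "move content"))     # spreadsheet: cut then paste
```

The second call shows the transfer: nothing was ever practiced in the spreadsheet, yet the habit applies because the sub-goal is the same.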
As a final closing note, I would like to present part of the motivation for writing this paper. The following is a summary of the initial hypothesis, and of the response that subsequently eliminated it. It is interesting because the factor it describes still bears on the types of behaviors this paper concerns itself with, but its effects are specific to our current time.
It has been argued that assimilating information from a computer screen is more difficult than from other mediums, like paper (Garland & Noyes, 2004; Sauter, Gottlieb, Jones, Dodson, & Rohrer, 1983). For instance, Sauter et al. were interested in determining the health implications of video display terminals in the workplace. The study found that "eye-strain [was one of] the most frequently reported disturbances" (Sauter et al., 1983, 289). This suggests that the physical attributes of the screen may actually interfere with the desire to read, meaning there is a direct reward structure associated with not reading or interpreting information on a computer screen.
The problem here is that this argument is short-sighted; it assumes that working conditions and equipment will remain constant. When Sauter et al. conducted these studies the available display types were Cathode Ray Tube (CRT) based units, similar to the ones in your TV. CRTs actually interfere with a person's ability to read and interpret visual information (Garland & Noyes, 2004). Today, however, it is possible to purchase many other display types, like Liquid Crystal Displays (LCDs), which have substantially different properties than CRTs. LCDs have been shown not only to be easier on the eyes but also capable of “significantly [improving] the accuracy of lexical decisions and the accuracy and speed of sentence comprehension” (Gugerty, Tyrrell, Aten, & Edmonds, 2004, 81). Thus, LCDs are an improved medium with which to display textual elements. This does not mean eye strain plays no part in the observed phenomenon, just that it should not be considered a cause; instead it can be looked upon as an accelerant.
It occurs to me that, given what we have seen, we should design application interfaces that lend themselves to habit formation. The habits should be good ones, of course, and whenever possible they should afford us some level of generality in order to promote the transfer of automaticity from one application to another. It is hard to convey a complete and accurate picture of the processes involved in creating habits in such a short amount of space, but it is my hope that the essence has been captured above. It is interesting to think that we are capable of performing such complex tasks with so little cognitive expense, and at the same time hard to fathom what life would be like without this ability. It seems that habits play an integral role in our lives and that they constitute more than merely a lazy man’s ticket out of reading.
References
Berg, S. S. M. van den. (2003). Prospective memory: From intention to action. Unpublished doctoral dissertation.
Cooper, A., & Reimann, R. (2003). About face 2.0: The essentials of interaction design (2nd ed.). Indianapolis, IN, USA: Wiley Publishing, Inc.
Garland, K. J., & Noyes, J. M. (2004). CRT monitors: Do they interfere with learning? Behav. Inf. Tech., 23 (1), 43–52.
Gugerty, L., Tyrrell, R. A., Aten, T. R., & Edmonds, K. A. (2004). The effects of subpixel addressing on users’ performance and preferences during reading- related tasks. ACM Trans. Appl. Percept., 1 (2), 81–101.
Mills, C. B., & Weldon, L. J. (1987). Reading text from computer screens. ACM Comput. Surv., 19 (4), 329–357.
Ouellette, J. A., & Wood, W. (1998). Habit and intention in everyday life: The multiple processes by which past behavior predicts future behavior. Psychological Bulletin, 124 (1), 54–74.
Sauter, S. L., Gottlieb, M. S., Jones, K. C., Dodson, V. N., & Rohrer, K. M. (1983). Job and health implications of VDT use: Initial results of the Wisconsin-NIOSH study. Commun. ACM, 26 (4), 284–294.
Yin, H. H., Knowlton, B. J., & Balleine, B. W. (2004). Lesions of dorsolateral striatum preserve outcome expectancy but disrupt habit formation in instrumental learning. European Journal of Neuroscience, 19 (1), 181–189.