Are We Being Technological Yet? Or, Why Isn’t There a Colon in This Title?
by Dana Ringuette
[Originally delivered at the Illinois Philological Association, Millikin University, March 2003.]
When John Naisbitt published his soon-to-be bestseller, Megatrends, in 1982, he put forward “the five most important things to remember about the shift from an industrial to an information society”:
Now, from the perspective of 2003 we might marvel at the degree of prescience inscribed in Naisbitt’s book—and it is remarkable. But we might also—with a little historical legwork—realize that Naisbitt was not so much predicting “trends” as he was responding to and documenting the rise of a techne, an art or technology, which was itself a response. What I am suggesting here can be illustrated by a brief, descriptive chronology:

In 1975, the Altair 8800, a computer kit with 256 bytes (bytes!) of memory, was the cover story for Popular Electronics magazine, a publication with an audience of electronics professionals, amateurs, and hobbyists. The Altair 8800 was really the first “personal computer,” built around the Intel 8080 microprocessor, which had been introduced only a year earlier. The response to the magazine’s story was extraordinary. The machine as a kit sold for $395, and its creator had hoped to sell around 400 machines—total—in order to break even: he received that many orders in one afternoon. Also in 1975, Digital Research developed and made available the CP/M operating system, the first integrated, standardized, and readily understandable software for a small computer, integrating keyboard input. In 1977, the Apple II computer, built around the MOS 6502 microprocessor and integrating an operating system and user interface, was introduced. In 1978, the C Programming Language was published, and the Zilog Z-80 CPU was coming into widespread use. A few years later, the KayPro portable computer went on the market and was enormously successful in a variety of ways, not the least of which was the sounding of the initial—faint, yet undeniable—tolling of the death knell for huge mainframe computer design. And throughout 1981 and 1982 came the introduction of the IBM PC and the MS-DOS operating system, which used (and Microsoft would probably say “improved upon”) the command and file structure of Digital’s CP/M.

So take a look in the rear-view mirror of 1982: in the seven years following the development of the Altair 8800 (I shudder to think this is about the same amount of time it took me to complete a doctorate), we can see that the development of computer technology was astounding, even mind-boggling. And if, in the rear-view mirror of 2003, we try to take a measure of what has occurred in the ensuing 20 years, the development could be regarded as downright miraculous. An example: from the 256 bytes (bytes!) of memory of the Altair 8800 and an operating system which required no more programming code than would operate within 256 bytes, not to mention no video terminal and no keyboard, we have moved to (Microsoft would say “progressed to”) Windows XP, which contains, for better or worse, 30 to 40 million lines of programming code utilizing gigabytes (or billions of bytes) of memory. It is all the more astounding that any one of us could go on-line today and buy such a machine—delivered to our door—for not much more than the 1975 price of an assembled Altair 8800 ($495).

Still, and without taking the shine off Naisbitt’s 1982 projections, we might observe that it didn’t take a rocket scientist to foresee the social and economic ramifications of something that was happening—and had happened—right before his (my!) eyes. So my point here is one that others have made, but one which certainly needs to be made again: the development of such technology was an effect, not a cause. It was a response to an already existing demand.
As Richard Lanham put it several years ago, “the demand for the medium had preceded the medium itself.” We find ourselves now in the position of a Naisbitt in 1982, with one important exception: we need to look even more critically at this phenomenon instead of simply allowing ourselves to be moved passively in its flow (an issue which Naisbitt himself takes up in a different way in a 1999 book). That is, we need to look at the phenomenon rather than trying to look through it, because, as many critics have already argued, this technology is a techne—an art—as important as the invention of moveable type and even, some argue, the invention of the alphabet itself. It is an engine bringing about changes in how we think about language and art, about the social, artistic, and professional environment for language. In the spirit of “looking at,” then, let me sketch out a few of those changes that are occurring—indeed, have occurred—and that still await our response as teachers in the humanities and the arts:

1. The “information age” becomes the “idea age” becomes the “nebulous age.” As we know, the shift from an industrial economy to an information economy passed some time ago. Yet the collapse of “information float” is so much upon us that it is no wonder we seem to live in an age of the imprecise, the unformulated, and the tenuous—perhaps to the point of inducing paranoia, or at least cynicism. At one point, perhaps ten years ago, people argued that the only way one could successfully negotiate the amount of information available was through what was called the “refined form of ideas.” In other words, sifting through the glut would be possible only through a prior, defined set of ideas, which was itself changing every day. But the defined sets—indeed the ideas—come so fast and furious that they rival the glut of information. This is not to enter into a discussion of determinable or indeterminable Truth, but only to observe that Heraclitean flux not only wins out over Platonic immutability, it seemingly drags the carcass, Achilles-like, around the intellectual and theoretical city of Troy. Hence, the nebulous age—or, as Naisbitt puts it in his recent book, the “technologically intoxicated zone”—we now inhabit.

Many of you have already experienced this vicariously or virtually through your students. Take any undergraduate—what level or year does not matter—through an introduction to the capabilities of an adequate on-line library search. If you’re watching closely, you will see the student’s eyes glaze over at the sheer amount of resources, above and beyond the regular core of sources to which a student is accustomed. Anxiety ensues, not because of the massive and inclusive amount of information available, but because of the potentially mind-numbing sixth sense that not all information is created equal, even though in presentation, format, and appearance, all that information looks, roughly speaking, equal. Not only does a student have to become familiar with it, but she or he has to discriminate and distinguish. We should have a response to this effect of technology, and it can’t be simply a call for more use of or practice with technology, although I suppose that certainly may be a subsequent and ancillary element of an appropriate response. Rather, I think the response should be a commitment to developing in our students what Leroy F.
Searle calls a “sophisticated critical literacy” enabling a “focused imagination and reasoning.” Such literacy takes as its subject matter “the imaginative, sophisticated use of language,” and it leads to learning how to be, and what it means to be, a thoughtful, aware, careful, penetrating, perceptive, and responsible reader—and, by extension, learning how to be and what it means to be a careful, aware, perceptive, articulate, and responsible writer. In this sense, the experience of technology, of the internet, might be studied and taught as a literary experience because, in terms of the use of language, computer science and literary study are not always so far apart. Computer science recognizes the importance of “extensibility” (a program, language, or whole system which can be modified by changing or adding features) and “recursion” (how each step in an expression or process is determined by the application of a formula to preceding terms). And as we know in literary study, the study of language itself is “extensile” and our understanding of language is “recursive,” capable of increasing in extent and range even as it gains depth and complexity. Language is a techne requiring that we look at, not simply through, it.
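
(A purely illustrative aside, not anything drawn from Searle or from the talk itself: the two borrowed terms can be glimpsed side by side in a few lines of Python. The names below, a small table of text “handlers” and a textbook Fibonacci function, are hypothetical; this is a minimal sketch of a system extended by adding a feature, and of a sequence in which each step is produced by applying a rule to the terms that precede it.)

```python
# A minimal, hypothetical sketch pairing the two terms used above.

# Extensibility: a small "system" that can be modified by adding features.
# Here, text-handling functions are registered in a dictionary.
handlers = {
    "upper": str.upper,
    "reverse": lambda text: text[::-1],
}
# Extending the system after the fact, without touching what was already there:
handlers["shout"] = lambda text: text.upper() + "!"

# Recursion: each step determined by applying a formula to preceding terms.
# Here, the nth Fibonacci number is built from the two terms before it.
def fibonacci(n: int) -> int:
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

if __name__ == "__main__":
    print(handlers["shout"]("recursion"))    # RECURSION!
    print([fibonacci(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```
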
2. The move to smaller-footprint computers and dispersed servers, and to increasing access to computer technology, was not a lateral side-step or by-product of the use of huge, centralized, mainframe computers, but rather a shift away from the power invested in lab-coated technicians and computer hackers, away from large centers or repositories of information, and thus away from the administrators or invisible arbiters of what counts as information. A case in point is Google and its strategy to crawl across a wide spectrum of internet sites and servers in search of pertinent information. Distributed software is another case in point. If this shift is healthy—and it certainly started out that way—and if it is a possible democratizing of computer technology and of what has become the internet, then it makes sense that we, as teachers in institutions of higher education, do what we can to foster and promote this empowerment. This means working to ensure for our students immediate and ongoing access to this technology, but it also now means, more importantly, that we teach ways to read and use it, how to learn from it, which of course would mean that we discover and learn its strengths and weaknesses. Otherwise we are condemning both ourselves and those we say we want to empower to a sort of cultural oblivion of ignorance and passivity, a reserved seat, in other words, in the nebulous age.

3. Whether we like it or not—and whether we acknowledge it or not in our students—computer technology has changed the way we and they think, see, and understand. A few years ago, Jay David Bolter argued that the printed book will move to the “margin of our literate culture”: this is not to say that books will disappear (Amazon.com certainly did away with that anxiety), but rather that print, as Bolter argues, “will no longer define the organization and presentation of knowledge, as it has for the past five centuries.” This does not mean the end of literacy, or, worse, the domination of something like television. Literacy must now be defined by a new kind of book, one that redefines what we now speak of as “text.” Television has always only subsumed language within video. The new text subsumes, as we see almost every day (on any good web site or on CNN, MSNBC, and other television news stations), alphabetic and iconic information, typographical format, and images, which in turn promotes volatility within that “text.”

4. This is where our ongoing anxiety over such things as “intellectual property rights” in the arts and humanities—and over what has become the next generation of “distance education”—can do us harm. How is it that we will figure out such a thing as “intellectual property rights” when, even now, as a discipline, we don’t know what the “property” is? And I’m not talking about something so abstract as “authoritative editions,” or “author” or “original,” or even so concrete as course syllabi and materials, but rather about how we will recognize or know what is and is not collaborative. How much of our (and I use that pronoun advisedly) classroom material, say, with or without integrating technology, is “ours,” and how much is it the work of someone else or of many people? Where do my ideas begin and end, and where does the technology start or end? I would like to be able to say, right here, that this is simply a longstanding rhetorical problem, but that would not be sufficient. Rather, again, it’s both a literary problem and a literacy problem, and again a problem of defining what it is that we, professionally and intellectually, do.

The University of Phoenix, of course, is usually the poster-child for the evils and dangers of on-line, distance education. And well it should be—but not primarily because of intellectual property rights. Its response to the imprecise and the unformulated is very simple: identify the impreciseness as an attribute of the instructor and then design the course in such a way that Koko the talking gorilla, possessing enough facility with a keyboard, could teach the course. As an “instructional designer” there has put it (in a recent Chronicle of Higher Education article), regarding the dangers of leaving instructions to teachers open-ended: “It’s too nebulous, we have to tell them exactly what to talk about.” Or, as the president of the University of Phoenix puts it even more bluntly regarding the process of designing and reviewing courses: the process “‘operates for the lowest common denominator’ of instructional ability,” and thus “‘it’s a safety net. That’s the beauty of it.’” Such beauty is at best pathetic, and this “net” will fail for the same reasons that colleges of education have worked so mightily to improve their standing and image across the country: teaching must not be defined simply as a set of methods by which to convey information (note the analogy here to the equally dangerous, truncated definition of “information technology” as merely a “tool”), as if what is called “content” were static and could be taught by anyone, anywhere. Rather, education involves an infinitely more elegant, more complex, more knowing relation among teacher, student, and object of study – in our case, the imaginative, sophisticated use of language. We can’t get anywhere by simply working the other side of the same street that the University of Phoenix works, trying to compete with it without deigning to recognize that we are doing so, which is what we do when, for example, we say we “teach” large sections of 50, 100, 200, 300, even 500 students. How can we be surprised, then, when the University of Phoenix argues that it is merely building upon what already exists, what higher education already does?
Even though my sense is that the University of Phoenix, like Western Governors University, is destined to become a very convenient but overpriced organization for vocational training (if perhaps an advanced vocational training), this doesn’t allay my fears for lower-division classes in the humanities, which lately have borne even more heavily than usual the brunt of the damage caused by cutbacks, retrenchment, and so-called inventive rethinking.

5. And so consider the lowly colon as punctuation – and its rather recent (historically speaking) promotion to a place of prominence as the marker of title and subtitle in scores of articles, conference papers, and books. How did this happen? And consider too that the colon is often either punctuation—that is, a written symbol that corresponds neither to the phonemes (sounds) of a spoken language nor to the lexemes (words and phrases) of a written language—or a glyph—that is, a typographical symbol which is not a true punctuation mark, but is like an ampersand, asterisk, number sign, and so on. And its use has been extended to serve as a key element in “emoticons,” where it is neither punctuation nor glyph, but perhaps more like a lexeme or phoneme or image. The colon is a grammatical, rhetorical, graphical function, doing work which can only be defined in relation to what is around it, close to hand, or connected: just like all language, especially imaginative language; just like technology, especially imaginative technology.

6. My last point is abstract, but I think it’s important because it concerns how we think of and define ourselves as teachers and thinkers of the imaginative and of reason. Coleridge observed, in the course of arguing for Shakespeare’s “judgment [being] equal to his genius,” and against treating him as some sort of idiot savant, that “no work of true genius dares want its appropriate form, neither indeed is there any danger of this.” And he goes on to say that “As it must not, so genius cannot, be lawless: for it is even this that constitutes it genius—the power of acting creatively under laws of its own origination.” As teachers and writers, we study and, if we’re lucky, perform this “power of acting creatively”; that is, we look at, not through, artistic, poetic language in order to understand and appreciate it, and to help others to do so. It may be time to recall the comprehensive vision of the encyclopedic Coleridge and to appreciate the potential for beauty in all imaginative activity. We may not have had an active, practical hand in the “genius” that went into the development of this techne we now call information technology, but we surely must take part in helping—collectively—to understand and define the “form” or the “laws” which will fuel its “power of acting creatively.”

§§§