
I, Wetware

So, let’s say this essay was written by a computer program.

I’ve fed it all of my digitized writing to date, so it can emulate my predilection for commonly known but usually unnecessarily long words, my run-on sentences, my verbose, British-ish style, my sense of humor as it dries out with age, my habit of making up words, which I refer to as phrasemashing, my layman’s fascination with theoretical science, and my preference for split infinitives and recursive humor.

(Also, ironic use of punctuation.)

In addition, I’ve given it access to my email, chat histories, and Facebook account, fed in my resume, and given it a list of things within view of my desk. This makes it nearly believable both that I have enough friends better versed than I am in computer science to write the program writing this essay, and that I was able to get them drunk enough to agree to do it.

To keep the material fresh, the program has a little loop that occasionally browses Wikipedia for something relevant, like this:

“The mind perceives itself as the cause of certain feelings—"I" am the source of my desires—while scientifically, feelings and desires are strictly caused by the interactions of neurons, and ultimately, the probabilistic laws of quantum mechanics.”[1]
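A loop like that might look something like the following sketch. This is a guess at the mechanism, not the actual program: the `fetch_summary` call uses Wikipedia's public REST API for random article summaries, but the `is_relevant` keyword filter is an invented stand-in for whatever "relevant" means here.

```python
import json
import urllib.request

RANDOM_SUMMARY_URL = "https://en.wikipedia.org/api/rest_v1/page/random/summary"

def fetch_summary(url=RANDOM_SUMMARY_URL):
    """Fetch one random article summary from Wikipedia's REST API."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["extract"]

def is_relevant(text, keywords):
    """Crude relevance test: does the summary mention any keyword?"""
    lowered = text.lower()
    return any(k in lowered for k in keywords)

def browse_for_material(keywords, tries=25):
    """Poll random pages until one looks relevant, then quote it."""
    for _ in range(tries):
        extract = fetch_summary()
        if is_relevant(extract, keywords):
            return extract
    return None
```

In practice the program would presumably fold a passing extract into its draft; this sketch just returns the quote.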

Finally, so the program could have a sense of history and its origins, I fed it the complete episodic memory of one of its predecessors, Staff_Robot. Staff_Robot is a now mostly defunct primitive AI that listened in on our workplace chat room and occasionally spouted back various nonsense based on our conversations. Besides thousands of hours of tech babble and occasional quotes from user profiles, it was also fed the Bible and the complete transcript of No Cure for Cancer. Here are a few of the things it decided to say:

but that give 'drugs' a bad perms error?

had to install custom filter girls by height

pot heads are easy to implement, we add up to 'auto-reject'

just today, a 13-year-old girl, who told me a pointer right now

i think staffrobot is staffrobot

Many of Staff_Robot’s phrases are nonsense, some are eerily relevant, and many slander me.[2] It’s a bit like a brain-damaged child with a huge vocabulary, but then again, its entire brain doesn’t go much over sixteen megabytes, significantly less than conservative estimates of the human brain’s storage capacity.
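The essay doesn't say how Staff_Robot worked, but chat bots of its vintage were often simple Markov chains: record which word tends to follow each pair of words, then walk those statistics to babble. Here is a minimal sketch of that kind of generator; the function names and training text are illustrative, not the real bot or its chat logs.

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each word pair to the list of words seen following it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def babble(chain, length=12, seed=None):
    """Walk the chain from a random starting pair, emitting words."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-2:]))
        if not followers:  # dead end: no recorded continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Fed the Bible, a comedy transcript, and years of tech babble, even a chain this simple could plausibly produce the kind of eerie half-sense quoted above.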

So this program is a pretty good representation of me and my digital communications, and is writing this essay as if it were me, expressing what I might express if I were actually writing it. But, being written through a series of algorithms pumped through silicon instead of the usual neural wetware, does it lack the validity of previous essays? I co-wrote the program, and most of its input is previous communication by me, so it’s essentially just a variation on the program I co-wrote in my head, the one that takes my experiences and compels me to formulate them into compact, highly readable, and hopefully someday lucrative short-form writing.

As a programmer by day, I feel a little embarrassed in front of real programmers who did things like “studying” in college. Most of them actually know how computers work, why they overheat, and what logical chains underlie processes like creating a variable. I don’t have a clue: I use programs interpreted by other programs, compiled by other programs, into programs the programs on a computer can read and use to set bits on the hard drive to on or off. Fortunately, there’s a giant framework of programs already written that do all this for me, so I can pretend I’m a programmer for lots of money. Come the apocalypse and the small band of computer engineers left to rebuild society, I’d stand useless among them and be quickly killed for food.[3] So what’s the difference between the programs that turn my hackery into things that make computers do things, and the program that takes my prior writing and pumps out new writing?

Well, for one, the programs I write at my job are the end product. When something goes wrong, my program is the one that gets checked for problems. It stands on a bed of other programs we all assume are roughly working, and it represents the final intent, in the same way this essay stands on the English language and the millennia of writing before it. Each intended output is credited to its creator, not the mountains of science and society that preceded the creation and made it possible. So I created the program, but the program gets the credit for this essay, in the same way I get the credit for most of the things I do in my life, and not my parents.

Yet in programming, it’s perfectly acceptable for me to write programs that write other programs or do work for me, and, magically, I get the credit for both the program and the output. An essay, though it has input (my life), a purpose (get me personal fulfillment, money, and sex), and a process (drink, type), is different from a program because it is an art form, so I can cloud the input/output explanation with many, many extra words trying to determine undefined aspects of my character, purpose, and effect on others. Because it’s art, expressing things that I want to express, it contains all the questions of identity and personal interaction that so confuse people on a daily basis, so all that is mysterious and so far unknowable about me is responsible for the creation of an essay. Except this one, because this one is written by a program, and if you want to know what and who the program is, you just open it up in your favorite text editor and read it.

But if you didn’t know it was written by a program, what would be the difference? And say you discovered, after this program had written hundreds of essays,[4] that all that time it was a program, wouldn’t the program have become as complex an identity as I am, assuming you don’t know me? Still, the program’s output would be dismissed as separate from other kinds of art, because the last act of human expression was writing the program, so all the complicated social intentions of humanity ended there.

I don’t think this is bad. The reason computer-generated art doesn’t interest me the way human-generated art does is that I don’t care what a computer wants to express most of the time. I consider a computer an a-to-b machine that does what it’s told, and I’m not interested in its feelings the way I’m interested in another human’s feelings.

But this makes me a hypocrite, because I think humans are basically a-to-b machines that do what they’re told, and what really sets them apart is that I can’t fix them or turn them off, but I can have sex with them.[5] In fact, I’m probably better versed in my everyday interactions with humans than I am with my computer. Why does a human make a joke? Because it made a quick connection and wants to tighten social bonds. Why does a human pass out in a Supergirl costume with his head on a watermelon? It’s Halloween and he had too much to drink. Why does this program suddenly quit every now and then? I have no idea. We interpret the incredibly complex system of a human as a combination of social pressure and genetic tendency, and that’s analogous to the way we interpret computers as a combination of software (social pressure) and hardware (genetic tendency). The result is something so complex that even the people who built it won’t always know what went wrong or why.

This point has been made before, but in our attempts to create artificial intelligence we consistently compare it to human intelligence. Fine. But what we really want is an artificial human intelligence. We want to reverse engineer ourselves; the average computer program already has what it wants. Considered as organisms, microchips and their programs already outnumber us, and are doing quite well in their native environment. The conversation gets confused when we mix the ideas of the program as its own complex, unknowable system, the program as a tool, and the program as an extension of ourselves. As its own entity, the program becomes our best attempt at becoming God, but it lives separate from us, and we have to live with it; if it is indeed as complex and difficult to get along with as we are, we may have to accept ourselves as determinate, programmed beings in our own right, and compare the robots’ art to our own. As a tool, we negate all those worries; the program remains a really good hammer, and cannot create the art we love to bullshit about. As an extension of ourselves, we have a chance to change our nature without fighting for the nebulous status quo of being human. Accepting this last point of view means moving forward toward greater possibility with the same uncertainty we have now.

Now, am I me? Is the me writing this essay me, or a program, or a program representing me, or an expression of me, or an extension of me? Trapped in my single, two-and-a-half-dimensional point of view, I think it is not me. Probably the single most crushing concept I ever read came from Umberto Eco, writing on semiology. The point was that the system of meaning is the thing, and the end of the system is the end of the thing. My hope that there will be some kind of continuity of my being beyond the dissolution of my physical existence is duly crushed by the idea that the system of my being is a dying body, and my dreams will vanish regardless of what this program writes post of my humous.

The dream is the extension of our being: that we will merge our duplicatable silicon tools with the system of our being. The drive to create artificial intelligence is not to create something separate from ourselves, which is why we balk at the idea of a machine creating art as we do: we would have pushed our creation beyond ourselves, and become disposable carbon. The drive to create an AI is the drive to know how we work, and somehow preserve it. And if it turns out we’re no different from the programs we create, maybe we can back ourselves up, or write ourselves again. If we’re no different, then maybe we can release our love of the immediate stream of consciousness and accept that we are an algorithm, and that the continuity of our beings is embedded in a larger system, where sleep is little different from death, and the contents of our lives are the slow resolution of an equation that will continue beyond us, though it needed us to complete its answer.

Myself, I prefer the effect of art over its origin, so where a work comes from is less interesting to me than the sensory downbeat it creates when I first encounter it. From my perspective, it would have to be irrelevant whether the art came from a human, a computer, or an infinite number of monkeys. But that’s not completely true, because part of the effect of art on me is the imagining of the artist’s mind. What was it trying to pass along?

If the program’s art is as valid as mine, perhaps I am extraneous, determinate, doomed, and immortal. Would you believe in this? Would you listen to art from a program? Would you think it was possible? Meaningful? Is there an important difference between the intent of an individual process that looks like us, and the unfolding of the universe?

EOF