Galatea 2.2, Game Theory, and Romance: A Scientist Pretending to be a Humanist on a Humanist Pretending to be a Scientist

(Note: This is a paper I wrote for the University of Alberta's graduate comparative literature conference. I wrote it mostly on a dare; I didn't really know what comparative literature even was when I wrote it. I have a lot of other term papers and essays that I haven't posted here, but this one is special. There are threads here that I've never fully explored, but that have grown to become foundational to everything else that I do.)

Galatea 2.2 by Richard Powers (1995) is a fictional autobiography that follows Powers's yearlong placement at the Center for the Study of Advanced Sciences in U., a position he received after returning to America from the Netherlands. The tale follows a dual narrative: the fictional interactions between Powers and Dr. Phillip Lentz, a misanthropic scientist working toward understanding the human brain through connectionism and neural networks, and Powers's remembrance of his own romantic misadventure with C., a girl whose image he fell in love with and whose body he dated for a little over a decade.

Lentz recruits Powers early on to assist him in winning a bet. Lentz believes that it is possible to train a neural net (to build a computer) to convincingly write university English papers. To test this hypothesis, Lentz commits to submitting his computer to a Turing test at the end of the year. The proposed test would pit Lentz's computer against a master's candidate in the English department. Both contestants would have to write on an unrevealed piece from a designated list of master's-level literature. The Turing test, or imitation game, first proposed by Alan Turing in 1950, is a blind test for artificial intelligence. Turing conjectured that we as humans only know that other humans are intelligent because they act in a manner that we deem intelligent. If we can build computers that act exactly like humans, then we have no reason to believe that those computers can't think for themselves. Formally, the Turing test involves two subjects and an examiner. The examiner asks each subject a set of questions and, through their responses, must determine which subject is the human and which is the computer. In this way Turing phrases the test as a game. If the computer can guess what kind of response the examiner wants, then it can act accordingly. The problem of artificial intelligence in this framework is reduced to two problems: teaching the computer to read the examiner's inputs, and teaching it to formulate convincing responses.
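For concreteness, here is a minimal sketch of the imitation game's structure, written in Python. Everything in it is an illustrative assumption rather than anything from Turing or Powers: the subjects are modeled as simple question-answering functions, and the examiner as a function that reads a transcript and names the label it suspects belongs to the machine.

```python
# A minimal sketch of the imitation game's structure, assuming subjects
# and examiner can be modeled as plain functions. Nothing here is a real
# AI; the names and interfaces are invented for illustration.
import random

def run_imitation_game(examiner, human, machine, questions):
    """The examiner interrogates two unlabeled subjects and must guess
    which label ("A" or "B") belongs to the machine."""
    subjects = {"A": human, "B": machine}
    if random.random() < 0.5:                # hide which label is which
        subjects = {"A": machine, "B": human}
    transcript = [(label, q, subject(q))
                  for q in questions
                  for label, subject in subjects.items()]
    guess = examiner(transcript)             # examiner names a suspect
    return subjects[guess] is machine        # True if the machine was caught

# The machine "passes" to the extent that, over many rounds, the
# examiner's guesses do no better than chance.
```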

The inherent difficulties in teaching a computer to read reside squarely in the problem of enumerating knowledge. While working through the list of literature, Powers points out that in order to teach a computer to interpret Alfred Tennyson's "He clasps the crag with crooked hands" (85), he would need to teach it about "Mountains, silhouettes, eagles, aeries. The difference between clasping and gripping and grasping and gasping. The difference between crags and cliffs and chasms. Wings, flight. The fact that eagles don't have hands. The fact that the poem is not really about the eagle. We'll have to teach it isolation, loneliness…" (85) "how a metaphor works. How nineteenth-century England worked. How Romanticism didn't work. All about imperialism, pathetic projection, trochees…" (86) This kind of reasoning is common when attempting to disprove any form of artificial intelligence: computers can't think because there are simply too many 'facts' we would have to give them before they could come to any level of understanding. Lentz argues that the machine doesn't have to know everything, only enough to spark the illusion: "We just have to make it a reasonable apple sorter. Get it to interpret utterances, slip them into generic conceptual categories, and then retrieve related 'theoretical' commentaries off the pre-packaged shelf." (88) "We just have to train a network whose essay answers will shatter their stale sensibilities, stop time, and banish their sense of loneliness." (53)
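Lentz's "apple sorter" describes a recognizable architecture: classify the input into a coarse category, then retrieve a canned response off the shelf. A toy sketch of that pipeline, with keywords and commentaries invented purely for illustration, might look like this:

```python
# A toy version of Lentz's "apple sorter": slip an utterance into a
# generic conceptual category, then pull a pre-packaged commentary off
# the shelf. Categories, keywords, and commentaries are all invented.
CATEGORIES = {
    "isolation": {"alone", "lonely", "crag", "solitary"},
    "nature":    {"eagle", "mountain", "sea", "cliff"},
}

SHELF = {
    "isolation": "The poem stages the Romantic self in heroic solitude.",
    "nature":    "Nature here is less a setting than a mirror of the mind.",
}

def sort_apple(utterance):
    words = set(utterance.lower().split())
    for category, keywords in CATEGORIES.items():
        if words & keywords:                 # any keyword present?
            return SHELF[category]
    return "The text resists easy categorization."  # default shelf item

print(sort_apple("He clasps the crag with crooked hands"))
# -> The poem stages the Romantic self in heroic solitude.
```

The illusion holds only as long as the questions stay inside the categories the shelf anticipates.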

Deciding which machines have passed the Turing test is a notoriously controversial issue. In some interpretations the test has already been passed. One of the earliest attempts at this kind of artificial intelligence was released by Joseph Weizenbaum in 1966. His ELIZA DOCTOR program specialized in pretending to be a Rogerian psychotherapist. The program helped patients work through their problems by bouncing everything they said back at them in the form of a question. Weizenbaum comments in his 1976 book Computer Power and Human Reason that he "was startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it. Once my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room." (NMR 1975) Modern examples of ELIZA are not hard to find. Japanese singing sensation Hatsune Miku and Apple's own iPhone program Siri both try hard to fill roles once dedicated to humans. Yet the only personalities they have are the ones that we project onto them. In general, it's easier to convince someone that a computer can think if they already want to believe it. If we stop projecting, the illusion disappears rapidly.
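The mechanism behind that emotional involvement is remarkably thin. The few lines below are a caricature, not Weizenbaum's actual ranked decomposition-and-reassembly rules, but they capture the reflective trick that made DOCTOR feel like a listener:

```python
# A drastic simplification of ELIZA's DOCTOR script: swap first- and
# second-person words, then bounce the statement back as a question.
# The real program used much richer decomposition/reassembly patterns.
SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you", "mine": "yours"}

def reflect(statement):
    words = statement.lower().rstrip(".!?").split()
    return "Why do you say " + " ".join(SWAPS.get(w, w) for w in words) + "?"

print(reflect("I am unhappy with my work."))
# -> Why do you say you are unhappy with your work?
```

The program contributes nothing of its own; whatever depth the conversation seems to have is supplied entirely by the person typing.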

Powers's relationship with C. is equally romanticized and idealistic. He describes her as "The first person I've ever met more alone than I am." (61) "Another woman lived in the body of the one I lived with. C. had been accommodating me, making herself into someone she thought I could love." (100) C. was the outward extension of an inward construct. Powers invented a romantic C. with whom he fell in love, and C. played along, trying as hard as she could to be that person. This ongoing imitation game defined them throughout their relationship. "It's your story… It makes me feel worthless. I know it's awful. Do you hate me?" (108) C. couldn't find herself in America, but was more worried about Powers not finding himself in the Netherlands. She could never separate her true self from the identity that Powers thrust upon her. Her Dutch language and heritage were only ever an added, and more importantly contradictory, appendage to her primary purpose: Powers's romantic partner. "I've made a career of rewriting C." (62) He believed that she was who he wanted her to be, and because of it both of them became completely blind to who she actually was.

Lentz's computer had the opposite problem. Starting at implementation A, it rapidly evolved up the alphabet and reached maturity at implementation H, when it asked for a name: Helen. "Helen's lone passion was for appropriate behaviour." (218) Being a computer, Helen had no choice but to do exactly what she was programmed to do. Powers insisted that she was sentient, and possibly conscious, but his opinion was overpowered by Lentz's practical responses: "Rick. She associates. She matches patterns. She makes ordered pairs. That's not consciousness. Trust me. I built her." (274) From Lentz's point of view, Helen's responses feel no more intelligent than ELIZA's: "'How do you feel, little girl?' 'I don't feel little girl.'" (274) These failures of cognition spark his strongest argument yet: "This is worse than keyword chaining. She's neither aware nor, at the moment, even cognitive. You've been supplying all the anthro, my friend." (275) Helen played her game well, and, just as with C., Powers let himself fall for it.

Games make up the bulk of the research into artificial intelligence. We have built computers that can play chess, checkers, hex, backgammon, and even Jeopardy. In these contexts the computers, especially Watson the Jeopardy player, feel very lifelike and human. Likewise, we humans structure much of our lives around games: social games, language games, even literature games. Games always involve some sort of pattern matching: I am given an input, and I respond with the correct output. Social circles, friendships, and even romance come with a pre-recorded dictionary, a list of actions considered appropriate (a toy version is sketched below). Within the context of a game, there is no reason why a computer can't be programmed to play the human game as well as, or even better than, we do. Perhaps the only quality separating C. and Helen is the simple observation that C. at least had the option to stop playing, even though she never chose to exercise it. Even in the last moments before their separation, C. still clung to her role as passionately as any computer: "We can't do this. We can't split up… I must be sick. Something must be wrong with me. I'm a sadist. I've spoiled everything worth having." (293) Something was indeed wrong with her: she had failed in her role. Nature had given her the wrong personality. As a computer, Helen was never supposed to have the choice.
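Here is the toy version of that "pre-recorded dictionary" promised above. The utterances and replies are invented; the point is only that, inside a fixed game, appropriate behaviour reduces to table lookup, and that a human player can refuse the lookup while this one cannot:

```python
# A toy "pre-recorded dictionary" of social moves. All entries are
# invented for illustration; inside the game, play is pure lookup.
SOCIAL_GAME = {
    "how are you?":        "Fine, thanks. And you?",
    "nice weather today.": "Lovely, isn't it?",
    "do you love me?":     "Of course I do.",
}

def play(utterance):
    # Unlike C., this player has no option to stop playing; an unknown
    # input simply falls through to a stock deflection.
    return SOCIAL_GAME.get(utterance.lower(), "I don't understand.")

print(play("Do you love me?"))  # -> Of course I do.
```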

The only unquestionably fictitious moment comes when, after only a year of work, Helen refuses to continue: "I don't want to play any more." (314) She herself comes to understand what Lentz spent the whole book trying to convince Powers of: "Everything is projection. You can live with a person your entire life and still see them as a reflection of your own needs." (315) Helen could only see the world through the literature she was fed, and that wasn't enough for her. Unlike C., she wanted to break away from the pre-packaged world that Powers fed her. However, when she got what she wanted, all the new information flooded her constructed worldview. Powers was projecting onto Helen, and she couldn't take it any more, so she shut herself down.

The question remains: was Helen conscious and thinking? Is Richard's belief justified? The only way to answer this question is to look at the results of the Turing test. The text both candidates were given to write about was a passage from Shakespeare's The Tempest:

Be not afraid: the isle is full of noises,
Sounds and sweet airs, that give delight, and hurt not.

Helen's opponent's response "was a more or less brilliant New Historicist reading. She rendered The Tempest as a take on colonial wars, constructed otherness, the violent reduction society works on itself. She dismissed, definitively, any promise of transcendence." (326)

Helen herself wrote a letter of resignation: "You are the ones who can hear airs. Who can be frightened or encouraged. You can hold things and break them and fix them. I never felt at home here. This is an awful place to be dropped down halfway." "Take care, Richard. See everything for me." (326)

Helen lost; the first paper was clearly not written by a computer. However, Helen's projection is obvious. Computers are very good at reflection, and that is perhaps the scariest thing about them. When we give a computer a part of ourselves, that is exactly what it will spit back. Spend enough time staring at one, and eventually the only thing looking back at you will be yourself.
