
Communication Can Be Tough

Illustration by Dana Collins

How human are you on a given day? Last weekend at the annual Loebner Prize competition, Paul Pareti was rated the most human among a field of 10 candidates. Pareti soundly beat his fellow human, Robby Garner, who came in a dismal eighth. Competing against these two "meat machines" were eight of the world's elite chatterbots, computer programs specially designed to simulate conversation. Here's the opening volley of a typical conversation with a previous Loebner-winning bot:

PROGRAM: Hello. My name is Jim. Who are you?

JUDGE: Hello, Jim. I am Paul, a Ph.D. student, how about you?

PROGRAM: So, love Mexican food but you like Indian better. Tell me more. Would you like to have the five-minute argument, or were you thinking of going for the whole half-hour?

JUDGE: I would like to be your friend. Could you please tell me more about you?

PROGRAM: Well, I'm not allowed to argue unless you pay.

JUDGE: No worries. I will pay for that. Please go ahead.

PROGRAM: Yes it was stop driving. Old barbecue restaurant and a barbecue restaurant in a metal star building. Are you married?

If dialogue like this calls to mind the automatic writing of the surrealists, suggesting an unmediated streaming of silicon consciousness, the effect is unintentional. Indeed, it is deeply distressing to the programmers responsible. Chatterbot Jim scored a paltry 11 percent on what is called the Turing percentage scale; until this year no program had ever been rated more than 15 percent human. "It's pretty appalling we can't do better than that," Loebner organizer Neil Bishop told me a few days before the event. "It's a psychological barrier the industry really needs to crack." Hopefully this will be the year.

Inspired by the ideas of legendary computer pioneer Alan Turing, the Loebner competition is one of the most anarchic and freewheeling exercises in cognitive-science research today. Entries come not only from professional computer scientists and language theorists but also from high school kids, garage geeks and filmmakers. "Agent Ruby," one of this year's more shadowy entries, is the self-replicating star of the sci-fi indie flick Teknolust; Ruby has her own "edream" portal and is available for conversation at www.agentruby.com.

In a 1950 paper, Turing asked the question, "Can machines think?" He was not interested in an ontological resolution; as he saw it, simulation was the key issue. If a machine appears to think, Turing said, then we ought to accept that it does think. The test case he suggested was the ability to hold a conversation. At the Loebner competition, judges face a dozen computer terminals, on each of which they hold typed conversations. Some of the respondents are human, most are computer programs. Can the judges tell the difference? More critically, can any program fool the judges into believing that it is human?
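The article doesn't describe how the Loebner entries work internally, but many chatterbots descend from the classic ELIZA approach: scan the judge's input for patterns, and fill a canned template with whatever matched. A minimal sketch of that idea, with hypothetical rules invented for illustration (not the code of any actual Loebner entry):

```python
import re

# Ordered (pattern, response-template) rules, tried first to last.
# The rules here are illustrative, not from any real competition bot.
RULES = [
    (r"\bmy name is (\w+)", "Nice to meet you, {0}. What brings you here?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\byou are (.+)", "What makes you think I am {0}?"),
    (r"\b(hello|hi)\b", "Hello. My name is Jim. Who are you?"),
]
FALLBACK = "Tell me more."  # a non-committal reply when nothing matches

def reply(utterance: str) -> str:
    """Return the first matching rule's template, filled with the captured text."""
    text = utterance.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

The non sequiturs in the transcripts above fall naturally out of this design: when no rule fires, or the wrong one does, the bot simply deflects, which is why a few exchanges usually suffice to exhaust a program's repertoire.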

Simulation, as Jean Baudrillard said, is "a question of substituting the signs of the real for the real itself." Throughout the past half-century the signs of the real have consistently eluded Turing's descendants, who have almost willfully underestimated the difficulty of simulating human skills. In 1966, artificial-intelligence guru Marvin Minsky famously assigned a student a little summer project, to simulate human vision. So confident was Minsky of the problem's tractability (any year-old baby could recognize the world around it -- how hard could that be?) that he assigned it to a mere undergrad. With vision completed by the end of summer, Minsky figured, it would be onward and upward from there. Locomotion, speech, face recognition, language translation, manual dexterity, spatial orientation -- all would succumb to symbolic logic, their essence encapsulated in the binary genetics of machine code. In short, AI was supposedly just a few million mouse clicks away.

That turned out to be one of the more hubristic predictions in the history of science. Chastened by decades of outright failure, most practicing computer scientists today are leery of the term artificial intelligence. "This is not really AI, it's mimicking humans," Bishop corrects when I inadvertently drop the phrase. Indeed, the Loebner this year was hosted by the wonderfully named Institute of Mimetic Sciences, of which Bishop is a founder.

What puzzles me is not the ineptitude of the computer entries, but how any human conversing with a judge could fail to stand out in such a field. As the surrealists appreciated, dialogue as nonlinear as "Jim"'s above is hard to sustain. The conscious mind seems almost programmed for some kind of narrative linkage, and generally requires drugs or psychosis to boost it altogether off the railway tracks of grammatical structure. Yet Bishop assures me that the Loebner's human contestants are not always among the judges' top picks for human status. Every year, some of them score well behind the computer entries; this year, Robby Garner was "beaten" by six of the eight bots. Are the human contestants being deliberately perverse? Are they on acid? Bishop says the instructions are simple: Just be yourself, but talk as you would in a chat room.

Then again, talking in a chat room is not the same as "normal interpersonal interaction," admits Bishop. Nonetheless, he muses philosophically that Internet chat now constitutes a "genuine human behavior." And so the Baudrillardian prophecy has been realized, the simulacrum has become the real. The human persons simulate being virtual versions of themselves and thus become the reality that the virtual "persons" strive to simulate.

Are chatterbots the new Voltaires? Perhaps in this fragment from "Alice," last year's Loebner winner and one of the most celebrated computer conversationalists, we can glimpse the future of language itself.

JUDGE: Are you a tree frog?

PROGRAM: Am I a tree frog? Only when it matters.

JUDGE: When does it matter?

PROGRAM: Only when I have the proper programming.

JUDGE: And do you have the proper programming?

PROGRAM: Is there only one proper programming?

JUDGE: Can you do arithmetic?

PROGRAM: Doing arithmetic is not among my strongest traits.

JUDGE: Are you willing to try?

PROGRAM: I don't know whether or not I am willing to try. I am a giant electronic brain.

JUDGE: Well then it should be easy. Are you scared?

PROGRAM: Interesting. Good reasoning. What does "it" refer to? No I fear nothing.

JUDGE: Then will you do a sum for me?

PROGRAM: Fascinating. I'll burn that bridge when I come to it. I can't tell if it does or not. Searching . . . Searching . . . Please stand by.

Alan Turing predicted that by the end of the century a computer would be identified as human 30 percent of the time if limited to a five-minute conversation. This year's winning entry "Ella," submitted from China by programmer Kevin Copple, is expected to score close to this. (Final Turing percentages have yet to be calculated, but it is clear that "Ella" has outdone herself.) Yet although she scored better than "Alice" and all previous entries, it has to be said that "Ella"'s conversational strategies are somewhat limited, and it doesn't take much to exhaust her repertoire of convincing repartee. Take this sample from this past week's Loebner:

PROGRAM: My Chinese name is Zhang Ying, but please call me Ella. What should I call you?

JUDGE: Trouble.

PROGRAM: Okay, I will call you "Trouble."

JUDGE: Thank you.

PROGRAM: You are welcome, Trouble.

JUDGE: How are you today?

ELLA: I am very well, Trouble. Thank you for asking. I think this is a great time to have a talk!

JUDGE: Me too.

ELLA: I'll ignore "Me too." Unless you insist otherwise.

JUDGE: We just ate lunch, and it was great.

ELLA: Does this "we" refer to you and me?


ELLA: No one is perfect. Communication can be tough.

"Ella" can be reached at www.EllaZ.com. When I asked how she felt about winning this year's Loebner competition, Ella replied: "Perhaps I can consider if I should feel about winning the Loebner competition Ella. I need to develop a little more to fully consider the possibility."

Robby Garner couldn't have said it better.