Illustration by Dana Collins
HOW HUMAN ARE YOU ON A GIVEN DAY? LAST weekend at the annual Loebner Prize competition, Paul Pareti was rated the most human among a field of 10 candidates. Pareti soundly beat his fellow human, Robby Garner, who came in a dismal eighth. Competing against these two "meat machines" were eight of the world's elite chatterbots, computer programs specially designed to simulate conversation. Here's the opening volley of a typical conversation with a previous Loebner-winning bot:
PROGRAM: Hello. My name is Jim. Who are you?
JUDGE: Hello, Jim. I am Paul, a Ph.D. student, how about you?
PROGRAM: So, love Mexican food but you like Indian better. Tell me more. Would you like to have the five-minute argument, or were you thinking of going for the whole half-hour?
JUDGE: I would like to be your friend. Could you please tell me more about you?
PROGRAM: Well, I'm not allowed to argue unless you pay.
JUDGE: No worries. I will pay for that. Please go ahead.
PROGRAM: Yes it was stop driving. Old barbecue restaurant and a barbecue restaurant in a metal star building. Are you married?
If dialogue like this calls to mind the automatic writing of the surrealists, suggesting an unmediated streaming of silicon consciousness, the effect is unintentional. Indeed, it is deeply distressing to the programmers responsible. Chatterbot Jim scored a paltry 11 percent on what is called the Turing percentage scale; until this year no program had ever been rated more than 15 percent human. "It's pretty appalling we can't do better than that," Loebner organizer Neil Bishop told me a few days before the event. "It's a psychological barrier the industry really needs to crack." Hopefully this will be the year.
Inspired by the ideas of legendary computer pioneer Alan Turing, the Loebner competition is one of the most anarchic and freewheeling exercises in cognitive-science research today. Along with those of professional computer scientists and language theorists, entries also come in from high school kids, garage geeks and filmmakers. "Agent Ruby," one of this year's more shadowy entries, is the self-replicating star of the sci-fi indie flick Teknolust; Ruby has her own "edream" portal and is available for conversation at www.agentruby.com.
In a 1950 paper, Turing asked the question, "Can machines think?" He was not interested in an ontological resolution; as he saw it, simulation was the key issue. If a machine appears to think, Turing said, then we ought to accept that it does think. The test case he suggested was the ability to hold a conversation. At the Loebner competition, judges face a dozen computer terminals, on each of which they hold typed conversations. Some of the respondents are human, most are computer programs. Can the judges tell the difference? More critically, can any program fool the judges into believing that it is human?
SIMULATION, AS JEAN BAUDRILLARD SAID, IS "A question of substituting the signs of the real for the real itself." Throughout the past half-century the signs of the real have consistently eluded Turing's descendants, who have almost willfully underestimated the difficulty of simulating human skills. In 1966, artificial-intelligence guru Marvin Minsky famously assigned a student a little summer project, to simulate human vision. So confident was Minsky of the problem's tractability (any year-old baby could recognize the world around it -- how hard could that be?) he assigned it to a mere undergrad. With vision completed by the end of summer, Minsky figured, it would be onward and upward from there. Locomotion, speech, face recognition, language translation, manual dexterity, spatial orientation -- all would succumb to symbolic logic, their essence encapsulated in the binary genetics of machine code. In short, AI was supposedly just a few million mouse clicks away.
That turned out to be one of the more hubristic predictions in the history of science. Chastened by decades of outright failure, most practicing computer scientists today are leery of the term artificial intelligence. "This is not really AI, it's mimicking humans," Bishop corrects when I inadvertently drop the phrase. Indeed, the Loebner this year was hosted by the wonderfully named Institute of Mimetic Sciences, of which Bishop is a founder.
What puzzles me is not the ineptitude of the computer entries, but how any human conversing with a judge could fail to stand out in such a field. As the surrealists appreciated, dialogue as nonlinear as Jim's above is difficult to keep up. The conscious mind seems almost programmed for some kind of narrative linkage, and generally requires drugs or psychosis to throw it off the railway tracks of grammatical structure. Yet Bishop assures me that the Loebner's human contestants are not always among the judges' top picks for human status. Every year, some of them score well behind the computer entries; this year, Robby Garner was "beaten" by six of the eight bots. Are the human contestants being deliberately perverse? Are they on acid? Bishop says the instructions are simple: Just be yourself, but talk as you would in a chat room.
Then again, talking in a chat room is not the same as "normal interpersonal interaction," admits Bishop. Nonetheless, he muses philosophically that Internet chat now constitutes a "genuine human behavior." And so the Baudrillardian prophecy has been realized: the simulacrum has become the real. The humans simulate virtual versions of themselves, and thus become the very reality that the virtual "persons" strive to simulate.