Dateline: An unnamed Iraqi village. Locale: Hospital reception room.
Décor: Tattered, rundown, rudimentary. The captain’s mission: To obtain information
about the local medical facilities.
On the computer screen in front of me, an animated Army captain is attempting
to speak with an Iraqi hospital receptionist. This is a fictional scenario in
a state-of-the-art military training game. On the other side of the virtual room,
the receptionist listens politely as the captain explains that he has come with
supplies and he would like to speak to the hospital director. The receptionist
seems to hesitate, but then responds that he will be happy to assist.

As a piece of animated action,
the scene above is not likely to quicken
the pulse of any gamer, but for the U.S. military it offers a glimpse into the
future. The characters here are not mouthing a pre-assigned script; they are literally
making decisions about what to do and say. Far more than mere cartoons, these
virtual people have each been endowed with a virtual mind complete with its own
internal “desires” and “goals.” Technically known as “agents,” they are driven
by a revolutionary software system known as PsychSim that enables programmers
to simulate the cognitive faculties of human minds. Dr. Stacy Marsella, a leading
agent researcher and one of the primary architects of PsychSim, declares that
agents actually “think for themselves.” Indeed, the ultimate goal of agent
research is to create autonomous self-determining minds capable of a full spectrum
of human behavior.
A small, dark-haired man with a doctorate in artificial intelligence, Marsella is a project leader at USC’s Information Sciences Institute in Marina del Rey, one of the world’s top centers for agent research. Sitting in his office overlooking the marina, Marsella effervesces with visionary zeal about the potential for what he calls “virtual humans” and dreams openly about agents that can interact with real humans as cognitive and emotional equals. Outside in the marina, actual yachts bob on actual waves beneath a slate-gray sky; inside, I begin to feel that I am being pulled into the Matrix. With PsychSim, Marsella explains, programmers can create agents that “can reason about themselves and about other agents and make decisions about how to respond based on what they believe the other agents are doing.” Marsella sees a future in which we will increasingly interact with these ersatz people – at first they will just do mundane tasks such as answering phones to take orders for pizza, or responding to simple queries, but eventually they will be capable of vastly complex interactions.

Last year, Marsella and his colleague Dr. David Pynadath developed an agent-based game in which parents of childhood cancer patients engage in virtual counseling sessions with a virtual therapist. The game, called Carmen’s Bright Ideas, was pilot-tested at six hospitals nationwide, including Children’s Hospital of Los Angeles. Another group Marsella is working with at the ISI’s sister organization, the Institute for Creative Technologies, is beginning to work on agent-driven games that could help people suffering from phobias and Post-Traumatic Stress Disorder. Marsella and his colleagues believe they will someday be able to simulate not only a vast range of social interactions but a panoply of personality types and psychologies. In effect, they are attempting to create virtual cyborgs — beings without bodies yet endowed with minds, thoughts and even feelings of their own.
Talking heads: In this PsychSim program, “agents” engage in a virtual conversation.

But what does it mean to talk about a virtual mind? What, indeed,
is a mind of any variety?
On his computer Marsella brings up a graphic labeled “Theories of Mind.” It’s
absurdly simple in execution, childish almost, yet it characterizes a profound
philosophical argument about what it means to be a thinking being. The focus of
the graphic is a purple smiley face with a large thought bubble emanating out
of its head. As Marsella explains, the thought bubble renders the agent’s mind.
Within this bubble are several other smiley faces, each of which has its own thought
bubble coming out of its head, each in turn containing other smiley-face
agents. The point here, Marsella says, is that “each agent has encoded within
it a model both of itself and of the other agents within the system.” It has,
as it were, a mental image of each member of its community; it “knows” that it
exists and it “knows” that its colleagues exist.
Crucially, its mental model includes a conception of what it thinks the other agents are thinking. Thus, says Marsella, an agent can make judgments about what it believes any other one will do: “Fred has a model of Alice. So Fred can reason about Alice and how Alice thinks about him. Therefore, if he does some action, what will she think and how will she react. Based on what he thinks she will do, he can make a decision about what he will do. These guys aren’t just looking up tables to see what to do next, they are doing little simulations in their heads.”

While Marsella has been explaining his philosophy of the virtual mind, his PsychSim co-creator has been sitting quietly across the table, looking on in bemused silence. Pynadath, who speaks with the measured tones of a classical scientist, is an expert on multi-agent interactions. He too was trained in the field of artificial intelligence, but he comes from the more technical end of that spectrum. After all this talk about simulated psyches, Pynadath seems to feel a need to inject a bit of “hard” science into the discussion. “From an AI point of view,” he notes, “we are using software techniques developed for non-human situations, for example with the robot rovers on Mars.” The difference is that whereas most simulation software so far has been used for modeling physical interactions, PsychSim models social interactions. As Pynadath notes, simulation software has become very good at modeling the interactions between molecules of gas and grains of sand, but modeling relations between human beings requires an understanding of social and psychological dynamics.

Until very recently, artificial-intelligence researchers believed that modeling the mind was simply a matter of simulating rational cognition, an activity seen to be epitomized by strategy games such as chess and Go. But over the past decade, computer scientists have come to understand that a virtual mind needs a virtual psychology. To “think” requires not just an ability to carry through a chain of logical inferences; it also requires a mental environment, or psychic context, in which such reasoning can be given meaning.

Having heard the theory of virtual minds, I was eager to see one in action.
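What might Marsella’s “little simulations in their heads” look like in practice? Here is a minimal sketch in Python, purely for illustration: it assumes nothing about PsychSim’s actual architecture or API, and the class names, the Fred-and-Alice details, the believed reactions and the payoff numbers are all invented for this example. The point it tries to capture is the one Marsella describes: an agent carries an internal model of another agent, predicts how that other agent would react to each action it might take, and only then decides what to do.

```python
# A purely illustrative sketch, not PsychSim's actual code or data structures.
# Each agent carries internal models of the other agents ("bubbles within
# bubbles") and chooses an action by simulating how the other would react.

class AgentModel:
    """One agent's internal picture of another agent."""
    def __init__(self, name, believed_reactions):
        self.name = name
        # What this agent believes the other will do in response to each action.
        self.believed_reactions = believed_reactions

    def predict_reaction(self, action):
        return self.believed_reactions.get(action, "ignore")


class Agent:
    def __init__(self, name, payoffs, models):
        self.name = name
        self.payoffs = payoffs  # how much the agent values each (action, reaction) outcome
        self.models = models    # this agent's models of the other agents

    def choose_action(self, candidate_actions, other_name):
        """Run a 'little simulation in the head': for each candidate action,
        predict the other agent's reaction and keep the best-scoring action."""
        other = self.models[other_name]
        def expected_value(action):
            reaction = other.predict_reaction(action)
            return self.payoffs.get((action, reaction), 0.0)
        return max(candidate_actions, key=expected_value)


# Fred has a model of Alice, so he can reason about how she will react before he acts.
alice_in_freds_head = AgentModel(
    "Alice",
    believed_reactions={"greet politely": "greet back", "make a demand": "refuse"},
)
fred = Agent(
    "Fred",
    payoffs={("greet politely", "greet back"): 1.0, ("make a demand", "refuse"): -1.0},
    models={"Alice": alice_in_freds_head},
)

print(fred.choose_action(["greet politely", "make a demand"], "Alice"))  # -> greet politely
```

Fred is not looking up an answer in a table; he weighs each possible action by the reaction he predicts from his internal model of Alice and picks the one he expects to score best.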
Marsella handed me over to Mei Si, a 28-year-old Chinese graduate student in USC’s
Department of Computer Science. “We try to recruit students who have training
in psychology as well as in computing,” Marsella told me as we headed to her office.
Si already has a master’s degree in psych.
In a windowless room she shares with another graduate student from China — ocean
views are clearly reserved for the upper tiers of the hierarchy — Si turned on
her computer and scrolled through what seemed like an endless piece of software.
She wanted to show me the chunk of code defining the mind of the aforementioned
hospital receptionist. It is an extremely minimal agent, she told me; unlike
some of its confreres, it has no desires and only two specified “goals.”
“The most important goal that agents have is to behave like a human,” Si remarked as she pointed out where various mental parameters are defined in the arcane text. For the hospital receptionist, whose only function is to impart basic information, the primary goal is to act according to social norms; the second goal is to be liked. Si showed me where in the relevant lines of code she had defined variables labeled “norm” and “being likeable.” Each variable is assigned a value between 0 and 1; the higher the value, the greater the agent’s urgency to satisfy that goal. Aside from being liked and acting normal, other goals that agents might have are to be polite, to maintain safety within the game’s setting, or simply to give an appropriate response in the course of a conversation.

Si’s research focuses on implementing the dynamics of agents’ conversations. While humans take it for granted, maintaining a coherent flow of dialogue is a major challenge for artificial-intelligence researchers; to date, computer-generated conversation has been notoriously nonlinear. Si explained the complexities involved in even the most banal exchange: An agent needs to “know,” for example, “‘If you ask me a question, should I respond, and what kind of response should I give?’ They need to know what sort of responses are appropriate, when to say ‘thank you,’ or if that’s not appropriate, and perhaps instead I should just look surprised.” In Si’s work the first goal is merely to keep the conversation going.

Agents can talk amongst themselves as part of a pre-defined scenario within a game environment, but by far the most significant conversations they will have are with the human players. From the agents’ point of view, Pynadath told me, “a human is regarded as just another agent,” albeit one about whom the agent probably has a rather elaborate mental picture. In the smiley-face model above, the human would be represented by a much longer list of attributes than the other virtual agents.

I wondered what agents think about us. How does a virtual mind view one that is
instantiated in flesh and blood? “From a structural point of view,” Pynadath said,
“the human is no different than any other agent.” But since we are the ones to
whom these virtual minds must primarily respond, our behavior is critical to their
behavior. Given this, Pynadath noted that an agent may well have a “different
attitude” to humans than to its virtual colleagues. In particular, “it is likely
to think that the human is less predictable.”
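Si’s “norm” and “being likeable” variables suggest a simple way to picture how such weighted goals might steer behavior. The sketch below, in Python, is purely illustrative: the goal names and the 0-to-1 urgency weights follow her description, but the candidate replies and all the numbers are invented, and the code does not reflect how PsychSim actually works.

```python
# A purely illustrative sketch; the goal names and 0-to-1 weights follow Si's
# description, but the replies and numbers are invented, and this is not
# PsychSim's actual code.

# The receptionist's two goals, each weighted from 0 to 1: the higher the
# weight, the more urgently the agent tries to satisfy that goal.
receptionist_goals = {
    "norm": 0.9,            # act according to social norms
    "being_likeable": 0.6,  # be liked by the other party
}

# For each candidate reply, a rough guess (0 to 1) at how well it satisfies
# each goal in the current conversational context.
candidate_replies = {
    "offer to fetch the hospital director": {"norm": 1.0, "being_likeable": 0.8},
    "ask the visitor to wait":              {"norm": 0.7, "being_likeable": 0.5},
    "ignore the question":                  {"norm": 0.0, "being_likeable": 0.1},
}

def choose_reply(goals, candidates):
    """Pick the reply whose goal satisfaction, weighted by each goal's urgency, is highest."""
    def utility(satisfaction):
        return sum(weight * satisfaction.get(goal, 0.0) for goal, weight in goals.items())
    return max(candidates, key=lambda reply: utility(candidates[reply]))

print(choose_reply(receptionist_goals, candidate_replies))
# -> "offer to fetch the hospital director"
```

With social norms weighted most urgently, the receptionist offers to fetch the director rather than ignoring the visitor; change the weights and the same machinery yields a different temperament.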
At present, most of the ISI’s agent research is being funded by the military.
The Iraqi hospital scene is one small part of an elaborate learning game produced
for the Army that teaches soldiers headed for the Middle East to speak “strategically
useful” Arabic. One of the military’s other goals is to use agent software to
simulate large-scale command structures. In theory, says Marsella, these simulations
can be as large as you like. Indeed, an agent may stand in for an individual person,
but it may also represent a group. Marsella and Pynadath have been developing
agent software that enables them to simulate the social dynamics of entire cities.
And the armed forces are not the only ones invested in the potential of large-scale
social modeling — it’s also of interest to those charged with maintaining homeland
security. Marsella and Pynadath are currently working with other USC researchers
to develop a project that would model the dynamics of a large international port
such as Los Angeles’s, a labyrinthine system involving many different agencies.
With nearly 500 ships entering the L.A. port every month, it is not possible to search every vessel, let alone every container. Decisions must be made about which ones warrant closer scrutiny, and the various agencies are not particularly good at sharing information or coordinating their efforts. “One of the goals of the project,” says Marsella, “is to help them to understand how cooperation and information-sharing can be enhanced, so that they can make better decisions about which ships and which containers to inspect.” The project is still in its infancy, and it may be years before there are any concrete suggestions about how to improve port efficiency. But in the long run, if Marsella and Pynadath can get their model to accurately emulate the real situation, such simulations may save time, money and even lives.

As I listened to this talk about simulating the vast morass of the L.A. port, with its daily traffic of ships and cargo from all over the world, its cast of sea captains and smugglers and customs officials, I found myself looking out to the boats beyond the ISI window. Maybe, I thought, Matrix-like, all this is merely a simulation in some giant piece of software installed on some alien computer in a faraway galaxy, and we humans are just virtual agents reflecting within our virtual selves software models of one another. In one form or another, the idea of the simulacrum has haunted and enchanted Western culture since at least the time of Plato’s cave. Are we finally at a point where we might realize this surreal fantasy by creating a true virtual reality complete with sentient minds?

As I considered this science-fiction scenario, I wondered out loud if Marsella and Pynadath ever feel as if they are playing gods. I didn’t really expect an answer; I had meant the question rhetorically. Yet without missing a beat, both men answered in the affirmative. “Yes,” they said almost in unison. Then, in a tone at once excited and wistful, Marsella added, “It’s a rather eerie feeling.”
