You have to say one thing for Gary North: He has the courage of his convictions.
North is the Christian fundamentalist whose millenarian Web site (www.garynorth.com) consistently deployed the most apocalyptic warnings of what was in store for the world when the year 2000 rolled around and all the computers failed. You would have thought that by midmorning on January 1, North would have been zapping Web pages right and left, but no. His dire -- and completely erroneous -- prognostications are still there today. Some files with post-anticlimax dates have even been added. Maybe North’s own calendar failed to roll over properly, and he thinks it’s 1900.
As everybody now knows, Y2K was a dud. What a letdown! Admit it, you were hoping for at least one little cataclysm somewhere -- not where it would affect your own friends or investments, of course -- just to show that there really had been something to worry about. Couldn’t just one satellite have plunged into the Indian Ocean? Couldn’t you have gotten just one bank statement saying that your accumulated Cash Maximizer interest was now $20 million? Just a little storm-of-the-century thrill, without actually having your own house slide down the hill?
The last word on Y2K belonged to the scoffers. The whole thing had been a vast hoax, they said, a revenge-of-the-nerds conspiracy by corporate computer departments trying to pump up their budgets. Hundreds of billions of dollars were spent to prevent problems that weren’t going to happen anyway.
Were they right? Most of the discredited prophets, who had seen themselves as Paul Revere and now look more like Chicken Little, still say no: There was a real problem, and it was morally necessary to prepare for worst-case scenarios, however unlikely they might be, just as Boeing designs airplanes for wind conditions that occur once in 100 years.
And we prepared. Millions of computers and other chip-dependent equipment were reprogrammed or replaced, billions of lines of code were sifted for date-related functions, and countless errors were corrected. Practically all the usual work of corporate “information technology” departments was set aside for a year in favor of Y2K remediation. Estimates of the cost of the whole operation run as high as $200 billion. And the process was accompanied by a steady tattoo of alarmist warnings that whatever was done, it would be too little, too late.
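What those audits were hunting for can be shown in a few lines. The sketch below is illustrative, not code from any real system: it mimics the common pre-2000 shortcut of storing only the last two digits of a year and assuming the century is always 19xx, which is exactly the kind of date-related function the remediation teams had to find and fix.

```python
# A minimal sketch of the classic two-digit-year bug behind Y2K.
# Function names are hypothetical, for illustration only.

def legacy_year(yy: int) -> int:
    """Interpret a stored two-digit year the way much pre-2000 code did:
    assume the century is always 19xx."""
    return 1900 + yy  # 99 -> 1999, but 00 -> 1900

def years_since(start_yy: int, now_yy: int) -> int:
    """Compute an interval -- say, a customer's account age --
    from two stored two-digit years."""
    return legacy_year(now_yy) - legacy_year(start_yy)

# Through 1999 the shortcut works fine:
print(years_since(95, 99))   # 4

# But when the stored year rolls from '99' to '00',
# the interval goes negative:
print(years_since(95, 0))    # -95
```

A bank computing interest or a utility computing billing periods with arithmetic like this would suddenly see negative ages and ninety-five-year overdue notices on January 1, 2000, which is why the fix required sifting code line by line rather than applying any single patch.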
Peter de Jager, a Canadian computer consultant whose voice was one of the first to be heard crying in the wilderness about Y2K, now concedes that the early warnings were too strident. But he insists that they had to be. When he started trying to get business managers to pay attention to the Y2K problem in 1996, nobody wanted to listen. The only way to get them to take it seriously was to paint it in the direst possible terms. By mid-1999, when it was apparent that much of what was wrong, at least in the United States, would be fixed, de Jager and many others had tempered their warnings. But they then found it hard to get press coverage for calmer forecasts: “I’m sorry to hear that you’re more optimistic,” one reporter told de Jager.
The vocabulary of Y2K predictions -- “connectivity,” “data corruption,” “cascading failure,” “supply-chain breakdown” -- was seductive. It conjured up unprecedented events whose likelihood no one could assess with certainty, but which sounded logical, possible and even inevitable. It was hard to guess how effective improvised remedies would be if computers did break down. A general pessimism pervaded much of the computer community, perhaps because programmers, accustomed to being held to very high standards of accuracy in their work, did not believe that the ordinary people who used their software on factory floors and loading docks and in offices could keep things moving if their computers went down. In many cases they were right; businesses like banks, whose databases contained hundreds of millions of records, could hardly fall back on pencil and paper if their screens went dark. And if switching equipment controlling electrical-power grids or refineries or nuclear-power plants really did hiccup, who knew what would happen?
One of the most ominous and least analyzable worries had to do with so-called embedded systems -- special-purpose chips that monitor and regulate the operation of all kinds of machinery, starting with household appliances and going right up to the much-publicized reactors and airliners. The chips exist in immense numbers. Many have internal timing functions, and some of these could be date-dependent. Analysts worked themselves -- and others -- into near panic over these mysterious, unseen chips. No one knew how many were at risk of failing, or the consequences if they should fail, but given their sheer numbers, it seemed -- remember Murphy’s Law? -- that something bad was sure to happen.
In fact, nothing did -- at least nothing really bad. Not a single newsworthy disaster in the entire world, no loss of life or destruction of property, has been tied to Y2K.
The truth is that there have, in fact, been many, many Y2K problems throughout the world before and after January 1, probably millions of them. (A very partial country-by-country list is at www.iy2kcc.org/Glitches2000.htm.) But for various reasons -- principally that they have not been sufficiently spectacular -- most have not been publicly reported. The Y2K story is passé now; organizations have nothing to gain by admitting that after they spent a lot of money to eliminate problems, they still had them; and most of the interesting problems occurred outside the United States. Routine data-processing errors -- those experienced practically daily by banks, for instance -- are usually caught before they affect the public; and the tribulations of ferry or railway dispatchers in Mali or Kazakhstan did not interest us.