Chapter One

TODAY

Menlo Park, California

The world ended not with a whimper but a crash. Also a Jolt: Dan Tuck's fourth energy drink of the night, cracked open one-handed as his other digits danced across the keyboard of his laptop. Dan was on a deadline--was always on a deadline--and as the most senior software engineer on Campus, he took his work very seriously. Sure, this being almost three o'clock in the morning, he was also the only software engineer on Campus--but that was beside the point.

Across the sprawling headquarters of StoicAI, other workers on the night shift toiled away on abuse detection, server maintenance, customer service, and a thousand other tasks deemed important to the company's smooth operation. But none of that work meant a damn thing if Dan failed in his duty, which was to feed the LLIAM algorithm its nightly data update, without fail, at precisely three a.m.

And, sure, Dan wasn't responsible for actually gathering the roughly four hundred petabytes of data needed to fuel the world's most powerful artificial intelligence algorithm.
That job fell to the thousand or so pampered PhDs who labored three floors above him during daylight hours. Nor did he program any of the bug fixes or feature upgrades or upload them to the secure staging server. That responsibility had been claimed by StoicAI's chief technology officer, Sandeep Dunn. Don't get him wrong--those jobs were important, too, but they were daytime jobs, completed in Steelcase chairs parked behind huge glass desks. Breaks for sushi, whiteboard pranks. Optimal blood pressure. Dan's was a nighttime job: high pressure, high stakes, no time for creature comforts.

The clock flicked to 2:59 a.m. and Dan took a slow, deep breath to bring down his heart rate, just like snipers do. He'd been at StoicAI for three years, recruited as an intern right out of Stanford before rising to the heady ranks of senior data administrator. To an outsider, his job might appear dull--mechanical, even. On paper, all Dan had to do was wait until the clock on his laptop hit three a.m., tap the space bar, and then watch as a chunky progress bar crept across his screen toward: 100%. But the tapping of the space bar wasn't the point of Dan's job.
A robot could tap a space bar. A monkey could tap a space bar. The point of Dan's job was to have someone calm under pressure, boots in the trenches--in case something went wrong after the space bar was tapped. You've heard of a designated survivor? That pampered fucker had nothing on Dan Tuck.

More than a billion users across the Western world relied on LLIAM to make their most important life decisions. What to eat for dinner, where to vacation, who to marry, whether to switch off mom's life support machine. And if the rumors were true, soon even the US military would trust LLIAM to make its most mission-critical decisions: where to send its drones, how to steer its warships, who to arm, and who to nuke. Every one of those users expected LLIAM to be flawless--to make "The Right Call, Right Now™"--its decision-making powers to stay eleven steps ahead of the competition.
Without the nightly update--say, if the power failed before Dan could tap the space bar, or if an ethernet cable were to somehow wiggle loose without anyone noticing--LLIAM might easily slip behind Russia's ZAIai or Braingroh in India. Billions wiped from StoicAI's stock, the geopolitical landscape re-landscaped in an instant, all thanks to a single lost keystroke. Such were the margins of success and failure in the brave new world of AI decision-making. Such was the importance of Dan Tuck.

Dan took another gulp of Jolt NRG and fired off one last message to the members of his Seal Team Seven chat room. At 3:01 he'd be off duty and headed home to log in to ST7 (as they all called it) and launch a couple of lightning raids against players in Seoul or Riyadh or Mumbai. Dan's entire campaign would be planned to the last detail by LLIAM, which--so long as he only fought against players in countries with inferior AI platforms--meant Dan couldn't lose.
Eat it, Indonesians!

For now, though, his index finger hovered above the keyboard, poised and alert, with just the slightest hint of a tremor caused by adrenaline and caffeine. One day perhaps LLIAM would be smart enough to update itself--to decide when to push its own space bar--that was the joke everyone always made. But right now, the best any AI could do was pretend to think--to make blindingly fast decisions, based on logic and data, and deliver them in the appropriate tone: a sassy best friend, a steely-eyed military tactician. To the end user, the decisions provided by LLIAM, whether on a phone, watch, car dashboard, or cockpit display, might seem like intelligence--so much so that lovesick users of all genders frequently showed up at the Campus proclaiming offers of marriage. But for real brainpower--legit decision-making--you still needed humans like Dan.

The clock finally hit three a.m. and Dan jabbed his finger decisively downward, then clenched his fist in triumph as the progress bar began its nightly journey.
He wondered, as he always did, what tonight's update would bring; what improved accuracy and magical new functionality those billion or so users might soon be enjoying thanks to him. Then he closed his laptop, crushed his last Jolt can, grabbed his backpack from under his desk, and headed toward the door, the soft slapping of his Allbirds sneakers against the carpet the only sound audible in the hallway.

Barely half a minute later he was in the elevator, polished metal doors closing on yet another shift, another bullet dodged. He exhaled loudly and leaned against the elevator wall, zoning out, watching the floor numbers tick slowly downward.

And then the whole world went black.

Dan was falling.

Falling.

Falling.
THIRTY-TWO SECONDS EARLIER

Deep underground, in the heavily guarded server room of StoicAI, the staging unit that housed LLIAM's nightly update was woken by the distant tap of a junior engineer's space bar. The machine sprang instantly into action, just as it did at precisely three a.m. every morning. And, in the seconds that followed, a dazzling number of tiny miracles occurred.

First the huge data file uncompressed itself and its contents--a copy of every document, audio recording, photograph, and video generated by LLIAM users in the past twenty-four hours, along with billions more publicly accessible files--began to pass through a series of military-grade firewalls. Their destination: the Core Memory Array, a forest of server racks, each packed with hundreds of ultra-high-capacity solid-state drives. The drives that made up the CMA contained almost 250 zettabytes of data--two hundred and fifty billion terabytes, or, put in equally unfathomable terms, the sum total of all accessible information created by humanity and computers since the dawn of civilization.
This was the information LLIAM used to make its decisions, and it would take an average human being maybe six trillion years to read it all. And yet, in less time than it took an anxious hummingbird to blink its eye, the new data was ingested and compared with the old. Fresh facts replaced stale ones, novel theories and scientific breakthroughs corrected their outdated and discredited predecessors, and the names, locations, and DNA records of a half million freshly born babies were added to the tally of humankind. Babies who would never know the crippling anxiety of having to make their own decisions.

With the data merge complete, the final and most important stage began. In the center of the room, a titanium cabinet, not much larger than a chest freezer, sat bolted to the floor and connected to the server racks by a single thick braid of fiber-optic cable. This was the box that housed LLIAM's neural chip--its algorithmic brain--and the digital signal that now passed along the cable was the equivalent of a dinner gong. It was time for LLIAM to feast on the new data.
To grow, to evolve, to improve its accuracy with every byte. This process of ingestion and evolution had occurred every night since LLIAM first went online, almost eight years earlier. Ordinarily, the whole update happened so quickly, so seamlessly, that not a single user noticed a delay in LLIAM informing them who they should vote for or how much salt they should sprinkle on their fries. All they saw were fractionally better answers to the question: Hey, LLIAM, [what/how/where/when/why] should I.

But tonight wasn't ordinary. Tonight was the end of the world.

It had long been accepted in artificial intelligence circles that there would come a day when a computer would become truly intelligent. Sometimes called "the singularity," this moment would really be the first of many moments--a cascading series of improvements in which an artificial intelligence algorithm would be able to genuinely think for itself.
To become exponentially more intelligent without human intervention. To learn.

Such a moment, many of those same experts feared, would mark the beginning of the end for humankind. The point when we would flip instantaneously from technology's masters to its slaves--before eventually the intelligent robots, realizing they no longer had any use for our dangerous, irrational idiocy, would murder us and sweep away the bodies. The problem was that nobody knew when that moment would arrive. It would likely come as a complete surprise--artificial intelligence that had, hours earlier, seem.