Alexander C. Karp is co-founder and CEO of Palantir Technologies. Nicholas W. Zamiska is the company's head of corporate affairs and general counsel in the CEO's office. Their book, “The Technological Republic: Hard Power, Soft Belief, and the Future of the West,” will be published in February.
Shortly after dawn on July 16, 1945, a group of scientists and government officials gathered on the barren sands of the New Mexico desert to witness the first test of an atomic weapon. One eyewitness described the explosion as “bright purple.” The thunderous boom of the blast ricocheted off the desert and seemed to linger for a long while.
J. Robert Oppenheimer, who led the project that culminated in the experiment, had been thinking that morning about the possibility that this destructive force might somehow contribute to lasting peace: He recalled that the Swedish industrialist and philanthropist Alfred Nobel had hoped that his invention, dynamite, would end war.
Seeing that dynamite was being used to make weapons of war, Nobel confided to a friend that ever more powerful weapons might be the best guarantor of peace: “The only thing that will ever prevent nations from beginning war is terror,” he wrote.
We might be tempted to retreat from such grim calculations and take refuge in the hope that if only those who bear weapons would lay them down, humanity's peaceful instincts would prevail. But nearly 80 years have passed since that first atomic test in New Mexico, and nuclear weapons have been used in war only twice, at Hiroshima and Nagasaki. For many, the power and horror of the bomb now seem distant, dim, almost abstract.
Humanity's record of managing nuclear weapons is imperfect, marked by dozens of near-catastrophic failures, but it is remarkable nonetheless. For nearly eight decades, the world has been free of direct military conflict between great powers. At least three generations, billions of people and their children and grandchildren, have never experienced a world war. John Lewis Gaddis, professor of military and naval history at Yale University, described this absence of major conflict in the postwar era as “the long peace.”
The nuclear age and the Cold War essentially froze in place, for decades, a calculation among the great powers that made true escalation, as opposed to regional border skirmishes and tests of strength, extremely unattractive and potentially ruinous. Steven Pinker has argued that this broader “decline of violence” may be the most important and yet most underappreciated development in human history.
It is impossible to attribute all, or even most, of the credit to a single weapon: many other developments are part of the story, including the spread of democratic forms of government across the globe since the end of World War II and previously unthinkable levels of interconnected economic activity.
The great-power calculus that has helped prevent a new world war could also shift rapidly. U.S. military superiority, however fragile, has undoubtedly helped preserve the peace. But the determination to maintain such superiority is increasingly out of fashion in the West, and the doctrine of deterrence is in danger of losing its moral appeal.
The atomic age may soon come to an end. This is the century of software, and future wars will be driven by artificial intelligence, which is developing far faster than conventional weapons. The F-35 fighter jet was conceived in the mid-1990s, and the plane, the main strike aircraft of the U.S. military and allied forces, is expected to remain in operation for the next 64 years. The U.S. government expects to spend more than $2 trillion on the program. Yet retired General Mark A. Milley, former chairman of the Joint Chiefs of Staff, recently asked, “Do we really think that manned aircraft will rule the skies in 2088?”
In the 20th century, software was built to serve the needs of hardware, from flight controls to missile avionics. But the rise of artificial intelligence, including the use of large language models to recommend targets on the battlefield, is changing that relationship. Now software takes the lead, and hardware, like the drones deployed in Ukraine and elsewhere, increasingly serves as the vehicle for carrying out its recommendations.
And for a nation that holds itself to a higher moral standard in its use of force, technological parity with its adversaries is not enough: Weapons systems in the hands of an ethical society, one that is legitimately wary of using them, can function as an effective deterrent only if they far exceed an adversary's capacity to kill the innocent.
The problem is that the young Americans most capable of building AI systems are often the most reluctant to work for the military, with Silicon Valley engineers turning away from the turmoil of geopolitics and its moral complexities. Support for defense work is slowly emerging, but the bulk of the money and talent continues to flow toward consumer products.
Our engineering elite rushes to raise money for the video-sharing apps, social media platforms, advertising algorithms and shopping sites that infiltrate our lives, tracking and monetizing our every move online without hesitation. But when it comes to working with the military, many hesitate. The rush is simply to build; too few ask what to build, and why.
In 2018, about 4,000 Google employees signed a letter to CEO Sundar Pichai demanding that the company halt Project Maven, a program to develop software for U.S. Special Forces to use in surveillance and operational planning in Afghanistan and elsewhere. Helping soldiers plan targeted operations with potentially “lethal outcomes” was “unacceptable,” the employees wrote, demanding that Google pledge never to “build warfare technology.”
Google tried to defend its involvement in Project Maven by claiming that its work was intended only for “non-offensive” purposes, a distinction that may have seemed overly legalistic from the perspective of the frontline soldiers and intelligence analysts who needed better software systems to survive. Diane Greene, then head of Google Cloud, held a meeting with employees to announce the company's decision to end its work on the project. An article in Jacobin called the episode a “stunning victory over American militarism,” noting that Google employees had stood up against what they believed was a misdirection of their talents.
But the peace enjoyed by Silicon Valley's opponents of working with the military is made possible by the credible threat of force from that same military. At Palantir, we build the software infrastructure for U.S. and allied defense and intelligence agencies that will enable the deployment of this century's AI weapons. As a society, we should be able to debate the merits of using military force overseas without shying away from providing those deployed into danger with the software they need to do their jobs.
Most worrying, a generation's disillusionment with and apathy toward our collective defense is driving a massive redirection of intellectual and financial resources toward serving the needs of consumer culture. The technology sector, under little pressure to produce products of lasting, collective value, is ceding ground to the whims of the market. “The Internet is a remarkable innovation, but all we are talking about is a super-fast and globally accessible combination of library, post office, and mail-order catalogue,” David Graeber, a former professor of anthropology at Yale and the London School of Economics, wrote in a 2012 essay for the Baffler.
The tech world's tilt toward consumer concerns has reinforced a certain escapism: a Silicon Valley instinct to ignore the significant problems we face as a society and focus on the trivial and ephemeral. From national defense and violent crime to education reform and medical research, the hardest problems are ones that many in the tech industry regard as too difficult, too messy, too politically charged, or simply not worth addressing.
A year after the Google revolt, an uprising by Microsoft employees threatened to halt work on a $480 million project to build an augmented-reality platform for U.S. Army soldiers. Employees wrote to CEO Satya Nadella and President Brad Smith, arguing that they “did not sign up to develop weapons” and demanding that the company cancel the contract.
When OpenAI released its chatbot ChatGPT to the public in November 2022, the company banned its use for “military and warfare” purposes. After OpenAI lifted that blanket ban this year, protesters gathered outside CEO Sam Altman's San Francisco office, demanding that the company “end its relationship with the Department of Defense and stop accepting military customers.”
The anger of these mobs has trained tech leaders and investors to avoid any hint of controversy or blame. But their silence comes at a high price. Many Silicon Valley investors, and their legions of extraordinarily talented engineers, simply brush the hard questions aside. A generation of promising founders professes a willingness to take risks, but when it comes to deeper engagement with society's problems, caution prevails. Why wade into geopolitics when you can build another app?
And so they built the apps. The social media empires that proliferated as a result have systematically monetized and channeled the human desire for status and recognition.
Meanwhile, the foreign policy establishment has repeatedly miscalculated in its dealings with China, Russia, and others, believing that economic integration alone would temper their leaders' ambitions at home and their interest in military escalation abroad. The failure of the Davos consensus was to abandon the stick in favor of the carrot. Chinese President Xi Jinping and other authoritarian leaders, meanwhile, have wielded power in ways that many Western political leaders have failed to comprehend.
Speaking to business and political leaders at the Seattle Chamber of Commerce during a visit to the United States in 2015, Xi fondly reminisced about reading “The Old Man and the Sea.” On a visit to Cuba, he had traveled to Cojimar, the town on the island's north coast that inspired Ernest Hemingway's story of a fisherman and an 18-foot marlin. Xi ordered a mojito, the writer's favorite drink, with mint leaves and ice, explaining that he “just wanted to feel for myself what Hemingway was thinking when he wrote this story.” It is important, he added, for the leaders of countries that account for almost a fifth of the world's population to “make an effort to deeply understand cultures and civilizations different from our own.” We would be wise to do the same.
Our widespread reluctance to develop effective autonomous weapons systems for military use may stem from a justified skepticism of power itself. Pacifism satisfies our instinctive empathy for the powerless. It also spares us from having to navigate the difficult trade-offs the world presents.
Chloé Morin, a French writer and former adviser to the prime minister, suggested in a recent interview that we should resist the easy urge to “divide the world into rulers and ruled, oppressors and oppressed.” To systematically equate powerlessness with virtue is a mistake, and a form of moral arrogance. The governed and those who govern are equally capable of grave sins.
We are not advocating a thin, shallow patriotism in lieu of thoughtful and sincere reflection on our country's strengths and weaknesses. The question we want the American technology industry to keep in mind is not whether a new generation of autonomous weapons incorporating AI will be built, but who will build them, and why.