On July 16, 1945, the world mutated when the first atomic bomb was detonated. Physicist J. Robert Oppenheimer wrote: “We knew the world would not be the same. A few people laughed, a few people cried. Most people were silent. I remembered the line from the Hindu scripture, the Bhagavad-Gita; Vishnu is trying to persuade the Prince that he should do his duty, and to impress him, takes on his multi-armed form and says, ‘Now I am become Death, the destroyer of worlds.’ I suppose we all thought that, one way or another.”
Oppenheimer and his team of scientists spent four years on the Manhattan Project, the codename for the development of the super weapon that would help end the Second World War. Just weeks later, atomic bombs were dropped on Hiroshima and Nagasaki. Both cities were destroyed and hundreds of thousands were killed or maimed. On August 17, Oppenheimer hand-delivered a letter to the U.S. Secretary of War expressing his revulsion and his wish to see nuclear weapons banned. In September, Japan formally surrendered.
During the war, Oppenheimer was recruited to become a technocrat, wedged between science and the military, and pulled off the impossible. But his remorse at creating a bomb of such stupefying destructiveness led him to spearhead lobbying efforts to bring about international control of nuclear power. He convinced world leaders and scientists alike that security was only possible through the newly formed United Nations. In 1957, the International Atomic Energy Agency (IAEA) was created, and in 1970 the Nuclear Non-Proliferation Treaty (NPT) entered into force; 189 states have since joined it, and it has served as a cornerstone of global nuclear controls ever since.
Oppenheimer foresaw a nuclear holocaust and enlisted some of the world’s most famous minds, such as Albert Einstein and Bertrand Russell, in his crusade. His jeremiad landed him cover stories in major magazines around the world, but his past dalliances with communist front organizations in academia cost him his security clearance. Still, the controversy he stirred up, and his dedication, made him the face of a movement that eventually pulled the world back from the brink.
Today, the world needs another Oppenheimer, because transformative technologies are being developed at such a pace that they could soon represent a graver threat than the atomic bomb. The main concern is artificial intelligence (AI). According to some projections, AI-driven machines will be smarter than humans, and capable of designing machines of their own, within a couple of years. This specter, sometimes called artificial general intelligence or the “intelligence explosion,” represents a bigger threat to humanity than nuclear winter, a pandemic, or climate change.
Unfortunately, there is no single crusader devoted to sounding alarm bells. But the good news is that many people—including famous scientists and technologists—are voicing concerns at conferences, through petitions, and in academic papers while scary research and risky projects continue apace.
Still, it took a mass murder for Oppenheimer to realize his science could destroy the world. If history repeats itself, it will take a major mishap before the world’s leaders focus on the very real possibility of oblivion—and some fear that mishap will not be reversible.
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” writes philosopher Nick Bostrom in his 2014 bestseller Superintelligence: Paths, Dangers, Strategies. As director of the Future of Humanity Institute at Oxford University, he is an acknowledged expert on dystopian scenarios. “We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”
Elon Musk, too, has been vocal about the risks of AI. He has estimated that diabolical general intelligence is just a few years away and that efforts must be stepped up to preemptively remove the possibility of a catastrophe. But such spokesmen are juggling many other entrepreneurial ventures and don’t have time to save the world. No tech tycoon of any stature is single-mindedly pursuing remedies.
It took 25 years to iron out a global non-proliferation treaty. But today’s challenge is more difficult. The development of nuclear weaponry relied on finite, traceable resources and demanded investment from large governments. No one could develop a nuclear weapon in their garage. By contrast, tomorrow’s cataclysm may just be one mad computer scientist away. The Internet has dispersed knowledge across the globe, allowing anyone to access bomb-making recipes, dangerous code, or diabolical networks.
Calamity is likely because research into and development of the world’s transformative technologies have not been ring-fenced with moral, ethical, or security frameworks. This heightens the probability that disaster will occur, whether by deliberate malfeasance or by accident. Enforceable global standards exist for everything from engineering and accounting to medicine and nuclear power. But none exist for research, software development, synthetic biology, genetic engineering, artificial intelligence, or robotics, even though individual countries have begun regulating these technologies independently.
The late, legendary physicist Stephen Hawking pulled no punches when outlining the hazards of artificial intelligence. “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So, we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,” he said in 2017. “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”
To date, the great and good have contented themselves with petitions and open letters signed by important scientists. But their recommendations have not captured global attention, nor have they provoked a political movement.
Still, there have been some successes. In 1975, the Asilomar Conference on Recombinant DNA produced biosafety guidelines that allowed gene-splicing research to proceed under strict containment while deferring the riskiest experiments. In 2015, an open letter on autonomous weapons was signed by more than 1,000 luminaries, including Apple co-founder Steve Wozniak, Stephen Hawking, and Elon Musk. It called for a ban on AI warfare and autonomous weapons, and eventually led to a United Nations initiative. But in March, the United Nations Secretary General was still urging all member nations to agree to the ban. Only 125 have signed thus far.
And in 2017, a conference of AI researchers and thinkers, Bostrom among them, endorsed the Asilomar AI Principles, which outlined values, research guidelines, ethics, safety, and risk-mitigation strategies for AI. The Future of Life Institute in Boston, which convened that gathering, also proffered recommendations: “With more powerful technologies such as nuclear weapons, synthetic biology and future strong artificial intelligence, planning ahead is a better strategy than learning from mistakes.”
But without robust ethical and legal frameworks, there have already been lapses. In November 2018, for instance, a rogue geneticist, He Jiankui, broke longstanding biotech guidelines and announced that he had used the gene-editing technology known as CRISPR-Cas9 to alter the embryonic genes of twin girls in an attempt to protect them from HIV. He was fired from his research post in Shenzhen after an investigation showed that he had intentionally dodged oversight committees and used potentially unsafe techniques. Since then he has disappeared from public view and is possibly under house arrest or in hiding. Clearly, genetics labs, like those developing AI, must be monitored and policed. At present, they are not.
Whether the world is five, ten, or twenty years away from the “intelligence explosion” is irrelevant. There is no question it is coming. Hopefully we’ll wake up before it’s too late.