“The physicists have known sin; and this is a knowledge which they cannot lose.”

A Generation of Oppenheimers

The 21st century will feature multiple decisions as impactful as the Manhattan Project, or more so. We're not ready.
by Henry Elkus

As J. Robert Oppenheimer sat back, absorbing the indescribable image of the first atomic bomb detonating in front of his eyes, he likened it to the Bhagavad Gita’s description of the divine: “If the radiance of a thousand suns were to burst forth at once in the sky.” Famously, coming to terms with the power of his creation, Oppenheimer remembered another line from the Gita, the god Vishnu’s proclamation: “Now I am become Death, the destroyer of worlds.”

This has become the most enduring image of the Manhattan Project, the American program that created the first nuclear weapon. And for good reason — it encapsulates the monumental scale of impact the moment represented.

Oppenheimer and his fellow physicists quite literally changed the world; only weeks later the United States dropped two nuclear weapons on Hiroshima and Nagasaki. An entirely new paradigm of geopolitics followed, reorienting humanity into the nuclear age of mutually assured destruction, where we remain today.

Two years later, though, Oppenheimer said something far more consequential.

Speaking to a packed MIT lecture hall in 1947, he told students: “In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge which they cannot lose.”

Oppenheimer knew that there was further significance beyond the terrible, staggering impact that the nuclear weapon had unleashed. The technology could not be taken back. The Manhattan Project had brought into existence something irreversible, something that could not be undone to prevent future use. There once existed an age in which society knew no nuclear weapons, and that age was past. This is a knowledge which they cannot lose.


Every generation habitually claims it is the most important generation yet. In today’s case, that might be true, or it might not. But to debate this is to miss a far more crucial point. Today’s generation is the first generation of Oppenheimers.

Beginning slowly, society has experienced an ever-steeper curve of change. And in recent human history, that curve has gone exponential.

As philosopher Nick Bostrom notes, “a few hundred thousand years ago, growth was so slow that it took on the order of one million years for human productive capacity to increase sufficiently to sustain an additional one million individuals living at subsistence level. By 5000 B.C., following the Agricultural Revolution, the rate of growth had increased to the point where the same amount of growth took just two centuries. Today, following the Industrial Revolution, the world economy grows on average by that amount every ninety minutes.”


The final segment of Bostrom’s example, where growth catches the wave of staggering exponentiality, describes the contemporary world we are only beginning to live in.

There are some domains of society that haven’t followed the trendline of exponential growth. But there are many crucial domains that have, and they have violently pushed us into a new and often unrecognizable age that is only just beginning.

My favorite way to understand this shift is best described by futurist Ray Kurzweil (I used it as the basis of this talk). Imagine a chessboard. On the first square of the chessboard, we place just one grain of rice. Every subsequent day, we move to the next square and place double the number of grains of the day before. By day 8, the end of the first row, we place only 128 grains of rice. The numbers quickly grow, however.


By the final square of the first half of the chessboard, we place over 2 billion grains of rice. Yet it’s the second half of the chessboard where the threshold of truly staggering exponential growth is crossed. The first square of the second half holds roughly 4 billion grains; the final square alone holds more than 9 quintillion, and the board as a whole more than 18 quintillion. It’s hard to truly grasp the scale of 18 quintillion, but as a reference, that is roughly the weight of Manhattan, in rice.
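The chessboard’s doubling arithmetic is easy to check with a short script (the function names here are illustrative, not from any source):

```python
def grains_on_square(n: int) -> int:
    """Grains placed on square n (1-indexed) when each square doubles the last."""
    return 2 ** (n - 1)

def grains_total(n: int) -> int:
    """Total grains on squares 1..n: a geometric series summing to 2**n - 1."""
    return 2 ** n - 1

print(grains_on_square(8))   # end of the first row: 128
print(grains_on_square(32))  # last square of the first half: 2,147,483,648 (~2 billion)
print(grains_on_square(33))  # first square of the second half: 4,294,967,296 (~4 billion)
print(grains_on_square(64))  # final square alone: 9,223,372,036,854,775,808
print(grains_total(64))      # whole board: 18,446,744,073,709,551,615 (~18 quintillion)
```

The takeaway is not the exact totals but the shape of the curve: each square dwarfs the combined sum of every square before it.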

We exist at the beginning of the second half of the chessboard. Only now each square represents a span of time, and the doubling grains of rice represent society’s rate of change.

While an action made early on the chessboard (the spread of a disease or an insidious idea) could create serious damage, its proliferation was limited by a lesser “push.” That same action made today, or on squares farther into the second half of the board, is often carried by an exponential current at a speed never before seen, and often not even understood. There are of course notable exceptions to this rule, but the rule is far too prevalent to ignore.

From our generation’s vantage point at the proverbial beginning of the second half of the chessboard, we face an eventful next square. While the 17th, 18th, 19th, and 20th centuries produced a host of epochally consequential decisions, only a small subset were irreversible. There were only a few Oppenheimers. That is about to change.

Think of the next 50 years as the next square of our chessboard. This crucial period will see many more decisions, whether their outcomes are felt immediately or not, that will ride exponentiality and prove final. And so a far larger population of individuals who will make those decisions is alive today than ever before. Collectively they will make moral judgments, build technologies, and execute business and political determinations, all at a speed society has little to no experience with.

To add more complexity, these decisions will often be made by a different group of people than that of decades and centuries past. Social power and access to knowledge have historically existed in centralized nodes.

A small cadre of public officials, dynastic families, state- and church-backed philosophers, scientists, and artists has historically taken advantage of this fact to consolidate the ability to make impactful decisions on a regional or global scale.

Partially as a result of exponential technological development, many modes of power and knowledge have since decentralized. This positive development has facilitated increased social mobility, near-ubiquitous access to encyclopedic information, and the ability to transmit an idea, movement, or even a technological product across the planet instantaneously. At one point, polymath and academic Thomas Young (1773–1829) was known as “the last man to know everything.” Today billions of people have free access to knowledge orders of magnitude greater than Young could have dreamed of.

With decentralization of power comes greater decentralization of powerful actions and decision-making. This is both good and bad. A makeshift hospital in an active warzone can 3D print life-saving devices designed from across the world and delivered over the internet. Yet increasingly, those seeking to inflict harm can use the same technology to print weapons. From the groups of programmers creating the 3D printing software, to the internet platforms hosting it, to the end product, none of these processes are centralized or controlled by any one government, company, family, or institution. It will often be these decentralized networks, with nodes composed not only of heads of state but of hackers, activists, and entrepreneurs, that make the critical decisions of the 21st century.


So what are those decisions? It’s not possible to know more than a small fraction of them; predicting the critical domains of the future is an unfruitful path of study, though one many nevertheless pursue. But there are three examples one could strongly argue we face now and in the near future: artificial general intelligence, genetic engineering, and climate.

Stephen Hawking told a notorious Frankenstein-like story about the moment an artificial intelligence was created that exceeded the brainpower of the smartest human alive. Curious, the human walks up to the machine, secure in the knowledge that he can simply unplug it in an emergency, and asks: “Is there a God?” The machine removes its own plug and replies, “There is now.”


The story reminds me of the 11th-century philosopher St. Anselm, who in his ontological argument describes God simply as “that than which nothing greater can be conceived.” Without venturing further into judgments of religion, it is nonetheless likely that artificial general intelligence will evolve into a domain that exists outside of what the human mind can conceive.

But it will be the human mind that initially programs it. Whether in a basement in Moscow or a corner office at Google, there are Oppenheimers alive today (and soon to be born) who will make some of the moral judgments that comprise the codebase of artificial general intelligence. As Bostrom and other thinkers write, there is a window of time in which those decisions can be made before they become irreversible. Today those decisions are meaningful: should the autonomous vehicle place a higher value on the lives of five humans in its direct path, or swerve to hit the one adjacent human instead? But tomorrow and into the future, these judgments will scale exponentially, entering the realm of self-replication, where they cannot be taken back.

There is, of course, vigorous disagreement about the timetable of the evolution from present-day narrow artificial intelligence to future general AI. But there is less disagreement that it will be achieved within the next few generations. In the context of history, that is barely a blink; remember that many credit the birth of dedicated academic study of AI to Alan Turing’s 1950 paper “Computing Machinery and Intelligence” and the legendary 1956 Dartmouth Summer Research Project. These are timescales measured in years, not millennia. But it will be more than millennia that are irrevocably affected.


Overlapping technologically with artificial intelligence will be the domain of genetic engineering, which offers yet another example of the uncertain potential of exponentiality. Bostrom, Yuval Harari, and others postulate a future in which the 200,000 years of human evolution (and the far longer span of evolution generally) could be recapitulated algorithmically, computing technology permitting. It didn’t take long after the recent development of CRISPR technology for a Chinese scientist to secretly (and illegally) engineer the first genetically modified babies, not only attempting to make them immune to HIV but possibly, inadvertently, enhancing their cognitive capacity. Whether it’s via the decisions of a nation-state, a non-state actor, or just a lone scientist, society is heading toward, or is already in, its biological Oppenheimer moment.


And then there is climate. I remember sitting in the audience in 2017 as Swedish scientist Johan Rockström meticulously laid out his case that emissions decisions made (or not made) by major countries and businesses during the next 50 years will determine the conditions of human life for at least the next 10,000 years. If Rockström is to be believed, it has been the overwhelmingly human-made decisions of the last 50 years that have finally taken Earth’s climate out of the Holocene, a 12,000-year period of relative stability, and into the aptly named Anthropocene.

The Intergovernmental Panel on Climate Change, in its Fifth Assessment Report, concluded that we are not nearly on pace to reduce emissions over this next 50-year period, and could reach a catastrophic 2°C temperature increase within even the next 30 years. Counteracting this will thus require urgent investment in the development of technologies that actively remove carbon already in the atmosphere, an effort Helena has focused on. Hopefully there are more than a few Oppenheimers who will prevail in this arena.

It’s clear that the conditions of the 21st century, these next few squares of the chessboard, contain a level of complexity and speed society has never experienced before. We can feel it every day. So what is to be done?

Society has relied on “problem-solving” institutions for thousands of years, and they worked reasonably well for the first half of the chessboard. But in many cases, they aren’t working now. Our legacy systems of decision-making were not built for today’s conditions, and so we need new, supplemental institutions.

Volumes could be (and have been) written about how these new institutions must be designed and operated in order to thrive in the 21st century. But if there is only one word that should define them, it is “proactive.”

Future decisions, especially those of such moral scale, should not be made in a rush. We will never fully achieve this ideal; the fog of war, inaccurate forecasting, and sheer randomness and complexity will probably force the majority of our “Oppenheimer” decisions to be made in haste. But that doesn’t mean we should perpetuate institutional designs that make reactivity, rather than proactivity, even more likely.

In the 6th century BC, Chinese strategist Sun Tzu wrote about “deep knowledge” — the ability “to be aware of disturbance before disturbance, danger before danger, destruction before destruction, calamity before calamity.”

Our existing systems for solving big global problems, though, have little incentive to employ deep knowledge. While term limits in democratic government safeguard against tyrannical leadership and other threats, they also create a bias toward short-term fixes that only temporarily satisfy a constituency. The quarterly reporting system of public companies satisfies the short-term economic interests of shareholders but disincentivizes the most powerful businesses in the world from enacting the long-term change that could address global dilemmas. And while the philanthropic sector has often catalyzed large-scale change to address societal problems, it has also perpetuated some of the flawed systems that created them in the first place.


Our current, limited deep knowledge is screaming warning signs at us. We know that humans will write the moral code for multiple uses of artificial intelligence that could fundamentally change the lives of present and future generations. Yet human society has a notoriously bad track record, over thousands of years, of agreeing philosophically on a shared moral code. We know that genetic engineering, if implemented incorrectly, could exacerbate inequality to unconscionable levels and potentially incubate eugenic societies. We know that rising sea levels directly correlated with anthropogenic climate change will sink small island nations in the present, and larger landmasses in the future. Which institutions are fully focused on solving these problems over the long term, without being distracted or incentivized to satisfy immediate, and often conflicting, concerns?

There surely aren’t enough. Legacy institutions, governments, think tanks, NGOs, and corporations, even at their best and most moral, are overburdened and often insufficiently designed for the 21st century.

It is essential to supplement these systems with separate, empowered institutions that can focus exclusively on the set of problems requiring proactive, longer-term solutions.