What Oppenheimer Tells Us about AI


It was Stephen Fry who said: ‘A true thing, poorly expressed, is a lie.’ With Oppenheimer, we are presented with a true story masterfully told through the lens of Christopher Nolan. The story of J. Robert Oppenheimer, the American theoretical physicist and director of the Manhattan Project’s Los Alamos Laboratory during World War II, is a masterclass in storytelling and filmmaking.

However, the reason I am writing about Oppenheimer today goes beyond the film’s excellent form; after all, “art is not supposed to look nice, it is supposed to make you feel something.” Beyond the cinematic brilliance, the movie raises profound questions about ethics and the human tendency to suspend moral judgment in the face of great benefit or great fear. As a member of the broader AI community myself, I could not help but draw parallels between the rationalizations made by Robert Oppenheimer and those prevalent in the AI industry today.

Oppenheimer wrestled with a clear moral dilemma throughout his life, both before and after creating the first nuclear weapon. Yet the leading theme of the movie is that he was lured by a great good (ending WWII) into committing a great evil (creating nuclear weapons). This leads me to wonder whether we are making the same mistake with AI today, as countries and companies vie for dominance in this global race.

More specifically, the rationalization underlying the race to weaponize the atom, the mindset of “if we don’t do it, someone else will,” is all too familiar in our current discussions of AI. The ethical problem with this mindset is rooted in competitive pressure and the fear of falling behind in the global AI race. That pressure can lead to the unregulated, unchecked proliferation of AI technology, with potentially harmful unintended outcomes such as biased decision-making, job losses, and AI-driven surveillance.

The most responsible members of the AI community agree that frameworks and regulations should be put in place to ensure the ethical use of AI, yet most of these frameworks are either not established or simply not enforced in practice. This makes me wonder whether we are being led into a naive optimism, believing we can make AI ethical before irreparable damage is done, much like the naive optimism with which Oppenheimer believed nuclear weapons would end all wars. We are already witnessing some of AI’s negative consequences, such as the perpetuation of social inequalities and job displacement, yet regulations and frameworks still lag behind.

The race to build ever larger and more powerful AI models without weighing their ethical implications makes this movie strikingly relevant. Yet the most profound idea I take from Oppenheimer is that conducting science without fully understanding its impact on the world is immoral, even when the short-term gains seem substantial. This idea underscores the necessity of incorporating the humanities into great engineering endeavors. Historical developments like the atomic bomb can serve as cautionary reminders for the responsible development and deployment of AI technologies.

To end on a more hopeful note (which is unusual for me): if we ever make ethical progress in AI, it will be at the hands of engineers trained in philosophy, ethics, linguistics, art, and the humanities in general, alongside their training in the natural sciences.