Rather than OpenAI, let’s Open AI

by Ashutosh Jogalekar

In October last year, Charles Oppenheimer and I wrote a piece for Fast Company arguing that the only way to prevent an AI arms race is to open up the system. Drawing on a revolutionary early Cold War proposal for containing the spread of nuclear weapons, the Acheson-Lilienthal report, we argued that the foundational reason security cannot be obtained through secrecy is that science and technology hold no real “secrets” that cannot be discovered if smart scientists and technologists are given enough time to find them. That was certainly the case with the atomic bomb. Even as American politicians and generals boasted that the United States would maintain nuclear supremacy for decades, perhaps forever, the Soviet Union responded with its first nuclear weapon merely four years after the end of World War II. Other countries like the United Kingdom, China and France soon followed. The myth of secrecy was shattered.

As if on cue, two months after our article was written, a new large-language model (LLM) named DeepSeek v3 came out of China in December 2024. DeepSeek v3 is a completely homegrown model, built by a Chinese entrepreneur who was educated in China (that last point, while minor, is not unimportant: China’s best increasingly no longer need to leave their homeland to excel). The model turned heads immediately because it was competitive with GPT-4 from OpenAI, which many consider the state of the art among LLMs. In fact, DeepSeek v3 is far beyond competitive on the critical numbers: GPT-4 is estimated to use about 1 trillion parameters, DeepSeek v3 uses 671 billion; GPT-4 was reportedly trained on about 1 trillion tokens, DeepSeek v3 on almost 15 trillion. Most impressively, DeepSeek v3 cost only $5.58 million to train, while GPT-4 cost about $100 million. That’s a qualitatively significant difference: only the best-funded startups or large tech companies have $100 million to spend on training an AI model, but $5.58 million is well within the reach of many small startups.

Perhaps the biggest difference is that DeepSeek v3 is open-source while GPT-4 is not. The most prominent open-source model from the United States is Llama, developed by Meta. If this feature of DeepSeek v3 is not ringing massive alarm bells in the heads of American technologists and political leaders, it should be. It’s a reaffirmation of the central point that there are very few secrets in science and technology that cannot be discovered sooner or later by a technologically advanced country.

One might argue that DeepSeek v3 cost a fraction of what the best LLMs cost to train only because it stood on the shoulders of those giants, but that’s precisely the point: like other software, LLMs follow the standard rule of precipitously diminishing marginal cost. More importantly, the open-source, low-cost nature of DeepSeek v3 means that China now has the capability of capturing the world LLM market before the United States does, as millions of organizations and users make DeepSeek v3 the foundation on which to build their AI. Once again, the quest for security and technological primacy through secrecy will have proved ephemeral, just as it did for nuclear weapons.

What does the entry of DeepSeek v3 indicate in the grand scheme of things? It is important to dispel three myths and answer some key questions.

It’s not about openness per se; it’s about standards.

Openness is not just about being nice. Critics of open source often cite competitive disadvantage and security risks as the two biggest arguments against the practice. But in the case of AI, this is not only untrue but misses the real point. The country that publishes an open-source model sets the standards on which other countries build their own models and data frameworks. It owns the ecosystem. This leads to a competitive advantage, not a disadvantage. Consider how many countries adopted Linux as the preferred operating system on which they built tools and services. Similarly, open-source LLMs will set standards on which countries can then proliferate tools and services on their own terms. By keeping its models closed-source, the United States risks ceding the standards on which all future AI is built, forcing American entrepreneurs to create tools for other AI platforms instead of their own. Given the very rapid progress in this field, it won’t be long before those standards are firmly put in place by the countries that open-source their AI.

Regarding security risks, the concern is usually that malicious actors – both state and non-state – can use the very openness of the system to exploit the technology for pushing misinformation or embedding malware. In the worst case, they can exploit these systems to launch cyberattacks on a country’s infrastructure. But in fact, the danger that a country will be unable to mount an effective counterattack against an intelligent LLM-based attack is greater when the software is proprietary.

Consider two hypothetical scenarios where China launches cyberattacks against the United States. In one scenario, proprietary, closed-source LLMs are used; in the other, open-source LLMs form the basis of the attack. With proprietary LLMs, the United States would struggle to analyze and understand the nature of the attack due to the lack of shared foundations, leaving it at a disadvantage and potentially allowing significant damage before a response is possible. Conversely, if the attack leverages open-source LLMs—systems the United States also has access to—it becomes much easier to decode, counter, and potentially repurpose the tools for a defensive or counteroffensive strategy.

Open-source and national security are completely compatible.

An obvious corollary of the point above is that open-sourcing, with everyone able to see the same code and infrastructure, is better for national security: not only is an attack easier to thwart, but an attacker is less likely to strike in the first place, knowing that an effective counterattack exists and that their efforts might therefore be futile. The same principle of deterrence that exists with nuclear weapons exists here.

Another common concern about open-sourcing is that it might involve disclosing critical secrets. Wouldn’t revealing the models you’re developing risk exposing sensitive financial or national security information? The answer is no. Sharing models does not mean sharing the underlying data used to train them or the specific applications for which they are utilized. Consider an example from molecular simulation: even if you share all the code for running a simulation—such as one used in drug development—you are not disclosing the data that informed the simulation, nor are you revealing the details of the drug developed using that simulation. Intellectual property can be created and protected while the tools used to create that property are available to all.
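To make the distinction concrete, here is a minimal Python sketch (the class and file names are hypothetical, chosen only for illustration): the architecture of a toy model can be published in its entirety while the learned weights, and the private data that produced them, never leave the organization.

```python
# Illustrative only: the *structure* of the model below could be released as
# open source, while the weights (distilled from proprietary data) stay private.
import numpy as np

class TinyMLP:
    """A toy two-layer network whose architecture is fully public."""
    def __init__(self, w1, b1, w2, b2):
        self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2

    def forward(self, x):
        hidden = np.maximum(0, x @ self.w1 + self.b1)  # ReLU hidden layer
        return hidden @ self.w2 + self.b2              # linear output layer

# The weights are a separate, private artifact; sharing the code above reveals
# nothing about the training data or the application the model serves.
# weights = np.load("private_weights.npz")  # hypothetical file, never published
# model = TinyMLP(weights["w1"], weights["b1"], weights["w2"], weights["b2"])
```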

There are methods that allow not only the sharing of foundational model architectures but also the ability to compare models without revealing sensitive information. One such method is the use of zero-knowledge proofs, which enable researchers to demonstrate the validity of a claim without disclosing the specifics of the claim itself. Originally developed for cryptography, these techniques have been creatively adapted for applications like verifying nuclear warheads. It is entirely plausible that similar approaches could be extended to AI models. The idea of sharing enough data to establish a common foundation while safeguarding critical details is not only feasible but already practiced. For example, as noted earlier, information such as the number of tokens, parameters, and training costs for many models has been made publicly available. This disclosure provides valuable insights into the models’ foundations without revealing anything about their use in specific applications or the details of the data used to train them.
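To give a flavor of how such a proof works, here is a minimal sketch of a Schnorr-style zero-knowledge proof in Python. This is the classic textbook construction rather than anything specific to AI or warhead verification; the tiny parameters and the hash-based challenge are simplifying assumptions, not a deployable protocol. The prover convinces a verifier that it knows a secret behind a public value without ever revealing the secret itself.

```python
# Toy Schnorr-style zero-knowledge proof, with a Fiat-Shamir hash standing in
# for the verifier's random challenge. Parameters are deliberately tiny and
# insecure; this illustrates the idea, not a production verification scheme.
import hashlib
import secrets

P = 2**61 - 1   # a Mersenne prime; far too small for real-world security
G = 3           # generator used for the public values

def _challenge(public_y, commitment):
    """Derive the challenge by hashing the public values (Fiat-Shamir heuristic)."""
    digest = hashlib.sha256(f"{public_y}:{commitment}".encode()).digest()
    return int.from_bytes(digest, "big") % (P - 1)

def prove(secret_x):
    """Prover: publish y, a commitment and a response; never publish secret_x."""
    public_y = pow(G, secret_x, P)     # the public claim: "I know x with y = g^x"
    nonce = secrets.randbelow(P - 1)   # fresh randomness hides the secret
    commitment = pow(G, nonce, P)
    response = (nonce + _challenge(public_y, commitment) * secret_x) % (P - 1)
    return public_y, commitment, response

def verify(public_y, commitment, response):
    """Verifier: accept iff g^response == commitment * y^challenge (mod p)."""
    c = _challenge(public_y, commitment)
    return pow(G, response, P) == (commitment * pow(public_y, c, P)) % P

if __name__ == "__main__":
    secret = secrets.randbelow(P - 1)  # stays with the prover
    print(verify(*prove(secret)))      # True: claim verified, secret undisclosed
```

Real verification schemes are far more elaborate, but the principle carries over: a claim can be checked without exposing what lies behind it.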

Open-source and technological dominance are not conflicting goals.

The most effective way for the United States to maintain its position as the global leader in technology is by ensuring that its technology is widely adopted and used by organizations, nations, and individuals around the world. In the case of AI, that goal will be accomplished by setting American standards that form the foundation of global technology development. Standards set by open-source models will proliferate while those set by closed-source models will wither because of financial and technical barriers to implementation.

Technology development is not a zero-sum game. China does not have to lose for the United States to win, and vice versa. There is a precise mix of competition and cooperation that can make both thrive. After World War II, American car manufacturers and industrial experts played a significant role in sharing technology, management practices, and manufacturing techniques with Japan. This led not just to Japan’s dominance in car manufacturing but to cheaper, more reliable cars for everyone. And while it did put pressure on the American automobile industry to innovate, that pressure resulted in the emergence of American electric vehicle companies that are now setting global standards. Toyotas and Teslas benefit the whole world. Competition and cooperation both lead to global innovation.

The biggest goal for China, the United States and the rest of the world is to prevent a war in which AI wreaks chaos. This is not just a moral imperative but a practical and economic one. Throughout the Cold War, presidents and premiers recognized this dual imperative, which compelled them to seek arms treaties and reductions that made the world safer and their citizens more prosperous. While treaties for AI must be seriously considered, they would be complicated, hard to enforce, slow to implement and at risk of stifling innovation. A second, equally attractive option is for companies at the frontier of AI research to unilaterally make their software open-source. An unequal, closed AI environment in which nation-states spend most of their time one-upping each other in an escalating technological arms race is both a fruitless effort and one that makes the world a more dangerous place. The alternative is a world where models and code are shared as freely as possible within well-defined constraints, sparking innovation that benefits all and inaugurates a golden age of AI against a backdrop of peace rather than war. It should be clear which world we would like to inhabit.