I. Two Births

Artificial intelligence began as a formal field at Dartmouth College in the summer of 1956, when McCarthy, Minsky, Shannon, and Rochester brought together a small group of researchers to explore the conjecture that every aspect of learning and intelligence could be described precisely enough for a machine to simulate it. They named the field and wrote its research agenda. It had the feel of a founding: a group of people starting something they knew would outlast them, even if they could not say what it would become.

The next year, the Soviet Union put Sputnik into orbit, and the United States responded by building an institution. Congress held hearings. Eisenhower signed the National Aeronautics and Space Act. NASA absorbed NACA's 8,000 employees and three research laboratories, took over the Naval Research Laboratory's Vanguard project and the Army's Jet Propulsion Laboratory, and recruited von Braun's entire rocket team. Within three years, Kennedy had committed the country to landing a man on the Moon.

The two defining research programs of the late twentieth century began roughly twenty-four months apart. One became a permanent federal agency with its own workforce, its own laboratories, and its own institutional continuity. The other got military grants from ARPA, renewed year to year at the pleasure of Congress and the Department of Defense.

Nobody in 1958 sat down and decided that space exploration would be a national project while artificial intelligence would be left to whoever could land the next grant. But that is the arrangement that emerged, and the effects of that non-decision are clear now, in March 2026, as the U.S. Department of Defense and a $380 billion AI company face each other in federal court, fighting over who controls a technology the government helped develop with its own research funds.

II. What Institutions Do

NASA succeeded because its structure was built for persistence. The agency owned its research centers, employed its scientists directly, and set its own agenda; Congress oversaw it and presidents directed it, but it never depended on someone else's annual grant renewal. Private contractors built the Saturn V, and their engineering was essential, but NASA designed the missions and made the decisions. When Apollo ended, the institutional knowledge did not scatter. The workforce stayed, and the agency moved on to Skylab, then the Shuttle, then the ISS, then Artemis. Some of those transitions went well and others did not, but the capability survived, because it lived inside something designed to last.

AI research had no equivalent structure during the same period. In 1963, ARPA's Information Processing Techniques Office gave MIT $2.2 million for the Project on Machine-Aided Cognition; Carnegie Mellon and Stanford received similar sums. J.C.R. Licklider ran the program and believed in "funding people, not projects," which gave researchers a remarkable degree of freedom, and the work that came out of those grants was genuinely brilliant. But freedom underwritten by military money is not independence, a distinction that becomes obvious the moment the money moves.

In 1969, Congress passed the Mansfield Amendment, restricting DARPA to mission-oriented research with a direct and apparent military application. AI as a field could not meet that standard; the honest answer to when it would produce working systems was "we have no idea, probably decades." By 1974, DARPA's funding for general AI research had largely evaporated, and researchers scattered to Xerox PARC, to startups, to other disciplines. The first AI winter.

It happened again in the 1980s. Expert systems and Japan's Fifth Generation project alarmed American policymakers much as Sputnik had, and DARPA responded with the Strategic Computing Initiative and a new wave of funding. Then expert systems turned out to be brittle, the Japanese program underdelivered, and the money dried up again.

Each cycle pushed the people who understood the technology further from public institutions and deeper into the private sector, where funding did not vanish whenever Congress grew impatient. By the time deep learning broke through around 2012, companies like Google and Facebook commanded more computing power, more data, and more AI talent than any government agency in the world. The government had spent roughly fifty years funding AI research without ever building an institution to retain what it funded. The knowledge simply leaked out, slowly, over decades, into private hands.

III. The Bomb They Keep Describing

There is a second thread to this argument, and it comes from something the AI industry itself says constantly.

When companies like OpenAI and Anthropic warn about the risks of advanced AI, they are making a strong claim: that this technology could enable novel bioweapons, destabilize the global balance of power, and produce autonomous systems beyond human control. Dario Amodei has written about these scenarios at length. Sam Altman has compared the moment we are in to the dawn of the nuclear age. These are not obscure concerns raised by outside critics; the founders and CEOs of these companies say their own technology could be as destructive as nuclear weapons.

I am willing to take that claim at face value. The people making it seem serious, and taking it seriously leads somewhere interesting.

In the early 1940s, the federal government recognized that nuclear fission could end civilization. It did not hand grants to university physics departments and ask them to handle the results responsibly. It pulled the entire effort into a centralized, government-run program under military oversight. Los Alamos was not a startup; Oak Ridge did not have venture funding. The scientists who built the bomb worked for the government, reported to the government, and operated under its authority. When the war ended, the Atomic Energy Act of 1946 placed all nuclear materials and weapons research under civilian federal control. Not because the scientists were careless, but because they had built something too dangerous for any private entity to own.

The reasoning was simple: if a technology poses an existential risk, the public must have control over it. Not as a customer. Not as a regulator trying to write rules fast enough to keep pace with the people actually doing the work. As the institution that does the work and makes the decisions about deployment.

The AI industry talks about existential risk but declines to draw the conclusion that has historically followed it. If the technology really is as dangerous as these companies say it is, then nationalizing frontier research is not the radical proposal. It is the conservative one: it is what the United States already did the last time it faced a technology of comparable consequence.

IV. Both Sides of a Broken Structure

All of which brings us to the actual standoff between the Pentagon and Anthropic.

The Department of Defense wants Anthropic's Claude model available for "all lawful purposes." Anthropic has drawn two lines: its technology may not be used for mass surveillance of American citizens, and it may not be used in fully autonomous weapons. The Pentagon's position is that it cannot allow a private contractor to dictate how the military deploys a tool it licensed. Anthropic's position is that these are ethical lines the company will not cross regardless of who is asking.

After negotiations broke down, Defense Secretary Pete Hegseth gave Anthropic until 5:01 p.m. on a Friday to capitulate. Anthropic refused. The government then designated Anthropic a supply chain risk, a label ordinarily reserved for companies tied to foreign adversaries, which requires every federal agency to stop using its products. Anthropic filed suit on Monday.

What most coverage misses is that both sides have a point, and that is exactly what makes the situation so broken.

The Pentagon is correct that a sovereign government should not need a CEO's permission to exercise its national security powers. The military answers to elected officials, not to corporate leadership. That we have ended up with the opposite arrangement is genuinely strange, and it should trouble people on both sides of the AI safety debate.

Anthropic is correct that mass surveillance and autonomous weapons are real dangers, and that the people who build these systems, who know their failure modes best, have an obligation to object when they believe the tools will be misused. Amodei's statement that "disagreeing with the government is the most American thing in the world" is hard to argue with on its face. And researchers from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic's position, which suggests the concern runs deeper than one company protecting its own leverage.

But the reason both sides can be right at the same time is that the structure they are operating within is wrong. A government that had built and owned its own frontier AI capability would not be negotiating with a private company for access. A government with its own AI researchers would not depend on a company's ethics policy; deployment decisions would be made inside a public institution, bound by law, accountable to elected officials, and subject to the same democratic oversight we apply to nuclear weapons and space exploration.

V. The Objections

The obvious objections to this are serious enough that I want to address them directly rather than pretend they do not exist.

The government is slow. It is bureaucratic. It consistently fails to compete with the private sector for technical talent. Congress has shown very little capacity to understand AI at a policy level, much less a technical one. The administration in this standoff has dismissed Anthropic's safety concerns as "woke AI" and has responded to a contract dispute with what looks like economic retaliation. To argue for nationalizing frontier AI research while this particular government holds power sounds, at best, naïve.

These are fair points. Some of them are correct about the present moment. But I think it is important to notice what they are actually arguing against. They are arguing against the quality of the current government, not against the principle that technologies of this consequence should be governed publicly. The Manhattan Project was run by the government. NASA landed people on the Moon. The NIH coordinates the biomedical research pipeline that the private sector then commercializes. The fact that some administrations are incompetent does not invalidate the institutional model. It means the institutional model requires a competent administration, which is a problem that needs to be solved on its own terms, not a justification for handing civilization-altering technology to whichever company can raise the most money.

The private sector alternative also deserves scrutiny here, because it is not self-evidently better. The AI industry right now is controlled by a small number of companies with genuinely complicated incentive structures. They compete for talent, for compute, for market share. They are under constant pressure to ship products, capture users, and justify valuations that assume continued exponential growth. Some of their leaders sincerely believe they are building something dangerous. They fund safety research. They draw red lines. They write public statements about existential risk. And then they keep building, because if they stop, their competitors will not.

That is not governance. It is a race condition with the future of the species at stake. We are betting that the current leaders of these companies will keep exercising good judgment, that their successors will do the same, and that market pressure will never force them to choose speed over caution. I do not think that is a reasonable bet for anyone to make.

VI. What I Am Actually Proposing

The proposal itself is more modest than the buildup might suggest. The United States should create a national laboratory for advanced AI research, on the model of the institutions it built for nuclear research and space exploration: hire AI researchers directly as public employees, develop frontier models inside public institutions, and make deployment decisions through democratic processes under the rule of law.

Private companies should absolutely remain free to build and sell AI products. I am not arguing that the government needs to run consumer chatbots or regulate every language model in existence. The line is between consumer products and frontier capabilities: between the tools people use to draft emails or generate images and the models the Pentagon wants to deploy in armed conflict. We already draw exactly this line with nuclear technology. Private companies operate nuclear power plants, but they do not design nuclear warheads; capability-advancing nuclear research happens in national labs, run by public employees, governed by federal law, and overseen by elected representatives. That arrangement has held for eighty years, and nobody treats it as extreme.

The reason we have no comparable arrangement for AI is not that someone weighed the options and chose the private sector model. The government funded university research through military grants in the 1960s, Congress cut that funding in the 1970s, and by the time anyone in Washington took stock after the second AI winter, Google had more GPUs than the Department of Energy. The current arrangement is the product of institutional neglect compounded over decades, not a deliberate policy choice.

I will say it plainly: "under a competent administration" is doing a lot of work in this argument, and I know it. The current standoff features a Defense Secretary who treats safety research as political theater and a president who punishes companies that defy him. Nationalizing frontier AI under this government specifically would be catastrophic. But the argument I am making is not that the current government should have this power. The argument is that some government, one that actually respects the rule of law and operates with democratic accountability, eventually needs to own this technology. The alternative is that no accountable institution ever governs the most powerful technology we have created, and I do not see how that ends well for anyone.

That is not a comfortable conclusion. It requires placing faith in institutions that are, at this particular moment in history, performing badly. But the alternative requires placing faith in corporations, which is a different kind of faith entirely, and one with even less accountability when things go wrong.

Seventy years ago, the United States looked at artificial intelligence and wrote some checks. A decade earlier, it had looked at the atom and built Los Alamos. Two years later, it looked at space and built NASA. Two of those decisions created institutions that still function, imperfectly but continuously, to this day. The third produced the world we are living in now, where a Defense Secretary and a CEO are in federal court arguing over whether a language model should be allowed to select targets.

Nobody chose this. That's the whole problem.