# Hawley and Blumenthal’s AI Bill Is a Brazen Executive Power Grab That Puts National Security at Risk

Juan Londoño

On September 29, Senators Josh Hawley (R‑MO) and Richard Blumenthal (D‑CT) introduced a bill to create a risk evaluation program within the Department of Energy (DOE). The bill would charge the DOE with conducting various assessments and tests on “advanced” artificial intelligence (AI) systems, with a special emphasis on regulating hypothetical artificial superintelligence systems. In their press release, the senators repeated their belief that AI developers have “rushed to market with products that are unsafe for the public and often lack basic due diligence and testing.” According to the lawmakers, the bill would establish safety stopgaps and testing to ameliorate that issue. The reality is quite different.

The senators’ claim that current models are “untested” is demonstrably false: most major developers already publish public testing data regularly. For example, OpenAI maintains a “safety evaluations hub,” which shows the results of safety tests on issues such as the production of disallowed content, hallucinations, and susceptibility to prompts that circumvent its content policy. However, the scope of the legislation extends far beyond testing. Hidden in the bill is a provision that tasks the DOE with developing “proposed options for regulatory or governmental oversight, including potential nationalization or other strategic measures, for preventing or managing the development of artificial superintelligence if artificial superintelligence seems likely to arise.”

In other words, this bill would grant the government broad power to seize assets whenever a company crosses the technological threshold of what the bill defines as superintelligence. Granting such powers would hamper the development of frontier models, which, as the name indicates, push the “frontier” of what commercially available AI can do. Thwarting the development of these models would put the US AI industry at a significant disadvantage in the global race for AI dominance and push American consumers toward riskier, less secure foreign-made frontier models.

Under this bill, a model must meet three conditions to be considered an “artificial superintelligence”: it must be able to operate autonomously for long stretches of time, it must enable a device or software to match or exceed human performance across most tasks, and it must be able to enhance the capabilities of a device or software independently, with little or no human oversight. These advanced models are being developed in hopes of enabling AI’s most impactful uses, such as automated, hyper-precise, and hyper-personalized health diagnostics.

However, once their models are deemed artificial superintelligence, developers would operate under a regulatory sword of Damocles: the government would have the power to take over a company’s assets whenever it sees fit. As a result, US-based AI companies will underinvest in and underdevelop their models to avoid crossing the superintelligence threshold.

This approach would recklessly enable other nations’ AI development efforts, as the market for frontier models would be filled by models from foreign—and potentially adversarial—nations. The US would effectively cede leadership to countries like China. Yet, as a study by the National Institute of Standards and Technology’s Center for AI Standards and Innovation has already shown, Chinese models are substantially more susceptible to agent hijacking and jailbreaking attacks than US models. In other words, the models that would fill the void left by American frontier models are more likely to go rogue or to have weaker, easily circumventable safety guardrails, making them more vulnerable to exploitation by bad actors. The bill is thus ultimately self-defeating: it would concede the development of the most advanced and riskiest AI models to nations with consistently subpar safety standards.

There are precedents in other industries, such as hazardous chemicals or automobiles, where government agencies are asked to validate or evaluate existing testing, but those regimes place stronger limits on government power. At worst, the executive can go only as far as suspending or prohibiting the sale of non-compliant products, a far cry from nationalization. Equipping the DOE with regulatory powers so broad that it can seize a company’s assets would be an unprecedented and dangerous concession of power to the executive.

While the lawmakers focus mostly on the ways these frontier models can go wrong, they overlook the significant upside these technologies could have in cybersecurity, scientific research, and productivity. Under this proposal, the US would have to rely on foreign-made, riskier models or miss out on these AI-powered technological advancements altogether.

The Hawley-Blumenthal AI bill disguises a brazen executive power grab as an otherwise harmless third-party safety testing regime. If a model becomes advanced enough to cross the “superintelligence” threshold, the government would have the authority to wipe out all of its investments through nationalization on a whim. That creates a clear disincentive for frontier AI companies to innovate and improve their models, depriving Americans and the world of models capable of producing valuable scientific breakthroughs or forcing them to rely on foreign alternatives. Ironically, a bill premised on curtailing AI-related risks would push the global population toward lower-quality, riskier, foreign-made frontier models.

Juan Londoño is the Chief Regulatory Analyst at the Taxpayers Protection Alliance.