Lawmakers advance bill to tighten White House grip on AI model exports

Vague ML definitions subject to change – yeah, great

The House Foreign Affairs Committee voted Wednesday to advance a bill expanding the White House's authority to police exports of AI systems – including models said to pose a national security threat to the United States.

"AI has created a technology revolution that will determine whether America remains the world's leading superpower or whether it gets eclipsed by China," bill co-author and House Rep Michael McCauln (R-TX) said, joining the list of leaders comparing the technology to the Manhattan Project.

McCaul's key concern is that while the US Commerce Department's Bureau of Industry and Security (BIS) has the authority to restrict the export of AI accelerators — something the Biden administration has exploited on multiple occasions to stifle Chinese innovation in the space — it lacks the authority to regulate the export of AI models.

"Our top AI companies could inadvertently fuel China's technology, technological ascent, empowering their military and malign ambitions," McCauln warned.

The so-called Enhancing National Frameworks for Overseas Restriction of Critical Exports (ENFORCE) Act [PDF], if passed by the House and Senate, would amend the Export Control Reform Act of 2018 to grant the BIS this authority, enabling the White House to require US companies or persons to obtain export licenses before supplying China with AI models deemed a national security threat.

"This legislation provides BIS the flexibility to craft appropriate controls on closed AI systems without stifling US innovation or affecting open source models," McCauln touted.

As far as we can tell, the bill in its current form doesn't actually contain any explicit protections or carve-outs for open source models, and encompasses essentially any AI system, software or hardware, that: could be used or modified to behave in a manner that erodes the United States' national security or foreign policy; lowers the barrier to the creation of weapons of mass destruction; or could facilitate cyberattacks.

You can find the full definition below:

‘‘(C) COVERED ARTIFICIAL INTELLIGENCE SYSTEM.—

‘‘(i) INTERIM DEFINITION.—For the period beginning on the date of the enactment of this paragraph and ending on the date on which the Secretary issues the regulations required by clause (ii), the term ‘covered artificial intelligence system’ means an artificial intelligence system that:

‘‘(I) exhibits, or could foreseeably be modified to exhibit, capabilities in the form of high levels of performance at tasks that pose a serious risk to the national security and foreign policy of the United States or any combination of those matters, even if it is provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant capabilities, such as by:

‘‘(aa) substantially lowering the barrier of entry for experts or non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons or weapons of mass destruction;

‘‘(bb) enabling offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyberattacks; or

‘‘(cc) permitting the evasion of human control or oversight through means of deception or obfuscation; or

‘‘(II) can reasonably be expected to exhibit the capabilities described in subclause (I), such as by demonstrating technical similarity or equivalent performance to models in which relevant capabilities have emerged unexpectedly.

To be clear, the wording of the bill is intentionally vague, and it specifically mandates that the definition of "covered artificial intelligence system" be updated within a year of the bill being adopted.

"We also made the definitions of AI and AI systems in the bill temporary so that the administration may undertake its usual regulatory process and solicit public comment so that the final definitions are appropriately scoped," House Rep Madeleine Dean (D-PA) explained ahead of the vote.

And that's probably a good thing, as publicly available, pre-trained foundation models like Llama 2 are routinely modified through fine-tuning and other techniques to perform specialized tasks, such as code generation, or to strip away safeguards designed to prevent them from responding to ethically dubious requests.
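For the unfamiliar, that sort of modification is often trivial to pull off. Below is a minimal sketch of a LoRA fine-tune using Hugging Face's transformers and peft libraries; the hyperparameters are illustrative assumptions on our part, not anything prescribed by the bill, and the Llama 2 weights themselves sit behind Meta's own click-through license.

```python
# A minimal sketch of LoRA fine-tuning -- the kind of lightweight
# modification described above. Hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # gated repo; access must be requested
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

# Attach small trainable adapter matrices to the attention projections;
# the frozen base weights are left untouched, which is what makes this
# sort of modification so cheap.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the weights

# A training loop over a task-specific dataset would follow here, after
# which the adapter can be merged back into the base model and shared.
```

Because only the small adapter matrices are trained, this kind of fine-tune can run on a single consumer GPU in hours – which is exactly why downstream modification is so hard to police.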

Because of this, one could argue that many of the most popular models available today could be modified in ways that fall under the current definition of a "covered artificial intelligence system."

It also isn't clear how such restrictions would be enforced, particularly when it comes to open source models. It's not uncommon to see provisions in end user license agreements warning persons in embargoed countries that use of the software is prohibited, but those clauses don't actually stop anyone from accessing the source code.

Without explicit protections for open source models, such restrictions could end up having a chilling effect on model developers, for fear that uploading a model to an online repository could land them in hot water if it's ever downloaded by a Chinese national.
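To illustrate how little stands in the way, here's roughly what pulling weights from a public repository looks like with the huggingface_hub client. The repository ID below is hypothetical, and for an ungated repo nothing in the exchange checks who – or where – the requester is.

```python
# A minimal sketch: fetching publicly hosted model weights.
# The repo ID is hypothetical; for an ungated repository, no license
# click-through or location check stands between a user and the weights.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="some-org/some-model")
print(f"Weights downloaded to {local_dir}")
```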

These concerns, of course, aren't new. Calls by lawmakers last year to restrict RISC-V exports elicited similar warnings from engineers and other techies who argued attempts to do so were doomed to backfire.

While the House Foreign Affairs Committee has voted to advance the bill, there's no guarantee it'll ever make it to the president's desk: it'll have to survive votes in both the House and Senate first, and in an election year no less. Still, there's always next year. ®
