AI firms and civil society groups plead for passage of federal AI law ASAP

Congress urged to act before year's end to support US competitiveness

More than 60 commercial orgs, non-profits, and academic institutions have asked Congress to pass legislation authorizing the creation of the US AI Safety Institute within the National Institute of Standards and Technology (NIST).

Bills introduced previously in the US Senate and the House of Representatives – S 4178, the Future of AI Innovation Act, and HR 9497, the AI Advancement and Reliability Act – call for a NIST-run AI center focused on research, standards development, and public-private partnerships to advance artificial intelligence technology.

The Senate bill, backed by senators Maria Cantwell (D-Wash.), Todd Young (R-Ind.), John Hickenlooper (D-Colo.), Marsha Blackburn (R-Tenn.), Ben Ray Luján (D-N.M.), Roger Wicker (R-Miss.) and Kyrsten Sinema (I-Ariz.), would formally establish the US AI Safety Institute, which already operates within NIST.

The House bill, sponsored by Jay Obernolte (R-CA-23), Ted Lieu (D-CA-36), and Frank Lucas (R-OK-3), describes the NIST-based group as the Center for AI Advancement and Reliability.

If approved by both legislative bodies, the two bills would be reconciled into a single piece of legislation for President Biden to sign. Whether this might happen at a time of historic Congressional inaction, amid a particularly consequential election cycle, is anyone's guess.

The "Do Nothing" 118th Congress, which commenced on January 3, 2023 and will conclude on January 3, 2025, has been exceptionally unproductive – enacting just 320 pieces of legislation to date compared to an average of about 782. That's the smallest number of laws enacted in the past 50 years, which is as far as the records go at GovTrack.us.

Undaunted, the aforementioned coalition, led by Americans for Responsible Innovation (ARI) and the Information Technology Industry Council (ITI), published an open letter [PDF] on Tuesday urging lawmakers to support NIST's efforts to address AI safety for the sake of national security and competitiveness.

"As other governments quickly move ahead, Members of Congress can ensure that the US does not get left behind in the global AI race by permanently authorizing the AI Safety Institute and providing certainty for its critical role in advancing US AI innovation and adoption," declared ITI president and CEO Jason Oxman in a statement. "We urge Congress to heed today's call to action from industry, civil society, and academia to pass necessary bipartisan legislation before the end of the year."

Signatories of the letter include: AI-focused platform providers like Amazon, Google, and Microsoft; defense contractors like Lockheed Martin and Palantir; model makers like Anthropic and OpenAI; advocacy groups like Public Knowledge; and academic institutions like Carnegie Mellon University.

This call to action, then, has more to do with national policy goals and frameworks for assessing AI systems than with creating enforceable limits. That sets it apart from California's SB 1047, which met resistance from the tech industry and was vetoed by state governor Gavin Newsom last month over concerns about the bill's effect on the state economy.

Both the Senate and House bills call for the formulation of voluntary best practices – unlike SB 1047, which envisioned enforceable obligations to promote AI safety.

California senator Scott Wiener, who introduced SB 1047, responded to Newsom's veto by saying, "While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public. This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way."

In the 70 days remaining in 2024, perhaps lawmakers will find a way to unite and pass federal AI legislation that tech companies themselves have endorsed. But probably not. ®
