Don't worry, folks, here comes Chuck Schumer with some ideas about regulating AI
As Europe forms task force to steer probes into ChatGPT
US Senator Chuck Schumer (D-NY) is lately bent on passing bipartisan legislation enabling independent public audits of commercial AI products before they're unleashed on the world.
After months of consultation, the Senate majority leader says he has drafted a framework for regulating the development, deployment, and use of advanced machine-learning tech. The boom in generative AI, all the hype surrounding it, and China's steps to restrict the technology have made it all the more urgent to govern these models and software in America, he said.
That framework would, if Schumer's plans come to fruition, result in "comprehensive" legislation in the United States that organizations would need to follow when rolling out artificial intelligence.
"The Age of AI is here, and here to stay," Schumer said in a canned statement. "Now is the time to develop, harness, and advance its potential to benefit our country for generations."
"Given the AI industry's consequential and fast-moving impact on society, national security, and the global economy," he continued, "I've worked with some of the leading AI practitioners and thought leaders to create a framework that outlines a new regulatory regime that would prevent potentially catastrophic damage to our country while simultaneously making sure the US advances and leads in this transformative technology.
"But there is much more work to do and we must move quickly. I look forward to working across the aisle, across the industry and across the country and beyond, to shape this proposal and refine legislation to make sure AI delivers on its promise to create a better world."
Schumer's proposed framework, which has been shared around Capitol Hill and is not yet public, revolves around four guardrails: who, where, how, and protect.
Specifically, under these proposals, it should be clear who trained a model and who is supposed to use it, where the training data came from, and how the software will be used. Crucially, enough info should be provided so that outside experts can audit, test, and review machine-learning products and technologies before they are put on the market, and that these findings are made public.
Finally, for the protect part, developers would have to demonstrate their AI systems are aligned with American values and that "AI developers deliver on their promise to create a better world."
Schumer plans to share his framework, if he hasn't already, with leaders in academia, industry, and government, and intends to work with think tanks and research institutions to refine his proposals before presenting a law bill. The Register pressed the senator's office for more details, and a representative could offer nothing more than a press release.
That release makes clear the proposed framework "will require companies to allow independent experts to review and test AI technologies ahead of a public release or update, and give users access to those results."
Speaking of bureaucracy... the European Data Protection Board, which helps keep Europe's national privacy watchdogs on the same page, has created a task force to ensure those regulators share information and details of any crackdowns on OpenAI's ChatGPT. This is after Italy took a hard line against the chat bot, over privacy and child protection fears, and other nations are considering the same sort of action.
This suggests Europe is converging on a unified response to the next-gen AI system that's been championed by Microsoft and injected into all corners of today's technology.
This all comes as China's cyberspace regulator drafted similar rules that would require outfits to submit security assessments examining the potential safety risks of their AI products. Developers must ensure that the training data used to shape their models will not lead to discrimination, and reflects the country's socialist values, Reuters reported.
The US Department of Commerce's National Telecommunications and Information Administration is seeking public comments to help craft potential policies aimed at improving the accountability of AI products and services. The agency issued its formal AI "Accountability Request for Comment" this week. ®