Infosec guru Schneier worries corp AI will manipulate us
Can we turn to govt, academic models instead?
RSAC Corporate AI models are already skewed to serve their makers' interests, and unless governments and academia step up to build transparent alternatives, the tech risks becoming just another tool for commercial manipulation.
That's according to cryptography and privacy guru Bruce Schneier, who spoke to The Register last week following a keynote speech at the RSA Conference in San Francisco.
"I worry that it'll be like search engines, which you use as if they are neutral third parties but are actually trying to manipulate you. They try to kind of get you to visit the websites of the advertisers," he told us. "It's integrity that we really need to think about, integrity as a security property and how it works with AI."
During his RSA keynote, Schneier asked: "Did your chatbot recommend a particular airline or hotel because it's the best deal for you, or because the AI company got a kickback from those companies?"
To deal with this quandary, Schneier proposes that governments take a more hands-on role in regulating AI, forcing model developers to be more open about the information their models ingest and how those models arrive at their decisions.
He praised the EU AI Act, noting that it provides a mechanism to adapt the law as technology evolves, though he acknowledged there are teething problems. The legislation, which entered into force in August 2024, introduces phased requirements based on the risk level of AI systems. Companies deploying high-risk AI must maintain technical documentation, conduct risk assessments, and ensure transparency around how their models are built and how decisions are made.
Because the EU is the world's largest trading bloc, the law is expected to have a significant impact on any company wanting to do business there, he opined. This could push other regions toward similar regulation, though he added that in the US, meaningful legislative movement remains unlikely under the current administration.
- EU AI Act still in infancy, but those with 'intelligent' HR apps better watch out
- US senators propose guardrails for government AI purchases and operations
- Prepare your audits: EU Commission approves first-of-its-kind AI Act
- AI bigwigs urge AGs to block OpenAI's profit pivot
Schneier noted that the issue of bias in AI agents is hard to solve, largely because users often prefer systems that take their side. He gave the example of an AI agent designed for judges - just as a judge chooses law clerks who align with their views, they'll likely pick a digital aide that does the same.
The key to addressing this is transparency, he argues, so that people can know what to expect. And it's here that government and academia can help by building transparent AI models that serve as a counterpoint to commercial systems.
I think I just want non-corporate AI
"I think I just want non-corporate AI," he told us. "France is building an AI, right? Building an AI with non-profits. I want universities building them, and whether you trust them is complicated. Are you going to trust the Chinese government AI? Probably not."
The French plan, dubbed Current AI at its launch in February 2025, is a public-private partnership involving the French government, Salesforce, Google, and several philanthropic organizations.
The initiative is backed by $400 million in combined funding and has the support of nine other governments from both inside and outside the EU. The goal is to promote the development of open-source infrastructure, improve access to high-quality datasets, and support AI tools that serve the public good with greater transparency and accountability.
"Current AI can change the world of AI," said French President Emmanuel Macron at the time. "By providing access to data, infrastructure, and computing power to a large number of partners, Current AI will contribute to developing our own AI ecosystems in France and Europe, diversifying the market, and fostering innovation worldwide in a fair and transparent way."
Schneier said such models are a very good way forward, though corporations will fight those efforts tooth and nail. Nevertheless, it must be done for the sake of consumers, he said, and legislators can't afford to fail this time.
"We failed with social media. We failed with search, but we can do it with AI," he asserted. "It's a combination of tech and policy." ®