AI firms propose 'personhood credentials' … to fight AI
It's going to take more than CAPTCHA to prove you're real
Researchers at Microsoft and OpenAI, among others, have proposed "personhood credentials" to counter the online deception enabled by the AI models sold by Microsoft and OpenAI, among others.
"Malicious actors have been exploiting anonymity as a way to deceive others online," explained Shrey Jain, a Microsoft product manager, in a Microsoft Research podcast interview.
"Historically, deception has been viewed as this unfortunate but necessary cost as a way to preserve the internet's commitment to privacy and unrestricted access to information.
"Today, AI is changing the way we should think about malicious actors' ability to be successful in those attacks. It makes it easier to create content that is indistinguishable from human-created content, and it is possible to do so in a way that is only getting cheaper and more accessible."
The answer Microsoft, OpenAI, and various academic researchers propose is personhood credentials – or PHCs – which are essentially cryptographically authenticated identifiers bestowed by some authority on those deemed to be legitimate people.
The idea, described in a research paper [PDF] with more than 30 authors, is similar to the way that Certificate Authorities vouch for the ownership of a website – except that PHCs are supposed to be pseudonymous as a means of providing some measure of privacy.
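The paper doesn't pin down a wire format, but the CA analogy is easy enough to make concrete. What follows is a minimal Python sketch of the flow – our illustration, not the paper's protocol, and every name in it hypothetical. A real PHC scheme would lean on blind signatures or zero-knowledge proofs so even the issuer couldn't link a credential back to enrolment; plain Ed25519 signatures stand in for that machinery here.

```python
# Toy CA-style issuance flow. Illustrative only -- a real PHC scheme
# would use blind signatures or ZK proofs for issuer-side unlinkability.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature

# The issuer (the "root of trust") holds a long-term signing key, like a CA.
issuer_key = Ed25519PrivateKey.generate()

# The holder enrols once (identity checks happen out of band) and receives
# a signature over a holder-generated public key -- no name or ID inside it.
holder_key = Ed25519PrivateKey.generate()
holder_pub = holder_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
credential = issuer_key.sign(holder_pub)  # this pair acts as the "PHC"

# A service verifies the issuer's signature, learning only that *some*
# vetted person stands behind the key -- not who that person is.
try:
    issuer_key.public_key().verify(credential, holder_pub)
    print("credential accepted: a vetted person holds this key")
except InvalidSignature:
    print("credential rejected")
```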
Beyond some of the corresponding authors' Microsoft and OpenAI affiliations, the other co-authors have ties to: Harvard Society of Fellows, University of Oxford, SpruceID, a16z crypto, UL Research Institutes, Tucows, Collective Intelligence Project, Massachusetts Institute of Technology, Decentralization Research Center, Digital Bazaar, American Enterprise Institute, Center for Human-Compatible AI, University of California, Berkeley, OpenMined, Decentralized Identity Foundation, Goodfire, Partnership on AI, eGovernments Foundation, University of Minnesota Law School, Mina Foundation, ex/ante, School of Information, University of California, Berkeley, Berkman Klein Center for Internet & Society, and Harvard University.
The proposed PHC identifiers are not supposed to be publicly linkable to a specific individual once granted – though presumably unmasking a PHC holder could be done with an appropriate legal demand.
However, the paper is careful to note that PHCs would not restore privacy more broadly – which remains all but non-existent online thanks to the ubiquity of tracking mechanisms and the incentives to surveil.
"While PHCs preserve user privacy via unlinkable pseudonymity, they are not a remedy for pervasive surveillance practices like tracking and profiling used throughout the internet today," the paper concedes. "Although PHCs prevent linking the credential across services, users should understand that their other online activities can still be tracked and potentially de-anonymized through existing methods."
The research mentions fingerprinting too – but only as an inadequate defense against AI and as a form of biometric identification, not as a privacy threat to PHC holders.
The paper presents more of a general framework than a specific technical implementation. The authors suggest that various organizations – governmental or otherwise – could offer PHCs as a way to accommodate various "roots of trust," to use a term commonly applied to Certificate Authorities. US states, for example, could offer them to anyone with a tax identification number, and the corresponding PHC could be biometrically based, or not.
"We are concerned that the internet is inadequately prepared for the challenges highly capable AI may pose," the AI-making authors and associates state. "Without proactive initiatives involving the public, governments, technologists, and standards bodies, there is a significant risk that digital institutions will be unprepared for a time when AI-powered agents, including those leveraged by malicious actors, overwhelm other activity online."
- Microsoft security tools questioned for treating employees as threats
- Elon Musk reins in Grok AI bot to stop election misinformation
- Facebook whistleblower calls for transparency in social media, AI
- Microsoft Bing Copilot accuses reporter of crimes he covered
These and related concerns have spurred other initiatives that similarly aspire to authenticate people online with minimized information disclosure. The authors point to the World Wide Web Consortium's (W3C) Verifiable Credentials and Decentralized Identifiers (DIDs), European Union Digital Identity's (EUDI) privacy-preserving digital wallets, and other standards such as British Columbia's Person credential.
The stated goal of PHCs is "to reduce scaled deception while also protecting user privacy and civil liberties." Doing so, however, would require a one-per-person-per-issuer credential limit. The idea is that PHCs should not be available in unlimited quantities, the way email addresses are.
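Enforcing that cap falls to the issuer. One plausible approach – our assumption, not anything specified in the paper, and `issue_credential` and `ISSUER_SALT` are hypothetical names – is to deduplicate on a salted hash of a verified attribute, so the issuer need not store the raw ID:

```python
import hashlib

# Issuer-side secret salt: stops the stored digests being reversed by
# brute-forcing known ID numbers.
ISSUER_SALT = b"per-issuer-secret-salt"
issued = set()  # digests of people who already hold a credential

def issue_credential(tax_id: str) -> bool:
    """Grant at most one credential per verified person, per issuer."""
    digest = hashlib.sha256(ISSUER_SALT + tax_id.encode()).hexdigest()
    if digest in issued:
        return False  # second request from the same person: refused
    issued.add(digest)
    return True

print(issue_credential("123-45-6789"))  # True: first issuance
print(issue_credential("123-45-6789"))  # False: limit enforced
```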
Another aim, ironically, is to allow verification delegation to AI agents – so online services can ensure that AI bots are acting with authority delegated by a real person.
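Concretely, that delegation could look like a PHC-bound key signing a short-lived grant naming the agent's own key – again our sketch under assumed names, not a protocol from the paper:

```python
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature

person_key = Ed25519PrivateKey.generate()  # key tied to the person's PHC
agent_key = Ed25519PrivateKey.generate()   # key held by the AI agent

# The person signs a short-lived grant naming the agent's public key.
grant = json.dumps({
    "agent_pub": agent_key.public_key().public_bytes(
        Encoding.Raw, PublicFormat.Raw).hex(),
    "expires": int(time.time()) + 3600,  # valid for one hour
}).encode()
grant_sig = person_key.sign(grant)

# The agent signs its own request; the service checks the whole chain.
request = b"POST /comment"
request_sig = agent_key.sign(request)

try:
    person_key.public_key().verify(grant_sig, grant)  # grant is genuine
    claims = json.loads(grant)
    assert claims["expires"] > time.time(), "grant expired"
    named_agent = Ed25519PublicKey.from_public_bytes(
        bytes.fromhex(claims["agent_pub"]))
    named_agent.verify(request_sig, request)  # request came from that agent
    print("bot action accepted: authority traces back to a real person")
except (InvalidSignature, AssertionError) as exc:
    print("rejected:", exc)
```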
The authors acknowledge there are still some challenges to overcome – such as ensuring PHCs are equitable, support free expression, don't provide undue power to ecosystem participants, and are sufficiently robust against attacks and errors.
Jacob Hoffman-Andrews, senior staff technologist at the Electronic Frontier Foundation, told The Register that he took a look at the paper and "from the start it's wildly dystopian."
"It provides for governments – or potential hand-wavy other issuers, but in reality, probably governments – to grant people their personhood, which is actually something that governments are historically very bad at," he said.
"Many governments have people who they're responsible for, in one way or another, or who are under their control, that they consider 'lesser people' and they would prefer not to speak online.
"So, while the proposal uses some fancy cryptography to preserve anonymity in an environment where the government grants you a credential to speak online, it doesn't really solve the problem of your government deciding who speaks online or not."
Hoffman-Andrews observed that another major problem is that much of the concern about AI centers on state-sponsored disinformation.
"If you have different governments saying who's a person, who's granted permission to speak online, but those governments also have an interest in deceptive activity at scale, you wind up with institutions in different countries not trusting the personhood of people in other countries and restricting that speech and just further fragmenting already deeply fragmented internet."
What's more, Hoffman-Andrews said there are movements in the US, the UK, and elsewhere to limit the ability of children and teenagers to speak online.
He warned: "In a regime where you need a personhood credential to be able to log in, this actually seems like kind of a custom-built choke point for governments to prevent certain people from getting online." ®