Italy bans ChatGPT for 'unlawful collection of personal data'
Because we're the Garante, and you're crazy
Italian privacy enforcers have opened an investigation into OpenAI's ChatGPT for allegedly violating EU and Italian privacy laws by collecting personal data of the country's citizens without "a suitable legal basis."
The announcement of the probe came alongside a decree in which the Guarantor for the Protection of Personal Data (GPDP) said it was imposing an immediate "temporary limitation of the processing of personal data" of Italian citizens by the chatbot, due to violations of both the EU's General Data Protection Regulation and Italy's own data protection code.
"The Privacy Guarantor notes the lack of information to users and all interested parties whose data is collected by OpenAI," the GPDP said in a statement. It added that ChatGPT's processing of user data can provide an inaccurate picture, "as the information provided by ChatGPT does not always correspond to the real data."
The Guarantor also expressed concern that OpenAI hadn't vetted the age of its users, despite the Microsoft-backed firm saying the service is designed for those aged 13 or older. There's no age verification process for ChatGPT users, which the GPDP said "exposes minors to absolutely unsuitable answers compared to the degree of development and self-awareness" – presumably referring to the pre-teens using it, and not the AI itself.
In its statement, the GPDP also referenced the data exposure bug that last week caused ChatGPT to display partial payment details and chat histories for other users on people's accounts. While the breach wasn't mentioned in the limitation decree, the GPDP's mention of it in its statement suggests the incident is central to its investigation.
The Guarantor said the temporary limit extends to all personal data of interested parties being collected within Italy's borders, in essence blocking use of the service until OpenAI is able to show that it has resolved the issues identified by the GPDP.
OpenAI has 20 days to respond, the Guarantor said, or else it faces fines of up to €20 million ($21.7 million) or up to 4 percent of its annual global turnover, whichever is higher.
Is that AI mistreating you?
This isn't the first time the Guarantor has taken action against an AI that it thought was behaving badly. In February the GPDP announced a similar prohibition against Replika, an AI chatbot app that allows users to customize a virtual companion for anything from friendly chats to a virtual relationship.
The GPDP said last month it was concerned that Replika may increase risks for individuals "still in a developmental stage" (ie, minors), "or in a state of emotional fragility." As we've noted in previous coverage of Replika, CEO Eugenia Kuyda has said that otherwise stable individuals have been fooled by the app into thinking their Replikas are sentient and have built relationships with their personal chatbot.
Italian authorities also claimed that Replika lacked an age verification mechanism. As such, they alleged in February, Replika was breaching the GDPR and unlawfully processing personal data.
ChatGPT, Replika and tools like it are so new that it's easy to forget widespread use has only been happening "for a matter of weeks," said Edward Machin, a London-based privacy lawyer at international law firm Ropes & Gray.
Machin told us in a statement that most users probably haven't stopped to consider the privacy implications of their data being used to train OpenAI's software. "The allegation here is that users aren't being given the information to allow them to make an informed decision, and more problematically, that in any event there may not be a lawful basis to process their data."
The move to ban OpenAI's processing of Italians' data is one of the most powerful weapons in the GPDP's armory, Machin said. "I suspect that regulators across Europe will be quietly thanking the Garante for being the first to take this step and it wouldn't be surprising to see others now follow suit and issue similar processing bans," Machin predicted.
OpenAI hadn't responded to our questions by the time of publication. ®