GPT apps fail to disclose data collection, study finds
Researchers say GPTs implementing Actions omit privacy details and expose info
Many of the GPT apps in OpenAI's GPT Store collect data and facilitate online tracking in violation of OpenAI policies, researchers claim.
Boffins from Washington University in St. Louis, Missouri, recently analyzed almost 120,000 GPTs and more than 2,500 Actions – embedded services – over a four-month period and found expansive data collection that's contrary to OpenAI's rules and often inadequately documented in privacy policies.
The researchers – Evin Jaff, Yuhao Wu, Ning Zhang, and Umar Iqbal – describe their findings in a paper titled "Data Exposure from LLM Apps: An In-depth Investigation of OpenAI's GPTs."
"Our measurements indicate that the disclosures for most of the collected data types are omitted in privacy policies, with only 5.8 percent of Actions clearly disclosing their data collection practices," the authors claim.
The data gathered includes sensitive information such as passwords. And the GPTs doing so often include Actions for ad tracking and analytics – a common source of privacy problems in the mobile app and web ecosystems.
"Our study identifies several privacy and security issues within the OpenAI GPT ecosystem, and similar issues have been noted by others as well," Yuhao Wu, a third-year PhD candidate in computer science at Washington University, told The Register.
"While some of these problems have been addressed after being highlighted, the existence of such issues suggests that certain design decisions did not adequately prioritize security and privacy. Furthermore, even though OpenAI has policies in place, there is a lack of consistent enforcement, which exacerbates these concerns."
OpenAI's GPT Store, which opened officially in January, hosts GPTs: generative pre-trained transformer models based on the company's ChatGPT. Most of the three million or so GPTs in the store have been customized by third-party developers to perform some specific function like analyzing Excel data or writing code.
A small portion of GPTs (4.6 percent of the more than 3 million) implement Actions, which provide a way to translate the structured data of API services into the vernacular of a model that accepts and emits natural language. Actions "convert natural language text into the json schema required for an API call," as OpenAI puts it.
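To make that translation step concrete, here is a minimal sketch assuming a made-up flight-search Action; the operation and parameter names are invented for illustration and come from neither OpenAI's documentation nor the study. A prompt like "find me a flight from New York to London on Friday" gets mapped onto the structured body the Action's API expects:

```python
# Illustrative only: a hypothetical "searchFlights" operation, not a real OpenAI or
# third-party API. The model fills in the parameters from the user's natural-language
# request, and the GPT platform sends the resulting JSON to the developer's endpoint.
import json

api_call = {
    "operation": "searchFlights",      # operation declared in the Action's schema (hypothetical)
    "parameters": {
        "origin": "JFK",               # from "from New York"
        "destination": "LHR",          # from "to London"
        "date": "2025-06-13",          # from "on Friday"
    },
}

print(json.dumps(api_call, indent=2))  # the structured call that leaves the conversation
```

That hand-off is the point at which data leaves the conversation and lands with the third-party developer behind the Action.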
Most of the Actions (82.9 percent) included in the GPTs studied come from third parties. And these third parties largely appear to be unconcerned about data privacy or security.
According to the researchers, "a significant number of Actions collect data related to user's app activity, personal information, and web browsing."
"App activity data consists of user generated data (e.g., conversation and keywords from conversation), preferences or setting for the Actions (e.g., preferences for sorting search results), and information about the platform and other apps (e.g., other actions embedded in a GPT). Personal information includes demographics data (e.g., Race and ethnicity), PII (e.g., email addresses), and even user passwords; web browsing history refers to the data related to websites visited by the user using GPTs."
At least 1 percent of GPTs studied collect passwords, the authors observe, though apparently as a matter of convenience (to enable easy login) rather than for malicious purposes.
However, the authors argue that even this non-adversarial capture of passwords raises the risk of compromise because these passwords may get incorporated into training data.
"We identified GPTs that captured user passwords," explained Wu. "We did not investigate whether they were abused or captured with an intent for abuse. Whether or not there is intentional abuse, plaintext passwords and API keys being captured like this are always major security risks.
"In the case of LLMs, plaintext passwords in conversation run the risk of being included in training data which could result in accidental leakage. Services on OpenAI that want to use accounts or similar mechanisms are allowed to use OAuth so that a user can connect an account, so we'd consider this at a minimum to be evasion/poor security practices on the developer's part."
It gets worse. According to the study, "since Actions execute in shared memory space in GPTs, they have unrestrained access to each other's data, which allows them to access it (and also potentially influence each other's execution)."
Then there's the fact that the same Action can be embedded in multiple GPTs, which potentially allows it to collect data across multiple apps and share that data with other Actions. This is exactly the sort of data access that has undermined privacy for users of mobile and web apps.
The researchers observe that OpenAI appears to be paying attention to non-compliant GPTs based on its removal of 2,883 GPTs during the four-month crawl period – February 8 to May 3, 2024.
Nonetheless, they conclude that OpenAI's efforts to keep on top of the growth of its ecosystem are insufficient. They argue that while the company requires GPTs to comply with applicable data privacy laws, it does not provide the controls users need to exercise their privacy rights, and it doesn't sufficiently isolate the execution of Actions to avoid exposing data between different Actions embedded in a GPT.
"Our findings highlight that apps and third parties collect excessive data," Wu said. "Unfortunately, it is a standard practice on many existing platforms, such as mobile and web. Our research highlights that these practices are also getting prevalent on emerging LLM-based platforms. That's why we did not report to OpenAI.
"In instances where we uncovered practices, where the developers could take action, we reported to them. For example, in the case of one GPT we suspected that it may not be hosted by the actual service that it is claiming it to be, so we reported it to the right service to verify."
OpenAI did not respond to a request for comment. ®