Apple becomes the latest company to ban ChatGPT for internal use

Didn't stop OpenAI rolling out its ChatGPT app for iOS, just made things a bit awkward

Apple has become the latest company to ban internal use of ChatGPT and similar products, ironically just as the OpenAI chatbot comes to iOS in the form of a mobile app. 

News of the move was revealed yesterday by The Wall Street Journal, which reviewed an internal Apple document informing employees of the ban. According to the document, Apple's concerns fall in line with those of other corporations that have forbidden internal use of ChatGPT, namely that the AI could spill sensitive internal information shared with it. 

Apple reportedly barred GitHub's automated coding tool, Copilot, as well. Rumors have been swirling about Apple's AI plans for some time, with the company possibly working on its own large language model (LLM) to rival ChatGPT and Google Bard.

Cupertino is hardly alone in its decision to prohibit use of the Microsoft-backed chatbot: it joins an ever-growing list of companies including Amazon and a number of banks, among them JPMorgan Chase, Bank of America, Citigroup, and Deutsche Bank. 

Apple rival Samsung has also moved to ban ChatGPT from internal use, twice, due to mishaps. Samsung lifted a ban on employee use of ChatGPT in March, but less than a month later Korean media revealed that Samsung staff had asked ChatGPT for help resolving source code bugs, fixing software used to gather measurement and yield data, and turning meeting notes into minutes. 

Samsung reimposed its ChatGPT ban earlier this month to prevent similar incidents from happening again.

The problem with ChatGPT, Google Bard, and other LLM bots is that the data fed into them is often used to further train the underlying models, which the UK's spy agency, GCHQ, has warned can easily lead to confidential business information being regurgitated when others ask similar questions. 

Queries are also visible to bot providers such as OpenAI and Google, which may themselves review the content fed to their language models, further risking the exposure of closely guarded corporate secrets.

Accidents happen, too

Along with the risk that a bot shares confidential information when trying to be helpful to others, there's also the possibility that companies like OpenAI simply aren't coding the best software.

In March, OpenAI admitted that a bug in the open source library redis-py caused bits of people's tête-à-têtes with ChatGPT to be viewable by other users. That bug, Kaspersky lead data analyst Vlad Tushkanov told us, should serve as a reminder that LLM chatbots don't offer users any real privacy.

"ChatGPT warns on login that 'conversations may be reviewed by our AI trainers' … So from the very beginning the users should have had zero expectation of privacy when using the ChatGPT web demo," Tushkanov said. 

OpenAI last month added the ability for ChatGPT users to disable chat history, which not only hides conversations from the sidebar in ChatGPT's interface but also prevents them from being used to train OpenAI's models.

Even with history disabled, OpenAI will still retain conversations for 30 days, and keep the ability to review them "when needed to monitor for abuse, before permanently deleting," the Microsoft-backed company said.

In the same announcement, OpenAI also said it will soon roll out ChatGPT Business, a version of the chatbot that gives organizations more control over their data; by that, the company meant ChatGPT Business conversations won't be used to train its LLMs. 

We asked OpenAI some additional questions about ChatGPT Business, such as whether OpenAI staff would still be able to view chats and when it may be released, and will update this story if we hear back. ®
