Samsung puts ChatGPT back in the box after 'code leak'
Others also blocked as company works on its own generative AI tech
Samsung has imposed a "temporary" ban on generative AI tools like ChatGPT after what appears to be an accidental source code leak.
An internal memo seen by Bloomberg told staffers they'd better not use tech such as OpenAI's ChatGPT or Google's Bard on pain of termination because of the risk to the company's intellectual property. The newswire said this morning that the leaked memo had been sent to one of Sammy's "biggest divisions," adding that the company had confirmed this.
The memo reportedly said: "Interest in generative AI platforms such as ChatGPT has been growing internally and externally... While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI."
The move amounts to a re-ban by the massive South Korean electronics company. A previous prohibition was lifted earlier this year, after which Korean media carried unconfirmed reports that Samsung staffers had entered corporate secrets into the chatbot in an attempt to iron out bugs in "problematic" source code, as well as when trying to generate meeting minutes, among other alleged blunders. The leaked data reportedly included equipment measurement and yield data from the chaebol's device solution and semiconductor business unit. We asked Samsung at the time to confirm or deny this and have asked again.
If they did, they wouldn't be the first engineers to try to get the chatbot to help them with the arduous process of testing software and fixing code. Compsci researchers Chunqiu Steven Xia and Lingming Zhang have even created an automated process for this, dubbed ChatRepair, which not only tests candidate patches but learns from previous failures. They show how cheap and effective at least one such solution is in a paper titled "Keep the Conversation Going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT."
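The core idea the paper describes — feed each test failure back into the next prompt rather than starting from scratch — can be sketched roughly as the loop below. This is a minimal illustration, not the authors' implementation: `stub_model` stands in for the real ChatGPT calls, and the toy test harness (a single absolute-value bug) is invented for the example.

```python
def run_tests(candidate):
    """Toy test harness: the 'bug' is fixed once the patch returns absolute values.
    Returns None on success, or a failure message to feed back to the model."""
    try:
        if candidate(-3) == 3 and candidate(4) == 4:
            return None
        return "test failed: expected absolute value"
    except Exception as exc:
        return f"test crashed: {exc}"

def stub_model(failure_history):
    """Stand-in for an LLM. Its first guess is wrong; after seeing one failure
    message it proposes a corrected patch, mimicking conversational repair."""
    attempts = [
        lambda x: x,       # naive first attempt (fails on negatives)
        lambda x: abs(x),  # revised attempt after failure feedback
    ]
    return attempts[min(len(failure_history), len(attempts) - 1)]

def chat_repair(model, max_rounds=5):
    """Keep the conversation going: test each patch, append any failure
    message to the history, and ask the model again with that context."""
    history = []
    for _ in range(max_rounds):
        patch = model(history)
        failure = run_tests(patch)
        if failure is None:
            return patch, history  # plausible patch found
        history.append(failure)
    return None, history

patch, history = chat_repair(stub_model)
print(patch(-7), len(history))  # -> 7 1
```

In the real system the history would be serialized into the chat prompt, which is what makes the approach cheap: each retry reuses the model's context instead of paying for a cold start.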
As for OpenAI, the company behind the text-based assistant is already putting up some of its own guardrails as the chatbot attracts more and more attention from regulators. Last week it launched a feature that lets users stop ChatGPT from slurping up text generated in their private conversations and using it to train large language models, for example.
Meanwhile, under updated policies that took effect on March 1, OpenAI made two changes to its data usage and retention practices:
- OpenAI will not use data submitted by customers via its API to train or improve its models unless customers explicitly opt in to share their data for that purpose.
- Any data sent through the API will be retained for abuse and misuse monitoring purposes for a maximum of 30 days, after which it will be deleted (unless otherwise required by law).
In the EU, the proposed new AI Act may soon ask developers to disclose and detail any copyrighted data used to train their ML models.
IBM, on the other hand, appears to have few qualms about using the tech. Yesterday CEO Arvind Krishna said he thought up to 30 percent of IBM's back-office jobs – that's around 7,800 people – could be replaced by AI.
Samsung is said to be working on its own AI tools. ®