Samsung reportedly leaked its own secrets through ChatGPT
Well that didn't take long, now did it?
Less than three weeks after Samsung lifted a ban on employees using ChatGPT, the chaebol has reportedly leaked its own secrets into the AI service at least three times – including sensitive in-development semiconductor information.
The ban was intended to protect company data, though it was lifted on March 11 to enhance productivity and keep staff engaged with the world's latest cool tech tools.
According to a Korean media report, Samsung staff subsequently fed corporate secrets into ChatGPT, including equipment measurement and yield data from the conglomerate's device solutions and semiconductor business unit.
One employee told journalists they copied all the problematic source code of a semiconductor database download program, entered it into ChatGPT, and inquired about a solution.
Another uploaded program code designed to identify defective equipment, and a third uploaded records of a meeting in an attempt to auto-generate minutes.
That's not such a good idea, particularly with internal blueprints, because ChatGPT's FAQ states: "Your conversations may be reviewed by our AI trainers to improve our systems."
Samsung's secrets may therefore be accessible to OpenAI, an entity with more than a passing interest in the Korean giant's tech and affairs.
Once the incidents were discovered, Samsung reportedly applied "emergency measures" that include limiting upload capacity to 1024 bytes per question.
"If a similar accident occurs even after emergency information protection measures are taken, access to ChatGPT may be blocked on the company network," Samsung chiefs reportedly warned employees.
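The reported emergency measure amounts to a per-question size cap. As a rough illustration, a gate like that could be enforced before any prompt leaves the corporate network; the 1024-byte figure comes from the report, but the function names and error handling below are invented for the sketch.

```python
# Hypothetical sketch of a per-prompt size gate like the one Samsung
# reportedly imposed. Only the 1024-byte cap is from the report; the
# rest is illustrative.

MAX_PROMPT_BYTES = 1024  # reported emergency upload limit per question


def check_prompt(prompt: str) -> str:
    """Reject prompts whose UTF-8 encoding exceeds the cap."""
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise ValueError(
            f"prompt is {size} bytes; policy caps uploads at "
            f"{MAX_PROMPT_BYTES} bytes per question"
        )
    return prompt
```

A cap like this limits how much can leak in one go, but of course does nothing to stop sensitive data that fits under the limit.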
The chaebol had already warned its employees not to leak proprietary information when it lifted the blanket ban on ChatGPT. OpenAI similarly warns against uploading sensitive data.
Beginning March 1 of this year, OpenAI policy dictates that API users must opt in to share data to train or improve its models, while non-API services – like ChatGPT or DALL-E – require a user to opt out in order to avoid having their data used.
"We remove any personally identifiable information from data we intend to use to improve model performance. We also only use a small sampling of data per customer for our efforts to improve model performance," claims OpenAI.
Local media reports indicate Samsung is now considering building its own in-house AI service to prevent further incidents.
The Reg has asked Samsung to confirm the details of this story, but had not received a response at the time of writing.
According to The Korea Times, the incident has shaken domestic tech companies, including chipmaker SK hynix and consumer hardware company LG Display, which are now working on guidelines for use of AI chatbots.
SK hynix has reportedly blocked the use of chatbots on its internal network, and employees wanting to use the service must seek security approval. LG Display has apparently decided on an educational campaign so its staff understand how to protect company secrets.
Presumably step one is "don't upload them to someone else's website." ®