DDoS-like attack brought down OpenAI this week, not just its purported popularity
Plus: Lab launches dataset sharing initiative for its own benefit
OpenAI's ChatGPT assistant and APIs weathered a distributed denial-of-service (DDoS) attack this week, according to the super-lab.
Its engineers acknowledged at 0554 PT (1354 UTC) on Wednesday that its services were dropping offline at times, and a fix was eventually deployed to bring everything back up. CEO Sam Altman claimed his outfit was struggling to keep up with demand from users, which was leading to stability problems. But then the chat bot and sibling APIs went down again, and traffic that looked like a DDoS attack was blamed.
As you read this, it's supposed to be working again. Here's a timeline of OpenAI's wobbles:
- On Tuesday, November 7, the day after the lab held its Devday and announced a bunch of things, ChatGPT and the OpenAI API were up and down for undisclosed reasons for more than two hours from 1952 PT.
- On Wednesday, November 8, from 0554 PT, OpenAI reported a major outage across its services that lasted nearly two hours to 0746 PT. It said it was experiencing high error rates and "identified the problem and implemented a fix."
- That same day, at 1008 PT, OpenAI CEO Sam Altman blamed the popularity of its tech following the Devday for the downtime. "There will likely be service instability in the short term due to load. Sorry," he said.
- Then at 1203 PT, OpenAI said its services were up and down again, and that it had implemented a fix by 1300 PT. At 1723 PT, it noted that its systems were suffering again, making them unavailable to users.
- At 1949 PT, the biz acknowledged it was being hit by what looked like a DDoS attack.
- Today, November 9, at 1321 PT, OpenAI said everything should be back to normal.
"We are dealing with periodic outages due to an abnormal traffic pattern reflective of a DDoS attack," it said overnight. "We are continuing work to mitigate this."
ChatGPT was not available earlier today as we were looking into this mess. An error message in the web app reads: "Oops! Our systems are a bit busy at the moment, please take a break and try again soon." The Register has asked OpenAI for more comment.
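For developers hitting those high error rates through the API rather than the web app, the usual coping strategy is to retry transient failures with exponential backoff. Below is a minimal sketch, assuming the official openai Python client (v1.x); the model name, retry limits, and helper name are illustrative, not anything OpenAI prescribes.

```python
import random
import time

from openai import APIConnectionError, APIStatusError, OpenAI  # openai-python v1.x

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chat_with_retry(messages, model="gpt-3.5-turbo", max_attempts=5):
    """Call the Chat Completions API, retrying transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except (APIConnectionError, APIStatusError):
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Jittered exponential backoff: roughly 1-2s, 2-3s, 4-5s, ...
            time.sleep(2 ** attempt + random.random())


reply = chat_with_retry([{"role": "user", "content": "Say hello"}])
print(reply.choices[0].message.content)
```

The v1 client also retries some failures on its own via its max_retries setting, so an explicit loop like this mainly buys longer, jittered waits during a prolonged wobble like this week's.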
Hacktivist group Anonymous Sudan claimed responsibility for the DDoS attack, though who knows if they actually were behind it.
The outage hit the lab two days after it launched its first-ever developer conference and debuted its latest large language model, GPT-4 Turbo, which was trained on information scraped from the internet up until April 2023. The system can handle up to 128,000 tokens at a time, equivalent to about 300 pages of text and four times the limit of its predecessor, GPT-4.
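To put that 128,000-token figure in practical terms, developers typically count tokens before sending a prompt to check it fits. Here's a rough sketch using OpenAI's tiktoken tokenizer; the reserved-reply budget is an assumption for illustration.

```python
import tiktoken  # OpenAI's open source tokenizer library

CONTEXT_WINDOW = 128_000  # GPT-4 Turbo's advertised token limit


def fits_in_context(prompt: str, reserved_for_reply: int = 4_096) -> bool:
    """Return True if the prompt, plus room for a reply, fits in the 128K window."""
    # cl100k_base is the encoding used by the GPT-4 family of models.
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(prompt)) + reserved_for_reply <= CONTEXT_WINDOW


print(fits_in_context("Summarise this article: ..."))  # True for a short prompt
```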
Other major updates announced by Altman include the GPT Store, a platform showcasing the most popular custom applications built by third-party developers using its ChatGPT model; those developers might be able to make a pretty penny.
"Revenue sharing is important to us," said Altman. "We're gonna pay people who build the most useful and the most used GPTs a portion of our revenue."
OpenAI also promised to protect and defend users if they face any copyright infringement claims for using content generated by its AI models. Following in the footsteps of similar pledges made by Microsoft and Google, OpenAI said it will foot the legal costs in any potential litigation for paying customers using the ChatGPT Enterprise model and its corresponding API.
On Thursday, the biz launched OpenAI Data Partnerships to collaborate with other organizations to compile public and private datasets to train future models.
So far, it is working with the Icelandic government and a software biz to boost GPT-4's language abilities in the island nation, and with the Free Law Project, a US non-profit building technology designed to make it easier for people to understand and navigate court cases.
"Modern AI technology learns skills and aspects of our world — of people, our motivations, interactions, and the way we communicate — by making sense of the data on which it's trained," OpenAI said in a statement.
"To ultimately make AGI that is safe and beneficial to all of humanity, we'd like AI models to deeply understand all subject matters, industries, cultures, and languages, which requires as broad a training dataset as possible." ®