OpenAI predicts biz can break a billion in revs by 2024
Plus: Suomi security warnings and artists rebel against AI on Artstation
In Brief The squishy brains behind OpenAI's artificial ones are predicting that products like the ChatGPT system will see money flooding in – with a forecast of around $1 billion in revenue by 2024.
According to an investors' briefing document seen by Reuters, the machine-learning biz expects to break $200 million in revenues next year and bust through the billion mark 12 months later. Founded by, among others, Elon Musk and Y Combinator's Sam Altman, the outfit is currently valued at around $20 billion.
Part of the reason for such prognostications could be an increased role for Microsoft. Redmond took a $1 billion stake in OpenAI in 2019 and is reportedly looking to increase its investment, with a view to rolling OpenAI's tools like ChatGPT into the software giant's suite of products for knowledge workers.
Not that the latter tool has gone down well with many coders. Stack Overflow has temporarily banned ChatGPT-generated submissions to the site because the error rate is so high. Meanwhile, Microsoft is also pressing on with GitHub's Copilot code-generation tool, despite legal issues.
Finnish government warns of AI online attacks
A report this week by the Finnish Transport and Communications Agency warned that AI-supported attack tools are going to become the bane of security workers' lives. The research predicts [PDF] a five-year timescale in which criminals use AI systems to automate and extend vulnerability scanning, scan huge datasets to make phishing more accurately targeted, and use code to impersonate humans for financial and access purposes.
Some of these latter techniques are already in use – GAN systems are routinely used to generate false faces and occasionally voices for fraudulent purposes. This is going to grow rapidly, the authors predict. But the use of AI to search for flaws in firewalls and other systems is also going to see huge growth in this early period.
By year five, however, the team expect to see AI tools extending all the way along the attack chain – from invading a network to exfiltrating data or executing commands, and reacting to protective software's efforts to find and stop the attack. At the moment there are no publicly available training models for criminals to use for this, but that could change soon. "Nation-state attackers will be (or already are) the first likely threat actor to use AI-enabled cyber attacks, because they are deliberate, calculated, well-funded and supported with enough resources to target anything or anyone they deem worthwhile," the researchers state. "After widespread nation-state adoption of AI-based cyber attack tools, the usage of AI in cyber attacks will likely trickle down to less skilled and resourced adversaries."
ArtStation users revolt over AI stealing their work
It's been a busy week at the artists' community ArtStation, after protests against the use of members' images to train AI rivals. On Tuesday the users of the site – which is owned by Epic Games – began to replace their art with "AI is theft" banners, protesting not only at the inclusion of AI-generated art but also the scraping of their portfolios to generate AI images.
Epic responded with a change to the terms and conditions. While the site's default setting will be to allow AI engines to use images on the site, artists can now opt out. It will, however, continue to host AI-generated images, although it's looking at ways to allow them to be screened off by users.
"We believe artists should be free to decide how their art is used, and simultaneously we don't want to become a gatekeeper with site terms that stifle AI research and commercialization when it respects artists' choices and copyright law," it said.
In the long term it looks as though the fight by human artists to protect their work and livelihoods may be a lost cause. Image libraries like Shutterstock and Getty have already decided AI is OK on their sites. ®