Tech world won't have long to fall in line when EU signs off on AI Act
Has your org already started baking AI into its systems? Watch out. Staggered compliance timetable expected after draft text leaked
Users and builders of AI systems face a race against time to comply with incoming European legislation if lawmakers continue on their current trajectory.
After the agreed text of the EU's AI Act was leaked this week, commentators have urged organizations using the tech to monitor the legislation's progress closely, as some businesses may have no more than a year to conform.
Kirsten Rulf, a partner and associate director at Boston Consulting Group, told us organizations might have only six to 12 months to prepare for most of the rules and restrictions, while providers of high-risk systems will be encouraged to meet the regulations much sooner.
"The general-purpose AI providers, for example, foundation models and generative AI applications, need to be ready to comply within the year. These are ambitious timelines, especially as most European companies are still codifying what responsible AI means for them," she said in a statement sent to The Register.
The current timeline sets formal adoption at the EU ambassador level for February 2, after which the act could be published and adopted in May. The timetable will be tight, as European Parliament elections are set to take place in June.
"The debate in the European Parliament will also have to be imminent as the elections are nearing," Rulf said, "but businesses need to start planning – if the current pace is maintained, it is highly plausible that the EU AI Act will be ready by the end of this legislative period."
Requirements to comply with the rules will be staggered according to the categories of AI, as set out in the draft act, explained Tanguy Van Overstraeten, partner and global head of privacy and data protection with law firm Linklaters. The legislators are proposing a pyramid system in which some categories will be expected to comply with the rules sooner than others.
For prohibited uses, organizations will be expected to comply within six months of the act's entry into force, Van Overstraeten said.
Such uses include biometric categorization systems that claim to sort people into groups based on politics, religion, sexual orientation, and race. The untargeted scraping of facial images from the internet or CCTV, emotion recognition in the workplace and educational institutions, and social scoring based on behavior or personal characteristics were also included on the prohibited list, according to a provisional agreement.
The next tier is general-purpose AI, which includes generative AI applications such as OpenAI's ChatGPT. "They talked about general-purpose AI because they want to be broader than just generative AI. Although there is no real scientific definition, they wanted it to be broader than systems made for generating text," Van Overstraeten said.
The EU said general-purpose AI models would need to meet certain criteria to comply with the law. Providers will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on their energy efficiency, according to the provisional agreement.
High-risk systems will have longer to comply: standalone systems will get 24 months, while embedded systems, such as those in medical devices, will get three years, Van Overstraeten said.
The proposed AI Act has attracted criticism for burdening research. Last year, Meta's chief AI scientist, Yann LeCun, said regulating foundation models was effectively regulating research and development. "There is absolutely no reason for it, except for highly speculative and improbable scenarios. Regulating products is fine. But [regulating] R&D is ridiculous."
Van Overstraeten said that regulatory sandboxes would allow for development outside the strict provisions of the legislation, provided that the authorities approve.
"According to the new text, there will be real-world testing for a period of six months, so you could have an AI developer that can first test in a virtual world without exposing any users," he said.
"But then, when that company has been approved by the national authorities, they could get six months for real-world testing, and an additional six months if they need, which I think is a good idea because then it helps businesses to continue developing [AI]." ®