If your AI does the crime, you'll do the time, warns DoJ

Add compliance requirements to your AI to-do list

If juggling the extreme cost and hazy ROI of AI weren't enough of a headache, the United States Department of Justice (DoJ) now expects enterprise compliance officers to start weighing the tech's potential for harm – or risk stiff fines if it breaks the law.

Nicole Argentieri, the principal deputy assistant attorney general for the DoJ's criminal division, discussed the changes made to the Evaluation of Corporate Compliance Program (ECCP) guidelines [PDF] in an address to the Society of Corporate Compliance and Ethics earlier this week.

The guidelines detail how DoJ prosecutors should approach criminal investigations and evaluate the effectiveness of companies' compliance programs at preventing criminal behavior. As such, the ECCP effectively functions as a guide for compliance officers looking to avoid the DoJ's ire.

After a pilot program, these rules have officially been extended to cover the use of AI. The tech is increasingly being deployed by businesses and could therefore conceivably be used to make decisions or facilitate actions that are less than legal.

The ECCP guidelines include a list of questions the DoJ thinks compliance officers should ask themselves about the use of AI systems because those are exactly the questions prosecutors will be asking in the event of an investigation. Examples include:

  • "How does the company assess the potential impact of new technologies, such as artificial intelligence, on its ability to comply with criminal laws?"
  • "How is the company curbing any potential negative or unintended consequences resulting from the use of technologies, both in its commercial business and in its compliance program?"
  • "How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?"

You can find the full list of AI-related compliance questions on page four of the ECCP document.

According to Argentieri, per a transcript, "prosecutors will consider whether the company is vulnerable to criminal schemes enabled by new technology such as false approvals and documentation generated by AI. If so, we will consider whether compliance controls and tools are in place to identify and mitigate those risks."

Additionally, the DoJ will take into consideration whether the company is actively monitoring and testing AI applications to ensure they're functioning as intended.

In other words, it doesn't matter whether it's the AI that broke the law – the company will be held accountable. Executives should therefore take steps to identify and address these risks before the DoJ comes knocking.

"Because we prosecute corporate crime, we ask not just what happened, but why it happened and what the company has done to prevent misconduct from recurring," Argentieri explained, adding: "We expect corporations to continuously review and update their compliance programs to account for emerging risk factors."

In addition to guidelines governing AI compliance, the ECCP updates also include guidance related to whistleblowers under a program designed to incentivize workers to report illegal activities. ®
