Child psychiatrist jailed after making pornographic AI deep-fakes of kids

Perp said to have secretly recorded patients – and digitally undressed them using web neural networks

A child psychiatrist was jailed Wednesday for the production, possession, and transportation of child sexual abuse material (CSAM), including the use of web-based artificial intelligence software to create pornographic images of minors.

Prosecutors in North Carolina said David Tatum, 41, who was found guilty by a jury in May, was sentenced to 40 years in prison followed by 30 years of supervised release, and was ordered to pay $99,000 in restitution.

"As a child psychiatrist, Tatum knew the damaging, long-lasting impact sexual exploitation has on the wellbeing of victimized children," said US Attorney Dena J. King in a statement. "Regardless, he engaged in the depraved practice of using secret recordings of his victims to create illicit images and videos of them."

"Tatum also misused artificial intelligence in the worst possible way: to victimize children," said King, adding that her office is committed to prosecuting those who exploit technology to harm children.

His indictment [PDF] provides no detail about the AI software used; another court document [PDF] indicates that Tatum, in addition to possessing, producing, and transporting sexually explicit material involving minors, viewed AI-generated images of children on a deep-fake website.

The trial evidence cited by the government includes a secretly made recording of a minor (a cousin) undressing and showering, and other videos of children participating in sex acts.

"Additionally, trial evidence also established that Tatum used AI to digitally alter clothed images of minors making them sexually explicit," prosecutors said. "Specifically, trial evidence showed that Tatum used a web-based artificial intelligence application to alter images of clothed minors into child pornography."

Two months ago, according to CNN, a South Korean man was sentenced to two and a half years in prison for generating sexual images of children.

The use of AI models to generate CSAM, among other harmful content, has become a matter of serious concern among lawmakers, civil society groups, and companies selling AI services.

In prepared remarks [PDF] delivered at a US Senate subcommittee hearing earlier this year, OpenAI CEO Sam Altman said, "GPT-4 is 82 percent less likely to respond to requests for disallowed content compared to GPT-3.5, and we use a robust combination of human and automated review processes to monitor for misuse. Although these systems are not perfect, we have made significant progress, and are regularly exploring new ways to make our systems safer and more reliable."

Altman said OpenAI also relies on Thorn's Safer service to spot, block, and report CSAM.

Yet efforts to detect CSAM after it has been created could diminish online security by requiring network surveillance. A recent report from investigative organization Balkan Insight says groups like Thorn have been backing CSAM-detection legislation that would make online content scanning compulsory, in part because they provide that scanning service. ®
