Microsoft Teams: A vector for child sexual abuse material with a two-day processing time for complaints
Redmond and Cupertino criticized for slow and weak responses by Australian regulator
Australia's eSafety Commissioner, a government agency charged with keeping citizens safe online, has delivered a report on seven tech platforms' mechanisms to protect children from online sexual abuse – and found that most respond slowly and lack sound processes for doing so.
The commissioner oversees Australia's Basic Online Safety Expectations, which spell out how the country expects online platforms to behave. At their core, the Expectations require that platforms do their best to stamp out unlawful or harmful material, allow users to report it, and respond to requests for information from the commissioner.
In August 2022, the commissioner sent Transparency Requests requiring seven service providers – Apple, Meta, WhatsApp, Microsoft, Snap, Skype, and anonymous chat service Omegle – to explain the tools, policies and processes they use to address child sexual exploitation and abuse (CSEA). The commissioner asked how they address the proliferation of such vile material, the online grooming of children, and the use of video calling and conferencing services to provide live feeds of child abuse.
Among its findings, the commissioner noted that Microsoft isn't using the PhotoDNA image-detection technology the company helped develop and promotes as a tool "to stop the spread of online child sexual abuse photos."
The commissioner also singled out Apple and Microsoft for criticism on the grounds that neither "attempt to proactively detect child abuse material stored in their widely used iCloud and OneDrive services, despite the wide availability of PhotoDNA detection technology."
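PhotoDNA itself is proprietary and its signature format is not public, but the server-side pattern it enables is simple to describe: compute a robust hash of each stored or uploaded image and compare it against a curated list of hashes of known abuse material. Below is a minimal sketch of that pattern using the open imagehash perceptual-hash library – the hash value, threshold, and matches_known_image helper are illustrative stand-ins, not PhotoDNA's actual algorithm or any vendor's pipeline.

```python
# Illustrative sketch of hash-matching uploads against known images.
# PhotoDNA is proprietary; this uses the open `imagehash` perceptual hash
# (pip install imagehash pillow) purely to show the general pattern.
from PIL import Image
import imagehash

# Hypothetical curated hash list. Real systems source verified hashes
# from clearinghouses such as NCMEC rather than hardcoding them.
KNOWN_HASHES = [imagehash.hex_to_hash("fd01694ae7f2b4c1")]  # placeholder value

MAX_DISTANCE = 5  # Hamming-distance threshold in bits; smaller is stricter


def matches_known_image(path: str) -> bool:
    """Return True if the image's perceptual hash is near any known hash."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)


if __name__ == "__main__":
    if matches_known_image("upload.jpg"):
        print("Flag for human review")  # a hash hit alone shouldn't auto-action
```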
"Apple and Microsoft also reported that they do not use any technology to detect live-streaming of child sexual abuse in video chats on Skype, Microsoft Teams or FaceTime, despite the extensive use of Skype, in particular, for this long-standing and proliferating crime."
Microsoft offered the following explanation for not monitoring Teams video calls for CSEA:
As there are significant jurisdictional and other conflicts associated with operating a global service for use by individuals in one country to communicate with individuals in other countries, Microsoft does not deploy classifiers or other automated content detection tools on video conferences held through Microsoft Teams.
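For context, deploying classifiers on live video typically means sampling frames from the stream at intervals and scoring each with a trained model. The sketch below shows that general shape only – classify_frame is a hypothetical stand-in for a real model, and nothing here describes Microsoft's or any other vendor's systems.

```python
# Minimal sketch of frame-sampling detection on a video stream.
# `classify_frame` is a hypothetical stand-in for a trained model;
# this illustrates the general technique, not any vendor's pipeline.
import cv2  # pip install opencv-python

SAMPLE_EVERY_SECONDS = 5   # how often to pull a frame for scoring
FLAG_THRESHOLD = 0.9       # illustrative score above which a frame is flagged


def classify_frame(frame) -> float:
    """Hypothetical model call returning an abuse-likelihood score in [0, 1]."""
    return 0.0  # placeholder: a real system would run inference here


def scan_stream(source: str) -> None:
    cap = cv2.VideoCapture(source)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreported
    interval = max(1, int(fps * SAMPLE_EVERY_SECONDS))
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of stream or read error
        if index % interval == 0 and classify_frame(frame) >= FLAG_THRESHOLD:
            print(f"frame {index}: flag for human review")
        index += 1
    cap.release()


if __name__ == "__main__":
    scan_stream("meeting_recording.mp4")  # placeholder source
```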
Microsoft reported that the average response time for reports of CSEA on Teams was two days – the same as for OneDrive, and a day longer than for its Xbox Live services. The report notes that Microsoft initially indicated some Teams review queues left matters under consideration for 19 days.
Other platforms did better: Meta reported that Instagram can remove CSEA within two hours and forty seconds of detecting it on a device, and handles reports from Instagram users in around 40 minutes.
But the report is also full of the platforms' evasions, deflections, and excuses for why more comprehensive measures to detect and eradicate CSEA are not in place.
Meta's services are siloed in places: WhatsApp, for example, does not share information about banned users with Instagram or Facebook, so a user banned for CSEA on one of Meta's platforms may not be banned on the others.
Snap and Microsoft don't even try to detect previously unobserved CSEA material.
Only Omegle tries to detect CSEA in livestreams, video calls or video conferences. Snap and Apple don't attempt to identify grooming of minors. Apple doesn't offer any reporting tools in its online services.
We could go on, but you get the idea. Across the report's 63 pages readers will find many examples of inaction that, if corrected, would offer stronger protections to children.
eSafety Commissioner Julie Inman Grant pointed out that the report is not comprehensive – it only details responses to questions her agency posed to seven specific service providers. But some of the answers describe respondents' global capabilities, meaning the report is at least a window into how some of tech's most powerful companies address – or fail to address – the horrors of CSEA. ®