Social media snitching bill introduced into US Congress by intel bosses

Companies told to watch customers for terrorist tendencies


A new bill introduced by Democratic senator Dianne Feinstein and Republican senator Richard Burr would oblige social media companies to report "terrorist activity" to the authorities.

The short, three-page bill is called the "Requiring Reporting of Online Terrorist Activity Act" and effectively clones a similar law requiring companies to report child abuse images.

If it passes, the new law would require any company "engaged in providing an electronic communication service or a remote computing service to the public" to report "actual knowledge of any terrorist activity, including the facts or circumstances" and to do so "as soon as reasonably possible."

In essence it would oblige Facebook, Twitter, Instagram and so on to scour their data and actively report to the US government anything that they felt was "terrorist activity."

The same measure was reportedly included in a secret, classified bill that approved the intelligence agencies' various programs – Burr and Feinstein are chair and vice-chair of the Senate Intelligence Committee – but was pulled from the final version. Following the recent shooting in San Bernardino, California, they decided to put the measure out as a separate, public bill.

"Social media is one part of a large puzzle that law-enforcement and intelligence officials must piece together to prevent future attacks," said Senator Burr. "It's critical Congress works together to ensure law-enforcement and intelligence officials have the tools available to keep Americans safe."

Not so fast

Another member of the committee, Senator Ron Wyden – a persistent critic of the security services' efforts to expand their influence across the entire communications infrastructure – was not impressed, however.

"I'm opposed to this proposal because I believe it will undermine that collaboration and lead to less reporting of terrorist activity, not more," he said. "It would create a perverse incentive for companies to avoid looking for terrorist content on their own networks, because if they saw something and failed to report it they would be breaking the law, but if they stuck their heads in the sand and avoided looking for terrorist content they would be absolved of responsibility."

He also noted: "Let's make sure the record is clear: the Director of the FBI testified a few months ago that social media companies are 'pretty good about telling us what they see.' Social media companies must continue to do everything they can to quickly remove terrorist content and report it to law enforcement."

Why it's a bad idea

Companies such as Facebook and Twitter have recently stepped up their efforts to shut down accounts that are used to promote violent or racist ideologies; Facebook was specifically criticized by German chancellor Angela Merkel a few months ago for not tackling a wave of hate speech about migrants arriving in Europe from Syria and elsewhere.

While Wyden's suggestion that companies would do less to remove offensive material if they were legally required to report it may seem a little far-fetched, the reality is that, unlike child abuse images, "terrorist activity" is a difficult concept to define.

It would almost certainly play into the prejudices of whoever the social media companies assigned to review potentially infringing posts (the color of people's skin or their religion); it would require the creation of a definition of "terrorism" that has continued to elude experts for several decades; it would presumably extend to written rather than solely pictorial information; and it would almost certainly provoke dangerous, over-the-top government responses to what may be misunderstood or satirical comments made online.

In other words, it sounds like a poorly thought-out, knee-jerk reaction that would almost certainly fail a First Amendment challenge. ®

Biting the hand that feeds IT © 1998–2021