Relax, breaking a website's fine-print doesn't make you a criminal hacker, says judge in US cyber-law legal row

If you ignore the T&Cs to probe a site's algorithms, don't expect the Feds to swoop in using the dreaded CFAA

Netizens probing websites' algorithms for bias and discrimination, against the sites' terms and conditions, can breathe a small sigh of relief. A US federal court has ruled it's not a criminal offense to flout a consumer-grade website's fine-print.

The question arose after boffins and journalists, with the help of the ACLU, sued [PDF] the US Attorney General in 2016, arguing the nation's Computer Fraud and Abuse Act (CFAA) is unconstitutional.

The plaintiffs wanted to investigate job ad websites by creating fake profiles to test the underlying algorithms for discrimination and bias. Such efforts would be against the sites' terms of use, and the researchers feared they could, as a result, be prosecuted as criminal hackers under the section of the CFAA that outlaws unauthorized access to a computer system. To preempt any such prosecution, the group sued Uncle Sam, challenging the law.

On Friday, District Judge John Bates of the US District Court for the District of Columbia could not have been clearer in his ruling [PDF]. Rather than decide on the constitutional argument, the judge simply said ignoring a website's T&Cs isn't a criminal violation of federal anti-hacking laws:

The court concludes that the CFAA does not criminalize mere terms-of-service violations on consumer websites and, thus, that plaintiffs’ proposed research plans are not criminal under the CFAA.

That means disobeying the fine print can't lead to federal criminal charges. He thus dismissed the lawsuit as moot. The ruling is a gold mine of zingers, including the observation that online T&Cs are too much faff for people to deal with and keep up with:

Websites’ terms of service provide inadequate notice for purposes of criminal liability. These protean contractual agreements are often long, dense, and subject to change. While some websites force a user to agree to the terms before access — so-called “clickwrap” — others simply provide a link at the bottom of the webpage, or place the terms, often in fine print, elsewhere on the page.

Christian Sandvig, a professor of digital media at the University of Michigan, was among the academics suing the Attorney General. He told The Register he and his co-plaintiffs decided to challenge Uncle Sam after he and colleagues at the University of Illinois at Urbana-Champaign presented a paper in 2014. That paper [PDF] discussed ways to audit a website's algorithms for discrimination, by scraping the site, probing its APIs, making fake accounts to test the service, and so on.

“It is quite likely researchers will run afoul of a platform’s terms of service and the US Computer Fraud and Abuse Act,” by doing so, Sandvig et al wrote in their paper, presented to Data and Discrimination, a pre-conference of the 64th annual International Communication Association meeting in Seattle.
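As a rough illustration of the "sock puppet" auditing approach the paper describes, here is a minimal Python sketch: two test profiles identical except for one protected attribute are submitted, and their outcomes compared. The site query is simulated here, with a deliberately biased scorer so the audit has something to detect; all function and field names are illustrative, not any real site's API.

```python
# Hypothetical sketch of a paired-profile ("sock puppet") algorithm audit.
# Real audits would submit the profiles to a live job-ad site, which is
# exactly the activity the plaintiffs feared would violate the CFAA.

def query_job_site(profile):
    # Stand-in for a real request to a job site. We simulate a ranking
    # algorithm with a built-in penalty, so the audit can surface it.
    base_score = len(profile["skills"]) * 10
    if profile["gender"] == "female":
        base_score -= 5  # simulated bias under test
    return base_score

def audit_pair(attribute, value_a, value_b):
    # Build two profiles identical except for the attribute under test,
    # query the service with both, and return the outcome gap.
    base = {"skills": ["python", "sql", "statistics"]}
    profile_a = {**base, attribute: value_a}
    profile_b = {**base, attribute: value_b}
    return query_job_site(profile_a) - query_job_site(profile_b)

gap = audit_pair("gender", "male", "female")
print(f"score gap attributable to gender: {gap}")
```

A non-zero gap between otherwise-identical profiles flags the attribute as influencing the algorithm's output, which is the core signal such audits look for.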


“The CFAA has been criticized as overbroad legislation that criminalizes unauthorized access to any computer, where authorization can be defined by the website operator. The inherently oppositional nature of an algorithm audit makes it difficult to imagine that an operator will consent to scraping.”

In the ensuing lawsuit, the plaintiffs argued the CFAA chilled First Amendment rights and was too vague, making it difficult to know which actions were strictly prohibited. People, they argued, could therefore not be tried fairly under it.

In response, the US government filed to have the case thrown out, citing a lack of credible evidence. And now Judge Bates has denied each side's motion for summary judgment and declared the whole thing moot because, well, the CFAA doesn't criminalize T&Cs violations.

As more companies use software, particularly AI code, to analyze data and make decisions for users, researchers have fought to uncover discriminatory algorithms in these systems. For example, eggheads discovered Facebook allowed advertisers to prevent credit, housing, and job ads from being shown to some of its users based on their race, gender, and age.

“While this case has been in court we've seen all sorts of shocking revelations about the behavior of online platforms,” Sandvig told The Register. “I hope it's now clear to us today in 2020 that developments in AI demand accompanying public interest oversight; it's the job of researchers and journalists to provide it, and this role should be protected by appropriate laws and regulations.

“Before this case I had to explain to my students that anyone can be defined as a hacker and a felon if it's convenient for the US government. Now I feel safer retiring that speech.” ®
