Big Tech falls in line with Euro demands to fight bots, deepfakes, disinformation

Six percent of revenues at risk if Code of Practice broken


Meta, Twitter, Google, Microsoft and other tech companies and publishers have agreed to fight disinformation online in accordance with the European Commission's latest Code of Practice rules, which were published on Thursday.

The code [PDF] lists a broad set of commitments that signatories can choose to adhere to in the fight against digital fakery. Among the options are taking steps to demonetize disinformation, meaning businesses should avoid placing ads next to fake news or otherwise profiting from the spread of false information online, and clearly labeling political advertisements.

Other commitments cover making data from social media platforms more transparent and available to researchers, and supporting the work of fact checkers. The EU also updated the guidelines to tackle the rise of fake bot accounts and AI-generated deepfakes. Signatories promise to outline their internal policies for dealing with manipulated content, and must show that the algorithms they use to detect and moderate deepfakes are trustworthy.

Věra Jourová, the EU's vice president for values and transparency, said in a statement: "This new anti-disinformation Code comes at a time when Russia is weaponising disinformation as part of its military aggression against Ukraine, but also when we see attacks on democracy more broadly. We now have very significant commitments to reduce the impact of disinformation online and much more robust tools to measure how these are implemented across the EU in all countries and in all its languages." 

Thirty-three entities have signed up to the latest version of the Code of Practice, including social media platforms, software vendors, media companies, and advertising industry organisations.

Although the Code of Practice is voluntary, parts of it are backed up by the Digital Services Act (DSA). Thierry Breton, the commissioner for the Internal Market, warned that large companies could be fined up to 6 per cent of their annual revenues if they breach the incoming law.

"Disinformation is a form of invasion of our digital space, with tangible impact on our daily lives…Spreading disinformation should not bring a single euro to anyone. To be credible, the new Code of Practice will be backed up by the DSA - including for heavy dissuasive sanctions. Very large platforms that repeatedly break the Code and do not carry out risk mitigation measures properly risk fines of up to 6 [per cent] of their global turnover," he said

Signatories will have six months to implement the measures outlined in the Code of Practice. They are expected to report to the European Commission at the start of 2023, detailing the actions taken to uphold their commitments. ®
