US Senator Mark Warner (D-VA) has a plan to save democracy from technology, including making social media platforms liable for what their users post.
In a proposal provided to The Register, he suggests revising the safe harbor provision of the Communications Decency Act, known as Section 230. Section 230 immunizes service providers from liability for what their users post, so long as they make a good faith effort and follow certain statutory requirements.
Warner would like to see service providers held liable for failing to remove defamatory content (e.g. faked explicit images of an individual) from their platforms if the victim has won a court judgment against whoever created the offending content. (With the approval of the SESTA/FOSTA sex trafficking laws earlier this year, Section 230 immunity is already diminished.)
Warner's paper suggests tech platforms should be obligated to label bots, including automated voice systems like Google Duplex and automated social media posting software on Twitter. California is already trying to enact rules along these lines with SB 1001, the Bolstering Online Transparency (BOT) Act of 2018, which is working its way through the state assembly.
The paper also suggests forcing tech platforms to determine the geographic origin of online posts, to stop people in foreign countries masquerading as US interest groups. At the same time, it acknowledges that VPNs and other methods of IP address masking can make geolocating users difficult and raise privacy questions, without offering a way around these obstacles.
To combat fake social media accounts, the paper suggests crafting a law imposing a duty on platforms to identify and remove inauthentic users more vigorously, possibly under the authority of the Federal Trade Commission.
"Platforms have perverse incentives not to take inauthentic account creation seriously: the steady creation of new accounts allows them to show continued user growth to financial markets, and generates additional digital advertising money (both in the form of inauthentic views and from additional – often highly sensational – content to run ads against)," the paper says.
Other elements of the proposal include calls for requiring:
- API access to platform activity data, for detecting misuse;
- the formation of a government task force to defend against misinformation attacks;
- disclosure requirements for online political ads;
- a federal media literacy campaign;
- an information warfare deterrence doctrine;
- investing the FTC with privacy rule-making power;
- making AI algorithms subject to verification and transparency;
- adopting data protection rules similar to Europe's GDPR;
- rules prohibiting manipulative design (a.k.a. dark patterns);
- interoperability requirements;
- and making dominant tech resources available under fair, reasonable and non-discriminatory (FRAND) terms.
The extent to which these suggestions get taken seriously is likely to depend on how well the Democrats do in the upcoming midterm elections in November.
Currently the proposals don't have a snowflake's chance in a blast furnace of making it onto the statute books, particularly when you factor in Silicon Valley lobbying. ®