For Facebook, ignorance is the business model: Social net is shocked – SHOCKED – that people behave badly

See no evil, hear no evil, speak no evil

Analysis No one at Facebook had any idea anyone might use its ad tools to target "Jew haters," said COO Sheryl Sandberg earlier this week.

Of course not. Facebook, like its rival Google, thrives on the income of ignorance and contrition.

To prevent money laundering, financial institutions must comply with know-your-customer laws.

Facebook and Google know everything about their product – the people who use their free services – but as little as possible about everything else, because knowledge goes hand-in-hand with liability.

As Sandberg acknowledged, Facebook's many bright people, who spend a lot of time tweaking the company's highly profitable ad algorithms, did not learn that user self-identification had spawned a bigoted ad buying category until ProPublica told them.

"We never intended or anticipated this functionality being used this way – and that is on us," she said. "And we did not find it ourselves – and that is also on us."

Ignorance is not a bug; it's a feature. It's how Facebook sold $100,000 in ads to Russian agents seeking to influence the 2016 election. It's how Facebook's Instagram republished a rape threat sent to a reporter as an ad.

Revenue is better without responsibility.

"When you have self-service ad platforms where anyone can go in and buy ads and the network (in this case Facebook) is not held liable for the appropriateness of the creative, this is what can happen," said Augustine Fou, a cybersecurity and ad fraud researcher who runs ad consultancy Marketing Science, in an email to The Register. "This is just self-serve advertising on the internet at a scale that is not policeable."

Ignorance is the online ad industry's original sin, which is ironic for a business that sold itself as a way to understand marketing effectiveness through data.


And it's not a new problem. Recall Google's $500 million settlement with the Justice Department in 2011 for allowing Canadian pharmacies to advertise prescription drugs to US customers through its AdWords service from 2003 through 2009.

In a statement released at the time, Google said, "It's obvious with hindsight that we shouldn’t have allowed these ads on Google in the first place."

It was obvious from the outset, but the glint of coin can be blinding.

Facebook's fix to prevent hate-based advertising is to remove the self-reported targeting fields.

That's treating the symptom rather than the disease. The company would do better to ensure every entity participating in its ad ecosystem was identified, disclosed and held responsible for its actions, whether that involves buying ads, accepting them, or anything else.

Ignorance has become the path of least resistance and maximum revenue on the internet. Verification of identities, facts, customers and ad claims tends to be expensive.

So rather than traffic in expensive artisanal news, backed by vetted ads from trusted brands, Facebook and Google have come to prefer homogenized content, a suspect slurry of salesmanship and storytelling, from sources unknown.

A responsible news organization would test its product in an effort to make sure it was fit for human consumption; Facebook and Google prefer to clean up after the gastrointestinal distress with an apology.

Algorithms were supposed to save us. Copyrighted content uploaded to YouTube? No worries, we'll flag it. Sorry to have infringed. Naked people? We can detect that. Spam? It's under control. Malware? We're on it.

But the people targeted by these algorithms turn out to be fairly adept at outsmarting them. That's why Google has grudgingly agreed to refund losses due to ad fraud, which it euphemistically calls "invalid traffic."

Thus Facebook is throwing people at the problem. Sandberg said the company plans to double the number of people focused on election integrity and to add 250 in total across its various security and community teams. According to CEO Mark Zuckerberg, the social network had some 4,500 content monitors in May and planned to add 3,000 more over the course of a year.

As a point of comparison, China, which has 1.4 billion people compared with Facebook's alleged 2 billion daily active users, is said to employ somewhere between 100,000 and 2 million people to monitor the internet within the country. And undesirable content still gets through.

That may explain why Zuckerberg isn't optimistic. "Now, I wish I could tell you we're going to be able to stop all interference, but that wouldn't be realistic," said Zuckerberg. ®
