Riddle of rampant fake Facebook ad clicks hits dead end

Are you bot or not?


Analysis After a startup claimed that 80 per cent of clicks on its Facebook ads were bogus, sales of pitchforks and burning torches went through the roof as pundits circled in search of a scandal. However, the figures in the case lead to an unexpected dead end rather than to a smoking gun of unimaginable fraud.

Facebook charges advertisers every time someone clicks on an ad, so obviously companies want to be sure that those clicks are coming from real humans with some dosh to spend rather than rogue software that simulates clicks and ramps up charges for businesses.

E-commerce store builder Limited Run (previously known as Limited Pressing) quit Facebook after concluding a majority of its ad clicks were machine generated. The firm, which specialises in supplying online shopping carts to musicians, analysed its web logs and concluded that (in its experience, at least) the Facebook ad platform was subject to click fraud.

Although the small biz claimed that the majority of clicks came from web browsers that didn't have JavaScript enabled - a rarity in this day and age - the social network insists that the vast majority of billable ad clicks come from browsers with the scripting language enabled.

In a now deleted Facebook post, Limited Run outlined its concerns:

A couple months ago, when we were preparing to launch the new Limited Run, we started to experiment with Facebook ads. Unfortunately, while testing their ad system, we noticed some very strange things. Facebook was charging us for clicks, yet we could only verify about 20 per cent of them actually showing up on our site.

At first, we thought it was our analytics service. We tried signing up for a handful of other big name companies, and still, we couldn't verify more than 15-20 per cent of clicks. So we did what any good developers would do. We built our own analytic software.

Here's what we found: on about 80 per cent of the clicks Facebook was charging us for, JavaScript wasn't on. And if the person clicking the ad doesn't have JavaScript, it's very difficult for an analytics service to verify the click. What's important here is that in all of our years of experience, only about 1-2 per cent of people coming to us have JavaScript disabled, not 80 per cent like these clicks coming from Facebook.

So we did what any good developers would do. We built a page logger. Any time a page was loaded, we'd keep track of it. You know what we found? The 80 per cent of clicks we were paying for were from bots. That's correct. Bots were loading pages and driving up our advertising costs.
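The verification technique Limited Run describes can be sketched roughly as follows: every ad click lands with a click identifier, the server logs the page load, and a JavaScript beacon on the page logs a second hit. Billed clicks that appear in the server log but never fire the beacon came from browsers with JavaScript off. The function and sample data below are an illustrative sketch, not Limited Run's actual code.

```python
# Minimal sketch of Limited Run's check: compare server-side page loads
# against hits from a client-side JavaScript beacon. Clicks with a page
# load but no beacon hit had JavaScript disabled - the population
# Limited Run attributed to bots.
# (Endpoint names and data are invented for illustration.)

def js_disabled_share(page_loads, beacon_hits):
    """Fraction of billed clicks whose browser never ran JavaScript."""
    loads = set(page_loads)
    no_js = loads - set(beacon_hits)
    return len(no_js) / len(loads) if loads else 0.0

# Ten billed clicks, only two of which fired the JS beacon:
clicks = [f"click-{n}" for n in range(10)]
beacons = ["click-0", "click-1"]
print(js_disabled_share(clicks, beacons))  # 0.8 - the 80 per cent Limited Run saw
```

A 1-2 per cent shortfall here is normal noise from JS-disabled browsers and blocked scripts; an 80 per cent shortfall is what prompted the accusation.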

Search Engine Watch added that the e-commerce platform firm wanted to change its name from Limited Pressing to Limited Run at around the time its doubts over click fraud flared up.

A Facebook representative apparently told Limited Run that it would need to spend $2,000 a month on advertising for this name change to be authorised.

This, it seems, is incorrect. Another firm already has a Facebook presence under the same title as Limited Run, so the name change would not have been possible - and Limited Run's initial ire at having its page name held hostage by Facebook was therefore down to a misunderstanding.

In a statement, Facebook said it was looking into the click fraud issue:

We're currently investigating their claims. For their issue with the Page name change, there seems to be some sort of miscommunication. We do not charge Pages to have their names changed. Our team is reaching out about this now.

Limited Run, which wants to put the incident behind it, has dumped its Facebook page. In a blog post, it thanked supporters and stressed that it had not set out to start a controversy about the effectiveness of Facebook ads, or anything else:

We’d like to let everyone know how much we’ve appreciated their support. It’s meant a lot to us. When we posted about leaving Facebook on Monday, we only intended our small group of customers and followers to know what was happening, and why.

We had no clue it was going to explode like it did. But now, we’re just a very small company, that wants nothing more than to go back to work. We don’t want to be known for this, and we’re going to keep turning down requests for interviews.

Facebook's advertising system is designed so that punters can only see and click on ads when they are logged into the website; they are not shown to anyone just visiting or passing through without an account, we're told. And although someone could create a string of fake accounts to log into the network and click on the ads, the dominant social network claims it disables impostors as soon as it finds them.

This explanation is, however, somewhat undermined by revelations that 83 million of the site's 955 million users are reckoned to be bogus, according to documents filed with the Securities and Exchange Commission (SEC) earlier this week.

The fakes include 45 million duplicate accounts, 23 million misclassified accounts (such as businesses, pets and so on) and, most troublingly, 14 million accounts that are used to spread undesirable traffic, such as spam, malicious links and (potentially) click fraud.

Former Google click fraud tsar Shuman Ghosemajumder, VP of strategy at web security startup Shape Security, explained the scope of the click-fraud problem posed by fake accounts.

"The level of difficulty in getting those fake accounts to successfully click on ads without getting identified as spam depends on Facebook's click fraud detection systems," Ghosemajumder told El Reg. "If they are very sophisticated, then it would be difficult for attackers to do on a large scale. If they are not, then it could be relatively easy. But the fact that accounts are required to click on ads gives Facebook a great deal of data they can analyse to determine if click fraud is occurring."

Facebook already has systems in place to detect click fraud. These systems attempt to identify and filter certain things, including repetitive clicks from a single user, clicks that appear to be from an automated program or bot, or clicks that are obviously abusive. Its systems also look at whether JavaScript is enabled in the browser.
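One of the filters described above - discarding repetitive clicks from a single user within a short window - can be illustrated with a toy sliding-window check. The threshold and window size below are invented for the sketch; Facebook's real systems are far more elaborate and undisclosed.

```python
from collections import defaultdict

# Toy version of a repetitive-click filter: a user's clicks beyond a
# per-window limit are treated as non-billable. Parameters are
# illustrative assumptions, not Facebook's actual values.

def billable_clicks(clicks, window=60.0, max_per_window=3):
    """clicks: list of (user_id, timestamp), timestamps in seconds.
    Returns the clicks that survive the repetitive-click filter."""
    recent = defaultdict(list)  # user_id -> timestamps inside the window
    kept = []
    for user, ts in clicks:
        # Drop timestamps that have aged out of the window for this user.
        recent[user] = [t for t in recent[user] if ts - t < window]
        if len(recent[user]) < max_per_window:
            kept.append((user, ts))
        recent[user].append(ts)
    return kept

# A bot hammering the ad once a second, plus one ordinary click:
clicks = sorted([("bot", float(t)) for t in range(10)] + [("human", 5.5)],
                key=lambda c: c[1])
print(len(billable_clicks(clicks)))  # 4: three "bot" clicks survive, plus "human"
```

The hard part, as Ghosemajumder notes below, is not writing such a filter but telling sophisticated mimicry apart from legitimate traffic.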

According to recent Facebook data, nearly all billable clicks resulting from desktop web browsers have JavaScript enabled, contrary to Limited Run's complaints that it was getting billed for clicks generated by bots.

"The difficult part is identifying them [classes of activity] accurately, especially when the attacker is attempting to mimic legitimate traffic," Ghosemajumder explained. "In the case of Limited Run, it was odd that browsers with JavaScript disabled were visiting the website at all, since visits to their site would not be required just to cost them money for clicks on Facebook.

"If it was a sophisticated adversary trying to harm them without getting caught, they would be trying to emulate real user behaviour and wouldn't send bots with JavaScript disabled. In any case, Facebook's response that nearly all billable clicks came from web browsers with JavaScript enabled suggests that they might have been looking at two separate samples of traffic."

Ultimately, only a careful analysis of Limited Run's log data will reveal what was actually happening, Ghosemajumder concluded.

"It's difficult to know what's going on with this case without seeing the log data from Limited Run. Google and other ad networks have mechanisms which allow advertisers to tie visits in their logs to clicks on ads directly. If there is a dispute, they can send those logs with the click IDs to the publisher for verification or investigation," he said.

"I'm not sure whether Facebook has a feature like that, but they should be able to verify whether Limited Run is looking at visits from billed clicks or not by comparing IP addresses and timestamps." ®
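The reconciliation Ghosemajumder suggests - pairing each billed click with a server-log visit from the same IP address inside a small time window - can be sketched as a simple join. Field names and the 30-second tolerance are assumptions for illustration.

```python
# Sketch of reconciling an ad network's billed clicks against an
# advertiser's own server logs by IP address and timestamp.
# Records are (ip, unix_timestamp) pairs; tolerance is an assumption.

def match_billed_clicks(billed, visits, tolerance=30.0):
    """Returns (matched, unmatched) billed clicks. Each billed click may
    consume at most one server-log visit."""
    matched, unmatched = [], []
    remaining = list(visits)
    for ip, ts in billed:
        hit = next((v for v in remaining
                    if v[0] == ip and abs(v[1] - ts) <= tolerance), None)
        if hit:
            matched.append((ip, ts))
            remaining.remove(hit)
        else:
            unmatched.append((ip, ts))
    return matched, unmatched

billed = [("10.0.0.1", 100.0), ("10.0.0.2", 200.0), ("10.0.0.3", 300.0)]
visits = [("10.0.0.1", 104.0), ("10.0.0.2", 500.0)]
m, u = match_billed_clicks(billed, visits)
print(len(m), len(u))  # 1 2
```

A large unmatched remainder would support Limited Run's claim; a near-complete match would support Facebook's suggestion that the firm was looking at a different slice of traffic.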

