Analysis Facebook has defended its record in thwarting rogue applications and other security threats, in the face of criticism from security firms that it ought to adopt tighter application controls.
The dominant social network disputes findings from a threat report by UK-based net security firm Sophos, released earlier this week, that spam, malware and other attacks have become more effective against Facebook users over the last year.
Facebook reckons the opposite is true, and disputes the methodology adopted by Sophos, which it said looked at the volume of spam sent to Facebook users rather than the volume that actually reached their inboxes.
Facebook said: "If your spam filter catches all the spam, does it matter that your filter caught 10 per cent more?"
The social networking site reckons less than three per cent of communications on Facebook are spam, compared to industry estimates that email spam makes up 90 per cent of all electronic messages. The implication is that Sophos is focusing on the wrong problem.
Survey scams have become an almost daily occurrence on Facebook over recent months. Typically they use the lure of an application that a potential victim's friend has been tricked into installing, such as a 'Dislike' button or a link to shocking (invariably bogus) news about a celebrity.
Instead of getting the promised content, victims are invited to navigate their way through a thicket of time-wasting surveys. Scammers earn a kick-back for each victim as affiliates of unethical marketing firms.
More ambitious (and lucrative) scams attempt to trick victims into supplying their mobile number, before signing them up to a premium-rate text messaging service of questionable utility.
The scams take advantage of human stupidity rather than web security vulnerabilities. Both Sophos and Facebook agree that user education is part of the solution, but the two are split on whether Facebook itself could do more to tighten up its controls on how applications are released onto its platform.
In a statement responding to Sophos' report, Facebook said it has plenty of controls already that limit access to information.
"We have built extensive controls into the product, so that now when you add an application it only gets access to very limited data and the user must approve each additional type of data (so we do more than anyone else to educate users about passage of data, and force disclosure and user consent for each category beyond the basics).

"We have a dedicated team that does robust review of all third-party applications, using a risk-based approach. So, that means that we first look at velocity/number of users/types of data shared, and prioritise. This ensures that the team is focused on addressing the biggest risks, rather than just doing a cursory review at the time that an app is first launched.

"We make sure that we act swiftly to remove/sanction potentially bad applications before they gain access to data, and involve law enforcement and file civil actions if there is a problem."