Knowing Your Customer: You need to, but regulation makes KYC extra-crispy...

Machines join the march against identity fraud


There’s a conundrum at the heart of know your customer (KYC), the process of verifying the identity of a company’s clients. A decade ago, KYC was a mild inconvenience that could be tackled using a few familiar procedures.

Today, as the volume of business conducted online has exploded and the value of personal data has grown, the issue of KYC has become a quagmire of confusing regulatory demands and technologies, with the potential to cause lasting harm to businesses and their customers.

In 2013, the UK’s now defunct National Fraud Authority put the annual cost of fraud at £52bn. By 2017, figures from fraud prevention service Cifas showed identity fraud at a record 174,000 cases – up 125 per cent in a decade.

Identity fraud has spread from financial services to mobile phone contracts and retail, with 80 to 90 per cent of cases taking place online where verification is often weak.

Data breaches are giving criminals the raw material they need to steal or borrow identities. Once details such as names and dates of birth are stolen, they are compromised for good and can be recirculated indefinitely.

Regulation is adding to the challenge. The last 12 months have seen a clutch of new rules: the EU’s Fourth Anti-Money Laundering Directive arrived in 2017, followed by the Revised Payment Services Directive (PSD2) and the EU’s imminent data privacy law, the General Data Protection Regulation (GDPR).

These require strong customer authentication and identification, throwing up an unavoidable problem: how to implement the rules in a way that guards against fraud but does not build a barrier for customers.

AI spots the difference

Confronting the challenge, artificial intelligence (AI) is entering the arena as a means of detecting whether the person logging in is who they claim to be.

AI has advanced rapidly of late, giving us machine learning and deep learning, a branch of machine learning in which multi-layered neural networks learn from data without explicit human guidance.

If machine learning can be used to automate decision-making for a given data set, the hope is that deep learning can mimic how humans make decisions about the real world, particularly their ability to learn from errors.

The roots of AI go back decades but a surprising amount of deep learning orthodoxy goes back only half a decade at most. That has not prevented it from being set loose on real customers, not only to detect transaction fraud but also to verify identification.

Many companies now say they use AI in some capacity – there are perhaps 30 open source algorithms to get organisations started. Yet this doesn’t appear to have stemmed fraud, even in industries such as financial services where AI should be a perfect fit.

Does this mean AI is being over-sold? Not necessarily, but it does pose some difficult questions about implementation.

“It’s not always being done well in the commercial world. Just because you have access to the algorithms doesn’t mean you know what to do with them,” says Mark Whitehorn, professor of analytics at the University of Dundee.

Many of today’s machine learning systems can be set to work on decision support and fraud detection, but humans are still needed somewhere in the chain to cope with exceptions, anomalies or customer interaction. Having a human in the feedback loop can also help train algorithms in a way that can make them smarter.
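
To make that feedback loop concrete, here is a minimal sketch in Python using scikit-learn: the model decides clear-cut cases itself and routes ambiguous ones to a reviewer, whose verdicts are folded back into the training data. The thresholds, the synthetic data and the human_verdict helper are illustrative assumptions, not a production design.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))                         # synthetic transaction features
y_train = (X_train[:, 0] + X_train[:, 1] > 1).astype(int)   # synthetic fraud labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

X_new = rng.normal(size=(20, 4))
proba = model.predict_proba(X_new)[:, 1]         # fraud probability per new event
uncertain = (proba > 0.3) & (proba < 0.7)        # neither clearly good nor clearly bad

def human_verdict(x):
    # Hypothetical stand-in for a manual review queue.
    return int(x[0] + x[1] > 1)

for i in np.where(uncertain)[0]:                 # only ambiguous cases reach a person
    X_train = np.vstack([X_train, X_new[i]])
    y_train = np.append(y_train, human_verdict(X_new[i]))

model.fit(X_train, y_train)                      # retrain with the human feedback folded in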

“You turn somebody down for a loan and they ask why, and the answer is because the machine learning algorithm said so. What can they change? The bottom line is we don’t know,” Whitehorn says.

The GDPR puts into law the principle that individuals – called data subjects in the regulation – have the right not to be subject to a decision based solely on automated processing.

Today’s machine learning and deep learning are really about risk control, which improves as larger data sets are fed in over time. One example is neural networks used in areas such as image analysis, where employing people would be too expensive or insufficiently accurate.

The challenge with documents is that they can be easily forged to a high standard, with a vast number of variations according to the type of document and where and when it was issued. This is difficult for humans to keep up with but it’s an ideal job for deep learning technology, which can spot even tiny anomalies.
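
As an illustration of the kind of model involved, here is a small convolutional network in Python using PyTorch that scores a scanned document as genuine or forged. The architecture, image size and layer widths are assumptions made for the sketch; a real verification system would be trained on a large labelled corpus of document images.

import torch
import torch.nn as nn

class DocumentForgeryNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level texture cues
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # print and hologram patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 2),                   # logits: genuine vs forged
        )

    def forward(self, x):                                 # x: (batch, 3, 224, 224)
        return self.classifier(self.features(x))

net = DocumentForgeryNet()
scan = torch.randn(1, 3, 224, 224)        # stand-in for a normalised document scan
logits = net(scan)
print(logits.softmax(dim=1))              # P(genuine), P(forged)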

“AI has come on leaps and bounds and can do very complex learning such as recognising pictures,” says Whitehorn.

Another challenge is spotting suspicious patterns, for example noticing when a customer has made purchases in three countries within a short time frame. Machine learning simply measures the extent to which an event departs from what is defined as normal for that type of customer.
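
That idea can be expressed in a few lines of Python. The sketch below scores an event by how many standard deviations it sits from a customer’s own baseline; the countries-per-day feature and the data are invented for illustration.

import numpy as np

def anomaly_score(history, event_value):
    """Standard deviations between this event and the customer's baseline."""
    mean, std = np.mean(history), np.std(history)
    return abs(event_value - mean) / max(std, 1e-9)

# A customer who normally buys in one or two countries per day...
countries_per_day = np.array([1, 1, 2, 1, 1, 1, 2, 1])
# ...suddenly makes purchases in three countries in a few hours.
score = anomaly_score(countries_per_day, 3)
print(f"anomaly score: {score:.1f} sigma")   # a large score flags the event for review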

Humans still needed

It may seem counterintuitive but, despite the drive to automate detection, humans often remain best placed to spot fraud.

PwC’s Global Economic Crime Survey 2018 found that 22 per cent of fraud was picked up by humans at some point in the transaction verification process. This was ahead of whistleblowing and internal tip-offs on 13 per cent, internal audits on 8 per cent, and accidental discovery, also on 8 per cent.

PwC concluded: “The percentage of frauds detected by technology has decreased since our last survey [in 2016], especially in the key areas of suspicious transaction monitoring and data analytics.”

This points to the limits of machine learning and the growing volume of work needed to keep the algorithms ticking over. A common failure is to take, say, five years’ worth of historical data, train a model on it, and then assume fraud patterns will stay put; when fraudsters change tactics, the model quietly goes stale.
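
One common mitigation, sketched below in Python with scikit-learn, is to retrain on a rolling window of recent transactions rather than a fixed historical snapshot, so old patterns fall out of the model as new ones arrive. The window size, retrain cadence and synthetic data are assumptions for illustration.

from collections import deque
import numpy as np
from sklearn.linear_model import LogisticRegression

WINDOW = 10_000                      # keep only the most recent N labelled transactions
window_X = deque(maxlen=WINDOW)
window_y = deque(maxlen=WINDOW)
model = LogisticRegression()

def observe(x, label):
    """Append a newly labelled transaction; old data falls off automatically."""
    window_X.append(x)
    window_y.append(label)

def retrain():
    """Refit on the current window, e.g. on a nightly schedule."""
    model.fit(np.array(window_X), np.array(window_y))

rng = np.random.default_rng(1)
for _ in range(200):                 # simulate a stream of labelled events
    x = rng.normal(size=3)
    observe(x, int(x.sum() > 0))
retrain()
print(model.predict(rng.normal(size=(1, 3))))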

The biggest limitation on the evolution of machine learning and deep learning is a shortage of people who know how to make the technology work. This might be an argument for wrapping it into a service that allows organisations to implement AI without having to start from scratch.

As more industries look to machine and deep learning, the skills shortage will only become more acute. A company offering machine learning, deep learning and possibly human verification looks good on paper, but what counts beyond the brochure?

“It’s not just about pushing the data through an algorithm and it works. How do you distinguish between one company and another? On track record and who they have working for them,” says Whitehorn.

