AI-powered IT security seems cool – until you clock miscreants wielding it too

Field both embraced, feared by enterprise

Comment We're hearing more about AI or machine learning being used in security, monitoring, and intrusion-detection systems. But what happens when AI turns bad?

Two interesting themes emerged from separate recent studies: the growth of artificial intelligence, coupled with concerns about its potential impact on security.

A survey of 5,000 IT professionals released late last month revealed three major threats techies believe they will face over the next five years: malicious AI attacks in the form of social engineering, computer-manipulated media content, and data poisoning. Just four in 10 pros quizzed believed their organizations understood how to accurately assess the security of artificially intelligent systems.

That was according to the Information Systems Audit and Control Association's (ISACA) second annual Digital Transformation Barometer, which named AI and machine learning among the top three technologies likely to be deployed in the next year.

They were also listed in the top five technologies likely to face resistance.

Interestingly, ISACA highlighted the different perceptions of AI risk between the digitally informed and business leaders who are technically illiterate.

"For AI, having digitally literate leaders correlates to lower perceived risks, which can be key when making the case for deploying technologies," ISACA noted. "33 per cent of companies whose leaders do not possess technological expertise perceive AI to be high-risk, while just 25 per cent of companies with digitally literate leaders perceive AI to be high-risk. Organisations led by digitally literate leaders were almost twice as likely to deploy AI than other organizations (33 per cent compared to 18 per cent)."

When it came to emerging technologies, a decision on whether or not to deploy was found to be largely affected by familiarity. Using AI as an example, 76 per cent of enterprises testing it said that it was worth the risk, with just nine per cent saying it was not. In enterprises that were not testing AI, the confidence in it being worth the risk dropped by a third, while the proportion of respondents who said it is not worth the risk more than doubled.

Rise of the Machines

Are the ISACA members right to be concerned about AI security risks, or does simply understanding a tech make you fear it less?

A paper published earlier this year, titled The New Frontiers of Cybersecurity, backed by the National Natural Science Foundation of China, sided with the former statement.

It asserted that machine learning is capable of transforming security by mining information and learning from various types of data – such as spam emails, messages, and videos – and then evolving an autonomous detection or defense system. Continuous self-training, it claimed, will keep improving the performance of AI-powered systems, including their stability, accuracy, efficiency, and scalability. But this also works the other way round.
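The idea of learning a detection system from labeled examples can be illustrated with a toy Naive Bayes spam classifier. This is a minimal sketch, not anything from the paper: the training messages are invented, priors are assumed equal, and real systems use far larger corpora and richer features.

```python
from collections import Counter
import math

# Hypothetical toy training data, invented for illustration only
spam = ["win free prize now", "free money click now", "claim your free prize"]
ham  = ["meeting notes attached", "lunch at noon tomorrow", "project status update"]

def train(docs):
    # Count word occurrences across all documents in one class
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

spam_counts, ham_counts = train(spam), train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts):
    total = sum(counts.values())
    # Laplace (add-one) smoothing so unseen words don't zero out the score
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def classify(msg):
    # Equal class priors assumed; compare smoothed log-likelihoods
    spam_score = log_likelihood(msg, spam_counts)
    ham_score = log_likelihood(msg, ham_counts)
    return "spam" if spam_score > ham_score else "ham"

print(classify("free prize now"))            # → spam
print(classify("meeting status update"))     # → ham
```

The same mechanism cuts both ways: whoever controls the training data controls the classifier, which is why the survey respondents above flagged data poisoning as a looming threat.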

"AI is pushing the boundaries of the abilities of hackers," the paper noted. "Autonomous hacking machines powered by AI can craft sensitive information and find vulnerabilities in computer systems, thus making it much more difficult to fight hackers. Worse yet, AI is able to learn sensitive information, such as personal preferences, from a vast amount of seemingly insensitive data.

"These facts lead us to believe that hackers weaponized by AI will create more sophisticated and increasingly stealthy automated attacks that will demand effective detection and mitigation techniques."

Knowing AI and not fearing it has its place; understanding it as a tool in the hands of the enemy, however, is also worthwhile. Luckily, so far, miscreants prefer to run relatively simple attacks, usually involving phishing or automated exploitation of known vulnerabilities, rather than train and develop sophisticated machine-learning cyber-weapons. ®

We'll be examining machine learning, artificial intelligence, and data analytics, and what they mean for you, at Minds Mastering Machines in London, between October 15 and 17. Head to the website for the full agenda and ticket information.
