
Fighting AI with AI: Cybersecurity arms race heats up again

Alfred Siew

Earlier this year, a deepfake video was so convincing that a finance worker in Hong Kong believed he was in a conference call with his chief financial officer (CFO) and other colleagues.

Given instructions to wire out HK$200 million, he promptly did so, only to realise later that he had been tricked in an elaborate AI-generated ruse and sent the money to fraudsters, Hong Kong police revealed in February.

If any proof was needed, this was the starkest of examples of AI as a sophisticated tool in the hands of determined cyber criminals. Seeing, in the age of AI, is no longer believing.

The unfortunate Hong Kong victim had been suspicious of a message purportedly from the CFO but cast his doubts aside after he thought he saw other colleagues in the video call, CNN reported.

While such scams are still rare – they take a good amount of preparation and social engineering – cyber criminals have not been slow to take up the new capabilities offered by AI for their bread-and-butter cyber attacks.

Just as the technology helps office workers churn out work faster, so does AI help cyber criminals create more realistic scam messages, automate repeated attacks and basically scale up their efforts.

The Cyber Security Agency (CSA) of Singapore warned in July that malicious actors are exploiting AI to enhance cyber attacks through social engineering and reconnaissance.

This is likely to increase, driven by the ever-growing stores of data, which can be used to train AI models for higher quality results, it added.

In a sample of phishing e-mails, 13 per cent contained AI-assisted or AI-generated content, which was grammatically better and showed better flow and reasoning, making the messages more convincing and dangerous, the government agency stressed.

As AI research progresses, malicious actors may leverage the advances for future cyber attacks with AI-proliferated worms, automated hacking, and automated payload crafting, CSA predicted, in a threat landscape report.

Unsurprisingly, the good guys have taken to AI to fight the AI-powered cyber attacks as well. In a perennial arms race that began when the first computer viruses became prevalent in the 1980s, AI is now a big part of the arsenal.

For early PC viruses, anti-virus software became a part of the defence. As the Internet came online, organisations erected firewalls to block out the bad guys.

Today, much of an organisation's cyber defence activity is automated, from patch updates to threat detection and recovery.

And this is where AI can be a force multiplier for cyber defenders, who are often inundated by endless warnings and alerts on their virtual dashboards.

AI-powered tools enable them to be proactive – instead of reactive – by seeking out and identifying potential threats before an attack occurs.

“With AI, organisations can predict potential threats by analysing vast amounts of data in real-time, identifying vulnerabilities and prioritising them based on risk,” said Scott Caveza, a staff research engineer at Tenable, which helps businesses find cyber vulnerabilities.

AI also automates the detection, analysis, and mitigation processes, speeding up incident response and ensuring that even subtle, complex threats are identified and neutralised quickly, he told Techgoondu.

“Another key advancement is AI’s ability to provide adaptive security measures,” he added. “As threats evolve, AI systems can learn and adjust cyber posture in real-time, countering new attack methods as they emerge.”

Most cybersecurity vendors today offer some form of AI-enabled capability. Indeed, the technology is not new to the sector; it has simply become more capable.

Some, such as Check Point Software Technologies, deploy an entire AI-based platform to look out for incoming threats by analysing the massive amounts of telemetry it receives and then sending out alerts proactively.

Other vendors provide insights by closely monitoring and analysing the activities on an organisation’s servers and using AI to flag likely threats. This way, the human operators zoom in on the most important alerts.
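Vendors rarely publish the internals of these tools, but the core idea of surfacing the most important alerts can be illustrated with a toy anomaly detector. The sketch below assumes hypothetical hourly failed-login counts pulled from server logs and flags only the hours that deviate sharply from the baseline:

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=3.0):
    """Flag hours whose event volume deviates sharply from the baseline.

    event_counts: list of (hour_label, count) pairs from server logs.
    Returns labels whose z-score exceeds the threshold.
    """
    counts = [c for _, c in event_counts]
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [label for label, c in event_counts
            if abs(c - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts; one hour spikes suspiciously.
log = [("01:00", 12), ("02:00", 9), ("03:00", 11),
       ("04:00", 10), ("05:00", 240), ("06:00", 13)]
print(flag_anomalies(log, threshold=2.0))  # only the 05:00 spike is flagged
```

Real products score far richer signals than raw counts, but the principle is the same: the human operator sees one flagged hour instead of six rows of numbers.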

Securonix, which offers security analytics and operations management, recently rolled out a GenAI-powered tool to detect insider threats, for example, from disgruntled employees or contractors.

Instead of ignoring mistyped keywords for, say, the Dark Web or money laundering, the system trained on large language models can understand the underlying intention and meaning despite variations in a keyword, said Haggai Polak, chief product officer of Securonix.
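Securonix has not detailed its implementation, but the gist of catching mistyped keywords can be sketched even without a large language model, using simple fuzzy string matching as a stand-in. The watchlist terms and example message below are invented for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical terms an insider-threat tool might watch for.
WATCHLIST = ["dark web", "money laundering", "data exfiltration"]

def matches_watchlist(text, threshold=0.8):
    """Return watchlist terms a message likely refers to,
    even when the keyword is mistyped (e.g. 'drak web')."""
    words = text.lower()
    hits = []
    for term in WATCHLIST:
        # Slide a window the size of the term across the text
        # and keep the best fuzzy-match score for that term.
        n = len(term)
        best = max(
            (SequenceMatcher(None, term, words[i:i + n]).ratio()
             for i in range(max(1, len(words) - n + 1))),
            default=0.0,
        )
        if best >= threshold:
            hits.append(term)
    return hits

print(matches_watchlist("asked how to access the drak web tonight"))
```

An LLM-based system goes further than character similarity – it can catch paraphrases and coded language – but the sketch shows why exact keyword filters miss what intent-aware matching catches.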

Prevention, of course, is the best way to avoid being a victim of a cyber attack. However, even the best equipped and prepared organisations get hacked at some point. Here, AI makes a difference in recovery as well, especially if ransomware is involved.

Already, AI and machine learning are being integrated into enterprise primary storage to fight ransomware in real-time, according to enterprise storage vendor NetApp.

“Today’s technology can detect ransomware with high precision by analysing file-level signals within the storage infrastructure, in real-time,” said You Qinghong, solutions engineering lead for Greater China, Asean and South Korea at NetApp.
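One file-level signal such systems can use is byte entropy: encrypted output looks statistically random, so a sudden jump in entropy across many files is a red flag. A minimal sketch of that idea follows – the 7.5-bit threshold and sample data are illustrative assumptions, not NetApp's method:

```python
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; encrypted data approaches 8.0."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in counts if c)

def looks_encrypted(data: bytes, threshold=7.5) -> bool:
    # A sudden jump to near-random entropy across many files is
    # one file-level signal of ransomware encryption in progress.
    return shannon_entropy(data) > threshold

# Ordinary business text has low entropy; random bytes stand in
# for encrypted content here.
plain = b"quarterly report: revenue up 4 per cent " * 200
random_like = os.urandom(8000)
print(looks_encrypted(plain), looks_encrypted(random_like))  # False True
```

Production systems combine such signals with rename patterns, extension changes and write rates, which is why they can trigger immutable snapshots before the damage spreads.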

“Storage systems can also take immutable snapshots of business-critical data that can be restored promptly when the need arises,” he added.

“There should also be a last line of defense – tamper-proof, point-in-time copies of data stored offsite, that can be quickly restored,” he stressed.

Alfred is a writer, speaker and media instructor who has covered the telecom, media and technology scene for more than 20 years. Previously the technology correspondent for The Straits Times, he now edits the Techgoondu.com blog and runs his own technology and media consultancy.