A false sense of security: why it’s time to get real with AI

Guest Post by Kai Grunwitz, Senior VP EMEA, NTT Security…

Artificial intelligence (AI) is no longer the stuff of science fiction films. It’s already here, driving a Fourth Industrial Revolution that promises to radically reshape the world and society we live in. The changes to the way we live and work are more disruptive than anything since the original Industrial Revolution.

Much has been made of its application in cybersecurity and threat detection. While there is plenty to get excited about, AI is not a silver bullet. There are many tried-and-tested tools and processes that are just as important, if not more so, in helping organisations mitigate cyber risk.

Not a Holy Grail

AI and its sub-discipline of machine learning are fast becoming indispensable to the modern Security Operations Centre (SOC). Here, systems powered with the technology are able to ingest enormous volumes of data and spot the anomalous patterns indicative of a potential threat.

While the most common approach is supervised learning — where the analyst ‘teaches’ the algorithm what conclusions it should come up with — increasingly we are seeing effective unsupervised learning, which can work without human guidance. The result? Highly effective threat detection capabilities that free up security teams to focus on higher value tasks.
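As a rough illustration of the unsupervised idea (this is a deliberately minimal sketch, not NTT Security's tooling), an anomaly detector can be as simple as learning a statistical baseline from unlabelled historical data and flagging observations that fall far outside it:

```python
import statistics

def fit_baseline(values):
    """Learn a simple baseline (mean and standard deviation) from
    historical, unlabelled observations -- no analyst 'teaching' needed."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return mean, stdev

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the
    mean: the classic z-score test."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical baseline: hourly failed-login counts over a normal week
history = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2, 4, 3]
mean, stdev = fit_baseline(history)

print(is_anomalous(3, mean, stdev))    # a typical hour -> False
print(is_anomalous(250, mean, stdev))  # burst suggestive of brute force -> True
```

Production systems apply the same principle across thousands of correlated features rather than a single count, but the appeal is identical: the model surfaces the outliers so analysts can spend their time on investigation instead of triage.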

But AI is not the Holy Grail for security. In fact, believe too much in the promise of AI and you may begin to suffer from a false sense of security. Not all AI is created equal. The quality of an algorithm depends on how it is trained and on the records and data fed into it. Poor data quality will result in weak AI, a poor recognition rate and failing security. So do not blindly trust the label ‘powered by machine learning’.

Let’s not forget either that AI could be harnessed by cyber criminals themselves, e.g. to monitor targeted users’ social behaviour, email writing style and messaging behaviour in a bid to improve the hit rate of spear-phishing attacks. We must be open to the opportunities, but not be blind to the challenges of AI.

Back to basics

In our preoccupation with advanced machine learning algorithms, we should also be careful not to ignore best practice that could have a far bigger impact on our cybersecurity, including the basics:

1. Renew the focus on training the workforce on how to spot phishing and other security attacks. After all, cybersecurity is a shared responsibility.

2. Senior managers are a key weakness in an organisation’s frontline. NTT Security tests of senior execs in customer organisations revealed some worrying results; nearly 100% of the time we managed to compromise accounts to access critical systems — sometimes within a few minutes.

3. Don’t forget security patching. It has taken on added urgency now that organisations are witnessing an explosion in IoT endpoints that require constant updating. If you’re unsure where to start with a burgeoning IoT environment, conduct a security risk assessment based on guidelines such as ISO/IEC 27005, ISA/IEC 62443 or the GDPR.

4. Develop security policies, implement network segmentation, monitor access and conduct rigorous pen testing. AI can help with anomaly detection, but only as one part of a multi-layered approach.

Businesses need to do their security homework first: smart detection based on machine learning will be even more effective once the basics, such as patching, identity management and network segmentation, are in place. These foundations will help them keep attacks under control and move towards a proactive strategy.

Don’t forget the bigger picture. Our most recent Risk:Value report highlights that global IT departments are spending less today on security than last year, while the number of organisations with a formal security policy in place has barely altered since 2017. This needs to change.

Above all, we must remember that AI is only as good as the people training it. The prejudices of humans can all too easily lead to biased machine learning outcomes. And as these algorithms make ever more complex decisions, it becomes harder to understand how they arrive at them. That raises vital questions, even before we touch the sensitive area of Ethical Intelligence (EI).

The decisions of machines will increasingly dominate our future. In cybersecurity, they could mean the difference between protecting hundreds of thousands of customers from a major service outage and plunging them into one. In the end, do you trust the machine? It’s a question we will need to answer soon.

It is time to get real. Leverage AI for threat detection and cyber defence but also remember to implement a smart proactive cyber security strategy that is enriched by even smarter people.

Visit https://www.nttsecurity.com