In a world of hostile actors, fallible employees, and fast-changing technologies such as AI, leaders need to take personal ownership of security, reports Chris Middleton.
CEOs need to own cyber security and take personal responsibility for it. Failure to do so means that it will be just another cost centre in the organisation, rather than a foundation stone of its strategy. That was the message from one of the speakers at the Banking Cyber Security Forum in London this week, hosted by Transform Finance.
“He has to live it. Cyber security needs to be real for the CEO,” he said, suggesting that it is not about aligning security practices with the business, but about aligning business practices with cyber security.
Speaking under the Chatham House Rule – under which discussions can be reported but speakers cannot be named – the speaker said that he felt personally responsible for each of the nine million transactions that pass through his organisation each year. Clients, counterparties and suppliers all add to the risk managed by the service provider, with trust at the centre of these complex networks.
Over the years, the speaker’s company has faced down hackers, fraudsters, criminal gangs, and “government sponsored disruption”, and was targeted by a malicious insider in a major incident 14 years ago. The lesson from these attacks is that cyber security has to be personally led from the top, he said.
“When we get hit by a fraudster, we go along with it to some extent so that we can learn about their MO,” he added. “It’s important to understand what they want.”
Regular penetration tests are vital, as is educating the workforce about the need to change passwords regularly and to log out of their computers. Indeed, the organisation runs a ‘name and shame’ policy if a password can be guessed, and if someone forgets to log out when away from their desk, an email is sent from their computer saying, “The beers are on me”.
The organisation has become so security conscious that employees are not allowed to use laptops or other mobile devices for work, and none of the USB ports on their desktop machines are active. This is to prevent customer information from being compromised if laptops and tablets are lost or stolen, or employees are tempted to download data onto portable devices.
The first afternoon session of the forum looked at strategies to manage cyber security. “Criminals love change, they love the disruption of change,” said another speaker, the senior cyber security advisor at a national bank.
Malicious outsiders and malign insiders do exist, he added, but so do fallible insiders: reckless or negligent employees, or people who simply make mistakes. Accidents inevitably emerge from human systems: unexpected or out-of-scale incidents that arise from the unintended interaction of everyday events.
Combating this problem demands the adoption of what he called a “just culture”: one in which business and technology leaders examine why techniques succeed as well as why they fail, and are encouraged to openly acknowledge near misses – situations in which disaster nearly happened.
The reason for looking at security in this new (and perhaps counter-intuitive) way is that it is no more possible to infer what a secure organisation looks like by examining a successful attack than it is to understand a happy marriage by looking at a divorce, he said.
For this senior security strategist, artificial intelligence (AI) will play a key role in securing the financial services sector in the years ahead. However, leaders need to be pragmatic about a technology that is often characterised by unachievable promises and overlooked human beings, he suggested.
As AI’s pattern recognition abilities are increasingly deployed in anomaly detection, it’s vital that systems remain transparent and bias is rooted out – not just from the training data, but also from the person looking at the output.
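The kind of anomaly detection the speaker describes can be illustrated with a minimal sketch – hypothetical data and thresholds, not any bank's actual system: flag transaction amounts that deviate sharply from the historical pattern, using a simple statistical test whose logic remains fully transparent to the person reviewing the output.

```python
import statistics

def flag_anomalies(history, candidates, z_threshold=3.0):
    """Flag candidate transaction amounts that deviate sharply from
    the historical pattern, using a simple z-score test."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [amt for amt in candidates
            if abs(amt - mean) / stdev > z_threshold]

# Illustrative card payments clustered around £40, with one outlier
history = [38.0, 42.5, 40.0, 39.5, 41.0, 43.0, 37.5, 40.5]
print(flag_anomalies(history, [41.0, 39.0, 950.0]))  # → [950.0]
```

A rule this simple is easy to audit, which speaks to the transparency point: every flagged transaction can be traced back to an explicit, inspectable threshold rather than an opaque model decision.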
And as AI evolves and its complexity rises, it’s conceivable that it will behave in ways that are reckless or prone to accidents, just as people do. The challenge will then be how to code a viable supervisory mechanism and keep humans in the loop of the decision-making process.
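One simple supervisory mechanism of the sort he describes – a sketch with hypothetical names and thresholds, not a production design – acts automatically only on high-confidence model decisions and escalates everything else to a human reviewer:

```python
def supervise(decision, confidence, threshold=0.9):
    """Keep a human in the loop: act on a model's decision only when
    its confidence clears the threshold; otherwise escalate for review."""
    if confidence >= threshold:
        return ("auto", decision)
    return ("human_review", decision)

print(supervise("block_payment", 0.97))  # confident: act automatically
print(supervise("block_payment", 0.55))  # uncertain: route to a person
```

The design choice here is the escalation default: when the system is unsure, it does nothing on its own, which keeps humans in the decision-making loop exactly where the model is least reliable.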
Stick to controlled, transparent, narrow applications of AI, he said; mix up the techniques, and factor in fallible mortals – both the ones who are designing the systems and the ones who are using them.