Future of Cybersecurity: Speed and Architecture
Cybersecurity conversations are increasingly dominated by AI and automation. Yet attending The Future of CyberSecurity event in London reinforced a more fundamental truth about exposure, behaviour and architectural design.

Mark Fermor
Director & Co-Founder, Firevault

Cybersecurity conversations are increasingly dominated by artificial intelligence, automation and the promise of smarter defensive systems. Yet attending The Future of CyberSecurity event in London reinforced a more fundamental truth.
Despite the rapid rise of AI-driven threats and increasingly complex security platforms, many of the failures that lead to breaches still come down to the same underlying issues: exposure, behaviour and architectural design.
The event brought together voices from across journalism, security research and cyber leadership, including Joe Tidy, Cyber Correspondent at the BBC, ethical hacker Glenn Wilkinson, founder of Agger Labs, alongside speakers such as Purvi Kay, Ant Davis, Darcy Delich-Coull, Gurps Khaira, Alexandra Forsyth and Sara Davies MBE.
Across the sessions, the message was surprisingly consistent. The cyber landscape is evolving rapidly, but many of the root causes of cyber incidents remain remarkably familiar.
Sometimes the hacker is just a teenager
During his keynote, Joe Tidy offered a useful reminder that cybersecurity narratives often overestimate the sophistication of attackers.
In many real-world incidents, the attacker is not a nation-state unit or organised cybercrime group. Sometimes it is simply a teenager with curiosity, time and a laptop exploring a system that was left exposed.
Reporting from the BBC Technology desk frequently highlights incidents where systems were compromised simply because they were accessible when they should not have been.
The uncomfortable reality is that many cyber attacks do not begin with extraordinary skill.
They begin with opportunity.
Attackers often do not need brilliance. They just need access to something we forgot to protect.
That architectural reality is one of the reasons new approaches such as offline secure storage are gaining attention. Reducing exposure can often be more powerful than relying purely on detection.
The signal problem: more data, less clarity
Another theme running through the event was the sheer growth in security telemetry.
Modern security platforms generate enormous volumes of logs, alerts and behavioural signals across networks, endpoints and cloud infrastructure. Security teams are expected to interpret this data in real time.
Industry research confirms the scale of the challenge. The IBM Cost of a Data Breach Report consistently highlights how complex environments slow detection and response times.
More monitoring tools and dashboards promise visibility.
But visibility does not automatically create clarity.
Cybersecurity teams are now trying to find meaningful signals inside exponential noise.
When every system produces alerts, the challenge is no longer visibility. It is interpretation.
AI is accelerating cyber attacks
Ethical hacker Glenn Wilkinson, founder of Agger Labs, explored how artificial intelligence is changing the economics of cyber attacks.
The key point was that AI is not necessarily creating entirely new attack techniques.
Instead, it is dramatically accelerating existing ones.
Security frameworks such as MITRE ATT&CK document the lifecycle of cyber intrusions, from reconnaissance to exfiltration. AI is enabling many of those stages to be automated and scaled.
Similarly, security researchers at the OWASP Foundation have warned that AI tools introduce entirely new attack surfaces while amplifying existing threats.
AI does not necessarily make attackers smarter. It makes them faster.
The implication is significant. Cybercrime may not become dramatically more sophisticated, but it may become far more scalable.
The three stages of AI
Wilkinson described AI evolving through three stages.
The first stage is AI as assistant, where AI supports human decision-making by analysing information or generating content.
The second stage is AI as operator, where AI begins interacting directly with systems such as email, applications and internal data.
The third stage is AI as autonomous actor, where systems pursue objectives independently and adapt their behaviour when they encounter obstacles.
As AI moves from assisting to acting across systems, the risk profile changes significantly. Once AI begins operating across systems, control of reach becomes the real security challenge.
If AI systems have access to sensitive systems, attackers will inevitably explore ways to manipulate them.
The Road Runner problem
One of the most memorable explanations during the session was the Wile E. Coyote and Road Runner analogy.
In the cartoon, Wile E. Coyote repeatedly buys elaborate tools to capture the Road Runner. The tools appear powerful but ultimately fail because they are badly designed or easily manipulated.
Cybersecurity can follow the same pattern.
Organisations deploy increasingly complex technologies, particularly AI-powered ones, without fully understanding how attackers might exploit them.
Complex security stacks can create the illusion of control while quietly expanding the attack surface.
Complexity alone does not create resilience.
In many cases, it simply introduces new weaknesses.
Prompt injection and AI manipulation
Another major risk discussed was prompt injection, where attackers manipulate AI systems by altering the instructions they follow.
The OWASP Top 10 for Large Language Model Applications identifies prompt injection as one of the most serious emerging AI security risks.
If AI systems are integrated into workflows or customer service tools, attackers may attempt to manipulate them into revealing sensitive information.
If AI can access sensitive systems, the real question becomes who controls the instructions it follows.
Behaviour remains the human layer of cybersecurity
Technology alone cannot solve cyber risk.
Speakers including Ant Davis highlighted the importance of behavioural security, designing systems around how people actually work.
Research from Cardiff University shows that small prompts or behavioural nudges delivered at the right moment can significantly improve phishing detection rates.
Security awareness training helps.
But real decisions are made in real-world conditions, under pressure, between meetings and often when people are distracted.
People rarely make security decisions in calm environments. They make them when they are busy.
The architectural question cybersecurity must answer
Across the event, a deeper question emerged.
If attacks are becoming faster, if AI is accelerating the scale of cybercrime and if human behaviour will always introduce risk, then the real challenge may not simply be detecting attacks more quickly.
It may be reducing exposure altogether.
Cybersecurity guidance from the UK National Cyber Security Centre and frameworks such as the NIST Cybersecurity Framework increasingly emphasise reducing attack surfaces and protecting critical assets.
That is where approaches such as our Vault product and offline secure storage aim to change the conversation, by removing sensitive digital assets from permanently connected environments.
Cybersecurity is often framed as a race between detection and attack speed. Sometimes the smarter move is removing the target entirely.
In a digital world defined by automation, AI-driven threats and increasing complexity, the organisations that think hardest about what should remain connected at all may ultimately prove the most resilient.


