AI and breaches

Written by James Flint | Jun 2, 2025 9:49:11 AM

These days nearly everything we do is mediated by data and, therefore, the internet, if we can use that term to loosely encompass all the ways in which we’re digitally connected. But AI is changing the internet and, as it does, it’s changing the ways we relate to one another too. The techniques and habits we’ve developed to help us relate to others and gauge the reliability of what we find online are being compromised by AI tools and agents that can comprehend, analyse, and imitate anything they encounter in the digital realm with uncanny accuracy. The upshot is that we are becoming increasingly vulnerable as we go about our online lives.

Last year, at Securys, we assembled a team of AI experts – our aiEthix advisory panel (we didn’t need ChatGPT’s help to name it) – with whom we’ve been meeting every few weeks to discuss and reflect on the extraordinary changes we’re all living through in the world of technology. Our advisors have given presentations on the impact of AI on the arts and the creative process, on the ever-expanding activities of autonomous AI agents, and on the rapidly evolving landscape of regulation and risk management that’s attempting to govern it all.

What’s become clear to us through these discussions is that generative AI’s ability to create content in any format with beyond-Turing-test fidelity, and its ability to build and execute complex decision flows – manipulating other technologies, including computer desktops and web browsers, as it does so – are combining in ways that will change not just your business’s internal processes, but also the ways in which your business and the data it holds – its attack surface, in other words – are vulnerable to external threats. It’s also become clear that the new wave of AI regulations, slow-paced and cumbersome though they can seem, are among the best tools available for managing and mitigating the risks involved.

Just look at some of the things we’re up against…

Automated hacking 

The most obvious of these vulnerabilities are technical cyberattacks. AI’s ability to understand and generate code is among its most impressive capabilities, and it has put writing malware and finding weaknesses in security code within the reach of anyone with access to a browser and the time to play around with Cursor, Claude or Google AI Studio. Last year, researchers found that GPT-4 could develop working exploits for the large majority of the software vulnerabilities they tested it against, and there are even services like FraudGPT dedicated to helping bad actors write malware.

There also now exists malware that itself calls on these kinds of services to rewrite its own code on the fly, the better to evade detection and get past firewalls and other defences. A recent research project by HYAS created a polymorphic keylogger able to dynamically modify benign code into hostile code at runtime, allowing it to pass undetected through defensive filters. AI is also increasingly capable of handling CAPTCHAs and other “not a robot” interactions, giving it access to online platforms that were previously relatively impervious to bots.

Phishing & whaling 

Scams designed to get people to hand over personal data, access and, eventually, money have been getting more sophisticated for years, but AI is taking them to a whole new level. Not only can the technology produce highly personalised and polished emails and documentation at scale, it can now also clone voices and faces from tiny samples and impersonate people on phone and video calls.

Mark Read, CEO of advertising giant WPP, was targeted in 2024 by a group that used YouTube footage and a voice clone to fake his presence in a Teams call in an attempt to solicit money and data. Such deepfake voice scams – increasingly accompanied by full-blown video fakery – are now being used to impersonate people and to evade phone-based security checks. These fakes are extremely convincing, and they can be automated to target thousands of employees across an organisation. Backed up by phishing emails filled with details mined – again, by AI – from stolen data dumps, the chances of finding and fooling just one person who then grants the desired access to systems or accounts are very high.

Social engineering 

At this point, these kinds of attack start to blend into what we call social engineering: manipulating people into divulging valuable information or access. When MGM Resorts was breached in 2023, the hackers scraped LinkedIn profiles to identify a specific MGM employee, then impersonated that employee in a call to the company help desk, during which they persuaded the IT staff to reset the employee’s system credentials, granting the attackers access.

This kind of approach can be supported by automated content generation, with AIs churning out and publishing fake websites, online forms and scam emails in multiple languages, or running botnets that fill social media sites with malicious links and misinformation, either to gather information or to make spurious claims appear corroborated.

Just a few years ago, these threats tended to fall into distinct categories, as the skills and resources required to execute them well were particular to each type of attack. But as is probably clear from my descriptions above, AI is making it much easier to do all of them, and therefore much easier to mix and match, using one technique to reinforce another.  

The threats to organisations are therefore rapidly becoming a full-spectrum barrage that blurs the lines not just between types of attack, but also between the traditional silos of data privacy, information management, AI governance and cybersecurity. The relationship between these often quite separate functions needs to be tightened drastically if defences are to keep pace and become more responsive – and, as the sketch below suggests, AI itself can help with that.
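By way of illustration only – a minimal sketch rather than a recommendation, assuming the OpenAI Python client and some invented example alerts – here is how an LLM might be asked to connect signals that would normally sit in three separate queues:

    # A minimal sketch, not production code. Assumes the OpenAI Python
    # client (pip install openai) and an OPENAI_API_KEY environment
    # variable; the alerts below are invented for illustration.
    from openai import OpenAI

    client = OpenAI()

    # Three signals that, in a siloed organisation, would land in three
    # different queues and might never be connected.
    alerts = [
        "Help desk: phone-based password reset approved for j.smith",
        "Privacy team: bulk export of HR records logged under j.smith",
        "Security: j.smith account accessed finance share out of hours",
    ]

    prompt = (
        "You are a security analyst. These alerts come from separate "
        "teams. Could they describe a single attack chain, and if so, "
        "what should the next defensive step be?\n" + "\n".join(alerts)
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice here is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

In practice the alerts would be fed in from your ticketing, logging and SIEM systems rather than hard-coded, but the principle – using the model to map knowledge across silos – is the same.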

At aiEthix, we very much believe that AI itself is part of the means for enabling this, with its ability to map and communicate knowledge across silos and thus better identify, understand and counter this new generation of threats. Deploying the technology in this way also has the benefit of giving employees hands-on experience of using it beneficially, thus unlocking its value. If you’d like to find out more about how we’re helping our clients with this, drop us a line (you can even ask Copilot to write us the email).