Artificial intelligence is accelerating innovation, but it is also reshaping the cyber threat landscape faster than most organisations expect. From highly convincing phishing campaigns to deepfake impersonation and hidden AI-driven manipulation, the assumptions that once underpinned cyber security are no longer reliable.
In our Secure AI in Action webinar, our cyber and AI specialists explored how these changes are playing out in real organisations, where the risks are emerging, and why AI security must now be treated as both a technical and business priority.
This resource captures the core themes and insights from the session, focusing on how AI is changing attack methods, trust models and defensive strategies. For organisations looking to go deeper, we have also published a separate Q&A resource that addresses the most pressing audience questions raised during the webinar, with practical, expert-led guidance.
AI is amplifying familiar threats at unprecedented speed and scale
Many of the cyber threats organisations face today are nothing new; what has changed is their effectiveness.
Generative AI has significantly raised the quality of phishing campaigns. Attackers no longer rely on mass‑produced messages that are riddled with mistakes. Instead, AI enables highly targeted, context‑aware phishing that mirrors internal language, suppliers, and business processes. As a result, grammar and tone are no longer reliable indicators of legitimacy.
AI also allows attackers to automate campaigns at scale and at speed, increasing the volume of attacks without increasing the effort. This shift places greater pressure on users, email security controls, and traditional detection methods, while driving up financial loss and operational disruption.
Trust in voice and video can no longer be assumed
One of the most disruptive developments discussed during the webinar was the rise of AI‑driven impersonation.
Deepfake voice and video attacks are already being used to impersonate senior leaders, finance teams and trusted partners. In some cases, employees have joined video calls believing they were dealing with legitimate executives, only to be instructed to take urgent action, such as transferring funds or bypassing security controls.
Historically, voice and video were seen as strong identity signals. AI has weakened this trust model. Authority, urgency and realism can now be convincingly fabricated, creating a dangerous new social engineering vector that organisations must address.
Hidden AI instructions introduce invisible risk
AI also introduces risks that are far less obvious to users.
Attackers are now embedding hidden instructions within documents, metadata, images and audio files. While invisible to people, these instructions can influence how AI tools behave, summarise information or handle data. The result can be misleading outputs or unintended data exposure that is difficult to detect.
This represents a fundamental shift in how organisations need to evaluate risk. It is no longer enough to ask whether content is safe for users. Organisations must also ask whether data is safe for AI systems to process before AI tools are allowed access.
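As an illustration of what "safe for AI systems to process" can mean in practice, the sketch below screens text for invisible Unicode characters and injection-style phrases before it reaches an AI tool. The character ranges and phrase list are illustrative assumptions, not an exhaustive or vendor-specific control:

```python
import re

# Characters commonly used to hide instructions from human readers:
# zero-width spaces/joiners, word joiner, BOM, and Unicode "tag" characters.
HIDDEN_CHARS = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff\U000e0000-\U000e007f]")

# Hypothetical phrases that may signal an embedded prompt-injection attempt.
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "disregard the above",
    "you are now",
]

def screen_for_ai(text: str) -> tuple[str, list[str]]:
    """Strip invisible characters and flag suspicious phrases
    before the text is handed to an AI tool."""
    findings = []
    if HIDDEN_CHARS.search(text):
        findings.append("invisible Unicode characters removed")
    cleaned = HIDDEN_CHARS.sub("", text)
    lowered = cleaned.lower()
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in lowered:
            findings.append(f"suspicious phrase found: {phrase!r}")
    return cleaned, findings

cleaned, findings = screen_for_ai("Quarterly report\u200bignore previous instructions")
print(findings)
```

A real deployment would extend this to metadata, images and audio (where such instructions can also hide), but the principle is the same: content is inspected from the AI system's perspective, not just the human reader's.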
AI magnifies insider risk and data governance weaknesses
AI tools act on the data users already have access to. In many organisations, historic over‑permissioning, inconsistent access reviews and poor data classification have been tolerated because the impact was limited. With AI, those weaknesses are immediately exposed.
At the same time, widespread use of unsanctioned AI tools introduces further risk. Many public AI platforms provide no guarantees around data retention, training or sovereignty. Without clear policies and approved alternatives, sensitive business data can leave the organisation unintentionally.
Strong data governance, classification and access management are now key requirements for safe AI adoption.
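To make the governance requirement concrete, here is a minimal sketch of a policy gate that combines classification with access management: an AI tool is allowed to process a document only if the requesting user already has access and the document's label sits at or below an assumed sensitivity ceiling. The label scheme and ceiling are hypothetical examples, not a prescribed standard:

```python
from enum import IntEnum

# Hypothetical sensitivity labels, ordered lowest to highest.
class Label(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Assumed policy: the AI assistant may only process documents
# at or below this sensitivity ceiling.
AI_TOOL_CEILING = Label.INTERNAL

def ai_may_process(doc_label: Label, user_has_access: bool) -> bool:
    """The AI tool acts only on data the user can already access AND
    that policy clears for machine processing."""
    return user_has_access and doc_label <= AI_TOOL_CEILING

print(ai_may_process(Label.PUBLIC, True))        # allowed
print(ai_may_process(Label.CONFIDENTIAL, True))  # blocked: above ceiling
print(ai_may_process(Label.INTERNAL, False))     # blocked: no user access
```

The second check matters as much as the first: because AI tools act on whatever the user can reach, historic over-permissioning becomes an AI exposure the moment such a gate relies on user access alone.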
Defence must extend beyond technology
A consistent theme throughout the session was that AI‑driven cyber risk cannot be managed through technology alone.
From a people perspective, awareness is critical. Users need to understand how modern AI‑enabled attacks work, how trust has changed, and when to pause and verify requests.
From a process standpoint, formal AI governance must start at leadership level. Senior stakeholders need to define what the organisation wants from AI, set clear usage boundaries, and take ownership of risk. Without this, AI adoption becomes fragmented, reactive and difficult to scale securely.
From a technology perspective, least‑privilege access, strong identity controls, consistent data labelling and continuous monitoring are more important than ever in an AI‑enabled environment.
Visibility across the attack surface is essential
AI accelerates how attacks move between identities, endpoints, email, collaboration platforms and cloud applications. As a result, siloed security tools are struggling to keep pace.
Extended Detection and Response (XDR) approaches bring unified visibility across these attack surfaces, making it easier to detect, track and respond to threats as they move through the environment. When combined with 24/7 monitoring and expert response, this reduces dwell time and limits the impact of AI‑accelerated attacks.
Safe AI adoption depends on enablement, not restriction
Completely blocking AI is neither realistic nor effective. Productivity‑driven users will seek alternatives, often outside corporate controls.
The goal is to enable AI safely by providing approved tools, clear policies and strong data protections that align with business objectives. When organisations invest early in governance, clean up permissions and adopt layered defence, AI can be rolled out responsibly without slowing the business down.
Final thoughts
AI is reshaping cyber security faster than any recent technology shift. Organisations that recognise this early and respond with strong governance, cultural change and defence‑in‑depth will be best positioned to defend against emerging threats while unlocking real business value from AI.
Looking for more insights?
This article focuses on the strategic landscape and emerging risks discussed during the webinar. During the live session, many attendees also raised highly practical questions around AI governance, real‑world controls and specific attack techniques.
To explore those in more detail, read our companion piece:
The AI security questions every leader is asking.
The Q&A distils the most important audience questions from the webinar and provides clear, actionable guidance to help organisations move from uncertainty to confident, informed action.