This Q&A answers the most common and most critical questions raised during our Secure AI in Action webinar. It explores how AI is reshaping the cyber risk landscape, where organisations are most exposed, and what practical steps leaders can take to defend the business while enabling responsible AI adoption.
If you are looking for a broader view of how AI is driving these changes and what organisations must do next, we explore this in more detail in our companion article, Secure AI in action: how AI is reshaping cyber risk and what organisations must do next.
1. How is AI fundamentally changing the cyber threat landscape?
AI has significantly increased the speed, scale and believability of cyber attacks, making traditional detection methods less effective.
Key points:
- Generative AI enables highly convincing phishing emails with perfect grammar, contextual awareness and targeted language.
- Deepfake voice and video impersonation undermine previously trusted identity signals.
- Automation allows attackers to scale campaigns rapidly at very low cost.
- Attacks now impact the whole organisation, not just IT, creating financial, regulatory and reputational risk.
Takeaway: AI has shifted cyber risk from isolated technical incidents to persistent, enterprise-wide threats that require a broader defence strategy.
2. Why can’t organisations rely on voice, video or authority as trusted identity signals anymore?
AI-driven impersonation has eroded the reliability of traditional trust mechanisms.
Key points:
- Voice cloning and deepfake video can convincingly impersonate senior leaders, suppliers or finance staff.
- Attacks often rely on urgency and authority to bypass rational checks.
- Employees are frequently instructed not to speak or question the request, increasing success rates.
- Single-signal trust is no longer fit for purpose.
Takeaway: Organisations must adopt multi-step verification and secondary authentication processes, especially for high-risk actions such as payments, access changes, or device management.
3. Why is AI risk a business and governance issue, not just a technical one?
AI amplifies existing weaknesses in data access, permissions, and policy, turning them into business risks.
Key points:
- AI tools surface all data a user already has access to, whether appropriate or not.
- Poor governance can result in data exposure, compliance failures and loss of trust.
- Blocking tools alone does not work at scale and drives shadow AI usage.
- Senior leadership ownership is essential to align AI use with business objectives.
Takeaway: AI risk must be addressed through clear governance, executive accountability and policies that balance innovation with control.
4. How do hidden AI instructions and prompt injection create risk?
AI can be manipulated using instructions that are invisible or overlooked by users.
Key points:
- Malicious instructions can be hidden in documents, metadata, images or audio.
- AI tools may follow these instructions even when users cannot see them.
- This can lead to misleading outputs, incorrect summaries or unsafe data handling.
- Users may trust AI-generated outputs without validation.
Takeaway: Organisations must assess not only whether content is safe for people, but whether data is safe for AI tools to process and implement appropriate controls before AI is allowed to access it.
5. Why does insider risk increase with AI?
AI magnifies the impact of human error and poor data governance.
Key points:
- AI tools can inadvertently expose sensitive data if access controls are weak.
- Unsanctioned AI tools are widely used across businesses.
- Many public or foreign-hosted AI tools provide no assurance over data use or model training.
- Users will seek productivity gains, with or without approval.
Takeaway: Businesses must provide secure, approved AI tools backed by strong data classification, labelling and policy enforcement to reduce insider risk without restricting productivity.
6. Why are data labelling and access reviews critical when using Microsoft Copilot?
Copilot makes data visibility immediate and transparent.
Key points:
- Copilot respects existing permissions and sensitivity labels.
- Any data a user has access to can be surfaced instantly.
- Regular access reviews and least-privilege principles are essential.
Takeaway: Organisations must clean up permissions and implement consistent data labelling before Copilot deployment to avoid unintended data exposure.
7. How can organisations enable safe AI adoption without harming productivity?
Security controls must be proportionate and aligned to real business use cases.
Key points:
- Data loss prevention can prevent data being copied into consumer AI tools.
- Enterprise versions of AI tools offer stronger data protection guarantees.
- Blanket blocking drives shadow usage and undermines trust.
- Clear policies should define which tools are allowed, for what purpose, and under what controls.
Takeaway: Safe AI adoption requires policy-driven enablement, not restriction, giving users secure alternatives rather than shutting innovation down.
8. What role does XDR play in defending against AI-driven attacks?
Over 70% of attacks are identity-based. Visibility across the entire attack surface is essential.
Key points:
- XDR provides unified visibility across identity, endpoints, email, collaboration tools and cloud apps.
- AI accelerates attacker movement across these surfaces.
- Technology alone is not enough without 24/7 monitoring and response.
Takeaway: Managed XDR enables organisations to detect, track and stop attacks as they move laterally, reducing dwell time and impact.
9. How should organisations respond to AI-powered vulnerability discovery tools like Mythos?
The industry must adapt defensively as attackers adopt AI.
Key points:
- AI can surface unknown vulnerabilities and weaponise them rapidly.
- Regular internal and external vulnerability assessments remain critical.
- AI can and should be used defensively to augment vulnerability scanning.
- Early detection limits exploitability.
Takeaway: Organisations must proactively identify and remediate weaknesses before AI-enabled attackers do.
10. What is the biggest mindset shift organisations need to make?
Cyber security and AI adoption are inseparable.
Key points:
- AI increases both opportunity and risk.
- Security controls must evolve alongside AI use.
- Culture, awareness and governance are as important as tooling.
- Leadership engagement determines success or failure.
Takeaway: Organisations that treat AI security as a strategic priority will unlock value safely, while those that treat it as an afterthought will remain exposed.
Need some help?
We support organisations at every stage of secure AI adoption, combining governance, cyber security expertise and Microsoft-aligned technology to help you reduce risk while realising AI’s business value. By working together, we can help you move from uncertainty to confident, informed action.