AI has brought a wide spectrum of benefits to businesses, from streamlining everyday workflows to accelerating code development. The technology is there to be leveraged. But as powerful as generative AI is, it introduces a range of risks. In this article, based on insights shared in the webinar ‘Exploring AI – governance, bias and security’, Katherine Kearns, Head of Proactive Services, EMEA, explains that companies seeking to deploy AI – or with AI already embedded – should understand these risks and be prepared to mitigate them.
Understanding the risks
There are four primary types of risk associated with deploying AI:
- Data privacy risks: AI systems are only as secure as the data and processes they rely on. They can unintentionally absorb sensitive or proprietary information during training, posing significant privacy challenges that evolve over time.
- Operational risks: Reliance on AI can create single points of failure, while model drift and degradation lead to unexpected outcomes that can impact business operations.
- Regulatory and compliance risks: The regulatory landscape, especially in regions like the UK and EU, is rapidly evolving, with mandates for human oversight and governance of AI systems.
- Security risks: AI is increasingly a target for cyber threats, as attackers use generative AI to scale their capabilities.
Data exposure in practice
When organisations adopt external AI services, hidden data flows often go unnoticed. AI models may be trained on enterprise data, potentially embedding sensitive information in the models themselves. Logs generated for system optimisation frequently contain confidential data, which becomes problematic if retention policies are unclear. Third-party integrations further increase the risk and introduce compliance gaps, which makes maintaining visibility and understanding of data flows essential.
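As a minimal illustration of keeping sight of these flows, the sketch below redacts obvious identifiers from a prompt before it leaves the organisation; the same scrubbing can be applied to anything written to logs. The patterns and the `redact` helper are illustrative assumptions, not a production-grade PII solution.

```python
import re

# Illustrative patterns only -- a real deployment would use a vetted
# PII-detection library and patterns tuned to the organisation's data.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # National Insurance number
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),      # crude card-number match
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise the complaint from jane.doe@example.com, card 4111 1111 1111 1111."

# The same scrubbing should apply before anything is written to optimisation logs.
print(redact(prompt))
# Summarise the complaint from [EMAIL REDACTED], card [CARD REDACTED].
```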
Navigating operational risks
AI systems must be maintained continuously to prevent contextual drift, where a model becomes outdated as its environment evolves. For example, a model trained in 2024 might not align with a company's updated policy in 2026, silently creating compliance gaps. Continuous monitoring and recalibration are essential to keep AI systems aligned with both internal and external policy requirements.
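Drift monitoring can start simply. The following sketch, a minimal example assuming the model's output scores are logged, uses SciPy's two-sample Kolmogorov-Smirnov test to flag when live behaviour has shifted away from the baseline captured at deployment; the threshold and window sizes are illustrative choices rather than recommendations.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Baseline: model output scores captured at deployment/validation time.
baseline_scores = rng.normal(loc=0.3, scale=0.1, size=5_000)

# Live window: recent production scores. Here we simulate drift by
# shifting the distribution, standing in for a changed environment.
live_scores = rng.normal(loc=0.45, scale=0.12, size=1_000)

# Two-sample KS test: a small p-value means the live distribution no
# longer matches the baseline, which should trigger review/recalibration.
statistic, p_value = ks_2samp(baseline_scores, live_scores)

ALERT_THRESHOLD = 0.05  # illustrative; tune to your tolerance for false alarms
if p_value < ALERT_THRESHOLD:
    print(f"Drift suspected (KS={statistic:.3f}, p={p_value:.4f}): recalibrate.")
else:
    print(f"No significant drift detected (p={p_value:.4f}).")
```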
Regulatory approaches: UK vs. EU
The UK and EU have taken different approaches to regulating the companies that develop (providers) and use (deployers) AI systems. The UK relies on a principle-based model, leveraging existing guidelines and regulatory bodies (the Information Commissioner’s Office, the Financial Conduct Authority and Ofcom) to enforce responsible AI use, rather than creating a single law. The EU's approach, by contrast, is prescriptive, imposing specific obligations based on risk categories (see comparison below).
UK Generative AI framework
Purpose: To encourage responsible AI use via voluntary adoption of five core principles:
1. Safety, security, robustness
2. Transparency & explainability
3. Fairness
4. Accountability & governance
5. Contestability & redress
EU AI Act
Purpose: To impose obligations proportionate to the risk posed by the AI system:
| Risk Level | Example | Obligations |
| --- | --- | --- |
| Minimal | Spam filters | No specific regulation |
| Limited | Chatbots | Ensure user transparency |
| High | Recruitment | Subject to strict controls |
| Unacceptable | Social scoring | Banned |
Both frameworks emphasise the importance of mitigating bias and ensuring fairness and transparency in AI systems. While the UK framework provides flexibility, the EU mandates strict controls on high-risk applications to prevent harmful AI outcomes. High-risk systems in EU jurisdictions, for example those used in recruitment, credit scoring and healthcare triage, must therefore undergo rigorous assessments, maintain documentation, and support ongoing monitoring. Organisations that fail to do so face fines of up to 7% of global annual turnover.
Cyber security implications
Bias in AI models can result in security blind spots (see Table 2 below), misallocating defensive effort and leaving organisations vulnerable. As threat actors use AI to enhance their cyber capabilities, outdated or biased models could see organisations continually outmanoeuvred. Ensuring fairness and transparency is not just an ethical necessity but also a crucial aspect of cyber security.
Table 2: Risks of bias in AI models

| Risk | Example |
| --- | --- |
| Producing false negatives, where real threats go undetected | An AI-based network intrusion detection system biased towards recognising only known attack signatures could fail to detect a new type of malware communication pattern embedded in legitimate-looking traffic. |
| Overlooking certain threat sources due to skewed risk assessments | An AI trained on the assumption that only external hackers are dangerous might ignore insider threats, such as malicious employees leaking data, because those patterns were underrepresented or undervalued in its training data. |
| Generating excessive false positives | An endpoint detection and response (EDR) solution's AI model flags unusual but harmless software behaviour, such as occasional software updates or legitimate script execution, as malicious. |
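One practical check for blind spots like these, sketched below on the assumption that labelled outcomes are kept for past alerts, is to break detection performance down by threat category rather than relying on a single aggregate rate; the categories and figures are hypothetical.

```python
from collections import defaultdict

# Hypothetical labelled history: (threat_category, was_detected)
outcomes = [
    ("external_intrusion", True), ("external_intrusion", True),
    ("external_intrusion", True), ("external_intrusion", False),
    ("insider_exfiltration", False), ("insider_exfiltration", False),
    ("insider_exfiltration", True),
    ("novel_malware", False), ("novel_malware", False),
]

# An aggregate detection rate hides where the misses actually cluster.
totals, hits = defaultdict(int), defaultdict(int)
for category, detected in outcomes:
    totals[category] += 1
    hits[category] += detected

for category in totals:
    rate = hits[category] / totals[category]
    flag = "  <-- possible blind spot" if rate < 0.5 else ""
    print(f"{category}: {rate:.0%} detected{flag}")
```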
Key priorities for organisations
For businesses adopting AI, it is essential to focus on three key priorities:
- Managing emerging risks: Address data privacy, operational drift, regulatory compliance, cyber security, and model integrity.
- Keeping pace with regulators: Stay updated with the fast-evolving regulatory landscape globally.
- Deploying AI safely: Balance innovation with responsibility, recognising that strong AI governance can be a strategic advantage rather than a barrier.
At S-RM, our mission is to assist organisations in understanding and mitigating the risks associated with AI, transforming what might seem like a compliance burden into strategic resilience and digital trust. Please reach out to the team if you would like to discuss any of the topics covered here or in the full webinar.