The last year has seen the integration of AI across business, and the due diligence and intelligence world is no different.
At S-RM, we have observed a variety of approaches from clients, ranging from technology firms seeking to integrate Gen AI-driven search into their compliance offerings, through to regulated financial institutions that are more cautious around innovation but nonetheless in ‘listening mode’ and eager to understand their options going into 2026. The latter is well illustrated by a 2025 survey of the private equity industry, which found that of around 100 global firms asked, just 7 percent reported using AI extensively, with 59 percent reporting that they were in the early stages of adoption.
In this article we look at the key technical advances in AI over recent years, and the lessons for business adoption within the corporate intelligence space. Three key themes in AI use by compliance and intelligence professionals emerge:
- The increasing use of AI-search for triage.
- Greater clarity on what AI can’t (currently) solve.
- Human oversight and skill remain key.
What has changed in AI search in 2024 and 2025
If 2023 heralded the arrival of ChatGPT, 2024 saw a smaller but nonetheless impactful change: Anthropic's introduction of the Model Context Protocol (MCP), which has made integrating LLMs with datasets more effective than relying on APIs alone, and which has been described as the 'USB-C port' of the AI industry. Generative AI search has become more effective: more current, more thorough, and apparently less prone to hallucinations.
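To make the MCP point concrete, below is a minimal sketch of how an in-house compliance dataset might be exposed to an LLM as an MCP tool. It assumes the FastMCP helper from the official MCP Python SDK; the tool name, dataset and lookup logic are hypothetical placeholders rather than a real integration.

```python
# Minimal sketch: exposing a hypothetical in-house sanctions dataset to an LLM
# as an MCP tool, assuming the official MCP Python SDK's FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sanctions-lookup")

# Hypothetical dataset; in practice this would be a database or vendor API call.
SANCTIONS = {"ACME HOLDINGS LTD": {"list": "OFSI", "designated": "2024-03-12"}}

@mcp.tool()
def check_sanctions(entity_name: str) -> dict:
    """Return any sanctions designation held for the given entity name."""
    hit = SANCTIONS.get(entity_name.strip().upper())
    return {"entity": entity_name, "match": hit is not None, "details": hit}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-capable client can call it
```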
Initial information security concerns have been dampened by the co-location of systems within clients' virtual private clouds, or by a tightening of terms by certain providers, especially for enterprise accounts. Whilst 2025 began with Chinese AI company DeepSeek threatening to lead an open-source revolution, it ended with closed models from incumbent players, such as OpenAI's GPT-5.2, Anthropic’s Claude Opus 4.5 and Google's Gemini 3, leading the pack. All made impressive improvements in hallucination reduction and reliability, in addition to dramatically improved reasoning capabilities and an expansion of search into multimodal sources (i.e. not just text, but audio, video and images). DeepSeek's longer-term impact, however, has been on the economics of AI, driving greater price competition and cheaper routes to inference.
The evolution of AI has been a double-edged development for research professionals, however. According to the AI agency Graphite, machine-generated content on the internet has been steadily increasing since the launch of ChatGPT, reaching 52 percent of the content available online in 2025 and fuelling a proliferation of politically polarised content and mis- and disinformation. Finding the signal in the noise has become significantly more challenging as a result, as has verifying the veracity of online content, with 2025 punctuated by news headlines of AI-driven deepfakes deployed by malicious actors.
The increasing use of AI-search for triage
According to Bain & Company, in 2025 most users of internet search engines such as Google relied on AI summaries at least 40 percent of the time, giving rise to the new phenomenon of ‘zero-click search’. AI summaries threaten to disrupt conventional search technologies and to upend an internet and media advertising model built around conventional Search Engine Optimisation. More generally, this demonstrates that two modes of search are emerging: (a) a quick first look, followed where needed by (b) a more thorough investigation.
The previous arrangement, conventional internet search, was over-solving for users in the first bracket; AI has found a way to shorten, improve, and disrupt it.
Within the context of compliance, the same is true. AI is increasingly being used to help triage a wide range of potential subjects, whether tens of thousands of Third Party Risk Management subjects for a major corporation or hundreds of potential acquisition targets for a private equity firm, and to identify the risks they present at a headline level before pushing into formalised, human-driven Enhanced Due Diligence. This can bring together multiple sources of input, e.g. sanctions watchlist information, adverse media, and corporate ownership information, into a neat summary. S-RM’s Perspecta team is at the forefront of providing time-pressed compliance professionals with immediate, relevant information on their customer or target entity, combining technology with human oversight to guard against possible AI hallucinations or errors. We want to empower our customers with the knowledge and information to take any necessary next steps, be it commissioning further enhanced checks or quickly determining whether a target merits further consideration.
With regulators expecting organisations to do more with less, and widespread scepticism about relying solely on AI tools, we look to empower already stretched teams to triage large volumes of subjects quickly and efficiently, whilst remaining compliant by covering key compliance topics in proportionate detail.
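As a simplified sketch of the triage logic described above (not a description of Perspecta or any production system), the example below combines hypothetical sanctions, adverse media and ownership inputs into a headline summary and flags whether a subject should be escalated to human-led Enhanced Due Diligence; the source values and thresholds are illustrative assumptions.

```python
# Illustrative triage sketch: combine hypothetical risk inputs per subject into a
# headline result and decide whether to escalate to human-led EDD.
from dataclasses import dataclass, field

@dataclass
class TriageResult:
    subject: str
    sanctions_hits: int
    adverse_media_hits: int
    opaque_ownership: bool
    escalate: bool = field(init=False)

    def __post_init__(self):
        # Simple, proportionate rule: any hard indicator pushes the subject to EDD.
        self.escalate = (
            self.sanctions_hits > 0
            or self.adverse_media_hits >= 3
            or self.opaque_ownership
        )

# Example: screening a small batch of third parties with pre-gathered inputs.
batch = [
    TriageResult("Alpha Trading LLC", sanctions_hits=0, adverse_media_hits=1, opaque_ownership=False),
    TriageResult("Beta Logistics SA", sanctions_hits=1, adverse_media_hits=0, opaque_ownership=True),
]
for result in batch:
    print(result.subject, "-> escalate to EDD" if result.escalate else "-> no further action at triage")
```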
Greater clarity on what AI can’t solve
However, there remains nervousness around placing excessive reliance on the results of AI-driven search. The danger of this was underlined by a British political scandal in early 2026, which saw a senior police leader pressured into retiring after misleading politicians by reporting fictitious events allegedly surfaced through the use of Microsoft Copilot search.
As Microsoft’s training reminds users, its tool is designed as a ‘CoPilot, not AutoPilot’, and with good reason. At S-RM, we have kept a running check of Gen AI search results in compliance-relevant research (i.e. adverse media searches) against human-driven search results since 2023. Whilst performance is improving, errors continue to occur, typically falling into the following categories:
- Low-profile subjects, where the AI struggles with identification;
- High-profile subjects, where it struggles with prioritisation;
- Non-Latin alphabets, where it struggles with translation accuracy;
- Where data fragmentation is critical to problem solving and the AI cannot access the data in the way a skilled human could, e.g. the beneficial ownership of a company requires a login to an official source, or critical allegations sit behind the paywall of a niche industry publication;
- Where access to high-grade paywalled sources is key. Several leading publications have blocked access to some AI search tools, and GenAI is also biased towards prioritising sources that are optimised for GenAI search. Anecdotally, these appear more likely to include PR-driven or consultancy output, and less likely to include academic or think tank papers;
- Where there is an emphasis on access to highly current information – for instance, to pick up the sanctioning of individuals announced earlier that week or day;
- Where logic and contextual know-how is required to problem-solve in relation to the search – for instance, in de-duplication, or in understanding the obscured potential links between a fraudster and a shell company;
- Where information from on-the-ground human sources would have been more suitable in the first place.
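As an illustration of the kind of running check described above, the sketch below compares AI-surfaced adverse media findings against a human-compiled baseline for the same subject and reports what the AI missed or could not be verified; the findings and the exact-match rule are hypothetical simplifications (real comparisons would need fuzzier matching).

```python
# Minimal sketch: benchmarking AI-surfaced adverse media findings against a
# human-compiled baseline for one subject, using normalised short labels.
def compare_findings(human: set[str], ai: set[str]) -> dict:
    """Return findings the AI missed, findings it added, and simple recall/precision."""
    missed = human - ai          # true findings the AI search failed to surface
    unverified = ai - human      # AI findings with no human-verified counterpart
    recall = len(human & ai) / len(human) if human else 1.0
    precision = len(human & ai) / len(ai) if ai else 1.0
    return {"missed": missed, "unverified": unverified,
            "recall": round(recall, 2), "precision": round(precision, 2)}

# Hypothetical example for one subject.
human_baseline = {"2019 fraud investigation", "2022 sanctions breach fine", "2023 bribery allegation"}
ai_results = {"2022 sanctions breach fine", "2021 environmental award"}  # one miss, one unverified claim
print(compare_findings(human_baseline, ai_results))
```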
As a result, our assessment is that whilst AI-powered search can fulfil a superior role to screening tools based on static datasets and known for their false positives, it remains a substantial way off matching human-driven desktop research. It can rule a subject in for further research, but it struggles to give a clean bill of health. It is most appropriately used where one is triaging risk and proportionality of effort is key, and it can be used alongside other indicators, such as the perceived corruption, sanctions, and human rights violation risks presented by the jurisdiction and sector of operation.
When it comes to selecting the ideal AI search tool, we would recommend going beyond LLMs designed for the general market: these do not solve for the ‘data fragmentation’ problem referenced above, meaning they do not typically boast integrations with the relevant corporate intelligence datasets.
Where human oversight and skill remain key
The human factor remains key when it comes to reliance. Because of the ‘black box’ nature of AI (the lack of 'explainability'), any mistake has a disproportionate impact on trust. And for regulated institutions, it also means having to explain to a regulator the decision to use Generative AI to fulfil a regulatory requirement for Enhanced Due Diligence.
The information landscape remains highly fragmented, and subjects of Enhanced Due Diligence typically have multijurisdictional footprints. While some global aggregators specialising in corporate-specific information exist, these are uneven and do not come close to effectively covering the majority of the approximately 200 nation states globally. They typically comprise pre-pulled information and can be substantially out of date, stymieing the effectiveness of critical checks for sanctions or political exposure.
Research consultancies retain valuable IP in-house which is not yet accessible to models. At S-RM, we have developed internal guides for each jurisdiction, constituting thousands of words on how to optimise access to the various corporate, litigation, media, property, import-export and investor databases available globally, in addition to media archives and social media platforms. Much of this advice goes well beyond immediately intuitive approaches to search: it includes ‘Google Dorking’ specific URLs to unearth the full extent of corporate ownership, notes on where API access does not retrieve the full range of data available via a web licence, and an indication of what data to prioritise during the search process.
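As a hedged illustration of the kind of search-operator guidance such guides contain, the snippet below assembles a few ‘Google Dork’-style queries for a subject company using standard operators; the registry domain and query patterns are generic examples rather than S-RM's actual methodology.

```python
# Illustrative 'Google Dorking' query builder: combines standard search operators
# (site:, filetype:, intext:) to target a specific source for a subject company.
# The registry domain below is a placeholder, not a real recommendation.
def build_dorks(company: str, registry_domain: str = "example-registry.gov") -> list[str]:
    """Return a few targeted queries for corporate ownership and filings."""
    return [
        f'site:{registry_domain} "{company}"',               # restrict results to one official source
        f'site:{registry_domain} filetype:pdf "{company}"',  # surface filed documents only
        f'"{company}" intext:"beneficial owner"',            # ownership language anywhere on the web
    ]

for query in build_dorks("Acme Holdings Ltd"):
    print(query)
```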
It would be challenging for a regulator to be satisfied that the current state of AI search, even using a specialist tool like S-RM’s Perspecta, could meet the high thresholds expected of ‘Enhanced Due Diligence’, a specific term found in money laundering and anti-financial crime regulation that applies to higher-risk subjects.