29 January 2024


What are the implications of AI for the due diligence and investigations industry in 2024?


In 2023, Artificial Intelligence attracted public attention like never before. Large Language Models (‘LLMs’) and the tools built on them, such as ChatGPT, entered mainstream use, and the business world became polarised between the enthusiasm of early adopters and the caution of more sceptical observers.

2024 will see ever more sectors keen to harness the efficiencies AI can bring. In this article, Ian Massey and Toby Thomas look at the potential impact of AI on the due diligence and investigations industry, exploring the trade-offs between efficiency gains on the one hand and data security, privacy, and integrity risks on the other.


The due diligence and investigations industry – like many others in 2023 – was awash with debate about the potential applications, benefits, and risks associated with generative artificial intelligence (‘AI’). Of most interest to some industry practitioners are generative AI-powered internet research capabilities, together with analytical and summarisation tools, which have the potential to enable research teams to process very large volumes of information more quickly; others, meanwhile, have kept a careful distance from the new technology.

Given the proliferation of publicly available online information in recent years, significant advances in these technologies are exciting. Ever-increasing volumes of public information have been a fundamental driver of growth in the sector. With more and more information available to regulators and potential critics, organisations today have a greater need than ever for sophisticated due diligence services to understand who they are doing business with, any associated risks, and the implications for their own reputation. However, the sheer volume of information available can also be one of the biggest challenges for those in research-focused or investigative roles. Simply put, in some cases, the human brain is not capable of processing the volumes of information now available.

The key recent advancements in generative AI have come from breakthroughs in the use of LLMs. These models are trained on many billions of words and can, in effect, be built to cover much of the surface web. The underlying technology draws on research modelling the neural networks of the human brain: it learns statistical relationships between words, using probability and proximity to predict which words are most likely to follow a given passage of text. These advancements have led to new generative AI tools capable of producing sophisticated content, including text, images, audio and code. Examples include GPT-4, OpenAI’s latest model, which the company has claimed exhibits “human-level performance” on several academic and professional benchmarks – including the American Uniform Bar Examination, which it passed.
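To illustrate this predictive behaviour, the snippet below is a simplified sketch rather than a description of any particular commercial tool: it uses the open-source Hugging Face transformers library and the small GPT-2 model (both assumptions made purely for illustration) to generate several continuations of the same prompt. Each continuation is sampled from a probability distribution over possible next words, which is the source of both the technology’s fluency and its fallibility.

```python
# Simplified illustration only: a small open-source model (GPT-2) stands in
# for the far larger commercial LLMs discussed in this article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The due diligence report concluded that the counterparty"

# Sample three continuations; each is drawn from a probability distribution
# over possible next words, so the same prompt can yield different output.
completions = generator(
    prompt,
    max_new_tokens=20,
    num_return_sequences=3,
    do_sample=True,
)

for completion in completions:
    print(completion["generated_text"])
```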

 

“Assuming safe adoption, AI may well enable practitioners to refocus their roles, conducting fewer basic tasks and freeing up more time for higher-impact tasks, such as the interrogation of sources, the analysis of research findings, and client-focused consulting.”

 

Overall, the technology is poised to accelerate, and slightly adjust, the due diligence practitioner’s role. Assuming safe adoption, AI may well enable practitioners to refocus their roles, conducting fewer basic tasks and freeing up more time for higher-impact tasks, such as the interrogation of sources, the analysis of research findings, and client-focused consulting. AI-powered research capabilities have the potential to speed up the time-consuming process of trawling through search engine results, in theory increasing both the speed and the consistency of research. Additionally, summarisation tools may support the extraction of key findings, while ‘prompt engineering’ presents an opportunity for researchers to automate comparisons between multiple datasets, making the fact-checking process more efficient (see the sketch below).

We have already seen clients using generative AI as part of broader due diligence processes, with the main progress coming within secure data environments. For example, secure enterprise search companies such as Hebbia enable institutional investors to quickly gain insights from unstructured data in M&A data rooms, especially in relation to quantitative analysis. The picture is murkier, however, when it comes to search engine-focused generative AI models, where some of the key risks associated with AI come into focus.
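As a rough sketch of what such prompt-engineered comparison might look like in practice – assuming, purely for illustration, the OpenAI Python client and a generic chat model; any real deployment would first need to pass the data security and privacy tests discussed below – a researcher could ask a model to flag discrepancies between two sources and then verify each flagged point by hand:

```python
# Illustrative sketch only. Assumes the OpenAI Python client and an
# OPENAI_API_KEY in the environment; the model name and prompt wording are
# hypothetical choices, not recommendations. Data should only be sent to
# environments vetted for security and privacy, and every flagged discrepancy
# still needs human verification against the original sources.
from openai import OpenAI

client = OpenAI()


def flag_discrepancies(registry_extract: str, media_summary: str) -> str:
    """Ask the model to list discrepancies between two research inputs."""
    prompt = (
        "Compare the two sources below. List any discrepancies in names, "
        "dates, roles or ownership structures, quoting the conflicting "
        "passages from each source.\n\n"
        f"Source A (corporate registry extract):\n{registry_extract}\n\n"
        f"Source B (media summary):\n{media_summary}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example usage (with hypothetical local files):
# print(flag_discrepancies(open("registry.txt").read(), open("media.txt").read()))
```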

Few new technologies come without a health warning. In the case of generative AI, firms that rush to apply tools in a due diligence or investigative context face risks in three key areas: data security, data privacy, and data integrity.

To use certain AI tools, data needs to be transferred to a third-party environment for processing. Due diligence and investigations practitioners frequently handle sensitive data on behalf of their clients, and using such tools means relying on a third party’s information security architecture. It is therefore vital that third-party tools are thoroughly tested from an information security perspective.

Data privacy legislation, such as the European Union’s General Data Protection Regulation, additionally imposes restrictions on the transfer of certain categories of information, such as personally identifiable information, across borders. Transferring data to a third-party tool without ascertaining where the data is hosted and processed carries risk, so legal teams need to work hand-in-hand with practitioners to establish this during the testing and onboarding process. Also relevant on the privacy front is the way AI tools use machine learning, which enables computers to ‘learn’ without direct instruction from a human. When conducting searches via an AI tool, it is important to understand whether the tool will store the submitted data or use it to train and improve its underlying models.

 

“LLMs are predictive tools, not search engines, and they don’t always get it right. They can produce output that appears superficially plausible, but is in reality factually incorrect or based on false references.”

 

The final key area of risk relates to data integrity. As described above, LLMs are predictive tools, not search engines, and they don’t always get it right. They can produce output that appears superficially plausible, but is in reality factually incorrect or based on false references, such as inaccurate weblinks or research papers that appear credible but do not exist. A senior representative of the UK intelligence agency GCHQ recently co-wrote a piece for the Turing Institute in which LLMs’ potential contribution to intelligence reporting was described as akin to that of “an extremely junior analyst: a team member whose work, given proper supervision, has value, but whose products would not be released as finished product without substantial revision and validation”. Clients rely on due diligence and investigations firms to provide accurate and well-sourced intelligence. Relying on the present generation of AI tools for anything beyond speeding up auditable, non-cognitive tasks has the potential to jeopardise this, and gaining confidence in the integrity of an AI tool’s output remains a major hurdle for experienced researchers to overcome.

 

“While AI tools will support the work of due diligence and investigations professionals, the technology has the potential to intensify the complexity of their task and to increase the need for organisations to seek professional investigative support.”

 

And so, to the immediate future. Properly adopted – with eyes wide open to the risks and limitations outlined above – AI’s role in the due diligence and investigations sector has the potential to be significant. It remains unclear, however, whether the current generation of LLMs is sufficiently reliable to deliver that impact, or whether the task will fall to the next generation. Either way, LLMs are likely to bring benefits in the form of increasingly efficient and consistent research processes, enabling practitioners to focus even more on the provision of high-impact analysis and advice to clients. With global instability and uncertainty continuing to rise across numerous risk areas – from geopolitics and security to integrity, climate, and cyber – business leaders need their advisors more than ever to focus on the output that matters most to them as they navigate these challenges.

While AI tools will support the work of due diligence and investigations professionals, the technology has the potential to intensify the complexity of their task and to increase the need for organisations to seek professional investigative support. In a year in which more than half of the world’s population will vote, much has already been written about the potential for AI to amplify the effect of disinformation on electoral outcomes. Its potential impact in the corporate domain has received less attention. In the hands of bad actors, AI tools will make it easier to produce disinformation at scale. In a due diligence and investigations context, practitioners are already regularly instructed by clients to separate fact from fiction, perhaps most notoriously in cases where a company or individual suspects they have been the subject of a Kompromat-style ‘dark PR’ campaign. Technology firms, governments and regulators have their work cut out to ensure that law and regulation keep pace with technological developments.

The advent of AI may reshape some aspects of the intelligence industry, but it is unlikely to replace the need for old-fashioned human intuition and judgement when weighing up complex risk decisions.
