Artificial intelligence takes over the world, but dealmakers warn of need for deeper due diligence – DealTech

DealTech covers innovation, new technology and emergent trends in M&A and private equity. If you would like to give us any feedback, please contact [email protected]

  • Copyright issues for datasets are a significant risk factor
  • Concerns prevalent in life sciences deals, but spread to all sectors
  • Crunch time for regulations looms

Artificial intelligence (AI) may be taking over the world, but it still has to comply with older regulations. This means that M&A practitioners now have to wrestle with new issues and risks when doing deals.

One significant deal risk concerns copyright issues relating to the dataset used to train the target’s AI systems, said Gina Bicknell, partner at Pinsent Masons. If the AI has used datasets scraped from the internet and it did not have lawful access to the websites, this is a potential red flag in due diligence, she said.

James Baillieu, partner at Bird & Bird, said: “If the target is using a particularly good dataset, but doesn’t have the necessary consents or permission to use it, the target may need to obtain those permissions – which may be costly – or remove the offending data and then replicate it from other sources, which may or may not be possible. This may naturally impact valuation and integration.”

The worst-case scenario could involve a deal being scrapped altogether, Baillieu said. “I’m not aware of any transaction that has been stopped, but acquirors I’ve advised may have walked away from transactions – or demanded significantly lower prices – if there were material issues with the datasets used by the AI tools.”

Deeper due diligence is required when dealing with AI-driven targets, and these deals are becoming more and more common, Bicknell and Baillieu agreed. “It’s an emerging technology, and everyone is still learning about it”, Baillieu said.

To avoid delays and other issues in deals, it is important to get the right people – such as intellectual property (IP) lawyers – involved in the discussions at an early stage, said Pinsent Masons partner Cerys Wyn Davies. “Everything doesn’t have to be risk-free, but you have to understand the risks”, she said.

Attention on life sciences deals

So far, Baillieu has seen AI-related legal issues arise mainly in life sciences and technology transactions, he said. The data in question can include, for example, drug pricing information, or data used to identify and accelerate the development of drug candidates, he said.

Until the start of last year, financial services and life sciences stood out as the sectors most engaged in the use of AI, but as AI becomes an essential part of all businesses, it is important to be aware of the risks regardless of sector, Wyn Davies said.

AI-related M&A has trended upward in recent years, from 342 deals globally worth a total of USD 7.7bn in 2018 to 565 deals worth USD 19.5bn in 2023, Mergermarket data shows. In Europe, activity rose from 60 deals worth a total of USD 1.1bn in 2018 to 150 deals worth USD 2.8bn in 2023.

One deal that completed last year was UK-based Envision Pharma Group’s acquisition of OKRA.ai, which uses AI to provide insights and predictions to commercialise pharmaceuticals.

The largest European AI deal of 2023 was BioNTech’s [NASDAQ:BNTX] acquisition of UK-based InstaDeep for an upfront GBP 362m (EUR 422m) consideration. The acquisition will support BioNTech’s strategy to build capabilities in AI-driven drug discovery and development of next-generation immunotherapies and vaccines, according to the announcement.

BioNTech felt comfortable doing the deal, as it knew InstaDeep, BioNTech’s Chief Strategy Officer Ryan Richardson said. “We had a strategic partnership before and we were their only biotech client. They used publicly available datasets plus our biological insights”, Richardson said.

However, such deals do carry genuine risk in general, depending on the application field, Richardson said.

BioNTech is also looking at other AI-enabled assets, mainly for partnerships, but would not rule out other AI acquisitions for specific areas, Richardson told this news service in November last year. “There is a battle for AI talent across all industries. This will be a core competency in drug development.”

One AI company backed by venture capital (VC) firms that could come to market is LabGenius of the UK. The company, which is developing an AI platform for drug development, has a score of 54 out of 100, according to Mergermarket’s Likely VC Exit predictive algorithm.* Its CEO and founder James Field told Mergermarket in November that the company wants to raise GBP 30m to develop its own pipeline of antibody assets to validate its platform.

Other deal risks

Aside from copyright issues, there are other AI-related deal risks, Bicknell said. One is the use of open-source software code, which, if obtained under “copyleft” licences, could trigger an obligation to make code available to downstream users, she said.

Assessing cybersecurity robustness and data privacy compliance, for example in relation to patient and client data, is standard in due diligence, but this could be more time-consuming in AI-related deals due to the sheer amount of data that the AI system may process, Bicknell said. Checking the quality of the data used to train the AI system is also important, she added.

While lawyers are very used to looking into, for example, copyright licensing issues during due diligence, AI has made it much more challenging, Bicknell said.

“AI has changed the game. It takes data, assimilates it, transforms it, and often not even the developers and coders understand how it has used the data”, she said. “It can be a black box.”

Sea change in regulation

Given how new the AI field is, regulation is still playing catch-up. This year will see a sea change in regulation, Bicknell said. But the shift will continue beyond 2024 into 2025 and 2026 – these will all be crunch years, Wyn Davies said.

Statutes, laws and regulations governing AI are in their infancy, Bicknell said. One example is the EU AI Act, on which EU officials reached a provisional agreement in December. This will be the world’s first comprehensive law to regulate the use of artificial intelligence. Its safeguards include limitations on the use of biometric identification systems by law enforcement, and bans on social scoring and on AI used to manipulate or exploit user vulnerabilities, among other things. The European Parliament will vote on the proposals early this year, according to reports.

The EU AI Act will affect AI regulation worldwide given the size of the EU market, and will instantly make the EU the compliance high-water mark, Bicknell said, drawing parallels to the EU’s data protection law GDPR, which came into effect in 2018.

“In M&A, due diligence is fast-paced. You often have to measure compliance against the highest watermark irrespective of jurisdiction, particularly for companies with international operations. Let’s assume the vast majority of companies are using AI. Are they using it in compliance with the EU AI Act? This will be the new question that lawyers will ask to assess potential fines and business continuity risks”, Bicknell said.

The EU AI Act is not the only form of regulation, Wyn Davies pointed out. Data protection laws play a major role in regulating the use of training data and other processed data. Additionally, copyright law regulates the use and generation of content as inputs to or outputs from the AI tool. However, the boundaries of copyright law are being tested by its application to AI technologies, she said.

In the UK, the text and data mining (TDM) exception in the Copyright, Designs and Patents Act 1988 allows text and data mining of copyright works for non-commercial research purposes, provided that the user has lawful access to the work. In 2022, following a consultation process, the UK Intellectual Property Office (IPO) proposed to expand the scope of that exception to enable TDM for commercial purposes, in order to support AI training and innovation in the UK. However, this was met with protests from rights holder groups, and the government has now abandoned plans to extend the TDM exception to cover the development of AI for commercial use.

However, the government has confirmed the need to strike a balance between encouraging AI innovation and protecting copyright works.

A new code of practice on copyright and AI, which aims to balance the rights of content creators and AI developers, is currently being developed by the UK IPO and is expected to be published in early 2024, Wyn Davies said.

Other intellectual property offices around the world are doing similar work to clarify the rights of copyright owners and AI developers, she said.

There are also ongoing related lawsuits, including Getty Images [NYSE:GETY] suing text-to-image AI generator Stability AI, and The New York Times suing OpenAI and Microsoft [NASDAQ:MSFT], she pointed out. Authors and musicians have also taken action to protect their works, she said.

For certain, regulation in the AI field needs to catch up with how the technology is evolving, Bicknell said, and M&A processes also need to keep pace.

“Acquisitions of AI assets may need a different framework”, Bicknell said. “Sometimes we can’t quantify the risks and the investor might just have to take a view on whether they’re ready to take the risk”, she said.

*Mergermarket’s Likely VC Exit predictive analytics assign a score to VC-backed companies to help track and predict when an exit could occur through M&A, an IPO, a direct listing or a deSPAC transaction.