Private equity and AI: GPs turn their focus toward portfolio company upgrades
The evolution of private equity’s interest in artificial intelligence (AI) is handily traced through the history of EQT’s Motherbrain programme. The platform, launched in 2016, has been used for everything from identifying bolt-on acquisition targets to sourcing VC deal flow, to tracking down the optimal industrial advisor in the firm’s existing network. But now it’s really getting its hands dirty.
In recent years, more emphasis has been placed on Motherbrain Labs, which acts like a consulting arm for the parent platform, unpacking the risks and opportunities of AI for specific portfolio companies – ranging from product and service development to ways of interacting with customers. This analysis is delivered as pre-deal guidance at the investment committee level.
“That’s going to become part of our value creation plan. Having teams of people who understand AI interfacing with our investment team is very valuable – and it’s almost becoming a necessity,” said Jean Eric Salata, chair of EQT Asia and head of EQT Private Capital Asia.
“If you’re not doing that, you’re going to fall behind. There’s a lot of change happening, and it’s going to be a continuous learning process for all of us.”
Industrywide, the growing focus on plugging AI into the portfolio, rather than on back-office automation within the GP, is a natural outcome of a gradual acknowledgment that business models across sectors are changing quickly. By the time an investee company is ready for exit, its operating landscape may look significantly different, and buyers will judge it against higher standards of AI readiness.
Strategic buyers are likely to be the most demanding in terms of judging assets on AI readiness in their respective industries. But even financial investors already know what they’re looking for, and to date, they’re not seeing it.
In a recent survey of private equity investors by Bain & Company, 83% of respondents said they want to see AI data readiness in potential investees, yet only 40% of the companies they evaluated were deemed to have achieved that status.
Likewise, three-quarters said they want to see a bold AI strategy, and two-thirds expected companies to demonstrate prioritised use-cases for AI with defined metrics and return on investment (ROI). The survey found that only 41% of prospective investments were up to snuff on the former and only 33% on the latter.
“Can we go beyond companies just saying on one slide of the CIM [confidential information memorandum] that they have an AI plan?” said Gene Rapoport, a partner at Bain and head of global generative AI initiatives for the private equity practice.
“Can you actually state that not only do we have a plan, we’ve executed against it, we have the right team, we’ve delivered pieces of it? Where we need to take out costs, we’ve done the initial version of that. The fund buying that target is going to have more confidence. It might even bake those AI savings into their value creation plan and their deal model. That is very compelling for funds.”
Shifting sands
Sumeet Gupta, digital and AI practice lead at FTI Consulting, has been implementing AI commercially since the early 2000s and has recognised a shift in private equity sentiment toward the subject in the past three to four years. That is partly driven by concerns about exit positioning.
In this view, private equity firms that historically targeted specific operational solutions when integrating AI into portfolio companies are adopting a more holistic approach. With the advent of large language models (LLMs) and generative AI, there is an awareness that businesses are not just getting faster and more efficient via automation – they’re fundamentally changing what they do.
“What I’m telling my partners is don’t think of investment in any of these foundation layers as thrown money because that will actually increase your asset valuation when you go for an exit,” Gupta said. “If you demonstrate that you’ve done a good job managing your data, for example, that is actually going to give a valuation boost. We’re seeing that now.”
General Atlantic has described the buildout of its AI capabilities as the biggest platform shift in its 44-year history. In this area, the firm is best known for its decision-making algorithm, Ada, a kind of non-voting investment committee member. Less ballyhooed is that, over the past four years, AI has quietly become a core lever of the value creation team.
Priority portfolio support areas include AI-enabled go-to-market strategies, revenue growth, and optimised product pricing, according to a General Atlantic spokesperson. The standout case-study is the application of a self-learning AI prediction engine to help Danish juice bar chain Joe & The Juice expand its store footprint. As of last June, the plan was to grow from 400 to 1,000 locations.
Similarly, TPG has established a global AI taskforce, staffed by both investment professionals and operational personnel, focused on portfolio support. Speaking at the AVCJ Private Equity Forum 2024, Steve Duncan, an Asia-based managing director at TPG, said this effort benefited from the firm’s headquarters being near the forefront of AI development in San Francisco.
“We just want to be very clear on what is going to be that next application layer that’s coming out of the infrastructure that’s being developed, and then be in a position to roll out that best practice to our portfolio companies,” Duncan said.
“We will have dedicated ops personnel who would be able to run best practice where it makes sense. It’s not for every company. A food manufacturer might not have too much need for AI, but by the same token, software or service investments where there are repeatable tasks, that creates a great value creation opportunity.”
Platform plays
For the bulk of the private equity industry, the most fundamental takeaway from such developments is that the leading players are forming centralised AI portfolio support platforms.
In any given portfolio, multiple companies will be suitable targets for comparable LLM or gen AI buildouts. The worst approach would be to blow out capital expenditure by addressing each one separately.
Advice for deploying AI capabilities into portfolios universally extols a top-down approach of defining objectives, finding common themes in the portfolio around functions and workflows, and then finding use-cases appropriate to those areas.
The starting point is to map out the portfolio to determine which businesses have models or are in industries that are being fundamentally transformed rather than merely enhanced by AI. Bain’s Rapoport estimates that about half of the portfolio companies that his firm has advised on this point fall into either camp. Those in industries being transformed must be prioritised in the AI value-add plan.
Core to the thinking here is how a given portfolio company’s AI buildout will be viewed by potential buyers down the track. Experiments with generative AI and LLM-informed predictions that don’t play to a broader industry theme or move the needle on important metrics will not find favour.
“A portfolio company might be busy with four use-cases, and they’ll say, ‘We have an AI strategy and it’s all together.’ But when you press them on it, suddenly there are unresolved questions,” Rapoport explained.
“Have you thought about the strategic impact of this technology on your market and the competitive landscape? Does this change the market drivers of the market you operate in? Where does this impact the value proposition of your product?”
The assessment of AI’s impact at the company level should acknowledge that, within the same business, some lines will be more affected than others. Attention must also be paid to the extent to which new products can or should be developed using AI.
The comprehensive approach is meant to maintain a focus on solving business problems and redesigning business processes at a time when there is a race to build out capabilities before practical outcomes are established.
“You need ideation and prioritisation, and then simulation before you actually try to implement something and say, ‘Oh, it didn’t work.’ That’s a mistake a lot of companies made in 2024, which I think is getting better,” said Akash Takyar, founder and CEO of technology consultancy LeewayHertz.
“Everyone is excited because AI can now write articles. Yeah, but did you look at what is the ROI [return on investment]? It’s been all very tactical POCs [proofs of concept], not looking at the bigger picture and comparing everything. When you go top-down more strategically, you will find the right option. It’s the ROI at the end of the day.”
Digesting data
The key bottleneck in all things AI is data, and that is set to become more problematic as PE firms shift from internal tinkering to portfolio implementations. GPs can generally rely on the quality of in-house data used for training back-office automations. In the portfolio, it’s often a mixed bag.
Indeed, Daniel Angelucci, Alvarez & Marsal’s digital and technology services lead for Southeast Asia and Australia, observes that private equity firms generally give too little scrutiny to companies’ data quality during pre-investment due diligence.
He recommends examining how data is managed from a data engineering and architecture perspective. Much of this is about ensuring data can be reused, so due diligence should include looking for software tools like Databricks or Microsoft Fabric, which facilitate repeatable processes.
Data validation, data cleansing, and data enrichment processes must be reviewed to understand how various data sources are deemed authoritative or not. The assessment also covers data governance areas such as access controls and security, as well as whether the target company has historically been able to derive actionable insights from the data.
“If we can say they are in a position to build out a generative AI capability using the datasets they have with a couple of tweaks here and there, that’s hugely valuable. But often they’re not doing that diligence against the data quality of potential portfolio companies,” Angelucci said.
“PE needs to leverage that value capability over time, so my advice is to make the necessary investments to make it happen.”
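By way of illustration, a first pass at the kind of data-quality profiling such diligence starts with can be very simple. The sketch below is a generic example rather than a description of Alvarez & Marsal’s methodology; the table, column names, and checks are hypothetical.

```python
# Hypothetical sketch: a first-pass data-quality profile for a target company's
# customer table. Column names and the sample data are illustrative only.
import pandas as pd

def profile_table(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise completeness, uniqueness, and data types per column."""
    return pd.DataFrame({
        "missing_pct": df.isna().mean() * 100,   # share of missing values
        "unique_values": df.nunique(),           # distinct values per column
        "dtype": df.dtypes.astype(str),          # declared data type
    })

# Small synthetic extract standing in for a real data-room export.
customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, None],
    "country": ["SG", "SG", "SG", "AU", None],
    "annual_spend": [1200.0, 450.0, 450.0, None, 900.0],
})

print(profile_table(customers))
print("Duplicate rows:", customers.duplicated().sum())
```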
Even when companies are screened in this way, data quality remains a barrier. This is the case even for the likes of EQT, which typically adds data scientists to its due diligence team with a view to determining whether target companies can feasibly pursue future AI projects.
“Sometimes we find that the value-creation initiative can’t be done due to factors like poor data or processes in the company – but getting to that result quickly is a great result in itself,” said Petter Weiderholm, head of Asia Pacific in EQT’s digital division.
“Failure in this format provides critical insights to our deal teams, helping them identify and address foundational issues that need resolution for future success.”
Weiderholm said his firm has delivered consistent value creation by using AI to find M&A targets for its portfolio companies. This process uses LLMs to pull together public data in various formats relevant to geographic presence, market positioning, and scale. This data is then translated and mapped to other proprietary databases.
The portfolio company can use the same methodology to further extend its pipeline of potential targets. Motherbrain Labs does not own the technology or operate it on behalf of the portfolio companies; it simply advances the investee’s AI capabilities.
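For illustration only, the sketch below shows the general shape of such a pipeline – an LLM extracts structured attributes from public text, which are then matched against an internal database. It is not EQT’s tooling; the call_llm helper, the field names, and the matching logic are all hypothetical stand-ins.

```python
# Hypothetical sketch: extract structured company attributes from public text
# with an LLM, then match them against an internal (proprietary) target list.
import json
from dataclasses import dataclass

@dataclass
class TargetProfile:
    name: str
    countries: list[str]   # geographic presence
    positioning: str       # e.g. "premium", "value", "niche"
    revenue_band: str      # proxy for scale

EXTRACTION_PROMPT = """Extract the company's name, countries of operation,
market positioning, and approximate revenue band from the text below.
Respond with JSON only, using keys: name, countries, positioning, revenue_band.

Text:
{text}"""

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned response so the sketch runs."""
    return json.dumps({
        "name": "Acme Industrial Services",
        "countries": ["DE", "AT"],
        "positioning": "niche",
        "revenue_band": "EUR 50m-100m",
    })

def extract_profile(public_text: str) -> TargetProfile:
    """Turn unstructured public text into a structured target profile."""
    fields = json.loads(call_llm(EXTRACTION_PROMPT.format(text=public_text)))
    return TargetProfile(
        name=fields["name"],
        countries=fields.get("countries", []),
        positioning=fields.get("positioning", ""),
        revenue_band=fields.get("revenue_band", ""),
    )

def match_to_pipeline(profile: TargetProfile, internal_db: list[dict]) -> list[dict]:
    """Map the extracted profile onto internal records by name or geography."""
    return [
        record for record in internal_db
        if record["name"].lower() == profile.name.lower()
        or set(record.get("countries", [])) & set(profile.countries)
    ]

internal_db = [{"name": "Acme Industrial Services", "countries": ["DE"]}]
profile = extract_profile("...public filing or news text...")
print(match_to_pipeline(profile, internal_db))
```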
“We look for – or look to invest in the development of – a mature digital structure, including a forward-leaning leadership that is curious about technology, modern infrastructure and integration architecture that enables them to get data out of their systems,” Weiderholm said.
“We find that the data quality is typically aligned with a company’s technology maturity. If the company is run primarily by emailing Excel files around, it is hard to get high-value results with AI, beyond basic pilots. In that instance, we typically need to invest in the basics.”
Human factors
The most universal business model aspect being disrupted by new forms of AI is human resources. The effects can be counterintuitive, however, which means private equity firms must be careful about how to address the issue in portfolio companies.
Business process outsourcing (BPO), for example, appears to be a natural segment for deploying generative AI in the form of chatbots that reduce call centre headcounts. But Alvarez & Marsal’s Angelucci observes that this may not work, because the customers who are happy to interact with chatbots are not a substitute for those who prefer to deal with people.
As a result, BPO companies cannot reduce call centre workforces and realise the expected savings. Instead, they are generating value from deeper interactions. This allows for more sophisticated call queuing, whereby customers who prefer a live receptionist get one and the others do not.
Alvarez & Marsal is building a model to see how AI impacts HR. One of the early findings is the idea that LLMs effectively allow inexperienced employees to simulate experience.
“Employees with a lot of insight and little experience – the brilliant people hired just out of university – are your most valuable assets, and they are even more so now because with AI, you can give them the value of experience without having to wait,” Angelucci said.
“People with a lot of experience and not much insight are therefore at risk of replacement. Insight is now much more critical than experience when you hire. That’s going to make a huge difference in how PE firms build out workforces.”
Other areas of development include predictive models to forecast appropriate inventory levels for retailers. LLMs and neural networks are believed to make this possible even in situations where the patterns of consumption have been disrupted by an event like COVID-19, making traditional trend projections impossible.
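As a toy illustration of what such feature-based forecasting can look like, the sketch below trains a small neural network on synthetic weekly sales data. The data, features, and model choice are assumptions for demonstration, not a description of any firm’s production system.

```python
# Toy sketch: forecast weekly demand from lagged sales plus a calendar feature.
# All data here is synthetic; in practice the inputs would come from the
# retailer's sales history, promotions calendar, and external signals.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
weeks = 200
base = 100 + 20 * np.sin(np.arange(weeks) * 2 * np.pi / 52)  # annual seasonality
sales = base + rng.normal(0, 5, weeks)                        # plus noise

# Feature matrix: the previous four weeks of sales and the week of year.
X, y = [], []
for t in range(4, weeks):
    X.append(list(sales[t - 4:t]) + [t % 52])
    y.append(sales[t])
X, y = np.array(X), np.array(y)

# Hold out the most recent 20% of weeks as a test set (no shuffling for time series).
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print("Mean absolute error:", np.mean(np.abs(model.predict(X_test) - y_test)))
```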
Meanwhile, marketing and media-related industries are expected to be among the biggest beneficiaries of generative AI, especially where companies can quickly eyeball whether the machine-made product is working or not. The most underestimated risk with generative AI is that, by its nature, prototyping is fast. This has created a false impression that it can be quickly integrated.
Balancing act
The critical balancing act in pursuing upgrades in this vein is to go neither too fast nor too slow. FTI’s Gupta notes that it is common for companies to spend USD 10m on an AI project without doing a proper economic analysis of what business problem is being solved.
At the same time, he sees private equity firms as generally lagging Fortune 200 companies in terms of AI experimentation. In areas such as frontline sales services, marketing, and customer engagement, generative AI can lower costs by 20%-30% – but only for those taking the plunge.
“That’s one of the most common pitfalls I see with companies. They say, ‘This is all hype. I’m not going to do anything. We’ll just let it mature.’ You can let it mature, but you might actually lose some advantage or some level of thinking that you may have been able to explore,” Gupta said.
“Even if you’re sceptical, you have to provide the leeway [to portfolio companies]. I’ve had CEOs tell me that they don’t get the investment from the PE to take a few risks. That’s a problem.”