
Developing Corporate Governance for AI Use

Experts weigh in on some of the pitfalls of using artificial intelligence and how regulation and corporate governance around these tools could start to take shape


While there is little doubt among private equity firms that artificial intelligence will play an important role in driving up valuations for future deals, there is far less certainty about how to handle AI-specific corporate governance. Although regulators are actively working to establish rules for the use of AI, most private equity firms have done little to create policies for the technology, either at the firm level or at their portfolio companies.

“Right now, it still feels a little like the wild West, and companies are all over the place. They’re figuring out where to test the waters and how to do it in a way that is risk-appropriate until they can get more comfortable with the technology and have the appropriate processes and policies in place,” says Glynna Christian, head of law firm Holland & Knight’s Global Technology Transactions practice.


This section of the report originally appeared in the Fall 2024 issue of Middle Market DealMaker.


“This is all still super new, and lawyers are just catching up with this in the last six to 12 months,” she adds, noting that the number of attorneys focusing on the issue seems to have doubled over the past year. As lawyers get up to speed, Christian says conversations about corporate governance and risk are beginning to be elevated to the board level.


Lawsuits against AI platforms in the U.S. and Europe, with allegations ranging from copyright infringement to outright data theft, are also spurring PE executives to pay attention. Getty Images filed suit against Stability AI alleging that the platform copied millions of its copyrighted images and that its model can be used to replicate the style of Getty’s artists. A similar copyright suit was filed against OpenAI by two authors who allege that the company copied more than 300,000 books to train its algorithms. Meanwhile, PE firms’ own fears about having their data and proprietary information compromised by AI technology loom large and have contributed to the slow pace of adoption within the industry.

The Regulatory Landscape

While comprehensive AI legislation has yet to emerge in the U.S., multiple regulations are already in place. Federal agencies including the Department of Justice, the Securities and Exchange Commission and the Consumer Financial Protection Bureau have issued guidance. The White House Office of Science and Technology Policy has issued its own guidance, and multiple bills have been proposed in Congress. There is also a wide range of state-level consumer protection laws. Although nearly all share common themes, such as data and privacy protection, bias prevention and mitigation of false information, the laws differ significantly from state to state.

Such inconsistencies and a lack of comprehensive AI regulations are making it difficult for PE firms to identify best practices. “There are a lot of AI regulations coming, especially at the state level in the U.S.,” says Avi Gesser, a partner and co-chair of the data strategy and security group at law firm Debevoise & Plimpton. “This is going to lead to a patchwork of inconsistent, sometimes vague, overlapping regulations on a technology that is changing quickly and can be used in all aspects of business, ranging from simple, low-risk uses like brainstorming, to complex, high-risk use cases like deciding who should be hired or which customers should get large amounts of credit.”

The European Union’s Artificial Intelligence Act (the AI Act) might offer clues about how U.S. regulations will progress. A 108-page document outlining EU regulators’ plans to monitor how AI systems work and how businesses are using them, the AI Act applies to any organization that uses AI and does business within the EU.

The law focuses on multiple aspects of AI, from how algorithms or apps use and protect people’s personal information and conform to existing data and privacy laws, to the prohibition of “unacceptable risks,” such as deceptive tactics to influence people’s decision-making. Other aspects include documentation requirements and the ability to demonstrate compliance upon request.

Though the law was passed by the European Parliament in early March and approved by the EU Council in late May, industry experts say the regulation wasn’t well thought out and there is little clarity about which divisions of government will ultimately be responsible for executing it. “Just because something is a regulation doesn’t mean it’s going to stick. … They put a hard year into the AI Act and were dedicated to getting it out the door, but when regulations get passed in a way that leaves so many people asking so many questions, it makes it harder for the next round of laws to get attention,” notes one industry expert.

Risks on the Radar

Regardless, the increased regulatory focus has the PE industry’s attention. “It’s still early days, but there is no question AI governance is becoming a more frequent topic on due diligence questionnaires,” says Ken Bisconti, co-head of financial software company SS&C Intralinks. “There is a wide variation in the market in terms of comfort around AI and how an acquisition target approaches its governance.” Intralinks itself uses AI tools in its products, which host data rooms for companies going through M&A processes.

Though there are significant differences in the regulations that have emerged, a central concern of regulators is data privacy and security. While regulators are largely focused on third-party risks regarding consumer data, as well as potential biases within AI platforms that could negatively impact individuals, a major concern for PE firms is the possibility that AI tools—particularly generative AI—could result in leaked proprietary data and confidential information. “Firms want to ensure AI systems don’t inadvertently leak sensitive data such as personal information, trade secrets or intellectual property. They are focused on implementing robust cybersecurity measures to protect AI models and training data from unauthorized access and manipulation,” says Bisconti.

Another area of significant concern for PE firms is bias in AI-generated output, which occurs when human biases are unknowingly built into AI algorithms and skew their results. One notable example comes from Meta’s Galactica large language model, which the company shelved after the tool generated biased and inaccurate content.

Despite all the talk about hallucinations (incorrect or misleading information generated by AI models), Quantum Crow Advisory’s Tonya J. Long, an expert in digital transformation and AI, notes that such instances are far less frequent than they were a year ago. Instead, she says, a more common problem is user error among people operating these new technologies without adequate experience or training. “The term ‘hallucinations’ suggests there’s a problem with the product. But I think we need to flip the narrative and do a better job coaching our people and their output,” says Long. “When unskilled users blindly interact with AI tools without applying context to the responses, it will result in risky and improper use of unvetted data.”

Implementation Questions

Even firms that recognize the need for AI-specific corporate governance struggle to establish it. One of the biggest stumbling blocks is identifying which individuals or groups should be responsible for oversight. “It’s not just PE firms; all of corporate America is trying to determine who is the right person to be involved in making the critical decision on AI and strategy. But it has to be cross-functional because there’s a lot of risk involved,” says Anthony Diana, a partner in the emerging technologies group of law firm Reed Smith.

“Clearly, IT departments should be involved because they’re looking at productivity and they have the use cases,” Diana continues. “But then you need compliance, privacy, legal, security and all those players to be involved in decision-making because they’re all implicated. … It’s too hard for any one person to be able to do the evaluation.”

Generational issues pose a challenge as well: within most boards, there is a dearth of knowledge and expertise regarding AI. “The older folks don’t understand it, use it or think that much about it, which is another reason that there’s not a lot of governance. They don’t know what they’re dealing with or talking about,” says Andy Armstrong, an audit partner at accounting, consulting and technology firm Armanino.

Armstrong says it is crucial for the senior people within PE firms to educate themselves about AI so that they can understand it, recognize the dangers and understand where and why AI governance is needed. Often, executives are blind to the extent that AI is already present in their business. “I think leaders would be blown away by sitting down with some of their younger, lower-level people and realizing how much it’s being used in their companies,” he says, citing the fact that many professionals use ChatGPT to help with writing, or AI transcription tools such as Otter.ai or Speak.ai to automatically take notes in meetings.


Before the Burn

Additional areas of AI-specific concern within the PE industry include transparency and explainability, particularly the ability to explain to shareholders and regulators how AI systems operate and factor into decision-making. Ethical questions around issues such as potential job losses also loom, though many industry experts believe that any positions eliminated because of automation will ultimately be offset by the creation of other types of support roles.

As the PE industry wrestles with the challenges of putting AI-specific governance into place, understanding the ins and outs of AI usage and risks is critical. Firms need to undertake widespread educational efforts from the board level down, experts say, as well as across the leadership of portfolio companies.

“AI is the most cross-functional technology I’ve encountered in my 25 years in the tech industry, and it’s crucial for teams to work and prioritize together,” says Quantum Crow Advisory’s Long. She believes widespread, consistent AI governance is still quite a way off.

Because of the challenges around AI governance, it’s likely that firms will drag their feet until the issue becomes imperative. Says Armanino’s Armstrong: “Most companies are reactive, not proactive. So, I think most of these companies, especially in the middle market, are not going to put AI governance into place until they get their butts kicked by it. … And that’s the problem with governance. Their people are using AI, but they haven’t been burned yet, or they don’t know they’ve been burned yet. Unfortunately, governance seems to follow costly mistakes.”


Britt Erica Tunick is an award-winning journalist with extensive experience writing about the financial industry and alternative investing.

Middle Market Growth is produced by the Association for Corporate Growth. To learn more about the organization and how to become a member, visit www.acg.org.