In the Picture


Unwrapping the AI Act: a new surprise for M&A transactions

December 2024

Imagine...

In recent years, your company has invested heavily in innovation through the use of AI systems within the work environment. You don't regret it: the company is performing better than ever, and several candidates have come forward to acquire it.

During a strategic meeting about the launch of the acquisition process, your legal advisor explains that a new regulation imposing strict requirements on the development and use of AI was recently adopted. This regulation will also have an impact on the acquisition process.

This surprises you. You ask for more explanation and advice.

A brief clarification.

On 1 August 2024, the European Regulation on Artificial Intelligence (the ‘AI Act’) came into force. The AI Act takes a risk-based approach, classifying AI systems into four risk categories, each subject to its own set of rules: unacceptable risk (“prohibited AI practices”), high risk, limited risk and minimal risk. The AI systems most commonly used within companies to date, namely chatbots and AI assistants such as ChatGPT and Microsoft Copilot, in principle qualify as AI systems with limited risk.

The obligations of the AI Act will apply in phases over the next three years, starting with the ban on prohibited AI practices, which applies from 2 February 2025.

In addition to extensive obligations for providers, importers and distributors of AI systems, the AI Act also contains obligations for “deployers” of AI systems, if they are established in the EU or, where established in a third country, if the output of the AI system is used in the EU. The broad definition of deployer means that any company that uses an AI system under its own responsibility for professional purposes is subject to the AI Act.

For example, regardless of the risk classification of their AI systems, companies must take measures to ensure a sufficient level of AI literacy among their staff and comply with additional transparency obligations for certain AI systems. For high-risk AI systems (such as most AI systems used in education, human resource management and healthcare), the deployer's obligations are more extensive. These include taking appropriate technical and organisational measures to ensure that AI systems are used in accordance with their instructions for use; providing for human oversight by qualified persons; ensuring relevant and representative input data (to the extent that the input data is under the deployer's control); and monitoring the functioning of the AI systems.

Failure to comply with the provisions of the AI Act can give rise to significant administrative fines of up to €35,000,000 or 7% of the offender's total worldwide annual turnover, whichever is higher.

Why is this relevant for an M&A transaction?

The impact of the AI Act on business operations should not be underestimated: the use of AI systems within the work environment is becoming increasingly important, and the AI Act has a broad scope of application. This impact will also extend to M&A transactions.

When preparing for an acquisition, it is important to map the AI systems of the target company. The seller should, preferably proactively, identify its AI systems and collect all relevant information (for example, which data the system was trained on, and how and by whom it was developed). The prospective buyer, for its part, must as part of its due diligence properly analyse this information to identify any risks.

The parties should also consider the AI Act when drafting the share purchase agreement (the ‘SPA’). Specific representations and warranties may be requested to address AI-specific risks (for example, in terms of intellectual property, data protection, IT & cybersecurity, and compliance with laws). Depending on the importance of AI to the target company, these representations and warranties may be subject to separate limitations of liability in time (‘survival periods’) and amount (‘caps’). The conduct of the target company between the signing of the SPA and the closing of the transaction should also be considered: a covenant may, for example, prevent the target company from changing the nature of the training data, its AI policy and/or its AI procedures. Finally, any concerns identified during the due diligence can be expected to give rise to specific indemnities from the seller.

More generally, the AI Act is also likely to affect other aspects of the acquisition process, such as the valuation of the target company and conditions precedent aimed at rectifying any non-compliance with the AI Act before the closing of the transaction.

In short, just like the General Data Protection Regulation, compliance with the AI Act will become a focus of the M&A process. A first point of attention for companies is to identify all AI systems that they use or develop, and to ensure compliance with their obligations under the AI Act.

A good resolution to start the new year with...

Concretely.

  • The AI Act entered into force on 1 August 2024.
  • The AI Act imposes obligations on all actors in the AI value chain, including so-called “deployers” of AI systems, regardless of the sector concerned.
  • Failure to comply with the AI Act can result in significant administrative fines of up to €35,000,000 or 7% of the offender's total worldwide annual turnover, whichever is higher.
  • In the context of M&A transactions, the AI Act will be an additional focus during all phases of the process, from due diligence to the negotiation and drafting of transaction documents.
  • The various provisions and obligations of the AI Act will apply in phases over the next three years.
  • The European Commission will publish guidelines on the practical implementation of the AI Act.

Want to know more?

You can find the full text of the AI Act (Regulation (EU) 2024/1689) on EUR-Lex.

Please consult our website or contact one of our team members if you have questions or require more information.
