How GDPR and the EU AI Act Impact M&A Transactions in the IT Sector

GDPR and the new EU AI Act today directly impact the valuation of IT companies and the flow of M&A transactions. For Serbian IT firms targeting the EU market, the key question is whether the code, database, and AI model being sold are truly compliant with GDPR and EU AI Act rules. Below is a practical guide on how these regulations affect due diligence, contracts, and post-closing risks.

In modern M&A transactions in the IT sector, you’re no longer just buying the company and team – you’re buying code, data, and AI models. The value of the target increasingly lies in the user base, historical datasets, and the algorithms that process them.

That’s why the GDPR and the EU AI Act are no longer a footnote in the contract, but a factor that can:

  • raise or destroy the valuation,
  • prolong negotiations for months,
  • or ultimately – completely halt the transaction.

Especially for Serbian IT companies working for EU clients, the question is simple: can what is being sold actually be used in the EU without a legal landmine beneath the surface?

For M&A transactions in the IT sector, the most critical GDPR elements are: legal basis for processing, DPIA, contracts with processors (DPA), and data transfers outside the EU. With the EU AI Act, the focus is on the classification of AI systems (high-risk, general-purpose, foundation models) and documentation that accompanies the model’s lifecycle.

We’ve long known GDPR as the “standard” for personal data: legal basis, transparency, minimization, security, DPIA, contracts with processors, transfers outside the EU. But in an M&A context, the question is no longer “does a privacy policy exist,” but:

  • do the actual data flows match what’s written on the website,
  • can user rights actually be operationally fulfilled,
  • is there a history of incidents that will “surface” after the acquisition.

The EU AI Act adds a new layer: classification and control of AI systems. Particularly under scrutiny are:

  • high-risk systems, with strict requirements around data, oversight, and documentation,
  • foundation and generative models, where the key is where the training data comes from and how third-party copyrights are protected.

For the buyer, the question is no longer just “does the target use AI,” but whether that AI can legally be sold in the EU tomorrow – and under what conditions.

There are three typical places where the M&A story breaks down.

The first is intellectual property. A common picture: the product looks impressive, and then it’s discovered that:

  • key parts of the code were written by freelancers without proper transfer of rights,
  • open-source components with copyleft licenses “contaminate” the entire code,
  • it’s unclear who actually owns the AI model (weights, pipeline, datasets).

On paper – modern SaaS. In practice, an IP mosaic that the buyer can hardly take over with peace of mind.

The second is consent. In due diligence, “we have user consent” often turns out to mean:

  • a checkbox written five years ago,
  • overly broad processing purposes,
  • reliance on “legitimate interest” without serious analysis.

For the buyer, this means a potential risk of mass user complaints, regulatory proceedings, and the need to fundamentally redesign the product to make it compliant.

The third – the origin of AI training data – is probably the most slippery terrain.

Ask a couple of seemingly simple questions:

  • Where is the training data from?
  • Under what conditions was it obtained?
  • Can it even be used for that purpose?

It often turns out that the answer is a combination of web scraping, “public” datasets of unknown origin, and commercial sources without a clear license for AI training. In the world of the EU AI Act, this is no longer a detail for internal discussion – it’s a potential deal-breaker.

Share and asset purchase agreements (SPA/APA) today increasingly include a separate block dedicated to data and AI.

In addition to the general clause “target complies with all regulations,” buyers seek concrete representations that:

  • the business is compliant with GDPR,
  • adequate policies, DPIAs, and contracts with processors exist,
  • AI systems are properly classified and, where necessary, compliant with the EU AI Act,
  • the target has all rights to code, models, and training data.

It’s not uncommon for these representations to be tied to:

  • special indemnity clauses,
  • extended liability periods,
  • escrow or hold-back mechanisms until it’s confirmed that there are no “hidden skeletons” in the data & AI closet.

GDPR fines can reach 20 million EUR or 4% of global annual turnover, whichever is higher, which means that an incorrect assessment of GDPR/AI risk in an M&A transaction directly threatens the return on investment.
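To see why this cap matters at deal scale, the fine ceiling can be sketched as a simple calculation. This is an illustrative sketch of the Art. 83(5) GDPR formula only (the two thresholds are from the Regulation; the function name and the example turnover figure are hypothetical):

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of an Art. 83(5) GDPR fine:
    EUR 20 million or 4% of global annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# A target group with EUR 1 billion in global turnover: the 4% prong
# (EUR 40 million) exceeds the EUR 20 million floor.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
```

In other words, for any buyer whose group turnover exceeds EUR 500 million, the 4% prong – not the flat 20 million – sets the exposure, which is why the fine scales with the acquirer, not just the target.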

Even when the transaction closes, GDPR and EU AI Act remain as a shadow over the new owner:

  • Regulatory risk – fines, processing bans, mandatory compliance measures.
  • Reputational damage – one serious incident or media story about AI abuse can destroy brand value in a market the target is only just entering.
  • Technical debt – “compliance retrofit” of a product that was never built with GDPR/AI Act in mind can be more expensive than the purchase price itself.

Put simply: today the buyer isn’t just buying what the system does, but also how much it will cost to make it work in accordance with the rules tomorrow.

In practice, well-prepared GDPR and EU AI Act compliance can significantly speed up due diligence and increase the valuation of the IT target.

A seller who wants to maximize the price and speed up the process should, before opening the data room:

  • clearly map which data it processes, why, and on what legal basis,
  • review privacy documentation and contracts with processors,
  • check IP status of code, models, and datasets (especially freelancers and open-source),
  • conduct a basic review of AI systems: purpose, type, data sources,
  • document incidents and the steps taken to resolve them.

Such a seller not only looks more serious but also sends a message that data & AI compliance is part of the strategy, not a necessary evil.

The buyer should, in parallel with financial and legal due diligence, run a separate data & AI workstream:

  • analysis of the business model through the prism of data,
  • verification that the key assets (code, models, datasets) can be legally used and scaled in the EU,
  • assessment of remediation costs – how much it will cost to actually bring the product to the level EU regulation requires.

Those who skip this often only realize after closing that they bought a product that needs to be “rewritten from scratch” to survive in the market that was the target of the acquisition.

In due diligence, the most common starting point is the legal basis for data processing, contracts with processors (DPAs), records of processing activities, and incidents and data breaches from the past few years.

And these rules reach Serbian companies too: if you place an AI system on the EU market or track the behavior of users in the EU, the EU AI Act will apply.

Buyers protect themselves through detailed data & AI due diligence, special representations & warranties clauses, and, if necessary, escrow/hold-back structures tied to identified risks.

If you are planning an M&A transaction in the IT sector, timely GDPR and EU AI Act analysis can make the difference between a successful deal and an expensive legal trap.

M&A in IT is no longer just about the number of users, MRR, and roadmap. Behind every growth chart lies the question: are the code, data, and AI models on which everything rests legally and regulatorily sustainable?

The answer to that question today is largely determined by GDPR and the EU AI Act. Those who ignore them in negotiations are essentially agreeing to convert part of the purchase price into risk.
