New Standards for AI and SaaS Contracts

What happens when your “smart” SaaS system makes a decision that nobody in the company understands, and the client sends you an email asking: “So who is responsible now – you, us, or the algorithm?” In 2025, the answer “that’s just how the software works” is no longer good enough. New standards for AI and SaaS contracts require you to know exactly who bears the risk for model hallucinations, who holds the rights in generated content, and what really happens with user data – especially if you operate from Serbia but target the EU market.

Traditional SaaS contracts were drafted for stable, predictable systems: the software has a bug, you fix it, and move on. AI introduces a completely different dynamic – the model learns, changes, and generates content that even the vendor cannot fully predict. Suddenly, the client is not only buying a “tool”, but a decision-making or content‑generation service, and expects someone to stand behind those outputs. This is why the contract must answer three questions: how much risk the vendor assumes, what counts as an “acceptable” AI error, and when the client is in fact using AI at its own risk.

The first major shift concerns liability clauses. The standard “low cap + broad disclaimer” approach is under pressure, because AI errors can lead to regulatory fines, reputational damage and mass litigation. In practice, we increasingly see specific carve-out clauses for AI-related risks (data breaches, discriminatory outputs, infringement of third-party IP); differentiated liability caps, with a lower cap for general damages and a higher or separate one for privacy and security breaches; and mandatory impact assessments (DPIAs, AI risk assessments) explicitly referenced in the contract. For local vendors, this means that “copy-paste” contracts with generic “no liability for AI decisions” language are a red flag for serious clients, especially those from the EU.

The second front is intellectual property. In an AI + SaaS environment, you must clearly distinguish three layers: the vendor’s background IP (model, code, infrastructure); the client’s data (inputs, datasets, usage history); and the generated outputs (text, images, recommendations). A good modern contract therefore does the following: confirms that the client retains ownership of its data, grants the client the right to use specific outputs for its business purposes at no additional fee, and limits the vendor’s use of that data for training – for example, only in anonymised form, only within the same product, and never for public models. For teams in Serbia, this is particularly important because foreign partners almost always ask: does your model “swallow” confidential data and use it to train anything outside my tenant? If the answer is not crystal clear in the contract, the deal easily goes to a competitor.

The third key area is data protection. Even if you are physically and legally based in Serbia, as soon as you process EU residents’ data through SaaS or AI solutions, you fall under the GDPR umbrella. This means that clients expect clearly defined roles (who is the controller, who is the processor, and whether there is joint controllership); a clear legal basis for processing, especially for profiling and automated decision-making; a transparent description of how the model works (at least at the level of data categories, purposes and risks); and detailed rules on storage, deletion and data transfers. The contract can no longer fit on two pages; a data processing agreement (DPA) is practically mandatory, and for more advanced AI solutions, references to DPIAs and internal algorithm governance policies are increasingly expected.

If you are a Serbian startup or a tech-focused legal team working for such clients, the practical consequences are very tangible. You need a new “AI + SaaS” template that already includes: dedicated sections on liability for AI functionalities, clear IP clauses (especially around model training), and a robust data protection addendum aligned with GDPR, even when local law formally applies. In negotiations with foreign clients, it is a major plus if you can provide an understandable “explainability” section – a short, business‑friendly overview of how your system works, where its reliability limits are, and what it must not be used for. In 2026, legal risk around AI does not disappear, but it can be significantly tamed – provided your contract no longer treats the algorithm as just another piece of code, but as a partner that can make mistakes, sometimes very expensive ones, for both you and the client.

From the client’s perspective, AI and SaaS are no longer a “cool gadget”, but a source of very real legal and business risks. If you operate from Serbia but use cloud solutions to process data of customers, employees or users from the EU, both local legislation and the GDPR come into play, even when your contract is signed with a “foreign vendor”. In practice, every serious client today should at least: request a clear description of AI functionalities before signing (what exactly the system does, on which data, with what limitations); negotiate liability clauses, especially for data breaches and “wrong” AI decisions; and check where the data is physically stored and whether it is transferred outside Serbia or the EU.

For domestic companies, the biggest shock is often that “default” SaaS contracts rarely protect user interests to the extent they assume. Typical contracts favour the vendor: low liability caps, broad permissions to use data, vague deletion language and almost no mention of risk assessments. This is why, for clients in Serbia, it is almost essential to involve legal experts who understand both technology and regulation before embarking on larger AI and SaaS implementations – these are not ordinary commercial contracts, but complex legal relationships designed to allocate serious risks. Skipping that step is like buying high-quality surgical thread, good anesthesia and instruments – and then deciding to perform the surgery yourself.

In the end, every client has to answer the same question: does this risk suit us, are we ready to take it, and if so – at what cost?