Key Guidelines for AI Innovators: How to Protect AI as a Patent

AI technology can be protected by a patent only if the AI solution has a technical character – a concrete solution to a technical problem through technical means, not just an “abstract algorithm”. This excludes patents for mathematical models or algorithms as such but allows protection of AI applications when they deliver a measurable technical effect.

The AI solution must solve a specific technical problem through the use of technical means (e.g., optimization of resource allocation in a computer network, improvement of device operation, automatic image processing, cryptography).

It is not possible to patent the algorithm itself or the underlying mathematical model, but only its technical application.

It is necessary to clearly document how the AI system delivers a concrete technical result – for example, a precise description of the model architecture, the training process, and the effects on system performance.
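The documentation point above can be made concrete: keeping a structured invention-disclosure record forces the team to state the technical problem, the model architecture, the training process, and the measured technical effect before filing. A minimal sketch, where all field names and example values are hypothetical and should be adapted to your own disclosure process:

```python
from dataclasses import dataclass, field

@dataclass
class InventionDisclosure:
    """Minimal record of the technical facts a patent attorney will need.

    All fields are illustrative, not a legally prescribed format.
    """
    title: str
    technical_problem: str   # the concrete problem being solved
    model_architecture: str  # e.g. layer structure, key hyperparameters
    training_process: str    # data sources, objective, procedure
    measured_effect: str     # quantified improvement over the baseline
    baseline: str            # what the improvement is measured against
    prior_art_notes: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """True only if every narrative field has been filled in."""
        return all([self.technical_problem, self.model_architecture,
                    self.training_process, self.measured_effect,
                    self.baseline])

# Hypothetical example entry:
disclosure = InventionDisclosure(
    title="Adaptive bitrate selection via learned congestion model",
    technical_problem="Reduce rebuffering in video streaming over lossy links",
    model_architecture="2-layer GRU over per-second throughput samples",
    training_process="Supervised training on 10k logged sessions, MSE objective",
    measured_effect="31% fewer rebuffering events than the heuristic baseline",
    baseline="Threshold-based bitrate heuristic",
)
print(disclosure.is_complete())  # prints True
```

A record like this maps directly onto what examiners look for: the technical problem, the technical means, and the measurable technical effect.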

The AI innovation must be new and must not be obvious to a person skilled in the art – this implies a thorough prior art analysis.

It is necessary to clearly identify which features are innovative and why they represent a step beyond known solutions.

The application must describe in detail the overall operation and the innovation, including specific technical features.

The claims must focus on the technical aspects and avoid overly broad or abstract wording (e.g. merely “AI model”), focusing instead on concrete technical implementations and effects. Precise, consistently used terminology makes it easier to define the innovation and reduces the likelihood of an official objection.

Cooperation with patent and AI experts is recommended in order to identify potential pitfalls and crystallize the most valuable components for protection.

Other forms of protection should also be considered – trade secrets, copyright, contractual protection – especially for components that are not suitable for patenting.

The inventor must be a natural person – AI systems today cannot be recognised as inventors, which has been confirmed in all leading jurisdictions.

In the case of collaboration between several people and AI, those who provided a creative contribution should be designated as inventors.

The latest changes in patent laws for AI in 2025 include stricter rules, new interpretations of patentability, and specific regulatory initiatives in key jurisdictions.


In 2025, the PERA and PREVAIL bills were proposed in the US Congress; both would change the framework for determining patent eligibility, particularly for abstract ideas and technologies relying on AI.

In the European Union, the new Artificial Intelligence Act (“EU AI Act”), which is expected to become fully applicable in 2026, introduces a more detailed categorization of AI applications by risk level, while European patent guidelines have been further aligned with the requirements for more precise description of the technical contribution of AI systems.

China has introduced a ban in Nanjing on the use of generative AI for directly drafting patent documents – applications and evidence of research must be the result of actual research, not generated by AI.

India, through the Patent Amendment 2025, has simplified procedures for startups, tightened rules for patent challenges, and introduced a fast-track system for green and AI innovations.

All leading laws maintain the requirement that the inventor must be a natural person, so AI systems still cannot be legally recognized as inventors.

More attention is being paid to the issue of obviousness: because AI makes it easier to search for solutions, more of them count as obvious to a person skilled in the art, and the standards for recognizing an inventive step are becoming stricter.

In many countries, regulatory scrutiny is increasing over how AI is trained, especially where protected data or copyrighted works are used, which can affect the validity of AI patents if the models were trained on disputed data.

The latest changes place greater emphasis on transparency, human contribution, and a clearly demonstrated technical effect as conditions for patentability, and each jurisdiction is introducing AI-specific procedures in line with local strategies and global trends.

The development of artificial intelligence (AI) in Serbia has entered a new phase – work is underway on the first national Artificial Intelligence Act, which should improve legal certainty for the digital economy and society. However, the drafting and public consultation process is accompanied by numerous challenges, and the legal profession has significant objections that must be clearly articulated and addressed in time.

One of the main objections to the draft law concerns the vaguely defined rules on the protection of intellectual property rights over AI solutions. The draft does not provide concrete answers to whether and how AI-generated works can be the subject of copyright or patent protection. Particularly problematic is the persistent uncertainty regarding the patentability of software-based and generative AI innovations, which leaves innovators uncertain about how to secure and enforce their rights.

Experts also highlight the underdeveloped concept of liability for damage caused by AI systems, as well as a weak framework for transparency in the operation of AI solutions. Without clear guidelines on who is liable, and to what extent, for errors or abuses involving artificial intelligence, there is concern about potential abuses and a legal vacuum. In addition, issues of oversight, human control, and data access remain insufficiently specified.

The drafting process is marked by a deficit of trust between institutions and the public, and public consultations are often perceived as insufficiently transparent and inclusive.

The lack of broad education and involvement of experts from various fields indicates the need for more intensive dialogue among all stakeholders. There is also a concern that fast‑tracked adoption of the law, without exhaustive debate, could lead to legal shortcomings with long‑term consequences for the protection of the rights and interests of citizens and the economy.

The greatest challenge of this law is how to encourage innovation while at the same time providing full protection for existing rights – copyrights, industrial property rights, data, and personality rights. The use of large datasets for training AI systems must be clearly aligned with data protection and copyright regulations, which requires additional control and accountability mechanisms.

Drafting the Artificial Intelligence Act in Serbia is a necessary step, but it is crucial that this legal framework be clear, predictable, and grounded in transparent dialogue among all interested parties.
The legal and expert community must continue to insist on more precise solutions in the areas of intellectual property, liability, and rights protection, so that Serbian regulations follow European and global standards and enable the safe and efficient development of AI technologies.
