The European AI Act is the first comprehensive legislation of its kind to regulate the development and use of AI on the basis of risk assessment. For example, systems that threaten fundamental rights are to be banned, while high-risk applications (e.g. in healthcare or justice) face strict requirements. The part of the regulation that has now taken effect focuses on so-called general-purpose AI models, i.e. large language models and other advanced systems such as those behind ChatGPT, Google Gemini or Anthropic's Claude. Providers of these models must now comply with requirements covering technical documentation, risk management and respect for copyright in the handling of training data.
Alongside these rules, the EU has also introduced a voluntary Code of Practice. It is intended to serve as a practical guide to meeting the obligations under the regulation, and entities that sign up to the code will be able to demonstrate compliance with the new legislation more easily.
For AI models already on the market, there is a two-year transition period running until August 2027, while new systems placed on the market after 2 August 2025 must comply with the rules immediately. The Czech Republic, as an EU member state, will have to reflect this regulation in its national AI oversight. Although the AI Act's obligations are still being phased in, the provisions now in force mark the first concrete step towards uniform European rules in this area.