Why is the Czech Republic preparing an “AI law” when we already have the EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) is a directly applicable regulation – similar to the GDPR. This means that the main rules for AI (risk categorisation, obligations of providers and deployers, prohibitions of certain practices, rules for high-risk systems, transparency requirements, etc.) are written directly in the European regulation and apply across the Union. However, some of its provisions are phased in gradually over time.
Even a “directly applicable” regulation, however, leaves certain practical matters to each member state – typically:
- who is the competent authority (and who handles what types of cases),
- exactly how supervision and proceedings are carried out (procedural framework, time limits, coordination of authorities),
- how national innovation support tools work (e.g. regulatory sandbox),
- how sanctions are enforced and who collects them.
This is exactly what the Czech draft law on artificial intelligence aims to address. According to expert summaries, it is meant to be a regulation that complements the AI Act, in particular on supervision, authorisation/testing processes and sanction mechanisms.
At the same time, the Ministry of Industry and Trade (MIT) has signalled that it wants to take a “minimalist” path: not to rewrite the obligations of the AI Act into Czech law, but to add only what is needed for it to operate in the Czech Republic, without hampering innovation with unnecessary administration.
Who will be the “AI police” in the Czech Republic?
The most important practical question for companies and authorities is: Who will control me and where do I turn when I solve a problem? The proposal divides competences among several institutions to build on their existing expertise (finance, telecommunications, personal data, etc.).
The expert summaries and the text of the proposal show the following basic model:
- The Czech Telecommunications Office (CTU) is to be both the “primary” supervisory authority and a single point of contact.
- The Czech National Bank (CNB) supervises entities already subject to its supervision (typically the financial sector) if they use AI in their regulated activities. This is consistent with the logic: supervision remains where it already exists.
- The Office for Personal Data Protection (ÚOOÚ) has a role to play, particularly where high-risk AI meets the protection of personal data and fundamental rights.
- The Czech Office for Standards, Metrology and Testing (ÚNMZ) is to act as a notifying authority for the area of notified bodies and the control of their obligations (i.e., among other things, the follow-up to conformity assessment).
- The Public Defender of Rights is also to be part of the system – the proposal envisages a role for that office in the protection of fundamental rights.
Practical implications for companies: when developing or deploying AI, it is not enough to know only the AI Act. You will need to know which authority has jurisdiction over your case (and when a case may “spill over” between authorities). The proposal even foresees an exchange of information between the CTU and the Office for Personal Data Protection when inspections or proceedings are initiated.
High-risk AI: permissions, real-world testing and tight deadlines
In practice, “high-risk” systems – i.e. AI that can significantly interfere with people’s lives (typically in the areas of employment, education, healthcare, critical infrastructure, public services, etc.) – are the most addressed. The AI Act imposes the toughest obligations on them.
The Czech Adaptation Act does not go into detail on the technical requirements themselves (these are in the AI Act), but addresses the procedural framework of two key situations:
A) Operation of high-risk AI even without a conformity assessment (exceptional regime)
The proposal contains a procedure for authorising the placing on the market or operation of a high-risk AI system without a conformity assessment, providing that only the applicant is a party to the procedure and setting a decision period of up to 120 days.
This is important: it is not a “free pass” without rules, but a formal process in which the Authority examines whether the conditions are met and can amend or revoke the authorisation if the conditions are not subsequently met.
B) Testing high-risk AI in real-world conditions
The second key element is the approval of testing high-risk AI in real-world conditions (so-called real-world testing). Again, the proposal provides that only the applicant is a party to the procedure and sets a time limit of up to 90 days for complex cases.
In practice, this is essential for companies that want to test AI on “live data” or with real users, but also need legal certainty that they are doing the right thing and that any risk is managed.
What to take away from this:
- if you fall under the “high-risk” regime, it is not enough to have the AI Act documentation ready – you will also be dealing with an administrative process and communication with the authority,
- the fixed 90/120-day timeframes in the Act affect project planning – allow for them in your schedule.
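For project planning, the statutory maximums above can be turned into concrete schedule dates. A minimal sketch, assuming the limits run in calendar days from the filing date (the draft's exact counting rules are not detailed in this summary, so treat the start and day-counting convention as assumptions):

```python
from datetime import date, timedelta

# Statutory maximums taken from the draft as summarised above:
# up to 90 days for real-world testing approval in complex cases,
# up to 120 days for an operating permit without conformity assessment.
# The dictionary keys are our own labels, not terms from the law.
DEADLINES = {
    "real_world_testing_approval": 90,
    "operation_permit_no_conformity": 120,
}

def latest_decision_date(filed_on: date, proceeding: str) -> date:
    """Latest date by which the authority should decide, assuming
    calendar days counted from the filing date."""
    return filed_on + timedelta(days=DEADLINES[proceeding])

# Hypothetical example: an application filed on 1 March 2026
filed = date(2026, 3, 1)
print(latest_decision_date(filed, "operation_permit_no_conformity"))  # prints 2026-06-29
```

Planning backwards from a target launch date with these windows (plus buffer for requests to complete the file) is usually the practical takeaway.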
Regulatory sandbox: what it is and why you should care
A regulatory sandbox is the concept of a “controlled environment” where an innovation can be tested with the supervision and methodological support of the state. For AI, this is particularly useful when the product is on the edge of a high-risk category, working with sensitive scenarios (e.g. biometrics, HR, scoring) or you need to safely verify user and compliance impacts.
The Czech proposal explicitly foresees a sandbox. According to the text of the proposal, the role of the founder is to be performed by the ÚNMZ and the operator is to be the Czech Agency for Standardization. The expert summaries add that it is to be a tool designed especially for small and medium-sized enterprises and that it will allow testing in a real environment with real clients.
One often-overlooked legal detail is very practical: there is to be no legal entitlement to participate in the sandbox. On the contrary, participation is to arise only on the basis of a contract, and the mere submission of an application/offer does not mean automatic acceptance; the conditions and criteria will be published by the operator.
For companies, this means that the sandbox can be a strategic “bridge” between development and the market, but selection criteria and contractual settings (responsibilities, data, confidentiality, testing regime) have to be taken into account.
Penalties, warnings and offences: what is at stake if you break the rules
The AI Act contains high fines and a generally strict sanction framework. The Czech draft law builds on this by adding procedural and institutional aspects of enforcement: who hears offences, who collects the fine, how the statute of limitations runs, etc.
A useful “softer” intermediate step follows from the technical summary: if someone violates an obligation in a less serious way, the supervisory authority can first issue a warning and invite them to remedy the breach, with a deadline of no less than 15 days from receipt of the warning.
This is important for compliance in practice: it gives room to correct the error (e.g. complete documentation, adjust internal process, improve user information) before going full “sanction” mode.
The proposal itself then also addresses the technical details that make the difference in real life:
- a 5-year limitation period (with an absolute limit of 8 years even where the period is interrupted),
- conversion of fines into CZK at the CNB exchange rate, rounded up to whole crowns,
- and the coordination of data for reporting on fines.
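The conversion rule is simple arithmetic, but the "round up" detail matters for the final amount. A minimal sketch, assuming the rate is whatever the CNB publishes for the relevant day (the figures below are made up; the draft's exact reference date for the rate is not stated in this summary):

```python
from decimal import Decimal, ROUND_CEILING

def fine_in_czk(fine_eur: str, cnb_rate_czk_per_eur: str) -> int:
    """Convert a fine denominated in EUR into CZK at the given CNB rate,
    rounding UP to whole crowns as the draft envisages.
    Decimal is used to avoid binary floating-point artefacts with money."""
    czk = Decimal(fine_eur) * Decimal(cnb_rate_czk_per_eur)
    return int(czk.to_integral_value(rounding=ROUND_CEILING))

# Hypothetical example: a EUR 100 fine at an assumed rate of 24.731 CZK/EUR
# 100 * 24.731 = 2473.1 CZK, rounded up to 2474 CZK
print(fine_in_czk("100", "24.731"))  # prints 2474
```

The design choice worth noting: rounding always goes up, so the CZK amount is never lower than the EUR-denominated fine at the applicable rate.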
Simply: the Czech law is supposed to be a “guide to the application” of the penalty part of the AI Act on Czech territory – who, how and in what procedure.
What this means for companies and employers
If you are a company, the AI Act may affect you in several roles: you may be a provider (you develop or market AI), a deployer/operator (you use AI in your company), or an importer/distributor. While the Czech draft law does not add extra “new technical obligations” for you (these are in the AI Act), it fundamentally changes two things:
- You will have specific Czech authorities that you will realistically encounter (inspection, inquiries, proceedings, testing/operation permits).
- You get the possibility (and sometimes necessity) to deal with things “procedurally”: application, testing plan, deadlines, completion of documents, response to calls, communication between authorities.
Examples:
A company deploys a tool that sorts CVs and recommends candidates. If it falls under the high-risk AI regime, “it works” is not enough. You’ll be dealing with internal documentation, risk management, transparency towards candidates – and, in practice, with who your supervisory authority is if a complaint comes in (in some scenarios, the Office for Personal Data Protection may also play a role).
A bank or fintech uses AI to assess creditworthiness. Here it is realistic that supervision will fall under the CNB, as the entity is already subject to its supervision. The mistake we see most often: the product team addresses only the “model” but does not legally settle the organisation’s roles (who is the provider, who is the deployer) and who is responsible for compliance.
Checklist: what to do now
- Take an internal inventory of AI: where AI is being used, for what, with what data, who is the vendor.
- Identify roles: are we a provider/deployer/importer? (can be a combination).
- For riskier use cases, prepare the groundwork for possible test management (test plan, impacts, control mechanisms).
- Establish an “evidence-ready” process: be able to respond quickly to a call from the authority and document what you are doing and why.
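The first two checklist items can be captured in a simple internal register. An illustrative sketch – the field names and role values are our own choices for the inventory, not terms defined by the draft law:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of an internal AI inventory, mirroring the checklist above."""
    name: str
    purpose: str                 # what the system is used for
    data_categories: list[str]   # e.g. CVs, transaction data, personal data
    vendor: str
    our_role: str                # "provider", "deployer", "importer", "distributor"
    high_risk_candidate: bool = False
    evidence: list[str] = field(default_factory=list)  # docs you could show an authority

inventory = [
    AISystemRecord(
        name="CV screening tool",
        purpose="shortlisting job candidates",
        data_categories=["CVs", "personal data"],
        vendor="ExampleVendor s.r.o.",  # hypothetical vendor
        our_role="deployer",
        high_risk_candidate=True,      # employment use cases are typically high-risk
        evidence=["vendor documentation", "internal usage policy"],
    ),
]

# Systems that need priority attention under the high-risk regime
flagged = [record.name for record in inventory if record.high_risk_candidate]
```

Even a spreadsheet with these columns works; the point is to be able to answer “where, what for, with what data, and in which role” quickly when an authority asks.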
MIT also publicly anticipates that the bill could be effective during 2026 (depending on the legislative process).
Summary
The Czech draft law on artificial intelligence (the “Czech AI Bill”) is an adaptation law to the European AI Act. It will mainly set practical rules for supervision, proceedings and enforcement in the Czech Republic, not new technical requirements for AI itself. It designates the main supervisory authorities (in particular the Czech Telecommunications Office as a single point of contact, the CNB for the financial sector and the Office for Personal Data Protection); regulates the processes for authorising the operation of high-risk AI systems and their testing in real-world conditions, with fixed deadlines (typically 90 to 120 days); introduces a regulatory sandbox as a controlled environment for safe testing of innovations; and sets rules for offences and penalties, including the possibility of calling for remedies first and imposing fines only later. Overall, it should give companies and public institutions a clearer picture of who they will deal with, and how, when developing, deploying or using AI in practice. The law is expected to take effect during 2026, and its main objective is to ensure the enforceability of the AI Act in the Czech environment without unnecessary administrative burden.
Frequently Asked Questions
Is the Czech bill an "implementation" of the AI Act?
In common parlance, yes, but legally it is more accurate to say “adaptation” law. The AI Act, as an EU regulation, applies directly; the Czech law is mainly intended to supplement the institutional and procedural aspects (supervision, management, sanctions, sandbox).
Who will be the main supervisory authority?
The proposal envisages that the primary role and the single point of contact will belong to the Czech Telecommunications Office, with additional roles for the CNB and the Office for Personal Data Protection.
When might a permit be needed for high-risk AI?
The proposal addresses processes for exceptional situations where high-risk AI is allowed to operate without a conformity assessment, as well as the approval of real-world testing.
How long can these proceedings last?
Both the expert summaries and the proposal work with indicative timeframes: typically up to 90 days for testing in particularly complex cases, up to 120 days for operating permits.
What is a regulatory sandbox and who will operate it?
The sandbox is intended to be a controlled testing environment. The proposal foresees that the founder will be the ÚNMZ and the operator the Czech Agency for Standardization; participation is to be contractual and free of charge.
Do I face an immediate fine if I make a mistake?
The proposal also foresees a “softer” procedure: for less serious infringements, the supervisory authority may first give a warning and a reasonable period of time to remedy (at least 15 days). Then the infringement/penalty mechanisms come into play.
When does it start to apply?
In the technical summaries and communication around the proposal, it is expected to take effect during 2026, but the exact date will depend on the legislative process.