The mere dissemination of false information is not automatically punishable in the Czech Republic. The Czech legal system does not recognise “misinformation” as a separate criminal offence. A problem arises only when a particular statement causes panic, incites hatred or affects the rights of a particular person or company. Prosecution may be threatened if the statement spreads a scare message, if the content damages the reputation of a particular person or company, if it incites hatred or discrimination, or if the statement has a large reach and real impact. Everything depends on the form, context, intent and actual impact.
Not sure if you’re already “over the edge” of the law? Or is someone taking action against you? Send us a specific post, article or video and a brief description of the situation. We’ll legally assess whether it’s just a protected opinion or may already be a misdemeanor, civil defamation, or a felony, and suggest the quickest and most effective course of action.
In common parlance, the terms disinformation and propaganda are often conflated, but for the purposes of law (and future litigation) it is useful to at least make a working distinction between them. Disinformation usually refers to a message that is false or misleading and is disseminated with the aim of influencing someone – perhaps to change their mind, create fear, disparage an institution or divide society.
Propaganda is broader: It is not just factual untruths, but systematic communication designed to shape attitudes, often working with emotions, selective use of information, oversimplification and repetition. Simply put, propaganda can also use truthful information, but it presents it in a way that serves a particular purpose.
And where do “alternative” sites fit in? That is typically a marketing or community label, not a legal category. Some “alternative” sites publish opinion, commentary or investigative reporting – and there is nothing illegal about that. But others may disseminate content that infringes on the rights of specific individuals (defamation), causes panic (alarmist messages) or incites hatred. The law usually does not address what kind of website it is, but what specifically was posted, with what reach, in what context and with what impact.
Imagine someone sharing a post: “The government is secretly going to close the banks, everyone take your money out.” Even if it is framed as mere opinion, in certain circumstances it can act as a panic-inducing call to action. And that is exactly the moment when a discussion can become a legal issue – not because of the label of misinformation, but because of the consequences.
Many people expect a simple definition. But Czech law doesn’t usually work that way. Public debate can be strident, inaccurate, emotive and sometimes objectively meaningless, and yet remain protected by freedom of speech. The fundamental point is that the state in a democratic society is not supposed to be a “ministry of truth”, deciding across the board which information is allowed and which is not.
This does not mean, of course, that everything is allowed. But it does mean that the law typically punishes only specific illegal acts (e.g. scaremongering, incitement to hatred, defamation), not the mere existence of a false claim.
We can see this in court practice. In the media-monitored case of activist Ladislav Vrabel, the Constitutional Court addressed the boundary between punishing the dissemination of alarmist news and protecting freedom of speech. The case concerned allegations of a planned Czech nuclear attack on Russia. The Constitutional Court overturned the previous conviction, stressing that democracy must also withstand the spreading of nonsense and that sacrificing freedom of speech would be dangerous. At the same time, in other cases, courts have upheld sentences for alarmist reporting – typically when a socially dangerous impact or a particular form of dissemination is proven.
Are you solving a similar problem?
Not sure if you are already “over the edge” of the law or if someone is committing an illegal act against you?
Send us information about your case and we will review it for you. A legal assessment of a specific text is often the fastest way to a clear-eyed decision.
More information
- When you order, you know what you will get and how much it will cost.
- We handle everything online or in person at one of our 6 offices.
- We handle 8 out of 10 requests within 2 working days.
- We have specialists for every field of law.
In practice, “misinformation” content is most often broken down into three legal levels: misdemeanour, civil and criminal. The greater the interference with protected values (security, public order, human dignity, reputation, minority rights), the greater the chance that the matter will not stop at moral criticism.
In criminal law, the dissemination of alarmist messages often comes up in debates about disinformation. The issue is not that someone has written something stupid, but the situation where a false message is spread that is capable of causing serious alarm (or provoking a harmful reaction), and culpability is established. A specific example from recent years is the case of Jana Peterková. The court finally imposed a two-year suspended sentence on the former journalist for spreading an alarmist message during the covid-19 pandemic.
This type of case is useful in that it shows the “hard” side of the rules: If alarmist information is spread in a time of crisis and society is sensitive, there can be criminal consequences.
In addition, there is a civil side: Typically when a misinformation claim hits a specific person or company. Defamation, false accusations, false quotes, manipulated “evidence” – all of these can give rise to liability for interference with personal rights or reputation, including demands for an apology, removal of content and compensation for non-pecuniary damage.
A short checklist for assessing whether a statement may be legally risky:
- Is the communication just an opinion/comment or is it masquerading as fact?
- Does it have reach (large group, channel, website) or is it a private message?
- Can it cause panic or incite hatred?
- Does it infringe on the rights of a specific person (name, photo, company)?
- Is it a crisis situation (epidemic, armed conflict) where the impacts are greater?
Searches like “list of disinformation sites” are extremely common – people want a “simple map” of who to trust. But it’s legally tricky. In the Czech Republic, there is generally no one binding state “blacklist” that automatically determines what is a disinformation site and what is not. More often you will come across lists or databases created by the media, the non-profit sector or analytical projects. These can be useful as a guide, but are not themselves a judgment call.
What are the implications for practice? Two things that at first glance seem contradictory, but in fact go together:
- Anyone has the right to publicly criticize the content of a website, to point out manipulations or the repeated dissemination of falsehoods.
- But at the same time, it can be problematic to label a particular entity as “misinformation” in a way that affects its reputation, business relationships, or credibility without careful justification – especially when such a label is presented as irrefutable fact, without sources, and without the opportunity to respond.
For example, a typical dispute might look like this: a company pays to advertise on a website that is labeled “alternative” on the Internet. A competitor then publicly writes: “They cooperate with disinformation sites and promote propaganda.” If the company loses clients or its reputation suffers as a result, it can defend itself – not by debating in the abstract what counts as disinformation, but by challenging the specific false claims and their real impact.
Conversely, from the perspective of website operators, it is important to know that “disinformation website” is not a legal status that automatically prohibits anything. The risks depend on the specific texts, statements and their consequences. And if a site finds itself on a public list, it often pays to first assess whether it is a value judgment (criticism) or a claim that already infringes rights.
Tip: The Russian Federation’s attack on Ukraine has brought with it a number of reactions here, including in the online space. One of the most significant steps was a “plea” from the government and the Military Intelligence to stop disinformation websites operating in the Czech internet space. However, this step triggered a debate on censorship of information that continues to this day.
Who is responsible: the author, the sharer or the operator of the website and platform?
One of the most common questions is: “If I just share, am I okay?” Unfortunately, there is no universal answer. Responsibility can vary depending on whether you are (a) the original author, (b) someone who takes the content and distributes it, (c) the site operator who publishes it, or (d) the group administrator who actively promotes, pins, and purposefully distributes it further.
In the criminal context, intent and social danger typically play a role. In the civil arena (protection of personality, reputation), the key is often who published the statement, in what form, and to how large an audience. If someone takes a rumour and adds their own comment such as “here is proof that they are stealing”, they may become an independent author of the intervention. With sharing without commentary, the context is often addressed: are you sharing for amusement, for warning, or for confirmation? In practice, the same link can have a completely different legal meaning depending on how you frame it.
For website operators, there is the added responsibility of the editorial process and responding to takedown notices. If a site repeatedly disseminates false information and ignores warnings, the risk of litigation increases. And for large social platforms, their own rules and procedures come into play (but this is more a question of moderation than Czech criminal law).
In addition to the platforms’ internal rules, there is also a legal liability regime for intermediaries: typically, the platform does not have to check everything in advance, but when it becomes aware of illegal content, a quick response (for example, removal or restriction of access) is expected. In the EU, this is now reinforced by the Digital Services Act (DSA) with rules for reporting illegal content.
A tip from an attorney: When you want to minimize risk, don’t add your own “stamp of truth” to problematic content. If you’re not sure, use careful framing (“there is an allegation circulating…, it has not been verified…”) and ideally add a source that refutes the claim. And if you’re on the other side – i.e. someone is spreading untruths about you – collect evidence (screenshots, URL, date, reach) and act quickly.
Cases from practice: Why a sentence is sometimes handed down and sometimes not
To make it clear that disinformation cannot be put in a single box, it is worth comparing two publicly known cases that have appeared in the media specifically in connection with disinformation and alarmist news.
The first is the aforementioned case of Jana Peterková: The Municipal Court in Prague gave her a two-year suspended sentence for spreading a scare report during the covid-19 pandemic. Courts in similar cases usually assess not only the falsity itself, but also the circumstances (time of crisis), the form of dissemination and the potential impact on the public. In practice, such cases are often based on whether the message was likely to cause serious alarm, disturb public order or threaten certain protected interests.
The second case concerns Ladislav Vrabel. The Constitutional Court overturned a conviction in a case involving allegations of a planned Czech nuclear attack on Russia, and emphasised the importance of freedom of speech even in situations where blatant nonsense is being spread. Subsequently, it was reported in the media that the acquittal was confirmed as final.
What does this imply? Not that you can say anything, nor that misinformation is always punishable by jail. It is rather that the courts carefully weigh:
- Is that a statement that approaches political speech and an exaggerated assessment?
- Or is it specific alarmist information that poses a real threat to society in a particular situation?
What does a lawyer say?
Not every argument on the internet belongs in a lawyer’s office. But there are situations where “just bad information” can quickly turn into a problem that, without a legal strategy, will cost you time, money and reputation.
You typically need a lawyer when:
- someone is spreading specific untruths about you (or your business) and you can prove it,
- the content goes viral (large groups, shares, media) and the damage grows by the day,
- you are labelled as spreading misinformation or propaganda and it affects jobs, contracts or business partners,
- you run a website and you’ve received a summons or are facing criminal charges,
- the matter concerns a sensitive issue (health, epidemics, armed conflict, national security, minorities) where the boundaries of “free speech” are judged more strictly,
- you are dealing with your website being publicly associated with terms such as “disinformation sites” or appearing on internet lists, and you need to assess whether this is permissible criticism or already a reputational hit.
On the other hand, when it is a single statement without impact or specific harm, caution is often all that is needed. But even then, a quick consultation can help: sometimes a properly worded request for removal is enough, sometimes it is better to collect evidence now and prepare a course of action.
The biggest problem with misinformation and propaganda is uncertainty. People suspect something is wrong, but don’t know whether it is just a “bad opinion” or already a legally actionable matter.
What we can do for you in practice:
- We can assess a specific post/article/video: What exactly is a legal problem (and what, on the other hand, is not a problem).
- We will suggest the most effective course of action: from a “soft” removal notice to a request for an apology to a lawsuit or criminal complaint (where it makes sense).
- We’ll prepare a pre-action notice and help with evidence: for online content, it is important to have a solid evidentiary record (time, reach, version of the content).
- We will represent you in civil disputes over reputation and personality rights, or in criminal proceedings.
If you want to know quickly what you are at risk of or how to defend yourself, email us a description of the situation and send links/screenshots. The more specific the documentation, the more accurate and quicker the recommendation.
Tip: Anti-Semitism is a form of hatred towards Jews that can manifest itself in verbal attacks, threats, Holocaust denial, or even violence. How is this behaviour punished in the Czech legal system?
Summary
Disinformation is a complex topic in the Czech legal environment, because the law does not recognise the concept of disinformation as such and does not punish a “lie” simply because it is false; what is always decisive is the specific content, its form, reach, context and actual impact. Freedom of expression also protects exaggerated, emotive or nonsensical statements, but the state intervenes when a message causes panic, incites hatred or interferes with the rights of specific individuals or companies – this may amount to a misdemeanour, a civil dispute (for example, over the protection of reputation) or even a criminal offence, typically the spreading of a scare message.

The law does not ask whether a site is an “alternative” or “disinformation” site, but what has been published and with what consequences; there is no binding national list of disinformation sites, only indicative lists that must be treated with caution. Responsibility can lie with the author, the sharer and the site operator or group administrator, and it also matters whether someone adds their own “stamp of truth” to the content or ignores requests for removal.

Case law shows that a sentence will sometimes be handed down (for example, for alarmist messages in crisis situations), while at other times the courts stress that democracy must also withstand the dissemination of nonsense; the threshold is therefore always assessed case by case.
Frequently Asked Questions
Can the state ban a disinformation website?
A blanket ban on “disinformation websites” is not a common tool in Czech law. The reason is simple: freedom of expression is a fundamental right and its restriction must be exceptional, proportionate and justified by law. Therefore, the state does not usually intervene against websites as a whole, but deals with specific content – for example, individual articles, videos or posts that meet the characteristics of an offence or a criminal offence. Exceptions may arise in exceptional situations (for example, in the context of national security), but even then, interventions are judged very strictly and often end up being reviewed in court.
What is the difference between an "opinion" and a "false statement"?
The difference is crucial, but not always obvious in practice. Opinion is a value judgment – for example, “the government is not handling the situation” or “the media is lying”. Such statements are usually protected by freedom of speech, even if they are exaggerated or emotive. By contrast, a statement of specific fact (“company X is cheating customers”, “doctors are hiding the truth about vaccines”) may already be legally problematic if it is false and causes harm. Legal disputes often revolve precisely around the question of whether the reader could have understood the statement as fact or merely as opinion.
Does it matter how many people saw the misinformation?
Yes, reach plays a big role. A statement in a private conversation is judged very differently from a public post that goes out to thousands of people. In both criminal and civil litigation, the assessment is how widely the content was available and whether it could realistically have caused harm or panic. Therefore, large Facebook groups, Telegram channels or websites with high traffic tend to be riskier than closed communications between individuals.
What if I'm spreading misinformation unintentionally?
Inadvertent dissemination of false information does not automatically imply criminal liability, but neither does it imply “impunity”. In criminal law, culpability is examined, but in civil litigation (for example, for the protection of personality), liability may arise even without intent. In other words, even if you “just shared what you got”, there may be an obligation to apologise, withdraw the content or pay damages. It depends on the specific situation and impact.
Can it be a problem that I don't delete something?
Yes, especially for website operators, group administrators or site administrators. If you are alerted to problematic content and ignore it for a long time, it can backfire. In some cases, the law takes into account whether the operator had the opportunity to intervene and did so. Responding quickly (checking, temporarily hiding, adding context) often significantly reduces the legal risks.
How do I defend myself when someone spreads misinformation about me?
The key is not to act rashly. The first step should be to secure evidence – screenshots, links, date and range. Only then does it make sense to address the next steps: a takedown notice, a request for an apology or legal action. In disputes over misinformation, it is often not the person with the “stronger emotions” who wins, but the one who can show exactly what is false and what damage it has caused.
Why do cases like this so often end up in court?
Because the boundaries of free speech are not sharp lines, but rather grey areas. Each case is different and the courts weigh the specific circumstances: content, form, context, time (e.g., crisis), reach, and the author’s behavior. This is why it cannot be said that “misinformation is always criminal” or that “there is never any risk of punishment”. If reputation, business or even criminal liability is at stake, it pays to have the situation legally assessed before the dispute gets into full swing.