The European AI Act has not even been fully implemented yet, and already the first decisions putting its rules into practice are emerging. A recent case involving the use of artificial intelligence for biometric identification shows that AI regulation will not remain a mere "paper tiger". On the contrary, public authorities are taking it seriously and are already drawing the line between where innovation ends and undue interference with fundamental rights begins.
Artificial intelligence working with biometrics (such as facial recognition) is one of the riskiest technologies under the AI Act. The reason is obvious: it makes it possible to identify specific people, track their movements and behaviour, and potentially build detailed profiles of them without their knowledge.
The AI Act therefore essentially prohibits real-time remote biometric identification in public spaces. It does, however, allow exceptions, typically for the prevention of serious crime or the search for specific individuals. These exceptions are formulated quite generally, and until now their practical application has been largely theoretical.
A recent court decision is interesting in that it explicitly relies on recital 35 of the AI Act's preamble. That recital emphasises that, when assessing the admissibility of biometric identification, the specific circumstances of the case and the impact on the fundamental rights of the persons concerned must be taken into account.
The authority which decided on the authorisation did not focus solely on whether the technology was technically functional; it also weighed the specific circumstances of its deployment.
This approach confirms that the AI Act's preamble is not just a "non-binding introduction" but an important interpretative guide.
The decision sends a clear signal to anyone considering the use of AI systems working with biometric data. It is not enough to rely on technology alone to deliver security or efficiency. Legal and ethical considerations are key.
Companies and public institutions should expect that widespread, preventative AI tracking of individuals will be highly problematic, and that any deployment of biometric AI must serve a clear, legitimate and narrowly defined purpose. In addition, a thorough fundamental rights impact assessment and proper documentation will be required, and internal rules will play a crucial role in any regulatory inspection.
One of the most important aspects of the decision is its emphasis on proportionality. In other words, even if the objective is legitimate (e.g. the protection of security), that does not mean any technological tool can be deployed without further scrutiny.
The use of AI for biometric identification may, for example, be disproportionate depending on the circumstances of its deployment.
If you are considering implementing higher-risk AI systems, it pays to proceed with caution. Analyse which AI Act risk category your system falls into, prepare for a fundamental rights and data protection impact assessment, and set up internal processes to demonstrate the legality and proportionality of the solution. Then consult with experts before deploying the technology.
Not sure if your AI solution is legal? Contact us for a quick legal analysis.
Our team of experienced attorneys will help you solve any legal issue. Within 24 hours we will evaluate your situation and suggest a step-by-step solution, including all costs. The price for this proposal is only CZK 690, which is refunded to you when you order a service from us.