
The EU AI Act is now in force, and if your SaaS product uses artificial intelligence in any capacity, it almost certainly applies to you. Whether you're processing customer data with an ML model, using AI-powered features to make automated decisions, or integrating a third-party LLM into your platform, you need to understand your obligations.
This checklist walks you through every key requirement, from understanding how the EU AI Act classifies your system to the documentation, transparency, and technical controls you need to have in place.
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It entered into force in August 2024 and applies on a rolling timeline, with the most critical provisions now active in 2026.
For SaaS companies, the Act matters because it applies to any company that places an AI system on the EU market or puts it into service within the EU, regardless of where the company itself is based. If you have EU users, you're in scope.
The Act establishes a risk-based framework. Not all AI systems are treated equally. Your obligations depend on which risk category your system falls into.
Before anything else, you need to understand which role(s) your company plays. The Act defines several distinct actors:

- Provider: develops an AI system (or has one developed) and places it on the market under its own name or trademark.
- Deployer: uses an AI system under its own authority in the course of its business.
- Importer and distributor: bring a third-country provider's AI system to the EU market or make it available there.
- Authorised representative: an EU-based person or entity mandated by a non-EU provider to handle its obligations.
Most SaaS companies will be providers if they've built their own AI features, and deployers if they use third-party AI APIs (like OpenAI, Anthropic, or Google Gemini) within their product.
Important: Being a deployer doesn't mean you have no obligations. Deployers still carry meaningful compliance responsibilities under the Act.
The EU AI Act divides AI systems into four risk categories. Your classification determines everything else.
These AI systems are banned outright. They include:

- Social scoring of individuals by or on behalf of public authorities.
- Manipulative or deceptive techniques that materially distort behaviour and cause harm.
- Exploitation of vulnerabilities related to age, disability, or social and economic situation.
- Untargeted scraping of facial images to build facial recognition databases.
- Emotion recognition in workplaces and educational institutions.
- Biometric categorisation to infer sensitive attributes such as race, political opinions, or sexual orientation.
- Real-time remote biometric identification in publicly accessible spaces for law enforcement, outside narrowly defined exceptions.
If your product falls into this category, it cannot legally operate in the EU.
This is the most demanding compliance tier. High-risk AI systems include those used in:

- Employment and worker management (CV screening, performance evaluation).
- Education and vocational training (exam scoring, admissions decisions).
- Access to essential services, including credit scoring and insurance pricing.
- Biometric identification and categorisation.
- Critical infrastructure, law enforcement, migration, and the administration of justice.
If your SaaS product operates in any of these verticals and uses AI to make or inform significant decisions, you are almost certainly high-risk. High-risk providers must meet strict requirements around technical documentation, human oversight, data governance, and registration in the EU's AI database.
Limited-risk systems are subject primarily to transparency obligations. This includes chatbots, AI-generated content tools, and systems that interact with humans. Users must be informed they are interacting with AI.
The vast majority of AI systems, such as spam filters, recommendation engines, and basic analytics, fall here. No specific legal obligations apply, though voluntary codes of conduct are encouraged.
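The tiered logic above can be expressed as a simple triage function. This is an illustrative sketch only, not legal advice: the category names and sector lists are loose simplifications of the Act's actual definitions, and a real classification requires legal review.

```python
# Illustrative only: a simplified EU AI Act risk triage, not legal advice.
# The use-case and sector names below are hypothetical simplifications.
PROHIBITED_USES = {"social_scoring", "emotion_recognition_at_work"}
HIGH_RISK_SECTORS = {"employment", "education", "credit", "law_enforcement"}

def triage(use_case: str, sector: str, interacts_with_humans: bool) -> str:
    """Return a rough EU AI Act risk tier for an AI feature."""
    if use_case in PROHIBITED_USES:
        return "prohibited"      # banned outright
    if sector in HIGH_RISK_SECTORS:
        return "high-risk"       # strict documentation and oversight duties
    if interacts_with_humans:
        return "limited-risk"    # transparency duties apply
    return "minimal-risk"        # no specific legal obligations

print(triage("resume_screening", "employment", True))  # high-risk
```

The point of the sketch is the ordering: check for prohibited practices first, then the high-risk verticals, and only then fall back to the transparency tier.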
If your SaaS product includes a chatbot, AI content generator, deepfake tool, or any system that interacts with humans, you must:

- Clearly inform users that they are interacting with an AI system.
- Label AI-generated or AI-manipulated content (including deepfakes) as such.
- Mark synthetic audio, image, video, and text content in a machine-readable format where feasible.
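In practice, the core transparency duty, telling users they are talking to an AI, can be wired into your product at the API layer. A minimal sketch, assuming a chatbot response payload; the field names (`ai_generated`, `disclosure`) are illustrative and not mandated by the Act:

```python
# Hypothetical sketch: attach an AI disclosure to every chatbot response.
# Field names are illustrative, not prescribed by the AI Act.
from dataclasses import dataclass, asdict

@dataclass
class ChatResponse:
    text: str
    ai_generated: bool = True
    disclosure: str = "You are chatting with an AI assistant."

reply = ChatResponse(text="Your order shipped yesterday.")
payload = asdict(reply)  # serialise for the API layer / frontend banner
```

Baking the disclosure into the response schema, rather than the UI alone, means every client surface (web, mobile, integrations) inherits it by default.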
If your company is headquartered outside the European Union and you place a high-risk AI system or a general-purpose AI model on the EU market, you are required to appoint an EU AI Representative under the AI Act.
This is a legal entity or individual established in the EU who acts as your authorised representative for AI Act compliance purposes, analogous to the GDPR Article 27 representative requirement.
Your EU AI Representative must:

- Verify that the required technical documentation and conformity assessment have been drawn up.
- Keep that documentation at the disposal of the competent authorities.
- Provide authorities with the information needed to demonstrate compliance.
- Cooperate with authorities on any action taken in relation to the AI system.
EU Presence provides EU AI Act Representative services: get set up in minutes, with a named EU contact who can handle your regulatory obligations from day one.
Compliance with the AI Act is not a one-time exercise. Once your system is live, you must:

- Operate a post-market monitoring system to track real-world performance.
- Report serious incidents to the relevant authorities.
- Keep logs and technical documentation up to date as the system evolves.
- Reassess your risk classification whenever you make substantial modifications.
If your product contains high-risk AI features, 2 August 2026 is your hard deadline for full compliance. That is only months away, and building documentation, governance processes, and technical controls takes time.
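The ongoing obligations above (keeping logs, monitoring for incidents) are far easier to meet if every AI-assisted decision is recorded from day one. One minimal approach is an append-only JSON-lines audit trail; the schema and file path here are illustrative, not prescribed by the Act:

```python
# Minimal sketch of an append-only audit trail for AI-assisted decisions.
# The record schema is hypothetical; adapt it to your own governance needs.
import json
import time

def log_decision(path: str, model: str, inputs: dict, output: str) -> None:
    record = {
        "ts": time.time(),   # when the decision was made
        "model": model,      # which model/version produced it
        "inputs": inputs,    # what it saw (mind GDPR data minimisation)
        "output": output,    # what it decided or recommended
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

One line per decision keeps the log greppable and easy to hand to an auditor; note that what you store in `inputs` is itself subject to GDPR, so log the minimum needed to reconstruct the decision.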
Does the EU AI Act apply to my SaaS company if we're based in the US?
Yes. The AI Act applies to any provider or deployer whose AI system is used by customers located in the EU, regardless of where the provider is based. You may also need to appoint an EU AI Act Representative, particularly if you provide high-risk AI systems or general-purpose AI models.
How do I know if my AI feature is "high-risk"?
High-risk systems are defined in Article 6 and listed in Annex III of the AI Act, with additional product-safety cases in Annex I. The key factors are the sector (healthcare, HR, finance, education, law enforcement) and whether the AI makes or substantially influences significant decisions about individuals.
What's the penalty for non-compliance with the EU AI Act?
Fines can reach €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices. For other violations, fines up to €15 million or 3% of turnover apply.
Is a GDPR compliance programme enough to cover AI Act obligations?
No. GDPR and the AI Act have overlapping but distinct requirements. A DPIA under GDPR may be required alongside AI Act conformity assessments, but they are separate legal obligations.
What is an EU AI Representative and do I need one?
If your company is based outside the EU and you provide high-risk AI systems or general-purpose AI models to EU users, yes, you are required to appoint an EU-based representative. EU Presence can act as your EU AI Representative.
The EU AI Act introduces significant obligations, but with the right structure in place, compliance is entirely manageable. The key steps are: understand your risk classification, build your documentation, put human oversight in place, and appoint the right EU representative if you're operating from outside the Union.
Book a free demo with EU Presence to see how we can handle your EU AI Act representative requirements, GDPR compliance, and broader EU regulatory obligations so you can stay focused on building your product.