
EU AI Act compliance checklist for SaaS companies (2026)

Regulatory Compliance | March 3, 2026

The EU AI Act is now in force, and if your SaaS product uses artificial intelligence in any capacity, it almost certainly applies to you. Whether you're processing customer data with an ML model, using AI-powered features to make automated decisions, or integrating a third-party LLM into your platform, you need to understand your obligations.

This checklist walks you through every key requirement, from understanding how the EU AI Act classifies your system to the documentation, transparency, and technical controls you need to have in place.

What is the EU AI Act and why does it matter for SaaS?

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It entered into force in August 2024 and applies on a rolling timeline, with the most critical provisions now active in 2026.

For SaaS companies, the Act matters because it applies to any company that places an AI system on the EU market or puts it into service within the EU, regardless of where the company itself is based. If you have EU users, you're in scope.

The Act establishes a risk-based framework. Not all AI systems are treated equally. Your obligations depend on which risk category your system falls into.

Step 1: Determine your role under the AI Act

Before anything else, you need to understand which role(s) your company plays. The Act defines several distinct actors:

  • Provider - You develop and place an AI system on the market (the most obligations)
  • Deployer - You use an AI system developed by someone else in a professional context
  • Importer - You bring an AI system developed outside the EU into the EU market
  • Distributor - You make an AI system available without substantial modification

Most SaaS companies will be providers if they've built their own AI features, and deployers if they use third-party AI APIs (like OpenAI, Anthropic, or Google Gemini) within their product.

Important: Being a deployer doesn't mean you have no obligations. Deployers still carry meaningful compliance responsibilities under the Act.

Step 2: Classify your AI system's risk level

The EU AI Act divides AI systems into four risk categories. Your classification determines everything else.

Unacceptable Risk (Prohibited)

These AI systems are banned outright. They include:

  • Social scoring systems (by public or private actors) that lead to unjustified detrimental treatment
  • Real-time remote biometric surveillance in public spaces
  • AI that exploits psychological vulnerabilities to manipulate behaviour
  • Systems that infer sensitive attributes (race, political opinions, sexual orientation) from biometric data

If your product falls into this category, it cannot legally operate in the EU.

High Risk

This is the most demanding compliance tier. High-risk AI systems include those used in:

  • Recruitment and HR (CV screening, candidate ranking, performance monitoring)
  • Credit scoring and financial services
  • Education (student assessment, admissions)
  • Law enforcement and border control
  • Critical infrastructure management

If your SaaS product operates in any of these verticals and uses AI to make or inform significant decisions, you are almost certainly high-risk. High-risk providers must meet strict requirements around technical documentation, human oversight, data governance, and registration in the EU's AI database.

Limited Risk

Limited-risk systems are subject primarily to transparency obligations. This includes chatbots, AI-generated content tools, and systems that interact with humans. Users must be informed they are interacting with AI.

Minimal Risk

The vast majority of AI systems (spam filters, recommendation engines, basic analytics) fall here. No specific legal obligations apply, though voluntary codes of conduct are encouraged.
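Some teams encode this triage as an internal helper so that every new AI feature gets screened the same way. The sketch below is illustrative only: the domain names and decision logic are our assumptions, not legal definitions, and the actual classification in the Act's annexes always governs.

```python
# Simplified, illustrative subset of the high-risk domains the Act lists
HIGH_RISK_DOMAINS = {
    "recruitment",
    "credit_scoring",
    "education",
    "law_enforcement",
    "critical_infrastructure",
}

def triage_risk_tier(domain: str,
                     interacts_with_humans: bool,
                     informs_significant_decisions: bool) -> str:
    """First-pass internal triage only; the legal text always governs."""
    if domain in HIGH_RISK_DOMAINS and informs_significant_decisions:
        return "high"
    if interacts_with_humans:
        return "limited"
    return "minimal"
```

A result of "high" should trigger legal review, not be treated as a final determination.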

Step 3: If you're high-risk - Your mandatory compliance checklist

  • Technical documentation - Maintain detailed documentation of your system's design, training data, architecture, capabilities, and limitations before placing it on the market.
  • Conformity assessment - Conduct a formal conformity assessment (self-assessment for most systems; third-party for certain use cases like biometrics).
  • Register in the EU AI database - High-risk AI systems must be registered in the publicly accessible EU database before being placed on the market.
  • Human oversight mechanisms - Your system must be designed so a human can monitor, override, or shut it down. This must be a real capability, not just a policy.
  • Accuracy and robustness - Document and test your system's performance, including against adversarial inputs. Ongoing monitoring is required post-deployment.
  • Data governance - Training, validation, and test data must meet quality standards. Data used for high-risk systems must be relevant, representative, and free from bias to the extent technically feasible.
  • Logging and traceability - Your system must log events automatically to enable post-incident analysis. Logs must be retained for at least six months (or longer where required by other regulations).
  • Transparency to deployers - Provide deployers with clear, accurate instructions about your system's intended purpose, capabilities, limitations, and required oversight measures.
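To make the logging and traceability item concrete, here is a minimal sketch of an audit log with a six-month retention floor. The schema and class names are illustrative assumptions; a production system would write to durable, tamper-evident storage rather than memory, and would log hashes or summaries rather than raw personal data.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AuditEvent:
    """One traceability record per AI system decision (illustrative schema)."""
    timestamp: float        # Unix seconds
    system_id: str
    input_summary: str      # summarise or hash inputs; avoid raw personal data
    output_summary: str
    model_version: str
    human_override: bool = False

class AuditLog:
    """Append-only log; in-memory here, durable storage in production."""
    RETENTION_SECONDS = 183 * 24 * 3600  # at least six months

    def __init__(self):
        self._events: list[AuditEvent] = []

    def record(self, event: AuditEvent) -> None:
        self._events.append(event)

    def purge_expired(self, now: float) -> int:
        """Drop events older than the retention window; returns count removed."""
        cutoff = now - self.RETENTION_SECONDS
        kept = [e for e in self._events if e.timestamp >= cutoff]
        removed = len(self._events) - len(kept)
        self._events = kept
        return removed

    def export(self) -> str:
        """Machine-readable export for a market surveillance request."""
        return json.dumps([asdict(e) for e in self._events], indent=2)
```

The key design point is that events are captured automatically at decision time, not reconstructed after an incident.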

Step 4: If you're a deployer - Your checklist

  • Use AI systems only for their intended purpose - Don't use a provider's AI system for use cases outside its documented scope.
  • Appoint a point of contact - For high-risk systems, designate someone internally responsible for compliance.
  • Conduct your own Data Protection Impact Assessment (DPIA) - If the AI system processes personal data (very likely for SaaS), a DPIA under GDPR is required alongside AI Act obligations.
  • Implement human oversight - Even if your provider has built oversight tools in, you are responsible for ensuring they are actually used within your organisation.
  • Monitor and report - If you identify a serious incident involving a high-risk AI system, you must report it to the relevant national market surveillance authority.
  • Inform employees - If you use AI for monitoring or decision-making affecting employees, you must inform them clearly and in advance.

Step 5: Transparency obligations for limited-risk systems

If your SaaS product includes a chatbot, AI content generator, deepfake tool, or any system that interacts with humans, you must:

  • Disclose AI interaction - Users must be clearly informed they are interacting with an AI system, not a human. This must happen at the start of the interaction.
  • Label AI-generated content - Content generated by AI must be marked as AI-generated in a machine-readable format (this applies especially to audio and video).
  • No fake personas - AI systems must not impersonate real people in a misleading way.
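A minimal sketch of the first two obligations. All names and the JSON envelope are illustrative assumptions; real deployments would embed provenance metadata using an established standard rather than this ad-hoc format.

```python
import json

AI_DISCLOSURE = "You are interacting with an AI system, not a human."

def start_chat_session(session_id: str) -> dict:
    """The first message of every session is the disclosure, before any reply."""
    return {"session": session_id, "role": "system-notice", "text": AI_DISCLOSURE}

def label_generated_content(text: str, model: str) -> str:
    """Wrap AI-generated text in a machine-readable JSON envelope."""
    return json.dumps({
        "ai_generated": True,   # machine-readable flag
        "generator": model,     # which model produced the content
        "content": text,
    })
```

The point of both helpers is that disclosure and labeling are wired into the code path, so they cannot be skipped on a per-feature basis.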

Step 6: Appoint an EU AI representative (if you're based outside the EU)

If your company is headquartered outside the European Union but your AI system is used by EU-based customers, you are required to appoint an EU AI Representative under the AI Act.

This is a legal entity or individual established in the EU who acts as your authorised representative for AI Act compliance purposes, analogous to the GDPR Article 27 representative requirement.

Your EU AI Representative must:

  • Be named in the technical documentation for your AI system
  • Cooperate with national competent authorities on your behalf
  • Be the point of contact for market surveillance authorities

EU Presence provides EU AI Act Representative services: get set up in minutes, with a named EU contact who can handle your regulatory obligations from day one.

Step 7: Ongoing obligations after launch

Compliance with the AI Act is not a one-time exercise. Once your system is live, you must:

  • Monitor post-market performance - Track and document how your system performs against its intended purpose. For high-risk systems, establish a post-market monitoring plan.
  • Report serious incidents - Malfunctions that cause or could cause death, serious injury, or significant property damage must be reported to authorities without undue delay.
  • Update documentation - Any substantial modification to your AI system (changes to training data, architecture, or intended use) may require a new conformity assessment.
  • Stay current on regulatory updates - The AI Act includes delegated acts and implementing regulations that will evolve. Subscribe to updates from the EU AI Office.

EU AI Act timeline: Key dates for SaaS companies

  • August 2024 - AI Act entered into force
  • February 2025 - Prohibited AI practices ban became applicable
  • August 2025 - GPAI model obligations became applicable
  • August 2026 - High-risk AI system obligations become applicable
  • August 2027 - All remaining provisions fully applicable

If your product contains high-risk AI features, August 2026 is your hard deadline for full compliance. That's only months away, and building documentation, governance processes, and technical controls takes time.

Frequently Asked Questions

Does the EU AI Act apply to my SaaS company if we're based in the US?
Yes. The AI Act applies to any provider or deployer whose AI system is used by customers located in the EU, regardless of where the provider is based. You may also need to appoint an EU AI Act Representative.

How do I know if my AI feature is "high-risk"?
High-risk systems are listed in Annexes I and III of the AI Act. The key factors are the sector (healthcare, HR, finance, education, law enforcement) and whether the AI makes or substantially influences significant decisions about individuals.

What's the penalty for non-compliance with the EU AI Act?
Fines can reach €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices. For other violations, fines up to €15 million or 3% of turnover apply.

Is a GDPR compliance programme enough to cover AI Act obligations?
No. GDPR and the AI Act have overlapping but distinct requirements. A DPIA under GDPR may be required alongside AI Act conformity assessments, but they are separate legal obligations.

What is an EU AI Representative and do I need one?
If your company is based outside the EU but your AI system is available to EU users, yes - you are required to appoint an EU-based representative. EU Presence can act as your EU AI Representative.

Getting compliant doesn't have to be complicated

The EU AI Act introduces significant obligations but with the right structure in place, compliance is entirely manageable. The key steps are: understand your risk classification, build your documentation, put human oversight in place, and appoint the right EU representative if you're operating from outside the Union.

Book a free demo with EU Presence to see how we can handle your EU AI Act representative requirements, GDPR compliance, and broader EU regulatory obligations so you can stay focused on building your product.

