Why this is trending now
The EU’s AI Act is real and imminent. Despite industry lobbying for a multi-year pause, the Commission has rejected broad delays and is pressing ahead with phased obligations. Think of it as the GDPR moment for AI — with specific rules for high-risk systems and general-purpose AI (GPAI) models.
First, what exactly is the EU AI Act?
It’s a comprehensive, risk-based regulation. Systems are classified by risk tier: unacceptable (banned), high-risk (strict obligations), limited risk (lighter transparency duties), and minimal risk (few obligations). Providers, deployers, importers, and distributors each get duties. The Act also layers in GPAI requirements (for foundation models and the platforms that expose them). Timelines are staggered so organizations can adapt.
The timeline you can’t ignore
- General provisions began phasing in during 2025; watchdogs and coordination bodies (the Commission’s AI Office, the European AI Board, national authorities) are being stood up.
- High-risk systems: core requirements apply from August 2026 for systems placed on the market after that date (biometrics, critical infrastructure, etc.).
- GPAI / foundation models: models placed on the market before Aug 2, 2025 must comply by Aug 2, 2027; models placed after that date must comply sooner, from the moment they reach the market. Some in Brussels floated a “grace period” via a voluntary Code of Practice, but the Commission’s public stance remains: no formal pause to the Act’s clock.
TL;DR: If you build or deploy AI in or into the EU, you’ll need a compliance plan in 2025, operational changes by 2026, and GPAI adjustments by 2027.
Who is “in scope”?
- Providers (you build/sell AI): You face the heaviest lift — technical documentation, risk management, data governance, cybersecurity, post-market monitoring, and, for GPAI, model cards / documentation and copyright safeguards.
- Deployers (you use AI): Expect records of use, human oversight, and impact assessments for high-risk contexts.
- Importers/distributors: Duty to check CE marking, documentation, and halt distribution of non-compliant systems.
High-risk vs GPAI: what’s the difference?
- High-risk is about application context (e.g., hiring, education, health, critical infrastructure, law enforcement).
- GPAI is about model class (foundation models with broad potential uses).
A GPAI model embedded in a high-risk application triggers both layers: model transparency + application-level controls.
What does “compliance” look like (practically)?
- Data governance: provenance, consent basis (where applicable), bias testing strategies, and data retention rationale.
- Model documentation: capabilities, limits, hazards, known failure modes, and evaluation metrics (a machine-readable sketch follows this list).
- Human oversight: clear “when to intervene” rules; audit trails; escalation paths.
- Post-market monitoring: field performance logs, incident reporting, corrective action processes.
- Security: threat models, access control, model + dataset integrity checks.
- For GPAI: training data summaries, copyright safeguards, model cards, API usage policies (downstream guardrails).
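To make the documentation bullet concrete, here is a minimal sketch of a machine-readable model card, assuming a simple in-house schema kept in version control. The field names and example values are illustrative, not wording mandated by the Act:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    """Minimal, illustrative model documentation record.

    Field names are an in-house convention, not wording from the Act.
    """
    model_name: str
    version: str
    intended_uses: list[str]
    out_of_scope_uses: list[str]      # documented limits
    known_failure_modes: list[str]    # hazards observed in testing
    eval_metrics: dict[str, float]    # e.g. accuracy, per-cohort bias gaps
    training_data_summary: str        # provenance / copyright notes (GPAI)
    human_oversight_notes: str        # when operators should intervene

card = ModelCard(
    model_name="resume-screener",
    version="2.3.1",
    intended_uses=["rank applications for recruiter review"],
    out_of_scope_uses=["automated rejection without human review"],
    known_failure_modes=["over-weights keyword density"],
    eval_metrics={"auc": 0.87, "selection_rate_gap": 0.04},
    training_data_summary="2019-2024 anonymized applications; sources logged per batch",
    human_oversight_notes="recruiter must confirm any auto-flagged rejection",
)

# Serialize next to the system's technical file so audits can diff versions.
print(json.dumps(asdict(card), indent=2))
```

Keeping this as code (or YAML) rather than a slide deck means audits can diff versions and CI can reject a release with empty fields.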
The “grace period” debate (and why you shouldn’t bank on it)
Several reports suggested Brussels could offer leniency to GPAI providers who sign a Code of Practice while the ecosystem catches up. But official statements since have emphasized no blanket delay — the Act’s timelines hold. Treat any voluntary code as bonus guidance, not a replacement clock.
A 30/60/90-day action plan
Days 0–30: Map & triage
- Inventory every AI system (built in-house, bought, or adopted as shadow IT). Tag each by risk tier and market exposure (EU vs non-EU); a minimal inventory sketch follows this list.
- Identify high-risk candidates (hiring, credit scoring, medical decision support). Flag GPAI dependencies (commercial APIs, open weights, in-house foundation models).
- Appoint an AI compliance owner; create a single source of truth (Confluence/Notion) for all artifacts.
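Here is the promised inventory sketch, assuming you track systems as records rather than spreadsheet rows. The tier names mirror the Act’s categories; the system names, owners, and dependencies are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned practices
    HIGH = "high"                   # strict obligations
    LIMITED = "limited"             # transparency duties
    MINIMAL = "minimal"             # few obligations

@dataclass
class AISystemRecord:
    name: str
    owner: str                      # accountable team or person
    tier: RiskTier
    eu_exposure: bool               # placed on / used in the EU market?
    gpai_dependencies: list[str]    # commercial APIs, open weights, in-house models

inventory = [
    AISystemRecord("resume-screener", "hr-platform", RiskTier.HIGH, True,
                   ["in-house foundation model"]),
    AISystemRecord("support-chat-summarizer", "cx-tools", RiskTier.LIMITED, True,
                   ["commercial LLM API"]),
]

# Triage: high-risk systems with EU exposure go to the top of the backlog.
urgent = [s for s in inventory if s.tier is RiskTier.HIGH and s.eu_exposure]
for s in urgent:
    print(f"PRIORITY: {s.name} (owner: {s.owner}, GPAI deps: {s.gpai_dependencies})")
```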
Days 31–60: Controls & documentation
- Draft risk management plans per system (hazards, mitigations, test evidence).
- Stand up human oversight runbooks and routing (what operators can override, when, and how it’s recorded).
- Start model cards and data sheets; define a bias & performance evaluation cadence (see the evaluation sketch after this list).
- For GPAI, align with copyright policies (training data disclosures; opt-out handling; downstream acceptable-use).
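And the evaluation sketch: one recurring bias check, assuming binary selection decisions grouped by a protected attribute and a demographic-parity-style gap. The threshold and group labels are illustrative, and a real cadence would cover more metrics:

```python
# Recurring bias check: selection-rate gap across cohorts (demographic-parity
# difference). Threshold and group labels are illustrative, not Act-mandated.

def selection_rate(outcomes: list[int]) -> float:
    """Share of positive decisions (1 = selected/approved)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Example run against last month's decisions, grouped by a protected attribute.
decisions = {
    "group_a": [1, 0, 1, 1, 0, 1],
    "group_b": [0, 0, 1, 0, 0, 1],
}
gap = parity_gap(decisions)
ALERT_THRESHOLD = 0.10  # escalate to the compliance owner above this gap

print(f"selection-rate gap: {gap:.2f}")
if gap > ALERT_THRESHOLD:
    print("ALERT: gap exceeds threshold; log incident and trigger review")
```

Running this on a schedule, and keeping the outputs, is exactly the “test evidence” the risk management plan should point to.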
Days 61–90: Operationalize
- Pilot post-market monitoring: telemetry, incident flags, “kill switch” checks (a minimal sketch follows this list).
- Close supplier gaps: update DPAs, SLAs, and flow-down clauses with AI vendors.
- Run a tabletop audit (simulate a regulator question set). Fix what breaks.
- Publish an AI Transparency page (non-marketing) summarizing systems, uses, oversight, and how users can appeal.
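Here is the monitoring sketch, assuming a simple in-process logger. The confidence threshold, flag names, and module-level kill switch stand in for whatever your telemetry stack and feature-flag service actually provide:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitoring")

# Placeholder kill switch: in production this would be a feature flag or a
# config service, not a module-level boolean.
SERVING_ENABLED = True

def record_prediction(system: str, confidence: float, overridden: bool) -> None:
    """Log field performance; flag incidents for the corrective-action queue."""
    log.info("pred system=%s ts=%s conf=%.2f overridden=%s",
             system, datetime.now(timezone.utc).isoformat(), confidence, overridden)
    if confidence < 0.5 or overridden:
        log.warning("INCIDENT system=%s conf=%.2f overridden=%s "
                    "-> route to corrective-action review",
                    system, confidence, overridden)

def serve(system: str) -> bool:
    """Kill-switch check: refuse to serve if the system is disabled."""
    if not SERVING_ENABLED:
        log.error("KILL SWITCH active for %s; request refused", system)
        return False
    return True

if serve("resume-screener"):
    record_prediction("resume-screener", confidence=0.42, overridden=True)
```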
Common pitfalls to avoid
- Paper compliance without live controls (regulators look for working processes).
- Ignoring third-party AI (vendors’ models are still your responsibility when you use them).
- Over-collecting data “for AI later” (privacy laws still apply).
- Shipping “assistants” without guardrails (hallucinations in high-risk contexts are compliance landmines).
What success looks like
By early 2026, your high-risk systems have traceable training data, repeatable evals, human-in-the-loop oversight, and incident response. By 2027, your GPAI use includes clear model documentation, copyright controls, and downstream usage policies aligned to the Act. This isn’t just a regulatory box-ticking exercise; it’s how you build trust with customers and avoid enforcement pain later.
Final word
Don’t wait for “perfect clarity.” The direction is locked, the dates are public, and the gap between compliant and non-compliant will soon be obvious to customers and partners. Start small, document honestly, and iterate. Your 2027 self will thank you.