The AI Act Delay: What Businesses Really Need to Know


Over the past few weeks, a piece of news has been making its way through legal teams and IT departments across Europe: the enforcement of certain key obligations under the EU AI Act (Regulation (EU) 2024/1689) may be delayed. This time, the rumour has substance.
The European Parliament has proposed postponing the entry into application of the AI Act's core provisions — namely, the rules governing high-risk AI systems — to December 2027 and August 2028.
For businesses that had started — or quietly shelved — their compliance efforts, this raises legitimate questions: Can we breathe? Should we keep going? What actually changes?
Short answer: No, you cannot relax. But yes, the timeline shifts. Here is everything you need to know.
On 18 March 2026, the IMCO and LIBE committees of the European Parliament adopted their position as part of the Digital Omnibus, a legislative simplification package put forward by the Commission in November 2025.
For high-risk AI systems listed under Annex III, the proposed new deadline is 2 December 2027 — a delay of sixteen months compared to the original schedule. For systems covered by Annex I — AI embedded in products subject to sectoral legislation such as medical devices, machinery, or radio equipment — the date is pushed back to 2 August 2028.
Here is the proposed new timeline at a glance:

- 2 February 2025: prohibitions on unacceptable-risk AI practices (already in force, unchanged)
- 2 August 2025: obligations for general-purpose AI (GPAI) models (already in force, unchanged)
- 2 November 2026: watermarking obligation under Article 50(2) (Parliament proposal)
- 2 December 2027: high-risk AI systems under Annex III (previously 2 August 2026)
- 2 August 2028: high-risk AI embedded in products under Annex I (previously 2 August 2027)
One notable exception: on the watermarking obligation under Article 50(2), the Parliament is actually stricter than the Commission, proposing a deadline of 2 November 2026 — three months earlier than Brussels had envisaged.
The Digital Omnibus does not change the substance of the AI Act — it adjusts the implementation timeline to reflect a concrete reality: the tools needed to comply simply were not ready. The harmonised standards essential to guiding businesses, being developed by the technical committee CEN-CENELEC JTC 21, will not be available before late 2026 at the earliest.
These standards are critical because they provide businesses with a presumption of conformity: a system developed in line with harmonised standards is presumed compliant with the Regulation, significantly simplifying the certification process. As of March 2026, JTC 21 had published preliminary drafts of some standards, but final versions are not expected before late 2026 or early 2027 — making it materially impossible for businesses to certify their systems before the original August 2026 deadline.
This is the point many businesses are missing. The report is not yet definitively adopted. The 18 March vote was a step, not the finish line. Trilogue negotiations between the Parliament, the Council, and the Commission are still to begin in spring 2026. Final adoption is not expected before mid-2026.
The direct implication: August 2026 remains the legally binding date as of today.
Assuming the trilogue will automatically ratify the delay without taking any action in the meantime would be a serious strategic mistake — and one with potentially significant legal exposure.
The proposed postponement affects high-risk AI systems only. The prohibitions on unacceptable-risk AI practices — in force since 2 February 2025 — and the obligations for general-purpose AI (GPAI) models — applicable since 2 August 2025 — are not affected. These rules remain fully enforceable.
In other words, practices already banned — manipulative AI, social scoring, exploitation of vulnerabilities — are prohibited now. If your organisation is deploying systems that fall into these categories, non-compliance is already a present-tense risk.
Annex III of the AI Act lists the application domains for AI systems classified as high-risk. These include systems used in the following areas: biometrics, critical infrastructure, education, employment, essential services, law enforcement, justice, and border management.
If your organisation deploys AI in human resources (recruitment, performance evaluation), in access management to public services, or in regulated sectors such as healthcare or financial services, you are very likely within the scope of Annex III.
The Digital Omnibus introduces a new category: "small mid-caps", defined as companies with fewer than 750 employees and an annual turnover below €150 million. These businesses would benefit from the same alleviations as standard SMEs, including simplified technical documentation for high-risk systems, lighter requirements for the quality management system, and penalty calculations adapted to their size.
The message from regulators and practitioners alike is consistent: this delay should not be seen as a signal to put AI compliance on hold. The requirements of the AI Act are complex and structural. Mapping your AI systems, establishing data governance frameworks, training your teams on AI literacy, and drafting technical documentation will take months.
Businesses that use this window to build solid compliance foundations will hold a decisive competitive advantage over those who wait until the final straight. AI Act compliance is not a box-ticking exercise — it is a fundamental transformation of how your organisation governs its AI.
Priority actions to start now:

- Map and inventory the AI systems your organisation develops or deploys
- Classify each system against the AI Act's risk categories, with particular attention to Annex III
- Establish data governance frameworks for the data your AI systems are trained and run on
- Train your teams on AI literacy — an obligation already applicable since 2 February 2025
- Begin drafting the technical documentation required for high-risk systems
The AI Act does not operate in isolation. Together with the GDPR, NIS2, and DORA, it forms an interconnected regulatory framework. NIS2 and DORA impose cybersecurity obligations on AI infrastructure. And the GDPR remains fully applicable to any processing of personal data carried out by or through AI systems.
Treating these regulatory streams as separate workstreams would be both inefficient and risky. An integrated compliance approach — covering AI, data protection, and cybersecurity — is not only more coherent: it is more cost-effective in the long run.
The proposed delays are not unanimously welcomed. Around forty organisations representing the European tech industry — gathered notably around the DigitalEurope association — have argued that the simplification on the table does not go far enough. The debate on where to draw the line between regulatory ambition and operational feasibility remains open.
But one thing is clear: the framework itself is not moving. Transparency, traceability, and control of AI systems will be required. The question is no longer whether your organisation will need to comply, but when and, above all, how.
Our firm supports you through every stage of your AI Act compliance journey — from system mapping and risk classification to technical documentation, data governance, and team training. Get in touch to discuss your situation: https://www.dpo-consulting.com/contact-us