Fixed deadlines are replacing the regulatory ambiguity that made delay a defensible EU AI Act strategy, and VinciWorks compliance manager Naomi Grossman explains why that shift matters even though trilogue negotiations have not concluded. Until a final text is adopted, the original Aug. 2, 2026, deadline remains legally in force.
The EU’s attempt to simplify its flagship AI regulation has entered a decisive new moment. What began as a flexibility-driven digital omnibus initiative is now evolving into something more structured and, in some respects, more demanding.
Following a plenary vote in the European Parliament — and with the Council of the EU having separately adopted its own negotiating mandate — both institutions have now established positions on amendments that reshape when and how the AI Act will apply. Trilogue negotiations to agree a final text are expected to begin shortly.
For compliance teams, this is not simplification in the conventional sense. It is more of a recalibration, and it requires attention.
Fixed deadlines replace regulatory ambiguity
One of the central issues in the debate around the omnibus has been timing. The European Commission originally proposed linking compliance obligations, particularly for high-risk AI systems, to the publication of harmonized standards. In theory, this would allow businesses to comply once clear technical guidance existed. In practice, it created uncertainty.
Both Parliament and the Council have independently converged on fixed application dates for high-risk AI systems: Dec. 2, 2027, for high-risk standalone systems and Aug. 2, 2028, for AI embedded in regulated products. However, these dates will not be legally binding until a final text is agreed in trilogue. Parliament has also proposed that obligations requiring the watermarking of AI-generated content take effect from Nov. 2, 2026, earlier than the Commission's original proposal of Feb. 2, 2027. The Council's position on this specific deadline has not been publicly confirmed, and the final date will be determined in trilogue.
Though many details are still to be finalized, this is a significant shift. Compliance is no longer contingent on when standards arrive; the clock is now ticking. For organizations, this removes the viability of wait-and-see strategies, because enforcement timelines will apply even if technical guidance is incomplete.
A political compromise
The convergence between Parliament and Council signals growing alignment ahead of trilogue negotiations and reflects sustained industry pressure to delay high-risk obligations in light of slow-moving standards development.
But it is important to note that nothing has been formally adopted yet. Until a final text is agreed, Aug. 2, 2026, the original AI Act deadline, remains legally in force; if trilogue negotiations stall or run past that date without conclusion, it applies automatically and without recourse. This creates a dual-track reality for compliance planning: politically, delays appear likely, but legally, they cannot yet be relied upon.
For risk-conscious organizations, the only defensible approach is to prepare for the earlier timeline while maintaining flexibility to adapt if the later dates are confirmed.
Deepfakes and prohibited practice: expanding the boundaries
Parliament has taken a more assertive stance on prohibited uses of AI. Notably, it has proposed banning so-called “nudifier” systems, which use AI to generate or manipulate sexually explicit or intimate images of identifiable individuals without their consent. This fills an important gap in the Commission’s original proposal and reflects increasing concern about the harms associated with generative AI.
This development also indicates that, even at this advanced stage of the legislative process, the scope of prohibited practices is still evolving. Organizations developing or deploying generative AI should therefore anticipate continued scrutiny, especially where outputs may affect individual rights or dignity.
AI literacy cannot be optional
Another area where Parliament and Council are closely aligned is in resisting any weakening of AI literacy requirements. The Commission had proposed shifting responsibility for AI literacy away from organizations and toward member states, essentially reframing it as a policy objective rather than a compliance obligation.
Lawmakers have rejected this approach. Instead, they have reaffirmed that AI literacy must remain a direct and enforceable obligation on organizations.
In practical terms, this means that employees who interact with AI systems must receive appropriate training and organizations must be able to demonstrate a clear understanding of how those systems operate, the risks they pose and how issues can be identified and addressed. AI literacy needs to be embedded across technical, operational and oversight functions, serving as a core component of the compliance infrastructure.
Sandboxes, supervision and the role of the AI Office
The omnibus package also introduces changes to the broader regulatory framework supporting AI compliance. AI regulatory sandboxes, which are intended to provide controlled environments for testing systems under regulatory supervision, are now expected to be operational by December 2027, reflecting a later timeline than initially anticipated.
At the same time, the EU’s AI Office is set to play a more prominent supervisory role, particularly in overseeing compliance for general-purpose AI models where the same provider develops both the model and the system.
These changes are meant to encourage innovation without weakening regulatory oversight. But they do not reduce compliance requirements. What they do is give organizations a clearer, more structured way to test and improve their AI systems as they work toward full compliance.
Simplification meets reality
The digital omnibus was originally presented as a way to ease the burden of overlapping digital regulations, including GDPR, the Digital Services Act and the Digital Markets Act.
To some extent, it delivers on that promise. The EU has set a target to reduce administrative burdens by at least 25% overall and by 35% for small and medium-sized enterprises. Greater clarity on timelines should also help reduce costs associated with regulatory uncertainty.
But simplification has its limits. When it comes to fundamental rights, lawmakers have made it clear that obligations will not be diluted. Instead, they are choosing firm, fixed requirements and deadlines, even if that means giving businesses less flexibility in how and when they comply.
What compliance teams should do now
The latest developments make it clear that organizations must take active steps to build compliance capacity. This begins with timeline-based planning, working backward from the key implementation dates in 2026, 2027 and 2028 to ensure readiness. At the same time, organizations need robust internal processes for identifying and classifying high-risk AI systems.
AI literacy programs should be developed and deployed across the organization, tailored to different roles and responsibilities. Alongside this, governance frameworks must be clearly defined, establishing accountability for approving AI use, monitoring risks and managing incidents.
Documentation will play a critical role here. Organizations should maintain detailed records of risk assessments, technical documentation, training activities and compliance plans. Regulators are likely to focus not only on outcomes but also on whether organizations can demonstrate that they have made genuine and reasonable efforts to comply, even in areas where guidance is still evolving.
The path forward
Rather than softening the AI Act, the omnibus process has brought greater clarity to its implementation. The European Parliament and Council of the European Union are signaling a shared commitment to a regulatory framework that is predictable, enforceable and grounded in the protection of fundamental rights.
For compliance professionals, this clarity could be valuable, but it also increases urgency. The AI Act is no longer a moving target. While some details remain subject to negotiation, the overall direction is firmly established. The real question is no longer whether the rules will change. It is whether your organization will be ready.


Naomi Grossman is a compliance manager at VinciWorks, a provider of online compliance training and risk management software. 