Headlines focused on the EU’s six-month extension of high-risk AI system enforcement to December 2027, noting it as a victory for the big tech companies that had been vocal in their opposition. But another change, arguably more significant, barely featured in the coverage. Nik Kairinos, CEO and co-founder of RAIDS AI, examines the shift from national authority classification to self-assessment, which means legal accountability for compliance now falls directly to organizations: there is no one else to blame if they are found to violate the act.
In the fall, the EU announced changes to its AI Act. Headlines focused on alterations to the timeframe, noting that the six-month extension of high-risk system enforcement to December 2027 was a victory for the big tech companies, which had been vocal in their opposition.
However, there was another change, arguably more significant, that the stories reporting on the delay barely referenced. The shift from national authority classification to self-assessment is a critical change for businesses, and with so little attention paid to it, it risks passing organizations by.
From national authority classification to self-assessment: what does it mean?
These changes mean that legal accountability for compliance with the act now falls directly to organizations. Rather than an outside body deciding who is and who isn’t compliant, the onus is now on organizations themselves to self-certify that their high-risk AI systems comply. Put simply, there is no one else to blame if they are found to violate the act.
For this reason, many organizations are seeking third-party validation, and in many cases insurance companies, investors and enterprise customers are demanding it anyway. According to the IAPP’s 2025 survey, 77% of organizations are currently working on AI governance, a figure that rises to nearly 90% among those already using AI.
Article 17, prEN 18286, ISO 42001 – how does it all tie together?
Article 17 of the act specifically mandates a quality management system (QMS) for high-risk AI providers. The QMS has 12 core aspects, including a regulatory compliance strategy, testing and validation, technical specifications, post-market monitoring, incident reporting and record keeping. To address Article 17’s requirements, a dedicated European standard has since been developed: prEN 18286. Under the presumption of conformity, organizations implementing prEN 18286 can assume they meet their Article 17 obligations.
In short, prEN 18286 compliance becomes legally required for high-risk AI systems marketed in Europe, and it’s this that firms need to focus on.
ISO 42001 is the existing international standard for AI management systems, published in December 2023. While it’s voluntary, organizations with existing ISO 42001 certification have a significant head start, as it provides the operational foundation for prEN 18286.
What should organizations be doing now?
It’s important that organizations don’t treat the six-month delay to high-risk AI system enforcement as an opportunity to kick the can down the road. Instead, they should view it as a strategic adoption window: additional time to prepare.
GDPR, which came into force in 2018, showed that late adopters risk a last-minute, deadline-driven scramble to comply. Surveys at the time found that awareness and understanding remained low in the months leading up to the deadline. Organizations need to learn from that experience and use all the time available to them to prepare for the EU AI Act.
Immediate steps that organizations need to take are:
- Understand their AI model risks. The scope of the act is wide-reaching: Any AI model used in the EU, regardless of where it originates, is covered. So, if an organization is an AI provider that has customers or partners in the EU, or is a user of AI and has colleagues, partners, teams or stakeholders in the EU, then it needs to comply.
- Know whether they have existing ISO 42001 certification or are working towards it.
- Understand the requirements of prEN 18286 and take steps to ensure they’re met.
- Determine which conformity assessment procedure applies to their AI systems (internal control or third-party assessment).
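For teams that want to track these steps systematically across their AI systems, the checklist above could be sketched as a simple data structure. This is a minimal, illustrative sketch only: the class, field and function names (`AIActReadiness`, `outstanding_steps` and so on) are hypothetical, and it is in no way a substitute for legal advice or a real compliance tool.

```python
# Hypothetical sketch: tracking the four immediate readiness steps
# for a single AI system. Illustrative only, not a compliance tool.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIActReadiness:
    """Readiness checklist for one AI system under the EU AI Act."""
    system_name: str
    risks_mapped: bool = False              # AI model risks and EU exposure understood
    iso_42001_status: str = "none"          # "none", "in_progress" or "certified"
    pren_18286_gaps_reviewed: bool = False  # requirements of prEN 18286 assessed
    assessment_route: Optional[str] = None  # "internal_control" or "third_party"

    def outstanding_steps(self) -> list[str]:
        """Return the readiness steps still to be completed."""
        steps = []
        if not self.risks_mapped:
            steps.append("Map AI model risks, including EU exposure")
        if self.iso_42001_status == "none":
            steps.append("Assess ISO 42001 certification status")
        if not self.pren_18286_gaps_reviewed:
            steps.append("Review gaps against prEN 18286 requirements")
        if self.assessment_route is None:
            steps.append("Determine the applicable conformity assessment procedure")
        return steps


readiness = AIActReadiness(system_name="example-high-risk-system", risks_mapped=True)
print(len(readiness.outstanding_steps()))  # prints 3
```

Even a rough inventory like this makes it visible which systems still have open items before the December 2027 deadline.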
The world will be watching implementation of the EU AI Act closely. It’s the first attempt to set a global standard for AI regulation that ensures AI is safe and trustworthy, and several other countries have legislation in development. Global organizations that can be confident they’re compliant with the EU AI Act put themselves in the best possible position as other regulations inevitably come into play in the coming months and years.
We also know from GDPR that regulators are not afraid to take on big names, with the likes of Meta, Amazon, TikTok and Uber all receiving fines. Organizations of all sizes need to be sure they’re prepared so they don’t risk the financial and reputational damage that comes with being sanctioned.


Nik Kairinos is the CEO and co-founder of RAIDS AI, an AI safety monitoring platform. With over 40 years of experience in AI and deep learning, he has dedicated his career to turning advanced research into practical, trustworthy solutions that empower people to use AI safely and effectively. 