While regulation isn’t exactly getting ahead of the rapid deployment of AI, the EU’s AI Act sets a clear standard: All staff must be AI literate. Punter Southall’s Jonathan Armstrong explores what that means and why it applies even outside the EU.
AI adoption in businesses is accelerating. McKinsey data projects that more than 78% of global companies will use AI this year, with 71% deploying GenAI in at least one function. But with this rapid spread comes a challenge I see repeatedly: a lack of understanding of how these tools work. It’s this issue that regulators are now focusing on with the hot topic of AI literacy.
As AI usage rises, so do the expectations of regulators. A provision of the EU AI Act, Article 4 on AI literacy, now requires organizations to ensure all staff, including contractors and suppliers, understand the tools they are using. It came into effect in February 2025, and while formal enforcement by national authorities begins in August 2026, we’ve already seen the threat of private litigation over AI literacy obligations.
Article 4 is clear: Users and those affected by AI systems must have “sufficient AI literacy to make informed decisions.” This doesn’t just mean developers, data scientists or IT teams. I’ve worked with HR teams using AI in recruitment, marketing teams deploying GenAI and organizations where contractors are using AI systems. They all fall under this requirement.
Some organizations assume AI literacy doesn’t apply to them because they aren’t in the tech sector. But any deployer of AI is included, even if they don’t think of themselves as “using AI.” That’s why businesses need to be proactive.
The European Commission defines AI literacy as the “skills, knowledge and understanding” needed to use AI responsibly. From my perspective, this means:
- Understanding how AI systems work and the data they use.
- Recognizing risks like bias, discrimination and hallucination.
- Knowing when and how to apply human oversight.
- Being aware of legal obligations under the EU AI Act and other relevant frameworks.
The scope of Article 4 is broad. Any organization using AI in the EU must comply, including US businesses offering AI-enabled services in EU markets. And it isn’t just the tech team at risk: A misbehaving chatbot or a hiring algorithm that perpetuates bias could leave the organization liable.
I also see a generational challenge. Digital natives often find AI tools on their own, via search or social media, which can open organizations to risk if there’s no guidance. Shadow AI is a growing concern, too, and banning AI doesn’t stop usage; it often just moves it onto personal devices, where oversight is limited. Having good training and clear policies in place has never been more important. And good training will aim to win hearts and minds, too.
While the regulatory enforcement of Article 4 begins next summer, businesses today can already face civil action or complaints to data protection authorities if AI is used irresponsibly. I’ve already seen complaints like this made against social media companies, food delivery operators and a UK business behind a popular dating app that used AI to generate icebreakers for initial introductions.
For this reason, it’s essential that businesses get on the front foot; there are several practical actions to take:
- Map your AI estate: Have “bring out your dead” sessions and consider an AI amnesty to find out who is already using what. Audit all AI systems, whether for decision-making, customer interaction or content generation.
- Tailor AI literacy training: Make it role-specific. HR teams using AI in hiring need to understand bias, data protection and explainability, while other teams may need different risks explained.
- Review contracts with third parties: Vendors using AI on your behalf must also meet literacy standards.
- Set internal AI policies: Define acceptable use, approval processes and human review requirements.
- Engage leadership: Responsible AI use starts at the top. Leaders must embed a culture of compliance and transparency.
From my perspective, the focus on AI literacy marks a major shift. Businesses can no longer claim they are deploying AI responsibly if their employees don’t understand it. Just as GDPR transformed data practices, the EU AI Act is reshaping how AI is implemented, monitored and explained. What used to be best practice is now a legal obligation, and the sooner businesses act, the better placed they will be to comply.