Corporate Compliance Insights

Landmark EU AI Act Takes Effect; Here’s What You Need to Know

Risk-based prohibitions expected to start phasing in next year

by Jonathan Armstrong
August 14, 2024

The European Union’s landmark AI Act formally went into effect Aug. 1, changing the way artificial intelligence is regulated across Europe — and, indeed, around the world. This first-ever comprehensive legal framework aims to ensure that AI systems released in the EU market and used in the EU are safe. Punter Southall partner Jonathan Armstrong explores the details of the regulation and what corporations around the globe need to know.

The first thing to say is that even before the EU AI Act was passed, AI was not completely unregulated in the EU, thanks to the GDPR. Previous enforcement activity against AI under the GDPR has included:

  • An Italian ban of the ReplikaAI chatbot
  • Google’s temporary suspension of its Bard AI tool in the EU after intervention by Irish authorities
  • Italian fines for Deliveroo and a food delivery start-up over AI algorithm use
  • Clearview AI fines under GDPR

But this regulation sets out the following risk-based framework: 

Minimal risk

Most AI systems present only minimal or no risk to citizens’ rights or safety. There are no mandatory requirements, though organizations may voluntarily commit to additional codes of conduct for these systems. Minimal-risk AI systems generally perform simple automated tasks with no direct human interaction, such as filtering email spam.

High risk

Those AI systems identified as high risk will be required to comply with strict requirements, including: (i) risk-mitigation systems; (ii) obligation to ensure high quality of data sets; (iii) logging of activity; (iv) detailed documentation; (v) clear user information; (vi) human oversight; and (vii) a high level of robustness, accuracy and cybersecurity.

Providers and deployers will be subject to additional obligations regarding high-risk AI. Providers of high-risk AI systems (and general-purpose AI model systems discussed below) established outside the EU will be required to appoint an authorized representative in the EU in writing. In many respects this is similar to the data protection representative (DPR) provisions in GDPR. There is also a registration requirement for high-risk AI systems under Article 49.

Examples of high-risk AI systems include:

  • Some critical infrastructures, for example, for water, gas and electricity
  • Medical devices
  • Systems to determine access to educational institutions or for recruiting people
  • Some systems used in law enforcement, border control, administration of justice and democratic processes
  • Biometric identification, categorization and emotion recognition systems

Unacceptable risk

AI systems considered a clear threat to the fundamental rights of people will be banned outright early next year, including:

  • Systems or applications that manipulate human behavior to circumvent users’ free will, such as toys using voice assistance to encourage dangerous behavior in minors, systems that allow so-called “social scoring” by governments or companies, and some applications of predictive policing
  • Some uses of biometric systems, for example, emotion recognition systems used in the workplace, some systems for categorizing people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces, subject to some narrow exceptions

Specific transparency risk

These systems, also called limited-risk AI systems, must comply with transparency requirements. When AI systems like chatbots are used, users need to be aware that they are interacting with a machine. Deepfakes and other AI-generated content will have to be labeled as such, and users will have to be informed when biometric categorization or emotion recognition systems are being used.

In addition, service providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated.
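The Act does not mandate a particular marking technology, so as a purely illustrative sketch, a provider might attach a machine-readable provenance flag to generated content’s metadata along these lines (the field names here are hypothetical, not drawn from the Act or any standard):

```python
import json

def mark_as_ai_generated(metadata: dict) -> str:
    """Attach a machine-readable provenance flag to content metadata.

    The EU AI Act requires synthetic content to be detectable as
    artificially generated but does not prescribe a format; the field
    names below are hypothetical illustrations only.
    """
    tagged = dict(metadata)
    tagged["ai_generated"] = True              # machine-readable flag
    tagged["generation_method"] = "synthetic"  # hypothetical label
    return json.dumps(tagged, sort_keys=True)

# Example: labeling a generated image's metadata
label = mark_as_ai_generated({"title": "Sunset over Brussels"})
```

In practice, providers are more likely to rely on emerging provenance standards, such as cryptographically signed content credentials, than on ad hoc metadata fields like these.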

Systemic risk

Systemic risk is risk that:

  1. Is specific to the high-impact capabilities of general-purpose AI models
  2. Has a significant impact on the EU market due to its reach or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights or society as a whole
  3. Can be propagated at scale
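The tiered framework described above can be summarized in a small lookup, shown here as an illustrative Python sketch. The tier assignments paraphrase this article’s examples and are simplifications for orientation only, not legal classifications:

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's risk tiers, as summarized above (simplified)."""
    MINIMAL = "minimal"
    SPECIFIC_TRANSPARENCY = "specific transparency"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Illustrative mapping of example systems to tiers; these paraphrase
# the article's examples and are not legal determinations.
EXAMPLES = {
    "email spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.SPECIFIC_TRANSPARENCY,
    "recruitment screening system": RiskTier.HIGH,
    "workplace emotion recognition": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    """Headline obligation for each tier (heavily simplified)."""
    return {
        RiskTier.MINIMAL: "no mandatory requirements; voluntary codes",
        RiskTier.SPECIFIC_TRANSPARENCY: "disclose machine interaction; label output",
        RiskTier.HIGH: "risk mitigation, logging, documentation, human oversight",
        RiskTier.UNACCEPTABLE: "prohibited",
    }[tier]
```

A real classification exercise would, of course, work from the Act’s definitions and annexes rather than a four-entry table.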

General-purpose AI

The EU AI Act introduces dedicated rules for so-called general-purpose AI (GPAI) models aimed at ensuring transparency. Generally speaking, this means an AI system that is intended by the service provider to perform generally applicable functions like image and speech recognition, audio and video generation, pattern detection, question answering, translation and others.

For very powerful models that could pose systemic risks, there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation and adversarial testing — a bit like red teaming to test for information security issues. These obligations will come about through codes of practice developed by a number of interested parties.


What about enforcement?

Market surveillance authorities (MSAs) will supervise the implementation of the EU AI Act at the national level. Member states must designate at least one MSA and one notifying authority as their national competent authorities before Aug. 2, 2025. It is by no means guaranteed that each member state will appoint its data protection authority (DPA) as the in-country MSA, but the European Data Protection Board pushed for them to do so in its plenary session in July 2024.

In addition to in-country enforcement across the EU, a new European AI Office within the European Commission will coordinate matters at the EU level and will also supervise the implementation and enforcement of the EU AI Act concerning general-purpose AI models.

With regard to GPAI, the European Commission, and not individual member states, has the sole authority to oversee and enforce rules related to GPAI models. The newly created AI Office will assist the Commission in carrying out various tasks.

In some respects, this system mirrors the current regime in competition law with in-country enforcement together with EU coordination. But this could still lead to differences in enforcement activity across the EU as we’ve seen with GDPR, especially if the same in-country enforcement bodies have responsibility for both GDPR and the EU AI Act.

In certain circumstances, dawn raids may be possible in enforcement actions. The first relates to testing high-risk AI systems in real-world conditions: under Article 60 of the Act, MSAs will have powers of unannounced inspection, both remote and on-site, to conduct checks on that type of testing.

The second is that competition authorities may perform dawn raids as a result of this act. MSAs will report annually to national competition authorities any information identified in their market surveillance activities that may be of interest to them. Competition authorities have had the power to conduct dawn raids under antitrust laws for many years, so they might conduct raids based on the information or reports they receive.

Penalties

When a national authority or MSA finds that an AI system is not compliant, it has the power to require corrective actions to make that system compliant and to withdraw, restrict or recall the system from the market.

Similarly, the Commission may also request those actions to enforce GPAI compliance.

Noncompliant organizations can be fined under the new rules, as follows:

  • €35 million or 7% of global annual turnover of the preceding year for violations of banned AI applications
  • €15 million or 3% for violations of other obligations, including rules on general purpose AI models
  • €7.5 million or 1.5% for supplying incorrect, incomplete or misleading information in reply to a request

Lower thresholds are foreseen for small and mid-sized companies and higher thresholds for other companies.
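As a worked example of how these caps scale with company size, the sketch below assumes the higher of the fixed amount and the turnover percentage applies, per the Act’s fine provisions (Article 99):

```python
def max_fine_eur(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Upper bound of a fine under one of the EU AI Act's penalty tiers.

    Assumes the higher of the fixed cap and the percentage of global
    annual turnover applies, per the Act's fine provisions.
    """
    return max(cap_eur, turnover_eur * pct)

# Prohibited-practice tier (EUR 35M or 7%) for EUR 2B global turnover:
worst_case = max_fine_eur(2_000_000_000, 35_000_000, 0.07)  # 7% = EUR 140M
```

For a company with EUR 2 billion in turnover, the 7% figure (EUR 140 million) exceeds the fixed EUR 35 million cap; for smaller companies the fixed amount dominates. Actual fines, of course, depend on the circumstances, not just the statutory ceiling.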

Applicability outside the EU

The AI Act’s extraterritorial application is quite similar to that of the GDPR; as such, these rules may affect organizations in the UK and elsewhere, including the U.S. Broadly, the EU AI Act will apply to organizations outside the EU if their AI systems or AI-generated output are on the EU market or their use affects people in the EU, directly or indirectly.

For example, if a U.S. company’s website has a chatbot function that is available for people in the EU to use, that U.S. business will likely be subject to the EU AI Act. Similarly, if a non-EU organization does not provide AI systems to the EU market but does make AI system-generated output (such as media content) available to people in the EU, that organization will be subject to the act.

The UK, the U.S., China and other jurisdictions are addressing AI issues in their own particular ways.

The UK government published a whitepaper on its approach to AI regulation in March 2023, which set out its proposed “pro-innovation” regulatory framework for AI, and subsequently had a public consultation on the proposals. The government response to the consultation was published in February 2024.

Since then, the UK government has changed thanks to a shocking election result, and we’ve seen the government’s position on AI change, too. The position of the new Labour Government was set out in the King’s Speech in July 2024 with the new government saying it would, “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” No details are as yet available on what this bill would look like.

What happens next?

The AI Act formally entered into force Aug. 1 and will become fully applicable in two years, apart from some specific provisions. Prohibitions will already apply after six months, and rules on general purpose AI will apply after 12 months.

Date | Applicable elements | Corresponding sections of regulation
2/2/2025 | Prohibitions on unacceptable-risk AI apply (so-called prohibited artificial intelligence practices) | Chapters I and II
5/2/2025 | Codes of practice must be ready; the plan is for providers of GPAI models and other experts to jointly work on a code of practice | Article 56
8/2/2025 | The main body of rules starts to apply: notifying authorities, GPAI models, governance, penalties, confidentiality (except rules on fines for GPAI providers); MSAs should also be appointed by member states | Chapter III Section 4, Chapter V, Chapter VII, Chapter XII, Article 78 (except Article 101)
8/2/2026 | The remainder of the act applies, except Article 6(1) |
8/2/2027 | Article 6(1) and the corresponding obligations apply; these relate to some high-risk AI systems covered by existing EU harmonization legislation (Annex I systems, e.g., those covered by existing EU product safety legislation) and GPAI models that have been on the market before Aug. 2, 2025. However, some high-risk AI systems already subject to sector-specific regulation (listed in Annex I) will remain regulated by the authorities that oversee them today (e.g., medical devices) | Article 6(1)

What is the AI pact?

Before the EU AI Act becomes generally applicable, the European Commission will launch a voluntary AI pact, bringing together AI developers from Europe and around the world to commit to implementing key obligations of the EU AI Act ahead of the legal deadlines.

The European Commission has said that over 550 organizations have responded to the first call for interest in the AI pact, but whether that leads to widespread adoption remains to be seen. The Commission has published draft details of the pact to a select group outlining a series of voluntary commitments and is currently aiming to launch the AI pact in October.

This article was adapted from material published by Punter Southall Law; it is republished here with permission.


Jonathan Armstrong

Jonathan Armstrong is a partner at Punter Southall. He is an experienced lawyer with a concentration on technology and compliance. His practice includes advising multinational companies on matters involving risk, compliance and technology across Europe. He has handled legal matters in more than 60 countries involving emerging technology, corporate governance, ethics code implementation, reputation, internal investigations, marketing, branding and global privacy policies. Jonathan has counseled a range of clients on breach prevention, mitigation and response. He has also been particularly active in advising multinational corporations on their response to the UK Bribery Act 2010 and its inter-relationship with the U.S. Foreign Corrupt Practices Act (FCPA).

