CCI staff share recent surveys, reports and analysis on risk, compliance, governance, infosec and leadership issues. Share details of your survey with us: editor@corporatecomplianceinsights.com.
AI oversight disclosures triple as boards elevate technology governance
Nearly half of Fortune 100 companies now specifically cite AI risk as part of board oversight responsibilities, according to new research from the EY Center for Board Matters — a threefold increase from 16% last year to 48% in 2025.
The EY analysis of 80 Fortune 100 company filings and statements indicates boards are significantly expanding their technology governance disclosures. AI-related expertise in director biographies and skills matrices jumped to 44% from 26% in 2024, while 40% of companies now assign AI oversight to at least one board-level committee, compared with just 11% the previous year.
Audit committees remain the most common location for AI oversight at 21% of companies, though disclosures about non-audit committee oversight tend to be more detailed. Companies with technology or nominating and governance committees overseeing AI are more likely to formalize these responsibilities in committee charters and provide specifics such as reviewing approaches to responsible AI development and overseeing ethical AI application.
Other key findings:
- 36% of companies now disclose AI as a separate 10-K risk factor, up from 14% last year.
- 73% of companies disclose alignment with external cybersecurity frameworks, such as NIST CSF 2.0 or ISO 27001, up from 57% last year and just 4% in 2019.
- 58% report conducting cybersecurity simulations, tabletop exercises or response readiness tests, a substantial increase from 3% in 2019.
EY’s research covered SEC filings from companies on the 2025 Fortune 100 list through July 31, 2025.
69% of board professionals use AI for governance work
More than two-thirds of board professionals report using artificial intelligence for board work in the past six months, with 40% experimenting with multiple AI platforms, according to a 2025 survey from OnBoard, a board management software company.
Directors and executive team members were most likely to report using AI, while administrators and support staff were least likely, often citing training and usability as barriers. Respondents in corporate and financial services sectors were most likely to use AI, while nonprofit and education respondents were least likely, citing cost and readiness concerns.
The survey of 549 directors, administrators, executives and corporate secretaries also found persistent challenges in board effectiveness. Underperformance of board members was cited by 35% as the area most needing improvement, followed by low engagement (30%) and limited communication between meetings (24%).
Other key findings:
- 71% of board professionals reported their boards are more effective than they were 12 months ago, consistent with 2023 and 2024 results.
- Cost consciousness (15%), usability issues (12%) and regulatory risks (10%) emerged as recurring AI-related concerns.
- 44% of respondents said they were interested in investing in secure AI capabilities.
The survey was fielded in August 2025 and included respondents from nonprofit, education, financial services, healthcare, corporate, government and association organizations.
Nearly half of UK financial firms express high concern over legal risks under lighter regulation
Nearly half of UK banking and financial services firms report high concern over legal liability and regulatory risks posed by broad UK financial services reforms set to take effect starting next year, according to a survey from Skillcast, a compliance training provider.
The survey of 250 full-time employees in banking and financial services found that more than nine in 10 firms reported some level of concern about increased legal liability or reputational exposure under the lighter regulatory environment announced by Chancellor Rachel Reeves in July 2025. The so-called Leeds reforms aim to reduce red tape and reintroduce informed risk-taking into the system while positioning the UK as a global financial services hub.
Respondents ranked financial loss, regulatory penalties and data breaches as top threats under lighter regulation, while fraud exposure was ranked as a lower-priority risk despite the September 2025 implementation of a new corporate offense of failure to prevent fraud under the Economic Crime and Corporate Transparency Act.
Other key findings:
- Regular compliance training for employees was viewed by 57% of firms as the most important factor supporting compliance under lighter regulation, followed by clear internal policies (54%) and access to compliance tools (39%).
- Only 28% ranked ongoing monitoring and risk assessments as a main factor for maintaining compliance.
- The survey found 69% of firms increased compliance spending in the last 12 months, with one in 10 increasing investment by more than 20%.
Political risk climbs to top three corporate threats as preparedness lags, survey finds
Political risk has become one of the top three corporate threats facing companies, with 97% of risk leaders reporting it is impacting their business in some way, according to a survey from Riskonnect, a risk management software provider.
The survey of more than 200 risk and compliance professionals worldwide found that 40% describe the political risk impact as significant or severe. Companies report having to stall or slow hiring (37%), delay major tech investments or capital expenditures (28%) and delay expansion plans (23%) because of domestic political instability. Yet only 17% of respondents say they feel very prepared to assess, manage and recover from political risks.
The survey also found companies are incorporating agentic AI into operations without adequate risk assessment. Nearly 60% of risk leaders say their companies are considering incorporating agentic AI solutions, yet over half of those leaders (55%) haven’t assessed the risks. Risk leaders cite data privacy and security issues (68%), autonomous decisions that conflict with business goals or legal requirements (52%) and unintended actions from runaway processes (38%) as the biggest risks from deploying agentic AI.
Other key findings:
- Two-thirds of companies (66%) entered 2025 with a plan for managing geopolitical volatility, up from 19% last year, though only 18% say they are very prepared for these risks.
- Some 62% of risk leaders say restrictive trade policies or prolonged conflict would increase cyber exposure from state-sponsored attacks and reduced federal cyber investments.
- Only 12% of companies say they feel very prepared to assess, manage and recover from AI and AI governance risks, with 42% lacking policies to govern employee use of AI and 75% lacking dedicated plans to address generative AI risks.