Corporate Compliance Insights
Data Authenticity & Accountability Crucial in the AI Age

Companies must blend innovative and traditional methods for policy development, privacy programs and regulatory alignment

by Greg Campanella and Ken Feinstein
April 20, 2026
in Governance

Data is one of an organization’s most valuable assets, yet it is one of its most vulnerable, and AI is introducing more risk. One principle remains clear, Greg Campanella and Ken Feinstein of consultancy J.S. Held say: Data authenticity and integrity are foundational for AI deployment and long-term value. 

Structured data has become highly valued in the digital world, and accordingly, the risk of data manipulation and related fraud has increased. AI has enabled threat actors like terrorist groups and cybercriminals to create deepfakes and more easily gain access to environments that hold sensitive personal information. While methods existed to make fake data appear authentic before the advent of AI, new technologies have made it harder to distinguish between real and deceptive data.

Business email scams, BYOD (bring your own device) policies and falsified electronic documents, for example, are big risks to businesses. The consequences of data integrity failures can be severe: costly investigations, litigation, reputational damage and operational disruptions.

Compliance challenges in a fragmented regulatory landscape

Any organization that does business in Europe must consider the GDPR. Since its implementation in 2018, the regulation's primary goal has been to protect the personal data and privacy of individuals within the European Union.

However, the European Commission (EC) recently voted favorably on a digital omnibus that would streamline rules on AI, cybersecurity and data. The package rewrites EU privacy laws and simplifies compliance, which will lower administrative costs. Companies will save an estimated €5 billion, nearly $6 billion, by 2029, according to projections. While details surrounding rules and timelines are still being fleshed out, most provisions are due to take effect by early August.

For the GDPR, the EC says the omnibus package proposes to modernize cookie rules. It also aims to simplify certain obligations for businesses and organizations “by clarifying when they must conduct data protection impact assessments and when and how to notify data breaches to supervisory authorities.”

Ensuring data is encrypted and secure is critical, but companies should note that GDPR enforcement varies across countries and should develop data policies based on local and regional laws.

Furthermore, the EC is proposing amendments to the EU’s AI Act. Those amendments will include simplified technical documentation requirements for small and medium-sized enterprises. A proposed amendment would also ease compliance measures, allowing innovators to use regulatory sandboxes. The EC also notes that the omnibus package will introduce “a single-entry point where companies can meet all incident-reporting obligations.” Currently, companies must report cybersecurity incidents under several laws.

In contrast, the US has not enacted a comprehensive federal data privacy law. Instead, privacy is governed by a patchwork of state-level laws and sector-specific laws at the federal level.

However, the Trump Administration is moving to block the patchwork of state laws regulating AI. In late 2025, the administration announced an executive order that directs the US attorney general to establish an AI litigation task force to challenge state AI laws that it deems harmful to innovation or likely to create costly compliance requirements. The order specifically calls out Colorado's law, highlighting its prohibition on "algorithmic discrimination" as an example of harmful state overreach, arguing that such provisions compel companies to embed ideological bias and generate false results. California will arguably feel the greatest impact of the order, as it is home to 33 of the world's highest-grossing privately held AI companies and has enacted more AI laws than any other state. The order targets California laws that require creators of AI to offer tools to help users identify AI-generated content and mandate high-level transparency regarding the data used to train models.

As AI plays a growing role in data storage and management, the US Cybersecurity and Infrastructure Security Agency (CISA) issued guidance that recommends sourcing data from trusted providers, tracking its provenance, maintaining logs of origin and system flow and using cryptographically signed provenance databases and digital signatures to ensure integrity and prevent tampering.
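The CISA recommendations above — tracking provenance, maintaining origin logs and using digital signatures to prevent tampering — can be sketched in a few lines. This is an illustrative toy, not CISA-endorsed tooling: the record schema, the HMAC signing key and the `record_provenance`/`verify_log` helpers are all assumptions made for the example.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; in practice this lives in a key-management system.
SIGNING_KEY = b"replace-with-a-managed-secret"

def record_provenance(log, source, action, data: bytes):
    """Append a signed, hash-chained entry describing where data came from."""
    prev = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "source": source,                    # e.g. a trusted data provider
        "action": action,                    # e.g. "ingest", "transform"
        "data_sha256": hashlib.sha256(data).hexdigest(),
        "prev_hash": prev,                   # chains entries together
    }
    body = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(body).hexdigest()
    entry["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    log.append(entry)
    return entry

def verify_log(log):
    """Recompute hashes and signatures; any tampering breaks the chain."""
    prev = "0" * 64
    for e in log:
        fields = {k: e[k] for k in ("source", "action", "data_sha256", "prev_hash")}
        body = json.dumps(fields, sort_keys=True).encode()
        if e["prev_hash"] != prev:
            return False
        if hashlib.sha256(body).hexdigest() != e["entry_hash"]:
            return False
        expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, e["signature"]):
            return False
        prev = e["entry_hash"]
    return True

log = []
record_provenance(log, "vendor-a", "ingest", b"customer dataset v1")
record_provenance(log, "etl-pipeline", "transform", b"customer dataset v1, cleaned")
assert verify_log(log)

log[0]["source"] = "unknown"   # simulate tampering with the recorded origin
assert not verify_log(log)     # verification now fails
```

Chaining each entry to the previous one means an attacker cannot quietly rewrite the recorded origin of a dataset: altering any field invalidates that entry's hash and, transitively, every entry after it.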

AI inaccuracies, corporate liability & why governance matters

Within this fragmented regulatory environment, the risks of improper AI use are far from theoretical. Consider a Canadian case in which a passenger used a chatbot on an airline’s website and received inaccurate information about pricing. In court, the airline contended that it bore no responsibility for information provided by its chatbot. The tribunal rejected that argument and ordered the airline to issue a refund to the passenger.

This case underscores that AI‑enabled interactions are increasingly treated as formal corporate communications, creating exposure when automated outputs diverge from policies. These developments are prompting a closer internal focus on oversight, documentation and cross‑functional coordination, particularly as compliance teams evaluate the reliability of AI‑generated content and its implications across customer engagement, investigations and disputes.

Given these risks, authenticating electronically stored and transmitted information is crucial, particularly for companies with cross-border operations, complex supply chains or significant financial exposure. To mitigate these risks, companies can establish robust internal governance policies, implement proactive privacy and security protocols and ensure employees are aware of emerging threats and historical methods of information manipulation.

Protecting IP through AI governance

Data is often the most fundamental component of AI performance, offering unique opportunities for asset recognition, protection, valuation and monetization. This is especially relevant, given that data integrity directly supports the value of a company’s IP. For compliance teams and their outside counsel, the IP dimension of data handling and AI governance carries meaningful regulatory, contractual and litigation risk.

Weak provenance controls, unclear licensing rights and undocumented data flows can undermine internal controls, complicate regulatory responses and create issues in transactions and investigations where organizations must demonstrate that sensitive or proprietary data was collected, used and safeguarded appropriately. This includes ensuring that cross‑border data‑use and licensing agreements clearly define and restrict the use of proprietary or third‑party datasets in AI development, as improper use can trigger contractual violations and expose trade‑secret vulnerabilities. The data and computational resources that enable AI model training can themselves constitute protectable trade secrets, and the algorithms used for learning, prediction and generation may also be protected through trade secret, copyright or patent regimes. Because information used to train large language models loses confidentiality and exclusivity once processed, these risks extend to downstream enforcement and licensing. Strict access controls are therefore essential when handling finite or one‑time‑use data.

Proprietary data requires robust protection beyond watermarking for tracking purposes, starting with a clear understanding of the data room and continuous monitoring of when and how data exits that controlled environment. Emerging solutions include secure user-controlled environments that enable data owners to share and license access without transferring or duplicating the underlying asset. These environments enable third parties to perform analysis, validation or model training within a contained framework, preserving confidentiality and ensuring compliance with use and licensing terms.

This is especially critical during high-stakes transactions such as M&A, where sensitive data must be validated without direct exposure. In these scenarios, third-party validators can conduct due diligence within secure environments, maintaining data integrity while facilitating commercial engagement. By combining traceability, containment and controlled access, companies can protect proprietary data while unlocking its full economic potential.

Ultimately, companies that aim to derive value from their data assets must first understand what data they hold, how it is accessed and the obligations or restrictions attached to it, particularly where data‑use rights and licensing terms create compliance exposure. To balance opportunity with risk, companies must rely on secure environments that support data protection and controlled licensing.

Strengthening data authentication in the AI age

As AI reshapes how information is created and shared, the ability to verify data authenticity has become a cornerstone of reliance and compliance. Digital forensics plays a critical role in litigation, regulatory investigations and corporate transactions.

With AI tools creating digital images, documents and, in some instances, hallucinations, it is critical for those seeking to collect, evaluate and present data as evidence in litigation to authenticate the origin of the data they rely on. For compliance teams, the ability to authenticate data is increasingly tied to regulatory expectations, influencing how organizations document controls, substantiate reporting and demonstrate adherence to privacy and security requirements. Moreover, as AI-generated materials become harder to distinguish from authentic data, ensuring that systems handling sensitive information follow defensible, auditable procedures is becoming a core compliance concern.

Forensic computer images, for example, use hash verification to ensure that the information captured is not altered after imaging. The evolution of AI, resulting in more realistic documents and images, has outpaced early AI-generated content-detection tools. Organizations must therefore adopt a layered approach to authentication, combining forensic validation with corroboration from independent sources and third-party attestations. This is particularly vital during M&As, where companies inherit not only data but also governance frameworks or gaps. Assessing inherited privacy, security and compliance structures, eliminating redundancies, and aligning practices with global standards are critical steps to mitigate risk.
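The hash-verification step described above can be illustrated briefly: record a digest at acquisition time, then recompute it whenever the image is relied on. This is a simplified sketch assuming the image is an ordinary file on disk; the file name and helper are hypothetical, and real forensic workflows involve write blockers and evidence containers not shown here.

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so arbitrarily large images fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# A throwaway file stands in for a forensic disk image in this sketch.
path = os.path.join(tempfile.mkdtemp(), "evidence.img")
with open(path, "wb") as f:
    f.write(b"disk image contents")

acquisition_hash = sha256_of(path)           # recorded at imaging time
assert sha256_of(path) == acquisition_hash   # later verification: unchanged

with open(path, "ab") as f:                  # simulate post-imaging alteration
    f.write(b"!")
assert sha256_of(path) != acquisition_hash   # digest no longer matches
```

Because any single-byte change produces a different digest, comparing the stored acquisition hash against a freshly computed one gives a defensible, auditable check that the image was not altered after imaging.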

This material was adapted with permission from an article first published by J.S. Held.
Tags: Artificial Intelligence (AI), Data Governance
Greg Campanella and Ken Feinstein


Greg Campanella, CLP, is a senior managing director at J.S. Held in California. He is responsible for leading the management consulting & valuation practice of Ocean Tomo, a part of J.S. Held. His work has focused on strategic advisory and the valuation of intangible assets and intellectual property for acquisitions and divestitures, bankruptcy and restructuring, establishment of monetization strategies, including licensing, mergers, joint venture/partnership formations, litigation support and tax matters.
Ken Feinstein is a senior managing director in the digital investigations and discovery service line within the global investigations practice at J.S. Held. He specializes in investigative data analytics and provides investigations, regulatory risk and litigation support solutions spanning multiple sectors, including retail and consumer products, life sciences, technology, financial services, industrial products and government agencies. His clients include law firms and Fortune 500 legal and compliance teams for whom he delivers large-scale, complex investigations, regulatory response matters, proactive anti-fraud efforts and compliance programs. He is a member of the Association of Certified Fraud Examiners.

© 2026 Corporate Compliance Insights

Welcome to CCI. This site uses cookies. Please click OK to accept. Privacy Policy
Cookie settingsACCEPT
Manage consent

Privacy Overview

This website uses cookies to improve your experience while you navigate through the website. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. We also use third-party cookies that help us analyze and understand how you use this website. These cookies will be stored in your browser only with your consent. You also have the option to opt-out of these cookies. But opting out of some of these cookies may affect your browsing experience.
Necessary
Always Enabled
Necessary cookies are absolutely essential for the website to function properly. These cookies ensure basic functionalities and security features of the website, anonymously.
CookieDurationDescription
cookielawinfo-checbox-analytics11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Analytics".
cookielawinfo-checbox-functional11 monthsThe cookie is set by GDPR cookie consent to record the user consent for the cookies in the category "Functional".
cookielawinfo-checbox-others11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Other.
cookielawinfo-checkbox-necessary11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookies is used to store the user consent for the cookies in the category "Necessary".
cookielawinfo-checkbox-performance11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Performance".
viewed_cookie_policy11 monthsThe cookie is set by the GDPR Cookie Consent plugin and is used to store whether or not user has consented to the use of cookies. It does not store any personal data.
Functional
Functional cookies help to perform certain functionalities like sharing the content of the website on social media platforms, collect feedbacks, and other third-party features.
Performance
Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.
Analytics
Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics the number of visitors, bounce rate, traffic source, etc.
Advertisement
Advertisement cookies are used to provide visitors with relevant ads and marketing campaigns. These cookies track visitors across websites and collect information to provide customized ads.
Others
Other uncategorized cookies are those that are being analyzed and have not been classified into a category as yet.
SAVE & ACCEPT
No Result
View All Result
  • About
    • About CCI
    • Writing for CCI
    • NEW: CCI Press – Book Publishing
    • Advertise With Us
  • Explore Topics
    • See All Articles
    • Compliance
    • Ethics
    • Risk
    • Artificial Intelligence (AI)
    • FCPA
    • Governance
    • Fraud
    • Internal Audit
    • HR Compliance
    • Cybersecurity
    • Data Privacy
    • Financial Services
    • Well-Being at Work
    • Leadership and Career
    • Opinion
  • Vendor News
  • Downloads
    • Download Whitepapers & Reports
    • Download eBooks
  • Books
    • CCI Press
    • New: Bribery Beyond Borders: The Story of the Foreign Corrupt Practices Act by Severin Wirz
    • CCI Press & Compliance Bookshelf
    • The Seven Elements Book Club
  • Podcasts
    • Great Women in Compliance
    • Unless: The Podcast (Hemma Lomax)
  • Research
  • Webinars
  • Events
  • Subscribe

© 2026 Corporate Compliance Insights