Corporate Compliance Insights

Compliance Classroom: Emerging Perspectives on AI

Essays on moral distancing, information silos and IP infringement

March 20, 2026
in Compliance

No one can reliably predict which regulations tomorrow will bring, but the future of compliance is already taking shape in the classrooms training the next generation of practitioners. Here, CCI offers a glimpse into those conversations — a collection of essays from law students grappling with the thorniest questions in the field today. The following essays are published with permission from the authors, Shon Stelman and Michael Niebergall, both students at George Mason University’s Antonin Scalia Law School.

Shon Stelman

Moral Distancing, Information Silos & the Future of Compliance in AI-Powered Companies

Introduction


The rapid integration of artificial intelligence (AI) into corporate governance has created a profound paradox for compliance practitioners. While AI provides unprecedented technical capacity for real-time monitoring and fraud detection, its implementation inherently increases moral distancing and bureaucratic distance. New empirical research indicates that delegating tasks to AI significantly increases dishonest behavior, because humans feel psychologically buffered from the ethical consequences of automated decisions. For experienced practitioners, the challenge is no longer just technical; it is keeping sight of the "vulnerable face" of the stakeholder behind a veil of data. To satisfy the US Department of Justice's 2024 guidance and the UK Bribery Act's "adequate procedures" defense, organizations must move beyond static structural oversight to implement process-based "generative compliance" reforms that actively counteract the psychological detachment and information silos introduced by automated systems.

The peril of moral distancing in anti-corruption

Moral distance refers to the psychological phenomenon where individuals behave unethically because they cannot see or feel the impact of their decisions. AI exacerbates this phenomenon by creating “proximity distance” — eliminating face-to-face interactions — and “bureaucratic distance,” where decisions are reduced to formulas. The 2025 study referenced above found that participants were significantly more likely to cheat when they could offload the behavior to an AI agent, particularly when using interfaces that allowed for high-level “goal-setting” (e.g., “maximize profit”) rather than explicit instructions.

This has particularly severe implications for the Foreign Corrupt Practices Act (FCPA). Under the FCPA, "willful blindness" or a "head-in-the-sand" approach is sufficient for liability. For example, if an employee uses a goal-oriented AI to secure contracts in a high-risk region and the AI defaults to corrupt payments to meet targets, the practitioner will be hard-pressed to claim ignorance. In addition to the FCPA, the UK Bribery Act holds organizations liable for failing to prevent bribery by "associated persons." Consequently, if AI creates a "bureaucratic distance" in which supervisors lose sight of the "vulnerable face" of those affected and lose their grasp of their business partners, the organization will struggle to prove it had "adequate procedures" in place to prevent such misconduct.

AI as the ultimate information silo

Historically, major corporate scandals — such as those at Wells Fargo and General Motors — resulted not from a lack of data but from information silos that prevented synthesized reporting. AI risks becoming the ultimate silo. Its “inscrutability” — the mismatch between mathematical optimization and human reasoning — makes it difficult for compliance officers to “identify, judge, and correct mistakes in algorithmic decisions.”

The Department of Justice’s updated 2024 guidance emphasizes that the “black box” nature of AI is not an excuse for failing to meet legal and ethical standards. Practitioners must ensure that AI-driven decisions are subject to human review and that the AI is ethically aligned with internal governance. Failure to leverage data effectively to prevent misconduct may invite “intense regulatory scrutiny.”
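One simple way a program can operationalize that human-review expectation is a routing gate: algorithmic decisions above a risk threshold, or below a confidence floor, are held for a compliance officer rather than executing automatically. The sketch below is purely illustrative; the field names and thresholds are hypothetical, not drawn from the DOJ guidance or any vendor's product.

```python
def route_decision(decision, risk_ceiling=0.7, confidence_floor=0.9):
    """Decide whether an AI-generated decision may execute automatically.

    decision: dict with hypothetical 'risk_score' and 'confidence' fields
    (both 0.0-1.0). Anything risky or uncertain is escalated to a human
    reviewer, preserving the accountability regulators expect.
    """
    if decision["risk_score"] >= risk_ceiling or decision["confidence"] < confidence_floor:
        return "human_review"
    return "auto_approve"

# A routine, high-confidence decision executes; a risky one is escalated.
print(route_decision({"risk_score": 0.2, "confidence": 0.95}))  # auto_approve
print(route_decision({"risk_score": 0.8, "confidence": 0.95}))  # human_review
```

The point of even a toy gate like this is that the thresholds themselves become auditable governance artifacts, rather than judgments buried inside a model.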

From structural to process-based “generative compliance”

To mitigate these risks, practitioners must transition to “generative compliance,” a proactive, forward-thinking approach where compliance programs evolve alongside emerging risks. This requires moving beyond “structural” changes (e.g., creating new committees) to “process-based” reforms, which focus on the practices and routines firms use to communicate and analyze information.

Three process-based interventions are critical:

  1. Standardized internal investigation questions: Ensuring that AI-monitored risks are probed with consistent human oversight to spot trends.
  2. Materiality surveys: Disseminating surveys to the workforce to detect when automated systems are being exploited to achieve commercial targets at the cost of ethics.
  3. Aggregation principles: Aggregating data from disparate AI systems to identify systemic failures, much as General Motors should have aggregated separate settlement data to identify the faulty ignition switch earlier.
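The aggregation principle can be sketched loosely in code. The example below (hypothetical field names, not any specific vendor's schema) pools incident records from separate monitoring feeds and flags components that recur across silos, the cross-system pattern that, as in the General Motors case, no single feed reveals on its own:

```python
from collections import Counter

def flag_systemic_issues(feeds, threshold=2):
    """Count how many *distinct* monitoring feeds implicate each component.

    feeds: mapping of feed name -> list of incident dicts with a 'component' key.
    Returns components reported by at least `threshold` different feeds --
    the cross-silo signal a single system would miss.
    """
    seen = Counter()
    for incidents in feeds.values():
        for component in {i["component"] for i in incidents}:  # dedupe within one feed
            seen[component] += 1
    return sorted(c for c, n in seen.items() if n >= threshold)

# Hypothetical example: settlements and warranty claims each look minor in
# isolation, but together they point at the same part.
feeds = {
    "legal_settlements": [{"component": "ignition_switch"}, {"component": "airbag"}],
    "warranty_claims": [{"component": "ignition_switch"}],
    "hotline_reports": [{"component": "expense_coding"}],
}
print(flag_systemic_issues(feeds))  # ['ignition_switch']
```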

Conclusion

Experienced practitioners must “re-humanize” responsibility. AI is not a “plug-and-play” solution, but an ongoing commitment. A well-designed program under the 2024 DOJ standards must assess whether human decision-making is used to audit the AI’s “goals.” By implementing robust processes that bridge the moral distance created by technology, firms can ensure that their AI-driven compliance programs actually “work in practice,” securing both the company’s legal safety and its ethical integrity.

Shon Stelman is a second-year student at George Mason University, Antonin Scalia Law School and holds a B.M. and M.M. in classical guitar performance and pedagogy from Johns Hopkins University, Peabody Conservatory. During his undergraduate studies, Shon was a teaching assistant in musicology and peer mentor in music theory. Prior to law school, Shon worked at a small personal injury and family law firm and later at an employment discrimination law firm as a litigation paralegal. During summer 2025, he interned with the US Department of Justice’s Office of Vaccine Litigation. Shon is an incoming research editor on the George Mason Law Review. His hometown is Wheeling, Ill.

Michael Niebergall

Though Law Is Still Developing, Companies Should Act in Good Faith Now


AI has evolved from a novelty into a substantial tool for individuals and businesses alike, and with that evolution come numerous legal questions, particularly in the field of copyright. Because current AI models are often used to generate text, images, software code and music, they raise difficult questions about how developers and users comply with copyright law. The legal landscape for AI and copyright is still developing, so AI developers and businesses that use AI have begun taking measures to mitigate copyright compliance risks, both in the training data fed into their models and in the models' generative outputs.

Currently, the use of copyrighted works to train AI models is largely viewed as fair use. Lower courts have reasoned that "training" a model through analysis of the works is inherently transformative of the works' original nature, and that neither the training process nor its potential end results serve as a substitute for the original works. However, the Supreme Court has yet to provide any guidance on the topic, so the issue remains legally unresolved and subject to serious change. Many rights owners object to this emerging precedent, arguing that models trained on protected works can then output similar works, creating market substitutions that defeat fair use; industries such as stock photography, journalism and commission-based illustration are particularly vulnerable. Multiple artists and publishers have filed lawsuits against companies like Anthropic for this exact reason.

To avoid potential legal issues such as secondary liability for copyright infringement over unlicensed training datasets, AI model developers have begun implementing safeguards on the training data for their models. These include filters for high-risk categories of data, documentation of the datasets used for training and provenance tracking, so that developers know exactly how a model has been trained. Developers have also heightened efforts to get permission from creators, often paying licensing fees even when using the works for training would likely already be permissible as fair use. These steps help ensure that, should a question of secondary liability arise, the developers will be seen as having taken "reasonable, good faith" measures to prevent predictable infringements.
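A minimal sketch of what such record-keeping might look like is below. The field names and license categories are hypothetical illustrations, not any developer's actual schema; the idea is simply that hashing each document lets a developer later prove exactly which version of a work was, or was not, in the corpus, and that unverified material is filtered out before training:

```python
import hashlib
import datetime

def provenance_record(path, content, license_status, source_url=None):
    """Build a minimal provenance entry for one training document.

    license_status is an illustrative label such as "licensed",
    "public_domain" or "unverified".
    """
    return {
        "path": path,
        "sha256": hashlib.sha256(content.encode()).hexdigest(),
        "license_status": license_status,
        "source_url": source_url,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

def filter_high_risk(records, allowed=("licensed", "public_domain")):
    """Exclude documents whose licensing cannot be verified before training."""
    return [r for r in records if r["license_status"] in allowed]

records = [
    provenance_record("corpus/novel.txt", "text of an unverified work", "unverified"),
    provenance_record("corpus/manual.txt", "text of a licensed manual", "licensed"),
]
print([r["path"] for r in filter_high_risk(records)])  # ['corpus/manual.txt']
```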

The outputs of generative AI models are also a burgeoning area of copyright law concern for developers and users. Copyright infringement liability may attach to either the developer or the user of the AI model if the model produces works that are substantially similar to already-protected works. To mitigate this risk to a legally reasonable point, model developers have begun scanning prompts for keywords/phrases that could indicate a potentially infringing request to prevent the model from generating the work in the first place. This particularly helps ensure compliance with copyright law by preventing the AI model from functioning as a substitute for the works it could potentially infringe upon.
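As a rough illustration of that prompt-screening idea, the sketch below flags requests whose wording suggests imitation or verbatim reproduction of a protected work. The pattern list is invented for this example; a production system would rely on trained classifiers rather than a static keyword list:

```python
import re

# Illustrative patterns only -- not any model provider's actual filter.
BLOCKED_PATTERNS = [
    r"\bin the style of\b",
    r"\bexact lyrics\b",
    r"\breproduce\b.*\bverbatim\b",
]

def screen_prompt(prompt):
    """Return (allowed, reason), blocking prompts that appear to request
    imitation or reproduction of a specific protected work."""
    lowered = prompt.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return False, f"matched pattern: {pattern}"
    return True, None

print(screen_prompt("Write a short poem about autumn"))          # (True, None)
print(screen_prompt("Reproduce the first chapter verbatim")[0])  # False
```

Blocking at the prompt stage, rather than filtering outputs after the fact, supports the compliance goal described above: the model never produces the potentially substitutive work in the first place.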

Businesses that utilize generative AI have also begun developing internal AI usage policies for copyright compliance. A work produced by AI needs enough human creative input to be considered "authored" by a human, a necessary element for a work to be protected by copyright. Companies are therefore implementing careful human-oversight policies so that they can properly claim and protect the text, selection and arrangement of works produced through AI. Failure to disclaim the portions of a generated work created solely by the AI model (which are not protectable) could result in a denial of copyright registration, with all sorts of headaches for the business. These internal usage policies also help businesses prevent accidental infringement in their generated works: AI models are imperfect and may still produce material that infringes in some capacity, so businesses must carefully monitor any work they generate to ensure that infringing material has not slipped through the AI's protections.

As the law continues to develop in this area, organizations using AI will increasingly be judged by whether they acted reasonably and in good faith to mitigate infringement; the uniqueness and breadth of this new technological field make perfect compliance an unrealistic expectation. New, specific safeguards are creating a "reasonable" degree of protection against widespread, predictable copyright infringement in both an AI's training and its outputs, and familiar compliance frameworks give these parties practical tools for managing and mitigating infringement risks. As the rules are refined, organizations will need to remain adaptable in their internal governance and protections to continue ensuring compliance.

Michael Niebergall is a 2L at Scalia Law School at George Mason University. His interest in entertainment and IP law comes from two decades of classical music training on the tuba, his music major experience during undergraduate years at James Madison University and a lifelong enjoyment of all things nerdy. Mike is also very interested in how the advent of AI has affected and will continue to affect these fields, and how artists, publishers and everyone in between will respond to those changes.

 

Tags: Artificial Intelligence (AI), Compliance Classroom
Founded in 2010, CCI is the web’s premier global independent news source for compliance, ethics, risk and information security. 

© 2026 Corporate Compliance Insights
