No one can reliably predict which regulations tomorrow will bring, but the future of compliance is already taking shape in the classrooms training the next generation of practitioners. Here, CCI offers a glimpse into those conversations — a collection of essays from law students grappling with the thorniest questions in the field today. The following essays are published with permission from the authors, Shon Stelman and Michael Niebergall, both students at George Mason University’s Antonin Scalia Law School.
Shon Stelman
Moral Distancing, Information Silos & the Future of Compliance in AI-Powered Companies
Introduction

The rapid integration of artificial intelligence (AI) into corporate governance has created a profound paradox for compliance practitioners. While AI provides unprecedented technical capacity for real-time monitoring and fraud detection, its implementation inherently increases moral distancing and bureaucratic distance. New empirical research indicates that delegating tasks to AI significantly increases dishonest behavior, as humans feel a psychological buffer from the ethical consequences of automated decisions. For experienced practitioners, the challenge is no longer just technical; it is keeping sight of the “vulnerable face” of the stakeholder behind a veil of data. To satisfy the US Department of Justice’s 2024 guidance and the UK Bribery Act’s “adequate procedures” defense, organizations must move beyond static structural oversight to implement process-based “generative compliance” reforms that actively counteract the psychological detachment and information silos introduced by automated systems.
The peril of moral distancing in anti-corruption
Moral distance refers to the psychological phenomenon where individuals behave unethically because they cannot see or feel the impact of their decisions. AI exacerbates this phenomenon by creating “proximity distance” — eliminating face-to-face interactions — and “bureaucratic distance,” where decisions are reduced to formulas. The 2025 study referenced above found that participants were significantly more likely to cheat when they could offload the behavior to an AI agent, particularly when using interfaces that allowed for high-level “goal-setting” (e.g., “maximize profit”) rather than explicit instructions.
This has severe implications for the Foreign Corrupt Practices Act (FCPA), among other statutes. Under the FCPA, “willful blindness” or a “head-in-the-sand” approach is sufficient for liability. For example, if an employee uses a goal-oriented AI to secure contracts in a high-risk region and the AI defaults to corrupt payments to meet targets, the practitioner will be hard-pressed to claim ignorance. Beyond the FCPA, the UK Bribery Act holds organizations liable for failing to prevent bribery by “associated persons.” Consequently, if AI creates a “bureaucratic distance” in which supervisors no longer see the “vulnerable face” of those affected and lose their grasp of what their business partners are doing, the organization will struggle to prove it had “adequate procedures” in place to prevent such misconduct.
AI as the ultimate information silo
Historically, major corporate scandals — such as those at Wells Fargo and General Motors — resulted not from a lack of data but from information silos that prevented synthesized reporting. AI risks becoming the ultimate silo. Its “inscrutability” — the mismatch between mathematical optimization and human reasoning — makes it difficult for compliance officers to “identify, judge, and correct mistakes in algorithmic decisions.”
The Department of Justice’s updated 2024 guidance emphasizes that the “black box” nature of AI is not an excuse for failing to meet legal and ethical standards. Practitioners must ensure that AI-driven decisions are subject to human review and that the AI is ethically aligned with internal governance. Failure to leverage data effectively to prevent misconduct may invite “intense regulatory scrutiny.”
From structural to process-based “generative compliance”
To mitigate these risks, practitioners must transition to “generative compliance,” a proactive, forward-thinking approach where compliance programs evolve alongside emerging risks. This requires moving beyond “structural” changes (e.g., creating new committees) to “process-based” reforms, which focus on the practices and routines firms use to communicate and analyze information.
Three process-based interventions are critical:
- Standardized internal investigation questions: Ensuring that AI-monitored risks are probed with consistent human oversight to spot trends.
- Materiality surveys: Disseminating surveys to the workforce to detect when automated systems are being exploited to achieve commercial targets at the cost of ethics.
- Aggregation principles: Aggregating data from disparate AI systems to identify systemic failures (a minimal illustration follows this list), much as General Motors should have aggregated separate settlement data to identify the faulty ignition switch earlier.
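The aggregation principle lends itself to a concrete illustration. The sketch below is a hypothetical Python example, not any vendor's tooling: the system names, record schema and threshold are assumptions chosen only to show how alerts from separate AI monitoring tools might be pooled into a single view so that recurring patterns become visible to a human reviewer.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Alert:
    """A single flagged event from one AI monitoring system (hypothetical schema)."""
    source_system: str   # e.g., "third-party-payments-ai", "expense-review-ai"
    risk_category: str   # e.g., "facilitation_payment", "unusual_commission"
    business_unit: str

def find_systemic_patterns(alerts, threshold=3):
    """Pool alerts from disparate systems and surface risk categories that recur
    across the enterprise; patterns no single silo would reveal on its own."""
    counts = Counter((a.risk_category, a.business_unit) for a in alerts)
    return {key: n for key, n in counts.items() if n >= threshold}

# Usage: two alerts that look isolated inside their own systems form a pattern
# once aggregated, prompting human-led review rather than automated dismissal.
alerts = [
    Alert("third-party-payments-ai", "facilitation_payment", "EMEA-sales"),
    Alert("expense-review-ai", "facilitation_payment", "EMEA-sales"),
]
print(find_systemic_patterns(alerts, threshold=2))
```

The design point is the common schema: once every system's output is reduced to the same few fields, cross-system counting is trivial, and the human reviewer, not the model, decides what a recurring pattern means.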
Conclusion
Experienced practitioners must “re-humanize” responsibility. AI is not a “plug-and-play” solution, but an ongoing commitment. A well-designed program under the 2024 DOJ standards must assess whether human decision-making is used to audit the AI’s “goals.” By implementing robust processes that bridge the moral distance created by technology, firms can ensure that their AI-driven compliance programs actually “work in practice,” securing both the company’s legal safety and its ethical integrity.
Shon Stelman is a second-year student at George Mason University, Antonin Scalia Law School and holds a B.M. and M.M. in classical guitar performance and pedagogy from Johns Hopkins University, Peabody Conservatory. During his undergraduate studies, Shon was a teaching assistant in musicology and peer mentor in music theory. Prior to law school, Shon worked at a small personal injury and family law firm and later at an employment discrimination law firm as a litigation paralegal. During summer 2025, he interned with the US Department of Justice’s Office of Vaccine Litigation. Shon is an incoming research editor on the George Mason Law Review. His hometown is Wheeling, Ill.
Michael Niebergall
Though the Law Is Still Developing, Companies Should Act in Good Faith Now

AI has evolved from a novelty into a substantial tool for individuals and businesses alike, and with that evolution come numerous legal questions, particularly in the field of copyright. Because current AI models are often used to generate text, images, software code and music, they have raised pressing questions about how developers and users comply with copyright law. The legal landscape for AI and copyright is still developing, so AI developers and businesses that use AI have begun taking measures to mitigate potential copyright compliance risks, both in the training data fed into a model and in the model’s generative outputs.
Currently, the use of copyrighted works to train AI models is largely viewed as fair use. Lower courts have reasoned that “training” a model by analyzing the works is inherently transformative relative to the works’ original nature, and they have not viewed the training process or its potential end results as substitutes for the original works. However, the Supreme Court has yet to provide any guidance on the topic, so the issue remains legally unresolved and subject to serious change. Many rights owners have objected to this emerging precedent, arguing that a model’s ability to output similar works after being trained on protected works creates market substitution and thus is not fair use; industries such as stock photography, journalism and commission-based illustration are particularly vulnerable. Multiple artists and publishers have filed lawsuits against companies like Anthropic for exactly this reason.
To avoid potential legal issues such as secondary liability for copyright infringement arising from unlicensed training datasets, AI model developers have begun implementing safeguards on training data for their models. These include filtering out high-risk categories of data, keeping documentation of the datasets used for training and maintaining provenance tracking so they know exactly how a model was trained. Developers have also heightened efforts to get permission from creators to use their work as training data, often paying licensing fees even where training on the works would likely be permissible as fair use. These steps help ensure that, should a question of secondary liability for infringement arise, the developers can show they took “reasonable, good faith” measures to prevent predictable infringements.
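A minimal sketch of what such documentation and provenance tracking might look like follows. It is illustrative only: the high-risk categories, field names and log format are hypothetical assumptions, not a description of any developer's actual pipeline.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical categories a developer might treat as high-risk and exclude from
# training absent a license (illustrative; not a legal standard or real policy).
HIGH_RISK_CATEGORIES = {"stock_photography", "news_articles", "commissioned_illustration"}

def ingest_training_item(item_text, source_url, category, license_status,
                         log_path="provenance_log.jsonl"):
    """Filter high-risk unlicensed material and write an auditable provenance
    record for every item admitted into the training corpus."""
    if category in HIGH_RISK_CATEGORIES and license_status != "licensed":
        return False  # excluded: high-risk category with no license on file
    record = {
        "sha256": hashlib.sha256(item_text.encode("utf-8")).hexdigest(),
        "source_url": source_url,
        "category": category,
        "license_status": license_status,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return True  # admitted and documented
```

The value of a record like this is evidentiary as much as technical: if a secondary liability question arises later, the developer can show exactly what was ingested, from where and under what license status.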
The outputs of generative AI models are also a burgeoning area of copyright concern for developers and users. Copyright infringement liability may attach to either the developer or the user of an AI model if the model produces works that are substantially similar to already-protected works. To mitigate this risk to a legally reasonable point, model developers have begun scanning prompts for keywords and phrases that could indicate an infringing request, preventing the model from generating the work in the first place. This helps ensure compliance with copyright law by keeping the AI model from functioning as a substitute for the works it could otherwise infringe upon.
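To make the mechanism concrete, here is a deliberately simplified sketch of keyword-based prompt screening. The patterns and function names are hypothetical, and production systems rely on classifiers far more sophisticated than a keyword list; the point is only to show where the check sits in the generation flow.

```python
import re

# Hypothetical phrases that might signal a request for a near-copy of a protected
# work (illustrative only; real filters go well beyond simple keyword matching).
FLAGGED_PATTERNS = [
    r"reproduce the lyrics of",
    r"verbatim text of",
    r"word[- ]for[- ]word copy of",
]

def screen_prompt(prompt):
    """Return True if the prompt should be blocked before generation."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in FLAGGED_PATTERNS)

# Usage: a flagged prompt is declined before the model ever generates output.
if screen_prompt("Give me the verbatim text of the first chapter of that novel"):
    print("Request declined: prompt appears to seek a copy of a protected work.")
```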
Businesses that use generative AI have also begun developing internal AI usage policies for copyright compliance purposes. A work produced by an AI needs enough human creative input to be considered “authored” by a human, a necessary element for a work to be protected by copyright. Companies are therefore implementing careful human oversight policies so they can properly claim and protect the text, selections and arrangements in works produced through AI. Failure to disclaim the portions of a generated work created solely by the AI model (which are not protectable) could result in a denial of copyright registration and the headaches that follow from it. These internal usage policies also help businesses prevent accidental infringement in their generated works: because AI models are imperfect and may still produce material that infringes in some capacity, businesses must monitor any work generated through these models to catch infringing material that slipped through the AI’s protections.
As the law continues to develop in this area, organizations using AI will increasingly be judged by whether they acted reasonably and in good faith in mitigating infringement, as this new technological field is far too vast and novel to expect perfect compliance. New, specific safeguards are being used to create a “reasonable” degree of protection against widespread, predictable copyright infringement in both an AI’s training and its outputs, and familiar compliance frameworks give these parties practical tools for managing and mitigating infringement risks. As the rules are refined, these organizations will need to remain adaptable in their internal governance and protections to continue ensuring compliance.
Michael Niebergall is a 2L at Scalia Law School at George Mason University. His interest in entertainment and IP law comes from two decades of classical music training on the tuba, his experience as a music major during his undergraduate years at James Madison University and a lifelong enjoyment of all things nerdy. Mike is also deeply interested in how the advent of AI has affected and will continue to affect these fields, and how artists, publishers and everyone in between will respond to those changes.