Federal prosecutors now wield AI capabilities that can isolate suspicious billing patterns, trace cryptocurrency flows and flag anomalies across millions of transactions, fundamentally changing the detection calculus for corporate misconduct. And while the DOJ deploys AI to enhance its enforcement reach, it’s holding companies to exacting standards for how they manage AI in their own operations, including charging an executive over allegedly false statements promoting AI and entering into a non-prosecution agreement with a healthcare company over AI-related fraud. The DOJ’s May 2025 memo on white-collar enforcement priorities emphasized that prosecutors will evaluate whether corporate compliance programs adequately mitigate AI-specific risk.
The message is unmistakable: AI can be either a compliance tool or a criminal liability, depending on deployment and governance. Companies must prepare for government scrutiny powered by algorithms that can surface patterns human investigators might miss, while simultaneously documenting that their own use of AI — from pricing algorithms to fraud detection systems — meets prosecutors’ evolving expectations for responsible governance.

In this written Q&A with CCI, Adria Perez, a partner in Reed Smith’s global regulatory enforcement group, provides practical guidance on navigating both sides of this enforcement landscape. Drawing on real-world experience, Perez explains what compliance teams need to know about the DOJ’s AI-powered investigative capabilities, how to document AI risk mitigation efforts that will withstand prosecutorial review and strategies for coordinating legal, compliance, IT and data governance functions in an era when both the hunters and the hunted are using the same technology.
Q: What does the DOJ’s December 2024 report signal about the department’s overall strategy for integrating AI into law enforcement and investigations?
A: The report further confirms that the DOJ sees value in using AI for its investigations, including white-collar criminal investigations. The department is transparent about its AI use cases and publishes an “Inventory file” on its website. That file includes AI use cases that enhance its white-collar criminal investigations, such as:
- AI-assisted financial transaction anomaly detection for cross-border payments and bank transfers (a rough sketch of this technique appears after this list).
- AI-assisted cryptocurrency tracing and risk scoring to identify suspicious transactions.
- Financial and crypto network analysis for money laundering and fraud detection, identifying patterns and relationships to support investigations into laundering that can be associated with fraud, bribery or Ponzi-type schemes.
- Travel-pattern anomaly detection.
- Audio and video transcription.
- Intake triage and prioritization, applying scoring models to identify high-priority tips or proposed investigations for expedited review based on certain criteria.
- Summary capabilities to map existing data across various sources, including financial data, travel and expense records, and text and application messaging.
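To make the first of these concrete, here is a minimal sketch of what AI-assisted transaction anomaly detection can look like, assuming Python with pandas and scikit-learn. The column names, sample values and contamination rate are invented for illustration; the DOJ has not published the details of its actual models.

```python
# Minimal anomaly-detection sketch: an isolation forest flags payment
# records that are easy to "isolate" from the rest of the data.
# All columns and values below are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

transactions = pd.DataFrame({
    "amount_usd":      [1200, 950, 1100, 98000, 1050, 990],
    "hour_of_day":     [10, 11, 9, 3, 14, 10],
    "is_cross_border": [0, 0, 0, 1, 0, 0],
})

# contamination is the assumed share of anomalous records in the data.
model = IsolationForest(contamination=0.2, random_state=0)
transactions["flag"] = model.fit_predict(transactions)  # -1 marks an outlier

print(transactions[transactions["flag"] == -1])
```

A real deployment would engineer far richer features (counterparties, jurisdictions, transaction velocity) and validate flags with human review, but the basic pattern of scoring every record and surfacing the outliers is the same.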
Q: How should companies interpret the DOJ’s acknowledgment of risks related to AI errors, bias and privacy? Does this create leverage or exposure in enforcement matters?
A: The DOJ’s “Evaluation of Corporate Compliance Programs” puts companies and institutions on notice that the DOJ may ask about these types of AI risks when reviewing a corporate compliance program and determining next steps in an investigation (e.g., a charging decision or resolution). It is important for companies to document how they have sought to identify and mitigate these AI risks.
Q: How might DOJ’s use of generative AI affect due process expectations in investigations involving corporate defendants?
A: Just as with data analytics, it is important for me, as investigations counsel, to understand how to find gaps in, or misuse of, the AI behind a government investigation. Over a year ago, one of my clients received a letter from the local US attorney’s office demanding payment for an alleged False Claims Act violation. It turned out that the data analytics that led the DOJ to send the letter did not provide all of the context. After several months, we persuaded the DOJ to walk away. AI, just like data analytics, can lead to assumptions that are not accurate. Context still matters.
Q: How is DOJ’s use of AI for identification and surveillance likely to affect corporate monitoring of communications, employee activity and third-party interactions?
A: The DOJ’s use of AI will likely increase the number of inquiries the DOJ makes to companies because it will be easier for the department to identify companies to investigate; the demand letter described above is a good example. AI tools will also make it easier and more efficient for the DOJ to summarize and review communications, activities and interactions, including across sources and matters. Companies should therefore expect to respond to more government inquiries as these tools spread.
Q: What does DOJ’s use of AI for predictive policing mean for how companies may be selected for investigation or enforcement attention?
A: As of now, predictive policing (the use of data to forecast criminal activity) is associated more with violent and street crime. Even with the concerns raised about that kind of policing, including bias, federal law enforcement will, at some point, focus more on applying the same techniques to white-collar criminal activity. Because predictive policing relies on patterns to identify criminal activity, the DOJ can, from a white-collar perspective, look for data identifying companies that operate in a particular industry or geography, or that use specific third parties who are repeat violators. We have seen how the DOJ uses data analytics; with AI, the DOJ will be able to canvass far more data from various sources.
Q: How should legal, compliance, IT and data governance teams coordinate in using internal AI tools?
A: As with other cross-disciplinary topics, it is important for the legal and compliance departments to work with IT and other internal experts to understand what tools are available to enhance compliance programs, including internal investigations. Some of our clients have decided to maintain separate AI platforms for the business and for the legal and compliance departments, while other clients have focused more resources on AI review tools for e-discovery purposes.
Q: In what ways do you, as an outside investigations attorney, use AI when conducting internal investigations for companies?
A: We use AI tools for myriad tasks, including summarizing evidence, preparing interview questions, drafting chronologies and generating high-level translations. We also have a matter where we are testing an AI facial recognition tool to avoid spending hours reviewing video recordings.
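As an illustration of that kind of video triage, the sketch below samples frames from a recording and compares detected faces against a reference photo. It assumes the open-source face_recognition and opencv-python packages, and the file names are hypothetical; this is not the specific tool Perez’s team is testing, and any real use would be vetted for accuracy, privacy and disclosure obligations first.

```python
# Hypothetical sketch: scan a video for frames that may contain a
# person of interest, so reviewers can jump straight to those spots.
import cv2
import face_recognition

# Encode one known face from a reference photo (hypothetical file).
reference = face_recognition.load_image_file("person_of_interest.jpg")
known_encoding = face_recognition.face_encodings(reference)[0]

video = cv2.VideoCapture("site_visit_recording.mp4")  # hypothetical file
frame_no = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    frame_no += 1
    if frame_no % 30:          # sample roughly one frame per second at 30 fps
        continue
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # library expects RGB input
    for encoding in face_recognition.face_encodings(rgb):
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            print(f"Possible match near frame {frame_no}")

video.release()
```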
Q: How should companies assess the credibility of longer, highly polished whistleblower complaints that appear to be AI-assisted?
A: Because AI-assisted complaints tend to be long and overinclusive, it is important to isolate the core complaint or issue the whistleblower seeks to raise. Based on that core issue, the company can follow its ordinary investigations procedures. If the company questions any of the support or evidence the whistleblower provides, such as a picture, audio recording or video, experts can help determine whether the evidence is AI-generated, which usually entails reviewing the evidence’s metadata and looking for hidden watermarks.
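As a simple illustration of the metadata review mentioned above, the sketch below dumps an image’s EXIF tags with the Pillow library. The file name is hypothetical, and the signal is deliberately weak: missing EXIF data does not prove an image is AI-generated, and its presence does not prove authenticity, which is why this is only one check among the many that forensic experts perform.

```python
# Hypothetical sketch: list an image's EXIF tags as one input into an
# authenticity review. Absence of camera metadata is a weak signal only.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Return human-readable EXIF tags for an image, if any exist."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = exif_summary("whistleblower_photo.jpg")  # hypothetical file
if not tags:
    print("No EXIF metadata found; escalate for deeper forensic review.")
else:
    print("Make/Model/Software:", tags.get("Make"), tags.get("Model"), tags.get("Software"))
```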
Q: How can companies ensure that AI-driven whistleblower activity does not overwhelm compliance resources while still meeting regulatory expectations?
A: This may be a situation where there is an AI solution to an AI-created risk. Companies could use AI to summarize each complaint and identify the core issues to investigate. In addition, AI could summarize all of the complaints to surface any compliance gaps that need further attention, such as tariff risks, expense anomalies or questionable bidding practices. AI is well suited to distilling a high volume of issues, and by mitigating the risks raised in AI-driven complaints, a company may see fewer future complaints about those issues.
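One minimal sketch of that idea, assuming Python with scikit-learn: cluster complaint narratives by TF-IDF similarity so recurring themes, such as the expense anomalies and bidding practices mentioned above, surface without anyone reading every submission end to end. The sample complaints and cluster count are invented for illustration.

```python
# Hypothetical sketch: group hotline complaints into themes so a
# compliance team can spot clusters worth a systemic look.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

complaints = [  # invented sample submissions
    "Expense reports approved without receipts in the regional office.",
    "A manager reimbursed personal travel as client expenses.",
    "Vendor bids appear coordinated across three recent tenders.",
    "The bidding process skipped the required second quote.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(complaints)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, complaints)):
    print(f"Theme {label}: {text}")
```

In practice a team would tune the number of clusters, review samples from each theme by hand, and feed confirmed gaps back into the compliance program.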