When courts sanction lawyers for AI hallucinations, they hold counsel responsible regardless of which department selected the tool or how sophisticated the vendor’s claims were, which is why the long-standing practice of delegating technology decisions to IT or legal ops fails spectacularly with AI. Exterro’s Jenny Hamilton explores how AI malpractice risk should give legal ops a renewed mandate, why the discipline it brings maps to the capabilities required for AI governance and how procurement conversations will shift from “Can this tool increase efficiency?” to “Can this tool withstand scrutiny if challenged?”
The legal profession faces a new category of risk that is accelerating faster than previous technology-mediated legal obligations: the use of AI for legal work. This shift places in-house counsel in unfamiliar territory, and it is starting to keep general counsel up at night. As a result, in 2026, general counsel (GCs) will begin to engage more deeply with their legal tech strategy, invigorate the legal ops function with greater authority and push their professionals to use purpose-built legal AI.
The traditional malpractice risk calculus for in-house counsel has been relatively benign compared with that for law firm attorneys, whose insurers have paid claims exceeding $50 million in the past two years. Court sanctions have likewise been relatively rare for in-house counsel.
AI disrupts this dynamic because hallucinated legal advice heightens organizational liability, exposing companies to third-party claims, regulatory violations and transaction failures, problems that cannot be resolved simply by terminating in-house counsel who used AI without subject matter expertise.
AI sanctions and malpractice risk arrive at an inopportune moment for in-house counsel. Legal departments face mounting pressure to reduce outside counsel spend while managing increasingly complex regulatory environments. Many in-house teams have turned to AI solutions to counteract law firm rate hikes, particularly for experienced regulatory counsel. However, using AI for legal work introduces malpractice and sanctions risks that these tools were not designed to guard against. It also significantly increases reputational risk for attorneys who use AI tools without understanding their legal and technical capabilities.
In 2024, the American Bar Association (ABA) issued ethics guidance establishing that lawyers must maintain a reasonable understanding of AI’s capabilities and limitations and must verify all AI-generated output. While the opinion stopped short of imposing strict liability, it reinforced the lawyer’s duty of technological competence, which the ABA established in 2012 in Comment 8 to Rule 1.1.
Despite this, there are now more than 600 AI hallucination cases on record, implicating 128 lawyers and including attorneys from top-tier firms. In Johnson v. Dunn, a federal court in Alabama disqualified a Nashville law firm from the case, referred the attorneys to the state bar associations in all jurisdictions where they were licensed and required them to file a copy of the sanctions order in every pending case in which they were counsel of record.
While these cases focus on lawsuits managed by outside counsel, the judiciary is clear: The duty to use AI responsibly attaches to the attorney personally — not the tool, not the vendor. Failure to adhere to these ethical obligations will increase the risk of sanctions and has already done so.
Prediction 1: GCs will get more engaged with technology
Driven by the risk of sanctions and malpractice, general counsel will engage at unprecedented levels in their legal technology sourcing strategies over the next year.
Historically, legal departments have delegated technology decisions to IT or legal ops, engaging more substantively when vendor contracts require review or compliance questions arise. This approach fails spectacularly with AI.
When courts sanction lawyers for AI hallucinations, they hold counsel responsible regardless of which department selected the tool or how sophisticated the vendor’s claims were. Consider the duty to preserve evidence, a substantive legal obligation that cannot be outsourced. Courts have consistently held that the obligation to preserve evidence runs first to counsel, who must then advise clients and monitor compliance. The duty to preserve has led to far more malpractice and sanctions cases than issues involving other legal technologies.
AI governance demands the same level of professional responsibility, and meeting it will require greater engagement from the top down.
Prediction 2: legal operations reinvigoration
The second shift will be the reinvigoration of legal operations as a discipline. Legal ops emerged in the early 2000s when corporate law departments at companies like GE and Bank of America began applying business operations principles to legal service delivery. The 2008 financial crisis accelerated adoption as legal budgets tightened and CFOs demanded greater accountability for legal spend. By 2010, legal ops was firmly established in enough large enterprises that practitioners formed the Corporate Legal Operations Consortium (CLOC) to develop industry standards. Yet in many organizations, legal ops has struggled to secure consistent executive engagement, support and funding, often being relegated to tactical process improvement rather than strategic program management.
AI malpractice risk should provide legal ops with a renewed mandate to educate and protect in-house practitioners from misuse of AI technology. The disciplines legal ops brings, such as technology evaluation, workflow standardization, vendor management, metrics and compliance monitoring, map directly to the capabilities required for AI governance and will protect in-house teams from mistakes that cost their organizations.
GCs who recognize this will staff and fund legal ops teams to build more detailed guardrails in playbooks that support active AI governance programs, creating a natural growth path for a discipline that has long sought to demonstrate its value to the C-suite.
Prediction 3: greater scrutiny of AI tools used for legal work
The third shift will be increased scrutiny over which AI tools are used for legal work and how they are used. Corporate legal departments have long relied on general-purpose technologies adopted from other functions, often prioritizing cost efficiency and ease of deployment. That approach is increasingly difficult to reconcile with the professional obligations imposed on lawyers using AI.
Courts and regulators have made clear that output verification, transparency and security controls matter more than a tool’s novelty or popularity. As a result, general counsel will begin to assess AI systems less on convenience and more on whether their design, training and governance mechanisms support defensible legal workflows.
This does not mean one category of technology will fit all organizations. It does mean procurement conversations will increasingly center on risk alignment: whether a given system enables lawyers to meet their ethical duties, maintain confidentiality, and explain how they reached their conclusions. The central question will shift from “Can this tool increase efficiency?” to “Can this tool withstand scrutiny if challenged?”
Applying lessons learned
These predictions are not speculative. They represent the natural evolution of law department behavior in response to significant shifts in technology and risk.
As AI for legal work gains popularity, 2026 will be an inflection point for GCs. Those who recognize the risk to their people and reputation will respond with greater engagement and investment in education, operations and technology to manage exposure before it impacts the organization at scale.


Jenny Hamilton is chief legal officer of Exterro, a data management provider. She formerly was deputy general counsel at HaystackID and held roles in commercial litigation at multiple firms. 