As companies race to deploy AI, cautious ones are also racing to deploy guardrails and frameworks around the technology. While this is cause for celebration, Fernando Delgado, Karl Sobylak and Lon Troyer of Lighthouse, an eDiscovery provider, warn in this open letter to AI governance committees that being too restrictive could stifle innovation.
Dear AI governance committees,
The biggest risk facing legal teams today is not negligent AI use. It’s AI governance frameworks that unintentionally stall responsible innovation, a trend especially prevalent in legal functions. As consultants and technologists who help corporations navigate regulatory investigations and complex litigation, we see firsthand the consequences when policies designed as guardrails turn into barricades.
We write to you not to argue against caution. Governance is essential. But the most pragmatic path forward is one that enables incremental, accountable adoption and is not stifled by overly restrictive, onerous policies.
Our belief is that the right approach to governance will protect against risk without paralyzing progress. To arrive at this “right” approach, we urge you to consider several truths.
One-size-fits-all doesn’t work
Blanket policies obscure critical distinctions in how AI is used across an enterprise, with many policies overfitting to the risk of hallucinations. The considerations that apply to consumer-facing chatbots are not the same as those that apply to large-scale document analysis in areas like litigation.
As with all new technology adoption, success often comes incrementally. Legal teams need the ability to take steps to gain experience and confidence in using AI responsibly. Such a path requires a governance model that allows them to integrate AI gradually without making the barrier to entry too high.
Risk isn’t uniform
Not all AI risk is equal. While AI may sometimes create risk, other times it may reduce it by increasing speed, accuracy and consistency. Ensuring AI governance policies balance both risk and reward is essential for organizations seeking to benefit from AI adoption.
A few aspects worth underscoring:
- Not all AI models are the same. Predictive AI, more commonly used in legal today, doesn’t hallucinate.
- Bans miss the mark. Prohibiting “AI” because of chatbot risks doesn’t make sense for supervised, auditable document-analysis engines.
- Over-specification is brittle. Governance frameworks that tightly bind policies to today’s tools run the risk of falling behind tomorrow’s advances.
In light of these truths, we recommend risk segmentation: tiers of AI use that reflect the nature, context and controls of each application, so policy aligns with the actual risk landscape.
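To make this concrete, here is a minimal sketch of what such a tier registry might look like if encoded for policy tooling. The tier names, example use cases and required controls are all illustrative assumptions, not a prescribed taxonomy.

```python
# Illustrative only: a hypothetical risk-tier registry for AI use cases.
# Tier names, use cases, and required controls are assumptions for the
# sake of example, not a prescribed taxonomy.
from dataclasses import dataclass, field

@dataclass
class RiskTier:
    name: str
    description: str
    required_controls: list[str] = field(default_factory=list)

TIERS = {
    "low": RiskTier(
        name="low",
        description="Supervised, auditable analysis of internal documents",
        required_controls=["human review of outputs", "audit logging"],
    ),
    "medium": RiskTier(
        name="medium",
        description="Generative drafting assistance reviewed before use",
        required_controls=["human review of outputs", "audit logging",
                           "source citation checks"],
    ),
    "high": RiskTier(
        name="high",
        description="Consumer-facing or autonomous AI interactions",
        required_controls=["pre-deployment risk assessment",
                           "human review of outputs", "audit logging",
                           "incident response plan"],
    ),
}

# A hypothetical mapping from use case to tier; real mappings would be
# set by the governance committee, not hard-coded.
USE_CASE_TIER = {
    "ediscovery_document_classification": "low",
    "contract_draft_assistant": "medium",
    "customer_chatbot": "high",
}

def controls_for(use_case: str) -> list[str]:
    """Return the controls a given AI use case must satisfy."""
    return TIERS[USE_CASE_TIER[use_case]].required_controls
```

The point of such a structure is that a ban on “customer_chatbot” says nothing about “ediscovery_document_classification”; each use case carries only the controls its tier demands.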
Don’t forget legacy use
AI isn’t new to many legal teams. In eDiscovery, for example, machine learning has been used (and governed) for more than a decade, and some LLM-enabled processes have been in production for years. Do new policies advance the improvements these established tools have delivered, or do they set that progress back? Are all existing instances of AI being reevaluated?
Governance should reflect this precedent and facilitate modernization, not impede it.
From policy to practice
Finally, governance frameworks are only useful if they can be operationalized. Legal teams need playbooks and standard operating procedures that specify who does what, with whom and when. As policies are solidified, some business units may need specialized support to operationalize them (one such playbook entry is sketched after this list):
- Map AI tools to specific legal workflows.
- Embed risk mitigation into process design.
- Ensure compliance without sacrificing productivity.
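The sketch below encodes a single hypothetical playbook entry: which tool, which workflow, who does what with whom and when, and which mitigations and checks apply. Every field name and value is an assumption for illustration.

```python
# Illustrative only: a hypothetical playbook entry tying an AI tool to a
# legal workflow, with its risk mitigations and compliance checks.
from dataclasses import dataclass

@dataclass(frozen=True)
class PlaybookEntry:
    tool: str                          # which AI tool
    workflow: str                      # which legal workflow it serves
    owner: str                         # who does what...
    reviewer: str                      # ...with whom
    review_stage: str                  # ...and when
    risk_mitigations: tuple[str, ...]  # embedded in process design
    compliance_checks: tuple[str, ...]

entry = PlaybookEntry(
    tool="document_classifier_v2",     # hypothetical tool name
    workflow="second-request document review",
    owner="eDiscovery project manager",
    reviewer="outside counsel review team",
    review_stage="before production",
    risk_mitigations=("sampling-based QC", "privilege screen"),
    compliance_checks=("audit log retained", "model version recorded"),
)
```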
Scenario-based guidance is key. For example:
- How do teams handle data access, audit trails or exception handling for an AI classification engine?
- What happens when a model’s output diverges from human judgment?
These are solvable challenges, and answering them makes governance real.
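To illustrate how one such answer might be operationalized, the sketch below wraps a hypothetical classification engine so that every prediction leaves an audit record, and any low-confidence or human-divergent output is escalated for review rather than silently accepted. The engine interface, confidence threshold and escalation path are assumptions, not a reference implementation.

```python
# Illustrative only: a hypothetical audit-and-escalation wrapper around
# an AI classification engine. The engine interface, confidence
# threshold, and escalation mechanism are assumptions for this sketch.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

CONFIDENCE_THRESHOLD = 0.85  # assumed policy value

def classify_with_audit(engine, doc_id: str, text: str,
                        human_label: str | None = None) -> str:
    """Classify a document, write an audit record, and escalate
    low-confidence or human-divergent outputs for review."""
    label, confidence = engine.classify(text)  # hypothetical engine API

    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "doc_id": doc_id,
        "model_label": label,
        "confidence": confidence,
        "human_label": human_label,
    }
    audit_log.info(json.dumps(record))  # the durable audit trail

    diverges = human_label is not None and human_label != label
    if confidence < CONFIDENCE_THRESHOLD or diverges:
        # Exception handling: defer to human judgment instead of
        # silently accepting the model's output.
        return escalate_for_review(record)
    return label

def escalate_for_review(record: dict) -> str:
    """Placeholder escalation: in practice this would enqueue the item
    for a reviewer; here it simply flags it in the log."""
    audit_log.warning("ESCALATED: %s", record["doc_id"])
    return "pending_human_review"
```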
Support, don’t stall
In closing, we leave you with this:
The goal isn’t to freeze AI use until every risk is mapped. It’s to balance risk with innovation.
As governance leaders, you have the responsibility and the opportunity to set policies that both protect what matters most and empower teams to move forward. The organizations that get this balance right will lead the future of legal innovation.
Sincerely,
Fernando Delgado, Ph.D.
Karl Sobylak
Lon Troyer, Ph.D.