Broad top-down mandates to use AI often fail because they’re too vague to act on, while unmanaged employee experimentation can expose sensitive data to unauthorized parties. Molly Lebowitz and Anthony Prestia argue that successful AI adoption requires identifying bona fide use cases and establishing clear human checkpoints — and making it easier for employees to experiment safely rather than trying to shut down experimentation.
Generative AI has moved out of specialist teams and into everyday work, with adoption now spanning finance, marketing, product, operations and people teams. Employees encounter large language models not only through their personal ChatGPT or Claude accounts but also through AI features embedded in the business software they already rely on for email, collaboration and HR.
As usage spreads across the enterprise, pressure for quick results follows close behind. In many cases, AI platform adoption is happening without shared intent, clear ownership or alignment to real work.
Adoption is often driven from two directions. From the top, broad mandates tell people to “use AI” in hopes of creating value, whether that means cutting costs, improving efficiency or increasing output. From the bottom, employees experiment with personal LLM accounts and AI-powered features inside sanctioned tools. Each of these scenarios introduces new privacy and security risks, burdensome compliance reviews and employee concerns over what the adoption of AI will mean for their jobs.
Both approaches can fail for the same reason: lack of intentional design.
Successful AI adoption depends less on the sophistication of the models than on the intentionality of the approach. Organizations need to be deliberate about where large language models create meaningful value today and align safeguards to the risk and impact of each use case. Equally critical is engaging employees in that effort by clearly explaining changes, providing approved tools, sharing concrete examples and listening to the people closest to the work.
When these elements are missing, adoption stalls or introduces risk without return. In practice, secure AI adoption at scale is a leadership and change management challenge, not a purely technical one.
Turn top-down AI mandates into tangible progress
A sweeping order to “use AI everywhere” often fails because it is too broad to act on and doesn’t leverage the technology strategically. Leaders need to focus on outcomes that bring the most business value, which raises a practical question: Which specific tasks can LLMs take on today, and which still require human judgment?
Generative models handle repetitive drafting and pattern-finding across large data sets fairly well. They can organize unstructured material into something workable, but they also hallucinate, producing confident output that is wrong. In most environments, they raise the floor more than the ceiling. In other words, they make the average output better, but they don’t make the best output brilliant. They’re useful for baseline efficiency, but they can’t replace expertise or judgment at the point of use.
Governance must reflect this reality, providing enough structure to manage risk without burying people in process or shutting down learning. With an intentional approach, leaders set expectations early, identify the few use cases that fit the current state and name the checkpoints that remain human — such as final risk classification, regulatory interpretation or decisions that affect customer eligibility. Those checkpoints do not move.
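To make that concrete, here is one hypothetical way a team might encode approved use cases and the checkpoints that stay human. The use-case names, risk tiers and checkpoint labels below are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical use-case registry: approved LLM tasks mapped to the human
# checkpoints that must sign off before output leaves the team.
# All names here are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    risk_tier: str                     # "low", "medium" or "high"
    human_checkpoints: list[str] = field(default_factory=list)

REGISTRY = [
    UseCase("draft-internal-email", "low"),
    UseCase("summarize-vendor-contracts", "medium",
            ["counsel review before external use"]),
    UseCase("customer-eligibility-analysis", "high",
            ["final risk classification", "regulatory interpretation"]),
]

def needs_human_signoff(uc: UseCase) -> bool:
    """Anything above low risk, or with named checkpoints, stays with a person."""
    return uc.risk_tier != "low" or bool(uc.human_checkpoints)
```

Writing the registry down this way keeps the human checkpoints visible and auditable, rather than leaving them implicit in training slides.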
Change management work then carries those decisions into day-to-day behavior. Teams adjust workflows, receive targeted training and hear consistent messages about how and when to use these tools, so the guardrails show up in practice rather than only on paper.
The message to employees matters as much as the control. When leaders acknowledge limits and frame LLMs as aids to human judgment, employees engage rather than resist. The fundamentals have not changed: Human judgment should remain responsible for decisions and risk, with AI serving as an input rather than a decision-maker.
Mitigate risks associated with “shadow” AI adoption and outside platforms
Unmanaged use of AI — or shadow adoption — can expose sensitive data and lead to security incidents. The risk can be subtle: Drafting an email or announcement with the help of a personal chatbot account may save a few minutes on writing but can also reveal confidential information to an unauthorized third party. Similar risks can surface inside sanctioned tools, like when new, automatically enabled AI features are allowed to train on confidential information or route data outside the enterprise.
In many of these situations, employees are not trying to circumvent policy; they assume that if a feature appears inside a trusted tool, someone has already vetted its use.
Education is the first control. Employees need a plain explanation of how LLM outputs are produced, where models tend to fail and which data stays off-limits. That kind of awareness turns the workforce into an early line of defense rather than a risk leaders need to contain.
Vendor discipline is a second control. A short list of approved providers under blanket privacy and security terms gives employees a safer channel for experimentation. Those terms can include a prohibition on model training with company data and clear rules for retention and logging. That step channels experimentation into defined lanes and weakens the pull of shadow tools. Examples like ChatGPT or Gemini can sit on the approved list as options, not as the only route.
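One way to put that short list into practice is an allowlist check at an internal gateway. The sketch below is hypothetical; the provider keys and policy fields are assumptions, and the real terms belong in vendor contracts.

```python
# Hypothetical allowlist for an internal LLM gateway. The provider keys and
# policy fields are illustrative assumptions; actual terms live in contracts.
APPROVED_PROVIDERS = {
    "chatgpt-enterprise": {"trains_on_company_data": False, "retention_days": 30},
    "gemini-business": {"trains_on_company_data": False, "retention_days": 30},
}

def request_allowed(provider: str) -> bool:
    """Reject unknown vendors and any plan that trains on company data."""
    policy = APPROVED_PROVIDERS.get(provider)
    return policy is not None and not policy["trains_on_company_data"]

assert request_allowed("chatgpt-enterprise")
assert not request_allowed("personal-chatbot")  # shadow tool, not on the list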
Material decisions still need human ownership. In many sectors, regulation already assumes that, and internal risk standards do as well. Be clear about where a model can help and where a human must make the call, particularly for decisions that meaningfully affect people, such as access to employment, benefits, healthcare, credit or other services. In these cases, generative tools may support analysis or drafting, but accountability for the outcome must remain with a person who can apply judgment, context and responsibility.
The goal is to make it easier for people to experiment safely, not to shut experimentation down. When guardrails are clear, employees know how far they can go with a tool, when to stop and ask for help and who has the final say. That keeps adoption moving without taking on risk the organization never agreed to.
Ensure that you’re getting value out of your AI deployment
Getting value out of an AI transformation starts with knowing what “better” looks like. Goals and metrics need definition before work scales; in many cases, the right measures already exist inside the business. When results show up in the same reports leaders already read, measurement becomes part of normal performance management, not a separate dashboard off to the side.
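As a simple illustration, AI impact can be folded into a measure leaders already track, such as cycle time for a recurring task. The task and figures below are hypothetical.

```python
# Minimal sketch: express AI impact in a measure leaders already track.
# The task and the baseline/current figures are illustrative assumptions.
def cycle_time_reduction(baseline_hours: float, current_hours: float) -> float:
    """Percent reduction in average cycle time against the pre-AI baseline."""
    return 100 * (baseline_hours - current_hours) / baseline_hours

# Example: first-pass contract summaries dropped from 4.0 to 2.5 hours.
print(f"{cycle_time_reduction(4.0, 2.5):.0f}% faster")  # prints "38% faster"
```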
The people side decides whether results hold. Explain the “why,” make the risk-reward trade-off visible and treat feedback from teams as input on whether the transformation is working. Create simple channels where teams share safe experiments and short examples of time saved, quality improved or friction removed. Over time, those stories and metrics build a culture that treats mistakes as information rather than failure. That kind of culture draws people in and makes changed behavior stick.
Lead AI adoption with intent and people at the center
Even as tools evolve, a company remains a collection of people trying to solve problems and do real work. Generative AI adds a powerful tool to that mix, but leadership isn’t off the hook for deciding where the organization is headed, which risks are acceptable and how people spend their time.
When leaders decide which use cases fit the current risk posture, define where a model should never act alone and bring people into the process, employees hear a clear story about what the organization is trying to achieve, how new tools change their work and why their judgment still matters. Simple, business-facing measures show whether the transformation is doing what it promised, instead of just shifting work from one part of the organization to another.
For compliance, risk and HR leaders, AI adoption is best understood as an acceleration of familiar responsibilities rather than a departure from them. The fundamentals remain the same: shaping behavior, setting boundaries and enabling the organization to move with confidence.
What has changed is the pace and visibility of those decisions. Organizations that acknowledge this shift and learn through controlled experimentation are better positioned than those that hesitate or rely on blanket restrictions. Treating AI as an extension of existing governance and change practices, rather than a substitute for them, allows new capabilities to take hold without eroding trust or accountability.


Molly Lebowitz
Anthony Prestia