Balancing the efficiencies created by AI with the human understanding and judgment that comes from experience is crucial for finance functions, argues Ryan Padget of IGEN. Get this balance wrong and technology will erode, rather than strengthen, your finance function.
AI has become central to how many finance and tax teams operate. From transaction classification to compliance checks and forecasting, automation promises speed and scale at a time when finance leaders are under pressure to do more with fewer resources. Yet as organizations accelerate their use of AI, many are overlooking a parallel shift that could prove just as disruptive: the loss of human wisdom from the workforce.
Every day in the US, an estimated 10,000 Baby Boomers reach retirement age, according to AARP. In the specialized world of tax and finance, this demographic change carries particular weight. In 2023, a study found nearly half of tax leaders were over the age of 58, and a report on IRS workforce trends noted that 37% of its employees will become retirement-eligible by 2028. These upcoming departures represent more than open roles; they represent decades of judgment, regulatory context and lived experience exiting organizations at an accelerating pace.
As institutional knowledge walks out the door, many companies are increasing their reliance on systems designed to optimize intelligence rather than wisdom.
Intelligence and wisdom are not interchangeable
AI excels at intelligence. It is, after all, in the name. It processes vast volumes of data and can identify patterns humans may miss. In finance operations, these capabilities are essential, particularly as reporting demands grow and regulatory complexity increases.
Wisdom operates differently. It is shaped by experience, context and an understanding of how rules behave in practice rather than how they appear on paper. In regulated environments, wisdom shows up when a seasoned professional pauses over a result that looks correct but does not feel right. It is the ability to anticipate how a regulator might interpret an action or to recognize when past enforcement trends should influence a current decision.
AI can highlight anomalies, but it cannot fully understand why an exception matters. It cannot weigh reputation risk against technical compliance, or recall how similar decisions played out during previous audits or economic cycles. Those judgments are learned over time and often through situations that never appear in datasets.
The ‘silver tsunami’ and its impact on risk
Businesses across industries are bracing for the “silver tsunami,” a rapid wave of Baby Boomer retirements that is driving a large-scale loss of experienced workers, institutional knowledge and specialized expertise across the workforce.
The scale of the silver tsunami makes this challenge critical for finance and tax professionals. New professionals often bring digital fluency and adaptability, but it takes years to develop the judgment required to navigate audits, disputes and regulatory gray areas with confidence.
This transition is occurring as scrutiny increases. Tax authorities are becoming more data-driven, enforcement is more targeted, and expectations for governance and documentation continue to rise. Losing experienced voices at this moment increases exposure, especially if organizations assume technology can fully compensate for their departure.
Why automation alone is not enough
Yes, automation can deliver real benefits. It reduces manual effort, improves consistency and allows teams to focus on higher-value work. In many finance functions, these tools are no longer optional.
Problems arise when automation is treated as a replacement for judgment rather than a support for it. Systems operate within defined rules and historical assumptions. When circumstances change or when interpretation is required, automated outputs need to be challenged.
In tax and finance, small errors can have outsized consequences. An automated decision may pass internal checks yet fail under regulatory review if context has shifted. Without experienced professionals reviewing and questioning results, organizations risk creating blind spots that only surface after a costly mistake, and that may take years to come to light in an audit.
The solution is not to slow innovation but to balance it. Leaders who want to scale AI responsibly must also invest in preserving institutional knowledge before it disappears.
Intentional knowledge transfer is a practical first step. Mentorship programs, structured handovers and documentation can capture how decisions are made, not just what decisions were made. This includes recording why certain approaches were avoided, how regulators responded in past cases and which risks require added scrutiny.
Upskilling is equally important. Younger professionals should be trained not only on systems but on judgment. Shadowing experienced leaders during audits or complex reviews helps build intuition that software alone cannot provide.
Finally, AI should function as decision support rather than a decision maker. Automation can surface insights, but accountability should remain human. This ensures technology strengthens expertise instead of eroding it.
Ryan Padget is president of IGEN, a tax compliance software provider. He previously served in a variety of roles at Infor, Oracle and Epicor Software.