(Sponsored) With AI projected to contribute trillions to the global economy by 2030, organizations can’t wait for regulatory frameworks to catch up. Robert Feldman, chief legal officer of EDB, explains how smart self-regulation is filling the gap and why companies that master it now are seeing 12.5 times the ROI of their peers.
AI and data are projected to contribute nearly $20 trillion to the global economy through 2030. If AI and data were a country, that would make it the third-largest economy on the planet. No country, let alone one so large, functions without regulatory guardrails. Yet governments are struggling to build regulatory frameworks for an industry on pace to reach that scale in less than 1,000 days.
What can organizations do in the absence of clear regulatory guidance? The answer: smart self-regulation.
This is the AI gold rush
During a gold rush, it’s hard for regulators to keep pace. Global annual GDP growth is projected at 3.1%, according to the latest estimates from the International Monetary Fund (IMF). AI and data growth alone will deliver 7% of that increase, according to Goldman Sachs projections. That is more than twice the size of all other growth engines combined.
Based on a recent EDB study of the C-suite of major enterprises in 13 countries, there is a staggering disparity in economic performance between those deeply embedding agentic AI into their business workloads (we called them the “Deeply Committed”) and those who are not (we called them the “Sideliners”). The Deeply Committed experienced more than 12.5 times the total ROI of their peers on the sidelines.
That’s an incredible difference, highlighted by the fact that 98% of these enterprises told us they aspire to be their own AI and data platforms in less than three years. In other words, they look at Google, Amazon and Salesforce (or Ola, BYJU’S and One97 Communications, for those in and around India) and recognize that they too now have a path to be economic powerhouses using their own AI and data treasure troves.
The worry about what will come next
There is a growing fear that this agentic world will displace many established white-collar roles on the front line (customer service, sales, marketing) as well as in back-office functions (HR, supply chain and procurement, financial modeling). We already went through the first wave of automation during the Covid-19 pandemic, with a 25% shift to automation in workplace functions. Even many activities that were physical in nature and proximity-focused were significantly impacted. We had the Great Resignation and then the Great Reset.
These presaged the engine of AI and data replacing some white-collar jobs. No amount of regulatory oversight can stop this once-in-a-generation disruption, as there is no turning back from the powerful marriage of AI and data in the agentic world.
Adopting and adhering to core principles is the key — along with each of us taking responsibility
The solution cannot simply be more oversight from governments. Such a strategy cannot keep pace with the rapidly evolving business world we now live in. The solution must be based on the insights we gain and the principles we embrace in this new AI- and data-centric era. We learned three valuable lessons from our research that should form the foundational approach for all companies aiming to deliver on their promise of responsible AI.
Lesson 1: It’s your data and your AI, and they are massively valuable assets
Guard them. Take the view that your data and your AI are sovereign assets that must be secure by design, whether at rest or in motion. This is essential. This means your data and AI need to be observable and usable on premises or in the clouds of your choosing. Your AI technology stack and your large language models (LLMs) should be under your control and able to function inside your data estate. You control how the data is secured, and you ensure that your data never gets tainted or exposed beyond your systems. While that sounds simple, accomplishing it requires a platform that gives you observability and a hybrid architecture designed for this purpose. A retrofit of your current, disparate cloud accounts and multitude of SaaS applications just won’t cut it.
Lesson 2: Trust your people
Make sure your AI is built by your own educated team and give them the tools and training they need to carry out this task. Don’t rely exclusively on third parties. Our research showed that the Deeply Committed group has introduced mainstream agentic AI across all business areas. It’s critical that you give your teams the power to observe, manage and self-regulate according to the responsible principles your organization adopts. That means your teams must have a secure, no-code/low-code approach they can use to build, learn, adjust and self-regulate. If you’re concerned about such trust and its possible outcomes, remember: Studies show that people succeed with radical transparency and self-regulation when they bear the consequences themselves.
The pressure is on all of us to behave and organize in a responsible way. Seventy percent of those who are most successful with their AI understand this deeply. They see the mission-criticality of sovereignty over their AI and data; they know that data and AI need to work together to be compliant, secure and available anywhere, anytime; and they deeply aspire to be their own AI and data platforms. They are in this AI gold rush in the right way — focused on core principles, infrastructure design and radical transparency — and they are moving fast to build secure AI factories across their organizations at scale.
Such organizations will not have to wait for regulatory direction. They will have designed, adjusted and delivered economic results without exposing their AI and models, and they will have done so within a secure, sovereign and hybrid data infrastructure. Even better, they should then experience 250% more ROI.
Lesson 3: Regulatory oversight will not solve all these challenges for us
None of us were alive when the first automobile rolled off the production line. Early regulations required a person to walk in front of those new machines waving a red flag. It sounds ridiculous now, but it highlights the challenge regulators face when trying to keep pace with a fast-moving marketplace. It is the challenge we face today in this new world of generative and agentic AI.
For organizations large and small, AI is already a key value driver. We each need to ensure that our AI and data are treated as sovereign assets, secure and available wherever, whenever and however we want to use them. Regulation may come in time, but for now, stay ahead of the game. Build infrastructure locally, work inside your own guidelines and create AI that is trustworthy. Interestingly, governments around the world have already taken this approach to the data and AI infrastructure in their regions: The Stargate Project in the US, HUMAIN in the UAE and the EU’s European AI strategy all focus on the idea of trust and AI.
Know that your systems are built to be secure and sovereign, focus on being the best version of your organization, and operate like those deeply committed companies that are seeing extraordinary returns.