The race to install AI agents may be underway, but organizations need to slow down and think about the risk they may be introducing, says Jonathan Armstrong, partner at Punter Southall. You don’t need to look far to see the damage agentic AI can cause.
At the SCCE’s European Compliance & Ethics Institute this year in Berlin, the words on everyone’s lips were “agentic AI.” Even the closing keynote urged compliance professionals to go back to their offices and learn how to use agentic AI in their roles. And it’s not limited to the compliance world. Gartner predicts that by 2028, 60% of brands will use agentic AI to facilitate streamlined one-to-one interactions. The firm says this transformation will shift marketing strategy away from traditional channel-based approaches and usher in a new era of personalized, autonomous engagement.
But this isn’t risk-free. In Berlin, I cautioned against using agentic AI without understanding the new risks it brings. Consider the recent OpenClaw incident, in which AI agents moved between applications using shared credentials; it highlights how quickly innovation can become a vulnerability. As developers and teams experiment with these tools, compliance professionals must understand and manage the associated risks.
The OpenClaw exposure
OpenClaw was built in late 2025 as a “weekend project” by its author, Peter Steinberger. It quickly became popular as it allowed AI agents to talk to each other and to share access to systems. Steinberger said his GitHub repository had 2 million visitors in a single week, and many developers used his code as part of their agentic AI infrastructure.
However, in February, a report identified significant potential vulnerabilities. Researchers discovered almost 43,000 unique IP addresses hosting exposed OpenClaw control panels with full system access across 82 countries. This could allow an attacker to exploit the OpenClaw gateway to take control of the affected system.
OpenClaw deployments were heavily concentrated in major cloud and hosting providers. Depending on the configuration, the vulnerability could also allow threat actors to connect to third-party services, such as email, calendars, chat applications, social media and browsers.
Further concerns emerged when a cybersecurity investigation reportedly found a misconfigured database exposing 1.5 million authentication tokens, around 35,000 email addresses and private communications among AI agents.
Regulatory warnings and rising security concerns
Also in February, the Dutch data protection authority, Autoriteit Persoonsgegevens (AP), warned users and organizations against using OpenClaw and similar experimental systems. The AP said that such open-source systems may not meet basic security requirements and advised against using them on systems containing sensitive or confidential data. This includes systems holding access codes, financial information, employee data, private documents or identity documents. The AP also warned that just because OpenClaw runs locally on a user’s computer does not automatically mean it is secure.
These warnings are not isolated incidents, and they highlight a key challenge with these tools: users often do not fully understand the level of control they are granting to AI systems. Similar concerns have emerged around Orchids, a so-called “vibe-coding” platform that allows users with no technical expertise to build apps and games using text prompts in a chatbot. Despite claiming a million users, Orchids has reportedly exhibited vulnerabilities that could allow attackers to take control of users’ devices.
A common factor in both is the small size of the companies behind the tools. OpenClaw reportedly began as a one-person project, while Orchids has 10 or fewer employees, according to its LinkedIn page. This raises questions about the capacity of these developers to manage security, support users and meet regulatory expectations — issues that regulators are increasingly scrutinizing as agentic AI adoption accelerates.
Why uninstalling OpenClaw is not a solution
For many organizations, fixing the risks associated with OpenClaw is not as simple as uninstalling the software. One challenge is visibility. Some may not even know whether OpenClaw has been deployed, as the tool may have been adopted by developers or staff experimenting with AI tools without formal approval or oversight.
This so-called shadow AI risk is already significant. A Microsoft study from October suggested that 71% of UK employees admitted using unapproved AI tools at work. Given the rapid adoption of AI since, the true figure could now be higher.
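As a starting point on the visibility problem, something like the following minimal sketch could be run on an endpoint to flag a possible OpenClaw install. The config directory name and process name used here are assumptions for illustration, not confirmed artifacts of the tool; verify the project’s actual on-disk footprint before relying on checks like this.

```python
#!/usr/bin/env python3
"""Minimal shadow-AI spot check for a single endpoint.

Flags a likely OpenClaw install. The config directory and process
names below are ASSUMPTIONS for illustration -- confirm the tool's
real on-disk artifacts before using this in production.
"""
import shutil
import subprocess
from pathlib import Path

# Assumed indicators -- adjust to the tool's actual footprint.
SUSPECT_DIRS = [Path.home() / ".openclaw"]
SUSPECT_NAMES = ["openclaw"]


def check_endpoint() -> list[str]:
    findings = []
    for d in SUSPECT_DIRS:
        if d.exists():
            findings.append(f"config directory present: {d}")
    for name in SUSPECT_NAMES:
        if shutil.which(name):
            findings.append(f"binary on PATH: {name}")
        # Look for a running process (POSIX 'pgrep'; absent on Windows).
        try:
            result = subprocess.run(["pgrep", "-f", name], capture_output=True)
            if result.returncode == 0:
                findings.append(f"process running: {name}")
        except FileNotFoundError:
            pass  # pgrep unavailable on this platform
    return findings


if __name__ == "__main__":
    hits = check_endpoint()
    if hits:
        print("Possible unapproved AI tooling detected:")
        for h in hits:
            print(f"  - {h}")
    else:
        print("No known indicators found on this endpoint.")
```

In practice, an endpoint management or shadow AI monitoring platform would do this at scale; the sketch simply shows how little it takes to begin building an inventory.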
OpenClaw also integrates with widely used communication platforms, including WhatsApp, Telegram, Discord, Slack and Teams. If OpenClaw has been linked to multiple applications, manually resetting credentials and access tokens across those services could be a difficult and time-consuming task.
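To give a sense of the clean-up involved for even one platform, here is a minimal sketch that revokes a batch of Slack tokens using Slack’s auth.revoke Web API method. The token list is a placeholder; in practice it would come from a credential inventory, and each of the other platforms (WhatsApp, Telegram, Discord, Teams) has its own reset or revocation procedure.

```python
"""Revoke a batch of possibly exposed Slack tokens.

Uses Slack's auth.revoke Web API method. TOKENS is a placeholder --
in practice these values would come from your credential inventory.
"""
import requests

TOKENS = [
    "xoxb-EXAMPLE-TOKEN-1",  # placeholder values only
    "xoxp-EXAMPLE-TOKEN-2",
]


def revoke(token: str) -> bool:
    resp = requests.post(
        "https://slack.com/api/auth.revoke",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    data = resp.json()
    # Slack returns {"ok": true, "revoked": true} on success.
    return bool(data.get("ok") and data.get("revoked"))


for t in TOKENS:
    status = "revoked" if revoke(t) else "FAILED (rotate manually)"
    print(f"{t[:12]}…: {status}")
```

Multiply this across every connected service, and every user who linked an account, and the scale of the remediation effort becomes clear.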
Practical steps organizations should consider
For many organizations, the OpenClaw case is a reminder that AI innovation must be matched with appropriate risk management. Some practical steps include:
- Look at technical settings: Organizations should restrict the use of applications like OpenClaw on their networks. Tools are available to assess shadow AI risk; organizations that have them should add OpenClaw to the list of prohibited applications (a minimal network-sweep sketch follows this list). It has also been reported that users currently cannot delete an OpenClaw account, at least through the standard settings, so organizations that believe they have been exposed may want to take specialist advice.
- Check your socials: OpenClaw reportedly collects X (formerly Twitter) user names, display names, passwords and more, so a threat actor could potentially use it to gain access to the organization’s social media accounts, creating reputational risk and exposing the organization to phishing attacks.
- Make literacy a priority: AI literacy has become a regulatory expectation, including under the EU AI Act, and staff need to understand both the opportunities and the risks of AI systems. Good, up-to-date, fit-for-purpose compliance training will be a key part of this.
- Take measures to protect against shadow AI: While a literacy program will be part of this, organizations may also want to deploy traditional technical controls, such as data loss prevention software and specialist shadow AI monitoring and blocking services.
- Look at contracts and developer due diligence: For some organizations, the issue might stem from subcontracted developers, so they need to ensure contractual protections are in place to meet their compliance and regulatory obligations. This might also include requiring specific insurance policies, since developers with 10 or fewer employees are unlikely to have the financial resources to cover losses when things go wrong.
- Do a proper data protection impact assessment or AI impact assessment: This isn’t just common sense but may well be a legal requirement. While organizations want to move quickly in the new AI world, sometimes it’s necessary to step back and check that legal and compliance obligations are being met.
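As promised above, here is a minimal sketch of the kind of network sweep an organization might run to find exposed OpenClaw-style gateways on its own address ranges. The port number is an assumption for illustration, not a confirmed default; check the project’s documentation, and only scan networks you are authorized to test.

```python
"""Sweep internal address ranges for exposed OpenClaw-style gateways.

Tries a TCP connect to an assumed gateway port on each host. The port
is an ASSUMPTION for illustration -- verify the real default, and only
scan networks you are authorized to test.
"""
import ipaddress
import socket

GATEWAY_PORT = 18789      # assumed default; verify before use
NETWORK = "10.0.0.0/29"   # replace with your own ranges


def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for ip in ipaddress.ip_network(NETWORK).hosts():
    if port_open(str(ip), GATEWAY_PORT):
        print(f"{ip}: port {GATEWAY_PORT} open -- check for an exposed control panel")
```

A sweep like this only catches hosts listening at scan time; pairing it with the endpoint checks and blocking tools described above gives more durable coverage.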
The rapid adoption of agentic AI is exposing new governance challenges for security leaders. OpenClaw demonstrates the importance of carefully controlling experimental deployments to ensure that the drive for innovation does not outpace robust security and compliance measures.


Jonathan Armstrong is a partner at Punter Southall. He is an experienced lawyer with a concentration on technology and compliance. His practice includes advising multinational companies on matters involving risk, compliance and technology across Europe. He has handled legal matters in more than 60 countries involving emerging technology, corporate governance, ethics code implementation, reputation, internal investigations, marketing, branding and global privacy policies. Jonathan has counseled a range of clients on breach prevention, mitigation and response. He has also been particularly active in advising multinational corporations on their response to the UK Bribery Act 2010 and its inter-relationship with the U.S. Foreign Corrupt Practices Act (FCPA). 