Corporate Compliance Insights

OpenClaw Reveals Hidden Security Risks of Agentic AI

Drive for innovation cannot outpace robust security and compliance measures

by Jonathan Armstrong
April 27, 2026
in Risk

The race to install AI agents may be underway, but organizations need to slow down and think about the risk they may be introducing, says Jonathan Armstrong, partner at Punter Southall. You don’t need to look far to see the damage agentic AI can cause.

At the SCCE’s European Compliance & Ethics Institute this year in Berlin, the words on everyone’s lips were “agentic AI.” Even the closing keynote urged compliance professionals to go back to their offices and learn how to use agentic AI in their roles. And it’s not limited to the compliance world: Gartner predicts that by 2028, 60% of brands will use agentic AI to facilitate streamlined one-to-one interactions, a transformation the firm says will shift marketing away from traditional channel-based approaches and usher in a new era of personalized, autonomous engagement.

But this isn’t risk-free. In Berlin, I cautioned against using agentic AI without understanding the new risks it brings. Consider the recent OpenClaw incident, in which AI agents moved between applications using shared credentials; it shows how quickly innovation can become a vulnerability. As developers and teams experiment with these tools, compliance professionals must understand and manage the associated risks.

The OpenClaw exposure

OpenClaw was built in late 2025 as a “weekend project” by its author, Peter Steinberger. It quickly became popular as it allowed AI agents to talk to each other and to share access to systems. Steinberger said his GitHub repository had 2 million visitors in a single week, and many developers used his code as part of their agentic AI infrastructure. 

However, in February, a report identified significant potential vulnerabilities. Researchers discovered almost 43,000 unique IP addresses hosting exposed OpenClaw control panels with full system access across 82 countries. This could allow an attacker to exploit the OpenClaw gateway to take control of the affected system.
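Security teams that want to check their own estate for this kind of exposure can start with a simple reachability test from inside the network. The sketch below is a minimal, hypothetical Python example; the port numbers are placeholders, not OpenClaw’s documented defaults, and a reachable port is only a first signal that warrants deeper investigation, not proof of a vulnerable control panel.

```python
import socket

# Hypothetical sketch: from inside your own network, check whether a
# host answers on the ports an agent gateway might be configured to use.
# The ports below are placeholders, not documented OpenClaw defaults.

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP service accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out or unreachable: treat as closed.
        return False

if __name__ == "__main__":
    for port in (18789, 18790):  # placeholder ports to audit
        state = "reachable" if port_open("127.0.0.1", port) else "closed"
        print(f"127.0.0.1:{port} -> {state}")
```

Only scan hosts you own or are authorized to test; a positive result should feed into incident triage, not be treated as a finding on its own.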

OpenClaw deployments were heavily concentrated in major cloud and hosting providers. Depending on the configuration, the vulnerability could also allow threat actors to connect to third-party services, such as email, calendars, chat applications, social media and browsers.

Further concerns emerged when a cybersecurity investigation reportedly found a misconfigured database exposing 1.5 million authentication tokens, around 35,000 email addresses and private communications among AI agents.


Regulatory warnings and rising security concerns

Also in February, the Dutch data protection authority, Autoriteit Persoonsgegevens (AP), warned users and organizations against using OpenClaw and similar experimental systems. The AP said that such open-source systems may not meet basic security requirements and advised against using them on systems containing sensitive or confidential data. This includes systems holding access codes, financial information, employee data, private documents or identity documents. The AP also warned that just because OpenClaw runs locally on a user’s computer does not automatically mean it is secure.

These warnings are not isolated incidents, and they highlight a key challenge with these tools: users often do not fully understand the level of control they are granting to AI systems. Similar concerns have emerged around Orchids, a so-called “vibe-coding” platform that allows users with no technical expertise to build apps and games using text prompts in a chatbot. Despite claiming a million users, Orchids has reportedly exhibited vulnerabilities that could allow attackers to take control of users’ devices.

A common factor in both is the small size of the companies behind the tools. OpenClaw reportedly began as a one-person project, while Orchids has 10 or fewer employees, according to its LinkedIn page. This raises questions about the capacity of these developers to manage security, support users and meet regulatory expectations — issues that regulators are increasingly scrutinizing as agentic AI adoption accelerates.

Why uninstalling OpenClaw is not a solution

For many organizations, fixing the risks associated with OpenClaw is not as simple as uninstalling the software. One challenge is visibility. Some may not even know whether OpenClaw has been deployed, as the tool may have been adopted by developers or staff experimenting with AI tools without formal approval or oversight.

This so-called shadow AI risk is already significant. A Microsoft study from October suggested that 71% of UK employees had used unapproved AI tools at work. Given the rapid adoption of AI since then, the true figure could now be higher.

OpenClaw also integrates with widely used communication platforms, including WhatsApp, Telegram, Discord, Slack and Teams. If OpenClaw has been linked to multiple applications, manually resetting credentials and access tokens across those services could be a difficult and time-consuming task.
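A simple inventory of linked services and their rotation status can keep that cleanup manageable. The sketch below is illustrative Python; it assumes the organization maintains its own list of linked integrations, and the class and function names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch: track which linked services still need their
# credentials rotated after removing an agent gateway. The service
# names come from the article; the fields and helpers are illustrative.

@dataclass
class LinkedService:
    name: str
    token_rotated: bool = False

def rotation_backlog(services: list[LinkedService]) -> list[str]:
    """Return the names of services whose tokens have not been reset."""
    return [s.name for s in services if not s.token_rotated]

linked = [
    LinkedService("WhatsApp"),
    LinkedService("Telegram", token_rotated=True),
    LinkedService("Discord"),
    LinkedService("Slack"),
    LinkedService("Teams", token_rotated=True),
]
print(rotation_backlog(linked))  # services still awaiting rotation
```

Even a spreadsheet-level record like this gives the response team a clear finish line for revoking every credential the tool touched.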

Practical steps organizations should consider

For many organizations, the OpenClaw case is a reminder that AI innovation must be matched with appropriate risk management. Some practical steps include:

  • Look at technical settings: Organizations should restrict the use of applications like OpenClaw on their networks. Tools are available to monitor shadow AI risk; organizations that have them should add OpenClaw to their list of prohibited applications. It has been reported that users currently cannot delete an OpenClaw account, at least through common settings, so organizations that believe they have been exposed may want to seek specialist advice.
  • Check your socials: OpenClaw reportedly collects X (formerly Twitter) usernames, display names, passwords and more. A threat actor could therefore use OpenClaw to gain access to the organization’s social networking output, which again can create reputational risk and expose the organization to phishing attacks.
  • Prioritize AI literacy: AI literacy has become a regulatory expectation, including under the EU AI Act, and staff need to understand both the opportunities and risks of AI systems. Good, up-to-date, fit-for-purpose compliance training will be a key part of this.
  • Take measures to protect against shadow AI: While a literacy program will be part of this, organizations may want to include traditional software solutions like data loss prevention software and specialist shadow AI monitoring and blocking services. 
  • Look at contracts and developer due diligence: For some organizations, the issue might stem from subcontracted developers, so they need to ensure contractual protections are in place to meet their compliance and regulatory obligations. This might also include specific insurance policies, since developers with 10 or fewer employees are unlikely to have the financial capacity to cover losses when things go wrong.
  • Do a proper data protection impact assessment or AI impact assessment: This isn’t just common sense but may well be a legal requirement. While organizations want to move quickly in the new AI world, sometimes it’s necessary to step back and check whether legal and compliance obligations are being considered.
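The first and fourth steps above can be sketched in miniature as a blocklist check that flags prohibited applications in a software inventory. The Python below is a hypothetical illustration; real deployments would rely on endpoint-management or shadow-AI monitoring tooling, and the blocklist entries are placeholders drawn from the tools named in this article.

```python
# Hypothetical sketch: flag software-inventory entries that match a
# prohibited-application list. The entries in PROHIBITED are
# placeholders; maintain the real list in your endpoint tooling.

PROHIBITED = {"openclaw", "orchids"}  # lowercase match keys

def flag_shadow_ai(inventory: list[str]) -> list[str]:
    """Return installed-software names that appear on the blocklist."""
    return [app for app in inventory if app.lower() in PROHIBITED]

inventory = ["Slack", "OpenClaw", "Excel"]
print(flag_shadow_ai(inventory))  # -> ['OpenClaw']
```

A matching-name check like this is only a starting point; renamed binaries and browser-based tools need network-level and DLP controls on top.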

The rapid adoption of agentic AI is exposing new governance challenges for security leaders. OpenClaw demonstrates the importance of carefully controlling experimental deployments to ensure that the drive for innovation does not outpace robust security and compliance measures.

Tags: Artificial Intelligence (AI)
Jonathan Armstrong

Jonathan Armstrong is a partner at Punter Southall. He is an experienced lawyer with a concentration on technology and compliance. His practice includes advising multinational companies on matters involving risk, compliance and technology across Europe. He has handled legal matters in more than 60 countries involving emerging technology, corporate governance, ethics code implementation, reputation, internal investigations, marketing, branding and global privacy policies. Jonathan has counseled a range of clients on breach prevention, mitigation and response. He has also been particularly active in advising multinational corporations on their response to the UK Bribery Act 2010 and its inter-relationship with the U.S. Foreign Corrupt Practices Act (FCPA).


© 2026 Corporate Compliance Insights
