No Result
View All Result
SUBSCRIBE | NO FEES, NO PAYWALLS
MANAGE MY SUBSCRIPTION
NEWSLETTER
Corporate Compliance Insights
  • Home
  • About
    • About CCI
    • CCI Magazine
    • Writing for CCI
    • Career Connection
    • NEW: CCI Press – Book Publishing
    • Advertise With Us
  • Explore Topics
    • See All Articles
    • Compliance
    • Ethics
    • Risk
    • FCPA
    • Governance
    • Fraud
    • Internal Audit
    • HR Compliance
    • Cybersecurity
    • Data Privacy
    • Financial Services
    • Well-Being at Work
    • Leadership and Career
    • Opinion
  • Vendor News
  • Library
    • Download Whitepapers & Reports
    • Download eBooks
    • New: Living Your Best Compliance Life by Mary Shirley
    • New: Ethics and Compliance for Humans by Adam Balfour
    • 2021: Raise Your Game, Not Your Voice by Lentini-Walker & Tschida
    • CCI Press & Compliance Bookshelf
  • Podcasts
    • Great Women in Compliance
    • Unless: The Podcast (Hemma Lomax)
  • Research
  • Webinars
  • Events
  • Subscribe
Jump to a Section
  • At the Office
    • Ethics
    • HR Compliance
    • Leadership & Career
    • Well-Being at Work
  • Compliance & Risk
    • Compliance
    • FCPA
    • Fraud
    • Risk
  • Finserv & Audit
    • Financial Services
    • Internal Audit
  • Governance
    • ESG
    • Getting Governance Right
  • Infosec
    • Cybersecurity
    • Data Privacy
  • Opinion
    • Adam Balfour
    • Jim DeLoach
    • Mary Shirley
    • Yan Tougas
No Result
View All Result
Corporate Compliance Insights
Home Risk

Agentic AI Can Be Force Multiplier — for Criminals, Too

How polymorphic malware and synthetic identities are creating unprecedented attack vectors

by Steve Durbin
April 21, 2025
in Risk
robot hand pointing to sky

As organizations rapidly adopt AI agents for business optimization, cybercriminals are exploiting the same technologies to automate sophisticated attacks. Information Security Forum CEO Steve Durbin reveals how malicious actors are developing teams of autonomous AI systems that can evade traditional security measures through techniques like polymorphic code generation and data poisoning.

AI agents are systems that can autonomously perform tasks on behalf of users. They can adapt to dynamic environments and make decisions without requiring human intervention. Their ability to perceive and act upon vast datasets autonomously is driving innovations and transforming value chains by optimizing processes in sectors such as healthcare, manufacturing, finance and banking. AI agents are expected to be adopted by 82% of organizations by 2027.

Weaponizing AI agents to automate cybercrime

The autonomy of AI agents implies advancements when used ethically and responsibly. However, their ability to make decisions independently and their adaptive nature have attracted the interest of malicious actors who can develop a team of agentic AI malware working collaboratively to automate attacks. 

Such scalable attacks can be executed with unprecedented efficiency and surpass the capabilities of existing threat detection systems. As many as 78% of CISOs believe AI-powered cyber threats are already significantly affecting their organizations.

Here’s how agentic AI can conceivably automate cyberattacks:

  • Polymorphic malware: Like a chameleon, this AI-generated malicious software can relentlessly change its code or appearance every time it infects a system. This enables it to evade detection by defenses that rely on blocklists and static signatures.
  • Adaptive malware: AI can automate malware creation that analyzes its environment, identifies security protocols in place and adapts in real time to launch attacks.
  • Scalable attacks: AI’s ability to automate repetitive tasks is exploited by malicious attackers to potentially launch large-scale campaigns that can simultaneously target millions of users with high precision; for example, methods like phishing emails, DDoS attacks and credential harvesting.
  • Identifying attackers’ entry points: AI systems can autonomously identify vulnerabilities and anomalies by scanning vast networks, finding potential access points. By helping bad actors reduce the time and effort it takes to identify security gaps within a targeted system, AI agents can launch attacks at scale with alarming speed, achieving maximum impact.
  • Synthetic identity fraud: Threat actors exploit AI to create synthetic identities by blending real and fake personal data. Because such synthetic personas can appear legitimate and evade fraud detection, they are commonly used in attacks involving identity theft and social engineering lures.
  • Personalized phishing campaigns: AI amplifies the efficiency of phishing campaigns by scanning and analyzing victims’ personal data in public domains. By farming this data, AI can help create highly personalized and convincing phishing emails.
robot reading book generated by ai
Financial Services

Teaching Machines to Spot What Matters

by Kevin Lee
April 8, 2025

How emerging technologies are transforming inefficient alert systems and reshaping financial crime prevention

Read moreDetails

When AI agents go rogue

AI agents use machine learning to continually learn from vast amounts of real-time data and plan their actions. However, unrestricted access to vast amounts of data, along with autonomy, can threaten an organization’s security and pose regulatory risks when AI agents become rogue and stray from their intended purpose. Rogue AI agents could arise from malicious intent due to deliberate tampering or inadvertently from flawed system design, programming errors or simply due to user carelessness.

Attackers can manipulate AI’s training data to exploit the autonomy of rogue AI agents through techniques such as:

  • Direct prompt injection: Attackers give incorrect instructions to manipulate large language models (LLMs) into disclosing sensitive data or executing harmful commands.
  • Indirect prompt injection: Attackers embed malicious instructions within external data sources like a website or a document that the AI accesses.
  • Data poisoning: Data is poisoned or seeded with incorrect or deceptive information to train the AI model. It undermines the model’s integrity, producing erroneous, biased or malicious results.
  • Model manipulation: Attackers intentionally weaken an AI system by injecting vulnerabilities during their training to control its responses, thereby compromising system integrity.
  • Data exfiltration: Attackers use prompts to manipulate LLMs to expose sensitive data.

Bad actors are using AI to achieve malicious results. In order to tap the true potential of AI, organizations need to consider the potential harm caused by rogue AI while planning their risk management approach to ensure AI is responsibly and safely used.

Defending against malicious or rogue AI agents

The following can help organizations remain secure from malicious AI agents:

  • AI-driven threat detection: Use AI-driven monitoring tools to detect even small deviations in system activity that may point to unauthorized access or malware.
  • Data protection tools: To ensure that sensitive data remains secure even if maliciously intercepted, encrypt it. Make sure important data is only accessible by valid users by using multi-factor authentication to minimize the risk of access by unauthorized users.
  • Resilient AI by adversarial training: To make AI models more resilient against malicious threats, retrain them on past adversarial attack data or subject them to simulated attacks.
  • Reliable training data: An accurate AI model can be developed by using high-quality training data. Relying on dependable datasets reduces biases and errors, helping to keep the model safe from being trained on malicious data.

Autonomous AI agents can increase efficiency and automate operations. But when they turn rogue, they may pose serious risks due to their ability to act independently and adapt quickly. Although minimally invasive today, risk managers should certainly be aware and on guard. By addressing the security issues native to AI, organizations can fully harness the immense potential AI has to offer.


Tags: Artificial Intelligence (AI)Cyber RiskCybercrime
Previous Post

GenAI Adoption Surging in Professional Services

Next Post

Are We at Risk of Automating Ethics Out of Healthcare Decisions?

Steve Durbin

Steve Durbin

Steve Durbin is CEO of the Information Security Forum, an independent association dedicated to investigating, clarifying and resolving key issues in information security and risk management by developing best practice methodologies, processes and solutions that meet the business needs of its members.

Related Posts

GAN Integrity TPRM & AI

Where TPRM Meets AI: Balancing Risk & Reward

by Corporate Compliance Insights
May 13, 2025

Is your organization prepared for the dual challenges of AI in third-party risk management? Whitepaper Where TPRM Meets AI: Balancing...

tracking prices

Pricing Algorithms Raise New Antitrust Concerns

by FTI Consulting
May 13, 2025

Interdisciplinary frameworks can help manage legal, privacy and consumer protection risks

news roundup data grungy

DEI, Immigration Regulations Lead List of Employers’ Concerns

by Staff and Wire Reports
May 9, 2025

Half of fraud driven by AI; finserv firms cite tech risks in ’25

ai policy

Planning Your AI Policy? Start Here.

by Bradford J. Kelley, Mike Skidgel and Alice Wang
May 7, 2025

Effective AI governance begins with clear policies that establish boundaries for workplace use. Bradford J. Kelley, Mike Skidgel and Alice...

Next Post
stethoscope

Are We at Risk of Automating Ethics Out of Healthcare Decisions?

No Result
View All Result

Privacy Policy | AI Policy

Founded in 2010, CCI is the web’s premier global independent news source for compliance, ethics, risk and information security. 

Got a news tip? Get in touch. Want a weekly round-up in your inbox? Sign up for free. No subscription fees, no paywalls. 

Follow Us

Browse Topics:

  • CCI Press
  • Compliance
  • Compliance Podcasts
  • Cybersecurity
  • Data Privacy
  • eBooks Published by CCI
  • Ethics
  • FCPA
  • Featured
  • Financial Services
  • Fraud
  • Governance
  • GRC Vendor News
  • HR Compliance
  • Internal Audit
  • Leadership and Career
  • On Demand Webinars
  • Opinion
  • Research
  • Resource Library
  • Risk
  • Uncategorized
  • Videos
  • Webinars
  • Well-Being
  • Whitepapers

© 2025 Corporate Compliance Insights

Welcome to CCI. This site uses cookies. Please click OK to accept. Privacy Policy
Cookie settingsACCEPT
Manage consent

Privacy Overview

This website uses cookies to improve your experience while you navigate through the website. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. We also use third-party cookies that help us analyze and understand how you use this website. These cookies will be stored in your browser only with your consent. You also have the option to opt-out of these cookies. But opting out of some of these cookies may affect your browsing experience.
Necessary
Always Enabled
Necessary cookies are absolutely essential for the website to function properly. These cookies ensure basic functionalities and security features of the website, anonymously.
CookieDurationDescription
cookielawinfo-checbox-analytics11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Analytics".
cookielawinfo-checbox-functional11 monthsThe cookie is set by GDPR cookie consent to record the user consent for the cookies in the category "Functional".
cookielawinfo-checbox-others11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Other.
cookielawinfo-checkbox-necessary11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookies is used to store the user consent for the cookies in the category "Necessary".
cookielawinfo-checkbox-performance11 monthsThis cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Performance".
viewed_cookie_policy11 monthsThe cookie is set by the GDPR Cookie Consent plugin and is used to store whether or not user has consented to the use of cookies. It does not store any personal data.
Functional
Functional cookies help to perform certain functionalities like sharing the content of the website on social media platforms, collect feedbacks, and other third-party features.
Performance
Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.
Analytics
Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics the number of visitors, bounce rate, traffic source, etc.
Advertisement
Advertisement cookies are used to provide visitors with relevant ads and marketing campaigns. These cookies track visitors across websites and collect information to provide customized ads.
Others
Other uncategorized cookies are those that are being analyzed and have not been classified into a category as yet.
SAVE & ACCEPT
No Result
View All Result
  • Home
  • About
    • About CCI
    • CCI Magazine
    • Writing for CCI
    • Career Connection
    • NEW: CCI Press – Book Publishing
    • Advertise With Us
  • Explore Topics
    • See All Articles
    • Compliance
    • Ethics
    • Risk
    • FCPA
    • Governance
    • Fraud
    • Internal Audit
    • HR Compliance
    • Cybersecurity
    • Data Privacy
    • Financial Services
    • Well-Being at Work
    • Leadership and Career
    • Opinion
  • Vendor News
  • Library
    • Download Whitepapers & Reports
    • Download eBooks
    • New: Living Your Best Compliance Life by Mary Shirley
    • New: Ethics and Compliance for Humans by Adam Balfour
    • 2021: Raise Your Game, Not Your Voice by Lentini-Walker & Tschida
    • CCI Press & Compliance Bookshelf
  • Podcasts
    • Great Women in Compliance
    • Unless: The Podcast (Hemma Lomax)
  • Research
  • Webinars
  • Events
  • Subscribe

© 2025 Corporate Compliance Insights