Can You Spot a Deepfake? Are You Sure?

With synthetic media losses projected to triple by 2027, detection techniques must evolve beyond visual verification alone

by Perry Carpenter
March 10, 2025
in Cybersecurity

Welcome to 2025, where asking your CFO to turn sideways or stick out their tongue during a video call isn’t weird — it’s just good risk mitigation. Perry Carpenter of KnowBe4 explores the increasingly absurd cat-and-mouse game between deepfake creators and defenders, offering practical (if occasionally awkward) verification techniques to prevent your organization from joining the ranks of companies that have been scammed out of millions. 

Deepfake technology is advancing rapidly. Whether it’s a viral video of a celeb, a CEO’s urgent message or a pundit’s bizarrely controversial statement, the possibility that some piece of media could be a deepfake is no longer science fiction — it’s something very real. Deepfakes are increasingly being weaponized for misinformation, online scams, identity fraud and cyber attacks. By 2027, deepfake-related losses are expected to reach $40 billion, up from $12 billion in 2023. 

Defining deepfakes

A deepfake is a type of synthetic media — AI-generated imagery, video or audio — but deepfakes come in a variety of forms:

  • Face swapping: Replacing one person’s face with another in a video or image
  • Lip-syncing: Changing someone’s mouth movements to match a different audio track
  • Voice cloning: Replicating someone else’s voice by capturing their tone, accent, sound or mannerisms
  • Full-body: Creating an entirely new video of someone performing an action they never did
  • Live: A real-time (not prerecorded) digitized version of a real person on a video call, broadcast or online stream

Identify visual oddities

One of the most effective ways to identify a deepfake video is to examine its visual cues closely, which can reveal tell-tale signs that something is off.

Commonly identifiable visual oddities include:

  • Unnatural facial movement, such as irregular eye and eyebrow motion, odd blinking, lip-sync issues, stiff facial expressions or missing features like tongues.
  • Inconsistent backgrounds, where objects appear, disappear or change suddenly.
  • Blurring, including unexplained imperfections and pixelation around the edges of the face and body.
  • Side adhesion, where the video loses facial integrity when the subject turns their head to the side.
  • Masking problems, where an object such as a hand hovering over or occluding the face reveals a mask-like seam in the underlying image.
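
Some of these cues lend themselves to rough automation. The sketch below, a minimal Python example assuming the OpenCV library, samples frames from a video file and flags frames where the detected face region is markedly blurrier than the frame as a whole, one of the edge-blurring artifacts described above. The sampling rate and ratio threshold are illustrative, not tuned.

```python
# Minimal sketch, assuming OpenCV (cv2) is installed.
# Flags sampled frames where the detected face region is much blurrier
# than the frame overall; thresholds are illustrative, not tuned.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def sharpness(gray_region):
    """Variance of the Laplacian: a simple, common blur metric."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def flag_blurry_faces(video_path, ratio_threshold=0.4, sample_every=15):
    """Return indices of sampled frames where the face is far blurrier than the frame."""
    suspicious = []
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
                face_blur = sharpness(gray[y:y + h, x:x + w])
                frame_blur = sharpness(gray)
                if frame_blur > 0 and face_blur / frame_blur < ratio_threshold:
                    suspicious.append(index)
        index += 1
    capture.release()
    return suspicious
```

A high count of flagged frames does not prove manipulation; lighting, compression and camera focus can produce similar artifacts, so treat the output as a prompt to apply the manual checks above.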

Search for hidden clues in audio

Modern AI technology only needs about 3 to 5 seconds of audio to clone someone’s voice. It’s no surprise that threat actors are increasingly employing voice cloning in social engineering and extortion scams. 

The key to detecting audio deepfakes is listening carefully. AI-generated voices often have a flatter delivery and awkward pauses, and they lack the emotional nuance of human speech.

Other times, the vocal delivery may be more fluid than the natural speech patterns of the person whose voice was cloned. If the known individual has distinctive vocal quirks, such as an accent, breath or nasal sounds, or mouth clicks, their absence can also help identify a potential deepfake.
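
Careful listening can be supplemented with simple measurements. The sketch below, assuming the librosa and numpy libraries, estimates how much the pitch contour of a recording varies; an unusually flat contour relative to known genuine recordings of the same speaker is one weak signal worth following up, not proof of cloning. The comparison in the usage comment, including its threshold, is illustrative.

```python
# Minimal sketch, assuming librosa and numpy are installed.
# Estimates pitch variability; very flat delivery is one weak signal
# of a synthetic voice, to be checked against known-genuine recordings.
import librosa
import numpy as np

def pitch_variation(audio_path):
    """Return the standard deviation (Hz) of the voiced pitch contour."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced = f0[voiced_flag & ~np.isnan(f0)]
    return float(np.std(voiced)) if voiced.size else 0.0

# Illustrative comparison against a recording known to be the real speaker:
# if pitch_variation("suspect_call.wav") < 0.5 * pitch_variation("known_real.wav"):
#     print("Pitch contour is unusually flat relative to the reference; verify further.")
```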

Perform real-time tests

A finance worker transferred $25 million to scammers after a video call with a deepfake CFO, and threat actors are using deepfakes in job interviews to infiltrate organizations. There are several tests people can run to detect such real-time deepfakes.

A side profile test, for example, can help identify a deepfake by asking the person to move their head from side to side. Deepfakes often struggle to maintain integrity at sharp angles. 

Similarly, a hand-interaction test, where you ask the individual to place a hand or a finger in front of their face, can often reveal a deepfake, as facial occlusion is a common weakness of deepfakes. 

Other real-time tests focus on the mouth. Ask the subject to stick out their tongue; if you're looking at a deepfake, you're likely to see things that don't look quite right. Also watch for synchronization between mouth movements and the audio. A code-word test, where you ask the subject for a secret word or the answer to a question that only the real person would know, can also help you sniff out a real-time deepfake.
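
The code-word idea can also be formalized as a simple challenge-response check when both parties have exchanged a shared secret out of band ahead of time. The minimal sketch below relies only on Python's standard hmac and secrets modules; the function names and the way the challenge is delivered are hypothetical, and the check supplements rather than replaces the human tests above.

```python
# Minimal sketch using only the Python standard library (hmac, secrets).
# Both parties must already hold the same shared secret, exchanged out of band.
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    """Verifier: generate a one-time random challenge to read out on the call."""
    return secrets.token_hex(8)

def respond(shared_secret: bytes, challenge: str) -> str:
    """Respondent: compute the expected answer from the shared secret."""
    return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(shared_secret: bytes, challenge: str, answer: str) -> bool:
    """Verifier: constant-time comparison of the received answer."""
    return hmac.compare_digest(respond(shared_secret, challenge), answer)
```

In practice, the verifier reads the challenge aloud on the call and the other party answers from a separate, trusted device that holds the secret, so a compromised video channel alone is not enough to pass the check.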

Ask yourself some critical questions

An obvious question is "Is this a deepfake?" but that's not always going to give you a straightforward answer. Instead, pose the following questions:

  • Why am I seeing this? Why is this content visible to me, and what is the context?
  • Who created it? Can I identify the author, the source or the creator?
  • What is the intent? Is someone trying to sell me something, influence my opinions or get my personal info?
  • What emotions are being manipulated?
  • Is the content consistent with known behavior or facts? Does it align with the person's known behavior, location or timeline?

Deepfake defense strategies for organizations

In addition to training and "spot the deepfake" exercises, several AI-based tools can be used for deepfake detection, including DuckDuckGoose, Sensity AI, Deepware, Resemble AI, TrueMedia.org and FakeCatcher. Methods like digital watermarking or Content Authenticity Initiative (CAI) standards, which authenticate, verify and trace digital content back to its original source, can also help.

Organizations can also adopt practical strategies, such as implementing robust verification protocols for sensitive communications that go beyond visual confirmation. Incorporating multi-factor authentication and additional verification steps, such as secure passphrases, or establishing secondary approval channels can further safeguard against deepfake manipulation and unauthorized access.
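
As an illustration of what a secondary approval channel might look like in practice, the sketch below models a simple rule: a sensitive request is not actioned until it has been confirmed by at least two distinct approvers over at least two distinct channels. The class and field names are hypothetical; a real control would live inside an organization's payment or ticketing workflow.

```python
# Illustrative sketch; class and field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SensitiveRequest:
    request_id: str
    amount: float
    confirmations: set = field(default_factory=set)  # (approver, channel) pairs

    def confirm(self, approver: str, channel: str) -> None:
        """Record a confirmation received via a given channel, e.g. 'callback' or 'in_person'."""
        self.confirmations.add((approver, channel))

    def may_proceed(self, min_approvers: int = 2) -> bool:
        """Require distinct approvers AND distinct channels before acting on the request."""
        approvers = {a for a, _ in self.confirmations}
        channels = {c for _, c in self.confirmations}
        return len(approvers) >= min_approvers and len(channels) >= 2
```

Requiring distinct channels is the point: a deepfake that controls one channel, such as the video call itself, should not be able to satisfy the second.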


Perry Carpenter

Perry Carpenter is an award-winning author, podcaster and speaker, with over two decades in cybersecurity focusing on how cybercriminals exploit human behavior. He is the chief human risk management strategist at KnowBe4. His latest book, “FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation and AI-Generated Deceptions” (2024 Wiley), explores AI's role in deception.
