This is the second of a two-part series. Read Part I here.
When an account obsessively blaming an executive shifts from theatrical venting to calm deliberation, migrates to a closed forum and suddenly disappears, threat analysts face a critical decision point in an environment where the distance between internet rhetoric and real-world violence has collapsed. Felix Cook of S-RM concludes his two-part series by detailing how practitioners gather clues while activity remains public, connect pseudonymous handles across platforms through linguistic fingerprints and posting patterns, and build evidence files that withstand legal scrutiny.
Free speech is enshrined in the US Constitution, which means even the most incendiary online discourse is usually legal and protected. Someone can write on social media that they want to harm a specific person, and it might be a joke, a burst of rage with no follow-through or a credible threat. The question is whether anyone can tell the difference early enough, before a name typed on a screen becomes an event in the physical world.
Last year’s assassination of UnitedHealthcare’s CEO illustrated a troubling new reality: A mentally unstable individual can be radicalized online, attach personal grievances to someone not remotely responsible and act with catastrophic consequences. Political violence is now a risk affecting previously anonymous executives who never imagined they were part of anyone’s ideological narrative.
How, then, do the teams responsible for protecting executives — cybersecurity analysts, investigative researchers and their physical-security counterparts — decide when a threat is credible and gather evidence that holds up within one of the world’s most pro-free speech legal systems?
The paradox of modern threat detection
Imagine you’re a threat intelligence professional responsible for monitoring the online threat profile of a prominent US oil executive. For months, you’ve been tracking an account that obsessively blames your client for environmental destruction. The tone has been loud and theatrical, emotionally charged but predictable. Then something changes. The posts lose their performative edge and take on a calmer, more deliberate tone, as if the user has shifted from venting to thinking. Almost simultaneously, the handle leaves the public platform and reappears in a closed forum. There, the user begins interacting with others who share the same grievance and reinforce it. For anyone seasoned in this work, that combination — a change in tone, a migration to a smaller space and new engagement with like-minded actors — is a threat signal. And when the account suddenly disappears a day later, whether it has gone offline or moved deeper into the dark corners of the web, what do you do?
This example captures the central paradox of modern threat detection. Credible threats are becoming easier to spot, yet harder to pin down once they are identified. Advances in predictive AI and large-scale modeling have transformed the “needle in a haystack” problem: What once required hours of scrolling through posts to gauge intent, tone and capability can now be done in seconds. But the actors who pose the highest risk are moving into spaces that are increasingly closed, fragmented and pseudonymous. Even legacy public platforms have tightened access. After years of lawsuits and public scrutiny, and driven by their own commercial incentives, many social media platforms now treat user data as proprietary, locking down many pathways of analysis.
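To illustrate the kind of triage these models enable, consider the minimal sketch below, which scores posts against a handful of intent labels using a generic zero-shot classifier. The model choice, labels and threshold are illustrative assumptions for the sake of the example, not any vendor’s production methodology.

```python
# A sketch of automated post triage using a generic zero-shot classifier
# from the Hugging Face transformers library. The model, labels and
# threshold below are illustrative assumptions, not a real methodology.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Hypothetical intent labels an analyst might score posts against
LABELS = [
    "credible threat of violence",
    "angry venting",
    "dark humor",
    "policy criticism",
]

def triage(posts, alert_threshold=0.6):
    """Return posts whose top-scoring label suggests genuine intent to harm."""
    flagged = []
    for post in posts:
        result = classifier(post, candidate_labels=LABELS)
        top_label, top_score = result["labels"][0], result["scores"][0]
        if top_label == LABELS[0] and top_score >= alert_threshold:
            flagged.append((post, round(top_score, 2)))
    return flagged
```

A score like this only narrows the haystack; every flagged post still needs a human analyst to weigh context, history and capability before anything is escalated.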
Focus on the person, not the platform — and follow their path
By the time a threatening handle goes dark, the analyst’s focus should already have shifted from tracking posts and accounts to understanding the person behind them. The window in which activity is public enough to study is brief, and that is when practitioners gather the clues they need. They map every alias and variation, link accounts through posting patterns, linguistic fingerprints, timestamps and recycled grievances, and enrich this information with open-source intelligence.
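To make this concrete, here is a simplified sketch of two such linking signals: character n-gram “linguistic fingerprints” and posting-hour rhythms, compared across two accounts. Real practitioner tooling layers many more signals; the vectorizer settings and scoring below are illustrative assumptions.

```python
# A simplified sketch of two alias-linking signals. The settings here are
# illustrative assumptions; production tooling layers many more signals.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def stylometric_similarity(posts_a: list[str], posts_b: list[str]) -> float:
    """0..1 score of how alike two accounts' writing styles are."""
    # Character n-grams capture punctuation habits, typos and word endings
    # that survive a handle change far better than topic words do.
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
    matrix = vec.fit_transform([" ".join(posts_a), " ".join(posts_b)])
    return float(cosine_similarity(matrix[0], matrix[1])[0, 0])

def posting_hour_overlap(hours_a: list[int], hours_b: list[int]) -> float:
    """Histogram intersection of two accounts' active hours (0-23)."""
    ca, cb = Counter(hours_a), Counter(hours_b)
    na, nb = sum(ca.values()), sum(cb.values())
    # 1.0 means identical daily rhythms; near 0 means disjoint ones.
    return sum(min(ca[h] / na, cb[h] / nb) for h in range(24))
```

Neither score is conclusive on its own; in practice, analysts treat high similarity across several independent signals as grounds to merge two handles into a single file.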
Once the identity is mapped, the file can be kept warm long after the account itself goes quiet. When the same user resurfaces on another platform, under a slightly altered handle or with a recognizable linguistic tic, the analyst can pick up the trail immediately. Once that trajectory crosses into credible-threat territory, the user becomes an escalation case. Most threat actors are not truly anonymous; they are pseudonymous. And pseudonyms, when viewed through the right lens, can be connected, peeled back and affixed to real identities.
Turning intelligence into actionable evidence
Attribution is what unlocks legal pathways of protection. Once a pseudonymous handle is tied to a real person, companies can issue cease-and-desist letters, pursue injunctions, force platforms to act and make law enforcement referrals that actually gain traction. The evidence must tell a coherent story — what was said, when it was said, how it escalated and how the identity was linked across accounts. Chain of custody matters. Metadata matters. The logic of the escalation matters.
Protective-intelligence teams are not trying to predict the future. Rather, they are building a file that shows, in plain language, why this individual presents a credible risk. It is important that key data — screenshots, timestamps, platform logs, usernames, cross-linked aliases and other digital forensics — are gathered, preserved and collated in a way that withstands scrutiny from lawyers, platforms and, if it comes to it, a court.
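As one concrete illustration of that preservation step, the sketch below hashes each artifact and appends it to a manifest with a UTC timestamp, so integrity can be demonstrated later. The field names are illustrative assumptions; real chain-of-custody procedures also govern who handles evidence and how it is stored.

```python
# A minimal sketch of evidence preservation: hash each artifact and append
# it to a manifest with a UTC timestamp. Field names are illustrative;
# real chain-of-custody procedures also govern handling and storage.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def add_to_manifest(evidence_file: str, manifest: str = "manifest.jsonl",
                    collector: str = "analyst-001", note: str = "") -> dict:
    """Record a SHA-256 digest and collection timestamp for one file."""
    digest = hashlib.sha256(Path(evidence_file).read_bytes()).hexdigest()
    entry = {
        "file": evidence_file,
        "sha256": digest,
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
        "collector": collector,
        "note": note,  # e.g. "screenshot of post by handle under review"
    }
    with open(manifest, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry
```

Recomputing a file’s hash months later and matching it against the manifest is what lets a lawyer, a platform or a court trust that the screenshot being shown is the one that was collected.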
Reducing the attack surface and ensuring standing protection
It is good to know when a threat is coming; it is even better to have strong walls in place when it arrives. In the modern era, these defenses are both physical and digital in nature. Unfortunately, most executives have little sense of how much of their life is online. They see the tip of the iceberg and assume that because they personally do not post much, they are insulated. In reality, every email address they have ever used, every conference registration, every breached database that happens to contain their information contributes to a digital footprint that can be assembled into a profile. Even household devices — phones, laptops, routers and the growing catalog of “smart” appliances — become entry points if left unsecured.
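One way to gauge that breached-database exposure is to query the public Have I Been Pwned service. A minimal sketch, assuming the documented v3 endpoint and a valid API key (the key and user-agent string below are placeholders):

```python
# A hedged sketch of one footprint check against the documented
# Have I Been Pwned v3 API. The api_key and user-agent values are
# placeholders; the service requires both on every request.
import requests

def breaches_for(email: str, api_key: str) -> list[str]:
    """Return names of known breaches containing this email address."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={"hibp-api-key": api_key, "user-agent": "footprint-audit"},
        timeout=10,
    )
    if resp.status_code == 404:  # address not found in any known breach
        return []
    resp.raise_for_status()
    return [b["Name"] for b in resp.json()]
```

A check like this covers only one slice of the footprint; data brokers, people-search sites and old account registrations all need the same systematic review.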
And that accounts only for the executive. Families generate exposure at a far greater rate. A spouse posting a photo of the front of the house, a child tagging a school routine on Instagram, casual travel updates — all of it provides detail a determined actor can use. Effective mitigation requires a thorough digital cleanup: removing exposed personal data, hardening devices, securing home networks and tightening privacy around the people who live with the executive.
Executives now live in an environment where a single grievance can metastasize online without warning, and where the distance between internet rhetoric and real-world violence has collapsed. The organizations that fare best are the ones that build threat assessment into their operating rhythm — treating digital exposure the way they treat financial risk or compliance, as something that has to be monitored, measured and hardened continuously. While it’s impossible to eliminate every threat, the point is to make sure that when someone fixates on an executive, there’s already a system in place that sees the escalation early, responds coherently and leaves no gaps to exploit.


Felix Cook is a director at S-RM Intelligence and Risk Consulting, a global intelligence and cybersecurity consultancy. He has extensive experience managing and executing due diligence, disputes and investigations, strategic intelligence and executive protection engagements across the Americas, Europe and the Middle East.