Why Cognitive Hacks are Especially Dangerous
You don’t have to look far for examples of cognitive hacking; unfortunately, the evidence is virtually everywhere. Many believe cognitive hacking contributed to Donald Trump’s victory in the presidential election. James Bone cautions that security professionals should become intimately familiar with cognitive hacks: how they work and what can be done to protect against them. Much is at stake.
In 1981, Carl Landwehr observed, “Without a precise definition of what security means and how a computer can behave, it is meaningless to ask whether a particular computer system is secure.”
Researchers George Cybenko, Annarita Giani and Paul Thompson of Dartmouth College introduced the term “cognitive hack” in 2002 in an article entitled “Cognitive Hacking: A Battle for the Mind.” The article reads, “The manipulation of perception — or cognitive hacking — is outside the domain of classical computer security, which focuses on the technology and network infrastructure.” This is why existing security practice is no longer effective at detecting, preventing or correcting these attacks.
More than 35 years after Landwehr’s warning, cognitive hacks have become the most common tactic used by sophisticated hackers and advanced persistent threats. Cognitive hacks are among the least understood attacks; they operate below conscious awareness, allowing them to occur in plain sight. To understand the simplicity of these attacks, one need look no further than the evening news. The Russian attack on the presidential election is the clearest and most obvious example of how effective they can be. In fact, there is ample evidence that these attacks were refined over many years in the elections of emerging countries.
A March 16, 2016 Bloomberg article, “How to Hack an Election,” chronicled how these tactics were used in Nicaragua, Panama, Honduras, El Salvador, Colombia, Mexico, Costa Rica, Guatemala and Venezuela long before they appeared in American elections.
“Cognitive hacking [Cybenko, Giani, Thompson, 2002] can be either covert, which includes the subtle manipulation of perceptions and the blatant use of misleading information, or overt, which includes defacing or spoofing legitimate norms of communication to influence the user.” Reports of armies of autonomous bots creating “fake news,” or at best misleading information, on social media and popular political websites are a classic signature of a cognitive hack.
Cognitive hacks are deceptive and highly effective because of a basic human bias: we tend to believe things that confirm our own long-held beliefs or the beliefs of our peer groups, whether social, political or collegial. Our perception is “weaponized”; without our knowledge or full understanding, we are being manipulated. Cognitive hacks are most effective in a networked environment, where “fake news” can be picked up on social media sites as trending news or “viral” campaigns, influencing even more readers without any sign that an attack has been orchestrated. In many cases, the viral spread itself is manufactured by an army of autonomous bots across various social media sites.
At its core, the manipulation of behavior is nothing new; it has been used for years in marketing, advertising and political campaigns, as well as in times of war. During the World Wars, patriotic movies were produced to keep public spirits up and to encourage volunteers to enlist. ISIS has been extremely effective at using cognitive hacks to lure an army of volunteers to its jihad, even in the face of the perils of war. We are more susceptible than we believe, deepening our vulnerability to cyber risks and allowing the risk to grow unabated despite huge investments in security. Our lack of awareness of these threats and the subtlety of the approach make cognitive hacks the most troubling problem in security.
I wrote the book “Cognitive Hack: The New Battleground in Cybersecurity… the Human Mind” to raise awareness of these threats. Security professionals must better understand how these attacks work and the new vulnerabilities they pose to employees, business partners and organizations alike. More importantly, these threats are growing in sophistication and vary significantly, requiring security professionals to rethink the assurance provided by their existing defensive posture.
The sensitivity of the current investigation into political hacks by the House and Senate Intelligence Committees may prevent a full disclosure of the methods and approaches used. However, recent news accounts leave little doubt as to their effect, described more than 14 years ago by researchers and seen more recently in the French and Central and South American elections. New security approaches will require a much better understanding of human behavior and collaboration from all stakeholders to minimize the impact of cognitive hacks.
I proposed a simple set of approaches in my book, but security professionals must begin educating themselves on this new, more pervasive threat and go beyond technology-only solutions to defend their organizations against cognitive hacking. If you are interested in receiving research or other materials about these risks or approaches to address them, please feel free to reach out.
C.E. Landwehr, “Formal Models for Computer Security,” ACM Computing Surveys, vol. 13, no. 3, 1981, pp. 247–278.