Search engines now display AI-generated answers at the top of results instead of traditional links, and over half of users prefer these AI summaries to conventional search listings. When your brand doesn’t appear in those summaries, scammers fill the gap. Jonathan Armstrong of Punter Southall explains this “AI vacuum” risk, why one in five major brands are completely absent from AI-generated results and what compliance teams should do about it, from monitoring AI search outputs to developing generative engine optimization strategies.
AI is dominating conversations everywhere right now, but what exactly are AI vacuums, and why could they pose a risk to organizations?
You’d have to be completely disconnected from the internet to miss the changes affecting major search engines. Increasingly, search engines are moving away from traditional paid-for or organic search results, prioritizing AI-generated summaries. Today, many search engines display an AI-generated answer at the top of results, which users are finding increasingly credible. For instance, a 2025 YouGov survey found that over 50% of respondents prefer AI summaries to traditional search listings. Click-through rates are also high, as these AI summaries typically include links to source content to validate the information.
This shift has significant implications not only for search engine revenue models but also for information security, compliance and legal risk.
Understanding the risk
Internet scams have existed for as long as the internet itself. Historically, attackers diverted traffic from legitimate sites through typo-squatting, misleading domains, metatag misuse or paid search manipulation. As user search behavior has evolved, scammers have adapted, too.
Many of these earlier scams relied on businesses not having a strong online presence. Where a digital information gap existed, attackers could exploit it to capture traffic for their own purposes. AI-first search creates a similar environment, where information vacuums can be exploited.
With AI-first search, a number of risks emerge:
- Manipulated AI summaries could redirect users to scam sites, hijacking an organization’s reputation and diverting potential customers.
- Investment and employment scams could be amplified through AI-generated content.
- Credential phishing could be reinforced using fraudulent AI-informed pages.
The low cost of AI makes these attacks more feasible and scalable. Three years ago, a million tokens of AI inference cost about $60; today, the same computing power costs just six cents, a roughly thousandfold reduction, as author Nina Schick has reported. This reduction enables threat actors to experiment at scale and probe for vulnerabilities more efficiently.
So far, most public examples of AI vacuums being exploited have been light-hearted or humorous, such as generating absurd recipes from Reddit posts. However, the potential for serious harm exists due to the way AI-first search functions.
Why AI vacuums are especially concerning
Earlier generative AI models were trained on fixed, limited datasets, such as Common Crawl snapshots. Modern models now draw on broader and more current sources, including live crawls of websites that permit AI access. However, AI model operations often lack transparency. For example, in December 2025, the European Commission opened an investigation into Google over concerns about the data used to train its GenAI models.
Efforts by organizations to protect their intellectual property can sometimes worsen the problem. Most reputable crawlers — like OpenAI’s GPTBot, Google-Extended and Anthropic’s ClaudeBot — respect technical measures like robots.txt files, which tell bots which parts of a site they can access. But if an organization restricts AI access too heavily, information vacuums can appear.
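As a sketch of what a balanced configuration might look like (the paths and policy here are illustrative, not a recommendation), a robots.txt file could let AI crawlers read public brand content, so the organization remains visible in AI summaries, while keeping them away from sensitive areas:

```text
# Standard search engines: unrestricted
User-agent: Googlebot
Allow: /

# AI crawlers: allow public brand pages, block sensitive paths
# (GPTBot, Google-Extended and ClaudeBot are the documented vendor tokens;
# /accounts/ is a hypothetical example path)
User-agent: GPTBot
Disallow: /accounts/
Allow: /

User-agent: Google-Extended
Disallow: /accounts/
Allow: /

User-agent: ClaudeBot
Disallow: /accounts/
Allow: /
```

Blocking all of these tokens site-wide is also possible, but, as noted above, doing so may create exactly the kind of information vacuum that scammers can exploit.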
A key concern is that many brands are underrepresented or invisible in AI summaries. A Geometriqs study in October found that among the top 80 brands analyzed, average visibility was only 4%, with one in five brands completely absent. Financial services were the second-worst sector at 2.9%, suggesting increased risk of financial scams. Brands outside Anglo-American markets fared worse.
How organizations can respond
To mitigate these risks, organizations should review their AI strategy and risk profile. Suggested measures include:
- Monitoring AI-generated results regularly, as AI search outputs can change frequently.
- Developing an AI optimization strategy, similar to traditional SEO, including reviewing robots.txt configurations and making content AI-friendly. Employ generative engine optimization (GEO) and E-E-A-T (experience, expertise, authoritativeness, trustworthiness) principles to improve credibility.
- Integrating AI risk management into brand protection, covering domain monitoring, trademark enforcement and other reputation safeguards.
- Promoting AI literacy internally. Educating staff on AI risks and opportunities aligns with EU AI Act requirements and strengthens mitigation strategies.
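The monitoring step above can start small. A minimal sketch, assuming you have already captured the text of AI-generated answers about your brand, is a script that checks whether the brand is mentioned at all and flags cited links outside a known allow-list (the brand name and domains below are hypothetical placeholders):

```python
import re

# Hypothetical allow-list: domains that legitimately represent the brand.
LEGITIMATE_DOMAINS = {"example.com", "shop.example.com"}

# Captures the host portion of an http(s) URL.
URL_RE = re.compile(r"https?://([^/\s]+)")

def audit_summary(summary_text: str, brand: str = "Example Corp") -> dict:
    """Check a captured AI answer for brand presence and unfamiliar links."""
    hosts = {h.lower().removeprefix("www.") for h in URL_RE.findall(summary_text)}
    return {
        "brand_mentioned": brand.lower() in summary_text.lower(),
        "unknown_domains": sorted(h for h in hosts if h not in LEGITIMATE_DOMAINS),
    }
```

Run periodically against fresh AI search outputs, a check like this surfaces both problems the article describes: the brand being absent entirely, and look-alike domains being cited in its place.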


Jonathan Armstrong is a partner at Punter Southall. He is an experienced lawyer with a concentration on technology and compliance. His practice includes advising multinational companies on matters involving risk, compliance and technology across Europe. He has handled legal matters in more than 60 countries involving emerging technology, corporate governance, ethics code implementation, reputation, internal investigations, marketing, branding and global privacy policies. Jonathan has counseled a range of clients on breach prevention, mitigation and response. He has also been particularly active in advising multinational corporations on their response to the UK Bribery Act 2010 and its inter-relationship with the U.S. Foreign Corrupt Practices Act (FCPA). 