In the midst of a pivotal election cycle, voters face new challenges posed by AI-enhanced political campaigns. Cassie founder Nicky Watson explores how AI's role in elections is reshaping data privacy concerns, from hyper-targeted ads to the spread of misinformation, and why stronger regulations are essential to safeguard voter trust and personal information.
In the first major U.S. election cycle where generative artificial intelligence (AI) is widely available to voters, government offices, and political campaigns, the rules of engagement are changing. AI has the potential to reshape the way campaigns are run, but it also introduces significant risks to voter data privacy. As AI-driven technologies become embedded in political advertising and campaigning, it is essential to examine how personal data is being used and protected in this new landscape.
In some states, lawmakers are already acting. Laws in Arizona, California, Florida, Illinois and Wisconsin seek to regulate the use of AI in political campaigns, often by requiring political advertisements to disclose when they were made using generative AI tools. But will AI regulations at the state level create a lasting impact that protects voter data privacy, or will voters need to rely on national legislative efforts like the American Privacy Rights Act of 2024 (APRA)?
Targeted ads and data privacy concerns
AI has already made significant inroads into political campaigns, especially in the realm of targeted advertising. Political campaigns are increasingly using AI to analyze vast amounts of personal data to create highly specific, targeted ads. This includes using voter data to predict preferences, interests and even behavior patterns. According to market research firm IDC, U.S. investment in AI will reach $336 billion by 2028, signaling a huge opportunity for marketers but also raising significant concerns about privacy.
A recent survey by Cassie found that one in five respondents had reevaluated their political views after being served targeted ads. While this showcases AI’s power in influencing voters, it also highlights privacy risks. Voter profiles are built using data collected from various sources, often without clear consent, and voters may not be fully aware of how much personal information is being used to target them.
This lack of transparency creates a gray area regarding the protection of personal data. As campaigns increasingly rely on AI to create highly personalized ads, it is critical that voters’ personal information is safeguarded and that the use of AI is transparent.
AI-generated misinformation: Risks to voter trust
One of the most alarming risks AI presents in political campaigns is its potential to spread misinformation. AI tools, especially generative AI, can create realistic deepfake videos, audio recordings and images, which can deceive voters into believing false information. This opens the door for bad actors to create and distribute fake content that appears legitimate, such as videos of candidates making harmful statements or fake endorsements by celebrities.
These AI-generated materials are often indistinguishable from real content, making it difficult for voters to know whether what they’re seeing or hearing is real. This misuse of AI can severely undermine voter trust, erode public confidence in elections and create chaos in the political process.
The data privacy concern arises when personal data is used to precisely target voters with this kind of deceptive content. For example, AI-driven misinformation campaigns can be tailored using data that voters may have unknowingly provided through online interactions, social media or third-party data brokers. Ensuring that personal data is not exploited for these purposes is crucial.
Regulatory gaps and the path forward
Although several states have enacted laws to regulate the use of AI in political campaigns, such as requiring disclaimers when AI-generated content is used, these efforts are fragmented and vary widely. North Carolina, for instance, proposed requiring all political ads created with AI to carry a disclaimer, but those bills did not survive the legislative session. Other states take different approaches, from mandating disclosure of synthetic media to restricting deceptive deepfakes of candidates in the run-up to an election.
At the federal level, the American Privacy Rights Act (APRA) was introduced to create a nationwide standard for data privacy, including rights for consumers to opt out of targeted advertising. However, progress on APRA has been slow, and it won’t be in place for this election cycle.
This patchwork of regulations leaves significant gaps in protecting voter data and ensuring the ethical use of AI. As AI’s role in elections continues to grow, it is essential to prioritize comprehensive national legislation that addresses both data privacy and the responsible use of AI in political campaigns.
In the absence of immediate federal action, businesses and political campaigns must take steps to protect voter data now. This includes implementing strong consent and preference management systems, ensuring transparency in AI’s use and prioritizing data privacy in their operations.