The US stands at a critical AI crossroads: regulatory frameworks established over recent years face significant rollbacks even as initiatives like the $500 billion Stargate data center project accelerate. Captain Compliance founder Richart Ruddie analyzes the implications of this regulatory retreat, highlighting how businesses must prepare for a complex environment in which national security controls persist while safety testing requirements disappear, potentially leaving privacy protection to a patchwork of state-level regulations.
AI is no longer a futuristic promise; it’s a present-day juggernaut reshaping the world, perhaps the biggest global shift since the internet was opened to the public. The trouble with a change this monumental is that it affects all of us, whether we know it or not.
The US stands at a crossroads in President Donald Trump’s second term, where a deregulatory push threatens to unravel years of AI governance efforts. The appointment of David Sacks as the White House AI czar signals a pro-innovation stance, but at what cost to privacy? With a Biden-era AI executive order revoked and new regulations in flux, businesses and residents alike face an uncertain landscape. Will this hands-off approach unleash AI’s potential or expose us to unchecked data exploitation?
Where President Joe Biden’s policies had established safety testing requirements for advanced AI models, Trump’s moves explicitly prioritize innovation and technological development, framing deregulation as “unleashing the potential of the American citizen” by removing what the administration characterizes as bureaucratic barriers to progress in the AI sector.
This deregulatory zeal isn’t new. Trump’s first term saw an executive order pushing AI leadership with minimal federal interference. Now, with generative AI booming, the stakes are higher.
The risk? Without guardrails, data privacy could erode as companies race to exploit AI’s capabilities — think mass surveillance, unchecked profiling or algorithmic bias run amok. Privacy-conscious advocates fear this rollback leaves us vulnerable, especially as Trump’s recent order also greenlights federal land for AI data centers, amplifying data collection scale.
The regulatory ripple effect: What’s staying, what’s going?
Trump’s deregulatory wave doesn’t erase all AI rules — national security keeps some intact — but it reshapes the landscape dramatically.
Regulations likely to disappear
- Safety testing mandates: Biden’s EO 14110 required companies to report large-scale AI model training and share red-team results. With that order revoked, there is no federal check on risky AI development.
- Equity and bias rules: The 2023 order pushed agencies to tackle algorithmic discrimination. Trump’s team views these provisions as “ideological bias,” so expect them to vanish.
- Commerce reporting: Cloud providers no longer need to flag foreign AI training on US systems, per the scrapped Commerce Department rules.
Regulations holding firm
- Export controls: The Bureau of Industry and Security’s September 2024 rules on AI-related semiconductors (e.g., neural network chips) target China and are likely to stay, reflecting Trump’s hawkish stance.
- Bulk data restrictions: EO 14117 (February 2024) curbs sensitive data transfers to “countries of concern.” Trump’s focus on countering adversaries suggests this survives.
- Defense AI: Pentagon initiatives, like AI-driven cyber defenses, align with Trump’s military priorities and won’t budge.
Potential new regulations
- Crypto-AI fusion: Sacks oversees both AI and crypto, hinting at policies blending blockchain with AI — perhaps incentivizing decentralized data systems.
- State-level patchwork: With federal retreat, states like California might tighten AI privacy laws (e.g., expanding CCPA/CPRA), creating a compliance maze.
- Voluntary frameworks: Sacks could push industry-led standards, but without enforcement teeth, they’re more PR than protection.
This flux leaves businesses guessing and privacy at risk as metadata harvesting and AI profiling ramp up unchecked.
David Sacks: The AI czar steering the ship
Sacks, a venture capitalist and tech insider, steps into the AI czar role with a clear mandate: turbocharge US AI dominance. Sacks was an early PayPal team member, ran the enterprise social network Yammer, which was sold to Microsoft, and in more recent years has been known for his work at Craft Ventures.
After his AI czar appointment was announced in late 2024, Sacks started rallying other Silicon Valley titans and has brought a Silicon Valley playbook (pro-startup, anti-regulation and laser-focused on economic competitiveness) to the White House. His influence is already tangible: In January, tech CEOs like Sam Altman from OpenAI and Oracle’s Larry Ellison joined Trump to unveil Stargate, a $500 billion AI data center project, signaling industry buy-in.
Sacks’ vision aligns with Trump’s “America First” ethos, but it’s light on privacy specifics. His past critiques of Big Tech bias suggest a hands-off stance on content moderation, yet his silence on data protection raises red flags.
Will he push for voluntary privacy standards or let market forces dictate? With Elon Musk whispering in Trump’s ear — Musk’s xAI shuns guardrails — the czar’s direction could tilt toward unfettered innovation, sidelining compliance.
For privacy hawks, this duo spells trouble: Less oversight might mean more data breaches, surveillance and litigation. Bad actors may not start complying until litigation over privacy violations spikes, and that prospect worries privacy advocates.