No Strings Attached: Agentic AI Tests Privacy & Antitrust Boundaries

Companies racing to deploy autonomous AI agents face mounting questions about liability and consumer protection

by Joshua Goodman, Minna Naranjo and Phil Wiese
January 15, 2025

When AI can independently book your travel, analyze your emails and make pricing decisions, compliance concerns multiply. Morgan Lewis attorneys Joshua Goodman, Minna Naranjo and Phillip Wiese explore how agentic AI’s unprecedented autonomy challenges existing privacy and antitrust frameworks.

Many in the tech industry view AI agents as the next frontier in the development and commercialization of AI technology. AI agents, or “agentic AI,” can be thought of as AI-powered software applications that take in information on their own, without the step-by-step human instruction that current generative AI tools typically rely on, and then use that information together with other tools to accomplish a goal.

This nascent technology is likely to raise many important questions for the sector, including novel legal questions involving antitrust and data privacy law that expand ongoing AI debates and concerns. 

In fact, agentic AI is so new that there is no consensus in the technology industry on its precise definition. The basic concept is that an AI agent can act on the information it gathers, using a wider variety of tools to achieve a goal with a much higher degree of autonomy than current technologies. Think of a software application that could independently use other software tools like a web browser, spreadsheets or a word processor to accomplish goals defined for it by a user providing instructions in natural language — in much the same way you might instruct another person to help with a task.

In contrast, the generative AI tools that many people have become familiar with over the past couple of years typically rely on detailed prompts — specific and well-defined instructions — to guide the generation of new text, images or other media. AI agents are expected to operate in a somewhat similar way but with much more autonomy, greater capabilities and more broadly defined goals.

To illustrate, an AI agent might be given a task like scheduling a trip for you to visit another city, connecting autonomously with various travel websites and booking appropriate hotels and transportation on its own. Another example might be asking an AI agent to analyze a set of spreadsheets for certain data, process the data as needed, go to a particular website and enter the results of the analysis into an online form. Yet another example might be an agent that reads through your emails at regular intervals to find a certain type of correspondence and then takes appropriate actions based on the email, such as placing a particular order or arranging for a product return.
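To make the concept concrete, the sketch below shows one common pattern for an agentic loop: a model repeatedly decides which tool to invoke until the goal is met, without step-by-step human prompting. This is a minimal illustration only; the call_model function and the two travel tools are hypothetical stand-ins, not any vendor's actual API.

```python
# Minimal sketch of an agentic loop (illustrative only).
# `call_model` is a hypothetical stand-in for an LLM call; the tools are placeholders.
from typing import Callable

def search_flights(query: str) -> str:
    # Placeholder tool: a real agent would call a travel API here.
    return f"3 flights found for '{query}'"

def book_hotel(query: str) -> str:
    # Placeholder tool: a real agent would call a booking API here.
    return f"hotel reserved matching '{query}'"

TOOLS: dict[str, Callable[[str], str]] = {
    "search_flights": search_flights,
    "book_hotel": book_hotel,
}

def call_model(goal: str, history: list[str]) -> dict:
    # Stand-in for the model's decision: pick the next tool or finish.
    if not any("flights found" in h for h in history):
        return {"action": "search_flights", "input": goal}
    if not any("hotel reserved" in h for h in history):
        return {"action": "book_hotel", "input": goal}
    return {"action": "finish", "input": "itinerary complete"}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    # The agent loops: observe -> decide -> act, with no step-by-step human prompts.
    history: list[str] = []
    for _ in range(max_steps):
        decision = call_model(goal, history)
        if decision["action"] == "finish":
            history.append(decision["input"])
            break
        history.append(TOOLS[decision["action"]](decision["input"]))
    return history

print(run_agent("Book a two-night trip to Chicago in March"))
```

The key design difference from a prompt-driven chatbot is the loop: the model's output drives further tool calls rather than returning directly to the user.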

While these technologies are not yet in widespread use, and their capabilities and resulting impacts remain largely unknown and untested, several companies expect to roll out AI agents over the coming year.

Antitrust concerns

One use for AI agents could be to assist with product pricing and negotiation tasks. If AI agents are assigned to accomplish goals in these domains, agentic AI applications could potentially raise antitrust concerns that expand upon the issues that have commonly arisen so far in connection with algorithmic pricing.

To date, algorithmic pricing antitrust cases have alleged that competing companies use algorithms to collect sensitive nonpublic data and generate pricing recommendations that effectively fix prices, most prominently for rental properties.

For example, in United States v. RealPage, the DOJ and various state attorneys general alleged that RealPage's algorithmic pricing tool allowed competing landlords to share nonpublic information about apartments, including rent pricing, which was then used to generate pricing recommendations that were anticompetitive and harmful to renters. Other states and the District of Columbia have filed independent litigation regarding RealPage. Private plaintiffs have made similar allegations in the multifamily and apartment rental property industry and the hotel industry. At present, no court has found antitrust liability based on these types of allegations.

While all of this litigation remains ongoing, two federal district courts have dismissed complaints alleging algorithmic antitrust violations in the hotel industry. Among other issues, those courts cited the plaintiffs’ failure to allege any actual agreement to exchange confidential pricing information, to adhere to recommended prices or even to pool nonpublic, competitively sensitive information from different competitors via the relevant software during its generation of specific price recommendations.

Similarly, in DC’s RealPage case, the court dismissed a defendant from the case based on a showing that the defendant’s use of RealPage’s software did not involve any exchange of proprietary data. The DOJ and Federal Trade Commission (FTC) — which have filed statements of interest in several of the ongoing algorithmic pricing antitrust cases — have so far taken the position that the use of algorithms for pricing decisions can lead to antitrust violations even without explicit agreements to fix prices, and even where the software-generated pricing recommendations are nonbinding and deviated from in practice. It remains to be seen whether courts will endorse that view.

The legal landscape remains fluid as courts continue to navigate these issues, and we expect that similar and new theories of anticompetitive harm may arise in connection with AI agents. For instance, in a 2017 article, Ariel Ezrachi and Maurice E. Stucke distinguished “hub-and-spoke” algorithmic collusion concerns — where a single algorithm acts as the hub of a hub-and-spoke pricing conspiracy — from more complex concerns arising from the conduct undertaken by AI agents. The existing algorithmic pricing antitrust cases, as alleged, basically fall into the “hub-and-spoke” category.

AI agents used in pricing, on the other hand, may raise concerns about autonomous tacit collusion, which to date has not been a major issue with existing generative AI tools given their limitations. Specifically, AI agents acting independently of each other and of humans may be alleged to be capable of engaging in consciously parallel pricing behavior in a way that is more stable, disciplined and effective than human pricing actors. Consciously parallel, unilateral pricing behavior is typically lawful under US antitrust law. Accordingly, even if such outcomes are found to result empirically from the use of AI agents — a big if — antitrust liability under existing law is doubtful.  

It is also conceivable that agentic AI with pricing goals could autonomously reach anticompetitive agreements with each other or with human counterparties, absent any express instruction from a human to do so. While it remains unclear whether this possibility is realistic or practical, AI agents would seem to heighten the risk compared with existing software because of their higher degree of autonomy and the wider range of tools they may employ.

This possibility also raises complex and novel antitrust liability questions that will need to be addressed if this type of conduct is found to occur. For instance, could there be liability under antitrust law if an AI agent entered into a collusive agreement, and to whom would that liability apply? Are there practical scenarios in which an AI agent could even disregard express instructions not to reach anticompetitive agreements, similar to how human actors sometimes disregard instructions to act lawfully and, if so, how would that impact the liability question? And if two AI agents were to reach an anticompetitive agreement, what evidence is likely to exist that it occurred? Identifying evidence to bring claims under antitrust theories for AI agents, as well as the premises embodied in such theories themselves, may introduce a level of complexity far beyond the current algorithmic pricing cases.


Data privacy and cybersecurity issues

The use of agentic AI also raises a number of privacy and cybersecurity considerations. As companies roll out their AI agents, consumers may be wary about the collection and use of their personal data, including address or credit card information. Data security and privacy will be important issues for companies to proactively address in order to maintain trust and loyalty with their consumers.  

Using an AI agent may mean collecting large amounts of data to complete a task. For example, scheduling a trip to another city would require the traveler's schedule and preferences, credit card details and identifying information to book hotels and transportation, and potentially other identifying information to complete the process. Similarly, if an AI agent analyzes emails to automate certain actions, it may obtain personal information contained within consumers' email traffic.

Collecting large amounts of data also makes companies a more attractive target for bad actors and more likely to face cybersecurity attacks. When a company falls victim to such an attack, it may face myriad state and federal reporting obligations depending on the data that was lost in the incident, the company's relationship to the data (i.e., whether it was a data owner/controller or a service provider) and the residency of the consumers whose data was implicated. Following a cybersecurity attack, companies may also face civil liability or lawsuits from impacted consumers.

In addition to cybersecurity concerns, agentic AI will raise privacy considerations. Under numerous state privacy laws, including the California Consumer Privacy Act, companies may need to identify to consumers what personal information they collect, for what purpose that personal information is collected and with whom that personal information is shared. Companies will have to know and disclose what consumer information is provided to others.

In the travel example above, the company may need to disclose which travel websites its AI agent will use to book the travel and allow the consumer an opportunity to opt out of sharing personal information. State privacy laws may also require companies to notify consumers about automated decision-making depending on how it is used, which could include using agentic AI, and companies may also need to provide consumers the opportunity to opt out. Companies may also have an obligation to notify downstream vendors if a consumer decides to opt out of automated decision-making or the sharing of their personal information.  

Compliance considerations

While any company considering implementing agentic AI will need tailored legal guidance for its own particular circumstances, we see a few high-level considerations from a US federal antitrust law perspective that companies may want to keep in mind.

First, using an AI agent to monitor or enforce an express agreement between horizontal competitors to fix prices or output, rig bids or allocate markets will be treated as per se unlawful. Beyond avoiding the use of AI to facilitate a traditional, human-devised anticompetitive agreement, it may also be prudent to ensure that the prompts or configurations for AI agents involved in pricing tasks include appropriate limitations to prohibit the AI agent from seeking to enter into an express anticompetitive agreement on its own. Such limitations may be part of a broader approach to promote safe and ethical AI activity and could also potentially reflect limitations on other actions that raise antitrust risks short of entering into an express agreement, such as sharing certain information. 
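As an illustration of what such configuration-level limitations might look like in practice, the sketch below pairs guardrail instructions for a hypothetical pricing agent with a simple pre-send screen on outbound messages. The instruction text, phrase list and function names are illustrative assumptions, not a vetted compliance control or legal advice.

```python
# Illustrative sketch only: encoding antitrust guardrails in a pricing agent's
# configuration, plus a deliberately simple keyword screen on outbound drafts.
PRICING_AGENT_GUARDRAILS = """
You may rely only on public or internally sourced data when recommending prices.
You must not propose, accept, or signal any agreement with a competitor or a
competitor's agent regarding prices, output, bids, or market allocation.
You must not request or accept a competitor's nonpublic pricing information.
Escalate to a human reviewer before any outbound communication with a third party.
"""

PROHIBITED_PHRASES = (
    "agree on price",
    "match your price",
    "allocate the market",
    "share your nonpublic",
)

def screen_outbound_message(message: str) -> bool:
    """Return True if a draft message passes the keyword screen."""
    lowered = message.lower()
    return not any(phrase in lowered for phrase in PROHIBITED_PHRASES)

draft = "Can we agree on price levels for Q3 so neither of us undercuts the other?"
if not screen_outbound_message(draft):
    print("Blocked: draft flagged for human compliance review before sending.")
```

A keyword screen of this kind is deliberately crude; in practice it would sit alongside human review and logging rather than replace them.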

Second, the antitrust “rule of reason” generally applies to exchanges of information among competitors that are not predicated on agreements to fix prices or other traditionally per se unlawful categories of activity. Because the rule of reason weighs procompetitive benefits, companies should ensure that the applicable procompetitive benefits of using agentic AI are well-documented. Where appropriate, this might include having the agent document the information and steps taken in connection with its actions.
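One lightweight way to implement that kind of documentation, sketched below under the assumption that the agent's actions can be intercepted in code, is to append a structured, timestamped record of each action, its data sources and its rationale to an audit log. The field names and file format are hypothetical.

```python
# Minimal sketch of an audit trail for agent actions (illustrative assumptions only).
import json
from datetime import datetime, timezone

def log_agent_step(log_path: str, action: str, data_sources: list[str], rationale: str) -> None:
    """Append one timestamped, structured record per agent action."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "data_sources": data_sources,
        "rationale": rationale,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_agent_step(
    "pricing_agent_audit.jsonl",
    action="generated_price_recommendation",
    data_sources=["public_competitor_listings", "internal_cost_model"],
    rationale="Price set from internal cost-plus target; no nonpublic competitor data used.",
)
```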

Third, it is also advisable to carefully train business personnel to use and monitor the ongoing deployment of AI agents and to retain and exercise appropriate human oversight of certain key tasks.

With respect to privacy and data security, consumer disclosure and consent are key. Companies developing or using AI agents should identify to consumers what data is collected, for what purposes and who else will receive it so that consumers can make informed decisions about using such agents. Consumers have come to expect that a company's privacy policy will spell out this information.

Additionally, companies should collect the minimum amount of data necessary to achieve the goals of their AI agent and document how certain information will be used and when it will be deleted. Strong data maintenance and retention policies will help minimize adverse consequences if data is lost in a cyberattack. Companies should routinely undergo risk analyses of their data and their systems holding personal consumer information to minimize the risk of a cyberattack in the first instance.


Joshua Goodman, Minna Naranjo and Phil Wiese

Joshua Goodman is a partner at Morgan Lewis in Washington, D.C. With federal government experience spanning multiple administrations — including as a deputy assistant director of the Federal Trade Commission (FTC) and as counsel to the director of the FTC’s Bureau of Competition — he represents clients before the FTC, DOJ's Antitrust Division, state agencies and in court.
Minna Lo Naranjo is a partner at Morgan Lewis in San Francisco. She has worked on litigation, investigation and counseling matters in many industries including pharmaceutical, technology, airline, oil and gas and ride-sharing. Her experience includes multidistrict litigation, class-action and direct action defense, litigation against the DOJ, the FTC and state attorneys general, and counseling on matters spanning cartel and monopolization, breach of contract, fraud and unfair competition.
Phillip Wiese is an associate at Morgan Lewis in San Francisco. He counsels and defends companies in privacy and cybersecurity, as well as in complex commercial and consumer class-action litigation. He helps clients manage data security and other crisis incidents and represents them in any ensuing litigation.
