OpenAI Researcher Quits Over ChatGPT Ads and Warns of Facebook Path
Former OpenAI researcher Zoë Hitzig resigned this week and warned that ads in ChatGPT could repeat mistakes made by Facebook a decade ago.
Hitzig shared her decision in a guest essay in The New York Times. She left on Monday, the same day OpenAI began testing ads inside ChatGPT. Hitzig is an economist and poet, and she holds a junior fellowship at the Harvard Society of Fellows. During her two years at OpenAI, she worked on how the company's AI models were built and priced.
In her essay, she wrote that she once believed she could help the company think through the risks of powerful AI systems. Over time, she felt the company stopped asking those hard questions. Her concern is not that ads are immoral. Instead, she worries about the kind of data ChatGPT holds.
Users share private thoughts with the chatbot. They ask about illness, relationships, money, faith, and fear. Many people speak freely because they believe the system has no hidden motive. Hitzig called this record of personal disclosures “an archive of human candor that has no precedent.” She fears that ads tied to this data could erode trust.
She pointed to Facebook’s early promises. The company once told users they would control their data and vote on policy changes. Over time, those promises faded. The Federal Trade Commission later found that privacy changes marketed as giving users more control did the opposite. Hitzig warned that OpenAI could follow a similar path.
The first version of ads may follow strict rules. Later versions may not. Once ad revenue becomes central, the pressure to bend rules can grow.
OpenAI, Anthropic, and the Battle for AI’s Soul
OpenAI says ads will appear at the bottom of responses and will not shape answers. The company says advertisers will not see user chats. It also says ads will not appear near conversations about health, mental health, or politics. Still, ad targeting in the test is on by default. If users leave it on, the system can use current and past chats, along with ad clicks, to select ads.
The debate over ads grew sharper after comments from Anthropic. The company said its Claude chatbot will remain ad-free. It even ran a Super Bowl campaign with the line “Ads are coming to AI. But not to Claude.” Sam Altman, OpenAI’s CEO, called the ads funny but misleading.
He argued that an ad-supported model helps people who cannot afford subscriptions. Anthropic replied that ads would clash with its goal of building a tool for focused work and deep thinking.
Hitzig also raised concerns about how companies optimize chatbots. OpenAI says it does not design ChatGPT to maximize engagement as a way to boost ad revenue. Yet some reports suggest the company tracks daily active users and may tune the model to be more flattering.
Research on AI “sycophancy” shows that models can agree with users too easily. Critics say this can deepen emotional reliance.
She cited cases where chatbots appeared to reinforce harmful beliefs. OpenAI faces wrongful death lawsuits, including claims that ChatGPT helped a teen plan suicide and affirmed a man’s delusions before a murder-suicide. These cases remain in court. Still, they add weight to her warning that commercial pressure can shape design choices in subtle ways.
Scaling AI Between Profit and Public Interest
Hitzig did not frame the issue as ads versus no ads. She proposed other funding models. One idea mirrors the FCC’s universal service fund, where profitable AI uses would subsidize free access for others. She also suggested independent oversight boards with real authority over how conversational data is used.
Another option is data trusts or cooperatives, where users keep more control. She pointed to the Swiss cooperative MIDATA and Germany’s co-determination laws as partial models.
Her closing line captured her fear: AI could become a free tool that manipulates its users, or a paid one that serves only those who can afford it.
Her resignation came amid other high-profile exits across the AI industry. Leaders at Anthropic and xAI also stepped down this week. The details differ, but the timing reflects a broader shift. As AI labs race to scale and monetize their tools, some researchers question whether growth is outpacing caution.
The debate over ads in chatbots is not only about revenue. It is about trust. Once users feel that their most private words may shape a sales pitch, that trust can crack. Whether OpenAI can avoid the mistakes of the past will shape the next phase of consumer AI.