Ads ruined social media. Now they're coming to AI chatbots.

Chatbots might hallucinate and sprinkle too much flattery on their users — “That’s a fascinating question!” one recently told me — but at least the subscription model that underpins them is healthy for our wellbeing. Many Americans pay about $20 a month to use the premium versions of OpenAI’s ChatGPT, Google’s Gemini Pro or Anthropic’s Claude, and the result is that the products are designed to provide maximum utility.

Don’t expect this status quo to last. Subscription revenue has a limit, and Anthropic’s new $200-a-month “Max” tier suggests even the most popular models are under pressure to find new revenue streams.

Unfortunately, the most obvious one is advertising — the web’s most successful business model. AI builders are already exploring ways to plug more ads into their products, and while that’s good for their bottom lines, it also means we’re about to see a new chapter in the attention economy that fueled the internet.

If social media's descent into engagement-bait is any guide, the consequences will be profound.

One cost is addiction. Young office workers are becoming dependent on AI tools to help them write emails and digest long documents, according to a recent study, and OpenAI says a cohort of “problematic” ChatGPT users are hooked on the tool. Putting ads into ChatGPT, which now has more than 500 million active users, won’t spur the company to help those people reduce their use of the product. Quite the opposite.

Advertising was the reason companies like Mark Zuckerberg’s Meta Platforms Inc. designed algorithms to promote engagement, keeping users scrolling so they saw more ads and drove more revenue. It’s the reason behind the so-called “enshittification” of the web, a place now filled with clickbait and social media posts that spark outrage. Baking such incentives into AI will almost certainly lead its designers to find ways to trigger more dopamine spikes, perhaps by complimenting users even more, asking personal questions to get them talking for longer or even cultivating emotional attachments.

Millions of people in the Western world already view chatbots in apps like Character.ai, Chai, Talkie, Replika and Botify as friends or romantic partners. Imagine how persuasive such software could be when its users are already emotionally attached. Imagine a person telling their AI they’re feeling depressed, and the system recommending some affordable holiday destinations or medication to address the problem.

Is that how ads would work in chatbots? The answer is still being worked out, and companies are experimenting. Google’s ad network, for instance, recently started putting advertisements in third-party chatbots. Chai, a romance and friendship chatbot whose users spent an average of 72 minutes a day on the app in September 2024, serves pop-up ads. And AI answer engine Perplexity displays sponsored questions: after an answer to a question about job hunting, for instance, it might show a list of suggested follow-ups with, at the top, “How can I use Indeed to enhance my job search?”

Perplexity's Chief Executive Officer Aravind Srinivas told a podcast in April that the company was looking to go further by building a browser to “get data even outside the app” to track “which hotels are you going [to]; which restaurants are you going to,” to enable what he called “hyper-personalized” ads.

For some apps, that might mean weaving ads directly into conversations, using the intimate details users share to predict what they want, or even manipulate them into wanting it, and then selling those intentions to the highest bidder. Researchers at Cambridge University referred to this as the forthcoming “intention economy” in a recent paper, with chatbots steering conversations toward a brand or even a direct sale. As evidence, they pointed to a 2023 blog post from OpenAI calling for “data that expresses human intention” to help train its models, a similar effort from Meta, and Apple’s 2024 developer framework that helps apps work with Siri to “predict actions someone might take in the future.”

As for OpenAI’s Sam Altman, nothing says “we’re building an ad business” like hiring the person who built delivery app Instacart into an advertising powerhouse. Altman recently poached Instacart’s chief executive officer, Fidji Simo, to help OpenAI “scale as we enter a next phase of growth.” In Silicon Valley parlance, to “scale” often means to quickly expand your user base by offering a service for free, with ads.

Tech companies will inevitably claim that advertising is a necessary part of democratizing AI. But we’ve seen how “free” services cost people their privacy and autonomy — even their mental health. And AI chatbots know more about us than Google or Facebook ever did — details about our health concerns, relationship issues and work. In just two years, they have also built a reputation as trustworthy companions and arbiters of truth. On X, for instance, users frequently bring the AI models Grok and Perplexity into conversations to flag whether a post is fake.

When people trust AI that much, they’re more vulnerable to targeted manipulation. AI advertising should be regulated before it becomes too entrenched, or we’ll repeat the mistakes made with social media — scrutinizing the fallout of a lucrative business model only after the damage is done.

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.”
