When AI Steps Between Brands and Buyers

The new decision surface and the fight to control it

I’ve been focused lately on how AI search will reshape discovery (fewer blue links, more synthesized answers), and what that means for traffic and SEO. That debate matters, and it’s still playing out.

But a recent flight booking made me realize that AI doesn’t just change how we discover brands; it changes the environment in which purchasing decisions get made.

The moment I stepped out of the funnel

I was booking a flight the way I usually do, starting on an aggregator site to get the lay of the land and then clicking through to my usual airline to finish the purchase. Once I logged in, I got hit with a familiar pop-up: my miles had expired, but I could reactivate them for a fee.

I hesitated.

Pre-AI, that hesitation would’ve played out entirely inside the airline’s ecosystem. I probably would’ve either paid the reactivation fee on the spot or closed the tab and told myself I’d think about it later. Either way, the airline’s funnel would’ve kept me inside a controlled environment designed to push me toward a decision.

But now we all have copilots.

Instead of impulsively acting on the pop-up (or procrastinating), I explained the situation to ChatGPT, gave it some context on my travel habits, and asked its advice on whether reactivating the miles actually made sense.

The answer didn’t just give me math. It gave me clarity, context, and decision-making confidence. (Spoiler alert: I didn’t reactivate the miles.)

That’s when it clicked.

LLMs as a neutralizing layer in commerce

What happened there wasn’t just “AI advice.” It was a structural shift in how the decision got made.

My LLM acted as a neutralizing layer.

It pulled me out of a brand’s decision environment—which they’ve spent decades optimizing around urgency, framing, and emotional leverage—and into a more neutral space to make a purchase decision.

Scarcity messages. Expiring offers. Anchoring. Decoy pricing. Installment framing. These aren't gimmicks; they're the conversion infrastructure modern commerce is built on, designed to keep buyers inside brand-controlled spaces long enough for those levers to work.

ChatGPT interrupted that loop.

The moment you move a purchase question into an AI assistant, the emotional temperature drops. The pressure dissipates. The decision becomes analytical instead of reactive.

That doesn't mean people stop buying; in some cases, AI-mediated traffic actually converts better by pre-qualifying intent. In Similarweb's third annual Global State of Ecommerce report, for example, ChatGPT referrals converted at roughly 11% versus 5% for organic search, though at lower overall volume. Either way, the shift is the same: the assistant becomes part of the decision surface.

And once that happens, brands lose exclusive control over how decisions are framed.

Why this matters more than “AI search”

This is why I think AI search is actually the appetizer, not the main course.

Yes, discovery will change. Yes, traffic patterns will shift. But those are upstream effects. The more destabilizing change is downstream, at conversion.

Early data already shows this ambiguity. Adobe, for example, has found that shoppers arriving from generative-AI sources tend to spend more time on site and view more pages, while conversion performance varies by category, context, and scale. Sometimes it’s lower. Sometimes it’s higher. What’s consistent is that behavior changes.

If buyers routinely step out of brand-designed funnels before purchasing—even briefly—the implications compound quickly: more deliberation, greater price sensitivity, weaker emotional leverage, and less predictable outcomes.

From a brand’s perspective, that’s not just a UX issue. It’s a shift in who controls the moment of decision.

Which brings us to agentic.

Agentic as the next evolution of the decision surface

Once you see LLMs as a neutralizing layer, the rush toward agentic commerce makes more sense.

Agentic isn’t just about automation or convenience. It’s the next evolutionary step.

If copilots pull consumers out of brand-controlled decision environments, agentic systems are the attempt to re-embed the decision environment around the assistant itself—defaults, bundles, nudges, incentives—all inside something that feels neutral, helpful, and automated.

In other words, if LLMs neutralize brand-controlled decision environments, agentic systems are the brand’s attempt to shape—if not control—that new neutral layer.

Through that lens, the “agentic hype” makes much more sense. If decisions are going to happen via AI anyway, better to influence the assistant than lose the moment entirely.

But that doesn’t mean it will work the way the hype suggests.

Why agentic adoption will be slower and messier

There’s a tendency to assume agentic adoption will follow the same curve as chatbots: fast, viral, inevitable. I don’t think that’s true.

The real constraint isn’t technology. It’s permission and trust.

It’s one thing to ask an AI for advice. It’s another to let it act on your behalf—to spend money, move points, change bookings, or make irreversible decisions.

Agentic systems only work when users grant access to:

  • payment methods
  • accounts and credentials
  • preferences and constraints
  • real-world consequences

That kind of delegation requires confidence, guardrails, and transparency—none of which scale overnight.

So, while brands will absolutely push agentic experiences aggressively, we’re likely to see a long, uneven middle phase: partial delegation, narrow use cases, and heavy human oversight.

Which sets up the real battle.

Brand-controlled agents vs. meta agents

The future of agentic commerce won’t be decided by whether agents exist. It will be decided by who controls the decision surface.

Brand-controlled agents will optimize for the brand: retention, margin, upsell, lock-in. Their incentives are clear, and their “helpfulness” will always be bounded by commercial goals.

Third-party or meta agents—whether platform-level, OS-level, or independent—will optimize for the user: price, substitution, constraints, and transparency.

The long-term winners won’t be the loudest or most aggressively marketed ones. They’ll be the layers that make delegation feel safe:

  • permissioning and spend controls
  • audit trails and explainability
  • reputation and trust systems

In other words, the infrastructure that builds meaningful, lasting trust between people and agents.
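That trust infrastructure is easiest to see in miniature. Here's a minimal sketch of how spend controls and an audit trail might gate an agent's purchases; the names (`GuardedAgent`, `SpendPolicy`) are illustrative, not taken from any real agent framework:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SpendPolicy:
    """Per-transaction and daily spend caps granted by the user."""
    per_txn_limit: float
    daily_limit: float
    spent_today: float = 0.0

@dataclass
class AuditEntry:
    """One reviewable record per attempted agent action."""
    timestamp: str
    action: str
    amount: float
    approved: bool
    reason: str

class GuardedAgent:
    """Wraps agent purchases in explicit spend controls and an audit trail."""

    def __init__(self, policy: SpendPolicy):
        self.policy = policy
        self.audit_log: list[AuditEntry] = []

    def request_purchase(self, item: str, amount: float) -> bool:
        approved, reason = True, "within limits"
        if amount > self.policy.per_txn_limit:
            approved, reason = False, "exceeds per-transaction limit"
        elif self.policy.spent_today + amount > self.policy.daily_limit:
            approved, reason = False, "exceeds daily limit"
        if approved:
            self.policy.spent_today += amount
        # Every decision is logged, approved or not, so the user can audit it later.
        self.audit_log.append(AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            action=f"purchase:{item}",
            amount=amount,
            approved=approved,
            reason=reason,
        ))
        return approved

agent = GuardedAgent(SpendPolicy(per_txn_limit=400.0, daily_limit=500.0))
print(agent.request_purchase("flight", 180.0))   # True: within both caps
print(agent.request_purchase("upgrade", 350.0))  # False: would exceed daily cap
```

The point of the sketch isn't the code itself; it's that delegation becomes tolerable only when limits are explicit and every action leaves a record the user can inspect.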

Where this leaves us

Brands are right to see agentic as existential. LLMs have changed the decision surface in ways traditional persuasion can’t fully recover from. Doing nothing isn’t an option.

But I still can't help seeing the rush into brand-controlled agents as a bit of a Hail Mary.

If this plays out the way most structural shifts do, we’ll land somewhere in the middle: slower adoption, fragmented control, and a rebalancing of power toward transparency and value.

Because once decisions are neutralized—once consumers can step outside the funnel at will—the old tricks matter less.

What brands should do now

In an agentic environment, brands don’t win by out-optimizing the assistant. They win by surviving comparison in it.

Even if a brand deploys its own agent, customers will still sanity-check decisions in neutral LLMs and meta agents—asking whether pricing is fair, whether an offer is worth it, and whether a loyalty program is actually delivering value. That’s where the separation happens.

The brands that hold up won’t be the ones with the cleverest agents or the most aggressive nudges. They’ll be the ones that remove friction before an AI has to call it out:

  • Transparent pricing and honest value exchange: No hidden fees. No fine-print gotchas. No “technically correct” offers designed to extract rather than reward.
  • Simple, legible programs that work as advertised: Loyalty that’s easy to understand, easy to redeem, and doesn’t require a spreadsheet to decode.
  • Messaging that assumes scrutiny, not naivety: Clear claims, realistic promises, and language that holds up when customers ask a neutral agent, “Is this actually worth it?”

In other words, brands need to optimize not just for conversion, but for AI-assisted evaluation.

Because in a world where every claim can be cross-examined instantly, the assistants won’t just surface options. They’ll surface the difference between contenders and pretenders.

And in that type of decision environment, what actually delivers value tends to rise to the top.
