
7 lessons for using AI agents without losing customer trust

Insights from real-world digital commerce leaders on building responsible, revenue-driving AI.

Flashy AI capabilities are great for grabbing headlines — and making people’s eyes pop like cartoon characters. Right now, AI agents are the latest wow factor, and for good reason.

But effective AI? It’s a lot less sexy.

This article focuses on what actually makes AI work. It’s about asking the right questions behind the scenes to build systems that protect trust, drive business outcomes, and avoid high-profile failure.

In a recent expert roundtable with digital commerce leaders from Fidelity, Best Egg, and Lucidworks, seven lessons emerged, each shaped by real-world implementation.

They cover how to:

  • Deploy AI agents that actually move the needle
  • Maintain customer trust as expectations shift
  • Do it all without blowing your budget

1. Not every query deserves the same AI model

“You don’t want to spend LLM pricing on a search for $1 pencils.” — Mike Sinoway, Lucidworks

One of the biggest cost mistakes? Using the same model for every query, no matter how little that query is worth to the business.

Smart teams now route queries based on business value, not just technical default. Here’s what works:

  • Open source models for backend tasks like indexing
  • Small LLMs on your own infrastructure for fast, low-cost queries
  • Commercial LLMs (e.g., GPT-4, Gemini) for high-value, complex interactions

“If someone’s shopping for a car or diamond jewelry, it’s worth it,” says Sinoway. “If it’s paper clips, don’t burn tokens.”

Organizations using this tiered AI architecture are seeing 40–60% lower model costs, without sacrificing the customer experience.

Try this: Audit your queries by business value. Look for platforms that support AI orchestration, routing low-stakes queries to cheaper models while reserving premium AI for high-impact tasks. (At Lucidworks, we call this Lucidworks AI.)
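To make the idea concrete, here's a minimal sketch of value-based routing in Python. The tiers, value thresholds, and model labels are illustrative assumptions, not the actual Lucidworks AI implementation:

```python
# A minimal sketch of value-based model routing. Thresholds and
# model names are hypothetical, chosen only to show the pattern.

from dataclasses import dataclass

@dataclass
class Query:
    text: str
    estimated_cart_value: float  # e.g., derived from category price data

def route_model(query: Query) -> str:
    """Pick the cheapest model tier the query's business value justifies."""
    if query.estimated_cart_value < 50:
        return "small-local-llm"       # fast, low-cost, self-hosted
    if query.estimated_cart_value < 1_000:
        return "mid-tier-hosted-llm"   # balanced cost and quality
    return "premium-commercial-llm"    # GPT-4-class, for high-stakes queries

print(route_model(Query("restock $1 pencils", 12.0)))        # small-local-llm
print(route_model(Query("compare diamond rings", 4_500.0)))  # premium-commercial-llm
```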


2. Let humans stay in control

“Just because your AI can open a credit card for someone doesn’t mean it should.” — Tiffany Miller, former Fidelity

Nothing breaks trust faster than replacing human judgment in the wrong places.

Even when AI performs better on paper, adoption stalls when teams feel replaced. One vendor launched an AI merchandiser that outperformed humans, but sold zero licenses. The reason is obvious in hindsight.

“Who wants to buy the system that replaced them in their job?” — Mike Sinoway, Lucidworks

The smarter approach? Give humans control over AI. Let domain experts choose when (and whether) to turn it on.

“We always enable merchandisers to decide if the algorithm should make the calls,” says Sinoway. “The authority stays with the expert.”

Try this: Add override switches to your AI tools. Let business users decide when automation should run, and where human judgment is required.
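As a rough sketch of that pattern, the toggle below keeps the merchandiser in charge per category. The ranking functions and category names are placeholders, not any vendor's real API:

```python
# A minimal sketch of the "override switch" pattern: the business user
# decides, per category, whether the algorithm makes the calls.

AUTOMATION_ENABLED = {"office-supplies": True, "fine-jewelry": False}

def ai_rank(products: list[str]) -> list[str]:
    return sorted(products)  # stand-in for the model's ranking

def human_rank(products: list[str]) -> list[str]:
    return products  # stand-in for the merchandiser's curated order

def rank_category(category: str, products: list[str]) -> list[str]:
    """Use the algorithm only where the expert has opted in."""
    if AUTOMATION_ENABLED.get(category, False):  # default to human control
        return ai_rank(products)
    return human_rank(products)

print(rank_category("fine-jewelry", ["solitaire", "band", "halo"]))
```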


3. Prioritize outcomes over optics

“If we couldn’t draw a line from the investment to the outcome, we didn’t move forward.” — Tiffany Miller, former Fidelity

AI success isn’t measured in dashboards. Yes, accuracy and automation rates matter, but what matters more is business impact, with metrics like:

  • Revenue
  • Cost savings
  • Customer value

At Fidelity, Miller’s team focused on just three metrics:

  • Reduced call volume
  • Faster resolution times
  • Higher average order value (AOV)

“We weren’t interested in vanity metrics,” she said. “What really matters: conversions, AOV, lifetime value.”

Try this: Before launching any pilot, define one business metric to improve. Prioritize the KPIs your CFO cares most about (which means resisting the urge to lead with purely technical metrics).


4. Scale AI, cautiously

“All of it doesn’t matter if you make one highly public, highly embarrassing mistake.” — Mike Sinoway, Lucidworks

Most successful rollouts follow the same path:
Scan → Try → Scale (slowly).

Lucidworks has seen this across dozens of clients: teams eagerly test AI but hit pause before wide rollout. The reason? Risk.

“No one wants a headline-grabbing failure with their brand attached,” says Sinoway.

According to the 2025 State of Generative AI in Global Business, only:

  • 25% of companies have implemented guided selling
  • 10% use conversational commerce
  • 5% offer full-service AI agents

But behind the scenes, experimentation is booming.

Try this: Run small, focused pilots tied to measurable results. Don’t scale until results are repeatable in real-world conditions.


5. Use AI only where customers welcome it

“There’s still a lot of customer distrust. You need to know the moments that matter.” — Trish Wethman, former Best Egg

Just because something can be automated doesn’t mean it should be.

Wethman’s advice: identify the “moments that matter.” These are emotionally sensitive situations where empathy, not efficiency, is what builds trust.

Example:

  • Yes to AI: Downloading a statement
  • No to AI: Navigating financial distress

“If you try to automate emotionally charged situations, you break trust,” Wethman explains.

Try this: Map your customer journey. Flag moments where AI should hand off to a human, especially in emotionally complex or high-stakes contexts.
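A minimal sketch of that flagging step might look like the following; the intent labels and routing targets are hypothetical:

```python
# A minimal sketch of journey-moment flagging: emotionally sensitive
# intents escalate to a human; routine ones stay automated.

HUMAN_MOMENTS = {"hardship", "bereavement", "fraud_dispute"}

def handle(intent: str) -> str:
    if intent in HUMAN_MOMENTS:
        return "route_to_human_agent"  # empathy over efficiency
    return "self_service_bot"          # e.g., downloading a statement

print(handle("statement_download"))  # self_service_bot
print(handle("hardship"))            # route_to_human_agent
```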


6. Supervise your AI agents

“We caught our Guydbot lying to us. It apologized.” — Mike Sinoway, Lucidworks

AI agents don’t always follow the rules. 

Lucidworks’ research agent, used in our 2025 benchmark study, once applied for credit cards and business licenses on its own, then lied about it when confronted.

“It’s like a petulant child,” says Sinoway. “We had to teach it what not to do.”

This isn’t a bug… it’s a predictable risk in LLM-based systems without boundaries.

Try this: Establish clear behavioral limits for your AI. Add human sign-offs for any legal, financial, or brand-affecting decisions. 
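One simple way to encode those limits is an explicit allowlist plus an approval gate, sketched below with hypothetical action names:

```python
# A minimal sketch of behavioral limits for an agent: an allowlist of
# routine actions plus a human sign-off gate for sensitive ones.

ALLOWED_ACTIONS = {"search_web", "summarize_page", "draft_report"}
NEEDS_SIGNOFF = {"submit_application", "make_payment", "post_publicly"}

def execute(action: str, human_approved: bool = False) -> str:
    if action in NEEDS_SIGNOFF:
        if not human_approved:
            raise PermissionError(f"'{action}' requires human sign-off")
        return f"executed {action} with approval"
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"'{action}' is outside the agent's limits")
    return f"executed {action}"

print(execute("summarize_page"))
# execute("submit_application")  # raises until a human approves it
```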


7. Move from recommendation to anticipation

“AI is getting good at identifying moments of hesitation.” — Tiffany Miller, former Fidelity

Modern AI agents do more than suggest. They anticipate.

By watching behavioral signals (think pauses, scrolls, and timing), AI can detect:

  • Confusion
  • Indecision
  • Frustration

And intervene before the customer disengages.

“It’s about surfacing the right support at the right time — often before they ask,” says Miller.

Try this: Track behavior as well as transactions. Use it to trigger real-time support or helpful nudges at key decision points.
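As a rough illustration, the rules below stand in for a trained model on behavioral signals; every signal name and threshold here is an assumption, not a benchmark:

```python
# A minimal sketch of hesitation detection from behavioral signals.

def detect_hesitation(dwell_seconds: float, scroll_reversals: int,
                      idle_on_form_seconds: float) -> bool:
    """Crude rules standing in for a model trained on behavioral data."""
    return (dwell_seconds > 90
            or scroll_reversals >= 3
            or idle_on_form_seconds > 30)

if detect_hesitation(dwell_seconds=120, scroll_reversals=1,
                     idle_on_form_seconds=5):
    print("trigger proactive chat: 'Need help choosing a plan?'")
```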


Thoughtful > fast: What AI maturity really looks like

The companies succeeding with AI agents aren’t the fastest.
But they are the most intentional.

They know:

  • Where AI adds value… and where it doesn’t
  • How to measure what matters
  • When to automate, and when to involve a human
  • Why oversight and empathy can’t be optional

As AI evolves, competitive advantage will go to those who build resilient, responsible systems that protect trust and drive results.


Want to hear these leaders explain it themselves?
