When AI decisions create customer friction


I was traveling for work and used my credit card in two different states within 24 hours. That wasn’t typical for me, but it made sense given the route I was driving. 

Apparently, the combination of charges in multiple states and an unusual purchase pattern was enough to get my card declined at the gas pump. Good thing I had a backup card: I filled up and continued my trip without much disruption.

Still, I was curious. When I got home, I called customer service to understand what happened. The representative explained that their AI fraud detection system had flagged the activity as suspicious and automatically shut off my card. The company had my best interests in mind, but the experience was frustrating. It also made me think about what would’ve happened if I didn’t have another way to pay.

Not long ago, a customer service representative might’ve called me to verify the charges. A quick conversation could’ve cleared things up in seconds. Today, AI often bypasses that step entirely and makes the decision instantly. That efficiency is powerful, but when AI misreads the situation, it creates friction for the customer.

That same dynamic is increasingly showing up in B2B. Every day, we deploy AI-driven systems across marketing and revenue operations, including lead scoring models, account prioritization, fraud detection and automated personalization.

All of these systems are designed to help us move faster and make better decisions. In many cases, they’re designed to save companies money. But they also raise an important question: What happens when the model gets it wrong?

When AI falls short, the impact shows up as lost revenue, lost retention and lost trust.

How AI models interpret signals

AI systems are only as strong as the signals they’re trained on.

Historically, lending decisions were based on criteria that consumers could understand and correct. Credit scores, documented income and payment history all played clear roles. If something looked wrong, a person could ask questions or provide additional information.

Today, many lenders use complex AI-enhanced models that incorporate a wide range of digital signals. On the surface, this sounds innovative. However, in practice, it can produce decisions that feel confusing, intrusive or even unfair. This is especially true when the signals are only loosely connected to a person’s actual ability to repay.


Korin Munsterman, writing in Accessible Law, highlighted several digital signals that financial services companies have used to predict repayment behavior:

  • Device type: Some studies found that iPhone users default at nearly half the rate of Android users. In other words, the type of phone in your pocket could quietly influence whether a lender sees you as higher risk.
  • Email provider choice: Research suggests that people using premium email services such as Outlook defaulted at lower rates than users of older free services like Yahoo or Hotmail. Something as simple as which email service you signed up for years ago could become a signal about your financial profile.
  • Shopping timing patterns: Consumers who shopped between midnight and 6 a.m. were found to default at nearly twice the rate of those who shopped during normal business hours. Late-night browsing may look harmless to you, but to a model it can look like risk.
  • Text formatting habits: Consistently typing in all lowercase correlated with a default rate more than twice that of people who used standard capitalization. Even more striking, people who made typing errors in their email address had significantly higher default rates.
  • Shopping approach: Consumers who arrived via price comparison sites were less likely to default than those who clicked through advertising links.

Individually, each of these signals might have some statistical relationship to repayment behavior, but none of them proves someone is a credit risk. When models lean too heavily on patterns like these, they risk misclassifying people who don't fit the expected profile.
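
To see how weak proxies can stack up, here is a minimal, hypothetical sketch of a logistic scoring model. The coefficients, signal names and baseline are invented for illustration; no real lender's model is shown.

```python
# Hypothetical sketch: weak proxy signals combined in a logistic model.
# All weights, names and the baseline are invented for illustration.
import math

WEIGHTS = {
    "android_device": 0.6,
    "legacy_email_provider": 0.4,
    "late_night_shopper": 0.7,
    "all_lowercase_typing": 0.8,
    "arrived_via_ad_click": 0.3,
}
BIAS = -2.5  # baseline log-odds of default (roughly an 8% probability)

def default_probability(applicant: dict) -> float:
    """Each proxy the applicant trips nudges the default odds upward."""
    z = BIAS + sum(w for k, w in WEIGHTS.items() if applicant.get(k))
    return 1 / (1 + math.exp(-z))

# A night-shift worker who shops after midnight on an Android phone and
# types in lowercase trips three proxies at once, whatever their actual
# repayment history looks like.
applicant = {"android_device": True, "late_night_shopper": True,
             "all_lowercase_typing": True}
print(f"{default_probability(applicant):.0%}")  # ~40%, up from ~8%
```

No single proxy is damning, but three weak signals together push the score across a threshold the applicant never knew existed.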

When AI misclassifies B2B buyers

The same issue appears in B2B systems as well. A highly qualified corporate buyer who behaves differently than past buyers may get deprioritized. An enterprise account with low early engagement might be labeled as cold. A model trained on last year’s behavior may fail to recognize how buyer journeys have shifted this year.

Individually, these may seem like small misses. But once automation begins making decisions at scale, the stakes grow quickly.
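
Here is a minimal sketch of how that kind of miss can happen, assuming a simple rules-based lead scorer tuned to last year's buyer journey. Every signal name and threshold is invented for illustration.

```python
# Hypothetical sketch: a lead scorer that rewards the engagement
# patterns past buyers showed. Signals and thresholds are illustrative.

def score_lead(lead: dict) -> int:
    score = 0
    if lead.get("webinar_attended"):
        score += 30
    if lead.get("whitepaper_downloads", 0) >= 2:
        score += 20
    if lead.get("email_opens_30d", 0) >= 5:
        score += 25
    if lead.get("pricing_page_visits", 0) >= 1:
        score += 25
    return score

# An enterprise buying committee that researches anonymously and
# engages late scores as "cold" despite being a strategic opportunity.
enterprise_lead = {"webinar_attended": False, "whitepaper_downloads": 0,
                   "email_opens_30d": 1, "pricing_page_visits": 0}
print(score_lead(enterprise_lead))  # 0 -> quietly deprioritized
```

Nothing in the scorer is wrong on its own terms; it simply encodes last year's journey as the only path to qualification.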

This is where everything connects back to that moment at the gas pump. In my case, the inconvenience was small. But imagine similar situations in a B2B environment:

  • A high-value account is incorrectly flagged and temporarily locked out.
  • A pricing or eligibility model produces results that feel inconsistent or unfair.
  • A lead scoring model quietly deprioritizes a strategic opportunity.

In these cases, customers experience friction. In B2B, friction has real consequences: friction erodes trust, trust influences renewal and renewal drives revenue. If we’re going to use AI at scale, what does responsible use actually look like?

What responsibility looks like

The burden shouldn’t fall on customers or prospects to absorb the downside of automation. For those of us deploying AI in marketing and revenue systems, responsibility means a few things.

  • Keep humans involved in high-impact decisions: If a model influences revenue qualification, pricing, access or eligibility, there should always be a clear review path.
  • Be able to explain what’s happening: If sales asks why an account score dropped, “the model updated” isn’t a sufficient answer. We should understand the drivers behind the change.
  • Monitor for drift: Buyer behavior changes. Markets evolve. Models trained on historical data require ongoing review, not set-it-and-forget-it deployment; a simple distribution check, sketched after this list, can catch drift early.
  • Treat efficiency and experience as equal priorities: Automation should reduce friction, not create it.
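
As one example of what drift monitoring can look like in practice, here is a minimal sketch using the population stability index (PSI), assuming model scores are logged over time and normalized to [0, 1]. The score distributions are simulated, and the 0.2 alert threshold is a common rule of thumb, not a universal standard.

```python
# Minimal drift check: compare the score distribution the model was
# trained on against this week's scores using PSI.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI across equal-width bins of the score range [0, 1]."""
    edges = np.linspace(0, 1, bins + 1)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid division by / log of 0
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline_scores = np.random.beta(2, 5, 10_000)  # training-time scores
current_scores = np.random.beta(4, 3, 1_000)    # this week's scores
psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:  # common rule-of-thumb alert level
    print(f"PSI {psi:.2f}: score distribution shifted; review the model")
```

A check like this doesn't say what changed, only that the inputs or outputs no longer look like what the model was trained on, which is the cue for a human review.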

AI is an accelerator. But acceleration without oversight can quietly erode the relationships we’re trying to build. When AI gets it right, no one notices. When it gets it wrong, your customer does.


