7 ways CX leaders can close the AI trust gap with customers
Customers already believe in the power of AI to improve their experiences. Here's how companies can honor those expectations.
By Cristina Fonseca, Head of AI at Zendesk
Last updated: September 18, 2023
Few can deny the potential of AI to improve customer experiences, but many remain hesitant to trust it. That’s why building customer confidence in AI remains one of the biggest challenges facing CX leaders today.
The reasons are understandable: AI is powerful, very technical, and can be difficult to understand. The name alone can spark fears of human obsolescence—or getting trapped in an endless loop with a chatbot that can’t help. But in reality, both customers and companies stand to benefit immensely from these technologies with the right safeguards in place.
To close the trust gap with customers, leaders must ensure that any approach to AI implementation is done responsibly, transparently, and with purpose. Why? Trust isn’t given, it’s earned. And though it can take a while to build, companies can lose customer trust in an instant with AI that provides incorrect information, doesn’t protect personal data, or simply delivers a less-than-stellar experience.
Bad reputations are hard to shake. If you undermine a customer’s trust in AI or your ability to use it, you risk not only the relationship, but also their perception of the technology as a helpful tool.
The delicate balance between AI opportunity and risk
For now, what customers know about AI excites them. Our research shows that 70 percent of customers already believe in the potential of AI to deliver faster answers and better customer service. Even so, taking the wrong approach can quickly turn this optimism into frustration. People hate chatbots when they don’t work well. You don’t want to erode customer trust for the sake of efficiency—or because you’ve tried to automate too much.
According to Zendesk research, 70% of customers believe that AI can create more personalized and effective customer support experiences.
Introducing any new technology should be a step-by-step process. Start by understanding and addressing customer concerns over transparency, data privacy and security, and performance. Communicate clearly about your policies and plans, source best practices, and be open to feedback and change.
Here are seven tips to help CX leaders close the AI trust gap:
- Higher-quality experiences start with AI that understands your customers
If your AI isn’t trained on CX-specific data, it won’t be able to understand common customer questions—let alone solve them. For customers to trust AI, they have to believe that it can understand and help them.
We trained our AI solution on billions of customer interactions to ensure a deep understanding of each customer request—not only the language used, but also the sentiment and intent behind the query.
- Automation isn’t all or nothing
The first step in any AI strategy is understanding what should and should not be automated (hint: this won’t be everything). Start small with less risky requests to build customer (and team) confidence—things like status queries or shipping updates—and then work up to more complex issues like order cancellations or refunds.
- Escalate early and often
Just as it’s important for AI to understand your customers, it’s equally important for AI to understand its own limitations and escalate to a human agent when necessary. By passing off relevant information like order history and account details, it can quickly get the agent up to speed and prevent the customer from having to repeat information.
- Use confidence levels for AI prediction transparency
Not all AI predictions are the same—some will be made with greater confidence than others. If someone is acting based on these recommendations, we must be transparent about how confident our AI is in making them.
AI may deliver an excellent response in 80 percent of cases, but it will be remembered for the few times it got it wrong. That’s why transparency is critical to building long-term trust with admins, agents, and ultimately customers.
- Keep humans in the loop
When AI is less than confident, it should immediately get a human involved. In fact, human oversight will be the most important factor in keeping AI accurate, safe, and free of bias. According to our research, 81 percent of customers say that having access to a human agent is critical to maintaining their trust in AI-powered customer service.
- Deliver on security and privacy
Ensuring data privacy and security is critical to maintaining trust, but we found that only 21 percent of customers strongly agree that businesses are doing enough to protect them. That’s a critical gap.
At Zendesk, we handle billions of interactions containing sensitive data and must meet a high bar set by our customers. This includes developing our models in house and ensuring that no data leaves Zendesk infrastructure.
- Create additional safeguards for generative AI
Generative AI models can be a powerful ally in making interactions feel more human, but they can also deliver information that’s misleading or inaccurate. To avoid undermining trust or brand reputation, companies must tread carefully with these evolving technologies and leverage additional safeguards (including human oversight) to ensure that responses remain accurate.
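The escalation and confidence-transparency ideas in tips 3–5 can be sketched as a simple routing rule. This is a minimal illustration only—all names (`Prediction`, `route_request`, the thresholds, and the intent labels) are hypothetical and not part of any real Zendesk API:

```python
from dataclasses import dataclass, field

# Illustrative thresholds; real values would be tuned per use case.
AUTOMATE_THRESHOLD = 0.9   # act autonomously only on high-confidence predictions
SUGGEST_THRESHOLD = 0.6    # below this, skip suggestions and escalate outright

# Intents considered low-risk enough to automate (tip 2: start small).
LOW_RISK_INTENTS = {"order_status", "shipping_update"}

@dataclass
class Prediction:
    intent: str
    confidence: float                            # model confidence in [0, 1]
    context: dict = field(default_factory=dict)  # order history, account details, etc.

def route_request(pred: Prediction) -> dict:
    """Decide whether to automate, suggest a reply to an agent, or escalate."""
    if pred.intent in LOW_RISK_INTENTS and pred.confidence >= AUTOMATE_THRESHOLD:
        return {"action": "automate", "intent": pred.intent}
    if pred.confidence >= SUGGEST_THRESHOLD:
        # Surface the prediction to an agent with its confidence shown (tip 4).
        return {"action": "suggest", "intent": pred.intent,
                "confidence": pred.confidence}
    # Escalate early, passing context so the customer never repeats themselves (tip 3).
    return {"action": "escalate", "handoff": pred.context}
```

The key design choice is that uncertainty changes the action, not just the answer: high confidence on a low-risk intent automates, moderate confidence becomes a labeled suggestion for a human, and anything below that hands off with full context.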
AI can’t and shouldn’t do everything. Instead, focus on areas where AI can actually improve your process—things like shortening wait times or pointing customers to the right help article.
Most customers just want to get help as quickly as possible. When AI can help them do that, it’s the right tool for the job. When it can’t or when it risks creating a worse experience, that’s a job for a human agent.