CloudRadial’s Blog for MSPs

Using AI to Coach Your Service Desk: What Actually Works (And What Doesn't)

Written by Ricky Cecchini | January 29, 2026

I've spent the last few months working directly with MSPs implementing ServiceAI, and I want to share what I'm seeing: the good, the bad, and the stuff that'll make you rethink how you evaluate your technicians.

Most service desk managers have a gut feeling about how their techs are performing. You're pretty sure they're doing okay because you're not getting complaints, and nothing's on fire. But in the back of your head, you know their notes could be better. Their bedside manner might be lacking. And honestly? You just don't have time to dig through thousands of tickets to find out.

That's exactly the problem ServiceAI was designed to solve—not by replacing human judgment, but by making it actually feasible to understand what's happening across your entire service desk.

 

The RPS Score: Your New Best Friend (Or Worst Enemy)

Let me start with something crucial: we created what we call the Relative Performance Score (RPS). It's a 1-10 scale that gives you a numerical value for things that have always been subjective.

We track three main RPS scores:

Ticket RPS looks at the quality of the conversation between your tech and the client. This isn't about whether they solved the problem—it's about how they solved it. Did they explain what they did? Were they empathetic? Did they leave breadcrumbs for the next person (or AI) that might handle a similar issue?

Agent RPS is where things get interesting. This score analyzes how your technician actually performed—their empathy, their understanding of the issue, how they handled frustration, their response time. It's the nitty-gritty quality control that no service desk manager has time to do manually.

User RPS measures the client's behavior. This one's completely out of your control, but it tells you something important: are your techs getting ground down by difficult clients? Because if you've got technicians with great Agent RPS scores but consistently low User RPS scores, you might have an attrition problem brewing.
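To make the three scores concrete, here's a minimal sketch of how you might model them and surface the attrition signal described above (strong Agent RPS paired with consistently low User RPS). The field names, thresholds, and data shapes are my own illustrative assumptions, not ServiceAI's actual data model:

```python
from dataclasses import dataclass

# Illustrative record shape -- field names are assumptions, not
# ServiceAI's internal schema.
@dataclass
class TicketReview:
    ticket_id: str
    agent: str
    ticket_rps: float  # conversation quality, 1-10
    agent_rps: float   # technician performance, 1-10
    user_rps: float    # client behavior, 1-10

def attrition_risk(reviews, agent, good_agent=7.0, rough_user=4.0):
    """Flag techs who perform well but keep absorbing difficult clients."""
    mine = [r for r in reviews if r.agent == agent]
    if not mine:
        return False
    avg_agent = sum(r.agent_rps for r in mine) / len(mine)
    avg_user = sum(r.user_rps for r in mine) / len(mine)
    return avg_agent >= good_agent and avg_user <= rough_user

reviews = [
    TicketReview("T-101", "ricky", 7.5, 8.2, 3.5),
    TicketReview("T-102", "ricky", 7.0, 7.8, 3.9),
]
print(attrition_risk(reviews, "ricky"))  # True: strong tech, rough clients
```

The thresholds (7.0 and 4.0) are arbitrary starting points; the article's later observation that AI scoring rarely exceeds 8 or 9 suggests you'd tune these against your own baseline.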

 

The Part Where I Tell You AI Isn't Perfect (And Why That's Okay)

I'm going to say something that might surprise you: I am terrified of you using this tool as a hammer to punish your technicians.

Why? Because AI can misinterpret things. It absolutely can. And I want to be crystal clear about this.

Here's what I see in real implementations: AI tends to be tough on scoring. I genuinely don't think it's capable of giving a perfect 10. In practice, an 8 or 9 is about as good as it gets for human performance. When I see 5s and 6s, that's when I start paying attention. Anything below that? Either something catastrophic is happening, or the AI misread the situation.

But here's where ServiceAI gets powerful: we give you the tools to verify everything.

 

How to Actually Use This Thing

When you load your ticket data into ServiceAI, the first thing you should do is... nothing. Just let it analyze your existing tickets naturally. Don't tell your techs they're being monitored. Just get a baseline of how things actually work today.

Then, when you see something interesting in the scores, you can drill down. Every performance report we generate includes:

  • The overall RPS scores and insights
  • A breakdown of specific sub-factors (empathy, technical knowledge, communication clarity, response time)
  • Links to the actual tickets we analyzed

That last part is crucial. You can click through to verify whether the AI got it right. Did it say John needs to work on his empathy? Pull up the tickets and see for yourself.

 

The Questions That Actually Matter

The real power of ServiceAI isn't just the scores—it's the ability to ask questions about your own data. Our Orion Assistant lets you do things like:

  • "Who needs training?" (and it'll look at closed ticket rates, open tickets, note quality)
  • "Is Ricky as good as Tevin?" (and it'll compare ticket volume, scores, and specific strengths)
  • "Which tickets need my attention today?" (flagged customers, complicated multi-tech tickets, potential sales opportunities)

I recently helped an MSP discover something fascinating: one of their techs was consistently getting credit for tickets solved by other team members. The tech was putting in minimal work and getting carried. Without AI analysis across thousands of tickets, they never would have caught that pattern.

 

The Stuff That Trips People Up

Let me save you some headaches I've seen in real implementations:

Problem #1: Ghost technicians. Sometimes your PSA pulls in API users or "Client Portal" as if they're real people. ServiceAI will dutifully score them, and suddenly your best technician is... a robot. We built exclusion filters specifically for this—you can omit agents, exclude tickets by subject line, or even exclude entire companies.

Problem #2: Alert tickets tanking scores. If your RMM alerts get assigned to techs and auto-closed with no notes, the AI thinks your tech ignored the client. Solution? Exclude those ticket subjects, or better yet, use separate boards for client-facing work.
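Both of those fixes boil down to filtering noise out of the data before anything gets scored. Here's a rough sketch of the idea in Python; the exclusion lists, ticket fields, and subject prefixes are illustrative assumptions (in ServiceAI itself you'd configure exclusion filters rather than write code):

```python
# Hypothetical pre-filter: drop ghost technicians and auto-closed
# alert tickets before scoring. All names here are made up for
# illustration -- adjust to whatever your PSA actually emits.
EXCLUDED_AGENTS = {"Client Portal", "API User"}            # ghost technicians
EXCLUDED_SUBJECT_PREFIXES = ("RMM Alert:", "Monitoring:")  # auto-closed noise

def scoreable(ticket):
    """Return True only for real, client-facing tickets worth scoring."""
    if ticket["agent"] in EXCLUDED_AGENTS:
        return False
    if ticket["subject"].startswith(EXCLUDED_SUBJECT_PREFIXES):
        return False
    return True

tickets = [
    {"agent": "Tevin", "subject": "Printer offline in accounting"},
    {"agent": "Client Portal", "subject": "New ticket from portal"},
    {"agent": "Tevin", "subject": "RMM Alert: disk space low"},
]
print([t["subject"] for t in tickets if scoreable(t)])
# ['Printer offline in accounting']
```

The point is that exclusions happen upstream of scoring: once the robot accounts and alert noise are gone, the remaining scores actually describe your humans.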

Problem #3: Backsync nightmares. When you first connect ServiceAI, you can import historical tickets. But should you import 5 years of data? Probably not. Choose a timeframe that reflects your current processes—usually 3-6 months. Otherwise you're teaching the AI from tickets that don't reflect how you work today.

 

What Actually Changes When You Use This

The most successful implementations I've seen follow this pattern:

  1. Month 1: Baseline everything. No announcements, no changes. Just observe.

  2. Month 2: Start having conversations with techs. Share the scores with context: "Hey, this is AI-generated, it can be wrong, but here's what it's seeing. Does this match reality?"

  3. Month 3+: Track improvements. I watched a tech go from a 5.5 to a 7.9 just by improving note quality. The tech wasn't doing anything wrong—they just weren't documenting their process. One coaching conversation changed everything.

The dashboard lets you compare custom time ranges, so you can actually prove whether your coaching is working. You can show your executive team concrete numbers: "After implementing our new documentation standards, our average ticket RPS increased from 6.2 to 7.8."
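The before-and-after comparison above is simple enough to sketch. This is not ServiceAI's reporting code, just an illustration of the arithmetic behind a claim like "6.2 to 7.8" (dates, field names, and scores are made up):

```python
from datetime import date

# Hypothetical before/after comparison of average ticket RPS
# across two custom time ranges.
def average_rps(reviews, start, end):
    """Mean RPS for tickets closed in [start, end), rounded to one decimal."""
    in_range = [r["rps"] for r in reviews if start <= r["closed"] < end]
    return round(sum(in_range) / len(in_range), 1) if in_range else None

reviews = [
    {"closed": date(2025, 10, 5),  "rps": 6.1},
    {"closed": date(2025, 10, 20), "rps": 6.3},
    {"closed": date(2025, 12, 4),  "rps": 7.6},
    {"closed": date(2025, 12, 18), "rps": 8.0},
]
before = average_rps(reviews, date(2025, 10, 1), date(2025, 11, 1))
after  = average_rps(reviews, date(2025, 12, 1), date(2026, 1, 1))
print(before, after)  # 6.2 7.8
```

Pick the ranges to bracket whatever change you made (a documentation standard, a coaching conversation) so the delta actually measures the intervention and not seasonal noise.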

 

The Future (And Why I'm Excited)

Right now, ServiceAI isn't just a coaching tool—it's also a way to gauge your AI readiness as a service desk. How far are you from being able to automate the simple stuff? How clean are your processes?

We're working on custom service desk policies where you'll be able to define exactly what matters to you. Want to emphasize faster response times? Prioritize thoroughness? You'll be able to weight the scoring to match your specific service desk philosophy.

But here's the truth: even if full automation is still a ways off (and let's be real, it's coming), the insights you get from understanding how your techs perform across thousands of tickets? That's valuable today.

 

The Bottom Line

ServiceAI isn't about catching bad techs. It's about finally having visibility into your largest surface area of client interaction. It's about making coaching decisions based on patterns across hundreds of tickets instead of gut feelings and spot checks.

Is it perfect? No. Does it require human judgment? Absolutely. But can it help you identify issues, track improvements, and make your service desk measurably better?

In the implementations I've done so far, the answer has been yes.

Just remember: AI is a tool that helps you ask the right questions and find the answers faster. It's not a replacement for your judgment as a service desk manager. Use it to guide conversations, verify your instincts, and track real improvements over time.

And for the love of all that's holy, please don't fire someone just because an AI said so. Verify the data. Talk to your people. Make informed decisions.

That's what we built this for.

 

 

Want to see how ServiceAI analyzes your service desk? Book a demo and let's walk through your actual ticket data together.