In a bid to stay efficient (and avoid complaints), too many FM and workplace teams stick to simple rating scales. But when you only ask for numbers, you only get surface-level data. What’s really happening in the workplace might be just out of view.
Over the years, I’ve seen the same pattern play out again and again. Workplace and FM teams doing their best to listen to employees. Running surveys. Measuring satisfaction. Gathering feedback. But when it comes to the questions they ask, there’s a strong preference for rating scales.
This kind of feedback feels safe. It’s easy to collect. It’s easy to graph. It’s easy to present. It doesn’t take long for people to answer. And if we’re honest, it doesn’t take long to read either. But that safety comes at a cost.
Many of the people we speak to are wary of asking open, free-text questions. They’re worried it’ll open the floodgates to a barrage of complaints. That it’ll be all doom and gloom. That the comments will be unstructured, hard to process, and hard to explain to others. They worry that people won’t even bother answering – and if they do, who has the time to read it all?
So they stick to what they know: simple questions and numeric scales. But I’ve come to believe that this behaviour, while completely understandable, actually traps us in a negative loop.
The truth is, a lot of FM and workplace professionals are used to getting criticised. It’s part of the job. If something goes wrong – lights out, loos dirty, aircon freezing – you hear about it.
So over time, you start to expect it. You develop a bit of a shield. That mindset makes you cautious. You focus on fixing what’s broken. You focus on what’s measurable. And you try not to draw too much attention to things you might not be able to fix.
It’s a defensive posture. And in some ways, one that makes sense. But it can also mean we start avoiding the type of feedback that could actually help us move forward. Because when you only ask for ratings, you only get surface-level data. You can spot patterns, but you don’t know why they’re happening. You don’t get the stories behind the scores.
That’s why it’s worth understanding what different types of feedback are actually giving you – and what they’re not.
Quantitative data – the numbers – is great for spotting trends. It helps you see the scale of a problem, or whether things are getting better or worse. You can segment and compare. 25% of people said X, 15% said Y. It gives you the ‘what’. But it rarely gives you the ‘why’.
Qualitative data – comments, stories, examples – gives you that depth. It shows you the connections between things. The emotions. The specific pain points or bright spots that numbers can’t capture. It can feel harder to manage, but it’s often where the gold is.
One of our clients – a big UK finance brand – was getting solid scores around washroom satisfaction. 7 out of 10, on average. So far, so good. But when they looked at the qualitative comments, they saw a much more interesting picture. Yes, people were generally happy with the availability and location of the toilets. But the comments also flagged consistent concerns about cleanliness and the way some people were leaving them for others.
In other words, the 7/10 score was hiding a split experience. The function was fine, but the day-to-day reality still left some people feeling frustrated or let down. Because they took the time to listen, they were able to take quick, targeted action. They adjusted cleaning routines. They clarified shared expectations. And they addressed the issue directly with some light-touch customer messaging.
They didn’t need more data. They needed better understanding. And that only came from combining the numbers with the comments.
At Audiem, we talk a lot about question coupling. That’s where you follow a simple rating scale (like the ones above) with a free-text question, like:
• “Can you tell us why you gave that score?”
• “What would have made that a better experience for you?”
This pairing works brilliantly. You get the ease of a number – something that’s scannable and comparable – followed by the richness of a story. And because the qualitative question is anchored in the rating, it stays focused. You’re not asking for a life story. You’re asking for a reason.
Think about the last time you read Amazon or Airbnb reviews. You glance at the stars, sure. But what actually convinces you is what people say. Their examples. Their explanations. That mix of rating and review is exactly what makes those platforms useful – and successful.
I have a hunch that trust sits at the heart of this. And I think there are three types of trust that matter here.
There’s an old myth that ostriches bury their heads in the sand when they’re scared. They don’t. But in our world, it’s still a useful metaphor. Because sometimes, sticking to safe, surface-level feedback is a bit like doing just that.
If you want to understand the real experience of the people you support, you need to lift your head. Ask the extra question. Be ready for more than just numbers. It’s not always easy. But I guarantee it will be worth it.
If you want to see how question coupling works in practice – or you’re curious about how Audiem can help – get in touch. We’d be happy to show you what’s possible.