UX Research Survey Design

Read the Room: A Survey Design Problem

User satisfaction surveys often assume enjoyment is the goal. This post explores why that framing fails for task-driven, enterprise, and institutional tools.


TL;DR

If a tool exists to get something done, asking users whether they are “enjoying” it produces misleading data. Satisfaction surveys often get reused without reconsidering context, turning neutral or required experiences like login flows and compliance tools into artificially negative signals. Good survey questions match the job the product is meant to do, not a generic idea of delight.

“Are you enjoying this app?”

No. No, I am not.

And that answer has nothing to do with whether the app is good.

The Enjoyment Question, Revisited

Every so often, a survey pops up and asks if I am enjoying the experience. Not if it is working. Not if it helped me finish what I came to do. Not if it was clear, efficient, or reliable.

Enjoying it.

This question assumes I am here for delight. I am usually not.

Most of the time, I am here because I have to be.

Two-Factor Authentication

Picture this:

I am logging into a system. I need a code. My phone buzzes. I mistype the code. The screen reloads. I try again. Then, right at that moment:

“Are you enjoying this app?”

Absolutely not.

I am annoyed and trying to complete a required step so I can move on with my day. But that annoyance is not a reflection of the quality of the two-factor authentication system. It is a reflection of the situation the system exists in. The survey just caught me at the worst possible moment and labeled it “user sentiment.”

Compliance Tools

Think time tracking, medical systems, financial dashboards.

No one opens these tools hoping for joy.

They open them hoping to:

  • finish quickly
  • not make a mistake
  • not get locked out
  • not have to contact support

If the tool does those things well, that is success. Asking about enjoyment, though, reframes a neutral or utilitarian experience as a negative one by default.

Error States and Interruptions

Surveys often appear:

  • after an error
  • after a timeout
  • mid-task
  • immediately after friction

At that moment, the user is reacting to the interruption, not evaluating the product holistically. You are measuring frustration with the moment, not the system, and that distinction matters.
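One practical consequence: survey prompts can be gated so they never fire mid-task or right after friction. The sketch below is a hypothetical illustration, not a real API; the event names, the `should_show_survey` helper, and the ten-minute cool-down window are all assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical friction signals and cool-down window (assumptions,
# not taken from any real survey SDK).
FRICTION_EVENTS = {"error", "timeout", "retry", "session_expired"}
COOLDOWN = timedelta(minutes=10)

def should_show_survey(recent_events, task_in_progress, now=None):
    """Suppress the prompt mid-task or shortly after a friction event.

    recent_events: list of dicts like {"type": "error", "at": datetime}.
    """
    now = now or datetime.utcnow()
    if task_in_progress:
        return False  # never interrupt an active task
    for event in recent_events:
        if event["type"] in FRICTION_EVENTS and now - event["at"] < COOLDOWN:
            return False  # the user is still reacting to the interruption
    return True
```

The point of the gate is not sophistication; it simply keeps the sample from being dominated by the worst possible moments.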

How Does This Question End Up Everywhere?

Part of the problem is not bad intent. It is inertia.

Satisfaction surveys get copy-pasted from product to product with little reconsideration of what the tool actually does. A survey that made sense for a consumer app, marketing site, or content platform quietly becomes the default pattern for everything else.

At some point, “Are you enjoying this app?” stops being a research question and becomes a checkbox.

A Very Brief History

Most of the surveys we use today were designed for very specific contexts. Over time, they escaped those contexts and started showing up everywhere.

Customer Satisfaction (CSAT)
This is the classic “How satisfied are you?” question. It came out of customer service and retail, where emotion and brand perception matter. CSAT works best after:

  • a support interaction
  • a purchase
  • a clearly bounded experience

It is much less useful mid-task or inside required workflows.

Net Promoter Score (NPS)
NPS was created to measure loyalty and advocacy, not usability. It asks whether someone would recommend a product to others. That makes sense for products people choose to use. You cannot meaningfully recommend something you are forced to use.

Enjoyment and Delight Metrics
These metrics come largely from consumer tech, gaming, and content platforms. Enjoyment is a valid signal when:

  • exploration is the goal
  • engagement is optional
  • emotional response drives retention

They assume the user opted in.

Usability and Task Success Measures
Older than all of the above, but somehow easier to forget. Task completion, error rates, time on task, and clarity questions are boring but effective. They shine in:

  • enterprise systems
  • administrative tools
  • healthcare, finance, and education platforms

They measure whether the thing worked, not whether it sparkled.

When teams reuse a satisfaction survey without rethinking the context, the survey starts measuring the wrong thing. A question designed to measure brand affinity gets dropped into an authentication flow. A delight metric ends up evaluating a compliance step. Emotional language gets applied to neutral experiences. The data looks real, the numbers move, but the conclusions drift.

When you ask the wrong question, you get answers to a different problem.

Asking about enjoyment:

  • conflates emotion with usability
  • overweights moments of friction
  • penalizes tools that are functional but unglamorous
  • trains teams to chase delight where clarity would be more valuable

You end up with data that sounds actionable but is misleading. “We need to make it more enjoyable” becomes the takeaway, when the real issue might be task length, error recovery, or cognitive load.

What to Ask Instead

For tools built around utility, try questions like:

  • Was this task easy to complete?
  • Did anything slow you down?
  • Was anything unclear or unexpected?
  • Were you able to do what you came here to do?

Neutral is not failure. Neutral is often the goal.
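The questions above can be sketched as a small task-focused instrument. This is a hypothetical structure for illustration only; the field names, scale labels, and `render` helper are assumptions, not a standard survey format.

```python
# Hypothetical task-focused survey instrument. Field names and scale
# labels are illustrative assumptions; note that "neutral" is a valid,
# non-failing answer rather than a problem to be fixed.
TASK_SURVEY = [
    {"id": "task_complete",
     "text": "Were you able to do what you came here to do?",
     "scale": ["yes", "partially", "no"]},
    {"id": "ease",
     "text": "Was this task easy to complete?",
     "scale": ["very easy", "easy", "neutral", "difficult", "very difficult"]},
    {"id": "friction",
     "text": "Did anything slow you down?",
     "scale": ["yes", "no"]},
    {"id": "clarity",
     "text": "Was anything unclear or unexpected?",
     "scale": ["yes", "no"]},
]

def render(survey):
    """Return plain-text prompts with their answer options."""
    return [f"{q['text']} ({' / '.join(q['scale'])})" for q in survey]
```

Compared with a single enjoyment question, each item here maps directly to something a team can act on: completion, effort, friction, or clarity.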

Final Answer to the Survey

Am I enjoying this app?

No.

But it worked.
I got in.
I moved on.

And that is exactly what I needed.