User feedback is essential to continuous improvement. For marketers, it can come from community interactions, NPS or other customer satisfaction surveys, the customer success team, and advisory boards. It turns out the same channels apply for customer marketing managers seeking stakeholder feedback from sales, marketing, customer success, PR, digital, RFP & Proposals, social media, and so on.

In this blog, we share what we’ve learned about leveraging surveys to gain insights that inform program decisions about your advocate database and the output of that database: customer content. If you don’t have the right advocates, you’re nothing more than a collector of happy customers. If you produce customer content (videos, case studies, etc.) that isn’t relevant, it just won’t get used, and you’ll have wasted a good deal of your limited time and budget in the process.

Survey Particulars


How Many?

How many questions you ask, and which questions, together determine whether you get a good result. Remember, you’re asking people to spend their time on your survey, which they will do if they understand it will benefit them. You could probably ask 100 questions if you’re the curious type, but you only have so much of their attention and time. So make a wish list of the things you want to know, then prioritize that list down to 10-15 questions. More than that, and people tend to bail out. Then pretend you’ve been asked to take the survey yourself and time it. If you can promise a 10-minute experience in your survey promotion and invitation, completion rates will be much higher.

Question Construction

Every participant should be able to answer every question without getting stuck: “None of these answers quite fits my situation; now what do I do?” For multiple choice questions, this is easily avoided with an “Other” option. For reporting purposes, you want to minimize “Other” responses and keep answers limited to specific choices, but not at the risk of a participant getting stuck.

When you include “on average,” “typically,” “annually,” or “in the last quarter,” you provide important direction. Be as specific as possible so that answering the question, and making sense of the results, is easy. We’ve seen program managers ask questions like, “What type of content do you find most valuable?” If the first reaction is “It depends,” you’ll gain limited value from asking it. Some people will answer as if you mean “to close the deal,” and some will think you mean “to open the door,” for example. Another potential can of worms is something like, “Did you find what you needed in our advocate database?” If the answer is yes, you can probably make some deductions based on the stakeholder’s role: if a seller focused on Fortune 1,000 health insurance providers says yes, you know the database has ample Fortune 1,000 health insurance advocates.

What if the answer is no? Think about the possibilities:

  • There’s a need for more advocates in a particular industry using a particular product with a particular use case
  • The industry and product coverage is good, but there are not enough advocates willing to host site visits
  • The industry and product coverage is good, but there’s a need for more VP-level business contacts or director-level technical contacts

The list could go on and on. So choose your questions carefully to gain actionable feedback, rather than answers that simply create more guesswork. Branching questions can help you get to the important details, as can providing a text field for elaboration. But in every case, ask yourself whether your design decisions will yield useful information.

Anonymous or Not?

An anonymous survey may yield more honest answers, particularly if a stakeholder has some dissatisfaction to vent. Unfortunately, if they do, you won’t be able to gather additional detail or confirm you’ve addressed their grievance after making changes. Our belief is that if you have a reasonably good relationship with your stakeholders, they’ll know they can be frank, and you’ll take it the right way.

The Setup

You want a statistically significant number of responses. If you send a survey out to 250 people and get eight responses, that’s not helpful: a sample that small may be highly skewed. If you race off to make changes and later learn that 200 other people felt differently than those eight, you’ll have squandered your time and still not made things better. (A quick way to gauge whether you have enough responses follows at the end of this section.) Before you launch, line up management support, especially from the people who manage the participants on your list. It needs to be clearly communicated that:

  • These surveys are infrequent, but important
  • The purpose is to make the program better, not to create busywork
  • There will be follow-up on what actions were taken as a result
  • Someone will be monitoring the response rate
  • Participants will know when to expect the survey and when their submissions are due
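On that statistical point, here’s a back-of-the-envelope check (a rough approximation that assumes a simple random sample, which stakeholder surveys only loosely resemble): the 95% margin of error on a yes/no question is about 1.96 × √(p(1−p)/n). With eight responses split 50/50, that’s roughly ±35 percentage points either way; with 100 responses, it shrinks to about ±10. In other words, eight responses can’t reliably tell you anything.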

Close the Loop

You know how often you’re asked for your thoughts, only to have no idea where those thoughts went, whether they were used, or whether they resulted in changes? It sucks, right? The less you care about the thing you’re being surveyed about, the less likely you are to even start the survey. Do I care to provide feedback on a recent purchase of printer paper at Office Depot? Nope. But if my job performance hinges on leveraging customer advocates and someone is offering to make that part of my job easier, I care. The trick is to make it clear that their feedback does matter, that the program is adapting, and that future feedback requests will have the same positive effect on their work lives. So formalize the survey results and the actions they prompted, and share them in team meetings, a newsletter, or whatever makes the most sense in your environment.

We would be remiss if we didn’t make the case for another awesome source of continuous feedback: a stakeholder advisory board for your program. Every high-performing customer marketing program has one. Read more.